# Stellar populations in blue compact galaxies
## 1 Introduction
Blue compact galaxies (BCGs) were first observed spectroscopically by Sargent & Searle (sargent (1970)), who clearly established that the properties of these galaxies implied high star formation rates at low metallicities (Doublier et al. doublier (1997)). BCGs have been thought to represent a different and extreme environment for star formation compared to the Milky Way and many other nearby galaxies. They are very important for understanding the star formation process and galactic evolution (Kinney et al. kinney (1993), Martin martin (1998)). BCGs are characterized by their compact morphology and very blue UBV colors (Sage et al. sage (1992), Hunter & Thronson hunter (1995)). Their optical spectra show strong narrow emission lines superposed on a nearly featureless continuum, similar to the spectrum of HII regions (Izotov et al. izotov97 (1997), Östlin et al. ostlin (1999)). Radio observations at 21 cm have shown that BCGs contain large amounts of neutral hydrogen; the mean value of $`M_{\mathrm{HI}}/M_{\mathrm{tot}}`$ for a sample of 122 BCGs is 0.16 (Krüger et al. kruger95 (1995), Salzer & Norton salzer (1999)). Systematic spectroscopic studies of BCGs have shown that about one-third of BCGs have broad W-R bumps, mainly at $`\lambda 4650`$, that are characteristic of late WN stars (Conti conti (1991), Izotov et al. izotov97 (1997)).
Ever since their discovery, the question has arisen whether BCGs are truly young systems where star formation is occurring for the first time, or whether they are old galaxies with a current starburst superposed on an old underlying stellar population (Garnett et al. garnett (1997), Lipovetsky et al. lipovetsky (1999)). Because the star formation rate in BCGs is very high, the metallicity could have reached the observed value even within a time $`\mathrm{T}\sim 10^8\ \mathrm{yr}`$ (Fanelli et al. fanelli (1988)). Hence, one interpretation of the low metallicity, high gas content and high star formation rate is that BCGs are young objects, and they are being seen at the epoch of the formation of the first generation of stars ($`\mathrm{T}\sim 10^8\ \mathrm{yr}`$). The other interpretation is that they are old objects in which star formation occurs in short bursts with long quiescent phases in between (Krüger et al. kruger95 (1995), Gondhalekar et al. gondhalekar (1998), Östlin et al. ostlin (1999)).
The low metal abundance together with the high star formation rates and large gas masses makes BCGs most suitable for determining elemental abundances (Thuan et al. thuan95 (1995), thuan96 (1996)) and the primordial helium abundance $`\mathrm{Y}_p`$ (Izotov et al. izotov94 (1994)), and for studying the variations of one chemical element relative to another (van Zee et al. van98 (1998)). BCGs also provide a wealth of diagnostics for the study of intrinsic physical conditions (Izotov & Thuan izotov99 (1999)). The results of these papers are based on direct measurements of the emission line intensities, but according to Vaceli et al. (1997), the observed emission line intensities are affected by the underlying stellar absorption. Since the stellar absorption can substantially affect some fundamental emission lines used for the derivation of reddening and other physical and chemical properties, one of the first and most critical steps in the analysis of BCG spectroscopic properties is to quantify and remove the contribution of the stellar population.
If we can resolve the stellar population of a BCG, we can determine its age and star formation regime. We can then subtract the stellar absorption lines from the emission-line spectrum. With the launch of HST and the advent of 10-m class telescopes, we are now witnessing a new era that allows us to analyze in detail nearby objects, such as Galactic HII regions or 30 Dor in the LMC, and to resolve old red giants in a few distant galaxies (Grebel 1999). Such studies allow us to resolve and study individual stars in massive star clusters. However, as one studies objects at larger distances, individual stars (except for some giants) are unresolved and hence we are limited to studying their global properties (Mas-Hesse & Kunth mas (1998)). In this paper we have selected 10 BCGs and determined their stellar populations by applying a population synthesis method based on star cluster integrated spectra. In a subsequent paper, we will apply an evolutionary population synthesis method to these galaxies, and the results of the two papers will be considered together.
The outline of the paper is as follows. In Sect. 2 we describe the observations and data reduction. In Sect. 3 we present measurements of equivalent widths and a continuum analysis for the BCG spectra. In Sect. 4 we carry out the population synthesis and give the results of the computation. In Sect. 5 we subtract the stellar population synthesis spectra from the observed ones and study the resulting emission line spectra. The results are presented in Sect. 6. In Sect. 7, we summarize our conclusions. Throughout this paper, we use a Hubble constant of $`\mathrm{H}_0=50\ \mathrm{km\ s^{-1}\ Mpc^{-1}}`$.
## 2 Observations and Data Reduction
Our sample of 10 blue compact galaxies was selected from Kinney et al. kinney (1993); seven of these have $`M_\mathrm{B}>-20`$ mag and are dwarf galaxies (BCDGs). Table 1 describes the sample properties. The target names are listed in Column 1. Columns 2 to 8 respectively give the source coordinates, morphological type, radial velocity relative to the Local Group, photographic magnitude, absolute magnitude and Galactic reddening.
Long-slit spectroscopic observations were carried out in March and July of 1997. All the observations were made with the 2.16 m telescope at the Xinglong Station of Beijing Astronomical Observatory, using a Zeiss universal spectrograph with a grating of 300 grooves $`\mathrm{mm}^{-1}`$. A Tek 1024$`\times `$1024 CCD was employed, covering a spectral range from 3500 to 7500 Å with a resolution of 9.6 Å (2 pixels). The slit aperture was fixed at $`250\ \mu m`$, corresponding to 2.5″ projected on the sky, to match the typical seeing at Xinglong Station, and was set at a position angle of 90°. All the spectra were extracted by adding the contributions of 5 pixels around the nucleus. The sky background was estimated from a linear interpolation of two regions located 30″ from the nucleus. The last two columns of Table 1 list the observation date and exposure time.
The standard reduction to flux units and the transformation to a linear wavelength scale were made using IRAF (IRAF is provided by NOAO). The IRAF packages CCDRED, TWODSPEC and ONEDSPEC were used to reduce the long-slit spectral data. The wavelength scale was calibrated using a He/Ar comparison lamp; some 20 lines were used to establish the wavelength scale, by fitting a first-order cubic spline. The accuracy of the wavelength calibration was better than 2 Å. On most nights, more than two KPNO standard stars were used for relative flux calibration. We followed standard procedure in the data reduction: bias subtraction, flat fielding, sky subtraction, cosmic-ray removal, CCD response curve calibration, wavelength and photometric calibration, and extinction correction. The dark counts were so low that they were not subtracted. An estimate of the radial velocity of each galaxy was made to correct to zero redshift. The foreground reddening due to our Galaxy was corrected using the values from Burstein & Heiles (burstein (1984)). Atmospheric extinction was corrected using the mean extinction coefficients for the Xinglong Station. The final extracted spectra of the BCGs (labeled OBS) are shown in Figure 1.
## 3 Measurements
Our main goal is to study the stellar populations of BCGs. To do this, we have used the synthesis method described in Schmitt, Bica & Pastoriza (schmitth (1996)). The method minimizes the differences between the observed and synthetic equivalent widths for a set of spectral features. To measure the equivalent widths accurately, it is extremely important first to have a good fit to the continuum. Hence the analysis of each of the sample spectra proceeds in two steps: (1) determining a pseudo-continuum at selected pivot-points, and (2) measuring the equivalent widths (EWs) for a set of selected spectral lines.
The continuum and EW measurements followed the method outlined in Bica & Alloin (bica86 (1986)), Bica (bica88 (1988)), Bica et al. (bica94 (1994)) and Cid Fernandes et al. (cid (1998)), and subsequently used in several studies of both normal and emission line galaxies (e.g., Jablonka et al. jablonka (1990), Storchi-Bergmann et al. storchi (1995), McQuade et al. mcquade (1995), Bonatto et al. bonatto (1998)). This method first determines a pseudo-continuum at a few pivot-wavelengths, and then integrates the flux difference with respect to this continuum in defined wavelength windows (Table 3) to determine the EWs. The pivot wavelengths used in this work are the same as those used by the above authors; they were chosen to avoid regions of strong emission or absorption features (Table 2). Four flux points (3784, 3814, 3866 and 3918 Å) were used for the Balmer discontinuity. The use of a compatible set of pivot points and wavelength windows is important, since it allows a detailed quantitative analysis of the stellar population by the synthesis techniques using the spectral library of star clusters (Bica & Alloin bica86 (1986)).
The determination of the continuum was done interactively, taking into account the flux level, noise and small uncertainties in the wavelength calibration, as well as the presence of emission lines. The 5870 Å point, in particular, is sometimes buried underneath the HeI 5876 Å emission line. In such cases, adjacent wavelength regions guided the placement of the continuum. The final measured fluxes, normalized to the flux at 5870 Å, are given in Table 2. After fitting the continuum points with a continuum spectrum, we measured the equivalent widths of seven characteristic absorption lines. When a noise spike was present in a wavelength window, it was necessary to make "cosmetic" corrections. The results are given in Table 3. The first two rows give the names of the spectral lines and the corresponding wavelength ranges. The EWs are in units of Å, and a negative sign denotes an emission line.
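As a concrete illustration of this two-step procedure, the following minimal Python sketch interpolates a pseudo-continuum through the fluxes at a set of pivot wavelengths and integrates the flux deficit inside a window. The toy spectrum and the pivot list are only illustrative; the actual pivot points and windows are those of Tables 2 and 3.

```python
import numpy as np

def pseudo_continuum(wave, flux, pivots):
    """Linear pseudo-continuum through the fluxes measured at pivot wavelengths."""
    pivot_flux = np.interp(pivots, wave, flux)   # flux sampled at the pivot points
    return np.interp(wave, pivots, pivot_flux)   # piecewise-linear continuum everywhere

def equivalent_width(wave, flux, cont, window):
    """EW (in Angstrom) of the feature inside window = (lo, hi); negative => emission."""
    lo, hi = window
    m = (wave >= lo) & (wave <= hi)
    y = 1.0 - flux[m] / cont[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wave[m])))  # trapezoid rule

# toy spectrum with a Gaussian absorption line at H-beta (4861 A)
wave = np.linspace(3700.0, 7000.0, 3301)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 4861.0) / 4.0) ** 2)
pivots = np.array([3784.0, 3814.0, 3866.0, 3918.0, 4510.0, 5313.0, 5870.0, 6630.0])  # illustrative
cont = pseudo_continuum(wave, flux, pivots)
print("EW(H-beta) = %.2f A" % equivalent_width(wave, flux, cont, (4846.0, 4876.0)))
```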
## 4 Stellar Population Synthesis
To study the star formation history, age, and ionization mechanism of BCGs, we applied the synthesis method of Schmitt et al. (schmitth (1996)) to determine the stellar population in their nuclear regions. In this method, we start with a sample of star clusters consisting of 3 clusters in the SMC, 12 in the LMC, 41 Galactic globular clusters and 3 rich compact Galactic open clusters, together with 4 HII regions (Bica & Alloin bica86 (1986)). Then a grid of base components (comprising the values of the continuum at selected points and of the EWs of selected absorption lines) is constructed by interpolation and extrapolation in the $`\mathrm{Age}-[Z/Z_{\odot }]`$ plane of those quantities for the clusters of the sample. The grid consists of 34 points with ages at $`10^7,\ 5\times 10^7,\ 10^8,\ 5\times 10^8,\ 10^9,\ 5\times 10^9,\ \mathrm{and}>10^{10}\ \mathrm{yr}`$, and $`[Z/Z_{\odot }]`$ = 0.6, 0.3, 0.0, -0.5, -1.0, -1.5, -2.0, plus one point representing the HII region. The method, when applied to a given target, consists of adjusting the percentage contributions of the 35 base components by minimizing the difference between the resulting and measured EWs of the selected set of absorption lines. When all the resulting EWs reproduce those of the galaxy within allowed limits, we say we have obtained an acceptable solution. We then take all the acceptable solutions to form an average solution (Schmitt et al. schmitth (1996)).
The computation can be performed in two ways: one way spans the whole $`\mathrm{Age}-[Z/Z_{\odot }]`$ plane (multi-minimization procedure, hereafter MMP), while the other is restricted to chemical evolutionary paths through the plane (direct combination procedure, hereafter DCP). We combine these two methods in this paper. We first use the MMP method to single out the main contributing components. Then, based on their resemblance to the whole-plane solution and on their reduced chi-square ($`\chi ^2`$), we select the best evolutionary path and use the DCP method to give the final result.
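To make the fitting step explicit, here is a minimal sketch of the constrained minimization: non-negative flux fractions summing to unity are adjusted so that the synthetic EWs match the observed ones. The base EWs below are random placeholders, not the actual 35-component grid, and the linear combination of EWs is a simplification of the full scheme described above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_comp, n_lines = 35, 7
base_ew = rng.uniform(0.5, 12.0, size=(n_comp, n_lines))   # placeholder base grid
obs_ew = rng.uniform(1.0, 8.0, size=n_lines)               # placeholder observed EWs
sigma = 0.1 * obs_ew                                       # assumed EW uncertainties

def chi2(x):
    syn = x @ base_ew          # synthetic EWs from the flux fractions at 5870 A
    return np.sum(((syn - obs_ew) / sigma) ** 2)

cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},)  # fractions add up to 100%
bounds = [(0.0, 1.0)] * n_comp
x0 = np.full(n_comp, 1.0 / n_comp)
res = minimize(chi2, x0, bounds=bounds, constraints=cons, method='SLSQP')
print("chi^2 at minimum:", res.fun)
```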
### 4.1 The results of MMP
A detailed analysis of the MMP method can be found in Schmidt et al. (schmidta (1989)). This method searches the space of solutions generated by the entire 35-component base, leading to a representative set of acceptable solutions to the synthesis problem. We tried various combinations of the 35 components until a good match between the equivalent widths of the synthetic and observed lines was obtained. An iterative optimization procedure was used, with each iteration altering the percentages of the different components. The input parameters are the measured equivalent widths of the selected absorption lines, the continuum ratios and a set of trial values of $`E(B-V)`$ between 0.0 and 1.0 in steps of 0.02. The result is expressed as the flux fractions at 5870 Å of each component. The flux fractions of the different components for the 10 BCGs are shown in Table 4 (HIIR denotes the HII region component).
From the results of the MMP method, we find some obvious trends for all the BCGs. First, the dominant population is young ($`\mathrm{T}\le 5\times 10^8\ \mathrm{yr}`$) stellar clusters. Second, there is a small population of old ($`\mathrm{T}>10^{10}\ \mathrm{yr}`$) and high metallicity ($`[Z/Z_{\odot }]\sim 0`$) globular clusters. Among the old globular clusters, low metallicity components contribute more than high metallicity components. Third, for the young and intermediate age ($`\mathrm{T}\sim 10^9-5\times 10^9\ \mathrm{yr}`$) clusters, components with metallicities below or equal to the solar value make a large contribution. This shows that the stars in the BCGs have low metallicities, which is consistent with our previous knowledge. Lastly, the younger components show a large dispersion in the plane, with no clear evolutionary paths. This could be due to the fact that we have only used spectral data in the visible range. Additional data in the near ultraviolet and infrared would produce better-constrained solutions in the plane. Nevertheless, the presence of starbursts with $`T\le 5\times 10^8\ \mathrm{yr}`$ is easily recognized from the MMP results.
### 4.2 The results of DCP
To reduce the dispersion in metallicity, all population syntheses made so far have assumed some arbitrarily chosen chemical evolution path in the $`\mathrm{Age}-[Z/Z_{\odot }]`$ plane. The improvement in the method of this paper is that we pick out from the MMP results those components that contribute most, and use them to define paths of chemical evolution, thus reducing the degree of arbitrariness. The 3 bright BCGs appear to follow a path containing 11 components: the time sequence $`\mathrm{T}=10^7-5\times 10^9\ \mathrm{yr}`$ at fixed metallicity $`[Z/Z_{\odot }]=-0.5`$, the metallicity sequence $`[Z/Z_{\odot }]=-0.5`$ to $`-2.0`$ with $`\mathrm{T}>10^{10}\ \mathrm{yr}`$, and the HII region. The 7 BCDGs follow a path containing 12 components: the time sequence $`\mathrm{T}=10^7-5\times 10^9\ \mathrm{yr}`$ at fixed metallicity $`[Z/Z_{\odot }]=0.0`$, the metallicity sequence $`[Z/Z_{\odot }]=0.0`$ to $`-2.0`$ with $`\mathrm{T}>10^{10}\ \mathrm{yr}`$, and the HII region. In addition, we also tested other possible paths with different maximum metallicities; we found that the $`\chi ^2`$ of the path selected from the MMP is the smallest.
Table 5 reports the path solutions. The numbers of acceptable solutions and the corresponding reduced chi-squared ($`\chi ^2`$) are also given. There is a great similarity between the path (DCP) and whole-plane (MMP) solutions. Tables 5a, 5g and 5j provide the results for the 3 bright BCGs (IC1586, NGC4194 and MRK499). We see that the younger components with age $`10^7-10^8`$ yr make an appreciable contribution, $`\sim 35\%`$. The old globular clusters make even larger contributions. The intermediate-age contributions are small. The other tables in Table 5 provide the results for the 7 BCDGs. The dominant stellar components ($`>30\%`$) are in the young age bins ($`\mathrm{T}\le 5\times 10^8\ \mathrm{yr}`$). The old globular clusters have different values in different galaxies: in some, lower than $`10\%`$, and in others as much as $`35\%`$. The BCDGs differ from the bright BCGs in two obvious respects. First, the intermediate age components in the BCDGs amount to $`\sim 20\%`$, higher than in the bright BCGs. A peak occurs at about $`5\times 10^8\ \mathrm{yr}`$, which indicates that an enhanced star formation event occurred at that epoch. Second, the youngest components ($`\mathrm{T}=10^7\ \mathrm{yr}`$) in the BCDGs are much weaker than in the bright BCGs, probably indicating that the star formation rate of BCDGs is lower now. The HII region component is a featureless continuum, which acts in the synthesis as a dilutor of absorption lines. From Table 5, we note that some BCDGs have large contributions from the young components, indicating intense star formation, and a small contribution from HII regions, which suggests that intense starbursts have converted most of the gas into stars.
## 5 Synthesized Spectra
We now display the results of the stellar population synthesis in a more accessible form in Figure 1. OBS represents the observed spectrum of the galaxy (OBS+5, the +5 indicating that it is displaced upwards by 5 units). SYN represents the synthetic spectrum resulting from the percentage contributions in Table 5. The various stellar components are designated as follows: OGC stands for the contribution from the old globular clusters ($`\mathrm{T}>10^{10}\ \mathrm{yr}`$); IYC for the contribution from the intermediate age star clusters ($`\mathrm{T}\sim 10^9-5\times 10^9\ \mathrm{yr}`$); YBC for the contribution from the young blue star clusters ($`10^7\le \mathrm{T}\le 5\times 10^8\ \mathrm{yr}`$); and HIIR for the contribution from HII regions. The emission line spectrum (OBS-SYN), resulting from subtracting SYN from OBS, is shown in the lower part of the figure.
The figure shows that the synthesized spectrum gives a good fit to the observed continuum and absorption lines for each galaxy. We can conclude that the continuum of BCGs comes both from the stars (particularly the young and intermediate-age stars) and from HII regions. This may be interpreted as indicating that the main energy sources of BCGs are young hot O and B stars, which lead to the formation of HII regions around them.
The stellar-subtracted spectra (OBS-SYN) can be used to study the emission lines. We have measured the main strong emission line intensities with Gaussian fits. The results, relative to $`\mathrm{H}\alpha `$, are shown in Table 6. At this stage our spectra are corrected for the foreground reddening and the internal reddening from the stellar populations (see next section). First, we have used the $`\mathrm{H}\alpha /\mathrm{H}\beta `$ ratio in Table 6 to derive the internal reddening value associated with the line-emitting regions; it is calculated in the next section. Second, we have attempted to identify the ionizing mechanism in these nuclei, using the emission line ratios in the visible region.
We compared emission-line ratios calculated from Table 6 with the diagnostic diagram $`[\mathrm{NII}]6584/\mathrm{H}\alpha `$ versus $`[\mathrm{OII}]3727/[\mathrm{OIII}]5007`$ from Baldwin, Phillips & Terlevich (baldwin (1981)). We plot the results in Figure 2. From this figure, we find that the BCGs are always located near or within the HII region locus. None of them are located in the loci of planetary nebulae, power-law, or shock-heated regions. This result indicates that the young massive stars formed in the nucleus are heating the gas in the nuclei of BCGs, and it is very similar to what is found from the population synthesis. We also show in this diagram the effect of internal reddening; we can see that even for NGC4194, the galaxy with the strongest reddening derived from its line-emitting regions, the effect is not large.
## 6 Results
From the stellar population analysis in Sect. 4 and the emission line spectrum in Sect. 5, combined with results from other studies, it is now possible to reveal some global properties of BCGs.
### 6.1 Age and Star Formation Rate
There are two major competing theories for BCGs. The first one claims that BCGs are truly young systems undergoing the first star formation episode in the galaxy's lifetime. The second model suggests that BCGs are old galaxies, mainly composed of older stellar populations but with a brief episode of violent star formation, in order to account for the observed spectroscopic features and spectral energy distributions.
Using the population synthesis method, we know the percentage contributions of the stellar populations (of different ages and metallicities) at $`\lambda =5870`$ Å for each of the 10 BCGs. These results show clearly that, for each galaxy, the old globular clusters and the intermediate age components ($`\mathrm{T}\ge 10^9\ \mathrm{yr}`$) make sizeable contributions to the galactic spectrum. The presence of large fractions of old or intermediate age components indicates that star formation already occurred at an early stage, and at a high rate. Our results support the second model, in which BCGs are old galaxies.
For IC1586, NGC4194 and MRK499, the contributions coming from the young and the old stellar components are large, but that from the intermediate age component is small. This suggests that the rate of star formation during the intermediate age period was smaller than in the other periods, and that the star formation process is not continuous in these galaxies. For the BCDGs, the contribution from the intermediate age component is important: star formation was most vigorous in the intermediate age period, with relatively small contributions from the other periods. This implies that star formation in these galaxies is also discontinuous.
The other result of our population synthesis is that while the observed properties of the bright BCGs (IC1586, NGC4194, MRK499) and the BCDGs are very similar, their stellar components and star formation regimes are generally different. There are many old and young stellar components in the bright BCGs, and their recent star formation rate is very high. For the BCDGs, the old and young stellar components are relatively small, but the contribution from intermediate age stellar populations is important.
The stellar population synthesis suggests that BCGs are old galaxies in which the process of star formation is intermittent, with star formation having been violent in some of their evolutionary periods. These results are also supported by other observations (Papaderos et al. papaderos (1996), Sung et al. sung (1998), Aloisi et al. 1999). This illustrates that the present method is more than a simple population synthesis, since it provides a direct estimate of the chemical evolution of the galaxy.
### 6.2 Internal Reddening
When we investigate the internal energy sources, physical conditions and internal structure of galaxies, we must take into account the effect of internal reddening (Pizagno & Rix pizagno (1998)). The effect of dust extinction on the emerging radiation is one of the least understood physical phenomena (Calzetti calzetti97 (1997), Ho et al. ho (1997)). To study the internal reddening properties of BCGs, we quantify the discrepancy between the dust extinction measured from the emission line ratios and that measured from the optical continuum.
In the method of population synthesis used in this paper, the internal reddening is taken as an adjustable parameter, so that an estimate of the internal reddening is made at the same time as the stellar composition. We try various values of the internal reddening, make the appropriate correction to the continuum spectrum, then use this corrected continuum spectrum in the synthesis to find the best solution. This is an empirical way of determining the galactic reddening; its advantage is that it is assumption-free. The values of the galactic internal reddening are listed in Table 6: $`E(B-V)_{\mathrm{MMP}}`$ is the result from the MMP method, and $`E(B-V)_{\mathrm{DCP}}`$ is the result from the DCP method. We find that the values are small ($`E(B-V)\le 0.35`$), which is consistent with the BCGs being metal-poor and dust-poor. We find that the reddening also clearly depends on the shape of the spectrum: the flattest spectrum (NGC4194) goes with the largest color excess, and the steeper the spectrum, the less the extinction.
The Balmer line ratio $`\mathrm{H}\alpha /\mathrm{H}\beta `$ allows us to characterize the dust extinction in the regions where the nebular lines are produced. We measured the internal reddening value $`E(B-V)_{\mathrm{H}\alpha /\mathrm{H}\beta }`$ of the 10 BCGs using the observed emission lines $`\mathrm{H}\alpha `$ and $`\mathrm{H}\beta `$. The difference in the calculation of $`E(B-V)_{\mathrm{H}\alpha /\mathrm{H}\beta }`$ between previous work and ours is that we can correct for the underlying stellar absorption $`\mathrm{EW}_{abs}`$ using the results of the stellar population synthesis, without making any hypotheses. The resulting $`E(B-V)`$ values are listed in the last 3 columns of Table 6.
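For reference, a minimal sketch of the Balmer-decrement estimate is given below. The intrinsic case B ratio of 2.86 and the extinction-curve coefficients $`k(\mathrm{H}\beta )\approx 3.61`$ and $`k(\mathrm{H}\alpha )\approx 2.53`$ are standard assumptions, not values taken from this paper, and the exact numbers depend on the adopted reddening curve.

```python
import math

def ebv_from_balmer(f_ha, f_hb, ratio_int=2.86, k_hb=3.61, k_ha=2.53):
    """E(B-V) from the observed H-alpha/H-beta flux ratio (case B recombination)."""
    ratio_obs = f_ha / f_hb
    # F_obs(lambda) = F_int(lambda) * 10**(-0.4 * E(B-V) * k(lambda)), applied to the ratio
    return 2.5 / (k_hb - k_ha) * math.log10(ratio_obs / ratio_int)

print("E(B-V) = %.3f" % ebv_from_balmer(f_ha=350.0, f_hb=100.0))  # toy line fluxes
```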
From this table, we can find that the internal reddening of the stellar continuum in BCGs is generally lower than that of ionized gas. A model of foreground dust clumps, with different covering factors for gas and stars, is a possible explanation for the difference. The covering factor by dusty clumps is greater for the gas region that generates the emission lines than for the stars that produce the continuum. That the continuum emission of stars is less obscured than are the emission lines of ionized gas, has been pointed out for other kinds of emission line galaxies (Calzetti et al. calzetti94 (1994)).
## 7 Summary
We have observed the optical spectra for the nuclear regions of 10 blue compact galaxies. We have studied their stellar populations by matching the spectra of these objects to a library of integrated spectra of star clusters. Our conclusions can be summarized as follows.
The quantitative analysis indicates that the nuclei of the 3 bright BCGs are dominated by young components and that the star-forming process is still ongoing; the maximum metallicity of their stellar population is $`[Z/Z_{\odot }]=-0.5`$. The nuclei of the dwarf BCGs (BCDGs, which have $`M_\mathrm{B}>-20`$), on the other hand, are dominated by the intermediate age component, and the metallicity has, at most, reached up to the solar value. The young component is not as important as in the bright BCGs, but it is still not negligible.
For all BCGs, the old population in the range $`[Z/Z_{\odot }]\le 0.0`$ is important, while the very metal-rich component provides only a small contribution in these galaxies. The stellar populations of BCGs suggest that they are old galaxies with an intermittent star formation history.
A good match can be achieved between the synthesized and observed spectra of the BCGs, which suggests that stellar radiation is an important energy source for BCGs.
The emission-line spectra from the gaseous components in these objects were isolated and analyzed. Using these stellar-subtracted spectra, we have calculated the internal reddening of the emission line regions and attempted to identify the ionizing mechanism in BCGs. Comparing with the reddening derived from the continuum, we conclude that the continuum and emission line regions suffer different degrees of dust obscuration.
The stellar subtracted spectra should be very useful for further investigation of physical conditions and chemical abundance of the emission line regions of BCGs.
###### Acknowledgements.
We are grateful to the Chinese 2.16 m Telescope time allocation committee for their support of this programme and to the staff and telescope operators of the Xinglong Station of Beijing Astronomical Observatory for their support. We would especially like to thank Prof. J.Y. Hu and Dr. J.Y. Wei for their active cooperation, which enabled all of the observations to go smoothly. We are also deeply grateful to Henrique R. Schmitt for kindly providing us with the stellar population synthesis procedure. We also thank the anonymous referee for helpful comments and constructive suggestions. Special thanks to Dr. S. Mao and Dr. T. Kiang for their hard work revising the English of this paper. This work was supported by grants from the National Pandeng Project and the Natural Science Foundation of China.
# QCD SPECTRAL SUM RULES AND SPONTANEOUSLY BROKEN CHIRAL SYMMETRY

Work supported in part by BMBF, GSI and Conselleria de Cultura, Educació i Ciència de la Generalitat Valenciana.
## Abstract
The gap $`\mathrm{\Delta }=4\pi f_\pi \approx 1.2`$ GeV of spontaneous chiral symmetry breaking is introduced as a scale delineating the resonance and continuum regions in the QCD spectral sum rules for vector mesons. Basic current algebra results are easily recovered, and accurate sum rules for the lower moments of the spectral distributions are derived. The in-medium scaling of vector meson masses finds a straightforward interpretation, at least in the narrow width limit.
PACS: 11.30.Qc; 11.55.Hx
Spectral sum rules have frequently been used to connect observable information through dispersion relations with the operator product expansion (OPE) in QCD, in the form of either SVZ sum rules or finite energy sum rules (FESR). Prototype examples are the sum rules for the lightest ($`\rho `$ and $`\omega `$) vector mesons. The starting point is the vacuum current-current correlation function,
$$\mathrm{\Pi }_{\mu \nu }(q)=i\int d^4x\,e^{iqx}\langle 0|\mathcal{T}j_\mu (x)j_\nu (0)|0\rangle =\left(g_{\mu \nu }-\frac{q_\mu q_\nu }{q^2}\right)\mathrm{\Pi }(q^2),\qquad (1)$$
where $`\mathcal{T}`$ denotes the time-ordered product and the currents are specified, for the case of interest here, as $`j_\mu ^{(\rho )}=(\overline{u}\gamma _\mu u-\overline{d}\gamma _\mu d)/2`$ for the $`\rho `$ channel and $`j_\mu ^{(\omega )}=(\overline{u}\gamma _\mu u+\overline{d}\gamma _\mu d)/6`$ for the $`\omega `$ channel. We work as usual with the spectrum
$$R(q^2)=\frac{12\pi }{q^2}\,\text{Im}\,\mathrm{\Pi }(q^2)\qquad (2)$$

normalized to the ratio $`\sigma (e^+e^-\to \text{hadrons})/\sigma (e^+e^-\to \mu ^+\mu ^-)`$.
The nuclear physics interest in QCD sum rules is motivated by applications of SVZ type Borel sum rules not only in vacuum, but also in nuclear matter in order to extract in-medium properties of vector mesons. The commonly adopted procedure is to use a schematic "duality" ansatz for $`R`$, with the vector meson resonance represented as a $`\delta `$-function and a step-function continuum starting at a threshold $`s_0`$. The position of the resonance and the threshold $`s_0`$ are then fitted by requiring consistency with the vacuum or in-medium OPE side of the sum rule.
The continuum threshold $`s_0`$ is usually introduced as a free parameter. On the other hand, spontaneous chiral symmetry breaking in QCD suggests that the mass gap which separates the QCD ground state from the high energy continuum should be expressed as some multiple of the pion decay constant, $`f_\pi =92.4`$ MeV, since this is the only remaining scale in the limit of vanishing quark masses.
In the present paper we propose to identify this continuum threshold with the chiral gap parameter, $`\mathrm{\Delta }=4\pi f_\pi \approx 1.2`$ GeV, by setting

$$\sqrt{s_0}=\mathrm{\Delta }=4\pi f_\pi ,\qquad (3)$$
and thereby unifying QCD sum rules with spontaneous chiral symmetry breaking.
Starting from this assumption we shall demonstrate that QCD spectral sum rules combined with vector meson dominance (VMD) immediately imply well known current algebra relations, a very welcome feature. When realistic spectral distributions are used, the transition between resonance region and continuum is no longer sharp, but the chiral mass gap (3) is shown still to control the smooth turnover from the hadronic part to the asymptotic QCD domain of the spectrum for both $`\rho `$ and $`\omega `$ channels. We also point out briefly that our hypothesis (3) permits one to understand the in-medium results of ref. in a simple and straightforward way, using the leading density dependence of the pion decay constant.
Our approach is based on rigorous sum rules for the lowest moments of the spectral distribution (2). Direct access to these moment sum rules is best given by the FESR method. Consider the vacuum correlation function $`\mathrm{\Pi }(q^2=s)`$ of Eq. (1) in the complex $`s`$-plane, where it has a cut along the positive real axis. Choose a closed loop $`\gamma `$ consisting of a path which surrounds and excludes the cut along $`\text{Re}\,s>0`$, and joins with a circle $`C_{s_0}`$ of fixed radius $`s_0`$. Cauchy's theorem implies $`\oint _\gamma ds\,s^{N-1}\mathrm{\Pi }(s)=0`$ for integer $`N\ge 0`$. Separating this integral and using Eq. (2) gives
$$\int _0^{s_0}ds\,s^NR(s)=6\pi i\oint _{C_{s_0}}ds\,s^{N-1}\mathrm{\Pi }(s)=6\pi s_0^N\int _0^{2\pi }d\theta \,e^{iN\theta }\,\mathrm{\Pi }(s_0e^{i\theta }).\qquad (4)$$
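The last step in Eq. (4) is simply the parametrization of the circle: writing $`s=s_0e^{i\theta }`$ one has

$$ds=is_0e^{i\theta }\,d\theta ,\qquad ds\,s^{N-1}=is_0^N\,e^{iN\theta }\,d\theta ,$$

so each moment picks up the fixed factor $`s_0^N`$ multiplying an angular integral of $`\mathrm{\Pi }`$ weighted by $`e^{iN\theta }`$.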
It remains to evaluate the r.h.s. integral along the circle of radius $`s_0`$. For sufficiently large $`s_0`$ one can use perturbative QCD and add non-perturbative corrections via the OPE:

$$\mathrm{\Pi }(s)=\mathrm{\Pi }^{\text{pQCD}}(s)+\frac{d}{12\pi ^2}\sum _{n\ge 0}(-)^n\frac{c_{n+1}}{s^n},\qquad (5)$$

with $`d=3/2`$ or $`1/6`$ for the $`\rho `$ or $`\omega `$ channels, respectively. The parameters $`c_n`$ have dimension (mass)<sup>2n</sup>, with $`c_1=3(m_u^2+m_d^2)`$ and the dimension-4 condensate combination $`c_2=(\pi ^2/3)\langle (\alpha _s/\pi )G_{\mu \nu }G^{\mu \nu }\rangle +4\pi ^2\langle m_u\overline{u}u+m_d\overline{d}d\rangle `$, which are reasonably well under control. The coefficient $`c_3`$ involves the (much less certain) four-quark condensates. In practice $`c_1`$ is negligibly small, and the quark condensate piece in $`c_2`$ can be dropped in comparison with the gluon condensate term. As usual, we ignore logarithmic corrections to the condensates.
The pQCD part of $`\mathrm{\Pi }(s)`$ is calculated to third order in $`\alpha _s`$ using the $`\overline{MS}`$ scheme. The result is

$$\mathrm{\Pi }^{\text{pQCD}}(s)=\frac{d}{12\pi ^2}\sum _{n=0}^3\left(\frac{\alpha _s(\mu ^2)}{\pi }\right)^n\mathrm{\Pi }^{(n)}(s;\mu ^2)\qquad (6)$$

at a renormalization point $`\mu ^2`$, with $`\mathrm{\Pi }^{(0)}=s[K_0+\mathrm{ln}(-s/\mu ^2)]`$ and $`\mathrm{\Pi }^{(n)}=s[K_n+\sum _{m=1}^nA_{mn}\mathrm{ln}^m(-s/\mu ^2)]`$ for $`n=1`$, 2, 3. The constants $`K_n`$ are irrelevant since they drop out in the loop integral (4). The relevant coefficients (for $`N_f=3`$ flavours) of the logarithmic terms are $`A_{11}=1`$, $`A_{12}=1.641`$, $`A_{22}=1.125`$, $`A_{13}=10.28`$, $`A_{23}=5.69`$, $`A_{33}=1.69`$. The renormalization point can be chosen at $`\mu ^2=s_0`$.
Inserting $`\mathrm{\Pi }(s_0e^{i\theta })`$ from Eqs. (5,6) and using $`\mathrm{ln}(-e^{i\theta })=i(\theta -\pi )`$, the r.h.s. integral of Eq. (4) is easily worked out and one arrives at the following set of sum rules for the lowest spectral moments with $`N=0`$, 1, 2:

$$\int _0^{s_0}ds\,s^NR(s)=d\left[\frac{s_0^{N+1}}{N+1}(1+\delta _N)+(-)^Nc_{N+1}\right].\qquad (7)$$
The perturbative QCD corrections up to $`O(\alpha _s^3)`$ are summarized as
$$\delta _N=\frac{\alpha _s}{\pi }+\left(\frac{\alpha _s}{\pi }\right)^2\left[A_{12}-\frac{2}{N+1}A_{22}\right]+\left(\frac{\alpha _s}{\pi }\right)^3\left[A_{13}-\frac{2}{N+1}A_{23}+\left(\frac{6}{(N+1)^2}-\pi ^2\right)A_{33}\right],\qquad (8)$$
where $`\alpha _s`$ is taken at $`\mu ^2=s_0`$. Note that different $`\delta _N`$ apply for the various moments of $`R(s)`$, and that condensates of different mass dimension appear well separated in the different moments. For example, the uncertain four-quark condensates enter only at $`N=2`$, whereas the moments with $`N=0,1`$ are free of such uncertainties. It can be readily demonstrated that the results, Eqs. (7,8), are rigorously consistent with those obtained using the Borel sum rule method. It is also interesting to note that our deduction of the sum rules (7) is analogous to the procedure used to extract $`\alpha _s`$ from $`\tau `$ decays.
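For orientation, a small numerical sketch of Eq. (8), with the relative signs as reconstructed above and $`\alpha _s(s_0)=0.39`$, the value used later in the text:

```python
import math

# coefficients of the log terms quoted in the text (N_f = 3)
A12, A22 = 1.641, 1.125
A13, A23, A33 = 10.28, 5.69, 1.69

def delta_N(N, alpha_s):
    """Perturbative correction delta_N of Eq. (8) for the N-th spectral moment."""
    a = alpha_s / math.pi
    d2 = A12 - 2.0 / (N + 1) * A22
    d3 = A13 - 2.0 / (N + 1) * A23 + (6.0 / (N + 1) ** 2 - math.pi ** 2) * A33
    return a + a**2 * d2 + a**3 * d3

for N in (0, 1, 2):
    print("delta_%d = %.4f" % (N, delta_N(N, alpha_s=0.39)))
```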
Before applying these sum rules to realistic spectral distributions, we turn first to a schematic model for $`R(s)`$ which combines vector meson dominance (VMD) and the QCD continuum:

$$R_V(s)=12\pi ^2\frac{m_V^2}{g_V^2}\delta (s-m_V^2)+\text{continuum}\qquad (V=\rho ,\omega ),\qquad (9)$$

with $`g_\rho =g_\omega /3=g`$, the universal vector coupling constant. For convenience we discuss the $`\rho `$ meson sum rules first. We set $`s_0=16\pi ^2f_\pi ^2`$ according to our conjecture (3) and find the sum rules for the first two moments:
$$\int _0^{s_0}ds\,R_\rho (s)=12\pi ^2\frac{m_\rho ^2}{g^2}=\frac{3}{2}(4\pi f_\pi )^2(1+\delta _0)+\frac{3}{2}c_1,\qquad (10)$$

$$\int _0^{s_0}ds\,sR_\rho (s)=12\pi ^2\frac{m_\rho ^4}{g^2}=\frac{3}{4}(4\pi f_\pi )^4(1+\delta _1)-\frac{3}{2}c_2.\qquad (11)$$
Once the hypothesis (3) is launched, there are no free parameters in these sum rules. Dropping the QCD corrections for the moment, the sum rule (10) immediately gives

$$m_\rho ^2=2g^2f_\pi ^2,\qquad (12)$$

the well-known KSFR relation, while the sum rule (11) for the first moment further specifies

$$g=2\pi .\qquad (13)$$
The results (12,13) are quite remarkable: identifying the onset of the continuum spectrum with the gap $`\mathrm{\Delta }=4\pi f_\pi `$, the scale for spontaneous chiral symmetry breaking, a unification of QCD spectral sum rules with current algebra emerges, yielding $`m_V=\sqrt{8}\pi f_\pi =\mathrm{\Delta }/\sqrt{2}`$ in leading order (identical relations hold for both the $`\rho `$ and $`\omega `$ mesons). The condition $`g=2\pi `$ is actually consistent with the effective action of the $`SU(3)\times SU(3)`$ non-linear sigma model and the Wess-Zumino term. Application of the QCD corrections moves $`g`$ to within less than 10% of the empirical $`g_\rho \approx 5.04`$ deduced from the $`\rho \to e^+e^-`$ decay width.
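Explicitly, neglecting $`\delta _N`$ and the condensates, Eqs. (10) and (11) give $`m_\rho ^2/g^2=2f_\pi ^2`$ and $`m_\rho ^4/g^2=16\pi ^2f_\pi ^4`$; dividing the two relations,

$$m_\rho ^2=8\pi ^2f_\pi ^2\;\Rightarrow \;m_\rho =\sqrt{8}\,\pi f_\pi \approx 821\ \text{MeV},\qquad g^2=\frac{m_\rho ^2}{2f_\pi ^2}=4\pi ^2,$$

which, for $`f_\pi =92.4`$ MeV, is within about 7% of the physical $`\rho `$ mass.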
Let us now turn from schematic to realistic spectral distributions. The vector mesons have energy dependent widths from their leading decay channels $`\rho \to \pi \pi `$ and $`\omega \to 3\pi `$, etc. Multipion ($`n\pi `$) channels (with $`n`$ odd/even for $`I=0,1`$) open up and continue toward the asymptotic pQCD spectrum. It would then seem difficult, at first sight, to locate the gap $`\mathrm{\Delta }`$ which delineates the resonance from the QCD continuum. Remarkably, though, the turnover from resonance to continuum is still governed by the scale set by the chiral gap $`\mathrm{\Delta }=4\pi f_\pi `$. One can simply replace the sharp edge at $`s_0=\mathrm{\Delta }^2`$ by a smooth interpolation in the interval $`\mathrm{\Delta }^2-0.6\ \text{GeV}^2<s<\mathrm{\Delta }^2+0.6\ \text{GeV}^2`$. In essence, the spreadings of the resonance and of the gap edge amount to incorporating $`1/N_c`$ corrections to the zero width spectrum (9).
In practice the calculation proceeds as follows. Consider the $`\rho `$ meson first. The resonant part of the spectrum is well described using effective field theory as shown in ref. , including $`\rho -\omega `$ mixing. Guided by this approach we use a parametrized form, given in Ref. , which reproduces the $`e^+e^-\to \pi ^+\pi ^-`$ data (see dashed curve in Fig. 1). We let the QCD continuum start at $`s_c=2`$ GeV<sup>2</sup> and use a linear interpolation across the interval $`0.8\ \text{GeV}^2<s<s_c`$ centered at $`\mathrm{\Delta }^2`$ (dashed-dotted curve in Fig. 1). When added to the tail of the $`\rho `$ resonance, this interpolation obviously works well in reproducing the total $`e^+e^-\to 2\pi ,4\pi ,\mathrm{\dots }`$ data in the $`I=1`$ channel. We now employ the sum rules (7) with $`s_0=s_c`$ and check overall consistency, using $`\alpha _s(s_0)=0.39`$. For the lowest moment with $`N=0`$ the left-hand side gives $`\int _0^{s_c}ds\,R_\rho (s)=3.527`$ GeV<sup>2</sup>, while the right-hand side gives 3.521 GeV<sup>2</sup>. For $`N=1`$ the l.h.s. integral gives $`\int _0^{s_c}ds\,sR_\rho (s)=3.27`$ GeV<sup>4</sup>, while the r.h.s. using a gluon condensate $`\langle (\alpha _s/\pi )G^2\rangle =(0.36\ \text{GeV})^4`$ yields 3.32 GeV<sup>4</sup>, so there is consistency to better than 2%. The second moment ($`N=2`$) involves the uncertain four-quark condensates and is more sensitive to the detailed form of the spectrum at higher energies. Its discussion will be delegated to a forthcoming paper, but at this point we can already conclude that the usual factorization ansatz for the four-quark condensate assuming ground state dominance turns out not to be justified: factorization underestimates the four-quark condensate by a large amount. The statements about the low ($`N=0,1`$) spectral moments are of course free of such uncertainties.
For the $`\omega `$ meson spectrum we are again guided by the effective Lagrangian approach (Ref. ). The resonant part is parametrized as in Ref. . (We stay in the $`u,d`$-quark sector and therefore omit $`\omega -\varphi `$ mixing.) Otherwise we follow a scheme analogous to that for the $`\rho `$ meson. The QCD continuum starts at $`s_c=2`$ GeV<sup>2</sup> and the same (linear) interpolation around $`s=\mathrm{\Delta }^2`$ is added to the resonance tail (dashed curve in Fig. 2) between 0.8 and 2 GeV<sup>2</sup>. The result (Fig. 2) compares quite well with the data in the $`I=0`$ $`e^+e^-\to `$ hadrons channel, though with admittedly poor statistics in the region above the $`\omega `$ resonance. For the lowest spectral moment we find $`\int _0^{s_c}ds\,R_\omega (s)=0.3917`$ GeV<sup>2</sup> as compared to 0.3912 GeV<sup>2</sup> from the r.h.s. of the $`N=0`$ sum rule (7). The $`N=1`$ moment gives $`\int _0^{s_c}ds\,sR_\omega (s)=0.371`$ GeV<sup>4</sup>, in perfect agreement with the r.h.s. value 0.369 GeV<sup>4</sup>. Consistency of the second moment requires the same four-quark condensate as observed for the $`\rho `$ meson, substantially larger than the value suggested by factorization into $`\langle \overline{q}q\rangle ^2`$.
In summary, the degree of consistency found in this approach is quite impressive, at least for the first two moments of the spectral distributions. In particular, the crossover between $`\rho `$ and $`\omega `$ resonance regions and the asymptotic continuum, although smooth, is still controlled by the chiral symmetry breaking scale $`\mathrm{\Delta }=4\pi f_\pi `$.
With this observation in mind we can briefly comment on the in-medium version of the schematic model (9), with $`m_V`$ replaced by density dependent masses $`m_V^{*}(\rho )`$ (for $`V=\rho ,\omega `$) and the continuum onset $`s_0`$ replaced by $`s_0^{*}(\rho )`$. Using such a parametrization Hatsuda and Lee found $`m_V^{*}\approx m_V(1-0.16\,\rho /\rho _0)`$ in their in-medium QCD sum rule analysis. This result can now be given a straightforward interpretation. The nuclear matter analogue of the sum rules (10, 11) gives the leading behaviour $`m_V^{*}(\rho )\approx \sqrt{8}\pi f_\pi ^{*}(\rho )`$, where $`f_\pi ^{*}`$ is the pion decay constant in matter related to the time-component of the axial current. The in-medium PCAC relation $`f_\pi ^{*2}=-(m_q/m_\pi ^2)\langle \overline{q}q\rangle ^{*}+\mathrm{\dots }`$ implies that $`f_\pi ^{*}`$ scales like the square root of the quark condensate to leading chiral order. The magnitude of this condensate at nuclear matter density $`\rho _0`$ (see for a review and further references) is expected to drop by about $`1/3`$ of its value at $`\rho =0`$, so $`m_V^{*}(\rho _0)/m_V\approx f_\pi ^{*}(\rho _0)/f_\pi \approx 5/6`$ in the zero width limit. The same result is also found in ref. and in Brown-Rho scaling. The $`\omega `$ meson, which is predicted still to survive as a reasonably narrow state in nuclear matter, should be a good indicator of the way in which the chiral gap decreases with increasing density. The much broader $`\rho `$ meson, on the other hand, is probably not very useful for this purpose.
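As a quick numerical check of the quoted ratio: with the condensate reduced to about $`2/3`$ of its vacuum value at $`\rho _0`$,

$$\frac{f_\pi ^{*}(\rho _0)}{f_\pi }=\left(\frac{\langle \overline{q}q\rangle ^{*}}{\langle \overline{q}q\rangle }\right)^{1/2}\approx \sqrt{2/3}\approx 0.82\approx \frac{5}{6},$$

consistent with the $`\sim `$16% mass reduction quoted above.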
In conclusion, introducing the chiral gap $`\mathrm{\Delta }=4\pi f_\pi `$ as a relevant scale in QCD spectral sum rules for vector mesons establishes important connections with current algebra and the in-medium scaling of vector meson masses. Further work along these lines is in progress.
# Geometry effects at conductance quantization in quantum wires
In Refs. the fabrication of quasi-one-dimensional electronic systems (quantum wires) by cleaved edge overgrowth (CEO) in combination with a gate potential has been reported. Measured mean free paths of about 10 $`\mu `$m indicate that the electrons pass through the wire ballistically. Therefore the Landauer formula suggests the linear-response conductance $`G=\frac{2e^2}{h}M`$, where $`M`$ is the number of transverse modes in the wire, which can be reduced by increasing the gate potential $`|V_g|`$. In contrast, measurements show plateaus below this theoretical value. Possible explanations may be based either on many-particle interactions in the wire or on geometry effects causing scattering at the ends of the wire. Our goal is to analyze the magnitude of these geometry effects, neglecting interactions. Specifically, we investigate the influence of the geometry and the spatial potential landscape on the interference between wire states and contact states.
Our calculations were performed applying the method of equilibrium Green's functions following Ref. . Within the discretization of the Hamiltonian in tight-binding approximation ($`a=7.5`$ nm) and with Dirichlet boundary conditions, the Green's functions are finite matrices. The leads (source and drain) are treated in terms of self-energies. We use the effective mass $`0.067m_e`$.
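For orientation, a minimal sketch of this Green's-function machinery is given below for a strictly one-dimensional single-mode chain with semi-infinite leads; the actual calculation in the text is two-dimensional with hard-wall boundaries, so this only illustrates the method.

```python
import numpy as np

t = 1.0   # hopping energy; the 1D band is E = -2t*cos(ka), width 4t
N = 40    # number of sites in the device region

def transmission(E, onsite):
    """Landauer transmission T(E) = Gamma_L * Gamma_R * |G_1N|^2 for a 1D chain."""
    # retarded surface self-energy of a semi-infinite 1D lead: Sigma = -t*exp(i*ka)
    ka = np.arccos(np.clip(-E / (2.0 * t), -1.0, 1.0))
    sigma = -t * np.exp(1j * ka)
    H = (np.diag(onsite) - t * np.eye(N, k=1) - t * np.eye(N, k=-1)).astype(complex)
    H[0, 0] += sigma      # left lead attached to the first site
    H[-1, -1] += sigma    # right lead attached to the last site
    G = np.linalg.inv(E * np.eye(N) - H)      # retarded Green's function of the device
    gamma = -2.0 * sigma.imag                 # broadening Gamma = i(Sigma - Sigma^dagger)
    return float(gamma * gamma * abs(G[0, -1]) ** 2)

# a gate-induced rectangular potential step in the middle of the wire
onsite = np.zeros(N)
onsite[10:30] = 0.3 * t
for E in (-1.5, -1.0, -0.5):
    print("E = %5.2f t:  T = %.3f" % (E, transmission(E, onsite)))
```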
Fig. 1 shows the results of our calculations, where $`V_g`$ is applied over the whole length of the wire, of length $`L_w`$ and width $`b_w=53`$ nm. In Fig. 1a (geometry as in Fig. 1b) three plateaus appear because there are three transverse modes in the wire below the Fermi energy of 15 meV for $`V_g=0`$. Here, the width of the contacts, $`b_c`$, does not affect the transmission for $`b_c\ge b_w`$, so we choose $`b_c=b_w`$. The solid line in Fig. 1a corresponds to a rectangular potential step between contacts and wire. The oscillations in the conductance plateaus are well known as quantum mechanical transmission through a barrier, and their frequency scales with the length of the wire. The dashed line in Fig. 1a follows from a calculation for a trapezoidal gate potential with a linear ramp over 30 nm at both ends of the wire. The oscillations almost disappear and almost complete quantization in every plateau is found. Thus we are still in the adiabatic regime and no reflections at the boundary between contact and wire appear. However, in the solid curve the averaged plateau height of the lowest mode is clearly below the universal value, in agreement with recent measurements in V-groove wires. If the potential changes abruptly at the transition between contacts and wire, the mismatch between different states increases and there is a reduction in transmission for the lower modes, which essentially affects the first jump at high $`V_g`$, while the subsequent jumps become closer to $`2e^2/h`$.
Furthermore, we considered an additional attractive potential $`\varphi `$ (groove) close to the CEO interface (top boundary of Fig. 1d), which extends over the whole length $`L_w+2L_c`$ and drops linearly along the x direction by 10 meV on the scale of 53 nm. The solid line in Fig. 1c refers to the U-shaped lead configuration (solid boxes for source and drain in Fig. 1d). In this case only two plateaus are observed: the lowest mode in the groove has an energy below the contact potential and is not accessible from the leads. If we change the configuration of the leads (dashed boxes in Fig. 1d), three plateaus are observed again; see the dashed curve in Fig. 1c. Alternatively, one can obtain the same effect by placing randomly distributed localized potentials in the groove region, which act as scatterers. Thus, it is essential from which direction the electrons are injected into the device if this additional groove potential $`\varphi `$ is present.
We have studied the effects of geometry in cleaved edge overgrowth structures on the transmission through quantum wires. As long as the energy of the modes in the wire is above the contact level, the junction between a wide contact and a narrow wire has almost no influence on the transmission, while a sharp potential step between contact and wire strongly affects the conduction. In contrast, modes with energies below the contact level may be difficult to access. Here the position of the external leads, as well as the presence of scattering, is of crucial importance.
# High-Field Electrical Transport in Single-Wall Carbon Nanotubes
## Abstract
Using low-resistance electrical contacts, we have measured the intrinsic high-field transport properties of metallic single-wall carbon nanotubes. Individual nanotubes appear to be able to carry currents with a density exceeding $`10^9\text{A}/\text{cm}^2`$. As the bias voltage is increased, the conductance drops dramatically due to scattering of electrons. We show that the current-voltage characteristics can be explained by considering optical or zone-boundary phonon emission as the dominant scattering mechanism at high field.
The potential electronic application of single-wall carbon nanotubes (SWNTs) requires a detailed understanding of their fundamental electronic properties, which are particularly intriguing due to their one-dimensional (1D) nature. Metallic SWNTs have two 1D subbands crossing at the Fermi energy. In the ideal case the resistance is thus predicted to be $`h/4e^2`$ or 6.5 k$`\mathrm{\Omega }`$. In early electrical transport experiments, however, the nanotubes typically formed a tunnel barrier of high resistance of $`\sim `$1 M$`\mathrm{\Omega }`$ with the metal contacts. Consequently the bias voltage dropped almost entirely across the contacts, and tunneling dominated the transport. A number of interesting phenomena have been observed in this regime. At low temperatures, Coulomb blockade effects prevail. At relatively high temperatures, the transport characteristics appear to be described by tunneling into the so-called Luttinger liquid, a unique correlated electronic state in 1D conductors which is due to electron interactions.
One of the most important questions that remains to be addressed is how the electrons traverse the nanotubes, i.e., whether they travel ballistically or are scattered by impurities or phonons. The unusual band structure of metallic tubes suggests a suppression of elastic backscattering of electrons by long-range disorder. Long mean free paths for electrons near the Fermi energy have indeed been inferred from regular Coulomb oscillations and coherent tunneling at low temperatures. However, there has been no transport study of electrons with significant excess energy above the Fermi energy. It is not clear whether such electrons would experience strong scattering and what type of scattering mechanism would dominate.
In this Letter we present electrical transport measurements of individual nanotubes using low-resistance contacts (LRCs). In contrast to the case of high-resistance contacts (HRCs), a bias voltage applied between two LRCs establishes an electric field across the nanotube which accelerates the electrons, enabling transport studies of high-energy electrons. We find that individual SWNTs can sustain a remarkably high current density of more than $`10^9`$ $`\text{A}/\text{cm}^2`$. The current seems to saturate at high electric field. We discuss possible scattering mechanisms and suggest that optical or zone-boundary phonon emission by high-energy electrons can explain the observed behavior. An analytic theory based on the Boltzmann equation is developed which includes both elastic scattering and phonon emission. The numerical calculations reproduce the experimental results remarkably well.
The inset to Fig. 1(a) shows an atomic force microscope (AFM) image of our typical LRC sample. The 20 nm thick, 250 nm wide Ti/Au electrodes are embedded in thermally-grown SiO<sub>2</sub> with a height difference of less than 1 nm, which minimizes the deformation of the nanotubes near the electrodes. This is achieved by electron-beam lithography and anisotropic reactive-ion etching of SiO<sub>2</sub> using a single layer of PMMA as both electron-beam resist and etching mask, followed by metal evaporation and lift-off. The electrodes are cleaned thoroughly in fuming nitric acid. The nanotubes are then deposited on top of the electrodes from a suspension of SWNTs ultrasonically dispersed in dichloroethane. We find that brief annealing of the electrodes at 180 °C improves the reproducibility of the contact resistance. Only nanotubes with an apparent height of $`\sim `$1 nm under AFM are chosen for transport measurements; these are presumably individual SWNTs. Metallic nanotubes are selected based on the absence of a gate effect on transport at high temperatures. This procedure yields a typical two-terminal resistance of individual metallic tubes of less than 100 k$`\mathrm{\Omega }`$ (the lowest is 17 k$`\mathrm{\Omega }`$) at room temperature, as compared to the $`\sim `$ M$`\mathrm{\Omega }`$ resistance obtained using Pt as the contact material in previous experiments. Similar reduction in contact resistance has also been achieved in a different contact geometry. The exact mechanism for the low contact resistance is unclear. However, clean flat single-crystalline gold facets may increase the coupling by increasing the effective contact length over which a small tube-electrode separation is realized.
Figure 1 shows the typical two-terminal current $`I`$ and differential conductance $`dI/dV`$ vs voltage $`V`$ obtained using LRC (Au) and HRC (Pt). $`dI/dV`$ is acquired simultaneously using a standard ac lock-in technique. The room-temperature zero-bias resistances of the two samples are 40 k$`\mathrm{\Omega }`$ and 670 k$`\mathrm{\Omega }`$, respectively. For both samples, the zero-bias conductance $`G`$ decreases monotonically as the temperature $`T`$ decreases. The large-bias-voltage dependence of $`dI/dV`$, however, is notably different. For the LRC sample, $`dI/dV`$ increases with increasing bias, reaching a maximum at $`\sim `$100 mV. As the bias increases further, $`dI/dV`$ drops dramatically. In contrast, the HRC sample exhibits a monotonic increase of $`dI/dV`$ as a function of voltage up to 1 V. The inset to Fig. 1(b) plots $`dI/dV`$ vs $`V`$ on a double-logarithmic scale for the HRC sample, in which it appears that $`dI/dV`$ can be fit with a power-law function for large bias. Both the temperature dependence of $`G`$ and the bias-voltage dependence of $`dI/dV`$ for the HRC sample are typical of individual SWNTs and ropes with similar or lower conductance values, which are attributed to the suppressed tunneling density of states in a Luttinger liquid. The similar behavior around zero bias for the LRC sample suggests that it comes from the same origin. In the remainder of the paper, we focus on the large-bias behavior of the LRC samples.
We have further extended the $`I`$-$`V`$ measurements up to 5 V, as shown in Fig. 2. Strikingly, the $`I`$-$`V`$ curves at large bias measured at different temperatures between 4 K and room temperature essentially overlap with each other. The current at 5 V exceeds 20 $`\mu `$A, which corresponds to a current density of more than $`10^9`$ $`\text{A}/\text{cm}^2`$ if a spatial extent of the $`\pi `$-electron orbital of $`\sim `$3 Å is used to estimate the current-carrying cross section. From the shape of the $`I`$-$`V`$ curves, it is clear that the trend of decreasing conductance continues to high bias. Extrapolating the measured $`I`$-$`V`$ curves to higher voltage would lead to a current saturation, i.e., a vanishing conductance. Interestingly, the saturation current seems to be independent of the distance between the electrodes.
We find that the resistance, $`R\equiv V/I`$, can be fit remarkably well by a simple linear function of $`V`$ over almost the entire range of applied voltage (see Fig. 2 inset):

$$R=R_0+V/I_0,\qquad (1)$$

where $`R_0`$ and $`I_0`$ are constants. Only at high voltages near 5 V do some samples start showing slight deviations from this linear behavior. From the slope of the linear part of $`R`$-$`V`$ we find $`I_0\approx 25`$ $`\mu `$A, which is approximately the same for all of the samples we have measured.
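Note that Eq. (1) is equivalent to an $`I`$-$`V`$ curve that saturates at $`I_0`$:

$$I(V)=\frac{V}{R_0+V/I_0}\underset{V\to \infty }{\longrightarrow }I_0,$$

with the Ohmic limit $`I\approx V/R_0`$ recovered at low bias, $`V\ll I_0R_0`$.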
At first sight the current saturation might be explained from the band structure. Current in metallic nanotubes is carried by two propagating 1D subbands. In the absence of scattering, the chemical potentials of the right and left moving states will differ by the applied voltage $`eV`$. At low voltages this leads to an Ohmic response, but when $`eV`$ exceeds the Fermi energy of the 1D subbands, the left moving states will be completely depleted and the current will saturate. The Fermi energy, measured relative to the nearest band edge, is approximately 2.9 eV. Experimentally, however, the current starts saturating at a much lower voltage. An alternative model is thus needed to explain the saturation.
We expect the measured resistance to be a combination of the resistance due to the contacts and the resistance due to backscattering along the length of the nanotube. The current saturation is unlikely to arise from an increased contact resistance at high voltages since the contacts would then behave like high-resistance tunneling contacts, and one would expect to see features in the $`I`$-$`V`$ associated with tunneling into the 1D subbands of the nanotube. The measured $`I`$-$`V`$, however, is featureless.
We thus focus on the effect of backscattering in the nanotube. The behavior of $`R`$-$`V`$ suggests that in addition to a constant scattering term, which most probably comes from contact scattering or impurity scattering, there is a dominant scattering mechanism with a mean free path (mfp) which scales inversely with the voltage.
Electrons can backscatter off phonons and other electrons. Electron-electron scattering is appealing at first, since it does not involve heating the lattice. The only electron-electron scattering that contributes to resistivity is Umklapp scattering, with a scattering rate directly proportional to the electron temperature $`T_\mathrm{e}`$. This gives $`V/I\propto T_\mathrm{e}`$. $`T_\mathrm{e}`$ will be determined by how fast the heat can escape from the tube. If we assume that all the heat produced is carried by electrons into the leads and that the temperature along the tube is uniform, we have $`IV=4(\pi ^2/3)(k_\mathrm{B}T_\mathrm{e})^2/h`$, where the left-hand side is the rate at which heat is produced, and the right-hand side is the heat current carried by the two 1D channels. Hence we expect that $`I\propto V^{1/3}`$, which cannot describe the experiments. We have verified this $`V^{1/3}`$ behavior by numerically solving a Boltzmann equation similar to that discussed below. Luttinger-liquid effects, which have been ignored in the above arguments, tend to enhance Umklapp scattering at low energies. This would make the agreement with experiment even worse.
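Explicitly, combining $`V/I\propto T_\mathrm{e}`$ with the heat-balance relation above gives the quoted scaling in two lines:

$$\frac{V}{I}\propto T_\mathrm{e}\propto (IV)^{1/2}\Rightarrow V^{1/2}\propto I^{3/2}\Rightarrow I\propto V^{1/3}.$$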
This suggests that we must consider scattering from phonons. The coupling will be strongest for phonons which compress and stretch bonds on the lattice scale. There are three possible categories of phonons: (1) twistons or long-wavelength acoustic phonons, (2) optical phonons which are derived from the in-plane $`E_{2g_2}`$ mode of graphite with a frequency of 1580 cm<sup>-1</sup>, and (3) in-plane zone-boundary phonons with momentum which connects the two Fermi points of graphene. While zone-boundary phonons are not directly observable optically, force-constant models put their frequency in the range 1000–1500 cm<sup>-1</sup> in graphene. Twiston scattering is most likely not relevant since the scattering rate is smaller than the optical or zone-boundary phonon scattering by $`T/\mathrm{\Theta }_D`$, where $`T`$ is the lattice temperature and $`\mathrm{\Theta }_D\sim 2000`$ K is the Debye temperature. It is unlikely that the lattice temperature could be that high. Moreover, twistons may be pinned by the substrate.
Now we discuss backscattering due to the emission of optical or zone-boundary phonons. A related effect has been discussed previously in the context of semiconductors. The key point is that for an electron with energy $`E`$ to emit a phonon of energy $`\hbar \mathrm{\Omega }`$, there must be an available state to scatter into at energy $`E-\hbar \mathrm{\Omega }`$. In the presence of an electric field $`\mathcal{E}`$, electrons are accelerated, $`\hbar \dot{k}=e\mathcal{E}`$. It is simplest to consider the case in which the coupling to the phonon is so strong that, once an electron reaches the threshold for phonon emission, it is immediately backscattered. As indicated in the schematic in the inset to Fig. 3, a steady state population is then established in which the right moving electrons are populated to an energy $`\hbar \mathrm{\Omega }`$ higher than the left moving ones. The current carried in this state can be computed from a Landauer-type argument to be
$$I_0=(4e/h)\hbar \mathrm{\Omega }.$$
(2)
If we choose $`\hbar \mathrm{\Omega }=0.16`$ eV (corresponding to 1300 cm<sup>-1</sup>), this leads to a saturation current of 25 $`\mu `$A, which is independent of sample length and agrees very well with the measured saturation current.
In this picture, the mfp for backscattering phonons $`\ell _\mathrm{\Omega }`$ is equal to the distance an electron must travel to be accelerated to an energy above the phonon energy: $`\ell _\mathrm{\Omega }=\hbar \mathrm{\Omega }/e\mathcal{E}`$. This may be combined with a constant elastic scattering term via Matthiessen's rule to obtain an effective mfp, $`\ell _{\mathrm{eff}}^{-1}=\ell _\mathrm{e}^{-1}+\ell _\mathrm{\Omega }^{-1}`$, where $`\ell _\mathrm{e}`$ is the elastic scattering mfp. The resulting resistance, $`R=(h/4e^2)(L/\ell _{\mathrm{eff}})`$, then has the empirically observed form of Eq. (1) with $`R_0=(h/4e^2)L/\ell _\mathrm{e}`$ and $`I_0`$ given in Eq. (2).
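As a quick numerical sanity check of this picture, the following Python sketch evaluates Eq. (2) and the effective-mfp resistance. Here $`\hbar \mathrm{\Omega }=0.16`$ eV is the value used above, while $`L=1`$ $`\mu `$m and $`\ell _\mathrm{e}=300`$ nm are taken from the simulation parameters quoted below.

```python
# Numerical check of the phonon-emission picture.
h = 6.626e-34      # Planck constant, J s
e = 1.602e-19      # elementary charge, C
hOmega = 0.16      # phonon energy hbar*Omega, eV
L = 1e-6           # tube length, m (value used for Fig. 3)
l_e = 300e-9       # elastic mean free path, m (value used for Fig. 3)

I0 = (4 * e / h) * (hOmega * e)          # Eq. (2): saturation current, A
print(f"I0 = {I0 * 1e6:.1f} uA")         # -> ~25 uA

for V in (0.5, 1.0, 5.0):
    l_Omega = hOmega * L / V             # l_Omega = hbar*Omega/(e*E), E = V/L
    l_eff = 1.0 / (1.0 / l_e + 1.0 / l_Omega)
    R = (h / (4 * e**2)) * (L / l_eff)   # identical to R0 + V/I0
    print(f"V = {V:3.1f} V: R = {R / 1e3:6.1f} kOhm")
```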
To put the above interpretation on a more quantitative basis, we consider the Boltzmann equation for the distribution functions $`f_{L,R}(E_k,x,t)`$ of left and right moving $`(L,R)`$ electrons (details will be provided in a future publication):
$$\left[\partial _t\pm v_F\partial _x\pm v_Fe\mathcal{E}\partial _E\right]f_{L,R}=\left[\partial _tf_{L,R}\right]_{\mathrm{col}}.$$
(3)
Here $`v_F`$ is the Fermi velocity, and we have chosen to express the momentum dependence of $`f`$ in terms of $`E_k=\pm \hbar v_Fk`$. The left-hand side describes the collisionless evolution of the electrons in the presence of an electric field $`\mathcal{E}`$. For the collision term on the right, we consider a sum of three terms: (1) Elastic scattering, $`\left[\partial _tf_L\right]_\mathrm{e}=(v_F/\ell _\mathrm{e})(f_R-f_L)`$, where $`\ell _\mathrm{e}`$ is the elastic mfp. (2) Backscattering from phonons, $`\left[\partial _tf_L\right]_{\mathrm{pb}}=(v_F/\ell _{\mathrm{pb}})\left[(1-f_L)f_R^+-f_L(1-f_R^{-})\right]`$. Here $`f^\pm `$ are evaluated at $`E\pm \hbar \mathrm{\Omega }`$. $`\ell _{\mathrm{pb}}`$, which depends on the strength of the electron-phonon coupling, is the distance an electron travels before backscattering once the phonon emission threshold has been reached. This should be contrasted with $`\ell _\mathrm{\Omega }`$, the distance required to reach the threshold. We assume that the phonon temperature is much less than the phonon energy of $`\sim `$2000 K, so that the Bose occupation factors can be ignored. Finally, we consider (3) forward scattering from phonons, $`\left[\partial _tf_L\right]_{\mathrm{pf}}=(v_F/\ell _{\mathrm{pf}})\left[(1-f_L)f_L^+-f_L(1-f_L^{-})\right]`$.
The effects of the contact resistance may be included as a boundary condition at the ends of the tube. For instance, at the left contact ($`x=0`$) we have,
$$f_R(E,0)=t_L^2f_0(E-\mu _L)+(1-t_L^2)f_L(E,0),$$
(4)
where $`t_L^2`$ is the transmission probability for the contact, and $`f_0(E-\mu _L)=(\mathrm{exp}[(E-\mu _L)/k_\mathrm{B}T]+1)^{-1}`$ is the Fermi distribution function of the left contact with electrochemical potential $`\mu _L`$ and temperature $`T`$.
We have solved Eqs. (3,4) numerically to obtain the steady state distribution function $`f_{L,R}(E,x)`$ in the presence of an applied voltage $`\mu _L-\mu _R=eV`$ as a function of $`\hbar \mathrm{\Omega }`$, $`\ell _\mathrm{e}`$, $`\ell _{\mathrm{pb},\mathrm{pf}}`$, and $`t_{L,R}^2`$. The current is then simply given by $`I=(4e/h)\int \mathrm{d}E(f_L-f_R)`$. Figure 3 shows an example of the numerical calculation of the $`I`$-$`V`$ characteristic for a sample length of $`L=1\mu `$m. The parameters used in the plot are $`\hbar \mathrm{\Omega }=0.15`$ eV, $`t_{L,R}^2=0.5`$, $`\ell _\mathrm{e}=300`$ nm, $`\ell _{\mathrm{pb}}=10`$ nm, and $`\ell _{\mathrm{pf}}=\mathrm{\infty }`$. The resemblance to the experiment is remarkable. It is interesting to note that the current is insensitive to the contact scattering for $`V\gtrsim 0.5`$ V. Contact scattering affects only the low bias resistance, giving rise to the positive curvature in $`R`$-$`V`$ near $`V=0`$.
Assuming local thermal equilibrium, the Boltzmann equation may be used to derive hydrodynamic equations which govern the transport of charge and energy. These equations may then be solved analytically and give results which agree well with the simulations. They show that: (i) The empirical formula \[Eq. (1)\] is exact in the limit $`eV\ll \hbar \mathrm{\Omega }L/\ell _{\mathrm{pb}}`$, which means that the energy gained by an electron within a distance $`\ell _{\mathrm{pb}}`$ must be much less than the phonon energy, or equivalently, $`\ell _{\mathrm{pb}}\ll \ell _\mathrm{\Omega }`$. (ii) For larger $`V`$ the simple formula breaks down, and in the limit of very large $`V`$, the resistance becomes constant, $`R=(h/4e^2)L(\ell _\mathrm{e}^{-1}+\ell _{\mathrm{pb}}^{-1})`$. For the parameters used in Fig. 3, the crossover voltage is roughly 15 V. Indeed, a small negative curvature appears at 5 V in the $`V/I`$ vs $`V`$ plot (inset Fig. 3), which signals the beginning of the breakdown of the empirical formula. The curvature would be less pronounced if a shorter value for $`\ell _{\mathrm{pb}}`$ were used. We note that 10 nm seems rather short. An estimate using a simple deformation potential model gives $`\ell _{\mathrm{pb}}\sim 150`$ nm for a nearly armchair nanotube. More work is needed to obtain a more accurate estimate of the electron-phonon coupling strength.
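A one-line check of this crossover scale, $`eV_c\sim \hbar \mathrm{\Omega }L/\ell _{\mathrm{pb}}`$, for the Fig. 3 parameters and for the deformation-potential estimate:

```python
# Crossover voltage eV_c ~ hbar*Omega * L / l_pb above which Eq. (1)
# breaks down; hbar*Omega = 0.15 eV and L = 1 um, as in Fig. 3.
hOmega, L = 0.15, 1000.0                  # eV and nm
for l_pb in (10.0, 150.0):                # nm: Fig. 3 value, then the
    V_c = hOmega * L / l_pb               # deformation-potential estimate
    print(f"l_pb = {l_pb:5.1f} nm -> crossover ~ {V_c:4.1f} V")
```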
We have assumed that the heat generated in the tube escapes sufficiently quickly to avoid raising the lattice temperature too high. A simple estimate of the nanotube's thermal conductivity indicates that it is unlikely that all of the heat could be transmitted through the contacts. However, the nanotube is in intimate contact along its entire length with the substrate, which may be regarded as a thermal reservoir at $`T\sim 300`$ K. It would clearly be desirable to study further the nature of the thermal contact between the nanotube and substrate. Measurements on suspended nanotubes may provide some useful information.
We thank R.E. Smalley and coworkers for providing the indispensable nanotube materials, M.P. Anantram, H. Postma and S.J. Tans for discussions, and A. van den Enden for technical assistance. The work at Delft was supported by the FOM and the work at Pennsylvania by the NSF under grant DMR 96-32598.
# Nucleosynthesis in Power-Law Cosmologies
## I Motivation
We have studied a class of cosmological models in which the Universal scale factor grows as a power of the age of the Universe ($`a\propto t^\alpha `$) and concluded that such models are not viable, since constraints on the present age of the Universe and from the magnitude-redshift relation favor $`\alpha =1.0\pm 0.2`$, while those from the abundances of the light elements produced during primordial nucleosynthesis require that $`\alpha `$ lie in a very narrow range around 0.55. Successful primordial nucleosynthesis provides a very stringent constraint, requiring that a viable model simultaneously account for the observationally inferred primordial abundances of deuterium, helium-3, helium-4 and lithium-7. For example, if the nucleosynthesis constraint is satisfied, the present Universe would be very young; $`t_0=7.7\mathrm{Gyr}`$ for a Hubble parameter $`\mathrm{H}_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ (or, requiring $`\mathrm{H}_0\lesssim 54\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ for $`t_0\geq 10\mathrm{Gyr}`$).
Recently, Sethi et al. noted that cosmologies where the scale factor grows linearly with time may produce the correct amount of $`{}_{}{}^{4}\mathrm{He}`$ provided that the Universal baryon fraction is sufficiently large. At first this result might seem counter-intuitive, since such a Universe would have been very old at the time of Big Bang Nucleosynthesis (BBN), suggesting that all neutrons would have decayed and be unavailable to be incorporated in $`{}_{}{}^{4}\mathrm{He}`$. In fact, as Sethi et al. correctly pointed out, the expansion rate is so slow that the weak reactions remain in equilibrium sufficiently long to permit a 'simmering' synthesis of the required amount of $`{}_{}{}^{4}\mathrm{He}`$. However, such an old Universe also leaves more time to burn away D and $`{}_{}{}^{3}\mathrm{He}`$, so that no astrophysically significant amounts can survive. The observations of deuterium in high-redshift, low-metallicity QSO absorbers, the observations of lithium in very old, very metal-poor halo stars (the 'Spite plateau'), and those of helium in low-metallicity extragalactic H $`\mathrm{II}`$ regions require an internally consistent primordial origin. The Sethi et al. claim that deuterium could have a non-primordial origin is without basis, as shown long ago by Epstein, Lattimer & Schramm. Nevertheless, the paper of Sethi et al. prompted us to reinvestigate primordial nucleosynthesis in those power-law cosmologies which may produce 'interesting' amounts of $`{}_{}{}^{4}\mathrm{He}`$, so as to study the predicted yields for D, $`{}_{}{}^{3}\mathrm{He}`$, and $`{}_{}{}^{7}\mathrm{Li}`$.
## II Nucleosynthesis In Power-Law Cosmologies
Preliminaries. For a power-law cosmology it is assumed that the scale factor varies as a power of the age independent of the cosmological epoch:
$$a/a_0=(t/t_0)^\alpha =(1+z)^{-1},$$
(1)
where the subscript '0' refers throughout to the present time and '$`z`$' is the redshift. We may relate the present cosmic background radiation (CBR) temperature to that at any earlier epoch by $`T=(1+z)\beta T_0`$, where $`\beta \leq 1`$ accounts for any entropy production. For the models we consider, $`\beta =1`$ after electron-positron annihilation. The Hubble parameter is then given by
$$H=\frac{\dot{a}}{a}=\frac{\alpha }{t_0}\left(\frac{T}{\beta T_0}\right)^{\frac{1}{\alpha }}.$$
(2)
The second equality should be read with the understanding that it is not valid during the epoch of electron-positron annihilation, due to the non-adiabatic nature of annihilations. Power-law cosmologies with large $`\alpha `$ share the common feature that the slow Universal expansion rate permits neutrinos to remain in equilibrium until after electron-positron annihilation has ended, so that the neutrino and photon temperatures remain equal. In this case the entropy factor for $`T>m_e/3`$ in eq. 2 is,
$$\beta =(29/43)^{1/3}$$
(3)
in contrast to the standard Big Bang nucleosynthesis (SBBN) value of $`(4/11)^{1/3}`$. As $`\alpha `$ increases, the expansion rate at a fixed temperature decreases due to the dominant effect of the $`1/\alpha `$ power. Another useful way to view this is that at a fixed temperature, a power-law Universe with a larger $`\alpha `$ is older. As a consequence of the decreasing expansion rate, the reactions remain in equilibrium longer. In particular, as pointed out by Sethi et al. for the linear expansion model ($`\alpha =1`$), the weak interactions remain in equilibrium to much lower temperatures than in the SBBN scenario, allowing neutrons and protons to maintain equilibrium at temperatures below 100 keV, as can be seen in Fig. 1. As is evident from Fig. 1, the $`{}_{}{}^{4}\mathrm{He}`$ production rate below about $`0.4\mathrm{MeV}`$ is too slow to maintain nuclear statistical equilibrium. However, the presence of neutrons in equilibrium and the enormous amount of time available for nucleosynthesis during neutron-proton equilibrium (compared to SBBN) make it possible to build up a significant abundance of $`{}_{}{}^{4}\mathrm{He}`$.
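To make the 'older Universe at fixed temperature' point concrete, the following Python sketch evaluates the age implied by eq. 2, $`t(T)=(\alpha /\mathrm{H}_0)(\beta T_0/T)^{1/\alpha }`$, near the end of nucleosynthesis. The inputs $`\mathrm{H}_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`T_0=2.73`$ K are assumptions made here for illustration, and $`\beta =1`$ since annihilation is over by 10 keV.

```python
# Age of a power-law universe at temperature T: H = alpha/t implies
# t(T) = (alpha/H0) * (beta*T0/T)**(1/alpha).  Assumed: H0 = 70 km/s/Mpc,
# T0 = 2.73 K; beta = 1 after electron-positron annihilation.
H0 = 70 * 1e3 / 3.086e22           # Hubble parameter, s^-1
kB = 8.617e-11                     # Boltzmann constant, MeV/K
T0 = kB * 2.73                     # present CBR temperature, MeV

def age(T_MeV, alpha, beta=1.0):
    return (alpha / H0) * (beta * T0 / T_MeV) ** (1.0 / alpha)

for alpha in (0.55, 1.0):
    t = age(0.01, alpha)           # T = 10 keV, near the end of BBN
    print(f"alpha = {alpha:4.2f}: t(10 keV) = {t:.2e} s = {t / 3.156e7:.2e} yr")
```

For $`\alpha =0.55`$ this gives an age of order an hour, while for $`\alpha =1`$ it gives roughly 300 yr, the figure quoted for the linear model below.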
The above discussion is not restricted to $`\alpha =1`$, but applies for all values of $`\alpha `$ which are sufficiently large (so that the expansion rate is sufficiently small) to allow neutrons to stay in equilibrium long enough to enable synthesis of $`{}_{}{}^{4}\mathrm{He}`$ in sufficient amounts, as we show in Fig. 2. Although we explore a larger range in $`\alpha `$ in this paper, we present detailed results for $`0.75\leq \alpha \leq 1.25`$, a range consistent with the age and expansion rate of the Universe, and we check these results for consistency with independent (i.e., non-BBN) constraints on the baryon density. The iso-abundance contours in Fig. 2 show clearly that as $`\alpha `$ decreases towards 0.75, a larger baryon density is required to produce the same abundance of $`{}_{}{}^{4}\mathrm{He}`$. For example, although $`\mathrm{Y}_\mathrm{P}=0.24`$ can be synthesized in the $`\alpha =0.75`$ model, the density of baryons required is very large: $`\mathrm{\Omega }_\mathrm{B}h^2\sim 20`$. These 'large $`\mathrm{\Omega }_\mathrm{B}`$' models are constrained by dynamical estimates of the mass density, an issue we discuss later.
Helium-4 abundance. In an earlier study, we showed that there is a very small region, centered on $`\alpha =0.55`$, for which the light elements can be produced in abundances similar to those predicted by SBBN. But this small window is closed by the SNIa magnitude-redshift data. Here we are concerned with larger values of $`\alpha `$ and, correspondingly, larger baryon-to-photon ratios ($`\eta `$). First we consider the nucleosynthesis of $`{}_{}{}^{4}\mathrm{He}`$ in these models. Figure 2 shows the connection between the baryon density ($`\mathrm{\Omega }_Bh^2=\eta _{10}/273`$, where $`\eta _{10}\equiv 10^{10}n_N/n_\gamma `$) and $`\alpha `$ set by the requirement that the primordial helium mass fraction lie in the generous range $`0.22\leq \mathrm{Y}_\mathrm{P}\leq 0.26`$. We have included in Fig. 2 the region investigated earlier, $`\alpha <0.6`$, as well. To understand the features in Fig. 2, we need to isolate the important factors controlling the synthesis of helium. In SBBN, the $`{}_{}{}^{4}\mathrm{He}`$ abundance is essentially controlled by the number density ratio of neutrons to protons ($`n/p`$) at the start of nucleosynthesis ($`T=T_{BBN}\approx 80`$ keV). This ratio in turn is determined by (1) the $`n/p`$ ratio at 'freeze-out' ($`T=T_f`$) of the neutron-proton interconversion rates, which may be approximated by $`(n/p)_f=\mathrm{exp}(-Q/T_f)`$, where $`Q=1.293\mathrm{MeV}`$ is the neutron-proton mass difference and, (2) the time available for neutrons to decay after freeze-out, $`\mathrm{\Delta }t_d=t(T_{BBN})-t(T_f)`$. In contrast, for power-law cosmologies another factor comes into play: the time available for nucleosynthesis, $`\mathrm{\Delta }t_{BBN}`$, before the nuclear reactions freeze out. For larger $`\alpha `$, the expansion rate of the Universe (at fixed temperature) is smaller and the Universe is older. Hence, for larger $`\alpha `$ neutrons remain in equilibrium longer and the freeze-out temperature ($`T_f`$) is smaller, so that $`(n/p)_f`$ is smaller. However, the effect of the increase in $`\mathrm{\Delta }t_{BBN}`$ as $`\alpha `$ increases dominates that due to the change in $`T_f`$. For $`\alpha =0.50`$ the freeze-out temperature is around 4 MeV, whereas for $`\alpha =0.55`$, $`T_f\approx 1`$ MeV, which implies a decrease in $`(n/p)_f`$ by a factor of about 2.5. On the other hand, the age of the Universe at $`T=10`$ keV (about the temperature when SBBN ends) is a factor of 25 larger for $`\alpha =0.55`$ relative to that for $`\alpha =0.50`$. Thus, for the same $`\eta `$, increasing $`\alpha `$ from 0.50 to 0.55 has the effect of increasing the <sup>4</sup>He abundance because more time is available for nucleosynthesis. But, since decreasing the baryon density decreases the nuclear reaction rates leading to a decrease in <sup>4</sup>He, we may understand the trend of the smaller baryon density requirement as $`\alpha `$ increases from 0.50 to about 0.55, even though the decrease in $`T_f`$ opposes this effect. The time-delay between 'freeze-out' and BBN, $`\mathrm{\Delta }t_d`$, which has, until now, been much smaller than $`\tau _n`$, becomes comparable to it at $`\alpha \approx 0.55`$. Thus for $`\alpha \gtrsim 0.55`$, $`\mathrm{Y}_\mathrm{P}`$ is increasingly suppressed (exponentially) as $`\alpha `$ is increased.
The only way to compensate for this is by increasing $`T_{BBN}`$ (since $`\mathrm{\Delta }t_d\propto T_{BBN}^{-1/\alpha }`$), which may be achieved by increasing the baryon density. But since $`T_{BBN}`$ depends only logarithmically on the baryon density, this accounts for the exponential rise in the required value of $`\mathrm{\Omega }_Bh^2`$ as $`\alpha `$ increases. This trend cannot continue indefinitely; the curve must turn over for reasons we describe below.
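The exponential sensitivity of $`\mathrm{Y}_\mathrm{P}`$ to the decay delay can be illustrated with the textbook estimate $`(n/p)=\mathrm{exp}(-Q/T_f)\mathrm{exp}(-\mathrm{\Delta }t_d/\tau _n)`$ and $`\mathrm{Y}_\mathrm{P}\approx 2(n/p)/(1+n/p)`$; the inputs in the sketch below are placeholders chosen only to show the trend, not outputs of our reaction network.

```python
import math

# Crude helium estimate from freeze-out plus neutron decay:
# (n/p) = exp(-Q/T_f) * exp(-dt_d/tau_n),  Y_P ~ 2(n/p)/(1 + n/p).
Q = 1.293       # neutron-proton mass difference, MeV
tau_n = 886.7   # neutron lifetime, s

def Y_P(T_f, dt_d):
    n_over_p = math.exp(-Q / T_f) * math.exp(-dt_d / tau_n)
    return 2 * n_over_p / (1 + n_over_p)

# Illustrative placeholder inputs, not network results:
print(f"SBBN-like (T_f = 0.8 MeV, dt_d = 180 s): Y_P ~ {Y_P(0.8, 180):.3f}")
print(f"slower expansion (0.5 MeV, 2000 s):      Y_P ~ {Y_P(0.5, 2000):.3f}")
```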
From Fig. 2, it is apparent that in the 'large $`\alpha `$' range, the required value of $`\mathrm{\Omega }_\mathrm{B}h^2`$ decreases with increasing $`\alpha `$. In our previous analysis of $`{}_{}{}^{4}\mathrm{He}`$ nucleosynthesis, which concentrated on $`\alpha `$ in the vicinity of 0.55, we implicitly assumed that the age of the Universe at $`T=T_f`$ was not large enough for appreciable amounts of $`{}_{}{}^{4}\mathrm{He}`$ to have been built up. This assumption breaks down for large values of $`\alpha `$ and $`\eta `$. Since D, $`{}_{}{}^{3}\mathrm{He}`$ and $`{}_{}{}^{3}\mathrm{H}`$ are not present in appreciable quantities, a large value of $`\eta `$ is needed to boost the $`{}_{}{}^{4}\mathrm{He}`$ production rate. Now, the larger the value of $`\alpha `$, the longer neutrons remain in equilibrium, thus allowing more $`{}_{}{}^{4}\mathrm{He}`$ to be slowly built up, with the neutrons incorporated in $`{}_{}{}^{4}\mathrm{He}`$ being replaced via $`p\rightarrow n`$ reactions. Roughly speaking, the required value of $`\eta `$ for a given $`\alpha `$ is set by the condition:
$$\left[\frac{\mathrm{dY}_\mathrm{P}}{\mathrm{d}t}\right]_{T=T_f}\sim 0.24/t(T_f).$$
(4)
The effects of $`\alpha `$ on $`t(T_f)`$ and $`\eta `$ on $`\mathrm{dY}_\mathrm{P}/\mathrm{d}t`$ complement each other, giving rise to the trend shown by the $`{}_{}{}^{4}\mathrm{He}`$ iso-abundance curves in Fig. 2 for $`\alpha \gtrsim 0.75`$.
Light element abundances in the linear expansion model. We now turn to the production of deuterium and $`{}_{}{}^{3}\mathrm{He}`$. For large $`\alpha `$ (e.g., $`\alpha =1`$), we expect the deuterium abundance to be insignificant since D can be efficiently burned to $`{}_{}{}^{3}\mathrm{He}`$ during the long time available for nucleosynthesis. The mean lifetime of deuterium against destructive collisions with protons at a low temperature of 10 keV is around 3 days; at this temperature the $`\alpha =1`$ Universe is already 300 years old! The fact that the timescales are so different allows us to derive analytical expressions for the deuterium, helium-3 and lithium-7 (beryllium-7) mass fractions (to be denoted by $`X_D`$, $`X_3`$ and $`X_7`$ respectively). The generic equation for the rate of evolution of the mass fraction of nuclide '$`a`$' can be parameterized as
$$\frac{\mathrm{d}X_a}{\mathrm{d}t}=R_{\mathrm{prod}}(a)-R_{\mathrm{dest}}(a)X_a,$$
(5)
where 'prod' and 'dest' refer to the production and destruction rates of nuclide '$`a`$'. Given that the universe remains at the same temperature for a very long time (compared to the reaction time scales), it is not surprising that $`X_a`$ achieves its steady-state value at each temperature (as discussed in detail in the context of SBBN),
$$X_a\simeq \frac{R_{\mathrm{prod}}(a)}{R_{\mathrm{dest}}(a)}.$$
(6)
We can write this explicitly for the simplest case, deuterium:
$$X_D=2\frac{\left(\mathrm{\Gamma }_{np}+\mathrm{\Gamma }_{pp}/2\right)X_p}{\mathrm{\Gamma }_{pD}+\mathrm{\Gamma }_{\gamma D}}$$
(7)
where the various $`\mathrm{\Gamma }`$s represent the relevant deuterium creation ($`n+p\rightarrow D+\gamma `$ and $`p+p\rightarrow D+e^++\nu `$) rates per target proton, and destruction ($`D(p,\gamma )`$$`{}_{}{}^{3}\mathrm{He}`$ and $`D(\gamma ,p)n`$) rates per target deuterium. All of these rates can be obtained from the literature. Once the reaction rates become smaller than the universal expansion rate (say at some temperature $`T_{*}`$), the abundances freeze out with values close to $`X_a`$ at the corresponding $`T_{*}`$. This is illustrated in Fig. 4, which clearly shows that the steady-state solution works very well. We note here that the steady state (dotted) curves in Fig. 4 are not independent analytic derivations, but use the abundances of the various nuclei as calculated by the numerical code. The figure is intended to emphasize that nucleosynthesis in this (linear expansion) model can be well represented by the steady-state solutions in eq. 6.
In the expression for $`X_D`$ (see eq. 7), the $`n+p`$ reaction term dominates until about 20 keV, after which the $`p+p`$ reaction makes the dominant contribution. The final deuterium abundance is thus determined by the weak pp reaction ($`p+p\rightarrow D+e^++\nu `$), the effect of which can be seen in Fig. 3 as the very slow rise in $`X_D`$ between temperatures of 10 keV and 1 keV (at which point the D abundance freezes out). Since both <sup>3</sup>He and <sup>7</sup>Li freeze out much earlier, they do not get any significant boost from the weak pp reaction.
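As a minimal illustration of how eq. 7 is used, the sketch below evaluates the steady-state deuterium mass fraction for a single set of rates; the $`\mathrm{\Gamma }`$ values are hypothetical placeholders (the physical rates are strongly temperature-dependent and come from the standard compilations), so only the structure of the calculation should be taken from it.

```python
# Steady-state deuterium mass fraction, eq. (7).  The Gamma values passed
# in below are hypothetical placeholders, not physical reaction rates.
def X_D(Gamma_np, Gamma_pp, Gamma_pD, Gamma_gD, X_p=0.75):
    return 2 * (Gamma_np + Gamma_pp / 2) * X_p / (Gamma_pD + Gamma_gD)

# Slow production and fast destruction by D(p,gamma)3He drive X_D many
# orders of magnitude below X_p:
print(f"X_D ~ {X_D(1e-18, 1e-20, 3e-6, 0.0):.1e}")
```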
From eq. 7, $`X_D`$, and thus $`X_3`$ (<sup>3</sup>He is formed from $`D`$), are proportional to $`X_n`$, the neutron abundance. One striking feature in Fig. 3 is the boost to the neutron abundance (and hence the abundances of D and <sup>3</sup>He) at temperatures around 40 keV. The effect is subtle and may be missed in BBN codes with a limited nuclear reaction network. The slow rate of expansion of the universe during nucleosynthesis facilitates the production of a relatively large 'metal' ($`A\geq 8`$) abundance ($`X_{\mathrm{metals}}\sim 3\times 10^{-7}`$). In particular, <sup>13</sup>C is produced in these models through the chain <sup>12</sup>C+p $`\rightarrow `$ <sup>13</sup>N+$`\gamma `$ and the subsequent beta-decay of <sup>13</sup>N. In this environment <sup>13</sup>C+<sup>4</sup>He $`\rightarrow `$ <sup>16</sup>O+$`n`$ leads to the production of free neutrons.
The mass-7 abundance is entirely due to the production of <sup>7</sup>Be through the reaction <sup>3</sup>He+<sup>4</sup>He $`\rightarrow `$ <sup>7</sup>Be+$`\gamma `$. <sup>7</sup>Be decays to <sup>7</sup>Li by electron capture once the universe has cooled sufficiently to permit the formation of atoms. Once formed, it is difficult to destroy <sup>7</sup>Be at temperatures $`\lesssim 100`$ keV. In contrast, <sup>7</sup>Li is very easily destroyed, specifically through its interaction with protons. Since the <sup>7</sup>Be production (and thus that of $`{}_{}{}^{7}\mathrm{Li}`$) follows the evolution of the <sup>3</sup>He abundance, and there is very little destruction of <sup>7</sup>Be, <sup>7</sup>Li also benefits from the boost to the neutron abundance described in the last paragraph. This has the effect of boosting the $`{}_{}{}^{7}\mathrm{Li}`$ abundance from $`10^{-11}`$ (if this source of neutrons were not included) to $`10^{-9}`$. This is significant in that, at the level of a few parts in 10<sup>10</sup>, the primordial lithium abundance lies between these two estimates.
Light element abundances vs. $`\alpha `$. Having explored BBN in the linear model ($`\alpha =1`$), it is now important to ask how these results depend on $`\alpha `$. It is clear from Fig. 5 that nothing dramatically different happens as $`\alpha `$ changes; this is simply because the key physics remains the same. In preparing Fig. 5 we adjust the value of $`\eta `$ (baryon density) for each choice of $`\alpha `$ so that the primordial $`{}_{}{}^{4}\mathrm{He}`$ mass fraction lies between 22% and 26%. As $`\alpha `$ increases, the nuclei freeze out at lower temperatures, since the expansion rate at the same temperature is lower for a larger $`\alpha `$. The effect of this can be gauged by the behavior of $`X_D`$, $`X_3`$ and $`X_7`$ with respect to temperature, as given by eq. 6. For deuterium this implies a small increase with $`\alpha `$ due to the pp (weak) reaction, which is also reflected in the behavior of <sup>3</sup>He for $`\alpha \gtrsim 1`$. The fall of <sup>3</sup>He with increasing $`\alpha `$ for $`\alpha \lesssim 1`$ is due to greater destruction of <sup>3</sup>He because of the increase in the time available for the nuclear reactions. As already mentioned, the abundance of <sup>7</sup>Be depends critically on the evolution of the <sup>3</sup>He abundance; so while the mass-7 (<sup>7</sup>Be) abundance increases appreciably with the increase in baryon density, it is relatively unaffected by a change in $`\alpha `$.
Note that in those power-law models which can simultaneously reproduce an acceptable $`{}_{}{}^{4}\mathrm{He}`$ abundance along with a consistent age and expansion rate, the corresponding baryon density must be very large, $`0.04\lesssim \mathrm{\Omega }_\mathrm{B}h^2\lesssim 6.4`$ ($`11\lesssim \eta _{10}\lesssim 1750`$; see Fig. 2). Most, if not all, of this range is far too large for consistency with independent (non-BBN) estimates of the universal density of baryons ($`\eta \lesssim 7.4`$) or, for that matter, the total matter density. Conservatively, clusters limit the total (gravitating) matter density to $`\mathrm{\Omega }_M\lesssim 0.4`$, so that if there were no non-baryonic dark matter, $`\mathrm{\Omega }_\mathrm{B}h^2\lesssim 0.2`$ ($`\eta \lesssim 54`$) for $`h\lesssim 0.7`$. However, if the X-ray emission from clusters is used to estimate the cluster baryon fraction, the universal baryon density should be smaller than this very conservative estimate by a factor of 7-8 (consistent with the upper bound from the baryon inventory of Fukugita, Hogan and Peebles). Thus power-law cosmologies constrained to reproduce $`{}_{}{}^{4}\mathrm{He}`$ (only), an acceptable age and magnitude-redshift relation, and an acceptable baryon density, must have $`\alpha `$ restricted to a very narrow range: $`1\lesssim \alpha \lesssim 1.2`$. Furthermore, the baryon density in even this restricted range is large when compared with estimates of the baryon density from cluster X-rays. Finally, for $`\alpha `$ in the narrow range $`1\lesssim \alpha \lesssim 1.2`$ and $`0.22\leq \mathrm{Y}_\mathrm{P}\leq 0.26`$, the other light element abundances are restricted to $`{}_{}{}^{7}\mathrm{Li}`$/H $`>10^{-9}`$, $`{}_{}{}^{3}\mathrm{He}`$/H $`<3\times 10^{-13}`$, and D/H $`<3\times 10^{-18}`$. For deuterium and helium-3 this is in very strong disagreement (by 8 to 13 orders of magnitude!) with observational data. Although the predicted <sup>7</sup>Li abundance is comparable to that observed in the solar system, the local ISM, and in Pop $`\mathrm{I}`$ stars, it is larger than the primordial abundance inferred from the Pop $`\mathrm{II}`$ halo stars, and marginally inconsistent with the observations of lithium in the ISM of the LMC.
## III Conclusions
In response to the claim that a power-law Universe expanding linearly with time could be consistent with the constraints on BBN, we have reexamined these models. Although it is true that observationally consistent amounts of $`{}_{}{}^{4}\mathrm{He}`$ can be produced in these models, this is not the case for the other light elements D, $`{}_{}{}^{3}\mathrm{He}`$ and $`{}_{}{}^{7}\mathrm{Li}`$. Furthermore, consistency with $`{}_{}{}^{4}\mathrm{He}`$ at $`\alpha =1`$ requires a very high baryon density ($`75\lesssim \eta _{10}\lesssim 86`$ or $`0.27\lesssim \mathrm{\Omega }_\mathrm{B}h^2\lesssim 0.32`$), inconsistent with non-BBN estimates of the universal baryon density and even with the total mass density. We have also investigated BBN in power-law cosmologies with $`\alpha >1`$ and have confirmed that although the correct $`{}_{}{}^{4}\mathrm{He}`$ abundance can be produced, the yields of the other light elements D, $`{}_{}{}^{3}\mathrm{He}`$ and $`{}_{}{}^{7}\mathrm{Li}`$ are inconsistent with their inferred primordial abundances. In general, power-law cosmologies are unable to account simultaneously for the early evolution of the Universe (BBN), which requires $`\alpha \approx 0.55`$, and for its presently observed expansion, which requires $`\alpha =1\pm 0.2`$.
###### Acknowledgements.
This work was supported at Ohio State by DOE grant DE-AC02-76ER01545.
# Millimetre/submillimetre-wave emission line searches for high-redshift galaxies
## 1 Introduction
The redshifted far-infrared/submillimetre-wave line emission from the interstellar medium (ISM) in galaxies could be exploited to detect new samples of distant gas-rich galaxies and active galactic nuclei (AGN) (Loeb 1993; Blain 1996; van der Werf & Israel 1996; Silk & Spaans 1997; Stark 1997; Combes, Maoli & Omont 1999; van der Werf 1999). This emission is attributable both to molecular rotational transitions, in particular from carbon monoxide (CO), and to atomic fine-structure transitions, in particular from the singly ionized 158-$`\mu `$m carbon \[Cii\] line.
Redshifted CO emission has been detected from a range of known high-redshift galaxies and quasars in the millimetre/submillimetre waveband, as summarized by Frayer et al. (1998) and Combes et al. (1999). Many of these galaxies are known to be gravitationally lensed by a foreground galaxy, an effect which potentially complicates the interpretation of the results by altering the ratios of the inferred luminosities in the continuum and the detected lines (Eisenhardt et al. 1996). In only a small subsample of the detected galaxies (Solomon, Downes & Radford 1992; Barvainis et al. 1997; Downes et al. 1999) have multiple lines been detected, providing an opportunity to investigate the astrophysics of the ISM.
So far there have been very few detections of redshifted fine-structure lines, despite careful searches, for example, for both \[Cii\] (Isaak et al. 1994; Ivison, Harrison & Coulson 1998a; van der Werf 1999) and singly ionized 205-$`\mu `$m \[Nii\] emission (Ivison & Harrison 1996). Neutral carbon \[Ci\] emission, which is considerably less intense than \[Cii\] and \[Nii\] emission in the Milky Way (Wright et al. 1991) and nearby galaxies (Stacey et al. 1991), has been detected from the gravitationally lensed Cloverleaf quasar (Barvainis et al. 1994). The most luminous high-redshift galaxies and quasars have necessarily been targetted in these searches.
\[Cii\] fine-structure emission is powerful in both the Milky Way and in sub-$`L^{*}`$ galaxies, in which it accounts for about 0.5 per cent of the bolometric far-infrared luminosity (Nikola et al. 1998). However, based on observations of a limited number of low-redshift galaxies using the Infrared Space Observatory (ISO) (Malhotra et al. 1997; Luhman et al. 1998; Pierini et al. 1999), it appears that a systematically lesser fraction of the bolometric luminosity of more luminous galaxies appears as \[Cii\] emission, about 0.1 per cent (Luhman et al. 1998). As noted by Luhman et al. (1998) and van der Werf (1999), the results of these ISO observations are fully consistent with the non-detection of redshifted fine-structure emission from high-redshift galaxies using ground-based submillimetre-wave telescopes. The results of Kuiper Airborne Observatory (KAO) observations of the Galactic centre (Erickson et al. 1991) indicate that 63- and 146-$`\mu `$m neutral oxygen \[Oi\] fine-structure emission becomes steadily more luminous as compared with that from \[Cii\] as the far-infrared luminosity of gas clouds increases. However, currently there is insufficient published data available to address this issue in external galaxies.
We attempt here to predict the counts of distant gas-rich line-emitting galaxies that could be detected in the millimetre/submillimetre waveband. There are two challenges to making reliable predictions. First, there are limited data available from which to construct a clear understanding of the astrophysics of the ISM in high-redshift galaxies. There are only a few tens of detections of line emission from these objects, the majority of which have been made in galaxies that are gravitationally lensed by foreground galaxies. Because of the potential for differential magnification across and within the lensed galaxy, neither the ratios of the line and continuum luminosities nor the excitation conditions in the ISM are known accurately in these cases. As shown by ISO \[Cii\] observations, extrapolation of the observed properties of low-redshift galaxies with relatively low luminosities to greater luminosities in high-redshift galaxies is not necessarily reliable. Secondly, the space density and form of evolution of gas-rich galaxies at high redshifts has not been well determined. Thus the existing predictions of the observability of high-redshift submillimetre-wave line emission have concentrated on either discussing the potential observability of individual high-redshift galaxies (van der Werf & Israel 1996; Silk & Spaans 1997; Combes et al. 1999; van der Werf 1999), or have relied on extensive extrapolations, from the populations of low-redshift ultraluminous infrared galaxies (ULIRGs) (Blain 1996) and from low-redshift $`L^{*}`$ bulges to the properties of proto-quasars at $`z\sim 10`$ (Loeb 1993).
Both of these difficulties can be addressed by exploiting the results of deep 450- and 850-$`\mu `$m dust continuum radiation surveys made using the Submillimetre Common-User Bolometer Array (SCUBA) camera (Holland et al. 1999) at the James Clerk Maxwell Telescope (JCMT; Smail, Ivison & Blain 1997; Barger et al. 1998; Hughes et al. 1998; Barger, Cowie & Sanders 1999; Blain et al. 1999b, 2000; Eales et al. 1999). These surveys are sensitive to galaxies at very high redshifts (Blain & Longair 1993), and have detected a considerable population of very luminous dust-enshrouded galaxies. The 15-arcsec angular resolution of the JCMT is rather coarse, but reliable identifications can be made by combining the SCUBA images with multi-waveband follow-up images and spectra (Ivison et al. 1998b, 2000; Smail et al. 1998, 2000; Barger et al. 1999b; Lilly et al. 1999), and crucially with observations of redshifted CO emission, which are currently available for two submillimetre-selected galaxies: SMM J02399-0136 at $`z=2.81`$ and SMM J14011+0252 at $`z=2.56`$ (Frayer et al. 1998, 1999; Ivison et al. 1998b, 2000). The bolometric and CO-line luminosities of these galaxies are reasonably well known, and because they are lensed by clusters rather than individual foreground galaxies, their inferred line ratios are not subject to modification by lensing. These observations thus provide a useful template with which to describe the properties of the ISM and line emission in high-redshift, dust-enshrouded, gas-rich galaxies.
In Section 2 we discuss the existing line observations, and summarize our current state of knowledge about the evolution and redshift distribution of galaxies that have been discovered in submillimetre-wave dust continuum surveys. In Section 3 we describe our model of line emission from these galaxies, and present the results, as based on our understanding of high-redshift continuum sources. In Section 4 we discuss the observability of this hypothetical population using existing and future millimetre/submillimetre-wave spectrographs. Unless otherwise stated, we assume that $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_0=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$.
## 2 Background information
### 2.1 Line observations
Ground-based telescopes have detected molecular rotation lines and atomic fine-structure lines from low-redshift galaxies (e.g., Sanders et al. 1986; Wild et al. 1992; Devereux et al. 1994; Gerin & Phillips 1999; Mauersberger et al. 1999). Atomic fine-structure lines have also been observed from bright galactic star-forming regions and nearby galaxies using the KAO (Stacey et al. 1991; Nikola et al. 1998), COBE (Wright et al. 1991) and ISO (Malhotra et al. 1997; Luhman et al. 1998; Pierini et al. 1999).
CO rotational line emission has been detected successfully from various high-redshift galaxies and quasars, including the first identified high-redshift ULIRG IRAS F10214+4724 (Solomon et al. 1992), the gravitationally lensed Cloverleaf quasar H 1413+117 (Barvainis et al. 1994; Kneib et al. 1998), various quasars at $`z\sim 4`$, including BR 1202-0725 (Ohta et al. 1996, 1998; Omont et al. 1996) and the extremely luminous APM 08279+5255 (Lewis et al. 1998; Downes et al. 1999), and the submillimetre-selected galaxies SMM J02399-0136 and SMM J14011+0252 (Frayer et al. 1998, 1999). A significant fraction of the dynamical mass in many of these systems is inferred to be in the form of molecular gas, and it is plausible that they are observed in the process of forming the bulk of their stellar populations.
### 2.2 Atomic fine-structure lines
Atomic fine-structure lines emitted at wavelengths longer than about 100 $`\mu `$m (\[Cii\] at 1900 GHz, \[Nii\] at 1460 and 2460 GHz, \[Oi\] at 2060 GHz, and \[Ci\] at 492 and 809 GHz) are redshifted into atmospheric windows for galaxies at redshifts $`z<5`$, the redshift range within which at least 80 per cent of dust-enshrouded galaxies detected by SCUBA appear to lie (Smail et al. 1998; Barger et al. 1999b; Lilly et al. 1999). There are many other mid-infrared lines with shorter restframe emission wavelengths (see, e.g., Lutz et al. 1998); however, unless massive galaxies exist at $`z\gtrsim 10`$, these lines will not be redshifted into atmospheric windows accessible to ground-based telescopes.
Here we assume that the line-to-bolometric luminosity ratio $`f_{\mathrm{line}}=10^{-4}`$ for the \[Cii\] line in all high-redshift gas-rich galaxies, corresponding to the value observed in low-redshift ULIRGs. The equivalent value for \[Ci\]$`_{492\mathrm{GHz}}`$, $`f_{\mathrm{line}}=2.9\times 10^{-6}`$, is chosen to match the value observed by Gerin & Phillips (1999) in Arp 220. The values of $`f_{\mathrm{line}}`$ for other fine-structure lines listed in Table 1 are chosen by scaling the results of observations of the Milky Way and low-redshift galaxies (Genzel et al. 1990; Erickson et al. 1991; Stacey et al. 1991; Wright et al. 1991). This approach should lead to a reliable estimate of the counts of \[Ci\] and \[Cii\] lines, but greater uncertainty in the N and O line predictions. In the absence of more observational data, which would ideally allow luminosity functions to be derived for each line, we stress that the predictions of the observability of redshifted fine-structure lines made in Section 3 must be regarded as tentative and preliminary.
### 2.3 CO rotational transitions
#### 2.3.1 CO line excitation
Much more observational data is available about the properties of the ladder of CO rotational transitions in the ISM. The energy of the $`J`$th level in the CO molecular rotation ladder is $`E_J=k_\mathrm{B}T`$, where $`T=J(J+1)`$\[2.77 K\], and so the energy of a photon produced in the $`J+1\rightarrow J`$ transition is $`h\nu =k_\mathrm{B}(J+1)`$\[5.54 K\]. The population of the $`J`$ states can be calculated by assuming a temperature and density for the emitting gas. The primary source of excitation is expected to be collisions with molecular hydrogen (H<sub>2</sub>), which dominates the mass of the ISM, with a role for radiative excitation, including that attributable to the cosmic microwave background radiation (CMBR). By taking into account the spontaneous emission rate, $`A_{J+1,J}\propto \nu ^3(J+1)/(2J+3)`$ with $`A_{1,0}=6\times 10^{-8}`$ s<sup>-1</sup>, and details of the optical depth and geometry of gas and dust in the emitting region, the luminosities of the various $`J+1\rightarrow J`$ transitions can be calculated. If the $`J`$ state is to be thermally populated, then CO-H<sub>2</sub> collisions in the ISM gas must occur on a timescale shorter than about $`A_{J+1,J}^{-1}`$. This condition will not generally be met for a temperature of 50 K in the CO(5$`\rightarrow `$4) transition unless the density of H<sub>2</sub> molecules exceeds about $`2\times 10^5`$ cm<sup>-3</sup>, which is many times denser than the $`10^4`$ cm<sup>-3</sup> that appears to be typical of low-redshift ULIRGs (Downes & Solomon 1998). Radiative excitation and optical depth effects, perhaps in very non-isotropic geometries, with very pronounced substructure, will complicate the situation greatly in real galaxies. In general, calculations of level populations are very complex, and at high redshifts there are very few data with which to constrain models.
Probably the best way to investigate the conditions in very luminous distant galaxies is to study their low-redshift ULIRG counterparts, and the rare examples of high-redshift galaxies and quasars for which more than one CO transition has been detected (e.g. Downes et al. 1999), bearing in mind the potential effects of gravitational lensing.
In order to try and make reasonable predictions for the line ratios in high-redshift galaxies, we employed a standard large velocity gradient (LVG) analysis (e.g. de Jong, Dalgarno & Chu 1975) to estimate how the CO line ratios are affected by the temperature, density and finite spatial extent of the ISM, and by the radiative excitation caused by the CMBR. In the third column of Table 1 we show the results from this model, assuming a density of 10<sup>4</sup> cm<sup>-3</sup>, which is typical of the central regions of ULIRGs (Downes & Solomon 1998), and a standard value of $`X(\mathrm{CO})/(\mathrm{d}v/\mathrm{d}r)=3\times 10^{-5}(\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^{-1})^{-1}`$. We assume a kinetic temperature of 53 K, which is the temperature of the dominant cool dust component in the SCUBA galaxy SMM J02399-0136 (Ivison et al. 1998b). Higher dust temperatures of about 80 and 110 K are inferred for other well-studied high-redshift galaxies IRAS F10214+4724 and APM 08279+5255 respectively, but these are very exotic galaxies, and the results are potentially modified by the effects of differential gravitational lensing. We assume a background temperature of 10 K, the temperature of the CMBR at $`z=2.7`$, the mean redshift of the two SCUBA galaxies with CO detections. The line-to-continuum bolometric luminosity ratio $`f_{\mathrm{line}}=L_{J+1\rightarrow J}/L_{\mathrm{FIR}}`$ can be calculated if the bolometric continuum luminosity $`L_{\mathrm{FIR}}`$ is known. There is a clear trend of a reduction in the CO line-to-bolometric luminosity ratio of luminous infrared galaxies as the bolometric luminosity increases, with a large scatter, which is consistent with $`f_{\mathrm{line}}\propto L_{\mathrm{FIR}}^{-0.5}`$ (Sanders et al. 1986). We normalize the results to the observed CO(3$`\rightarrow `$2) line luminosity in the $`L_{\mathrm{FIR}}\sim 10^{13}`$ L<sub>☉</sub> SCUBA galaxies SMM J02399-0136 and SMM J14011+0252, in which the ratios of the luminosity in the CO(3$`\rightarrow `$2) line to $`L_{\mathrm{FIR}}`$ are about $`2.1\times 10^{-6}`$ and $`5.3\times 10^{-6}`$ respectively (Frayer et al. 1998, 1999), with errors of order 50 per cent. The dominant source of error is the uncertainty in the bolometric luminosity.
We compare the results obtained in a simple equilibrium case, in which the density of CO molecules, the spontaneous emission rates and the transition energies in different $`J`$ states are multiplied to give the luminosity in each transition,
$$L_{J+1\rightarrow J}\propto \nu ^3(J+1)^2\mathrm{exp}\{-[2.77\mathrm{K}](J+1)(J+2)/T\}.$$
(1)
The results are shown in the fourth and fifth columns of Table 1 for $`J\leq 9`$, assuming kinetic temperatures of 38 and 53 K respectively, the temperature generated by simple fits to the observed counts of IRAS and ISO galaxies (Blain et al. 1999c), and the spectral energy distribution (SED) of SMM J02399-0136.
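A minimal numerical sketch of this thermal-equilibrium estimate, implementing equation (1) and normalizing to the CO(3$`\rightarrow `$2) line as in Table 1 (only the two temperatures quoted above are inputs; everything else follows from the equation):

```python
import numpy as np

# Relative CO line luminosities in thermal equilibrium, equation (1):
# L(J+1 -> J) ~ nu^3 (J+1)^2 exp(-2.77*(J+1)*(J+2)/T); with nu ~ (J+1)
# this is (J+1)^5 exp(...).  Normalized to the CO(3->2) transition.
def L_rel(T_K, J_max=9):
    J = np.arange(J_max + 1)          # lower level of each J+1 -> J line
    L = (J + 1.0) ** 5 * np.exp(-2.77 * (J + 1) * (J + 2) / T_K)
    return L / L[2]                   # L[2] is the 3 -> 2 line

for T in (38.0, 53.0):
    print(f"T = {T:.0f} K:", np.round(L_rel(T), 3))
```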
#### 2.3.2 The effects of different excitation conditions
The values of $`f_{\mathrm{line}}`$ listed in Table 1 for CO transitions in the LVG and the 38- and 53-K thermal equilibrium models differ. However, in lines with $`J<7`$ the differences are less than a factor of a few. Given the current level of uncertainty in the data that support these calculations, this is an acceptable level. The differences between the results are more marked at large values of $`J`$. The consequences of these differences for the key predictions of the source counts of line-emitting galaxies are discussed in Section 3.2.
Throughout the paper we use the LVG model to describe the CO line emission of dusty galaxies. Observations of the CO line ratios in low-redshift dusty galaxies indicate a wide range of excitation conditions, $`T_\mathrm{b}`$\[CO(3$`\rightarrow `$2)/CO(1$`\rightarrow `$0)\] $`\sim `$ 0.2–1 (Mauersberger et al. 1999). In the central regions of starburst nuclei this temperature ratio tends to be systematically higher, with $`T_\mathrm{b}`$\[CO(3$`\rightarrow `$2)/CO(1$`\rightarrow `$0)\] $`\sim `$ 0.5–1 (Devereux et al. 1994). In our chosen LVG model, listed in Table 1, this ratio is $`\sim `$0.9, which is consistent with the observations of the central regions of M82 (Wild et al. 1992) and Arp 220 (Mauersberger et al. 1999). The value of the $`X`$ parameter in the model has little effect on these ratios; however, reducing the density from 10<sup>4</sup> to 3300 and 1000 cm<sup>-3</sup> reduces the predicted ratio to 0.81 and 0.51 respectively. Our model looks reasonable in the light of these observations, as the high-redshift galaxies would typically be expected to be ULIRGs with high gas densities.
There have been two recent discussions of the observability of CO line emission from high-redshift galaxies. Silk & Spaans (1997) describe the effect of the increasing radiative excitation of high-$`J`$ lines at very high redshifts because of the increasing temperature of the CMBR. The median redshift of the SCUBA galaxies is likely to be about 2–3 (Barger et al. 1999b; Smail et al. 2000; Lilly et al. 1999), with perhaps 10–20 per cent at $`z>5`$, and because there is currently no strong evidence for the existence of a large population of metal-rich galaxies at $`z>10`$, this effect is unlikely to be very important. Combes et al. (1999) include a hot dense 90-K/10<sup>6</sup> cm<sup>-3</sup> component in the ISM of their model high-redshift galaxies, in addition to the cooler less dense component included in our models, and conduct LVG calculations to determine CO emission-line luminosities. Understandably, the luminosity of high-$`J`$ CO lines is predicted to be greater in their models as compared with the values listed in Table 1. The continuum SED of the best-studied SCUBA galaxy SMM J02399-0136 certainly includes a contribution from dust at temperatures greater than 53 K, but here we avoid including additional hot dense phases of the ISM in our models in order to avoid complicating the models and to try and make conservative predictions for the observability of high-$`J`$ CO lines at high redshift.
Only additional observations of CO in high-redshift galaxies will allow us to improve the accuracy of the conditions in the ISM that are assumed in these models. The detection of the relative intensities of the CO(9$`\rightarrow `$8) and CO(5$`\rightarrow `$4) emission from APM 08279+5255 (Downes et al. 1999), and the ratio of the intensities of the multiple CO lines detected in BR 1202-0725 at $`z=4.7`$ (Ohta et al. 1996, 1998; Omont et al. 1996) are broadly consistent with the values of $`f_{\mathrm{line}}`$ listed in the 53-K thermal equilibrium model: see Table 1.
The evolution of the abundance of CO and dust throughout an episode of star formation activity has been investigated by Frayer & Brown (1997), and the detailed appearance of the submillimetre-wave emission-line spectrum of galaxies requires a careful treatment of the radiative transfer between stars, AGN, gas and dust in an appropriate geometry. However, given that the amount of information on the spectra of high-redshift galaxies is currently not very great, it seems sensible to base estimates of the properties of line emission on the template of the submillimetre-selected galaxies studied by Frayer et al. (1998, 1999).
#### 2.3.3 Other molecular emission lines
There could also be a contribution from rotational lines emitted by other species, such as NH<sub>3</sub>, CS, HCN, HCO<sup>+</sup> and H<sub>2</sub>O; however, it seems unlikely that these emission lines would dominate the energy emitted in CO unless the densities and excitation temperatures are very high.
### 2.4 High-redshift dusty galaxies
The surface density of 850-$`\mu `$m SCUBA galaxies is now known reasonably accurately between flux densities of 1 and 10 mJy (Barger et al. 1999a; Blain et al. 1999b, 2000). By combining knowledge of the properties of the SCUBA galaxies, the low-redshift 60-$`\mu `$m IRAS galaxies (Saunders et al. 1990; Soifer & Neugebauer 1991), the 175-$`\mu `$m ISO galaxies (Kawara et al. 1998; Puget et al. 1999) and the intensity of far-infrared background radiation (Fixsen et al. 1998; Hauser et al. 1998; Schlegel, Finkbeiner & Davis 1998), it is possible to construct a series of self-consistent models that can account for all these data (Guiderdoni et al. 1998; Blain et al. 1999b,c), under various assumptions about the formation and evolution of galaxies. The 'Gaussian' model, which is described by Blain et al. (1999b) and based on pure luminosity evolution of the low-redshift 60-$`\mu `$m luminosity function (Saunders et al. 1990), was modified slightly to take account of the tentative redshift distribution derived from the SCUBA lens survey by Barger et al. (1999b). This 'modified Gaussian' model is used as a base for the predictions of line observability presented here. The evolution function increases as $`(1+z)^\gamma `$ with $`\gamma \approx 4`$ at $`z<1`$, has a 1.0-Gyr-long Gaussian burst of luminous activity centred at $`z=1.7`$, in which $`L^{*}`$ is 70 times greater than the value of $`L^{*}`$ at $`z=0`$ (Saunders et al. 1990), and then declines at $`z>3`$.
This model contains fewer parameters than there are constraining pieces of information, and so should provide a reasonable description of the properties of high-redshift dust-enshrouded galaxies, which are expected to be the most easily detectable sources of line emission. Future observations will inevitably provide more information and demand modifications to the model of galaxy evolution discussed above; however, it is likely to provide a sound basis for the predictions below.
## 3 Line predictions
In this paper we are concerned with the detectability of redshifted lines rather than with their resolution. As a result, we want to estimate the total luminosity of a line, and will not be concerned with details of its profile. The results are all presented as integrated flux densities, determined over the whole line profile. Where relevant, a line width of 300 km s<sup>-1</sup> is assumed.
The evolution of the 60-$`\mu `$m luminosity function of dusty galaxies (Saunders et al. 1990) is assumed to be defined by the modified Gaussian model. The bolometric far-infrared continuum luminosity function $`\mathrm{\Phi }_{\mathrm{bol}}(L_{\mathrm{FIR}})`$ can then be calculated by integrating over a template dusty galaxy SED (Blain et al. 1999b). $`\mathrm{\Phi }_{\mathrm{bol}}`$ can in turn be converted into a luminosity function for each line listed in Table 1, $`\mathrm{\Phi }_{\mathrm{line}}(L_{\mathrm{line}},z)`$, by evaluating $`\mathrm{\Phi }_{\mathrm{bol}}`$ at the bolometric luminosity $`L_{\mathrm{FIR}}`$ that corresponds to the line luminosity $`L_{\mathrm{line}}`$.
Based on observations by Sanders et al. (1986), the ratio of the luminosity in the CO(1→0) line, $`L_{\mathrm{CO}}`$, to $`L_{\mathrm{FIR}}`$ is a function of $`L_{\mathrm{FIR}}`$, with $`L_{\mathrm{CO}}\propto L_{\mathrm{FIR}}^{0.5}`$. Hence, when making the transformation from $`L_{\mathrm{line}}`$ to $`L_{\mathrm{FIR}}`$ for the CO lines listed in Table 1, we use the relationship $`L_{\mathrm{FIR}}=L_{\mathrm{line}}/f_{\mathrm{line}}`$, in which $`f_{\mathrm{line}}\propto L_{\mathrm{FIR}}^{-0.5}`$ and is normalized to the value listed in Table 1 at $`10^{13}`$ L<sub>⊙</sub>, the luminosity of the submillimetre-selected galaxies SMM J02399-0136 and SMM J14011+0252. There is no evidence from ISO observations of any systematic luminosity dependence in the value of $`f_{\mathrm{line}}`$ for \[Cii\] emission from ULIRGs, and so we assume that the values of $`f_{\mathrm{line}}`$ listed in Table 1 describe the line-to-bolometric luminosity relation in the fine-structure lines at all luminosities, that is $`L_{\mathrm{FIR}}=L_{\mathrm{line}}/f_{\mathrm{line}}`$, where $`f_{\mathrm{line}}`$ is constant.
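The line-to-bolometric conversion just described is compact enough to state explicitly. In the sketch below the normalization values are placeholders standing in for the Table 1 entries, which are not reproduced here; only the functional forms follow the text.

```python
F0 = {'CO(5-4)': 4.0e-6, '[CII]': 2.0e-3}   # assumed stand-ins for Table 1

def L_fir(L_line, line, is_co):
    """Invert L_line = f_line(L_FIR) * L_FIR (solar luminosities).
    CO: f_line = f0 * (L_FIR/1e13)**-0.5, normalized at the ~1e13 L_sun
    of SMM J02399-0136 and SMM J14011+0252, so L_FIR = (L_line/f0)**2/1e13.
    Fine-structure lines: f_line constant, so L_FIR = L_line/f0."""
    f0 = F0[line]
    return (L_line / f0) ** 2 / 1e13 if is_co else L_line / f0

# Sanity check: at the normalization point the relation must close.
print(L_fir(4.0e-6 * 1e13, 'CO(5-4)', True))   # -> 1e13
```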
The surface density of line-emitting galaxies $`N(>S)`$ that can be detected at integrated flux densities brighter than $`S`$, as measured in Jy km s<sup>-1</sup> or W m<sup>-2</sup>, in an observing band spanning the frequency range between $`\nu _{\mathrm{obs}}`$ and $`\nu _{\mathrm{obs}}+\mathrm{\Delta }\nu _{\mathrm{obs}}`$ can be calculated by integrating the luminosity function of a line $`\mathrm{\Phi }_{\mathrm{line}}`$ over the redshifts for which the line is in the observing band, and luminosities $`L_{\mathrm{line}}`$ greater than the detection limit $`L_{\mathrm{min}}(S,z)`$. The count of galaxies is thus
$$N(>S)=\int _{z_1}^{z_2}\int _{L_{\mathrm{min}}(S,z)}^{\infty }\mathrm{\Phi }_{\mathrm{line}}(L_{\mathrm{line}},z)\,\mathrm{d}L_{\mathrm{line}}\,D^2(z)\,\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z.$$
(2)
$`D`$ is the comoving distance parameter to redshift $`z`$, and $`\mathrm{d}r`$ is the comoving distance element. For a line with a restframe emission frequency $`\nu _{\mathrm{line}}`$, the limits in equation (2) are
$$z_1=\frac{\nu _{\mathrm{line}}}{\nu _{\mathrm{obs}}+\mathrm{\Delta }\nu _{\mathrm{obs}}}-1,$$
(3)
and
$$z_2=\frac{\nu _{\mathrm{line}}}{\nu _{\mathrm{obs}}}-1.$$
(4)
If $`z_1\ge z_0`$, where $`z_0`$ is the maximum redshift assumed for the galaxy population, or if $`z_2\le 0`$, then the count $`N=0`$. Here $`z_0=10`$ is assumed. The minimum detectable line luminosity,
$$L_{\mathrm{min}}(S,z)=4\pi SD^2(1+z)^2.$$
(5)
If the integrated flux density $`S`$ of a line observed at frequency $`\nu `$ is 1 W m<sup>-2</sup>, then the same quantity can be expressed as $`3(\nu /\mathrm{Hz})^{-1}\times 10^{31}`$ Jy km s<sup>-1</sup>.
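Equations (2)–(5) translate directly into a short numerical routine. The sketch below assumes an Einstein–de Sitter cosmology for $`D(z)`$ and $`\mathrm{d}r/\mathrm{d}z`$ and an arbitrary upper cutoff on the luminosity integral; the line luminosity function phi_line is the model-dependent ingredient the user must supply.

```python
import numpy as np

C, H0, MPC = 2.998e5, 50.0, 3.086e22        # km/s, km/s/Mpc (assumed), m/Mpc

def D_c(z):                                  # comoving distance, Mpc (EdS)
    return 2.0 * C / H0 * (1.0 - 1.0 / np.sqrt(1.0 + z))

def dr_dz(z):                                # distance element, Mpc (EdS)
    return C / H0 * (1.0 + z) ** -1.5

def counts(S, nu_obs, dnu, nu_line, phi_line, z0=10.0, nz=300, nl=200):
    """N(>S) per steradian from equations (2)-(5).  phi_line(L, z) must
    return the comoving density per unit line luminosity (Mpc^-3 W^-1);
    S is an integrated line flux in W m^-2."""
    z1 = max(nu_line / (nu_obs + dnu) - 1.0, 1e-6)       # equation (3)
    z2 = min(nu_line / nu_obs - 1.0, z0)                 # equation (4)
    if z2 <= z1:
        return 0.0                                       # line never in band
    zs = np.linspace(z1, z2, nz)
    integrand = np.empty(nz)
    for i, z in enumerate(zs):
        L_min = 4.0 * np.pi * S * (D_c(z) * MPC) ** 2 * (1 + z) ** 2  # eq. (5)
        L = np.geomspace(L_min, 1e4 * L_min, nl)         # assumed upper cutoff
        integrand[i] = np.trapz(phi_line(L, z), L) * D_c(z) ** 2 * dr_dz(z)
    return np.trapz(integrand, zs)

# Unit check: 1 Jy km/s at 230 GHz is (2.3e11 / 3e31) ~ 7.7e-21 W m^-2.
```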
The counts calculated for the transitions listed in Table 1 are shown in Figs 1–3. Predictions are made at centre frequencies (bandwidths) of 23 (8) – the radio K band, 50 (8) – the radio Q band, 90 (1), 230 (8), 345 (8), 650 (8), 200 (100) GHz, and 1 (1) THz, in Figs 2(a), 2(b), 1(a), 1(b), 1(c), 1(d), 3(a) and 3(b) respectively. The 8-GHz radio bandwidth is matched to the performance specified for the upgraded VLA. The 1-GHz bandwidth at 90 GHz is approximately matched to the current performance of millimetre-wave interferometer arrays. The 8-GHz bandwidth at 230, 345 and 650 GHz matches that for the SPIFI Fabry–Perot spectrograph (Stacey et al. 1996; Bradford et al. 1999, in preparation) and is a plausible value for the bandwidth of the future ground-based Atacama Large Millimetre Array (ALMA; Ishiguro et al. 1994; Brown et al. 1996; Downes et al. 1996), although the current goal is for a 16-GHz bandwidth (Wootten 2000). The count predictions in the very wide atmospheric window between 150 and 250 GHz are shown in Fig. 3(a). This band cannot be observed simultaneously using heterodyne instruments, but could plausibly be covered using an advanced grating or Fabry–Perot spectrograph feeding a sensitive bolometer detector array. In Fig. 3(b) the far-infrared/submillimetre band between 460 and 1500 GHz is shown. This band is matched to the specified spectral range of the SPIRE Fourier Transform Spectrograph destined for the FIRST satellite (Griffin 1997; Griffin et al. 1998).
### 3.1 The shape of the counts and optimal surveys
The integrated counts, $`N(>S)\propto S^{-\alpha }`$, presented in Figs 1 to 3 all have a characteristic form, with a relatively flat slope, $`\alpha \simeq 0.3`$, at faint flux densities, which rolls over to a steep decline, $`\alpha \simeq 3`$, at brighter flux densities. Note that the detection rate of galaxies in a survey is maximized at the depth at which $`\alpha =2`$. Because the counts of lines presented here have a pronounced knee at a certain flux density, at which the value of $`\alpha `$ crosses 2, this flux density is the optimal depth for a blank-field line survey, and surveys that are either shallower or deeper should be much less efficient.
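The $`\alpha =2`$ optimum follows from a simple trade-off: for background-limited observing, the integration time needed to reach depth $`S`$ scales as $`S^{-2}`$, so the number of detections per unit observing time varies as $`N(>S)S^2`$, which peaks where the logarithmic slope of the counts crosses 2. A minimal sketch with a toy broken power law:

```python
import numpy as np

def survey_rate(S, N):
    """Relative detection rate for a survey to depth S: the area covered
    per unit time scales as S**2 (background-limited), so
    rate ~ N(>S) * S**2; alpha = -dlogN/dlogS is the count slope."""
    alpha = -np.gradient(np.log(N), np.log(S))
    return N * S ** 2, alpha

S = np.geomspace(1e-2, 1e2, 400)             # arbitrary flux units
N = 1.0 / (S ** 0.3 + S ** 3.0)              # flat faint end, steep bright end
rate, alpha = survey_rate(S, N)
best = np.argmax(rate)
print(S[best].round(2), alpha[best].round(2))   # knee near S ~ 1, alpha ~ 2
```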
In Fig. 4 the predicted counts of line-emitting galaxies are shown as a function of observing frequency between 50 and 2000 GHz, at both faint and relatively bright values of the integrated line flux density $`S`$, $`10^{-22}`$ and $`10^{-20}`$ W m<sup>-2</sup>. The form of the curve for each emission line is determined by the form of evolution of far-infrared luminous galaxies specified in the modified Gaussian model. In any particular line, galaxies detected at higher frequencies are at lower redshifts. This effect accounts for the double-peaked distribution that is most apparent in the counts of bright galaxies shown in Fig. 4(b). The broad peak at lower frequencies corresponds to the very luminous Gaussian burst of activity at $`z\simeq 1.7`$, while less luminous low-redshift galaxies account for the second sharper peak at higher frequencies. The double peak is more noticeable for the brighter counts, to which the contribution of low-redshift galaxies is more significant.
Based on the predicted counts of CO lines, which are probably quite reliable, the most promising frequency range in which to aim for CO line detections is 200–300 GHz, regardless of whether faint or bright CO lines are being sought. At frequencies greater than about 500 GHz the counts are expected to be dominated by fine-structure lines, and although these counts are still very uncertain, the detectability of fine-structure line emitting galaxies is expected to be maximized at a frequency between about 400 and 800 GHz. Future blank-field searches for fine-structure emission from distant galaxies should probably be concentrated in this range.
### 3.2 The effects of CO excitation
The three different sets of values of $`f_{\mathrm{line}}`$ listed for CO transitions in Table 1 were derived assuming different excitation conditions in the ISM. As the detailed astrophysics of gas-rich high-redshift galaxies is very uncertain and only weakly constrained by observations, there is an inevitable uncertainty in the counts predicted in Figs 1–4.
The most significant differences between the values of $`f_{\mathrm{line}}`$ predicted in the LVG, 38-K and 53-K models occur in the higher $`J`$ lines, for which the effect of the higher excitation temperature and the subthermal excitation of lines in the LVG model is greater. The increase is most significant at bright flux densities. If the alternative values of $`f_{\mathrm{line}}`$ listed for the thermal equilibrium models are used to predict counts similar to those in Figs 1–4, then for $`J\ge 7`$, at interesting line flux densities of about $`10^{-22}`$ W m<sup>-2</sup>, the counts differ by less than a factor of about 3. This uncertainty is greater than that in the surface density of high-redshift submillimetre-selected galaxies that were used to normalize the model of galaxy evolution, and so the counts predicted in Figs 1–4 should be reliable to within a factor of about 3. Note that the counts in Fig. 4 indicate that the CO(5→4) line is likely to provide the most significant contribution to the counts, and so the degree of excitation of lines with $`J\ge 7`$ is not likely to affect the detection rate in future blank-field line surveys.
### 3.3 The effects of cosmology
In Fig. 5 the 230-GHz counts expected in three different world models are compared, based on the same underlying population of distant dusty galaxies. In world models with $`\mathrm{\Omega }_0=0.3`$ and either $`\mathrm{\Omega }_\mathrm{\Lambda }=0.0`$ or 0.7 the faint counts are expected to be greater, and the bright counts are expected to be less, as compared with the predictions of an Einstein–de Sitter model. The count changes as a function of cosmology by a factor of about 10 in the most abundant lines over the wide range of flux densities presented in Fig. 5, which correspond to a range of count values between about 10<sup>4</sup> and 10<sup>-3</sup> deg<sup>-2</sup>. At a flux density of $`10^{-20}`$ W m<sup>-2</sup>, which is likely to be similar to the limiting depth of future submillimetre-wave line surveys, the counts are modified by only a factor of about 2 by changing the world model. Hence, at present the uncertainties in the excitation conditions in the ISM are expected to be greater than those introduced by uncertainties in the cosmological parameters.
## 4 Line observations
### 4.1 Millimetre-wave interferometer arrays
Four millimetre-wave interferometer arrays are currently operating: the BIMA array with ten 6-m antennas; the IRAM array with five 15-m antennas; the Nobeyama Millimeter Array with six 10-m antennas; and the OVRO Millimeter Array with six 10.4-m antennas. These instruments operate at wavelengths longer than 1 mm, and their fields of view and sensitivities translate into similar mapping speeds to equivalent flux density levels. For example, in a 20-hr integration at a frequency of 90 GHz, the OVRO Millimeter Array is able to reach a $`5\sigma `$ sensitivity limit of about $`5\times 10^{-21}`$ W m<sup>-2</sup> across a 1-GHz band in a 1-arcmin<sup>2</sup> primary beam.
The counts of lines expected in a 90-GHz observation with a 1-GHz bandwidth are shown in Fig. 1(a) as a function of integrated line flux density. At this limiting flux density, a count of about 2 deg<sup>-2</sup> is expected, and so the serendipitous detection of an emission line in a blank-field survey would be expected to occur after approximately $`4\times 10^4`$ hr. In 230-GHz observations with the same 1-GHz bandwidth, which are possible in good weather, the sensitivity required for a 5$`\sigma `$ detection in a 20-hr integration is about $`2\times 10^{-20}`$ W m<sup>-2</sup>, in a 0.25-arcmin<sup>2</sup> primary beam. The 230-GHz count expected at this depth, after correcting the results shown in Fig. 1(b) for the narrower bandwidth, is about 50 deg<sup>-2</sup>. A serendipitous line detection would thus be expected about every 6000 hr. Thus, while known high-redshift galaxies can certainly be reliably detected in reasonable integration times using these arrays, blank-field surveys for emission lines are not currently practical.
The detection of redshifted CO lines at a 90-GHz flux density of about $`10^{-20}`$ W m<sup>-2</sup> from two submillimetre-wave continuum sources, with a surface density of several hundred deg<sup>-2</sup> (Blain et al. 1999b, 2000), using the OVRO array (Frayer et al. 1998, 1999) is consistent with the 90-GHz line count of about 2 deg<sup>-2</sup> predicted in Fig. 1(a) at this flux density. This is because observations in many tens of 1-GHz bands would have been required to search for a CO emission line from these galaxies without an optical redshift, and so the source count of lines at the detected flux density is expected to be about two orders of magnitude less than that of the dust continuum-selected galaxies.
The large ground-based millimetre/submillimetre-wave interferometer array ALMA will provide excellent subarcsecond angular resolution and a large collecting area for observations of submillimetre-wave continuum and line radiation. Based on the performance described for ALMA at 230 GHz by Wootten (2000), a 300-km-s<sup>-1</sup> line with an integrated flux density of $`5\times 10^{-22}`$ W m<sup>-2</sup> could be detected at $`5\sigma `$ significance, but not resolved, anywhere within a 16-GHz band in about 1 hr in the 0.15-arcmin<sup>2</sup> primary beam. The surface density of lines brighter than this flux density is expected to be about $`2.5\times 10^4`$ deg<sup>-2</sup> (Fig. 1b; see also Blain 2000), and so a detection rate of about 1.7 hr<sup>-1</sup> would be expected. The knee in the counts in Fig. 1(b), indicating the most efficient survey depth, is at a flux density greater than about $`10^{-20}`$ W m<sup>-2</sup>, a depth reached in an integration time of about 10 s at 5$`\sigma `$ significance. Hence, making a large mosaic map at this depth, covering an area of 0.017 deg<sup>2</sup> hr<sup>-1</sup>, should maximize the detection rate, which would then be about 15 galaxies per hour, neglecting scanning overheads incurred in the mosaicking process. This detection rate would allow large samples of line-emitting high-redshift galaxies to be compiled rapidly using ALMA. The performance of ALMA in line searches is discussed in more detail by Blain (2000).
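The ALMA rates quoted above follow from simple bookkeeping, reproduced below with the numbers stated in the text; the count at the knee is a value we read off Fig. 1(b), so it is only indicative.

```python
ARCMIN2_PER_DEG2 = 3600.0

# Pointed search: ~1 hr per 5-sigma field of 0.15 arcmin^2 at 5e-22 W m^-2.
rate_deep = 2.5e4 * (0.15 / ARCMIN2_PER_DEG2)
print(rate_deep)                 # ~1 hr^-1, of order the quoted ~1.7 hr^-1

# Mosaic at the ~1e-20 W m^-2 knee: 10-s fields give 0.017 deg^2 hr^-1.
knee_count = 900.0               # deg^-2, assumed reading of Fig. 1(b)
print(0.017 * knee_count)        # ~15 galaxies hr^-1, neglecting overheads
```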
### 4.2 Ground-based single-antenna telescopes
At present the heterodyne spectrographs fitted to the JCMT, the Caltech Submillimeter Observatory (CSO), and the IRAM 30-m antenna are not sufficiently sensitive to allow a blank-field survey to search for distant line emitting galaxies. For example, the 230-GHz receiver at the JCMT, which is the least susceptible to atmospheric noise, can reach a 5$`\sigma `$ sensitivity of about $`2\times 10^{-19}`$ W m<sup>-2</sup> in a 1-hr observation in a 1.8-GHz band centred on 230 GHz. At this flux density the surface density of CO line emitting galaxies is expected to be $`<10`$ deg<sup>-2</sup> (see Fig. 1b), and so because the beam area is $`2.4\times 10^{-5}`$ deg<sup>2</sup>, many tens of thousands of hours of observation would be required to detect a source serendipitously. The development of wide-band correlators, such as the 3.25-GHz WASP (Isaak, Harris & Zmuidzinas 1999), will improve the performance of single-antenna telescopes significantly for the detection of faint lines in galaxies with a known redshift, but unless bandwidths are increased by at least an order of magnitude, blank-field line searches from the ground will not be practical.
#### 4.2.1 Future instrumentation
At present, submillimetre-wave emission lines from a particular distant galaxy can only be detected if an accurate redshift has been determined, because the instantaneous bandwidth of receivers is narrow, and accurate tuning is required for a target line to be observed within the available band. For galaxies and AGN detected in blank-field submillimetre continuum surveys, obtaining a spectroscopic redshift requires a very considerable investment of observing time (see, e.g., Smail et al. 1999a). However, if a much wider bandwidth of order $`115\mathrm{GHz}/(1+z)`$ could be observed simultaneously, then a CO line from a galaxy at redshift $`z`$ would always lie within the observing band.
There are currently two ways to increase the instantaneous bandwidth of a ground-based millimetre-wave telescope, both based on bolometer detectors rather than heterodyne mixers. Either a resonant cavity, such as a Fabry–Perot, or a diffraction grating could be used to feed an array of bolometer detectors. The potential of such instruments is discussed in the subsections below.
#### 4.2.2 A Fabry–Perot device: SPIFI
SPIFI is a $`5\times 5`$ element Fabry–Perot interferometer, for use at frequencies from 460 to 1500 GHz on both the 1.7-m AST/RO telescope at the South Pole and the 15-m JCMT (Stacey et al. 1996; Bradford et al. 1999, in preparation). On AST/RO, in the 350-$`\mu `$m (850-GHz) atmospheric window, the field of view of the instrument is about 25 arcmin<sup>2</sup>. At a coarse 300-km-s<sup>-1</sup> resolution, a bandwidth of about 8 GHz can be observed to a $`5\sigma `$ detection threshold of about $`3.2\times 10^{-17}`$ W m<sup>-2</sup> in a 1-hr integration. On the 15-m JCMT, at the same frequency and for the same bandwidth, the field of view and the $`5\sigma `$ detection threshold are both smaller, about 0.3 arcmin<sup>2</sup> and $`8\times 10^{-19}`$ W m<sup>-2</sup> respectively.
From Fig. 1(d), the count at flux densities brighter than the 5$`\sigma `$ detection threshold for the JCMT is expected to be about 30 deg<sup>-2</sup>, and to be dominated by \[Cii\]-emitting galaxies, for which the counts are very uncertain. Hence a serendipitous detection would be expected about every 400 hr, and so blank-field line surveys using SPIFI are on the threshold of being practical. A second-generation instrument with a wider field of view should be capable of making many detections in a reasonable integration time.
#### 4.2.3 A millimetre-wave grating spectrograph
An alternative approach to obtaining a wide simultaneous bandwidth would be to build a grating spectrograph to disperse the signal from a 10-m ground-based telescope onto linear arrays of bolometers. Such an instrument would be able to observe a reasonably large fraction of the clear 150–250 GHz atmospheric window simultaneously, at a background-limited $`1\sigma `$ sensitivity of about $`10^{-20}`$ W m<sup>-2</sup> in a 1-hr observation, which is uniform to within a factor of about two across this spectral range. The predicted counts of lines in this spectral range are of order 100 deg<sup>-2</sup> at this depth (Fig. 3a), and are expected to be dominated by CO lines, for which the predicted counts should be reasonably accurate. At 200 GHz the field of view of a 10-m telescope is about 0.2 arcmin<sup>2</sup>, and so a 5$`\sigma `$ detection rate of about 0.05 hr<sup>-1</sup> would be expected in a blank-field line survey. A grating instrument with these specifications is thus on the threshold of being useful for conducting such a survey.
It would also be very valuable for detecting CO lines emitted by the high-redshift galaxies detected in dust continuum surveys, whose positions are known to within about 10 arcsec. An instantaneous bandwidth of order 100 GHz would accommodate either 2 or 3 CO lines for a $`z>2`$ galaxy, and so a spectroscopic redshift could be determined without recourse to radio, optical or near-infrared telescopes. For example, the $`z=2.8`$ SCUBA galaxy SMM J02399-0136 has an integrated flux density of $`1.5\times 10^{-21}`$ W m<sup>-2</sup> in the CO(3→2) transition (Frayer et al. 1998), and all the CO transitions with upper level $`J`$ = 5, 6, 7 and 8 fall within the 150–250 GHz spectral range. Based on the LVG model (Table 1), these lines are expected to have integrated flux densities of 3.0, 3.6, 3.2 and $`1.0\times 10^{-20}`$ W m<sup>-2</sup> respectively. Thus, in a 5-hr integration in the field of SMM J02399-0136, all but the CO(8→7) line could be detected using a background-limited grating spectrograph, allowing its redshift to be determined unequivocally in a time comparable to that required to obtain an optical redshift using a 4-m telescope (Ivison et al. 1998b). There are two further advantages. First, the CO-line redshift would be the redshift of the cool gas in the ISM of the galaxy, and not that of the optical emission lines, which are typically blueshifted by several hundred km s<sup>-1</sup> with respect to the ISM. Secondly, the ratios of the luminosities of the detected CO lines would reveal information about the physical conditions in the cool ISM where star formation takes place.
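Which CO lines land in a given window at a given redshift is a one-line calculation; the sketch below uses the rigid-rotor approximation, in which the CO line frequencies are very nearly integer multiples of 115.27 GHz (good to well under a per cent for these transitions).

```python
NU_CO = 115.2712   # GHz, CO(1-0) rest frequency

def co_lines_in_band(z, lo=150.0, hi=250.0, j_max=12):
    """Upper levels J of the CO lines redshifted into [lo, hi] GHz."""
    return [(J, round(J * NU_CO / (1.0 + z), 1))
            for J in range(1, j_max + 1)
            if lo <= J * NU_CO / (1.0 + z) <= hi]

print(co_lines_in_band(2.8))
# -> [(5, 151.7), (6, 182.0), (7, 212.3), (8, 242.7)]: the four lines
#    available for SMM J02399-0136 discussed above
```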
### 4.3 Ground-based centimetre-wave telescopes
Low-$`J`$ CO lines are redshifted into the centimetre waveband for redshifts $`z<10`$. Instruments operating in this waveband include the VLA and the Green Bank Telescope (GBT).
Currently, the VLA can observe spectral lines in a very narrow 87.5-MHz band, centred in the K band (22–24 GHz) and Q band (40–50 GHz). In the K band, Ivison et al. (1996) attempted to detect CO(1→0) emission from the environment of a dusty radio galaxy at $`z=3.8`$, and in 12 hours they obtained a $`5\sigma `$ upper limit of about $`4\times 10^{-23}`$ W m<sup>-2</sup> in a 2-arcmin beam. Because of the very narrow bandwidth, the detection rate of CO line-emitting galaxies with the K-band VLA in a blank-field survey is expected to be about 1 yr<sup>-1</sup>. However, by 2002 fibre-optic links and a new correlator will be installed, increasing the simultaneous bandwidth greatly to 8 GHz, and improving the $`5\sigma `$ sensitivities to $`1.3\times 10^{-23}`$ and $`8.0\times 10^{-23}`$ W m<sup>-2</sup> in 12-hr K- and Q-band integrations respectively. The counts of line-emitting galaxies at $`z\le 10`$ in these bands are shown in Fig. 2. At these sensitivities, K- and Q-band counts of about 500 and 2000 deg<sup>-2</sup> are expected respectively, each corresponding to a detection rate of about 0.03 hr<sup>-1</sup>. The knee in the counts at which the detection rate is maximized is expected at rather similar flux densities of about $`3\times 10^{-23}`$ and $`2\times 10^{-22}`$ W m<sup>-2</sup> in the K and Q bands respectively, depths which can be reached in about 140- and 190-min integrations and at which the counts are expected to be about 100 and 400 deg<sup>-2</sup> respectively. The most efficient detection rates in the K and Q bands are thus expected to be about 0.04 and 0.03 hr<sup>-1</sup>. Hence, low-$`J`$ blank-field CO-line surveys will be practical using the upgraded VLA. High-redshift galaxies with luminosities comparable to the SCUBA galaxies detected by Frayer et al. (1998, 1999) should be readily detectable, as their integrated flux densities in the CO(1→0) line would be of order $`7\times 10^{-23}`$ and $`4\times 10^{-22}`$ W m<sup>-2</sup> when redshifted into the K and Q bands, from $`z=4`$ and 1.3 respectively.
The 100-m clear-aperture GBT will operate with a 3.2-GHz bandwidth in the K and Q bands, reaching 5$`\sigma `$ sensitivity limits in 1-hr integrations of about 1.3 and $`5\times 10^{-22}`$ W m<sup>-2</sup> respectively. These sensitivities make the GBT even more suitable for detecting low-$`J`$ CO lines from known high-redshift galaxies than the VLA, but the subarcminute field of view of the GBT is too small to allow a practical blank-field survey. At these sensitivity limits, only $`2\times 10^{-5}`$ and $`6\times 10^{-4}`$ lines per beam are expected in the K and Q bands respectively.
### 4.4 Air- and space-borne instruments
The 3.5-m space-borne telescope FIRST (Pilbratt 1997) will carry the HIFI far-infrared/submillimetre-wave spectrometer (Whyborn 1997) and the SPIRE bolometer array camera (Griffin et al. 1998). Both of these instruments operate at frequencies for which fine-structure lines are expected to dominate the counts of line-emitting galaxies, and so the number of detectable galaxies is necessarily uncertain.
The spectral coverage of HIFI extends from 500 to 1100 GHz. The instrument will have a 4-GHz bandwidth, a 0.4-arcmin<sup>2</sup> field of view and a 5$`\sigma `$ sensitivity of about $`5\times 10^{-19}`$ W m<sup>-2</sup> in a 1-hr integration at 650 GHz. The SPIRE Fourier Transform Spectrograph (SPIRE-FTS) will provide a spectroscopic view of a 2-arcmin-square field at all frequencies between 460 GHz and 1.5 THz simultaneously. The 5$`\sigma `$ detection threshold of a line in a 1-hr integration is expected to be about $`1.5\times 10^{-16}`$ W m<sup>-2</sup>. The slope of the counts shown in Figs 1(d) and 3(b), which are based on the estimated, and very uncertain, counts of \[Cii\] lines, indicates that the most efficient survey depth is greater than $`10^{-18}`$ W m<sup>-2</sup> for HIFI and about $`5\times 10^{-18}`$ W m<sup>-2</sup> for SPIRE-FTS. At a depth of $`5\times 10^{-18}`$ W m<sup>-2</sup>, the optimal detection rates using HIFI and SPIRE-FTS are expected to be about $`6\times 10^{-3}`$ and $`7\times 10^{-5}`$ hr<sup>-1</sup> respectively, and so the detection rate of line-emitting galaxies using FIRST is expected to be quite low.
The currently quoted sensitivities of FIRST-HIFI are about seven times better than those of the heterodyne instruments attached to the 2.5-m SOFIA airborne telescope (Becklin 1997; Davidson 2000). Thus, when the larger field of view of SOFIA is taken into account, a line-emitting galaxy could be detected using SOFIA about every 4500 hr. Hence, SOFIA could not carry out a successful blank-field line emission survey. Note, however, that the instruments aboard SOFIA will be upgraded throughout its 20-year lifetime, and so the development of multi-element detectors and innovative wide-band spectrographs may change this situation.
The proposed space-borne far-infrared interferometer SPECS (Mather et al. 1998) has very great potential for resolving and obtaining very high signal-to-noise spectra of high-redshift galaxies. Although predictions of the detectability of fine-structure lines at frequencies greater than 600 GHz are very uncertain, with a field of view of about 0.25 deg<sup>2</sup> at 650 GHz and a $`5\sigma `$ sensitivity of about 10<sup>-18</sup> W m<sup>-2</sup> in a 24-hr integration, about 5 \[Cii\] galaxies could be detected per day using SPECS in an 8-GHz-wide band (Fig. 1d). Hence, although SPECS is predominantly an instrument to study known galaxies in great detail, its sensitivity is sufficient to carry out successful blank-field line surveys.
### 4.5 Summary of prospects for line surveys
The most promising instrument for future surveys of CO and fine-structure lines from high-redshift galaxies is the ALMA interferometer array in the 140- and 230-GHz bands, which should be able to detect about 15 galaxies an hour in a survey to a line flux density of $`10^{-20}`$ W m<sup>-2</sup> in a 16-GHz-wide band. Other instruments for which detection rates are expected to exceed one source per hundred hours are the future SPECS space-borne interferometer, which is likely to detect of order five fine-structure line-emitting galaxies per day, and a ground-based, wide-band, background-limited millimetre-wave grating spectrograph, which should be able to detect a line-emitting galaxy every 20 hr, the same rate as the upgraded K- and Q-band VLA.
## 5 Conclusions
We draw the following conclusions:
1. We have predicted the surface densities of high-redshift galaxies that emit molecular rotation and atomic fine-structure line radiation that is redshifted into the millimetre/submillimetre waveband. The results depend on both the excitation conditions in the ISM of the galaxies and the evolution of the properties of distant gas-rich galaxies. We incorporate the latest results of CO-line observations and the redshift distribution of galaxies discovered in submillimetre-wave continuum surveys to provide a sound observational basis for these predictions.
2. The predicted counts of CO line-emitting galaxies are probably accurate to within a factor of about 5 for lines with $`J<7`$. The uncertainties are inevitable, and caused by the lack of knowledge of both the cosmological model and the excitation state of the emitting gas in the ISM of high-redshift galaxies.
3. The predictions of the counts of atomic fine-structure lines in high-redshift ultraluminous galaxies are more weakly constrained by observations, and we present likely order-of-magnitude estimates of the counts in six such lines.
4. The most efficient frequency for a survey that aims to detect CO emission is probably in the range 200–400 GHz, which includes the 230- and 350-GHz atmospheric windows. The optimal frequencies for a fine-structure line survey are probably in the range 400–800 GHz, which includes the 670- and 850-GHz atmospheric windows.
5. There are excellent prospects for using a range of millimetre- and submillimetre-wave instruments that are currently under development to conduct blank-field surveys for the redshifted emission lines from the ISM in high-redshift galaxies. ALMA will probably prove the most capable facility for blank-field surveys, and should detect up to about 15 line-emitting galaxies per hour. The most efficient survey is likely to be made in the 140- and 230-GHz bands to an integrated line flux density of a few $`\times 10^{-20}`$ W m<sup>-2</sup>. The required sensitivities could also be achieved by future wide-band spectrometers on single-antenna telescopes. Low-$`J`$ CO transitions will be detectable using the upgraded VLA at longer millimetre and centimetre wavelengths.
6. A wide range of millimetre/submillimetre-wave lines should be detected in the spectra of galaxies selected in other wavebands, and in blank-field dust continuum surveys. Redshifts for millimetre/submillimetre-selected galaxies could be determined directly in these wavebands if a wide bandwidth of order $`100/(1+z)`$ GHz were available, with a centre frequency of about 200 GHz.
## Acknowledgements
The results in this paper are based on the properties of the SCUBA lens survey galaxies detected at the Owens Valley Millimeter Array in collaboration with Aaron Evans and Min Yun. The core of the SCUBA lens survey was carried out by Ian Smail, Rob Ivison, AWB and Jean-Paul Kneib. We thank the referee Paul van der Werf for his careful reading of the manuscript and valuable comments, and also Jackie Davidson, Kate Isaak, Rob Ivison, Richard Hills, Brett Kornfeld, Malcolm Longair, Phil Lubin, Kate Quirk, John Richer and Gordon Stacey for helpful conversations and comments. AWB, the Raymond & Beverly Sackler Foundation Research Fellow at the IoA, Cambridge, gratefully acknowledges generous support from the Raymond & Beverly Sackler Foundation as part of the Deep Sky Initiative programme at the IoA. AWB thanks the Caltech AY visitors program for support while this work was conducted.
## Abstract
A mechanism of the parity effect in the thermally assisted resonant tunneling is proposed in the view point of nonadiabatic transitions of thermally excited states. In this mechanism, alternating enhancement of the relaxation is naturally understood as a general property of quantum relaxation of uniaxial magnets at finite temperatures where appreciable populations are pumped up to excited states. It is also found that the enhanced sequence depends on the sweeping rate of the field.
PACS number: 75.40.Gb,76.20.+q
As to the relaxation of metastable magnetization of uniaxial nanoscale molecular magnets such as Mn<sub>12</sub> and Fe<sub>8</sub>, the resonant tunneling phenomena have been paid attention and various interesting properties of the phenomena have been reported. The key mechanism of the relaxation comes from their discrete energy structure due to a finite number of degrees of freedom. The eigenvalues of the system are functions of the parameters of the system, such as an external field. If we change a parameter infinitesimally small, then the system changes adiabatically, i.e., if the system is initially in the ground state, then it stays in the ground state of the system with the current value of the parameter. On the other hand, if the changing rate is finite, the system cannot completely follow the change of parameter and then so-called nonadiabatic transition occurs. For example in uniaxial magnetic systems, if we sweep the field very slowly from parallel to antiparallel to the initial magnetization, the magnetization adiabatically follows the field and reverses its direction. This change of magnetization corresponds to the tunneling (the adiabatic transition). If the sweeping rate is fast, then the magnetization only partially changes (the nonadiabatic transition).
In uniaxial magnets, quantum fluctuation is relevant only at avoided level crossing points and changes of magnetization only occur at those points. The sweeping rate dependence of the probability of staying in the original state in this type of nonadiabatic transition has been given by Landau , Zener , and Stรผkelberg (LZS). We have studied changes of magnetization in a sweeping field from the view point of the LZS mechanism and proposed to obtain the tunneling gap from the magnetization change in a sweeping field and also explained the step-like magnetization process as a characteristic feature of nanoscale magnets. The LZS mechanism is pure quantum mechanical and it is independent of the temperature. However in experiments, strong temperature dependences have been observed, which brought an idea โthermally assisted resonant tunnelingโ. In order to explain this temperature dependence, various theoretical attempts have been done.
The sweeping rate dependences of magnetization process have been also observed in experiments at very low temperatures in Mn<sub>12</sub> and also in Fe<sub>8</sub>. There data do not depend on the temperature any more. We have pointed out that even in such cases there is still inevitable effect of the environment, and that we could nevertheless estimate the pure quantum transition probability.
Recently new aspects of the resonant tunneling have been reported, e.g., the parity effect of the resonant tunneling where amount of relaxation changes at the resonant points alternately, and $`\sqrt{t}`$-dependence of the initial relaxation of the magnetization at the resonant points. In the present Letter, we would like to propose a mechanism of the parity effect as a universal property of the thermally assisted resonant tunneling.
In order to investigate characteristics of temperature dependence of the resonant tunneling, we have constructed an equation of motion of the density matrix where effects of a thermal bath are taken into account (the quantum master equation):
$$\frac{\partial \rho (t)}{\partial t}=-i[\mathcal{H},\rho (t)]-\lambda \left([X,R\rho (t)]+[X,R\rho (t)]^{\dagger }\right),$$
(1)
where $`\mathcal{H}`$ is the Hamiltonian, $`\rho (t)`$ is the density matrix of the system, and $`X`$ is a system operator through which the system couples to the bath with coupling constant $`\lambda `$. Here we set $`\mathrm{\hbar }`$ to unity. The first term on the right-hand side describes the pure quantum dynamics of the system, while the second term represents the effect of an environment at temperature $`T(=\beta ^{-1})`$. Here $`R`$ is defined as follows:
$$\langle k|R|m\rangle =\zeta (E_k-E_m)n_\beta (E_k-E_m)\langle k|X|m\rangle ,$$
(2)
$$\zeta (\omega )=I(\omega )-I(-\omega ),\qquad n_\beta (\omega )=(e^{\beta \omega }-1)^{-1},$$
(3)
where $`|k\rangle `$ and $`|m\rangle `$ represent the eigenstates of $`\mathcal{H}`$ with the eigenenergies $`E_k`$ and $`E_m`$, respectively. Here we adopt a thermal bath which consists of an infinite number of bosons, $`\mathcal{H}_\mathrm{B}=\sum _\omega \omega b_\omega ^{\dagger }b_\omega `$, where $`b_\omega `$ and $`b_\omega ^{\dagger }`$ are the annihilation and creation operators for bosons of frequency $`\omega `$. We adopt a spectral density of the boson bath of the form $`I(\omega )=I_0\omega ^2`$, which is appropriate for a phonon reservoir. For the interaction between the system and the bath we adopt the form $`X\sum _\omega (b_\omega +b_\omega ^{\dagger })`$.
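For concreteness, the relaxation operator of equations (2) and (3) can be assembled numerically in the eigenbasis of the system Hamiltonian. The sketch below is a direct transcription; the value of $`I_0`$ and the handling of the $`\omega \to 0`$ limit (where $`\zeta `$ vanishes faster than $`n_\beta `$ diverges) are implementation choices.

```python
import numpy as np

def build_R(H, X, beta, I0=1.0):
    """<k|R|m> = zeta(E_k - E_m) n_beta(E_k - E_m) <k|X|m>,
    with I(omega) = I0 * omega**2 for omega > 0 and zero otherwise."""
    E, U = np.linalg.eigh(H)
    Xe = U.conj().T @ X @ U                     # <k|X|m> in the eigenbasis
    w = E[:, None] - E[None, :]                 # E_k - E_m
    Iw = np.where(w > 0.0, I0 * w ** 2, 0.0)
    zeta = Iw - Iw.T                            # zeta(w) = I(w) - I(-w)
    with np.errstate(over='ignore', divide='ignore', invalid='ignore'):
        R = zeta / np.expm1(beta * w) * Xe      # Bose factor n_beta(w)
    return np.where(np.abs(w) < 1e-12, 0.0, R)  # zeta*n_beta -> 0 as w -> 0
```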
As a more relevant source of noise at very low temperatures, we may consider the dipole field from other molecules and/or the hyperfine interaction with nuclear spins. For such noises we would have to take into account another contribution to $`R`$. In this sense, the bath treated here does not represent the experimental situation very closely. In the present Letter, however, we discuss only general features which do not depend on the details of the thermal bath.
Equation (1) with $`X=S_x+S_z`$ was used to study the aforementioned inevitable effects of environments on the magnetization process under a sweeping field at very low temperature. For that process the existence of an interaction is essential, but the detailed nature of the dissipation mechanism is not important. Thus we could discuss the properties of the process as universal properties of relaxation in nanoscale magnets.
When the temperature rises and the effect of the bath increases, the dissipative process comes to depend on the specific features of the bath and the coupling. It then becomes difficult to treat the relaxation without specifying the nature of the bath. In the present Letter, however, we point out that the aforementioned parity effect is a universal property of resonant tunneling at finite temperatures, independent of the detailed nature of the bath.
Let us consider a general model of an $`S=10`$ uniaxial magnet in an external field:
$$\mathcal{H}=-DS_z^2-\mathrm{\Gamma }S_x-H(t)S_z+Q,$$
(4)
where $`Q`$ represents extra terms such as $`(S^+)^4+(S^-)^4`$, etc. We will propose a mechanism for the parity effect as a general property of uniaxial magnets; thus we put $`Q=0`$.
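The level scheme discussed below (Fig. 2) can be reproduced by diagonalizing Eq. (4) directly. In the sketch below only $`\mathrm{\Gamma }=0.45`$ is fixed by the text; the anisotropy $`D=0.1`$ is an assumed value, chosen because it places the $`(8,5)`$, $`(9,4)`$ and $`(10,3)`$ crossings near $`H=0.3`$, 0.5 and 0.7, as quoted later.

```python
import numpy as np

def spin_matrices(S=10):
    """S_x and S_z for spin S in the S_z basis, m = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1, dtype=float)
    # <m+1|S+|m> = sqrt(S(S+1) - m(m+1)), m being the lower state
    sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    return 0.5 * (sp + sp.T), np.diag(m)        # S_x, S_z

def hamiltonian(h, D=0.1, Gamma=0.45, S=10):
    """Equation (4) with Q = 0: H = -D Sz^2 - Gamma Sx - h Sz."""
    sx, sz = spin_matrices(S)
    return -D * sz @ sz - Gamma * sx - h * sz

# Eigenvalues versus field trace the diabatic lines of Fig. 2; minimum
# level separations locate the avoided crossings whose gaps enter Table I.
for h in (0.2, 0.3, 0.5, 0.7):
    print(h, np.linalg.eigvalsh(hamiltonian(h))[:4].round(4))
```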
In Fig. 1(a) we show a magnetization process $`M(t)=\langle S_z\rangle /S`$ for the case with $`T=1.0`$, $`\lambda =0.00005`$, and $`\mathrm{\Gamma }=0.45`$. We sweep the magnetic field as $`H(t)=ct-H_0`$ with $`c=0.0001`$ and $`H_0=0.3`$. In Fig. 1(b), the derivative $`dM/dH(=c^{-1}dM/dt)`$ is also shown. In these figures, we clearly see that the amount of magnetization change alternates from resonance to resonance.
In Fig. 2, energy levels as a function of the field are shown. Here we see straight lines along which the magnetization is approximately given by $`m`$ or $`m^{\prime }`$. These lines, denoting energy levels of the diagonal part of the Hamiltonian, are called diabatic states. At each crossing point, a small energy gap is created by the off-diagonal terms and a so-called avoided level crossing is formed, where a large enhancement of the relaxation occurs (resonant tunneling). The energy gaps of the avoided level crossing points are listed in Table I, where we denote the avoided level crossing point of the levels $`m`$ and $`m^{\prime }`$ by $`(m,m^{\prime })`$. There we find that the energy gaps at the same horizontal level in Fig. 2 (denoted by the same symbols) are about the same. This is easily understood from the fact that the gap is of order $`\mathrm{\Gamma }^{|m-m^{\prime }|}`$, where $`m`$ and $`m^{\prime }`$ are the magnetizations of the crossing levels for $`\mathrm{\Gamma }=0`$.
In order to see what processes are going on, we show, in Fig. 3, the time evolution of the distribution of occupation probabilities of the $`i`$th level, which is expressed by $`\langle i|\rho |i\rangle `$.
This figure shows that the population along the line of $`m=8`$ decays at $`(8,5)`$ in Fig. 2, and the populations along the lines $`m=9`$ and $`10`$ decay at $`(9,4)`$ and $`(10,3)`$, respectively. These points are shown by squares in Fig. 2.
If the system initially has an equilibrium distribution, the population is distributed over the levels. The population on each line decays at an avoided level crossing point where the LZS transition probability
$$p=1-\mathrm{exp}\left[-\frac{\pi (\mathrm{\Delta }E)^2}{2c|m-m^{\prime }|}\right]$$
(5)
has an appreciable value. Here $`\mathrm{\Delta }E`$ is the energy gap at $`(m,m^{\prime })`$. In Table I the transition probabilities are listed. For example, the transition probability for $`c=0.0001`$ at $`(m,m^{\prime })=(8,5)`$ (see Fig. 3) is 0.913, while at the point $`(8,6)`$ it is $`0.07`$. Thus most of the population of the line of $`m=8`$ decays at $`(8,5)`$. The population of the line of $`m=9`$ decays very little until $`(9,4)`$, because the transition probabilities at $`(9,6)`$ and $`(9,5)`$ are very small, i.e., 0.0006 and 0.04, respectively. The population of the line of $`m=10`$ decays at the point $`(10,3)`$.
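The sweeping rate dependence can be made quantitative by evaluating Eq. (5) at each crossing. The gaps below are hypothetical stand-ins for the Table I entries (which are not reproduced here), chosen only so that the $`c=0.0001`$ values echo the probabilities quoted above.

```python
import numpy as np

def p_lzs(gap, c, m, mp):
    """Equation (5) with hbar = 1: probability of leaving the diabatic
    line at the (m, m') avoided crossing for sweep rate c."""
    return 1.0 - np.exp(-np.pi * gap ** 2 / (2.0 * c * abs(m - mp)))

gaps = {(8, 6): 3.0e-3, (8, 5): 2.2e-2}       # hypothetical Table I stand-ins
for c in (1.0e-4, 5.0e-6):
    print(c, {k: round(p_lzs(g, c, *k), 3) for k, g in gaps.items()})
# At c = 1e-4 the m = 8 population survives (8,6) (p ~ 0.07) and decays
# at (8,5) (p ~ 0.92); at the 20x slower rate it is already drained at
# the earlier crossing, shifting the enhanced sequence of resonances.
```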
As we have found here, the parity effect comes simply from the structure of the energy levels. The structure of the energy levels in Fig. 2 is inherent to systems described by the uniaxial Hamiltonian (4), regardless of the form of $`Q`$, and we expect that the parity effect is generally observed in uniaxial magnets.
Here it should be noted that the relevant sequence of decays (in the above case $`H=0.3,0.5`$, and $`0.7`$) depends on the sweeping rate $`c`$. If $`c`$ decreases, the transition probabilities increase. The populations on the lines then decay at the circled avoided level crossing points before reaching the points marked by squares. In this case the relaxation is enhanced at $`H=0.2,0.4`$, and $`0.6`$ instead of at $`H=0.3,0.5`$, and $`0.7`$ as in the case of $`c=0.0001`$. In Fig. 4, we plot the time evolution of the magnetization process for a 20 times slower sweeping rate, $`c=0.000005`$. In Fig. 5, we show the time evolution of the level populations, where we indeed find large decreases of the population at $`H=0.2,0.4`$, and $`0.6`$. Furthermore, for the much slower sweeping rate $`c=0.00000005`$, we expect the transitions to occur at the points marked by triangles in Fig. 2 (see also Table I). We do not demonstrate this case because it would require a simulation $`100`$ times longer. This sweeping rate dependence is also a general property of uniaxial magnets, and we expect the shift of the sequence to be found in experiments as well.
Finally we would like to point out a strange property of the system when the Hamiltonian (4) includes the interaction
$$Q=C[(S^+)^4+(S^-)^4]$$
(6)
which has been discussed in the literature. As has been pointed out by Wernsdorfer and Sessoli, in the present model the energy gaps at the avoided level crossing points change nonmonotonically with the value of $`C`$ ($`\mathrm{\Gamma }`$ fixed), and even gapless points exist, which causes irregular behavior of the resonant tunneling and disturbs the simple parity effect. The reason why gapless points appear is not clear at this moment and remains an interesting problem.
We would like to thank B. Barbara, H. De Raedt and W. Wernsdorfer for their valuable discussions. The present work is partially supported by the Grant-in-Aid from the Ministry of Education.
## The Formation Rate of Blue Stragglers in 47 Tucanae
## 1 INTRODUCTION
Blue stragglers in globular clusters are thought to be created by stellar mergers. Such mergers can occur in two ways: through the spiraling in and merger of two components of a binary system, or through the direct collision of two stars. The former mechanism is not strongly dependent on cluster density, but the latter occurs more often as the stellar collision rate increases. In a cluster of single stars, the collision rate is a function of cluster density and velocity dispersion (Verbunt & Hut, 1987), but a significant binary population can increase the collision rate well beyond that of a cluster of single stars. The enhanced collisions are caused by resonant encounters with binary stars, which create many more opportunities for the stars involved to collide, thus greatly increasing the collisional cross-section (Leonard, 1989). Thus the formation rate of collisional blue stragglers depends on the current and past cluster density profile, velocity dispersion, and binary population. By studying the number of blue stragglers and their distribution in the color magnitude diagram, we can therefore hope to probe the dynamical history and stellar populations of the cluster.
HST observations of the cores of globular clusters, combined with models of blue straggler formation, have been used to infer global properties of clusters (Bailyn & Pinsonneault, 1995; Sills & Bailyn, 1999; Ouellette & Pritchet, 1998). These studies suggest that blue stragglers in the cores of dense clusters are indeed collisional in origin, and place limits on the binary fraction, mass function, central density, and velocity dispersion of the clusters. Recently, Ferraro et al. (1999) found a remarkably high blue straggler concentration in M80, which was difficult to explain given the relatively low inferred collision rate. A similar situation pertains in M3, although it is less pronounced (Ferraro et al., 1997). Ferraro et al. therefore suggested that M80, and possibly M3, may be in an unusual dynamical state, in which the density has recently become large enough to create a large number of encounters involving primordial binaries, engendering anomalously large collision rates. Such a state may also be required to explain the anomalous, and probably short lived, remnants in the core of NGC 6397 (Cool et al., 1998; Edmonds et al., 1999). The high central density of binaries in NGC 6752 (Rubenstein & Bailyn, 1997) may also imply that the cluster is in an unusual dynamical phase. Once the initial population of primordial binaries has been "burned", the collision rate would then be expected to decrease, even as the cluster density continues to rise. Since the primordial binary burning phase is presumably short, it is somewhat disturbing that such a phase has to be invoked at the current time in several different clusters.
In our previous exercises in blue straggler population synthesis, we have assumed approximately constant collision rates. Here, we explore the effects of significant changes in the blue straggler formation rate on the observed distribution of blue stragglers in the color-magnitude diagram. We find that the currently observed blue straggler populations should vary significantly depending on the past formation rate. We apply our results to a new data set from 47 Tuc. The results suggest that this cluster may well have undergone a burst of blue straggler formation which ended several Gyr in the past. In section 2 we present the theoretical models of blue straggler distributions. We discuss the observations of 47 Tuc in section 3, and compare the theory with these observations in section 4. We summarize our findings in section 5.
## 2 MODELS OF BLUE STRAGGLER DISTRIBUTIONS
We calculate blue straggler distributions in the color-magnitude diagram (hereafter CMD) of 47 Tuc following the method described in detail in Sills & Bailyn (1999). We assume that the blue stragglers are all formed through stellar collisions between single stars during an encounter between a single star and a binary system. The trajectories of the stars during the collision are modeled using the STARLAB software package (McMillan & Hut, 1996). The masses of the stars involved are chosen randomly from a mass function for the current cluster and a different mass function which governs the mass distribution within the binary system. A binary fraction and a distribution of semi-major axes must also be assumed. The output of these simulations is the probability that a collision between stars of specific masses will occur. We have chosen standard values for the mass functions and binary distribution. The current mass function has an index $`x=2`$, and the masses within the binary systems are drawn from a Salpeter mass function ($`x=1.35`$). We chose a binary fraction of 20% and a binary period distribution which is flat in log P. The effect of changing these values is explored in Sills & Bailyn (1999). The collision products are modeled by entropy ordering of the gas from the colliding stars (Sills & Lombardi, 1997) and evolved from these initial conditions using the Yale stellar evolution code YREC (Guenther et al., 1992). The models reported here used a metallicity appropriate for 47 Tuc, but the general features we report are similar for any metallicity. By weighting the resulting evolutionary tracks by the probability that the specific collision will occur, we obtain a predicted distribution of blue stragglers in the CMD.
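The bookkeeping in this procedure is easy to sketch. Below, the mass sampling follows the stated mass functions; p_collision and track are crude placeholders for the STARLAB scattering probabilities and the YREC collision-product tracks, included only to show how the weighting produces a CMD density.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mass(x, m_lo=0.2, m_hi=0.8, n=1):
    """Inverse-transform draw from dN/dm ~ m**-(1+x); Salpeter is x = 1.35."""
    lo, hi = m_lo ** -x, m_hi ** -x
    return (lo - rng.uniform(size=n) * (lo - hi)) ** (-1.0 / x)

def p_collision(m1, m2):                      # placeholder, not STARLAB
    return (m1 + m2) ** 2

def track(m_tot, npts=25):                    # placeholder, not a YREC track
    phase = np.linspace(0.0, 1.0, npts)
    return 0.55 - 0.35 * m_tot + 0.2 * phase, 17.5 - 3.5 * m_tot + 1.5 * phase

def blue_straggler_cmd(n=20000, bins=50):
    hist = np.zeros((bins, bins))
    for _ in range(n):
        m1 = sample_mass(2.0)[0]              # incoming single star, x = 2
        m2 = sample_mass(1.35)[0]             # binary member, Salpeter
        colour, mag = track(m1 + m2)
        h, _, _ = np.histogram2d(colour, mag, bins=bins,
                                 range=[[-0.2, 0.8], [13.0, 18.0]])
        hist += p_collision(m1, m2) * h       # weight track by collision prob.
    return hist / hist.sum()                  # probability density in the CMD
```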
In order to explore the effects of non-constant blue straggler formation rates, we examined a series of truncated rates. In these models we assumed that the blue straggler formation rate was constant for some portion of the cluster lifetime, and zero otherwise. This assumption is obviously unphysical โ the relevant encounter rates would presumably change smoothly on timescales comparable to the relaxation time. However these models do demonstrate how the distribution of blue stragglers in the CMD depend on when the blue stragglers were created, and thus provide a basis for understanding more complicated and realistic formation rates.
In Figure 1 we show blue straggler distributions in the CMD for formation rates which were initially zero, and then abruptly "switched on" at some point in the cluster's past, and continued at a constant rate until the present day. The first panel is the limiting case of constant formation rate throughout the cluster's lifetime. There are dramatic changes as the onset of blue straggler formation moves closer to the present. In particular, the redder blue stragglers disappear, starting from the faint end, until in Figure 1E the lower part of the blue straggler distribution closely approximates the zero age main sequence (ZAMS).
This behavior is straightforward to interpret. Lower mass blue stragglers start out essentially as ZAMS stars: they are generally formed from low-mass precursors which have not processed significant amounts of nuclear fuel, so they have no chemical anomalies. Since their main sequence lifetimes are $`\gg 1`$ Gyr, the bottom of the sequence of recently formed blue stragglers closely approximates the ZAMS. In contrast, the more massive blue stragglers evolve much faster, and they are also formed far from the ZAMS in the first place, since their precursors have already undergone considerable nuclear processing. Thus a burst of recent blue straggler formation will create a blue straggler distribution like that in Figure 1E, with a narrow sequence at the low L end, and a relatively large number of stars with a range of temperatures at the bright end.
Figure 2 shows a sequence of blue straggler distributions in which blue straggler formation began at the start of the cluster lifetime, but terminated at some point in the past. The limiting case when the termination point is the present is the same as Figure 1A and has thus been omitted. Figure 2 shows progressively older blue straggler sequences. Once again, the dramatic changes in distribution are easy to understand. The more massive and luminous blue stragglers evolve first, and move away from the ZAMS, and then out of the blue straggler region altogether when they become giants. A population of blue stragglers like that shown in Figure 2d, in which all of the blue stragglers have ages $`\ge 8`$ Gyr, will therefore contain only relatively faint blue stragglers and will be skewed toward the red, away from the ZAMS. The dramatic difference between Figure 1E and Figure 2D, which were created using identical assumptions about binary fraction, mass function, and other dynamical parameters, illustrates the importance of including changes in formation rate in studies of blue straggler distributions. It is difficult to produce such drastic changes in the shape of the blue straggler distribution by varying the mass functions and binary fraction, although these parameters do have a strong influence on the total number of blue stragglers (Sills & Bailyn, 1999).
Fig 3 shows distributions of blue stragglers in which the formation rate turned on at some point after the cluster was born, and then turned off again prior to the present. As might be expected, these distributions show characteristics similar to those in both Figs 1 and 2, since both effects described above apply in these cases. We have also used the binary destruction rate from Figure 3 of Hut, McMillan & Romani (1992) as an approximation for blue straggler creation, since both effects result from the same close stellar encounters. The resulting distribution is dominated by old, low luminosity blue stragglers, but also contains a small, but potentially observable population of younger blue stragglers (Figure 4).
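In all of these experiments the visible population is simply the formation history convolved with each product's remaining main-sequence lifetime. A minimal sketch follows; the lifetime scaling is an assumed stand-in for the YREC tracks (collision products actually live somewhat shorter than ZAMS stars of the same mass, as noted above).

```python
def ms_lifetime_gyr(mass):
    # Rough scaling t_MS ~ 10 Gyr (M/Msun)**-2.5; assumed, not from YREC.
    return 10.0 * mass ** -2.5

def visible_fraction(mass, t_on, t_off, t_now=12.0):
    """Fraction of blue stragglers of a given mass, formed at a constant
    rate between t_on and t_off (Gyr after cluster birth), that are
    still on the main sequence at t_now."""
    earliest_surviving_birth = max(t_on, t_now - ms_lifetime_gyr(mass))
    return max(min(t_off, t_now) - earliest_surviving_birth, 0.0) / (t_off - t_on)

# Formation truncated 3 Gyr before the present: the bright (massive)
# end of the blue straggler sequence empties first.
for m in (1.1, 1.3, 1.5, 1.8):
    print(m, round(visible_fraction(m, 0.0, 9.0), 2))
```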
## 3 OBSERVATIONS OF BLUE STRAGGLERS IN 47 TUCANAE
Observations of 47 Tucanae were obtained between July 1996 and January 1997. Data were taken nearly every night for 6 weeks, with some additional coverage over six months with the CTIO 0.9 m telescope and 2K CCD. Repeated UBVI images were obtained for a 13′ $`\times `$ 13′ field centered on RA, DEC (2000) = 00:22:06.75 -72:04:22.1, with the closest edge 138.5″ west of the cluster center. The primary purpose was to study variability on the giant branch, and the time series results will be presented elsewhere. In this paper, we present color-magnitude diagrams created from summed data. The exposure times were chosen to avoid saturation of giant branch stars, and are therefore deepest in bluer bandpasses, which makes this data set ideal for studying hot stars, such as blue stragglers. The summed images were analyzed with DAOPHOT and calibrated with Landolt standards. The calibration agrees with that of Hesser et al. (1987) to within 1% for B and V. The stars presented and analyzed in this paper are only those which contribute $`>50\%`$ of the light within one PSF radius of their centers. This criterion results in the loss of many crowded stars, especially at or below the main sequence turnoff. However, the principal sequences derived are quite clean, and the completeness above the turnoff is high, though not 100%. Figure 5 presents the resulting CMD.
In order to study the blue stragglers, we must have a consistent way of selecting them from the color-magnitude diagram. It is necessary to make the selection in two colors, since some stars which are present in the blue straggler region in one color may show up as photometric anomalies in other colors. These stars could be foreground or background objects, photometric errors, or other kinds of strange stars which are not blue stragglers. The initial selection was done in the U, U-B diagram (see Figure 5). An additional selection was made in the V, B-V diagram (see Figure 6), and then stars far from the principal sequence in the color-color diagram were rejected (see Figure 7). Although some of the stars excluded may still be blue stragglers, we have adopted these criteria so that we can have a clean comparison of the data to our theoretical distributions. The star at $`V\simeq 14.7`$, $`B-V\simeq 0.2`$ and $`U-B\simeq -0.05`$ is known to be a variable star (Edmonds 1999, private communication) and is likely an SX Phoenicis star. However, it does not satisfy our selection criteria, and therefore has been rejected from our sample. Using these criteria, we find 61 blue stragglers (compared to the 20 found by de Marchi, Paresce & Ferraro (1993)). It should be noted that some of the blue stragglers within 0.75 magnitudes above the cluster turnoff could result from the superposition of main sequence stars, either by chance or from being a physical binary. The blue straggler frequency relative to horizontal branch stars (as defined by Ferraro et al. (1999)) is 0.37. We matched our theoretical distributions of BSs to the observations by forming our distributions from those parts of the evolutionary paths which satisfied the above observational selection criteria. We chose to use this data set alone, rather than combining it with the earlier HST data on blue stragglers from the core of the cluster. In order to have a convincing comparison of theory to data, we need a consistent way of selecting the blue stragglers, which can be done best with a large homogeneous data set. In order to understand the properties of 47 Tuc as a whole, eventually data from all sources and all regions of the cluster will have to be considered. The implications of our choice will be discussed in the following section.
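Operationally the selection is a set of boolean cuts in the two CMDs plus a colour-colour consistency test. A schematic version is below; the turnoff magnitude is approximate and every cut boundary is an illustrative placeholder rather than the actual region drawn on Figures 5-7.

```python
import numpy as np

def select_blue_stragglers(U, B, V, v_to=17.65):
    """Schematic two-colour blue straggler selection for 47 Tuc; v_to
    is an approximate main-sequence turnoff magnitude and all numerical
    boundaries are assumed placeholders."""
    ub, bv = U - B, B - V
    cut_u = (U < v_to + 0.3) & (ub < 0.05)            # U, U-B selection
    cut_v = (V < v_to) & (bv < 0.45)                  # V, B-V selection
    locus = np.abs(ub - (1.1 * bv - 0.30)) < 0.15     # colour-colour sequence
    return cut_u & cut_v & locus
```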
## 4 COMPARISON OF THEORY WITH OBSERVATIONS
The theoretical blue straggler distribution with a constant blue straggler formation rate is shown in Figure 8, along with the 61 selected blue stragglers. This distribution does not match the observations in three important ways. Firstly, the models predict a large peak of low-luminosity blue stragglers which is not observed. This is likely a selection effect, since fainter stars are less likely to pass the 50% contamination test noted above. Secondly, there are too many observed blue stragglers at the red end of the distribution. These so-called "yellow stragglers" have been noted as anomalies before (Stetson, 1994) and may be due to the composite colors of binary stars, or chance superpositions which are fit by only one star in the reduction procedure. Both of these suggestions can be well studied with simulations of the completeness and crowding effects, and will be discussed in a future paper. Thirdly, the theory predicts too many bright blue stragglers. Since the first two problems cannot be addressed in the context of our theoretical models, we focus here on the third point, and explore what is required to produce a theoretical blue straggler distribution which terminates at the same magnitudes as the observed blue stragglers.
As discussed above, the bright blue stragglers have high masses, and do not live very long. Therefore, in order to have a population of blue stragglers which lacks bright stars, the blue stragglers must have stopped forming some time ago. In the context of the models described above, we find that a blue straggler formation rate which terminated 3 Gyr ago reproduces the upper part of the observed blue straggler distribution quite well (Figure 9). However, we caution that the precise date of the termination of blue straggler formation should not be taken too seriously. First, the formation rates used here are not realistic. A full dynamical model of the evolution of the cluster would be required to produce accurate time-dependent rates. Second, our results are influenced by our choice of binary parameters and mass functions, although the same qualitative effects will apply regardless of the choice of these parameters. Third, the observed sample is biased in two important ways. Incompleteness due to crowding will affect the distribution, particularly at the faint end. However, this should not affect the lack of bright blue stragglers, which is the observed feature we are trying to reproduce.

More importantly, we do not have complete spatial coverage of the cluster. HST results suggest that the blue straggler distribution extends to brighter limits in the cluster core (Gilliland et al., 1998). It is possible that the blue straggler distribution is different in the core because the contribution of blue stragglers created by binary mergers, rather than stellar collisions, is larger in the outer regions. If so, the lack of bright blue stragglers in this region may be more closely related to the characteristics of the binary population in this region than to the stellar collision rate. Detailed models of binary merger evolutionary tracks, combined with predictions of the binary population's merger rate, will be necessary to untangle this degeneracy. The lack of bright blue stragglers outside the core could be explained by mass segregation, either because the more massive blue stragglers sink to the cluster center (although this effect should not be dominant since the mass difference between the bright and faint blue stragglers would be relatively small), or because segregation drives the few remaining binaries toward the center of the cluster. However, we do not believe that mass segregation alone could account for the sharp cutoff in blue straggler luminosities that we observe in the absence of a significant change in the blue straggler formation rate, since it is hard to believe the upper part of the blue straggler distribution could be lost from our observations given that there are large numbers of observed blue stragglers in the $`1.1-1.4M_{\odot }`$ range.
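The qualitative point, that truncating the formation rate empties the bright (high-mass, short-lived) end of the distribution first, can be illustrated with a toy Monte Carlo. The mass range and the lifetime scaling below are illustrative assumptions, not the evolutionary tracks used above.

```python
import numpy as np

rng = np.random.default_rng(0)

T_NOW  = 12.0      # cluster age in Gyr (assumed)
T_STOP = 3.0       # blue straggler formation stopped this many Gyr ago
N_FORM = 100_000

# Formation times uniform from t = 0 until formation stops.
t_form = rng.uniform(0.0, T_NOW - T_STOP, N_FORM)
mass   = rng.uniform(0.9, 1.8, N_FORM)          # masses in solar units (assumed)

# Illustrative main-sequence-like lifetime scaling, tau ~ 10 Gyr * M^-3.
tau = 10.0 * mass ** -3.0

alive = (T_NOW - t_form) < tau                   # still on its BS lifetime?
print("overall surviving fraction:", alive.mean())
for m_lo, m_hi in [(0.9, 1.2), (1.2, 1.5), (1.5, 1.8)]:
    sel = (mass >= m_lo) & (mass < m_hi)
    print(f"M in [{m_lo}, {m_hi}): survivors {alive[sel].mean():.2%}")
```

With these numbers the most massive bin empties completely, since its lifetimes fall below the 3 Gyr that have elapsed since formation ceased, which is exactly the truncation of the bright end discussed above.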
Thus the data appear to suggest that 47 Tuc has passed through a stage similar to the current state of M80 at some point in the past. The large extent of the blue straggler sequence in M80 observed by Ferraro et al. (1999) tends to support this interpretation, since a cluster whose blue straggler formation rate is unusually high at the present time should tend to appear like those in Figures 1D and 1E. Extending this idea to other clusters, we suggest that the magnitude of the bright end of the blue straggler distribution may be an indicator of when the phase of primordial binary burning occurred in clusters, and may thus correlate with the dynamical properties of the cluster, and the formation rate of other anomalous populations which require stellar encounters (Bailyn, 1995). If this scenario is correct, one might expect that the present binary fraction of 47 Tuc should be substantially lower than those of M80 and M3. The decrease in the binary fraction could be a function of binary properties, such as binary period or mass ratio, since the collisional cross section for binary stars depends on both quantities. The decrease in binary fraction could also be a function of radial distance from the core, since we expect that binary stars at the center of the cluster will be destroyed earlier than those further out. Testing these ideas in detail will require construction of complete models of the dynamical history of the relevant clusters, including consideration of the evolution of the binary population, an effort well beyond the scope of this paper.
## 5 SUMMARY
We find that changes in the past formation rate of blue stragglers produce drastic changes in their observed distribution in the CMD. A comparison between our parameterized models and observed blue stragglers in 47 Tuc suggests that this cluster may have undergone an epoch of enhanced BS formation several Gyrs ago. We associate this enhanced blue straggler formation rate with the epoch of primordial binary burning invoked to explain the current characteristics of several other clusters. Since this epoch may well be short, it is reassuring to find a cluster which has evidently gone through this stage in the past, rather than experiencing it currently. Much more detailed dynamical models will be required to explore whether the primordial burning scenario is consistent with the observed blue straggler sequences in globular clusters.
A. S. wishes to recognize support from the Natural Sciences and Engineering Research Council of Canada. C.D.B. is supported by NASA grant LTSA NAG5-6076. |
# Energy Loss of Ultrahigh Energy Protons in Strong Magnetic Fields
## 1 Introduction
It is well known that the propagation of ultra high energy (UHE) protons in the Universe is limited by the Greisen-Zatsepin-Kuzmin (GZK) mechanism. The propagating protons undergo inelastic scattering on the photons of the cosmic microwave background radiation (CMBR) and produce pions. On the average, the initial energy is shared (roughly) equally by the nucleon and pion in the final state. The energy at which the GZK mechanism becomes important can be crudely estimated by saturating the inelastic cross section by the $`\mathrm{\Delta }`$ resonance. Given the fact that the average energy of the CMBR photons is around $`3\times 10^{-4}`$eV, one gets that pion production becomes significant around proton energies of the order of $`10^{19}`$eV. (In fact, a similar simple estimate was used in Greisen's original paper.)
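As a back-of-the-envelope version of this estimate (our sketch, not from the paper): the threshold for $`p+\gamma \mathrm{\Delta }`$ follows from $`s=m_p^2+2E\epsilon (1-\mathrm{cos}\theta )m_\mathrm{\Delta }^2`$, so for head-on collisions $`E_{\mathrm{th}}=(m_\mathrm{\Delta }^2-m_p^2)/(4\epsilon )`$.

```python
# Threshold energy for p + gamma -> Delta on CMBR photons (head-on kinematics).
M_P, M_DELTA = 0.938e9, 1.232e9          # proton and Delta masses in eV

def e_threshold(eps_photon_ev):
    """E_th = (m_Delta^2 - m_p^2) / (4 * eps) for a head-on collision."""
    return (M_DELTA**2 - M_P**2) / (4.0 * eps_photon_ev)

# The quoted average photon energy, and an energetic Wien-tail photon:
for eps in (3e-4, 3e-3):
    print(f"eps = {eps:.0e} eV  ->  E_th = {e_threshold(eps):.1e} eV")
```

This prints roughly 5e20 eV and 5e19 eV respectively: the energetic tail of the thermal photon spectrum pulls the effective threshold down to the 10^19-10^20 eV scale quoted above.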
Here we discuss a mechanism for energy loss of UHE protons, hitherto apparently ignored, viz. by inelastic scattering on virtual photons. This mechanism plays no significant role in limiting the propagation of UHE protons in intergalactic space. However, it becomes significant when the propagation is considered in an environment where strong magnetic fields exist, e.g. in jets emerging from gamma ray bursters (GRB) or jets in active galactic nuclei (AGN).
The โaverageโ energetics of pion production on virtual photons (typically, on an external magnetic field) is very different from that of the GZK mechanism. In fact, if the size of the external magnetic field is characterized by a length $`L`$, then the typical momentum of the virtual photon is of the order of $`1/L`$. Consequently, the average invariant center of mass (CMS) energy available for pion production is of the order,
$$s\sim m^2+2E/L,$$
where $`m`$ stands for the nucleon mass and $`E`$ is the energy of the incident proton in the rest frame of the local universe. Obviously, this energy is much less than the analogous quantity in the GZK process for any macroscopic $`L`$. Nevertheless, for a sufficiently strong and spatially confined magnetic field, one obtains an appreciable production rate, due to the fact that the Fourier spectrum of a confined field is rather slowly decreasing with the wave number (typically, as a power). As a consequence, Fourier components with $`\left|\mathbf{k}\right|\gg 1/L`$ can play a significant role.
This paper is organized as follows. In the next section, we obtain a general expression of the cross section for the process $`p+BX`$, where $`B`$ stands for an external magnetic field. We also discuss the approximations one can make in order to simplify the calculation. The subsequent section, 3, contains an evaluation of the interaction rate for random magnetic fields: we believe that this serves as a first model for energy loss in the chaotic fields present in typical astrophysical environments. Two situations are considered in detail: an isotropic and a cylindrically symmetric probability distribution of the random field.
While an isotropic environment largely serves to illustrate the physical features of the process in a simple context, it is also potentially applicable to a situation in which the size of the magnetic field is substantially larger than the interaction mfp. The calculation of the interaction rate for a cylindrically symmetric environment is relevant for jets, for instance, emerging from an AGN or GRB. The results are discussed in sec. 4.
## 2 General expression of the cross section in an external magnetic field.
The calculation described here is an elementary application of the optical theorem. The amplitude of a proton interacting with an external field and producing a final state $`|X`$ is given by
$$T(p+BX)=\int d^4x\,A_\mu (x)\,\langle X\left|j^\mu (x)\right|p\rangle $$
(1)
Here, $`A_\mu (x)`$ stands for the vector potential of the external field and $`j_\mu (x)`$ is the density of the electromagnetic current. Squaring (1) and summing over the final states $`|X\rangle `$, one expresses the cross section in terms of the current correlation function. This is textbook material, see for instance . We also assume that the external magnetic field is static. This assumption simplifies the calculation. From a physical point of view, it is justifiable even if astrophysical objects of bulk Lorentz factors of the order of a few hundred are considered: the protons we are interested in have Lorentz factors which are ten or eleven orders of magnitude larger.
On writing for the Fourier transform of the vector potential
$$A_\mu (q)=\delta \left(q_0\right)a_\mu \left(\mathbf{q}\right)$$
(2)
and using a gauge in which $`A_0=0`$, one gets:
$$\sigma =\frac{4\pi ^2\alpha m}{E}\int d^3q\,a_i^{*}(\mathbf{q})a_k(\mathbf{q})W_{ik}$$
(3)
In equation (3) $`m`$ and $`E`$ stand for the mass and energy of the incident proton, respectively, and $`W_{ik}`$ is the spatial part of the standard polarization tensor:
$$W_{\mu \nu }=\frac{F_1}{m}\left(-g_{\mu \nu }+\frac{q_\mu q_\nu }{q^2}\right)+\frac{F_2}{\nu }\left(p_\mu -q_\mu \frac{(pq)}{q^2}\right)\left(p_\nu -q_\nu \frac{(pq)}{q^2}\right)$$
(4)
The notation is standard: $`p`$ and $`q`$ are the four momenta of the incident proton and virtual photon, respectively, and $`\nu =(pq)/m`$.
A further simplification is possible due to the fact that the protons we are interested in are extremely relativistic and the average value of the momentum of the virtual photon is of the order of $`1/L`$. In order to motivate this simplification, we Lorentz transform to the rest frame of the proton. In that frame the components of the four momentum $`q`$ and the field quantities are distinguished by a prime. Components perpendicular to the direction of motion are denoted by capital letters; longitudinal components by a subscript $`l`$. Since we have $`v\simeq 1`$, the transformation formulae are:
$$q_0^{\prime }\simeq -\frac{1}{2}\mathrm{exp}(y)q_l,\qquad q_l^{\prime }\simeq \frac{1}{2}\mathrm{exp}(y)q_l,\qquad q_A^{\prime }=q_A.$$
(5)
$`B_l^{\prime }=B_l,\qquad E_l^{\prime }=E_l=0,`$
$`B_A^{\prime }\simeq {\displaystyle \frac{1}{2}}\mathrm{exp}(y)B_A,\qquad E_A^{\prime }\simeq {\displaystyle \frac{1}{2}}\mathrm{exp}(y)\epsilon _{AB}B_B.`$ (6)
In the last two equations, $`y`$ stands for the rapidity.
As a consequence, apart from corrections of $`O(\mathrm{exp}(-y))`$,
$$q^2\simeq 0,\qquad \mathbf{E}\cdot \mathbf{B}\simeq 0,\qquad \mathbf{E}^2-\mathbf{B}^2\simeq 0.$$
(7)
In a reference frame comoving with the proton, the magnetic field appears as a stream of (almost real) photons: consequently, the contribution of the structure function $`F_2`$ to the cross section is negligibly small.
One can then express the cross section on the external magnetic field in terms of the photoproduction cross section, $`\sigma _\gamma `$, viz.
$$\sigma \simeq \frac{1}{E}\int d^3q\,a_i(\mathbf{q})a_j(\mathbf{q})^{*}\,\frac{(\mathbf{p}\cdot \mathbf{q})}{\mathbf{q}^2}\,\sigma _\gamma \left(\delta _{ij}\mathbf{q}^2-q_iq_j\right)$$
(8)
One readily recognizes that eq. (8) is equivalent to a Weizsäcker-Williams approximation to the cross section. Because of the presence of a transverse projector, that expression is a manifestly gauge invariant one. It is worth noticing that in the Weizsäcker-Williams approximation the expression of the cross section is independent of the mass of the projectile. Hence the same expression can be used to describe e.g. photon induced reactions in a magnetic field.
Finally, one considers the evaluation of $`\sigma _\gamma `$. Due to the fact that the photoabsorption cross section is to be evaluated near the pion production threshold, to a good approximation one can saturate it by the contribution of the $`\mathrm{\Delta }`$ resonance. A narrow resonance approximation is sufficiently accurate. Hence we put
$$\sigma _\gamma \simeq \sigma _0\,\delta \left(s-m_\mathrm{\Delta }^2\right),$$
(9)
where the dimensionless quantity $`\sigma _0`$ is the integral of the pion photoproduction cross section across the resonance,
$$\sigma _0=\int _{(res)}\sigma (s)\,ds.$$
Using a standard invariant Breit-Wigner fit and the data available, one gets $`\sigma _0\approx 0.3`$.
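A rough numerical cross-check (our sketch; the effective resonant peak height of 0.3 mb is an assumed input, not a value quoted in the text): for a narrow Breit-Wigner in $`s`$ with width $`\mathrm{\Gamma }m_\mathrm{\Delta }`$, the integral across the resonance is $`\pi \sigma _{\mathrm{peak}}\mathrm{\Gamma }m_\mathrm{\Delta }`$.

```python
import math

M_DELTA, GAMMA = 1.232, 0.115      # Delta mass and width in GeV
SIGMA_PEAK_MB  = 0.3               # assumed effective resonant peak height, mb
MB_TO_GEV2     = 1.0 / 0.3894      # 1 mb = 2.568 GeV^-2 (hbar*c conversion)

# Integral over s of sigma_peak * (Gamma m)^2 / ((s - m^2)^2 + (Gamma m)^2).
sigma0 = math.pi * SIGMA_PEAK_MB * MB_TO_GEV2 * GAMMA * M_DELTA
print(f"sigma_0 = {sigma0:.2f} (dimensionless)")   # ~0.34, consistent with 0.3
```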
## 3 Random magnetic fields
We model the chaotic magnetic fields present in the astrophysical environments of interest by means of a Gaussian random field of zero mean. The central object in the theory of random fields is the generating functional of the correlation functions. In the case of a Gaussian field, only the second cumulant is different from zero. We write the generating functional as follows.
$$Z[j]=\int \mathcal{D}\mathbf{a}\,\mathrm{exp}\left[-S+i\int d^3k\,j_r\left(\mathbf{k}\right)a_r\left(\mathbf{k}\right)\right]$$
(10)
In eq. (10) $`\mathbf{j}`$ stands for an external source, $`\mathbf{a}`$ is the Fourier transform of the vector potential, cf. eq. (2). The functional $`S`$ is a generalized entropy; for a Gaussian field it is a (gauge invariant) quadratic functional of $`\mathbf{a}`$. We write $`S`$ as follows.
$$S=\int d^3k\,a_i^{*}(\mathbf{k})a_j(\mathbf{k})\left(\delta _{ij}-\frac{k_ik_j}{\mathbf{k}^2}\right)\frac{\mathbf{k}^2}{4\pi L^3\langle B^2\rangle }\left(1+L^2k_rk_su_{rs}\right)^2.$$
(11)
In eq. (11), $`L`$ stands for the root mean square correlation length (the average is taken over directions). The tensor $`u_{rs}`$ characterizes the directional distribution of the probability density. For a general, arbitrarily anisotropic distribution, $`u_{rs}`$ has 6 independent components: e.g. the three, mutually orthogonal, principal correlations and the three angles describing the orientation of the principal correlations. In what follows, however, we consider environments of high symmetry; consequently, fewer parameters are sufficient. The factor $`\left(1+L^2k_ru_{rs}k_s\right)^2`$ ensures an exponential decrease of the correlation function with distance, cf. ref. .
In considering particle production in a random field, one has to replace factors such as $`a_i^{*}a_j`$ in eq. (3) and subsequent ones by their expectation values in the ensemble defined by eqs. (10) and (11).
We now consider two special cases of the ensembles in order to calculate particle production cross sections.
### 3.1 Isotropic ensemble.
This ensemble is characterized by the tensor $`u_{ij}`$ in eq. (11) being the unit tensor, $`u_{ij}=\delta _{ij}`$. One finds:
$$\langle a_i(\mathbf{q})a_j(\mathbf{q}^{\prime })^{*}\rangle =\delta ^3\left(\mathbf{q}-\mathbf{q}^{\prime }\right)\left(\delta _{ij}-\frac{q_iq_j}{\mathbf{q}^2}\right)\frac{4\pi \langle B^2\rangle L^3}{\mathbf{q}^2}\left(1+L^2\mathbf{q}^2\right)^{-2}$$
(12)
In the expression of the cross section, however, one finds a factor $`\langle a_i(\mathbf{q})a_j^{*}(\mathbf{q})\rangle `$, which is infinite, see the last equation. This is due to the fact that we idealized a region of non-vanishing magnetic field by one of infinite extent, albeit with an exponentially decreasing correlation function. In order to correct for the inconsistency caused by this idealization, we replace the delta function of vanishing argument by a quantity proportional to the volume, viz.
$$\delta ^3(0)\to \frac{1}{8\pi ^3}V.$$
(This is a consistent procedure provided the density of levels can be approximated by the Rayleigh-Jeans formula, as done here. In the problem under consideration, the conditions for the validity of that approximation are satisfied.) We take $`V`$ to be the correlation volume; thus, in the isotropic case, $`V=4\pi L^3/3`$; clearly, different geometries give rise to different expressions of the correlation volume. The important fact is, however, that the incident flux is $`\propto 1/V`$. Thus the reciprocal mfp is independent of the choice of the volume.
With this and using eq. (9) the cross section can be evaluated in terms of elementary functions. Quoting directly the inverse of the absorption mfp which is the relevant quantity for the applications, one finds:
$$\frac{1}{\lambda _a}=\frac{L}{\pi }\langle B^2\rangle \frac{\sigma _0}{m_\mathrm{\Delta }^2-m^2}f(w).$$
(13)
Here $`f`$ is a function of the dimensionless variable, $`w=\left(m_\mathrm{\Delta }^2-m^2\right)L/2E`$. Its explicit form is:
$$f(w)=w^2\left[\mathrm{ln}\left(\frac{1+w^2}{w^2}\right)-\frac{1}{1+w^2}\right]$$
(14)
However, it was pointed out in the Introduction that $`1/L`$ is a small momentum. As a consequence, we only need eq. (14) for large values of $`w`$. In that case, eq. (14) simplifies to:
$$f(w)\simeq \frac{1}{2w^2}\qquad (w\gg 1).$$
Hence, the expression of $`\lambda _a`$ becomes:
$$\frac{1}{\lambda _a}\simeq \frac{2\sigma _0}{L\pi }\langle B^2\rangle \frac{E^2}{\left(m_\mathrm{\Delta }^2-m^2\right)^3}$$
(15)
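As a quick numerical check of eq. (14) and of the large-$`w`$ limit used to obtain eq. (15):

```python
import math

def f(w):
    """Eq. (14): f(w) = w^2 [ ln((1 + w^2) / w^2) - 1 / (1 + w^2) ]."""
    return w**2 * (math.log((1.0 + w**2) / w**2) - 1.0 / (1.0 + w**2))

for w in (1.0, 10.0, 100.0):
    print(f"w = {w:6.1f}:  f(w) = {f(w):.3e},  1/(2w^2) = {1/(2*w**2):.3e}")
```

Already at $`w=10`$ the asymptotic form is accurate to about one percent.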
### 3.2 Cylindrically symmetric ensemble
This geometry is a more realistic one. In particular, an astrophysical jet as emerging, for instance, from a GRB or an AGN can be approximated by a cylindrical geometry at the early stages of expansion. (At early stages, the lateral expansion is negligibly small compared to the longitudinal one.) Approximating the jet by one of cylindrical geometry means that the lateral expansion is neglected altogether. It has been known for a long time that this is an acceptable approximation in the initial stages of expansion of a relativistic fluid.
In this case, the tensor $`u_{ij}`$ in eq. (11) effectively depends on one parameter only. It is convenient to introduce the longitudinal and transverse correlation lengths with respect to the axis of the cylinder and an anisotropy parameter, $`\alpha `$, such that
$$L_T^2=\alpha L^2,\qquad L_L^2=L^2(1-\alpha ).$$
In practice, $`\alpha \ll 1`$, say $`\alpha \simeq 0.1`$ or so (E. Vishniac, private communication). Using this parametrization, we have:
$$\left(1+L^2q_iu_{ij}q_j\right)=\left(1+L^2\left(\alpha \mathbf{q}_T^2+(1-\alpha )q_L^2\right)\right)$$
(16)
In eq. (16), $`\mathbf{q}_T`$ and $`q_L`$ stand for the momentum components perpendicular and parallel to the axis of the cylinder, respectively.
In the case of such a geometry, the integral occurring in eq. (8) cannot be calculated in a closed form. In essence, this is due to the fact that the expression of the absorption cross section now contains two directions: that of the incident proton and the axis of the cylinder. However, instead of resorting to a numerical evaluation, we observe that the variable $`w`$ is large and the absorption can only be significant if the angle between the incident proton and the axis of the cylinder is not too large: efficient absorption requires a coherent magnetic field.
These simplifications allow a calculation of the absorption mfp in a closed form. A somewhat tedious, but elementary calculation leads to the result:
$$\frac{1}{\lambda _a}\simeq \frac{4\sigma _0}{\pi L}F(\mathrm{\Theta })\frac{E^2\langle B^2\rangle }{\left(m_\mathrm{\Delta }^2-m^2\right)^3}$$
(17)
The factor $`F(\mathrm{\Theta })`$ is given by the expression:
$$F(\mathrm{\Theta })\simeq \left(\mathrm{cos}\mathrm{\Theta }\right)^3\mathrm{ln}\left(\frac{(\mathrm{cos}\mathrm{\Theta })^2}{\alpha }-1\right)$$
(18)
In eq. (18) $`\mathrm{\Theta }`$ stands for the angle between the incident proton and the axis of the cylinder. Obviously, this expression holds only if the angle $`\mathrm{\Theta }`$ is small. From the physical point of view, however, this is not a serious limitation: due to the presence of the factor $`\left(\mathrm{cos}\mathrm{\Theta }\right)^3`$, the mean free path becomes very large unless the angle of incidence with respect to the axis of the cylinder is small.
## 4 Discussion
Our approach has the advantage that it does not depend on the details of the production process, since it is based on the use of the optical theorem. However, its limitation is that the absorption cross section is obtained to lowest order in the fine structure constant. This poses no problem as long as $`\sqrt{\langle B^2\rangle }\stackrel{<}{\sim }m^2/e=B_{\text{crit}}`$, where $`m`$ is the mass of a charged particle involved in the process. For electrons and light quarks (u,d), the value of $`B_{\text{crit}}`$ is around $`10^{14}`$Gauss. In magnetic fields of this order of magnitude, radiative corrections and pair production become important. To our knowledge, no results are available for such field strengths. Existing calculations, such as Erber's, assume a homogeneous magnetic field. Calculations of this type can be used to estimate energy losses as long as the magnetic fields are approximately homogeneous on the scale of the Larmor radius of the propagating charged particle. For realistic circumstances, however, this is hardly the case. Thus, the question about the energy loss of charged particles in astrophysically important magnetic fields approaching the critical value is still an open one. Our formulae, however, are expected to give at least a qualitative insight into the question of absorption even for near-critical fields.
Our results show that the circumstances needed for the applicability of eq. (15) are hardly met: one needs magnetic fields with a coherence length substantially in excess of the Larmor radius at high energies. Nevertheless, that equation is an instructive one: due to its simplicity, the general features of the absorption cross section are easily understood.
From the physical point of view, eq. (17) is more interesting. In order to assess the importance of the process discussed it is worth converting eq. (17) into a form permitting numerical estimates. The value of $`\sigma _0`$ has been quoted before; the rest of the numbers are also taken from ref. . One obtains:
$$\frac{1}{\lambda _a}\simeq \frac{5.5}{L}F\left(\mathrm{\Theta }\right)\left(\frac{E}{10^{20}\text{eV}}\right)^2\frac{\langle B^2\rangle }{\left(10^9\text{Gauss}\right)^2}$$
(19)
(We used the usual conversion factor between the natural and conventional units of the magnetic field, viz. $`B/1(\text{MeV})^2=1.9\times 10^{-14}B/1\text{Gauss}`$.)
We find that a paraxially propagating proton of energy $`10^{20}`$eV traversing a relatively modest magnetic field of $`10^9`$Gauss has an absorption mfp of about $`(1/5)^{\text{th}}`$ the size of the magnetic field region.
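The coefficient in eq. (19) and the mfp just quoted follow directly from eq. (17) with $`\sigma _0\approx 0.3`$ and the conversion factor above (our numerical cross-check):

```python
import math

SIGMA0   = 0.3
M_P, M_D = 0.938, 1.232                  # GeV
GAUSS    = 1.9e-14 * 1e-6                # 1 Gauss in GeV^2 (1.9e-14 MeV^2)

def inv_mfp_times_L(E_eV, B_gauss, F_theta=1.0):
    """L/lambda_a from eq. (17): (4 sigma_0/pi) F E^2 <B^2> / (m_D^2-m_p^2)^3."""
    E  = E_eV * 1e-9                     # proton energy in GeV
    B2 = (B_gauss * GAUSS)**2            # <B^2> in GeV^4
    return 4.0 * SIGMA0 / math.pi * F_theta * E**2 * B2 / (M_D**2 - M_P**2)**3

r = inv_mfp_times_L(1e20, 1e9)
print(f"L/lambda_a = {r:.2f}  ->  lambda_a = L/{r:.1f}")
# ~5.3, matching the 5.5 of eq. (19) up to rounding of the conversion factor.
```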
In a collision at the relevant energies, on the average the nucleon and the produced pion in the final state share the incident energy equally. As a consequence, using the continuous energy loss approximation, the energy loss per unit path length is:
$$\frac{dE}{dx}=-\epsilon \frac{E}{\lambda }$$
(20)
In the last equation, $`\epsilon `$ stands for the fractional energy loss of a nucleon (in the laboratory system) due to pion production. Assuming as we do throughout this paper that the pion production cross section is dominated by the $`\mathrm{\Delta }`$ resonance, one gets,
$$\epsilon \simeq \frac{m_\pi ^2}{2m_\mathrm{\Delta }m}$$
Due to the fact that $`1/\lambda \propto E^2`$, the energy loss per unit path length grows as $`E^3`$.
In fact, by inserting the expression for the mfp given by eq. (19) into eq. (20), the equation for the energy loss is readily integrated. We exhibit the result for the energy loss of paraxial protons ($`F(\mathrm{\Theta })\simeq 1`$) over one correlation length, $`L`$. We get:
$$\frac{E(x)}{E(0)}\simeq \left[1+0.1\left(\frac{E(0)}{10^{20}\text{eV}}\right)^2\frac{\langle B^2\rangle }{\left(10^9\text{Gauss}\right)^2}\frac{x}{L}\right]^{-1/2}$$
(21)
We used the mass values listed in ref. in order to arrive at eq. (21).
We conclude that the mechanism described in this paper appears to be a major obstacle to accelerating protons up to energies of the order of $`10^{19}`$eV or more by a conventional Fermi acceleration mechanism. One notices for instance that a proton of $`E=10^{20}`$eV injected into a field of $`\sqrt{\langle B^2\rangle }=10^{10}`$Gauss loses about 70% of its initial energy over a correlation length.
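Both numbers quoted in this section follow from eq. (21) directly (a one-line check):

```python
def energy_ratio(E0_eV, B_gauss, x_over_L):
    """E(x)/E(0) from eq. (21)."""
    return (1.0 + 0.1 * (E0_eV / 1e20)**2 * (B_gauss / 1e9)**2 * x_over_L) ** -0.5

for B in (1e9, 1e10):
    r = energy_ratio(1e20, B, 1.0)
    print(f"B = {B:.0e} G: E(L)/E(0) = {r:.2f}  (loss = {1 - r:.0%})")
```

At $`10^9`$ Gauss the loss over one correlation length is only about 5%, while at $`10^{10}`$ Gauss it reaches the 70% quoted above.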
This adds to the puzzle of the highest energy cosmic rays: it is known that particles of energy about $`10^{20}`$eV arrive at the Earth and they give rise to extensive air showers. At the same time, it appears to be increasingly difficult to find an efficient mechanism for producing them at the usually suspected sites, for instance in active galactic nuclei or gamma ray bursters.
# Effect of the W-term for a t-U-W Hubbard ladder
## Abstract
Antiferromagnetic and $`d_{x^2-y^2}`$-pairing correlations appear delicately balanced in the 2D Hubbard model. Whether doping can tip the balance to pairing is unclear and models with additional interaction terms have been studied. In one of these, the square of a local hopping kinetic energy $`H_W`$ was found to favor pairing. However, such a term can be separated into a number of simpler processes and one would like to know which of these terms are responsible for enhancing the pairing. Here we analyze these processes for a 2-leg Hubbard ladder.
The interplay of antiferromagnetism and $`d_{x^2-y^2}`$ superconductivity in the 2D Hubbard model remains an open question. Weak coupling calculations originally suggested that doping could drive the ground state from an antiferromagnet to a $`d_{x^2-y^2}`$ superconductor. However, numerical Monte Carlo calculations have found only short range $`d_{x^2-y^2}`$ pairing correlations. This may be due to the finite lattice sizes that have been studied, the difficulty in attaining low temperature results or possibly that the $`t`$-$`U`$ Hubbard model lies just outside the superconducting parameter regime.
One approach to this problem is then to add various terms to the basic Hubbard model and see what it takes to drive it into a superconducting state. In this spirit, a recent Monte Carlo study added a term $`H_W`$, involving the square of the local hopping kinetic energy around a site,
$$H_W=W\sum _iK_i^2$$
(1)
with $`K_i`$ equal to the local kinetic energy involving site $`i`$ and its near neighbors at $`i+\delta `$,
$$K_i=\sum _{\delta ,\sigma =\uparrow ,\downarrow }\left(c_{i,\sigma }^{\dagger }c_{i+\delta ,\sigma }+c_{i+\delta ,\sigma }^{\dagger }c_{i,\sigma }\right).$$
(2)
With $`H_W`$ added to the 2D $`t`$-$`U`$ Hubbard model, the half-filled system exhibited a transition from an antiferromagnetic phase to a $`d_{x^2-y^2}`$-pairing phase at a critical value of $`W`$. Separating $`H_W`$ into various pieces, it was found that it contained one-electron hopping terms, exchange interactions and triplet and singlet four particle scattering terms. One would like to understand which of these terms or what combination of the terms are responsible for enhancing superconductivity. Unfortunately, because of the fermion sign problem it has not been possible to carry out Monte Carlo calculations for the individual terms. However, density matrix renormalization group (DMRG) techniques can be used to study the individual pieces of the $`H_W`$ interaction. Here we describe the results of such a study for a 2-leg ladder. For such a system, we can determine the effect of the individual terms for both the half-filled and the doped system. As we will discuss in the conclusion, it is important to note that the half-filled 2-leg ladder has a spin gap which distinguishes it from the 2D half-filled Hubbard model. Nevertheless, it is instructive to see what effect the various parts of $`W`$ have on the pairing correlations for a ladder.
We begin with the usual Hubbard Hamiltonian
$$H_U=-t\sum _{<ij>,\sigma =\uparrow ,\downarrow }\left(c_{i,\sigma }^{\dagger }c_{j,\sigma }+c_{j,\sigma }^{\dagger }c_{i,\sigma }\right)+U\sum _in_{i\uparrow }n_{i\downarrow }$$
(3)
with a one electron hopping kinetic energy and an onsite Coulomb interaction $`U`$. The sum $`<ij>`$ is over all pairs of nearest neighbors. We will measure all energies in units of $`t`$. We then add the interaction (1) with $`W`$ positive. Monte Carlo calculations for a 2D half-filled system with the Hamiltonian
$$H=H_U+H_W$$
(4)
find a quantum phase transition between an antiferromagnetic Mott insulator and a $`d_{x^2-y^2}`$-wave superconducting phase when $`W`$ is increased to a value of order 0.35. However, the 2D Hubbard model at half-filling has an antiferromagnetic ground state while a 2-leg ladder is characterized by a spin gap. Thus, as we will see, the behavior of a two-leg ladder as $`W`$ is turned on can be different.
It is convenient to decompose the interaction $`H_W`$ as follows
$$H_W=\sum _iH_{W_i}$$
(5)
with
$`H_{W_1}=4W_1{\displaystyle \sum _i}(n_{i\uparrow }+n_{i\downarrow })`$ (6a)
$`H_{W_2}=W_2{\displaystyle \sum _{i,\delta ,\delta ^{\prime }}}{\displaystyle \sum _\sigma }c_{i+\delta ,\sigma }^{\dagger }c_{i+\delta ^{\prime },\sigma }`$ (6b)
$`H_{W_3}=W_3{\displaystyle \sum _{i,\delta ,\delta ^{\prime }}}{\displaystyle \sum _\sigma }\left(c_{i,\sigma }^{\dagger }c_{i,-\sigma }^{\dagger }c_{i+\delta ^{\prime },-\sigma }c_{i+\delta ,\sigma }+\text{h.c.}\right)`$ (6c)
$`H_{W_4}=+W_4{\displaystyle \sum _{i,\delta ,\delta ^{\prime }}}\left(T_{i\delta ^{\prime },1}^{\dagger }T_{i\delta ,1}+T_{i\delta ^{\prime },-1}^{\dagger }T_{i\delta ,-1}+T_{i\delta ^{\prime },0}^{\dagger }T_{i\delta ,0}\right)`$ (6d)
$`H_{W_5}=-W_5{\displaystyle \sum _{i,\delta }}\mathrm{\Delta }_{i\delta }^{\dagger }\mathrm{\Delta }_{i\delta }`$ (6e)
$`H_{W_6}=-W_6{\displaystyle \sum _{i,\delta \ne \delta ^{\prime }}}\mathrm{\Delta }_{i\delta }^{\dagger }\mathrm{\Delta }_{i\delta ^{\prime }}.`$ (6f)
Here $`T_{i\delta ,1}^{\dagger }=c_{i,\uparrow }^{\dagger }c_{i+\delta ,\uparrow }^{\dagger }`$, $`T_{i\delta ,-1}^{\dagger }=c_{i,\downarrow }^{\dagger }c_{i+\delta ,\downarrow }^{\dagger }`$, $`T_{i\delta ,0}^{\dagger }=\left(c_{i,\uparrow }^{\dagger }c_{i+\delta ,\downarrow }^{\dagger }+c_{i,\downarrow }^{\dagger }c_{i+\delta ,\uparrow }^{\dagger }\right)/\sqrt{2}`$ are triplet pair creation operators, and $`\mathrm{\Delta }_{i\delta }^{\dagger }=\left(c_{i,\uparrow }^{\dagger }c_{i+\delta ,\downarrow }^{\dagger }-c_{i,\downarrow }^{\dagger }c_{i+\delta ,\uparrow }^{\dagger }\right)/\sqrt{2}`$ is a singlet pair creation operator. If one sets all the $`W_i`$ equal to $`W`$, the original $`H_W`$ interaction (1) is recovered. Here we will examine the effect of the individual terms. $`H_{W_1}`$ renormalizes the chemical potential and $`H_{W_2}`$ contains next-nearest and next-next-nearest neighbor one-electron hopping terms. $`H_{W_3}`$ scatters an onsite singlet to neighboring sites while $`H_{W_4}`$, which comes with a positive sign, is a triplet scattering term. Finally $`H_{W_5}`$ and $`H_{W_6}`$ involve singlet pairs. It had been thought for the 2D system that the relevant terms for the quantum transition were $`H_{W_5}`$ and $`H_{W_6}`$.
Here, in order to determine the effects of the individual terms, we have studied the model on a two-leg ladder using DMRG techniques. All the runs were done on $`2\times 32`$ ladders keeping up to 800 states leading to a maximum discarded weight of $`10^{-6}`$. We calculated the singlet pairing correlation function $`D_{\alpha \beta }(\mathrm{\ell })`$ defined as
$`D_{xx}(\mathrm{\ell })=\langle \mathrm{\Delta }_x(i+\mathrm{\ell })\mathrm{\Delta }_x^{\dagger }(i)\rangle `$ (7)
$`D_{xy}(\mathrm{\ell })=\langle \mathrm{\Delta }_x(i+\mathrm{\ell })\mathrm{\Delta }_y^{\dagger }(i)\rangle `$ (8)
$`D_{yy}(\mathrm{\ell })=\langle \mathrm{\Delta }_y(i+\mathrm{\ell })\mathrm{\Delta }_y^{\dagger }(i)\rangle `$ (9)
where $`\mathrm{\Delta }_\alpha (i)=c_{i,\uparrow }c_{i+\delta _\alpha ,\downarrow }-c_{i,\downarrow }c_{i+\delta _\alpha ,\uparrow }`$, $`\delta _x=(1,0)`$ and $`\delta _y=(0,1)`$. For clarity, in the following we show the rung-rung correlation function $`D_{yy}(\mathrm{\ell })`$. $`D_{xx}(\mathrm{\ell })`$ and $`D_{yy}(\mathrm{\ell })`$ were always positive while $`D_{xy}(\mathrm{\ell })`$ was negative, corresponding to a $`d_{x^2-y^2}`$-like structure.
The results for the half-filled case with $`U=4`$ and $`W_i=0`$ or 0.25 are shown in Fig. 1. In the plot of $`D_{yy}(\mathrm{\ell })`$ we have kept $`\mathrm{\ell }\le 12`$, with the measurements made in the central portion of the ladder. In this region the effects of the open ends are negligible. We clearly see in part (a) that when all $`W_i`$ are turned on there is an enhancement of the pairing (as found in the 2D Monte-Carlo simulations). However, if we only turn on $`W_5`$, there is a suppression of pairing. For the 2-leg ladder, this can be understood by noting that $`H_{W_5}`$ can be written as an antiferromagnetic exchange interaction
$$H_{W_5}=2W_5\sum _{<ij>}\left(\mathbf{S}_i\cdot \mathbf{S}_j-\frac{1}{4}n_in_j\right).$$
(10)
Now as one knows, a 2-leg Heisenberg ladder has a spin gap $`\mathrm{\Delta }_s\simeq 0.51J`$. Thus the effect of $`H_{W_5}`$ at half-filling is to increase the spin gap by an amount of order $`W_5`$, and this leads to an exponentially more rapid decay of the pairing correlations.
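The bond-operator identity behind Eq. 10, namely $`\mathrm{\Delta }_{i\delta }^{\dagger }\mathrm{\Delta }_{i\delta }=n_in_j/4-\mathbf{S}_i\cdot \mathbf{S}_j`$, can be verified by brute force on a two-site Hilbert space. The sketch below is our own consistency check (a Jordan-Wigner construction in numpy, independent of the DMRG code):

```python
import numpy as np

I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])
A  = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator |1> -> |0>

def kron_all(ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

def c(k, n=4):
    """Jordan-Wigner annihilation operator for fermionic mode k out of n."""
    return kron_all([Z] * k + [A] + [I2] * (n - k - 1))

ciu, cid, cju, cjd = (c(k) for k in range(4))  # modes: i_up, i_dn, j_up, j_dn
dag = lambda M: M.conj().T

def site_ops(cu, cd):
    n  = dag(cu) @ cu + dag(cd) @ cd
    Sx = 0.5 * (dag(cu) @ cd + dag(cd) @ cu)
    Sy = -0.5j * (dag(cu) @ cd - dag(cd) @ cu)
    Sz = 0.5 * (dag(cu) @ cu - dag(cd) @ cd)
    return n, (Sx, Sy, Sz)

ni, Si = site_ops(ciu, cid)
nj, Sj = site_ops(cju, cjd)
SS = sum(a @ b for a, b in zip(Si, Sj))

Delta = (cjd @ ciu - cju @ cid) / np.sqrt(2.0)   # annihilates the bond singlet
lhs = dag(Delta) @ Delta
rhs = ni @ nj / 4.0 - SS
print("Delta^dag Delta == n_i n_j/4 - S_i.S_j :", np.allclose(lhs, rhs))
```

Since the sum over $`i,\delta `$ in (6e) counts every bond twice, $`-W_5\mathrm{\Sigma }\mathrm{\Delta }^{\dagger }\mathrm{\Delta }`$ indeed reduces to Eq. 10 with an exchange constant $`J=2W_5`$.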
Fig. 1 (b) shows the effect of the other terms. $`H_{W_1}`$ has no effect as expected since it just renormalizes the chemical potential and we have fixed $`n=1`$. The additional one electron hopping term $`H_{W_2}`$ leads to only a small change in the pairing. However $`H_{W_3}`$, which scatters an onsite singlet to neighboring sites, enhances the pairing despite the presence of $`U`$, which lowers the double occupancy.
We have performed other calculations including only $`H_{W_3}`$ which show that for larger $`U`$ this enhancement is suppressed as one would expect (see Fig. 2). Nevertheless, for $`U/t=4`$ where the previous 2D Monte Carlo calculations were run, $`H_{W_3}`$ contributes to enhancing the pairing. $`H_{W_4}`$ also leads to enhanced singlet pairing. Note that it has a positive coefficient which suppresses triplet pairing, leaving more phase space for singlet pairing. Finally $`H_{W_6}`$, which describes singlet pair hopping from $`(i,\delta )`$ to $`(i,\delta ^{\prime })`$, also enhances the pairing. We should point out that although there is an enhancement in the pairing, this enhancement is in fact relatively small for the two-leg ladder for reasonable values of $`W`$. As $`U`$ is increased, so that the $`W_3`$ term is reduced, the suppression of the pairing by the $`W_5`$ term becomes dominant. This is clearly seen in Fig. 3, where we show the pairing correlation function for the half-filled ladder with all $`W_i=0.25`$ and various values of $`U`$.
We now turn to the doped case and consider the same lattice with $`U=4`$ and 8 holes corresponding to a filling $`n=0.875`$. Fig. 5 shows results for $`D_{yy}(\mathrm{\ell })`$. We clearly see in part (a) that in this case the inclusion of $`H_{W_5}`$ enhances the pairing while in part (b) we see that all of the remaining terms are essentially irrelevant. Thus the $`W_5`$ term, which corresponds to adding a near neighbor exchange $`J=2W_5`$, enhances the pairing correlation in the doped system.
Thus we conclude that while $`H_W`$ with $`U=4t`$ can slightly enhance the pairing correlations of a half-filled ladder, this is in fact a small effect. Furthermore, for large values of $`U/t`$, $`H_W`$ leads to a suppression of the half-filled pairing correlations. This can be understood in terms of the dominance of $`H_{W_5}`$, which represents an effective antiferromagnetic exchange increasing the spin gap and suppressing the pairing correlations. However, for the doped ladder, $`W_5`$ acts to enhance the pairing correlation since it increases the effective exchange interaction and the pair binding energy. Clearly, in light of the 2D results, where it was found that $`H_W`$ could lead at half-filling to a $`d_{x^2-y^2}`$ pairing state, one would like to extend the DMRG calculations to a 3-leg ladder which has a vanishing spin gap at half-filling.
We wish to thank D. Duffy and M. Fischer for helpful discussions. SD acknowledges support from the Swiss National Science Foundation. DJS and SRW wish to acknowledge the support from the US Department of Energy under Grant No. DE-FG03-85ER45197.
# THE LAST EIGHT MINUTES OF A PRIMORDIAL BLACK HOLE
NUC-MINN-99/15-T, November 1999
## Acknowledgements
I am grateful to G. Amelino-Camelia for many discussions on Hawking radiation and to Paul Ellis, Larry McLerran and Yong Qian for comments on the manuscript. This work was supported by the US Department of Energy under grant DE-FG02-87ER40328. |
# Energetic particle acceleration in shear layers
## 1 Introduction
The first order Fermi acceleration in shock waves and the second order acceleration in turbulent MHD media are widely considered as the main sources of cosmic ray particles in astrophysical conditions. In the present paper we consider an alternative mechanism involving particle acceleration at velocity shear layers formed in non-uniform plasma flows, e.g. in the magnetosheath enveloping the Earth magnetosphere or at the interface between a relativistic jet and its ambient medium. Until now this complicated physical phenomenon has been discussed only occasionally in the literature. The process was introduced into consideration by Berezhko and collaborators in a series of papers in the early eighties (cf. Berezhko 1981, 1982a,b; Berezhko & Krymsky 1983; Bezrodnykh et al. 1984a,b, 1987; summarised in a review by Berezhko 1990). Much later an independent discussion of such processes acting in non-relativistic shear layers was presented by Earl et al. (1988), Jokipii et al. (1989) and Jokipii & Morfill (1990), and for relativistic tangential flow discontinuities by Ostrowski (1990; cf. also Berezhko 1990). A discussion of possible cosmic ray acceleration in mildly relativistic jets up to ultra-high energies, and of the consequences of such acceleration processes acting in ultra-relativistic ("milli-arc-second") jets, was presented in recent papers by Ostrowski (1998a,b; 1999). Below, we briefly discuss the main results obtained by the above authors.
## 2 Particle acceleration in a shear layer
A high energy particle scattered after crossing a shear flow layer can gain or lose energy. It is due to the velocity difference of the final scattering centre rest frame with respect to the particle starting point,
$$\mathrm{\Delta }\vec{U}=\frac{d\vec{U}}{dx}\mathrm{\Delta }x,$$
(1)
where we consider a 1-D situation with the flow velocity $`\vec{U}`$ directed along the $`z`$-axis, and the velocity gradient along the $`x`$-axis of the reference frame. In the absence of magnetic field $`\mathrm{\Delta }x=v_x\mathrm{\Delta }t`$ is a free path along the $`x`$-axis ($`\vec{v}=[v_x,v_y,v_z]`$ is the particle velocity). Let us assume for a while the scattering centres to be static with respect to the local plasma rest frame. Then, in the scattering centre rest frame the particle momentum changes with respect to the one in the starting point plasma rest frame at
$$\mathrm{\Delta }p=\frac{\mathrm{\Delta }\vec{U}\cdot \vec{p}}{v}.$$
(2)
For a mean $`\mathrm{\Delta }U\ll v`$ the full process can be described as the momentum diffusion with the diffusion coefficient
$$D=\frac{1}{2}\frac{\langle (\mathrm{\Delta }p)^2\rangle }{\mathrm{\Delta }t}=\frac{p^2}{15}\left(\frac{\partial U}{\partial x}\right)^2\tau ,$$
(3)
where the second equality comes from averaging over an isotropic particle distribution, $`\tau \equiv \langle \mathrm{\Delta }t\rangle `$ is the mean scattering time and the term $`(\partial U/\partial x)^2`$ is the shear scalar in the considered simple flow pattern.
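The 1/15 is the isotropic angular average of $`(\widehat{v}_x\widehat{v}_z)^2`$, as a quick Monte Carlo confirms (for exponentially distributed free times $`\langle \mathrm{\Delta }t^2\rangle =2\tau ^2`$, which cancels the 1/2 in the definition of $`D`$; this bookkeeping is our reading, not spelled out in the text):

```python
import numpy as np

rng = np.random.default_rng(7)
v = rng.normal(size=(1_000_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)     # isotropic unit vectors

est = np.mean((v[:, 0] * v[:, 2]) ** 2)
print(f"<(v_x v_z / v^2)^2> = {est:.5f}   (exact: 1/15 = {1/15:.5f})")
```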
In the presence of magnetic field the mean particle shift in the $`x`$ direction can be much smaller than $`v\mathrm{\Delta }t`$. Then the introduced $`\tau `$ parameter equals the ratio of the particle mean free path (shift) along the $`x`$-axis, $`\lambda _x`$, to the respective mean particle velocity $`v_x`$, $`\tau =\lambda _x/v_x`$. More exactly, the particle energy change and the $`\tau `$ parameter in Eq. 3 have to be derived by averaging over actual particle trajectories.
If the parameter $`\tau `$ scales with the particle momentum as $`\tau \propto p^\eta `$, then the acceleration process acting within the shear layer produces the high energy asymptotic phase-space distribution (Berezhko 1982)
$$f(p)\propto p^{-(3+\eta )}.$$
(4)
One should note that the considered acceleration process in a shear layer plays a substantial role if the mean plasma velocity difference at successive scatterings (Eq. 1) is larger than the turbulent velocities leading to the ordinary second-order Fermi acceleration (cf. Eq. 10 below).
## 3 Cosmic ray viscous acceleration in the Heliosphere
Because of insufficient information about the turbulent shear flow patterns within the astrophysical shear layers it is not possible to give a firm evaluation of the viscous acceleration rates. However, numerous measurements show enhancements of energetic particle populations in the Heliosphere, where the Solar Wind forms shear flows. In these cases one can estimate the highest energies of particles accelerated by the viscous mechanism and compare them to the measured ones. Within the Heliosphere such estimates were provided for the observed shear flow sites (cf. Berezhko 1990, Jokipii & Morfill 1990), including the sheared magnetosheaths of the Earth and Jupiter, interplanetary magnetic field sector boundaries, and boundaries of the high speed Solar Wind streams. In general the evaluated energies are within the observed ranges.
## 4 Particle acceleration at relativistic shear layers
The relativistic shear layers occur in a number of objects in space, including galactic and extragalactic relativistic jets and accretion discs near black holes. Below we consider the jet side boundary layer as an example of the relativistic shear flow.
For particles with sufficiently high energies the transition layer between the jet and the ambient medium can be approximated as a surface of a discontinuous velocity change, a tangential discontinuity ("td"). It becomes an efficient cosmic ray acceleration site provided the considered velocity difference $`U`$ is relativistic and a sufficient amount of turbulence is present in the medium. The situation with a highly relativistic jet ($`\mathrm{\Gamma }\equiv (1-U^2)^{-1/2}\gg 1`$) has not been quantitatively discussed till now and, thus, our present discussion is mostly based on the results derived for mildly relativistic flows by Ostrowski (1990, 1998a).
### 4.1 Energy gains
Any high energy particle crossing the jet boundary changes its energy, $`E`$, according to the respective Lorentz transformation. It can gain or lose energy. In the case of a uniform magnetic field the successive transformation at the next boundary crossing changes the particle energy back to its original value. However, in the presence of perturbations there is a positive mean energy change:
$$\langle \mathrm{\Delta }E\rangle =\eta _\mathrm{E}(\mathrm{\Gamma }-1)E.$$
(5)
The numerical factor $`\eta _\mathrm{E}`$ increases with the growing magnetic field perturbations and slowly decreases for increasing $`\mathrm{\Gamma }`$. For mildly relativistic flows, in the strong scattering limit particle simulations give values of $`\eta _\mathrm{E}`$ as substantial fractions of unity (Ostrowski 1990). For large $`\mathrm{\Gamma }`$ we assume the following scaling
$$\eta _\mathrm{E}=\eta _0\frac{2}{\mathrm{\Gamma }},$$
(6)
where $`\eta _0=\eta (\mathrm{\Gamma }=2)`$. In general $`\eta _0`$ depends also on particle energy. During the acceleration process, particle scattering is accompanied by the jet's momentum transfer into the medium surrounding it. On average, a single particle with the momentum $`p`$ transports across the jet's boundary the following amount of momentum:
$$\langle \mathrm{\Delta }p\rangle =\langle \mathrm{\Delta }p_\mathrm{z}\rangle =\eta _p(\mathrm{\Gamma }-1)Up,$$
(7)
where the $`z`$-axis of the reference frame is chosen along the flow velocity. The numerical factor $`\eta _p\eta _\mathrm{E}`$ and there acts a drag force per unit surface of the jet boundary and the opposite force at the medium along the jet, of the magnitude order of the accelerated particlesโ energy density. Independent of the exact value of $`\eta _\mathrm{E}`$, the acceleration process can proceed very fast due to the fact that average particle is not able to diffuse โ between the successive energizations โ far from the accelerating interface. One should remember that in the case of shear layer or tangential discontinuity acceleration and, contrary to the shock waves, there is no particle advection off the โaccelerating layerโ. Of course, particles are carried along the jet with the mean velocity of order $`U/2`$ and, for efficient acceleration, the distance travelled this way must be shorter than the jet breaking length.
The simulations (Ostrowski 1990) show that the discussed acceleration process can be quite rapid, with the time scale given in the ambient medium rest frame as
$$\tau _{\mathrm{td}}=\alpha \frac{r_\mathrm{g}}{c},$$
(8)
where $`r_\mathrm{g}`$ is a characteristic value of the particle gyroradius. For efficient scattering the numerical factor $`\alpha `$ can be as small as $`10`$ (Ostrowski 1990). One may note that the applied diffusion model involves particles with infinite diffusive trajectories between the successive interactions with the discontinuity. However, quite flat spectra, nearly coincident with the stationary spectrum (cf. Fig. 1), are generated in short time scales given by Eq. 8 and these distributions are considered in the present discussion. For the mean magnetic field $`B_\mathrm{G}`$ given in Gauss units and the particle (proton) energy $`E_{\mathrm{EeV}}`$ given in EeV ($`1`$ EeV $`\equiv 10^{18}`$ eV) the time scale (8) reads as
$$\tau _{\mathrm{td}}\simeq 10^5\,\alpha \,E_{\mathrm{EeV}}\,B_\mathrm{G}^{-1}\;[\mathrm{s}].$$
(9)
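Eq. (9) is just eq. (8) evaluated with the standard relativistic-proton gyroradius in convenient units (a quick numerical check, our sketch):

```python
def tau_td_seconds(E_eV, B_gauss, alpha=10.0):
    """Eq. (8): alpha * r_g / c, with r_g[cm] = E[eV] / (300 * B[G])."""
    r_g_cm = E_eV / (300.0 * B_gauss)
    return alpha * r_g_cm / 3.0e10       # c = 3e10 cm/s

print(f"tau_td = {tau_td_seconds(1e18, 1.0):.2e} s")   # ~1e6 s for alpha = 10
```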
For low energy cosmic ray particles the velocity transition zone at the boundary is a finite-width turbulent shear layer. We do not know of any attempt in the literature to describe the internal structure of such a layer on the microscopic level (cf. Aloy et al. 1999, Henriksen, at this meeting). Therefore, we limit the discussion of the acceleration process within such a layer to qualitative considerations only. From the rather weak radiation and the observed effective collimation of jets in powerful FR II radio sources one can conclude that the interaction of a presumably relativistic jet with the ambient medium is relatively weak. Thus the turbulent boundary layer must be relatively thin, with thickness denoted by $`D`$. Within it, two acceleration processes energise low energy particles, i.e. the ones with the mean radial free path $`\lambda \ll D`$. The first one, discussed in section 2 above, is connected with the velocity shear and is called "cosmic ray viscosity". The second one is the ordinary Fermi process in the turbulent medium. The acceleration time scales cannot be evaluated with accuracy for these processes, but, for particles residing within the considered layer, we can give an acceleration time scale estimate
$$\tau _{\mathrm{II}}=\frac{2\pi r_\mathrm{g}}{c}\frac{c^2}{V^2+\left(U\frac{\lambda }{D}\right)^2},$$
(10)
where $`V`$ is the turbulence velocity. One expects that the first term in the denominator can dominate at low particle energies, while the second one for larger energies, with $`\tau _{\mathrm{II}}`$ approaching the value given in Eq. 8 for $`\lambda \to D`$. If the second-order Fermi acceleration dominates, $`\lambda <D(V/U)`$, the time scale (10) reads as $`\tau _{\mathrm{II}}\simeq 10^8E_{\mathrm{TeV}}B_\mathrm{G}^{-1}V_3^{-2}`$ $`[\mathrm{s}]`$, where $`V_3`$ is the turbulence velocity in units of $`3000`$ km/s. Depending on the choice of parameters this scale can be comparable to or longer than the expansion and internal evolution scales for relativistic jets. In order to efficiently create high energy particles for further acceleration by the viscous process and the tangential discontinuity acceleration, one has to assume that the turbulent layer includes high velocity turbulence, with $`V_3`$ reaching values substantially larger than $`1`$, or that other high energy particle sources are present. For the following discussion we will assume that such effective pre-acceleration takes place, but the validity of this assumption can be estimated only a posteriori from comparison of our conclusions with the observational data.
### 4.2 Energy losses
To estimate the upper energy limit for accelerated particles, at first one should compare the time scale for energy losses due to radiation and inelastic collisions to the acceleration time scale. The discussion of possible loss processes is presented by Rachen & Biermann (1993). The derived loss time scale for ultra-high energy protons can be written in the form
$$T_{\mathrm{loss}}\simeq 5\cdot 10^9\,B_\mathrm{G}^{-2}(1+Xa)^{-1}E_{\mathrm{EeV}}^{-1}\;[\mathrm{s}],$$
(11)
where $`a`$ is the ratio of the energy density of the ambient photon field relative to that of the magnetic field and $`X`$ is a quantity for the relative strength of p$`\gamma `$ interactions compared to synchrotron radiation. For cosmic ray protons the acceleration dominates over the losses (Eqs. 9, 11) up to the maximum energy $`E_{\mathrm{EeV}}\simeq 2\cdot 10^2\,\alpha ^{-1/2}\left[B_\mathrm{G}(1+Xa)\right]^{-1/2}`$.
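Equating the acceleration and loss time scales, Eqs. 9 and 11, reproduces this maximum energy (our cross-check; $`\alpha =10`$ and $`Xa=0`$ are illustrative choices):

```python
import math

def e_max_EeV(B_gauss, alpha=10.0, Xa=0.0):
    """Solve 1e5 * alpha * E / B = 5e9 / (B^2 * (1 + Xa) * E) for E (in EeV)."""
    return math.sqrt(5.0e4 / (alpha * B_gauss * (1.0 + Xa)))

print(f"E_max = {e_max_EeV(1.0):.0f} EeV for B = 1 G, alpha = 10")   # ~71 EeV
```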
### 4.3 Spectra of accelerated particles
The acceleration process acting at the tangential discontinuity of the velocity field leads to a flat energy spectrum and to a spatial distribution whose extension is expected to increase with particle energy. Below, for illustration, we propose two simple acceleration and diffusion models describing these features.
#### A turbulent shear layer
At first we consider "low energy" particles wandering in an extended turbulent shear layer, with the particle mean free path $`\lambda \propto p`$. With the assumed conditions the mean time required for increasing the particle energy by a small constant fraction is proportional to the energy itself, and the mean rate of particle energy gain is constant, $`\dot{E}_{\mathrm{gain}}=\text{const.}`$ Let us take a simple expression for the synchrotron energy loss, $`\dot{E}_{\mathrm{loss}}\propto p^2`$, to represent any real loss process acting near the discontinuity. One may note that the jet radius and the escape boundary distance provide energy scales to the process. Another scale for particle momentum, $`p_\mathrm{c}`$, is provided as the one for equal losses and gains, $`\dot{E}_{\mathrm{gain}}=\dot{E}_{\mathrm{loss}}`$. As a result, a divergence from the power-law and a cut-off have to occur at high energies in the spectrum.
We use a simple Monte Carlo simulation to model the acceleration process for a continuous particle injection, uniform within the considered layer. The diffusion coefficient $`\kappa _{\perp }`$ is taken to be proportional to particle momentum, but independent of the spatial position $`x`$. We neglected particle escape through the shear layer side boundaries and we assumed $`\frac{\partial f}{\partial x}=0`$. For the escape term we simply assume a characteristic escape momentum $`p_{\mathrm{max}}`$. In Fig. 1 we use $`p_\mathrm{c}`$ as the unit for particle momentum, so it also defines a cut-off when $`p_\mathrm{c}<p_{\mathrm{max}}`$. At small momenta the spectrum has a power-law form, in our model the angle-averaged $`f(t,p)\propto p^{-4}`$ (cf. Eq. 4), with a cut-off momentum growing with time. However, in long time scales, when particles reach momenta close to $`p_\mathrm{c}`$, losses lead to spectrum flattening and to piling up of particles at $`p`$ close to $`p_\mathrm{c}`$. Then, the low energy part of the spectrum does not change any more and only a narrow spike at $`p\simeq p_\mathrm{c}`$ grows with time. Let us also note that in the case of efficient particle escape, i.e. when $`p_{\mathrm{max}}<p_\mathrm{c}`$, the resulting spectrum would be similar to the short-time spectrum presented by Ostrowski (1998a), with a cut-off at $`p_{\mathrm{max}}`$.
#### Tangential discontinuity acceleration
An illustration of the acceleration process at the tangential discontinuity has to take into account the spatially discrete nature of the acceleration process. Here, particles are assumed to wander subject to radiative losses outside the discontinuity, with the mean free path $`\propto p`$ and the loss rate $`\propto p^2`$. At each crossing of the discontinuity a particle is assumed to gain a constant fraction $`\mathrm{\Delta }`$ of momentum (cf. Eqs. 5, 6), $`p^{\prime }=(1+\mathrm{\Delta })p`$, and, due to losses, during each free time $`\mathrm{\Delta }t`$ its momentum decreases from $`p_{\mathrm{in}}`$ to $`p`$ according to
$$\frac{1}{p}-\frac{1}{p_{\mathrm{in}}}=\mathrm{const}\cdot \mathrm{\Delta }t.$$
(12)
The time dependent energy spectra obtained within this model are presented in the lower panel of Fig. 1, where we choose units such that the constant in Eq. 12 equals one and the particle mean free paths in the two considered models are equal at $`p=p_\mathrm{c}`$. Comparison of the results in the two models allows one to evaluate the modification of the acceleration process by changing the momentum dependence of the particle diffusion coefficient. For a slowly varying diffusion coefficient (the "$`\lambda =\mathrm{const}`$" model) high energy particles which diffuse far away off the discontinuity and lose there much of their energy still have a chance to diffuse back to be accelerated at the discontinuity. In the model with $`\kappa `$ quickly growing with particle energy (the "$`\lambda =Cp`$" model) such distant particles lose energy, and the accompanying decrease of their mobility is sufficient to stop, or at least limit, their further acceleration. One should note that in both models the spectrum inclination at low energies is the same (here the particle density $`n(p)\propto p^{-2}`$).
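A minimal re-implementation of this toy model as we read it (the step rules below, the gain $`\mathrm{\Delta }=0.2`$, the injection momentum and the number of scatterings are our assumptions, so this is a sketch, not a reproduction of Fig. 1):

```python
import numpy as np

rng = np.random.default_rng(0)

DELTA  = 0.2       # fractional momentum gain per crossing (illustrative)
N_STEP = 400       # scatterings followed per particle
N_PART = 50_000
P0     = 1e-2      # injection momentum, well below the loss scale

p = np.full(N_PART, P0)
x = np.zeros(N_PART)              # distance from the discontinuity (c = 1)

for _ in range(N_STEP):
    lam = p.copy()                            # lambda = C p with C = 1
    p = 1.0 / (1.0 / p + lam)                 # Eq. (12) losses over dt = lam
    step = lam * rng.choice([-1.0, 1.0], size=N_PART)
    crossed = (x + step) * x <= 0.0           # flight segment crosses x = 0
    p = np.where(crossed, (1.0 + DELTA) * p, p)
    x = x + step

for q in (0.50, 0.90, 0.99):
    print(f"{q:.0%} quantile of p: {np.quantile(p, q):.3g}")
```

The output illustrates the competition described above: particles that keep crossing are pushed toward the loss-limited momentum, while those that drift away stall at low momenta.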
## 5 Final remarks
Shear layers occurring in astrophysical plasma flows are able to accelerate cosmic ray particles in the so-called viscous acceleration process. Depending on conditions the process can be described as the particle momentum diffusion or the tangential discontinuity acceleration. The latter can be very efficient in relativistic flows. In particular, in jets in active galactic nuclei the cosmic ray protons can reach energies in excess of $`10^{18}`$ eV.
The generated cosmic ray populations can influence the shear layer flow through viscous and/or dynamical forces (cf. Arav & Begelman 1992, Ostrowski 1999). In relativistic jets the so-called cosmic ray cocoon can be formed, leading to a number of observational effects. The essential problem with application and verification of the presented theory is insufficient information about the local parameters of the considered shear layers. One may note several recent observational papers showing effects which could be ascribed to, or are at least compatible with, the acceleration process acting at a jet boundary layer (Attridge et al. 1999, Scarpa et al. 1999, Perlman et al. 1999).
## Acknowledgments
I acknowledge support from the Komitet Badań Naukowych within projects 2 P03D 002 17 and 2 P03B 112 17.
References
Aloy M.A., Ibáñez J.M., Marti J.M., Gomez J.L., Müller E. (1999) Astrophys. J. Lett. (accepted).
Attridge J.M., Roberts D.H., Wardle J.F.C. (1999) Astrophys. J. Lett. 518, 87.
Arav N., Begelman M.C. (1992) Astrophys. J. 401, 125.
Berezhko E.G. (1981) Pisma v ZhETF 33, 416.
Berezhko E.G. (1982) Pisma v Astr. Zh. 8, 747.
Berezhko E.G. (1982) Geomagnetizm i Aeronomia 22, 321.
Berezhko E.G. (1990) Preprint Frictional Acceleration of Cosmic Rays, The Yakut Scientific Centre, Yakutsk.
Berezhko E.G., Krymsky G.F. (1983) Izvestiya AN SSSR, Seriya Fiz. 47, 1700.
Bezrodnykh I.P., Berezhko E.G., Plotnikov I.Ya., et al. (1984a) Izviestiya AN SSSR, Seriya Fiz. 48, 2164.
Bezrodnykh I.P., Berezhko E.G., Plotnikov I.Ya., et al. (1984b) Geomagnetizm i Aeronomia 48, 2164.
Bezrodnykh I.P., Berezhko E.G., et al. (1987) in Proc. 20th Int. Cosmic Ray Conf., Moscow 5, 453.
Böttcher M. (1999) Astrophys. J., Lett. (accepted).
Earl J.A., Jokipii J.R., Morfill G.E. (1988) Astrophys. J. Lett. 331, L91.
Jokipii J.R., Morfill G. (1990) Astrophys. J. 356, 255.
Jokipii J.R., Kota J., Morfill G. (1989) Astrophys. J. Lett. 345, L67.
Ostrowski M. (1990) Astron. Astrophys. 238, 435.
Ostrowski M. (1998a) Astron. Astrophys. 335, 134.
Ostrowski M. (1998b) in Frontier objects in astrophysics and particle physics (Vulcano Workshop), eds. F. Giovannelli & G. Mannocchi.
Ostrowski M. (1999) Month. Not. R. Astron. Soc. (in press).
Perlman E.S., Biretta J.A., Fang Z., Sparks W.B., Macchetto F.D. (1999) Astron. J. (accepted).
Rachen J.P., Biermann P. (1993) Astron. Astrophys. 272, 161.
Scarpa R., Urry C.M., Falomo R., Treves A. (1999) Astrophys. J. (accepted). |
## 1 Introduction
Undoubtedly the solar and atmospheric neutrino problems provide the two most important milestones in the search for physics beyond the Standard Model (SM). Of particular importance has been the recent confirmation by the Super-Kamiokande collaboration of the zenith-angle-dependent deficit of atmospheric neutrinos. Altogether solar and atmospheric data give strong evidence for $`\nu _e`$ and $`\nu _\mu `$ conversions, respectively. Neutrino conversions are a natural consequence of theories beyond the Standard Model . The first example is oscillations of small-mass neutrinos. The simplest way to account for the lightness of neutrinos is in the context of Majorana neutrinos: their mass violates lepton number. Its most obvious consequences would be processes such as neutrino-less double beta decay , or CP violation properties of neutrinos , so far unobserved. Neutrino masses could be hierarchical, with the light $`\nu _\tau `$ much heavier than the $`\nu _\mu `$ and $`\nu _e`$ . While solar neutrino rates favour the small mixing angle (SMA) MSW solution, present data on the recoil-electron spectrum prefer the large mixing MSW (LMA) solution . When interpreted in terms of neutrino oscillations, the observed atmospheric neutrino zenith-angle-dependent deficit clearly indicates that the mixing involved is maximal. In short we have the intriguing possibility that, unlike the case of quarks, neutrino mixing is bi-maximal. Supersymmetry with broken R-parity provides an attractive origin for bi-maximal neutrino oscillations, which can be tested not only at the upcoming long-baseline or neutrino factory experiments but also at high-energy collider experiments such as the LHC.
One should however bear in mind that there is a variety of alternative solutions to the neutrino anomalies. Just as an example let me stress the case for lepton flavour violating neutrino transitions, which can arise without neutrino masses. They may still fit present solar and contained atmospheric data pretty well. They may arise in models with extra heavy leptons and in supergravity theories. A possible signature of theories leading to FC interactions would be the existence of sizeable flavour non-conservation effects, such as $`\mu \to e+\gamma `$ and $`\mu \to e`$ conversion in nuclei, unaccompanied by neutrino-less double beta decay if neutrinos are massless. In contrast to the intimate relationship between the latter and the non-zero Majorana mass of neutrinos due to the Black-Box theorem, there is no fundamental link between lepton flavour violation and neutrino mass. Other possibilities involve neutrino decays and transition magnetic moments coupled either to regular or to random magnetic fields.
In addition to the solar and atmospheric neutrino data from underground experiments there is also some indication for neutrino oscillations from the LSND experiment. Barring exotic neutrino conversion mechanisms, one requires three mass scales in order to reconcile all of these hints, hence the need for a light sterile neutrino. Out of the four neutrinos, two of them lie at the solar neutrino scale and the other two maximally-mixed neutrinos are at the HDM/LSND scale. The prototype models proposed in ref. enlarge the $`SU(2)\otimes U(1)`$ Higgs sector in such a way that neutrinos acquire mass radiatively, without unification or seesaw. The LSND scale arises at one-loop, while the solar and atmospheric scales come in at the two-loop level, thus accounting for the hierarchy. The lightness of the sterile neutrino, the nearly maximal atmospheric neutrino mixing, and the generation of the solar and atmospheric neutrino scales all result naturally from the assumed lepton-number symmetry and its breaking. Either $`\nu _e`$ - $`\nu _\tau `$ conversions explain the solar data with $`\nu _\mu `$ - $`\nu _s`$ oscillations accounting for the atmospheric deficit, or else the rôles of $`\nu _\tau `$ and $`\nu _s`$ are reversed. These two basic schemes have distinct implications at future solar & atmospheric neutrino experiments with good sensitivity to neutral current neutrino interactions. Cosmology can also place restrictions on these four-neutrino schemes.
## 2 Indications for New Physics
The most solid hints in favour of new physics in the neutrino sector come from underground experiments on solar and atmospheric neutrinos. The most recent data correspond to 825-day solar and 52 kton-yr atmospheric data samples, respectively.
### 2.1 Solar Neutrinos
The solar neutrino event rates recorded at the radiochemical Homestake, Gallex and Sage experiments are summarized as: $`2.56\pm 0.22`$ SNU (chlorine), $`72.3\pm 5.6`$ SNU (Gallex and Sage). Note that only the gallium experiments are sensitive to the solar $`pp`$ neutrinos. On the other hand the <sup>8</sup>B flux from the Super-Kamiokande water Cerenkov experiment is $`(2.44\pm 0.08)\times 10^6\mathrm{cm}^{-2}\mathrm{s}^{-1}`$.
In Fig. (1) one can see the predictions of various standard solar models in the plane defined by the <sup>7</sup>Be and <sup>8</sup>B neutrino fluxes, normalized to the predictions of the BP98 solar model. Abbreviations such as BP95 identify different solar models, as given in ref. . The rectangular error box gives the $`3\sigma `$ error range of the BP98 fluxes. On the other hand the values of these fluxes indicated by present data on neutrino event rates are shown by the contours in the lower-left part of the figure. The best-fit <sup>7</sup>Be neutrino flux is negative! The theoretical predictions clearly lie well away from the $`3\sigma `$ contour, strongly suggesting the need for new particle physics in order to account for the data. Since possible non-standard astrophysical solutions are rather constrained by helioseismology studies, one is led to assume the existence of neutrino conversions, such as those induced by very small neutrino masses. Possibilities include the MSW effect, vacuum neutrino oscillations and, possibly, flavour changing neutrino interactions. Moreover, if neutrinos have transition magnetic moments then one may have, in addition, the possibility of Majorana neutrino Spin-Flavour Precessions. Based upon these there emerge two new solutions to the solar neutrino problem: the Resonant and the Aperiodic Spin-Flavour Precession mechanisms, based on regular and random magnetic fields, respectively.
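As a reminder of the scales involved, the familiar two-flavour MSW resonance condition relates the oscillation parameters to the electron number density $`n_e`$ at the resonance point,

$$\mathrm{\Delta }m^2\mathrm{cos}2\theta =2\sqrt{2}G_FE_\nu n_e,$$

which for densities in the solar interior and MeV neutrino energies singles out the characteristic MSW range $`\mathrm{\Delta }m^2\sim 10^{-5}`$ eV².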
The recent 825-day data sample presents no major surprises, except that the recoil energy spectrum produced by solar neutrino interactions shows more events in the highest bins. Barring the possibility of poorly understood energy resolution effects, it has been noted that if the flux of neutrinos coming from $`{}_{}{}^{3}\mathrm{He}+p\to {}_{}{}^{4}\mathrm{He}+e^++\nu _e`$, the so-called $`hep`$ reaction, is well above the (uncertain) SSM predictions, then this could significantly influence the electron energy spectrum produced by solar neutrino interactions in the high recoil region, with hardly any effect at lower energies.
Fig. 2 shows the expected normalized recoil electron energy spectrum compared with the most recent experimental data. The solid line represents the prediction for the best-fit SMA solution with free $`{}_{}{}^{8}B`$ and $`hep`$ normalizations (0.69 and 12, respectively), while the dotted line gives the corresponding prediction for the best-fit LMA solution (1.15 and 34, respectively). Finally, the dashed line represents the prediction for the best no-oscillation scheme with free $`{}_{}{}^{8}B`$ and $`hep`$ normalizations (0.44 and 14, respectively). Clearly the spectra with enhanced $`hep`$ neutrinos provide better fits to the data. However, Fiorentini et al. have argued that the required $`hep`$ amount is too large to accept on theoretical grounds. We look forward to an improvement of the situation. The increasing rôle played by rate-independent observables such as the spectrum, as well as seasonal and day-night asymmetries, will eventually select amongst different solutions of the solar neutrino problem.
The required solar neutrino parameters are determined through a $`\chi ^2`$ fit of the experimental data. In Fig. (3) we show the allowed regions in $`\mathrm{\Delta }m^2`$ and $`\mathrm{sin}^2\theta `$ from the measurements of the total event rates at the Chlorine, Gallium and Super-Kamiokande (825-day data sample) experiments, combined with the zenith angle distribution, the recoil energy spectrum and the seasonal dependence of the event rates observed in Super-Kamiokande. Panels (a) and (b) correspond to active-active and active-sterile oscillations, respectively. The best-fit points in each case are indicated by a star, while the local best-fit points are indicated by a dot.
An analysis with free $`{}_{}{}^{8}B`$ and $`hep`$ normalizations has also been given in ref. and does not significantly change the allowed regions.
One notices from the analysis that rate-independent observables, such as the electron recoil energy spectrum and the day-night asymmetry (zenith angle distribution), are playing an increasing rôle in the determination of solar neutrino parameters. An observable which has been neglected in most analyses of the MSW effect, and which could be sizeable in the large mixing angle regions (LMA and LOW), is the seasonal dependence of the solar neutrino flux which would result from the regeneration effect at the Earth, as discussed in ref. . This should play a more significant rôle in future investigations.
A theoretical issue which has raised some interest recently is the study of the possible effect of random fluctuations in the solar matter density. The possible existence of such noise fluctuations at the few percent level is not excluded by present helioseismology studies. In Fig. (4) we show the averaged solar neutrino survival probability as a function of $`E/\mathrm{\Delta }m^2`$, for $`\mathrm{sin}^22\theta =0.01`$. This figure was obtained via a numerical integration of the MSW evolution equation in the presence of noise, using the density profile in the Sun from BP95 in ref. , and assuming that the correlation length $`L_0`$ (which corresponds to the scale of the fluctuation) is $`L_0=0.1\lambda _m`$, where $`\lambda _m`$ is the neutrino oscillation length in matter. An important assumption in the analysis is that $`l_{free}\ll L_0\ll \lambda _m`$, where $`l_{free}\sim 10`$ cm is the mean free path of the electrons in the solar medium. The fluctuations may strongly affect the <sup>7</sup>Be neutrino component of the solar neutrino spectrum, so that the Borexino experiment should provide an ideal test, if sufficiently small errors can be achieved. The potential of Borexino in probing the level of solar matter density fluctuations provides an additional motivation for the experiment.
The most popular alternative solution to the solar neutrino problem is the vacuum oscillation solution, which clearly requires large neutrino mixing and the adjustment of the oscillation length so as to coincide roughly with the Earth-Sun distance.
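For orientation, the corresponding two-flavour vacuum survival probability is

$$P(\nu _e\to \nu _e)=1-\mathrm{sin}^22\theta \mathrm{sin}^2\left(\frac{\mathrm{\Delta }m^2L}{4E}\right),$$

so demanding that the oscillation length $`4\pi E/\mathrm{\Delta }m^2`$ be comparable to the Earth-Sun distance of $`1.5\times 10^{11}`$ m for solar neutrino energies of a few MeV picks out the characteristic just-so scale $`\mathrm{\Delta }m^2\sim 10^{-10}`$ eV².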
Fig. 5 shows the regions of just-so oscillation parameters at the 95 % CL obtained in a recent fit of the data, including the rates, the recoil energy spectrum and the seasonal effects which are expected in this scenario and could potentially help in discriminating it from the MSW scenario.
### 2.2 Atmospheric Neutrinos
Neutrinos produced as decay products in hadronic showers from cosmic ray collisions with nuclei in the upper atmosphere have been observed in several experiments. There has been a long-standing discrepancy between the predicted and measured $`\mu /e`$ ratio of the muon ($`\nu _\mu +\overline{\nu }_\mu `$) over the electron atmospheric neutrino flux ($`\nu _e+\overline{\nu }_e`$). The anomaly has been found both in water Cerenkov experiments (Kamiokande, Super-Kamiokande and IMB) and in the iron calorimeter Soudan2 experiment. Negative experiments, such as Frejus and Nusex, have much larger errors. Although individual $`\nu _\mu `$ or $`\nu _e`$ fluxes are only known to within $`30\%`$ accuracy, their ratio is predicted to within $`5\%`$ over energies varying from 0.1 GeV to 100 GeV. The most important feature of the atmospheric neutrino data sample is that it exhibits a zenith-angle-dependent deficit of muon neutrinos. Experimental biases and uncertainties in the prediction of neutrino fluxes and cross sections are unable to explain the data.
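The deficit is conventionally quantified through the double ratio

$$R\equiv \frac{(N_\mu /N_e)_{data}}{(N_\mu /N_e)_{MC}},$$

in which most of the absolute flux uncertainties cancel; the experiments reporting the anomaly find $`R`$ significantly below unity.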
The most popular way to account for this anomaly is in terms of neutrino oscillations. It has already been noted that the Chooz reactor data excludes the $`\nu _\mu \to \nu _e`$ channel when all experiments are combined. So I concentrate here on the other possible oscillation channels.
The results of the most recent $`\chi ^2`$ fit of the Super-Kamiokande atmospheric neutrino data in the framework of the neutrino oscillation hypothesis can be seen in Fig. (6), taken from ref. . This analysis updates previous studies and includes the upgoing muon event samples. This figure shows the allowed regions of oscillation parameters at 90 and 99 % CL.
Notice that matter effects lead to differences between the allowed regions for the various channels. For $`\nu _\mu \to \nu _s`$ with $`\mathrm{\Delta }m^2>0`$ matter effects enhance the oscillations for neutrinos, and therefore smaller values of the vacuum mixing angle lead to larger conversion probabilities, so that the regions are larger than in the $`\nu _\mu \to \nu _\tau `$ case. For $`\nu _\mu \to \nu _s`$ with $`\mathrm{\Delta }m^2<0`$ the matter enhancement occurs only for anti-neutrinos, suppressing the conversion of $`\nu _\mu `$'s. Since the yield of atmospheric neutrinos is bigger than that of anti-neutrinos, the matter effect clearly suppresses the overall conversion probability. Therefore one needs in this case a larger value of the vacuum mixing angle. This trend can indeed be seen by comparing the regions in different columns of Fig. (6).
Notice that in all channels where matter effects play a rôle the range of acceptable $`\mathrm{\Delta }m^2`$ is slightly shifted towards larger values, as compared with the $`\nu _\mu \to \nu _\tau `$ case. This follows from the relation between mixing in vacuo and in matter. In fact, away from the resonance region, independently of the sign of the matter potential, there is a suppression of the mixing inside the Earth. As a result, the lowest allowed $`\mathrm{\Delta }m^2`$ value is higher than for the $`\nu _\mu \to \nu _\tau `$ channel.
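Explicitly, the standard two-flavour mixing angle in matter of constant potential $`V`$ reads

$$\mathrm{sin}^22\theta _m=\frac{\mathrm{sin}^22\theta }{\left(\mathrm{cos}2\theta -2EV/\mathrm{\Delta }m^2\right)^2+\mathrm{sin}^22\theta },$$

where for $`\nu _\mu \to \nu _s`$ the relevant potential is the neutral-current one, $`V=-G_Fn_n/\sqrt{2}`$ with $`n_n`$ the neutron number density (opposite sign for anti-neutrinos). This expression exhibits both the resonant enhancement when $`\mathrm{\Delta }m^2\mathrm{cos}2\theta =2EV`$ and the suppression of the mixing far from resonance mentioned above.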
Concerning the quality of the fits, we note that the best fit to the full sample is obtained for the $`\nu _\mu \to \nu _\tau `$ channel, although from the global analysis oscillations into sterile neutrinos cannot be ruled out. There is also an improvement in the quality of the fits to the contained events as compared to previous analyses performed with lower statistics. These features can be easily understood by looking at the predicted zenith angle distribution of the different event types for the various oscillation channels shown in Fig. (7) and Fig. (8). From Fig. (7) one can see the excellent agreement between the observed distributions of e-like events and the SM predictions. This has led to an improvement in the quality of the fit for any conversion mechanism that only involves muons. From Fig. (8) one can also see that, due to matter effects, the distributions for upgoing muons in the case of $`\nu _\mu \to \nu _s`$ are flatter than for $`\nu _\mu \to \nu _\tau `$. The data show a somewhat steeper angular dependence which is better described by $`\nu _\mu \to \nu _\tau `$ oscillations. In order to exploit this feature the Super-Kamiokande collaboration has presented a preliminary partial analysis of the angular dependence of the through-going muon data in combination with the up-down asymmetry of partially contained events, which seems indeed to disfavour $`\nu _\mu \to \nu _s`$ oscillations at the 2$`\sigma `$ level.
For a comparison of the oscillation parameters as determined from the atmospheric data with the sensitivity of the present accelerator and reactor experiments, as well as the expectations of upcoming long-baseline experiments, see ref. .
### 2.3 LSND
The Los Alamos Meson Physics Facility looked for $`\overline{\nu }_\mu \to \overline{\nu }_e`$ oscillations using $`\overline{\nu }_\mu `$ from $`\mu ^+`$ decay at rest. The $`\overline{\nu }_e`$'s are detected via the reaction $`\overline{\nu }_ep\to e^+n`$, correlated with a $`\gamma `$ from $`np\to d\gamma `$ ($`2.2\mathrm{MeV}`$). The results indicate $`\overline{\nu }_\mu \to \overline{\nu }_e`$ oscillations, with an oscillation probability of ($`0.31_{-0.10}^{+0.11}\pm 0.05`$)%, leading to the oscillation parameters shown in Fig. (9). The shaded regions are the favoured likelihood regions given in ref. . The curves show the 90 % and 99 % likelihood allowed ranges from LSND, and the limits from BNL776, KARMEN1, Bugey, CCFR, and NOMAD.
A search for $`\nu _\mu \to \nu _e`$ oscillations has also been conducted by the LSND collaboration. Using $`\nu _\mu `$ from $`\pi ^+`$ decay in flight, the $`\nu _e`$ appearance is detected via the charged-current reaction $`C(\nu _e,e^{})X`$. Two independent analyses are consistent with the above signature, after taking into account the events expected from the $`\nu _e`$ contamination in the beam and the beam-off background. If interpreted as an oscillation signal, the observed oscillation probability of $`(2.6\pm 1.0\pm 0.5)\times 10^{-3}`$ is consistent with the evidence for oscillation in the $`\overline{\nu }_\mu \to \overline{\nu }_e`$ channel described above. Fig. 10 compares the LSND region with the expected sensitivity from MiniBooNE, which was recently approved to run at Fermilab.
A possible confirmation of the LSND anomaly would be a discovery of far-reaching implications.
### 2.4 Dark Matter
Galaxies as well as the large scale structure in the Universe should arise from the gravitational collapse of fluctuations in the expanding universe. They are sensitive to the nature of the cosmological dark matter. The data on cosmic background temperature anisotropies on large scales from the COBE satellite, combined with cluster-cluster correlation data e.g. from IRAS, cannot be reconciled with the simplest COBE-normalized $`\mathrm{\Omega }_m=1`$ cold dark matter (CDM) model, since it leads to too much power on small scales. Adding to CDM neutrinos with a mass of a few eV (a scale similar to the one indicated by the LSND experiment), corresponding to $`\mathrm{\Omega }_\nu \sim 0.2`$, results in an improved fit to data on the nearby galaxy and cluster distribution. The resulting Cold + Hot Dark Matter (CHDM) cosmological model is the most successful $`\mathrm{\Omega }_m=1`$ model for structure formation, preferred by inflation. However, other recent data have begun to indicate a lower value for $`\mathrm{\Omega }_m`$, thus weakening the cosmological evidence favouring a neutrino mass of a few eV in flat models with cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_m`$. Future sky maps of the cosmic microwave background radiation (CMBR) with high precision at the MAP and PLANCK missions should bring more light into the nature of the dark matter and the possible rôle of neutrinos. Another possibility is to consider unstable dark matter scenarios. For example, an MeV range tau neutrino may provide a viable unstable dark matter scenario if the $`\nu _\tau `$ decays before the matter dominance epoch. Its decay products would add energy to the radiation, thereby delaying the time at which the matter and radiation contributions to the energy density of the universe become equal. Such a delay would allow one to reduce the density fluctuations on the smaller scales purely within the standard cold dark matter scenario. Upcoming MAP and PLANCK missions may place limits on neutrino stability and rule out such schemes.
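To set the scale, the standard relation between light-neutrino masses and their relic density is

$$\mathrm{\Omega }_\nu h^2=\frac{\mathrm{\Sigma }_im_{\nu _i}}{93\mathrm{eV}},$$

so that, for instance, three quasi-degenerate neutrinos of about 1.5 eV each give $`\mathrm{\Omega }_\nu \approx 0.2`$ for $`h\approx 0.5`$.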
### 2.5 Pulsar Velocities
One of the most challenging problems in modern astrophysics is to find a consistent explanation for the high velocity of pulsars. Observations show that these velocities range from zero up to 900 km/s, with a mean value of $`450\pm 50`$ km/s. An attractive possibility is that pulsar motion arises from an asymmetric neutrino emission during the supernova explosion. In fact, neutrinos carry more than $`99\%`$ of the new-born proto-neutron star's gravitational binding energy, so that even a $`1\%`$ asymmetry in the neutrino emission could generate the observed pulsar velocities. This could in principle arise from the interplay of the parity violation present in weak interactions with the strong magnetic fields which are expected during a SN explosion. However, it has recently been noted that no asymmetry in neutrino emission can be generated in thermal equilibrium, even in the presence of parity violation. This suggests that an alternative mechanism is at work. Several neutrino conversion mechanisms in matter have been invoked as a possible engine for powering pulsar motion. They all rely on the polarization of the SN medium induced by the strong magnetic fields $`\sim 10^{15}`$ Gauss present during a SN explosion. This would affect neutrino propagation properties, giving rise to an angular dependence of the matter-induced neutrino potentials. This would lead in turn to a deformation of the "neutrino-sphere" for, say, tau neutrinos and thus to an anisotropic neutrino emission. As a consequence, in the presence of non-vanishing $`\nu _\tau `$ mass and mixing the resonance sphere for the $`\nu _e\to \nu _\tau `$ conversions is distorted. If the resonance surface lies between the $`\nu _\tau `$ and $`\nu _e`$ neutrino spheres, such a distortion would induce a temperature anisotropy in the flux of the escaping tau-neutrinos produced by the conversions, hence a recoil kick of the proto-neutron star. This mechanism was realized in ref. invoking MSW conversions with $`m_{\nu _\tau }\gtrsim 100`$ eV or so, assuming a negligible $`\nu _e`$ mass. This is necessary in order for the resonance surface to be located between the two neutrino-spheres. It should be noted, however, that such a requirement is at odds with cosmological bounds on neutrino masses unless the $`\tau `$-neutrino is unstable. On the other hand, in ref. a realization was proposed in the resonant spin-flavour precession scheme (RSFP). The magnetic field would not only affect the medium properties, but would also induce the spin-flavour precession through its coupling to the neutrino transition magnetic moment.
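A rough momentum-balance estimate makes the 1% figure plausible. Taking for illustration the typical values $`E_\nu \approx 3\times 10^{53}`$ erg for the total energy radiated in neutrinos and $`M_{NS}\approx 1.4M_{\odot }`$ (both assumed here rather than taken from the text), an emission asymmetry $`\epsilon `$ imparts a kick velocity

$$v\approx \frac{\epsilon E_\nu }{M_{NS}c}\approx 350\left(\frac{\epsilon }{1\%}\right)\mathrm{km}/\mathrm{s},$$

indeed of the order of the observed mean pulsar velocity.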
Perhaps the simplest and probably most elegant suggestion was proposed in ref. , where the required pulsar velocities would arise from anisotropic neutrino emission induced by resonant conversions of massless neutrinos (hence no magnetic moment). Raffelt and Janka have subsequently argued that the asymmetric neutrino emission effect was overestimated, since the temperature variation over the deformed neutrino-sphere is not an adequate measure of the anisotropy of the neutrino emission. This would invalidate all neutrino conversion mechanisms, leaving the pulsar velocity problem without any viable solution. However, Kusenko and Segrè still maintain that sizeable pulsar kicks can arise from neutrino conversions. In any case, invoking conversions into sterile neutrinos could be an interesting possibility, since the conversions could take place deeper in the star.
## 3 Making Sense of All That
Physics beyond the Standard Model is required in order to explain the solar and atmospheric neutrino data. While neutrino oscillations provide an excellent fit and a powerful way to determine neutrino mass and mixing, there is a plethora of alternative mechanisms, some of them quite attractive, which could play an important rôle in the interpretation of the data. These include flavour changing neutrino interactions both in the solar and atmospheric neutrino problems, the Resonant and the Aperiodic Spin-Flavour Precession mechanisms for solar neutrinos, which use the transition magnetic moments of Majorana neutrinos, and the possibility of fast neutrino decays, which could play a rôle in the atmospheric neutrino problem. Moreover, I note that more exotic explanations of the underground neutrino data, based upon violations of the equivalence principle, Lorentz invariance and CPT, have been proposed. Nevertheless, in what follows I will assume the standard neutrino oscillation interpretation of the data.
### 3.1 Solar plus Atmospheric
These data can be accounted for with the three known neutrinos. They fix the two mass splittings $`\mathrm{\Delta }m_{}^2`$ & $`\mathrm{\Delta }m_{atm}^2`$, and two of the three neutrino mixing angles, the third being small on account of the Chooz reactor results. Such a scenario can easily be accommodated in *seesaw* theories of neutrino mass since, in general, the mixing angles involved are not predicted; in particular the maximal mixing indicated by the atmospheric data, and possibly also by the solar data, can be accommodated by hand. In contrast, it is not easy to reconcile maximal or bi-maximal mixing of neutrinos with a predictive quark-lepton *unification seesaw* scheme that relates lepton and quark mixing angles, since the latter are known to be small. For attempts to reconcile solar and atmospheric data in unified models with specific texture ansätze, see ref. .
An alternative way to predict a hierarchical pattern of neutrino mass and mixing, which naturally accommodates the possibility of maximal mixing, is to appeal to supersymmetry. In ref. it was shown that the simplest unified extension of the Minimal Supersymmetric Standard Model with bi-linear R-parity violation provides a predictive scheme for neutrino masses which can account for the observed atmospheric and solar neutrino anomalies in terms of bi-maximal neutrino mixing. The maximality of the atmospheric mixing angle arises dynamically, by minimizing the scalar potential of the theory, while the solar neutrino problem can be accounted for either by large or by small mixing oscillations. The spectrum is naturally hierarchical, since only the tau neutrino picks up mass at the tree level (though this mass may itself be calculable from renormalization-group evolution from unification down to the weak scale), by mixing with neutralinos, while the masslessness of the other two neutrinos is lifted only by calculable loop corrections. Despite the smallness of neutrino masses, R-parity violation can be observable at present and future high-energy colliders, providing an unambiguous cross-check of the model, and the possibility of probing the neutrino anomalies at accelerators.
Bi-maximal models may also be tested at the upcoming long-baseline experiments or at a possible neutrino factory experiment through the CP violating phases, which could lead to non-negligible CP asymmetries in neutrino oscillations. Unfortunately the effect of the CP violation intrinsic to the Majorana neutrino system is helicity suppressed, though a potential test of the CP properties and Majorana nature of neutrinos has been suggested in ref. .
### 3.2 Solar and Atmospheric plus Dark Matter
The story gets more complicated if one wishes to account also for the hot dark matter. The only possibility to fit the solar, atmospheric and HDM scales in a world with just the three known neutrinos is if all of them have nearly the same mass, of about 1.5 eV or so, in order to provide the right amount of HDM (all three active neutrinos contribute to HDM). This can be arranged in a unified $`SO(10)`$ seesaw model where, to first approximation, all neutrinos lie at the above HDM mass scale ($`\sim `$ 1.5 eV) due to a suitable horizontal symmetry, the splittings $`\mathrm{\Delta }m_{}^2`$ & $`\mathrm{\Delta }m_{atm}^2`$ appearing as symmetry breaking effects. An interesting fact is that the ratio $`\mathrm{\Delta }m_{}^2/\mathrm{\Delta }m_{atm}^2`$ is related to $`m_c^2/m_t^2`$. There is no room in this case to accommodate the LSND anomaly. To what extent this solution is theoretically natural has been discussed recently in ref. . One finds that the degeneracy is stable in the phenomenologically relevant case where neutrinos have opposite CP parities, leading to a suppression of the neutrino-less double beta decay rate.
### 3.3 Solar & Atmospheric with Dark Matter & LSND: Four-Neutrino Models
An alternative way to include the hot dark matter scale is to invoke a fourth light sterile neutrino. As a bonus we can accommodate the LSND hint. The sterile neutrino $`\nu _s`$ must also be light enough in order to participate in the oscillations together with the three active neutrinos. Since it is an $`SU(2)\otimes U(1)`$ singlet it does not affect the invisible Z decay width, well-measured at LEP. The theoretical requirements are:
* to understand what keeps the sterile neutrino light, since the $`SU(2)\otimes U(1)`$ gauge symmetry would allow it to have a large bare mass
* to account for the maximal neutrino mixing indicated by the atmospheric data, and possibly by the solar data
* to account from first principles for the scales $`\mathrm{\Delta }m_{}^2`$, $`\mathrm{\Delta }m_{atm}^2`$ and $`\mathrm{\Delta }m_{LSND/HDM}^2`$
With this in mind we have formulated the simplest maximally symmetric schemes, denoted as $`(e\tau )(\mu s)`$ and $`(es)(\mu \tau )`$, respectively. One should realize that a given scheme (mainly the structure of the leptonic charged current) may be realized in more than one theoretical model. For example, an alternative to the original model was suggested in ref. . Higher dimensional theories contain light sterile neutrinos which can arise from the bulk sector and reproduce the basic features of these models. For a recent discussion of the experimental constraints on four-neutrino mixing see ref. . For alternative theoretical and phenomenological scenarios see ref. .
Although many of the phenomenological features arise also in other models, here I concentrate the discussion mainly on the theories developed in ref. . These are characterized by a very symmetric mass spectrum in which there are two ultra-light neutrinos at the solar neutrino scale and two maximally mixed, almost degenerate eV-mass neutrinos (the LSND/HDM scale), split by the atmospheric neutrino scale. Before the global U(1) lepton symmetry breaks, the heaviest neutrinos are exactly degenerate, while the other two are massless. After the U(1) breaks down the heavier neutrinos split and the lighter ones get mass. The scale $`\mathrm{\Delta }m_{LSND/HDM}^2`$ is generated radiatively at one-loop due to the additional Higgs bosons, while the splittings $`\mathrm{\Delta }m_{atm}^2`$ and $`\mathrm{\Delta }m_{}^2`$ are two-loop effects. The models are based only on weak-scale physics: no large mass scale is introduced. They explain the lightness of the sterile neutrino <sup>1</sup><sup>1</sup>1In higher dimensional theories such sterile neutrinos may arise from bulk matter and be light without the need for a protecting symmetry, see ref. . , the large mixing required by the atmospheric neutrino data, as well as the generation of the mass splittings responsible for solar and atmospheric neutrino conversions, as natural consequences of the underlying lepton-number symmetry and its breaking. They are minimal in the sense that they add a single $`SU(2)\otimes U(1)`$ singlet lepton to the SM. The models differ according to whether the $`\nu _s`$ lies at the dark matter scale or at the solar neutrino scale. In the $`(e\tau )(\mu s)`$ scheme the $`\nu _s`$ lies at the LSND/HDM scale, as illustrated in Fig. (11),
while in the alternative $`(es)(\mu \tau )`$ model, $`\nu _s`$ is at the solar neutrino scale, as shown in Fig. (12).
In the $`(e\tau )(\mu s)`$ case the atmospheric neutrino puzzle is explained by $`\nu _\mu `$ to $`\nu _s`$ oscillations, while in $`(es)(\mu \tau )`$ it is due to $`\nu _\mu `$ to $`\nu _\tau `$ oscillations. Correspondingly, the deficit of solar neutrinos is explained in the first case by $`\nu _e`$ to $`\nu _\tau `$ conversions, while in the second the relevant channel is $`\nu _e`$ to $`\nu _s`$. In both models one predicts close-to-maximal mixing in the atmospheric neutrino sector, a feature which emerges as the global best-fit point in the analyses discussed above.
The presence of additional weakly interacting light particles, such as our light sterile neutrino, is constrained by BBN, since the $`\nu _s`$ would enter into equilibrium with the active neutrinos in the early Universe (and therefore would contribute to $`N_\nu ^{max}`$) via neutrino oscillations, unless $`\mathrm{\Delta }m^2\mathrm{sin}^42\theta <3\times 10^{-6}eV^2`$. Here $`\mathrm{\Delta }m^2`$ denotes a typical mass-square difference of the active and sterile species and $`\theta `$ is the vacuum mixing angle. However, systematic uncertainties in the BBN bounds still caution us not to take them too literally. For example, it has been argued that present observations of primordial Helium and deuterium abundances may allow up to $`N_\nu =4.5`$ neutrino species if the baryon to photon ratio is small. Adopting this as a limit, clearly both models described above are consistent. Should the BBN constraints get tighter, e.g. $`N_\nu ^{max}<3.5`$, they could rule out the $`(e\tau )(\mu s)`$ model, leaving only the competing scheme as a viable alternative. However, the possible rôle of a primordial lepton asymmetry might invalidate this conclusion; for recent work on this see ref. .
It is well known that the neutral-to-charged current ratios are important observables in neutrino oscillation phenomenology, especially sensitive to the existence of singlet neutrinos, light or heavy. On this basis the two models above would be distinguishable at future neutral-current-sensitive solar and atmospheric neutrino experiments. For example, they may be tested at the SNO experiment once it measures the solar neutrino flux ($`\mathrm{\Phi }_\nu ^{NC}`$) in its neutral current data and compares it with the corresponding CC value ($`\mathrm{\Phi }_\nu ^{CC}`$). If the solar neutrinos convert to active neutrinos, as in the $`(e\tau )(\mu s)`$ model, then one expects $`\mathrm{\Phi }_\nu ^{CC}/\mathrm{\Phi }_\nu ^{NC}`$ around 0.5, whereas in the $`(es)(\mu \tau )`$ scheme ($`\nu _e`$ conversion to $`\nu _s`$) the above ratio would be nearly $`1`$. Looking at pion production via the neutral current reaction $`\nu _\tau +N\to \nu _\tau +\pi ^0+N`$ in the atmospheric data might also help in distinguishing between these two possibilities, since this reaction is absent in the case of sterile neutrinos, but would exist in the $`(es)(\mu \tau )`$ scheme.
If light sterile neutrinos indeed exist, one can show that they might contribute to a cosmic hot dark matter component and to an increased radiation content at the epoch of matter-radiation equality. These effects leave their imprint in sky maps of the cosmic microwave background radiation (CMBR) and may thus be detectable with the very high precision measurements expected at the upcoming MAP and PLANCK missions, as noted in ref. .
### 3.4 Heavy Tau Neutrino
Finally, the door is not closed to heavy neutrinos. Indeed, an alternative to the inclusion of hot dark matter is to simulate its effects through the late decay of an MeV tau neutrino, in the presence of a light sterile neutrino. Indeed, such a model was presented in which an unstable MeV Majorana tau neutrino naturally reconciles the cosmological observations of large and small-scale density fluctuations with the cold dark matter picture. The model assumes the spontaneous violation of a global lepton number symmetry at the weak scale. The breaking of this symmetry generates the cosmologically required decay of the $`\nu _\tau `$, with lifetime $`\tau _{\nu _\tau }\sim 10^2`$–$`10^4`$ sec, as well as the masses and oscillations of the three light neutrinos $`\nu _e`$, $`\nu _\mu `$ and $`\nu _s`$, which may account for the present solar and atmospheric data, though a dedicated three-neutrino fit in which one of the neutrinos is sterile would be desirable.
## 4 In conclusion
The angle-dependent atmospheric neutrino deficit provides, together with the solar neutrino data, strong evidence for physics beyond the Standard Model. Small neutrino masses provide the simplest, but not unique, explanation of the data. Allowing for alternative explanations of the underground experiments involving non-standard neutrinos opens new possibilities involving either massless or even very heavy cosmologically unstable neutrinos, which naturally arise in many models. From this point of view, it is still too early to infer with great certainty neutrino masses and angles from underground experiments alone. Keeping within the framework of the standard neutrino oscillation interpretation of the data, one has the interesting possibility of bi-maximal neutrino mixing and of testing the neutrino anomalies not only at the upcoming long-baseline or neutrino factory experiments, but also at high-energy accelerators. On the other hand, if the LSND result stands the test of time, this would be a strong indication for the existence of a light sterile neutrino. The two most attractive ways to reconcile underground observations with LSND invoke either $`\nu _e`$ - $`\nu _\tau `$ conversions to explain the solar data, with $`\nu _\mu `$ - $`\nu _s`$ oscillations accounting for the atmospheric deficit, or the opposite. At the moment the latter is favored by the atmospheric data. These two basic schemes have distinct implications at future neutral-current-sensitive solar & atmospheric neutrino experiments, such as SNO and Super-Kamiokande. To end on a philosophical note, I would say that it is important to search for manifestations of massive and/or non-standard neutrinos at the laboratory in an unbiased way. Though most of the recent excitement now comes from underground experiments, one should bear in mind that models of neutrino mass may lead to a plethora of new signatures which may be accessible also at high-energy accelerators, thus illustrating the complementarity between the two approaches.
I am grateful to the Organizers for the kind hospitality and to all my collaborators, especially Concha Gonzalez-Garcia and her student Carlos Peรฑa for the re-analysis of solar neutrino data. This work was supported by DGICYT grant PB95-1077 and by the TMR contract ERBFMRX-CT96-0090. |
Heavy Quark Lifetimes, Mixing and CP Violation
## 1 Introduction
Much has happened this year in the subject of heavy quark studies. This brief paper cannot hope to cover all of the details of work in this area, but will focus instead on a few of the highlights that have emerged. First of all, recent high-precision measurements of charm lifetimes, particularly of the $`\mathrm{D}_\mathrm{s}`$, allow a better understanding of the mechanism of charm decays. Secondly, a new search for charm mixing at CLEO significantly improves upon the sensitivity of previous analyses, and has implications for the effects of new physics in the charm sector. Thirdly, significant work has continued in the measurement of $`\mathrm{B}_\mathrm{s}`$ mixing, which puts important constraints on the CKM parameter $`\mathrm{V}_{\mathrm{td}}`$. Finally, two items of relevance to the study of CP violation in the B system have recently been made available, which will be touched on briefly.
## 2 Lifetimes
### 2.1 charm lifetimes
To motivate the discussion of heavy quark lifetimes it is useful to recall an old puzzle in charm physics. When researchers first measured the charged and neutral D meson lifetimes, they discovered that the $`\mathrm{D}^+`$ lifetime was considerably longer (about a factor of 2.5) than the $`\mathrm{D}^0`$ lifetime. This result ran counter to expectations. The decays of both mesons were believed to be dominated by the spectator decay of the charm quark (Figures 1a and 1b), which suggested the two lifetimes should be nearly identical.
In the face of experimental evidence, several arguments were constructed as to why the two lifetimes might be different. First of all, the fact that there are two identical d quarks in the $`\mathrm{D}^+`$ final state (and not in the $`\mathrm{D}^0`$) might give rise to Pauli-type interference, which could extend the lifetime of the $`\mathrm{D}^+`$. Moreover, other decay mechanisms, shown in Figures 1c to 1f, were hypothesized, but these were expected to give only small contributions to the overall decay width. The $`\mathrm{D}^+`$ could decay via the weak annihilation of the charm and anti-down quarks (Figure 1c) but this decay is Cabibbo suppressed, and therefore expected to be only a fraction of the spectator amplitude. Analogously, the $`\mathrm{D}^0`$ could decay via the W exchange diagram of Figure 1d, providing another difference between the two meson lifetimes. Both the weak annihilation and the W exchange amplitudes were expected to be small due to helicity and color suppression. However, both forms of suppression could be circumvented by the emission of a soft gluon, so the strength of the suppression was in question. Finally, penguin diagrams such as Figures 1e and 1f could also contribute, but these diagrams are Cabibbo, helicity and color suppressed, and therefore received little attention.
In any case, the suggestion that mechanisms other than spectator quark decay could contribute significantly to the widths of the charm mesons provided motivation for further research in charm lifetimes. Primarily, this effort focussed on lifetime measurements of other weakly decaying charmed particles to use as comparison. Interference patterns were expected to be different for charm baryons, thereby providing a handle on the effects of Pauli-type interference. This is especially easy to see in the case of the $`\mathrm{\Omega }_c`$, where there are two identical strange quarks in the initial state. Moreover, W exchange is different for baryons, where the three quarks in the final state guarantee that the decay is neither helicity nor color suppressed. Finally, weak annihilation of charm and anti-strange quarks in $`\mathrm{D}_\mathrm{s}`$ decay is not Cabibbo suppressed, offering the possibility of studying this contribution.
Figure 2 shows where we stand today in the measurement of charm lifetimes. All seven weakly decaying charmed particles are shown. The unlabeled error bars give the world averages from the 1998 PDG review. Since then, new measurements have been made available on $`\mathrm{D}^0`$ and $`\mathrm{D}_\mathrm{s}`$ lifetimes from E791, on all the D mesons from CLEO, and preliminary results on $`\mathrm{D}_\mathrm{s}`$ and $`\mathrm{\Lambda }_\mathrm{c}`$ from FOCUS and SELEX. The most notable feature of the plot is that the $`\mathrm{D}_\mathrm{s}`$ and $`\mathrm{D}^0`$ lifetimes are measurably different. One year ago, these two lifetimes were nearly the same within errors. The new measurements not only reduce the error on the $`\mathrm{D}_\mathrm{s}`$ lifetime, but also shift the central value. An average of currently available data, including preliminary results from FOCUS, yields $`\tau _{\mathrm{D}_\mathrm{s}}/\tau _{\mathrm{D}^0}=1.211\pm 0.017`$.
The precision of these new lifetime measurements, and the promise of more to come from the Fermilab fixed target experiments, finally allows us to study charm decays in a manner we have been wanting to for 20 years. As a simple example of how these data can be used to unravel the contributions to charm particle decay, consider the following three-step exercise in estimating the Pauli interference, W exchange and weak annihilation contributions to D meson decays.
First of all, compare the doubly-Cabibbo-suppressed decay $`\mathrm{D}^+\to \mathrm{K}^+\pi ^+\pi ^{}`$ to its Cabibbo-favored counterpart $`\mathrm{D}^+\to \mathrm{K}^{}\pi ^+\pi ^+`$. Since the kinematics are nearly the same in the two cases, the decays differ in only two respects. First of all, the decay diagrams have different (well-known) weak couplings. Secondly, the Cabibbo-favored decay is subject to Pauli interference, while the DCS decay is not, since there are no identical particles in the DCS final state. The ratio of the two rates can therefore be expressed:
$$\frac{BR(\mathrm{D}^+\to \mathrm{K}^+\pi ^+\pi ^{})}{BR(\mathrm{D}^+\to \mathrm{K}^{}\pi ^+\pi ^+)}\approx \frac{\mathrm{\Gamma }_{SP}}{\mathrm{\Gamma }_{PI}}\times \mathrm{tan}^4\theta _C,$$
(1)
where $`\mathrm{\Gamma }_{PI}`$ represents a spectator decay rate for the charm quark that is subject to Pauli interference, while $`\mathrm{\Gamma }_{SP}`$ represents a spectator decay rate without interference. A combination of current measurements, including preliminary FOCUS data, yields $`BR(\mathrm{D}^+\to \mathrm{K}^+\pi ^+\pi ^{})/BR(\mathrm{D}^+\to \mathrm{K}^{}\pi ^+\pi ^+)=0.68\pm 0.09\%`$. Using this value, together with an estimate for $`\mathrm{tan}^4\theta _C`$ of $`2.56\times 10^{-3}`$, one can deduce the ratio: $`\mathrm{\Gamma }_{PI}/\mathrm{\Gamma }_{SP}=0.38`$.
In the second step, we can use the measured ratio of $`\mathrm{D}^+`$ to $`\mathrm{D}^0`$ lifetimes to relate the W exchange contribution to a standard spectator rate, according to
$$\frac{\tau _{\mathrm{D}^+}}{\tau _{\mathrm{D}^0}}=\frac{\mathrm{\Gamma }_{SP}+\mathrm{\Gamma }_{WX}+\mathrm{\Gamma }_{SL}}{\mathrm{\Gamma }_{PI}+\mathrm{\Gamma }_{SL}},$$
(2)
where $`\mathrm{\Gamma }_{SL}`$ represents the rate due to semileptonic charm decay and $`\mathrm{\Gamma }_{WX}`$ represents any additional contributions having to do with W exchange diagrams (including interference between spectator and W exchange amplitudes). Using the previous result for $`\mathrm{\Gamma }_{PI}/\mathrm{\Gamma }_{SP}`$ and a $`\mathrm{D}^0`$ semileptonic branching fraction of 13.4% (muonic and electronic combined), one can extract $`\mathrm{\Gamma }_{WX}/\mathrm{\Gamma }_{SP}=0.26`$.
Finally, to estimate the effects of the weak annihilation diagram, one can use the newest data to compare the $`\mathrm{D}_\mathrm{s}`$ lifetime (where weak annihilation is not Cabibbo suppressed) to the $`\mathrm{D}^0`$ lifetime:
$$\frac{\tau _{\mathrm{D}_\mathrm{s}}}{\tau _{\mathrm{D}^0}}=1.05\times \frac{\mathrm{\Gamma }_{SP}+\mathrm{\Gamma }_{WX}+\mathrm{\Gamma }_{SL}}{\mathrm{\Gamma }_{SP}+\mathrm{\Gamma }_{WA}+\mathrm{\Gamma }_{SL}},$$
(3)
from which one can derive $`\mathrm{\Gamma }_{WA}/\mathrm{\Gamma }_{SP}=0.07`$.
Needless to say, these results are only meant to be illustrative of the technique for isolating the various decay contributions, and cannot be taken too seriously by themselves. In practice, one must be attentive to the many uncertainties that feed into the calculations. In some cases, the results are quite sensitive to the input parameters. For example, a variation of $`\pm 10\%`$ in the semileptonic branching fraction alone (consistent with measured errors) leads to a range of answers: $`\mathrm{\Gamma }_{WX}/\mathrm{\Gamma }_{SP}`$=0.22 to 0.31 and $`\mathrm{\Gamma }_{WA}/\mathrm{\Gamma }_{SP}`$=0.04 to 0.11. In general, the lesson one should take away is that all of these contributions can be quite significant and that the current data on charm lifetimes should provide the basis for a better understanding of charm decays in the near future.
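The arithmetic of the three steps above is compact enough to script. The following minimal Python sketch reproduces it; the inputs are those quoted in the text except for the $`\tau _{\mathrm{D}^+}/\tau _{\mathrm{D}^0}`$ value of 2.55, which is assumed here as a representative PDG-era number, and, as stressed above, the outputs shift noticeably as the inputs move within their errors.

```python
# Back-of-the-envelope decomposition of D-meson widths into spectator (SP),
# Pauli-interference (PI), W-exchange (WX) and weak-annihilation (WA) pieces,
# following Eqs. (1)-(3).  All widths are expressed in units of Gamma_SP.

tan4_thetac = 2.56e-3   # tan^4(theta_C)
r_dcs       = 0.0068    # BR(D+ -> K+ pi+ pi-) / BR(D+ -> K- pi+ pi+)
tau_pd      = 2.55      # tau(D+)/tau(D0), assumed PDG-era value
tau_sd      = 1.211     # tau(Ds)/tau(D0)
br_sl       = 0.134     # D0 semileptonic branching fraction (e + mu)
phase_space = 1.05      # Ds/D0 phase-space factor from Eq. (3)

# Step 1, Eq. (1): the DCS/CF ratio fixes the interference-modified rate.
g_pi = tan4_thetac / r_dcs                           # Gamma_PI / Gamma_SP

# Step 2, Eq. (2): Gamma_SL = br_sl * Gamma(D0) implies
# s = Gamma_SL/Gamma_SP = k*(1 + x) with k = br_sl/(1 - br_sl); solve for x.
k = br_sl / (1.0 - br_sl)
x = tau_pd * g_pi / (1.0 - (tau_pd - 1.0) * k) - 1.0  # Gamma_WX / Gamma_SP
s = k * (1.0 + x)                                     # Gamma_SL / Gamma_SP

# Step 3, Eq. (3): solve for the weak-annihilation piece.
a = phase_space * (1.0 + x + s) / tau_sd - 1.0 - s    # Gamma_WA / Gamma_SP

print(f"Gamma_PI/Gamma_SP = {g_pi:.2f}")   # ~0.38
print(f"Gamma_WX/Gamma_SP = {x:.2f}")      # ~0.26
print(f"Gamma_WA/Gamma_SP = {a:.2f}")      # ~0.07
```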
### 2.2 bottom lifetimes
The current data on bottom decays are not far behind the measurements of charm decays. Figure 3 shows the most recent results on bottom lifetimes as reported by the B Lifetime Working Group. Of note is the recent SLD measurement of $`\tau _{\mathrm{B}^+}/\tau _{\mathrm{B}^0}`$, which is the most precise to date. New measurements of $`\mathrm{B}^+`$ and $`\mathrm{B}^0`$ lifetimes are also available from ALEPH and OPAL. Not shown in the diagram is the CDF measurement of the $`\mathrm{B}_\mathrm{c}`$ lifetime $`\tau _{\mathrm{B}_\mathrm{c}}=0.46_{-0.16}^{+0.18}\pm 0.03`$ ps.
The status of the predictions for bottom particle lifetimes is in somewhat better shape than for the charm system. Estimates are usually based on the operator product expansion:
$$\mathrm{\Gamma }=\frac{G_F^2m_Q^5}{192\pi ^3}(A_1+\frac{A_2}{m_Q^2}+\frac{A_3}{m_Q^3}+O(\frac{1}{m_Q^4})),$$
(4)
which calculates corrections in powers of one over the heavy quark mass. The $`A_1`$ term represents the spectator processes, $`A_2`$ parameterizes some differences between the baryons and mesons, and the $`A_3`$ term includes W exchange, weak annihilation and Pauli interference effects. In charm decays, the lighter charm quark mass makes the use of this expansion questionable, but in B decays the correction terms are about 10% of what they are in the charm sector, and this provides a plausible framework for calculation. Figure 4 shows a comparison of experimental measurements and theoretical predictions for several ratios of bottom lifetimes. The shaded area shows the predictions of reference . For the mesons, there is good agreement between theory and experiment, and the measured ratios are close to unity, as expected if the decays are dominated by spectator contributions. For the baryons, the observer might be inclined to wonder at the discrepancy between theory and experiment. It should be noted, however, that there are other predictions for bottom lifetimes that are more conservative and include the range of current measurements. This remains a point of controversy.
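The quoted factor of ten follows from the $`1/m_Q^2`$ scaling of the leading corrections; taking representative quark masses $`m_c\approx 1.5`$ GeV and $`m_b\approx 4.8`$ GeV (values assumed here for illustration),

$$\frac{1/m_b^2}{1/m_c^2}=\left(\frac{m_c}{m_b}\right)^2\approx \left(\frac{1.5}{4.8}\right)^2\approx 0.1.$$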
The next few years will see important new measurements in this sector. Some of the most interesting should come from Run II at the Tevatron. These will provide more precise measurements of the $`\mathrm{B}_\mathrm{s}`$ and baryon lifetimes that are needed to reach a general understanding of bottom decays.
## 3 Meson Mixing
Before launching into the latest results on neutral meson mixing it is useful to begin with a review of the properties of the different systems. Figure 5 shows schematic mass plots for all four neutral systems that are subject to weak flavor mixing. In each case, the scale is artfully chosen to emphasize the mass and width differences between the physical eigenstates. Figure 5a shows the neutral kaon system, with the broad peak of the $`\mathrm{K}_\mathrm{s}`$ and, at essentially the same mass, the narrow spike of the $`\mathrm{K}_\mathrm{L}`$, which has a decay width 580 times smaller. Figure 5b shows the charm $`\mathrm{D}^0`$ system. Although only one curve appears visible, both states of the neutral D have been plotted. The Standard Model predicts an immeasurably small difference in width and mass for the two charm D mesons. Figure 5c depicts the $`\mathrm{B}_\mathrm{d}`$ mesons, with essentially the same widths and a small but noticeable difference in mass. Finally, Figure 5d provides an educated guess for the $`\mathrm{B}_\mathrm{s}`$ system. There is expected to be a small difference in width between the two mesons (20% difference in the plot) and a substantial difference in mass. Interestingly, these four systems appear to cover the range of possibilities for mixing.
In order to identify the motivation for studies in meson mixing, it is necessary to understand the source of some of the differences in Figure 5. Often, the degree of mixing of a neutral meson system is parameterised by
$$\mathrm{r}_{\mathrm{mix}}\equiv \frac{\mathrm{\Gamma }(M\to \overline{M}\to f)}{\mathrm{\Gamma }(\overline{M}\to f)},$$
(5)
which describes the rate for a particle to mix and then decay to a particular final state, relative to the rate for the particle to decay to that final state without mixing. In order to calculate the probability of mixing, one usually begins by calculating the amplitudes of flavor-changing box diagrams such as the ones shown in Figure 6. On the left is a mixing diagram for the $`\mathrm{D}^0`$ system. Many such box diagrams contribute, with different intermediate quark propagators. In the D system, the intermediate propagators are all down-type quarks. In the B system the intermediate propagators are all up-type quarks. Calculation for these diagrams shows that the amplitude is proportional to the mass-squared of the intermediate quark. Mixing in the B system is therefore dominated by diagrams with heavy internal top quarks, and consequently the mixing rate is large in this system ($`\mathrm{r}_{\mathrm{mix}}\sim 1`$). In D mixing, one would expect that diagrams with the heavy bottom quarks would dominate, but these are strongly suppressed by CKM couplings, and it is actually diagrams with internal strange quarks that dominate. Since the strange quark mass is so much smaller than the top quark mass, this contribution to D mixing is correspondingly smaller than the top quark contribution to B mixing. Moreover, in the charm system it is necessary to feed the heavy charm quark 4-momentum through the light strange quark internal propagators, pulling them off shell in the process and contributing another suppression factor of the form $`m_s^2/m_c^2`$. When all is said and done, mixing from SM box contributions to the D system is expected to be extremely small, leading to $`\mathrm{r}_{\mathrm{mix}}\sim 10^{-10}`$. Other processes may contribute from on-shell intermediate states, nearby resonances or from penguin diagrams, but these too are predicted to be very small. Standard Model mixing in the D system is therefore expected to be immeasurable by any current experiment.
The profound difference between mixing in the bottom system and mixing in the charm system is what drives the experimental approach to these systems. In the B system, where mixing from the Standard Model is large, mixing measurements are used to study CKM couplings (most notably $`\mathrm{V}_{\mathrm{td}}`$). In the D system, where SM contributions are small, searches for mixing are used to explore possible contributions from new physics.
### 3.1 D mixing
The traditional method of observing mixing involves identifying the flavor of the meson both at production and at decay. In this way, it is possible to determine if the meson has mixed during the interim. In the charm system, the most popular means of tagging the produced D is to reconstruct $`\mathrm{D}^{*+}`$($`\mathrm{D}^{*-}`$) decays to $`\pi ^+\mathrm{D}^0`$($`\pi ^{}\overline{\mathrm{D}^0}`$). In this case, the charge of the pion tells whether the initial D meson is a $`\mathrm{D}^0`$ or a $`\overline{\mathrm{D}^0}`$. The decay mode of the D can subsequently be used to determine the flavor of the D at decay. As an example, the left diagram in Figure 7 shows how a $`\mathrm{D}^0`$ can mix via a box diagram into a $`\overline{\mathrm{D}^0}`$, which then decays via a spectator process to $`\mathrm{K}^+\pi ^{}`$ or $`\mathrm{K}^+l^{}\overline{\nu }`$. Experimentally, if the sign of the reconstructed kaon is the same as the sign of the pion from the $`\mathrm{D}^{*}`$ decay then the event is termed a "wrong-sign" event, and is a candidate for D mixing.
Unfortunately, for hadronic final states, there are two means for producing wrong-sign events. The first involves mixing, as in the left plot in Figure 7. The second is doubly-Cabibbo-suppressed decay, as in the right plot of Figure 7. Although the DCS rate is expected to be only about 1% of the Cabibbo-favored decay rate, it is an enormous background when compared to the extremely small mixing signal expected. Therefore, the wrong-sign rate for hadronic final states is a combination of three terms: mixing, DCS decay, and interference between the two. Equation 6 shows the time evolution of hadronic wrong-sign decays in the limit of small mixing:
$$\mathrm{\Gamma }(\mathrm{D}^0\to \mathrm{K}^+\pi ^{})\propto e^{-\mathrm{\Gamma }t}[4|\lambda |^2+(\mathrm{\Delta }M^2+\frac{\mathrm{\Delta }\mathrm{\Gamma }^2}{4})t^2+(2Re\lambda \mathrm{\Delta }\mathrm{\Gamma }+4Im\lambda \mathrm{\Delta }M)t],$$
(6)
where $`\lambda `$ quantifies the relative strength of DCS and CF amplitudes. The first term, proportional to $`e^{-\mathrm{\Gamma }t}`$, represents the pure DCS decay rate. The second term, proportional to $`t^2e^{-\mathrm{\Gamma }t}`$, represents mixing, which can have contributions from both mass and width differences of the eigenstates. The third term, proportional to $`te^{-\mathrm{\Gamma }t}`$, represents the interference between mixing and DCS amplitudes. In contrast, wrong-sign semileptonic final states are not produced by DCS decays, and the time evolution of those states is simply:
$$\mathrm{\Gamma }(\mathrm{D}^0\to \mathrm{K}^+l^{}\nu )\propto e^{-\mathrm{\Gamma }t}(\mathrm{\Delta }M^2+\frac{\mathrm{\Delta }\mathrm{\Gamma }^2}{4})t^2$$
(7)
This mode is obviously cleaner theoretically, but more challenging experimentally because of the missing neutrino.
To set the scale for the latest measurements it is useful to review some previous results. One of the most recent (and least ambiguous) limits on mixing comes from a study at the FNAL E791 experiment, which examines semileptonic final states. In that study, the 90% C.L. limit on mixing is $`\mathrm{r}_{\mathrm{mix}}<0.50\%`$, a value that is typical of current measurements. There is also an older CLEO measurement that gives a wrong-sign signal of $`\mathrm{r}_{\mathrm{ws}}=0.77\pm 0.25\pm 0.25\%`$. However, since that result did not discriminate between DCS and mixing (there was no vertex chamber for measuring decay lengths in the old detector), it is popularly attributed to DCS decays.
This summer, a new CLEO study that examines $`\mathrm{D}^{*}\to \mathrm{D}^0\pi \to (\mathrm{K}\pi )\pi `$ decays shows a dramatic improvement in sensitivity over the older results. This improvement is driven primarily by two effects: excellent mass resolution, which reduces non-$`\mathrm{D}^{*}`$ backgrounds, and high efficiency at short decay times, which helps to distinguish between DCS decays and mixing. Figure 8 shows plots of the kinetic energy of the $`\mathrm{D}^0\pi `$ system, which should peak at 6 MeV for decays from the $`\mathrm{D}^{*}`$. About 16000 right-sign signal events are apparent, with a mass resolution of 190 keV. This impressive resolution is due in part to a new trick being used by CLEO analysts. The slow pion from the $`\mathrm{D}^{*}`$ decay is required to come from the beam ribbon, providing an extra vertex constraint that is an effective aid to improving the momentum resolution.
The right side of the figure shows the CLEO D$`\pi `$ kinetic energy distribution for the wrong-sign decays, with about 60 events in the signal peak. A little more than half of the background comes from $`\mathrm{D}^0`$ to K$`\pi `$ decays combined with a random pion to give a wrong-sign $`\mathrm{D}^{*}`$ candidate decay. Smaller background contributions come from other charm decays and uds events. From these results, CLEO calculates a wrong-sign ratio of $`r_{ws}=0.34\pm 0.07\pm 0.06\%`$.
To disentangle DCS decays from the mixing contribution, the decay time distribution is fit to the three terms given in Equation 6. The results are expressed in terms of the parameters $`x^{\prime }`$ and $`y^{\prime }`$, which are related to the mass and width differences ($`\mathrm{\Delta }\mathrm{M}`$ and $`\mathrm{\Delta }\mathrm{\Gamma }`$) of the physical eigenstates, and the relative phase between DCS and CF decay amplitudes ($`\delta `$):
$$x^{\prime }=\frac{\mathrm{\Delta }\mathrm{M}}{\mathrm{\Gamma }}\mathrm{cos}\delta +\frac{\mathrm{\Delta }\mathrm{\Gamma }}{2\mathrm{\Gamma }}\mathrm{sin}\delta $$

$$y^{\prime }=\frac{\mathrm{\Delta }\mathrm{\Gamma }}{2\mathrm{\Gamma }}\mathrm{cos}\delta -\frac{\mathrm{\Delta }\mathrm{M}}{\mathrm{\Gamma }}\mathrm{sin}\delta .$$

(8)
The CLEO 95% C.L. limits are reported as $`|x^{\prime }|<3.2\%`$ and $`-5.9\%<y^{\prime }<0.3\%`$. Assuming that the phase angle $`\delta `$ is approximately zero, one can relate these limits to 95% C.L. limits on $`\mathrm{r}_{\mathrm{mix}}`$ due to non-zero $`\mathrm{\Delta }\mathrm{\Gamma }`$ ($`\mathrm{r}_{\mathrm{mix}}<0.17\%`$) and non-zero $`\mathrm{\Delta }\mathrm{M}`$ ($`\mathrm{r}_{\mathrm{mix}}<0.05\%`$).
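These conversions follow from the standard relation $`\mathrm{r}_{\mathrm{mix}}\simeq (x^2+y^2)/2`$ (not written out explicitly above), since for $`\delta \approx 0`$ the primed and unprimed parameters coincide. A minimal sketch of the arithmetic:

```python
# Convert the CLEO 95% C.L. limits on x' and y' into limits on r_mix,
# assuming delta ~ 0 so that (x', y') ~ (x, y) and r_mix = (x^2 + y^2)/2.
x_prime_max = 0.032   # |x'| < 3.2%
y_prime_max = 0.059   # |y'| bounded by the -5.9% edge of the allowed range

print(f"r_mix < {100 * x_prime_max**2 / 2:.2f}%  (from Delta M)")      # ~0.05%
print(f"r_mix < {100 * y_prime_max**2 / 2:.2f}%  (from Delta Gamma)")  # ~0.17%
```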
The reader may note that the $`\mathrm{r}_{\mathrm{mix}}`$ limit from the constraint on $`x^{\prime }`$ is an order of magnitude more restrictive than previous measurements. The limit from $`y^{\prime }`$ is not as restrictive only because the central value for $`y^{\prime }`$ is about 1.8 standard deviations away from zero. Although this discrepancy is not terribly significant, it is interesting in its own right. Recently, a direct search by E791 for non-zero $`\mathrm{\Delta }\mathrm{\Gamma }`$ has also been performed by looking for a difference in lifetimes for decays to different final states, yielding a sensitivity to $`\mathrm{\Delta }\mathrm{\Gamma }`$ comparable to the new CLEO limit. Future work along the same lines can be expected from several experiments.
This subject will be pursued vigorously in the next few years. The FOCUS experiment at FNAL has already shown preliminary results on the decay chain $`\mathrm{D}^{*}\to \mathrm{D}^0\to \mathrm{K}\mu \nu `$. From this mode alone, FOCUS expects to be able to set a limit of about $`\mathrm{r}_{\mathrm{mix}}<0.12\%`$ if there is no indication of mixing. In the near future, B factories will also contribute significantly to these studies. A design-luminosity year at BaBar will produce about $`10^7`$ $`\mathrm{D}^{*}\to \mathrm{D}^0\pi `$ decays, which should also lead to some interesting results.
### 3.2 B mixing
As was suggested earlier, the primary purpose of studying B mixing is to explore CKM parameters, $`\mathrm{V}_{\mathrm{td}}`$ in particular. This is easily illustrated with Figure 9, which shows the triangle corresponding to the CKM unitarity condition $`\mathrm{V}_{\mathrm{ud}}\mathrm{V}_{\mathrm{ub}}^{*}+\mathrm{V}_{\mathrm{cd}}\mathrm{V}_{\mathrm{cb}}^{*}+\mathrm{V}_{\mathrm{td}}\mathrm{V}_{\mathrm{tb}}^{*}=0`$. The apex of the triangle is constrained by measurements of $`\mathrm{V}_{\mathrm{ub}}/\mathrm{V}_{\mathrm{cb}}`$ from semileptonic B decays, measurements of CP violation in the kaon system, measurements of $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$ from B mixing, and the lower limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ from $`\mathrm{B}_\mathrm{s}`$ mixing. The length of the upper right side of the triangle is given by $`\mathrm{V}_{\mathrm{td}}\mathrm{V}_{\mathrm{tb}}^{*}/\mathrm{V}_{\mathrm{cd}}\mathrm{V}_{\mathrm{cb}}^{*}`$. Since $`\mathrm{V}_{\mathrm{tb}}`$ is close to unity, $`\mathrm{V}_{\mathrm{cd}}`$ is approximately $`\mathrm{sin}\theta _c`$ in the SM, and $`\mathrm{V}_{\mathrm{cb}}`$ is well measured from semileptonic B decays, $`\mathrm{V}_{\mathrm{td}}`$ remains the limiting factor in determining the length of that side of the triangle. A precise measure of that side of the triangle would provide excellent complementary information to the angle measurements expected from B factory measurements. Checking consistency in the set of measurements that over-constrain the triangle is of great interest in the search for new physics. In particular, new measurements of $`\mathrm{V}_{\mathrm{td}}`$ and $`\mathrm{sin}2\beta `$ could constrain the apex of the triangle independently of other data.
Equation 9 shows the relationship between the mass difference $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$ measured from $`\mathrm{B}_\mathrm{d}`$ mixing and the CKM parameter $`\mathrm{V}_{\mathrm{td}}`$. Although the measurement of $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$ is now quite good (a total of 26 measurements have been made from LEP, SLD and CDF), the theoretical uncertainties on the many coefficients in Equation 9 lead to roughly a 20% uncertainty in $`\mathrm{V}_{\mathrm{td}}`$. However, simultaneous measurements of $`\mathrm{B}_\mathrm{d}`$ and $`\mathrm{B}_\mathrm{s}`$ mixing can give much better precision on $`\mathrm{V}_{\mathrm{td}}`$ through the ratio of mass differences shown in Equation 10. In this case, many uncertainties cancel and there remains about a 5% theoretical uncertainty on the extraction of $`\mathrm{V}_{\mathrm{td}}`$.
$$\mathrm{\Delta }\mathrm{M}_\mathrm{d}=\frac{G_F^2}{6\pi ^2}m_{\mathrm{B}_\mathrm{d}}f_{\mathrm{B}_\mathrm{d}}^2B_{\mathrm{B}_\mathrm{d}}\eta _{QCD}F(m_t^2)|\mathrm{V}_{\mathrm{td}}\mathrm{V}_{\mathrm{tb}}^{*}|^2$$
(9)
$$\frac{\mathrm{\Delta }\mathrm{M}_\mathrm{s}}{\mathrm{\Delta }\mathrm{M}_\mathrm{d}}=\frac{m_{\mathrm{B}_\mathrm{s}}f_{\mathrm{B}_\mathrm{s}}^2B_{\mathrm{B}_\mathrm{s}}}{m_{\mathrm{B}_\mathrm{d}}f_{\mathrm{B}_\mathrm{d}}^2B_{\mathrm{B}_\mathrm{d}}}\left|\frac{\mathrm{V}_{\mathrm{ts}}}{\mathrm{V}_{\mathrm{td}}}\right|^2=(1.15\pm 0.05)^2\left|\frac{\mathrm{V}_{\mathrm{ts}}}{\mathrm{V}_{\mathrm{td}}}\right|^2$$
(10)
To date, $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ has not been measured. Evidence of $`\mathrm{B}_\mathrm{s}`$ mixing is clear, but only lower limits on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ have been determined. Nonetheless, constraints on the unitarity triangle from other measurements suggest that it may be just beyond the current measured limits. Assuming the SM, the contours in Figure 9 show the present estimate of the apex of the unitarity triangle. The central measured value of $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$ suggests the apex should lie on the solid quarter circle centered at $`(\overline{\rho },\overline{\eta })=(1,0)`$. The limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ ($`\mathrm{\Delta }\mathrm{M}_\mathrm{s}>12.4\mathrm{ps}^{-1}`$ is used in the figure) corresponds to the dashed quarter circle just outside the $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$ curve. Higher limits on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ push the circle to smaller radius, further constraining the apex of the triangle.
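Equation 10 turns any lower limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ into a lower bound on $`|\mathrm{V}_{\mathrm{ts}}/\mathrm{V}_{\mathrm{td}}|`$. The sketch below does this for the limit quoted above; the value of $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$ is not quoted in the text, so the roughly world-average figure used here is an assumption.

```python
import math

# Lower bound on |V_ts/V_td| implied by Equation 10 and the Delta M_s limit.
dMs_limit = 12.4   # ps^-1, the 95% C.L. limit used in Figure 9
dMd       = 0.47   # ps^-1, assumed world-average value (not quoted in the text)
xi        = 1.15   # the (1.15 +/- 0.05) factor of Equation 10

print(f"|V_ts/V_td| > {math.sqrt(dMs_limit / dMd) / xi:.1f}")  # roughly 4.5
```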
Once again, the identification of mixed events involves tagging the flavor of the meson both at production and at decay, and measuring the time evolution of mixing. The mixing frequency determines $`\mathrm{\Delta }M`$. Several techniques have been utilized for $`\mathrm{B}_\mathrm{s}`$ decays. The initial state can be tagged by examining the charge of leptons or kaons in the opposite hemisphere, by examining an associated kaon in the same hemisphere, by calculating a weighted jet charge for either jet, or by using the jet angles in the case of polarized beams. The decaying meson can be tagged by using charged leptons in the final state, by using partially or fully reconstructed D mesons, or by reconstructing two vertices in the decay hemisphere (associated with bottom and charm decay).
The four keys to a precise measure of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ are excellent proper time resolution, high purity of the $`\mathrm{B}_\mathrm{s}`$ decay sample (the worst backgrounds tend to come from other B decays), accurate tagging of the initial and final state mesons, and as always, high statistics. In order to illustrate the challenge to experimentalists, Figure 10 shows an idealized experiment with infinite statistics and no background for $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}=10\mathrm{ps}^1`$. Even in this case, the measurement is not easy. The vertical axis measures the fraction of events identified as mixed, as a function of the proper decay time of the $`\mathrm{B}_\mathrm{s}`$. In a perfect experiment, this curve should start at zero and oscillate between zero and one. The oscillation never makes it all the way to zero or all the way to one because the mistag rate (25% in the figure) dilutes the measurement. This effect is exacerbated by smearing due to decay length resolution (200 $`\mu `$m in the figure). At higher values of proper decay time, the amplitude of the oscillation degrades because of increased uncertainty on the decay time due to the boost resolution (10% assumed in the figure).
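The dilution and smearing described above are easy to reproduce numerically. The sketch below builds the ideal mixed fraction, applies the 25% mistag rate, and smears with a decay-time resolution that grows with proper time; the translation of the 200 $`\mu `$m vertex and 10% boost resolutions into an assumed $`\sigma _t`$ is a rough stand-in, not the experiments' actual resolution functions.

```python
import numpy as np

# Toy version of the idealized B_s oscillation experiment of Figure 10.
# The mapping of the vertex/boost resolutions onto sigma_t is an assumption.
dMs, mistag = 10.0, 0.25        # ps^-1 and 25% mistag rate, as in the text
sigma0, boost_res = 0.1, 0.10   # assumed core resolution (ps) and 10% boost term

t = np.linspace(0.0, 2.0, 400)                    # proper decay time, ps
ideal = 0.5 * (1.0 - np.cos(dMs * t))             # perfect mixed fraction: 0 <-> 1
diluted = mistag + (1.0 - 2.0 * mistag) * ideal   # mistag squeezes it to 0.25 <-> 0.75

# Gaussian smearing whose width grows with t (the boost-resolution term)
sigma_t = np.sqrt(sigma0**2 + (boost_res * t) ** 2)
smeared = np.array([np.average(diluted, weights=np.exp(-0.5 * ((t - ti) / si) ** 2))
                    for ti, si in zip(t, sigma_t)])

print(f"mixed fraction swings between {smeared.min():.2f} and {smeared.max():.2f}")
```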
Figure 11 shows the combined results on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ from LEP and CDF. In order to understand this plot, it is necessary to recognize that the probability for mixing is proportional to $`1-\mathrm{cos}\mathrm{\Delta }\mathrm{M}_\mathrm{s}t`$, where $`t`$ is the $`\mathrm{B}_\mathrm{s}`$ decay time. The figure shows the results of fits for many different values of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ to oscillation data from many experiments. For each point, the data are fit to a function proportional to $`1-A\mathrm{cos}\mathrm{\Delta }\mathrm{M}_\mathrm{s}t`$, where the oscillation amplitude $`A`$ is a fit parameter. If mixing occurs at that particular value of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$, then the fitted value of $`A`$ should be unity. At other values of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$, the fitted value of $`A`$ should be close to zero, consistent with no oscillation. In short, the plot can be thought of as a Fourier analysis of oscillation data, with the vertical axis showing the amplitude $`A`$ as a function of frequency. The limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ from this plot alone is $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}>13.2\mathrm{ps}^{-1}`$ at 95% C.L.
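A stripped-down version of this amplitude scan is sketched below: toy decays are generated at an assumed true frequency, and at each trial $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ the amplitude $`A`$ is fitted to $`\frac{1}{2}(1-A\mathrm{cos}\mathrm{\Delta }\mathrm{M}_\mathrm{s}t)`$ by least squares. The sample size, true frequency and width are assumptions of the toy, not experimental inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate toy B_s decays with an assumed true oscillation frequency.
dMs_true, Gamma_s = 5.0, 0.68          # ps^-1 (toy values, assumptions)
t = rng.exponential(1.0 / Gamma_s, 20000)
p_mix = 0.5 * (1.0 - np.cos(dMs_true * t))
mixed = rng.random(t.size) < p_mix

# Amplitude scan: at each trial frequency, fit A in 0.5*(1 - A cos(dMs t)).
# The model is linear in A, so the least-squares fit reduces to one division.
for dMs_trial in (2.0, 5.0, 8.0):
    c = np.cos(dMs_trial * t)
    A_fit = np.sum(c * (1.0 - 2.0 * mixed)) / np.sum(c * c)
    print(f"dMs = {dMs_trial:4.1f} ps^-1 -> fitted A = {A_fit:+.2f}")
```

The fitted amplitude comes out near unity only at the true frequency, which is exactly the behavior read off Figures 11-13.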
Figure 12 is an analogous plot for new data from SLD. These results show dramatic improvement since Moriond 99, driven primarily by new tracking and substantial improvements in decay length resolution. Three separate analyses are employed, corresponding to reconstructed final states of a charm vertex plus lepton, a high-momentum lepton, or a pair of vertices displaced from the primary vertex. Two analyses not shown, but expected in the summer of 2000, search for $`\mathrm{B}_\mathrm{s}`$ decays using final states that include an exclusively reconstructed $`\mathrm{D}_\mathrm{s}`$ decay, or a lepton and charged kaon. Once again, the plot shows the fitted value of the oscillation amplitude $`A`$ as a function of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$. At low values of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ the uncertainty in $`A`$ is about a factor of two larger than the results from combined CDF and LEP data. However, at high values of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$, the uncertainties are comparable, so that these data make a significant contribution to the world average.
Figure 13 shows the combined data from all experiments, updated as of December 99. A total of 11 analyses contribute. The 95% C.L. lower limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ from these data is $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}>14.3\mathrm{ps}^{-1}`$, up from the $`12.4\mathrm{ps}^{-1}`$ reported at EPS 99 (Tampere). The new higher limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ provides further constraint on the unitarity triangle of Figure 9. The dashed circle from the $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ limit now moves just inside the curve that represents the central value of $`\mathrm{\Delta }\mathrm{M}_\mathrm{d}`$. This change clips off a significant fraction of the previously allowed area for the apex of the triangle.
In the next few years, further work in this area should prove very interesting. Figure 14 gives an idea of what we should expect from future studies of $`\mathrm{B}_\mathrm{s}`$ mixing. The vertical scale of the plot is called โ€œsignificanceโ€ and is the inverse uncertainty in the amplitude parameter $`A`$ of Figures 11-13. It can therefore be interpreted as the analyzing power for discriminating between $`A=0`$ and $`A=1`$. The squares map the significance as a function of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ from the combination of all the LEP experiments. The circles plot the significance from the SLD data set when all five analyses are included. Both curves cross the 95% C.L. limit at about $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}=13\mathrm{ps}^{-1}`$.
The comparison of these two curves is particularly interesting in two respects. First of all, it is surprising that they appear on the same graph when one considers that the LEP data sample represents 40 times more luminosity than the SLD data sample. Secondly, the shapes of the two curves are significantly different. The SLD curve is dramatically flatter than the LEP curve, and at high values of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ the SLD significance even wins out over LEP. The reason for both these features is the very precise vertex resolution of the SLD detector. This allows SLD researchers to do more inclusive analyses, which are more efficient, in order to compete with the statistics of LEP. It also allows SLD to retain good sensitivity to the very fast oscillation at high values of $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$. The insert in the figure shows the vertex resolution achieved by the ongoing analysis that tags $`\mathrm{B}_\mathrm{s}`$ decays via a fully reconstructed $`\mathrm{D}_\mathrm{s}`$ decay. The 46 $`\mu `$m resolution for the central Gaussian (60% of the area) is roughly four times better than the resolution achieved in a typical LEP study.
A competitive race will continue between LEP and SLD for the next year or so as each group tries to improve the sensitivity of the measurements. This will be done in the hopes of actually seeing the $`\mathrm{B}_\mathrm{s}`$ oscillation, which is predicted by the other measurements of Figure 9 as being just beyond the limits of current analysis. However, if the oscillation is not seen at LEP or SLD in the next year, new players will soon dominate the field. The triangles in the upper right corner of Figure 14 show what to expect from the CDF experiment after Run II. That curve crosses the 95% C.L. line around $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}=50\mathrm{ps}^{-1}`$. Assuming the Tevatron experiments can trigger efficiently on displaced vertices, those experiments will dominate the $`\mathrm{B}_\mathrm{s}`$ oscillation measurements in just a few years. It is especially interesting to note that the future Tevatron data should either confirm the SM estimate of $`\mathrm{V}_{\mathrm{td}}`$ or prove that the SM is incorrect because the triangle doesnโ€™t close (see also Michael Peskinโ€™s comment at the end of this paper).
## 4 CP Violation
The recent turn-on of two new B factories has focused a lot of attention on the already hot topic of CP violation. This year there are two items relevant to CPV in the B system that are worthy of note. Both of these topics are covered by other speakers at this conference, so the summary here will be brief. The first is an update on searches for CPV in $`\mathrm{B}^0\to \psi \mathrm{K}_\mathrm{s}`$ (see also M. Pauliniโ€™s contribution to these proceedings). The second is a pair of measurements of $`\mathrm{\Gamma }(\mathrm{B}^0\to \pi ^+\pi ^{-})`$ and $`\mathrm{\Gamma }(\mathrm{B}^0\to \mathrm{K}^+\pi ^{-})`$ from CLEO that has implications for future B factory measurements of the mixing angle $`\alpha `$ (see also R. Polingโ€™s contribution to these proceedings).
It is common knowledge that CP violation in a decay rate is due to the interference between two or more amplitudes with different CP-conserving and different CP-violating phases. In particular, if both amplitudes are pure decay amplitudes, then the CP violation is called โdirect CPVโ. In this case, CP violation is constant in time and can be measured via integrated asymmetries. On the other hand, if one of the amplitudes involves mixing then the violation is called โindirect CPVโ, and the asymmetry evolves in time. In the charm system, where mixing is expected to be negligible, the search for CP violation is generally a search for direct CP violation in integrated asymmetries. In the bottom system, where mixing is large, time dependent asymmetries are used to quantify indirect CP violation.
In the bottom system, measurements of CP asymmetries are used primarily to explore the CKM matrix. From a theoristโ€™s viewpoint, final states that are dominated by tree level amplitudes and are CP eigenstates are the cleanest modes for extracting the CKM parameters. Two popular examples of such modes are $`\mathrm{B}^0\to \psi \mathrm{K}_\mathrm{s}`$ (which is often considered the golden mode for measuring $`\beta `$) and $`\mathrm{B}^0\to \pi ^+\pi ^{-}`$ (which is often discussed as a method to measure $`\alpha `$). Both of these modes have played important roles in recent results.
This year, the CDF experiment has updated a CP asymmetry measurement of $`\mathrm{B}^0`$($`\overline{\mathrm{B}^0}`$) decays to $`\psi \mathrm{K}_\mathrm{s}`$ and a new result from ALEPH for the same final state has also become available. Preliminary indications from CDF were available already last year, but the signal was marginal, and this year researchers have worked hard to squeeze out the last bit of sensitivity from the data. Of the roughly 400 reconstructed $`\psi \mathrm{K}_\mathrm{s}`$ events in the current CDF data sample, about half of them occur within the acceptance of the vertex detector, where the decay lengths are well-measured. These events are used to measure a time dependent asymmetry in search of CP violation. The other half of the events do not have well-measured decay times and are used in the measure of a time integrated asymmetry. Figure 15 shows the time dependence of the $`\mathrm{B}^0`$/$`\overline{\mathrm{B}^0}`$ asymmetry with the best fit for an oscillation on the left, and the single data point of the time integrated asymmetry on the right. The two results are combined to get a measure of $`\mathrm{sin}2\beta _{CDF}=0.79_{-0.44}^{+0.41}`$. ALEPH performs a similar time-dependent asymmetry measurement on 23 well-reconstructed decays to get $`\mathrm{sin}2\beta _{ALEPH}=0.93_{-0.88-0.24}^{+0.64+0.36}`$. Together, the two experiments constrain $`\mathrm{sin}2\beta `$ to be greater than zero with 98.5% probability. Although this result does not provide a very meaningful test of the previous constraints on the unitarity triangle, it does provide an indication of CPV and some reassurance that this measurement will be a good target for B factory studies in the near future.
The other interesting results this year of relevance to CPV are new measurements from the CLEO collaboration for the branching fractions of $`\mathrm{B}^0\to \pi ^+\pi ^{-}`$ and $`\mathrm{B}^0,\mathrm{B}^+\to \mathrm{K}\pi `$. Since the $`\mathrm{K}\pi `$ final states are believed to be dominated by penguin diagrams, these decays offer a measure of the importance of penguin contributions. CLEO measurements for $`\mathrm{B}\to \mathrm{K}\pi `$ range from $`1.2`$ to $`1.9\times 10^{-5}`$, slightly larger than the $`\mathrm{B}\to \pi \pi `$ branching fractions, which are believed to be dominated by tree diagrams. The relatively large $`\mathrm{K}\pi `$ branching fractions therefore indicate that penguin diagrams play an important role in these decays. In particular, one can use the measured $`\mathrm{B}^0\to \mathrm{K}^+\pi ^{-}`$ and $`\mathrm{B}^0\to \pi ^+\pi ^{-}`$ rates to get a rough estimate of the $`\pi ^+\pi ^{-}`$ penguin amplitude relative to the $`\pi ^+\pi ^{-}`$ tree amplitude. Following the method outlined in the BaBar physics book, section 6.1.2, one comes to:
$$0.25<\frac{A_{penguin}^{\pi \pi }}{A_{tree}^{\pi \pi }}<0.57.$$
(11)
Although the precise numerical result should not be taken too seriously, it does point out that penguin amplitudes are likely to be significant in the $`\pi ^+\pi ^{-}`$ decay mode. Consequently, the study of CPV in the $`\pi ^+\pi ^{-}`$ final state must include interference between tree amplitudes, mixing amplitudes, and penguin amplitudes. This naturally makes the extraction of $`\alpha `$ much more difficult. As has been pointed out by London and Gronau, the $`\pi ^+\pi ^{-}`$ asymmetry can still be used in combination with branching fraction measurements of $`\mathrm{B}^+\to \pi ^+\pi ^0`$ and $`\mathrm{B}^0\to \pi ^0\pi ^0`$ to measure $`\alpha `$, but this is a considerably harder problem, with new ambiguities. The general conclusion is that measurements of $`\alpha `$ at the B factories will be a challenge.
## 5 Summary
This paper has examined four topics of recent research in heavy quark decays. In each case, interesting new results are available this year, and these point the way to even better results in the near future. First of all, new measurements of the $`\mathrm{D}_\mathrm{s}`$ lifetime provide useful data for improving our understanding of the mechanisms of charm decay. In the near future, precision measurements of charm baryon lifetimes from the FNAL fixed target experiments FOCUS and SELEX should help complete that understanding. Secondly, results from a new CLEO search for charm mixing have just been released, which improve the sensitivity to charm mixing by about an order of magnitude. In the next few years, efforts at FOCUS and at the B factories will further the search. Thirdly, attempts to measure the $`\mathrm{B}_\mathrm{s}`$ mixing frequency have improved this year, resulting in a higher limit on $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$. Efforts at LEP and SLD will continue for at least another year, with the hope of seeing the oscillation. If it is not found in the next year, Run II data from the Tevatron experiments are expected to extend the reach in $`\mathrm{\Delta }\mathrm{M}_\mathrm{s}`$ by more than a factor of two. In time, this will either confirm the estimates of $`\mathrm{V}_{\mathrm{td}}`$, or it will point to an interesting conflict within the Standard Model. Finally, efforts have begun to measure CP asymmetries in the B system. New results at CDF and ALEPH suggest that $`\mathrm{sin}2\beta `$ is within the expected range and should be an easy target for B factory measurements. On the other hand, measurements of $`\alpha `$ via CP asymmetries of $`\mathrm{B}^0\to \pi ^+\pi ^{-}`$ may prove to be more difficult in light of the new CLEO measurements of the $`\mathrm{B}^0\to \mathrm{K}\pi `$ and $`\mathrm{B}^0\to \pi \pi `$ branching fractions, which suggest that penguin contributions play an important role in these decay modes.
Discussion
Michael Peskin (SLAC): There is a comment that is implicit in your discussion of Figure 9, but it is nice to make it explicit. There is one leg of the unitarity triangle which is determined by $`V_{ub}`$. This determination is independent of possible new physics. The long leg on the right is determined by $`\mathrm{\Delta }m_s/\mathrm{\Delta }m_d`$. If indeed $`\mathrm{\Delta }m_s`$ turns out to be of the order of 16 $`\mathrm{ps}^{-1}`$, then these two measurements would already give an accurate determination of the unitarity triangle in the context of the standard CKM model. On the other hand, this suggestion may turn out to be wrong. But if you made the right hand leg about half that size, it would not be possible to make a triangle. That is, if $`\mathrm{\Delta }m_s`$ turns out to be greater than about 35 inverse picoseconds, the CKM model is wrong or at least incomplete. You noted that the CDF sensitivity goes far beyond this value, up to about 50 $`\mathrm{ps}^{-1}`$. So when CDF comes into the game, we might determine the CKM triangle within the Standard Model, or we might be able to rule out the Standard Model just on the basis of the $`\mathrm{\Delta }m_s`$ information.
Jon Thaler (University of Illinois): Ron Poling presented a measurement of $`\gamma `$ that is about two $`\sigma `$ away from your favored value. Do you have any comments on this discrepancy?
Blaylock: Since I first encountered this question at the conference, I have had time to better understand the assumptions underlying the CLEO analysis that Ron presented. That estimate of $`\gamma `$ depends upon a fit to 14 charmless decay modes of the B mesons. By assuming factorization of amplitudes, these decay modes are fit to five parameters. For decay modes that are dominated by spectator diagrams, an argument might be made in favor of factorization since the two ends of the W connect to two separated quark currents (though there is some controversy even about this point). However, most of the modes used in the fit have large penguin contributions, making a factorization argument unreasonable. In my mind it is not surprising that the CLEO fit does not yield the same result for the weak phase $`\gamma `$ as other estimates. |
## 1 INTRODUCTION
One of the signatures of activity around compact objects is the presence of jets and outflows. Outflows carry away angular momentum from the accretion disk and are partially responsible for the accretion itself. Active galaxies and quasars are believed to harbour black holes at their centers and at the same time produce cosmic radio jets through which an immense amount of matter and energy is ejected out of the cores of the galaxies (see \[1-2\] for a recent review). Similarly, micro-quasars have been discovered very recently in which outflows form from stellar-mass black hole candidates . Many of these outflows show superluminal motions, which are probably due to magnetic effects. With the hydrodynamic effects alone which we employ in this paper, one should not be able to accelerate the flow much beyond the initial sound velocity. The well-known stellar object SS433 (which is believed to be a neutron star), where pure hydrodynamic effects may be operating, produces outflows with a speed roughly one-third the speed of light.
There are several models in the literature which study the origin, formation and collimation of these outflows. The difference between stellar outflows and outflows from these systems is that here the outflows have to form out of the inflowing material only, because black holes and neutron stars have no atmospheres of their own. The models present in the literature are roughly of three types. The first type of solutions confine themselves to the jet properties only, completely decoupled from the internal properties of accretion disks; they study the effects of hydrodynamic or magneto-hydrodynamic pressures on the origin of jets \[4-6, Chap 3 of 1\]. In the second type, efforts are made to correlate the internal disk structure with that of the outflow using both hydrodynamic and magnetohydrodynamic considerations \[7-8\]. In the third type, numerical simulations are carried out to actually see how matter is deflected from the equatorial plane towards the axis \[9-12\]. On the analytical front, although the wind-type and accretion-type solutions come out of the same set of governing equations \[e.g. 7-8\], no attempt had been made to estimate the outflow rate from the inflow rate. On the other hand, the mass outflow rates of normal stars are calculated very accurately from the stellar luminosity, and the theory of radiatively driven winds seems to be very well understood . The simplicity of black holes and neutron stars lies in the fact that they do not have atmospheres. But the disks surrounding them have, and a method similar to that employed for stellar atmospheres should be applicable to the disks. Our approach in this paper is precisely this. We first determine the properties of the rotating inflow and outflow and identify solutions to connect them. In this manner we self-consistently determine the mass outflow rates.
Before we proceed, we describe basic properties of the rotating matter around a black hole. Rotating matter behaves in a special manner at two radial distances โ€” (a) the marginally stable orbit ($`r_{ms}`$) and (b) the marginally bound orbit ($`r_{mb}`$). For $`r<r_{ms}`$ no time-like orbit is stable. The corresponding Keplerian angular momentum is $`\lambda _{ms}=3.67GM/c`$ for a Schwarzschild black hole of mass $`M`$, $`G`$ and $`c`$ being the universal gravitational constant and the velocity of light respectively. For $`r<r_{mb}`$, any closed orbit is impossible and matter must dive into the black hole. The corresponding Keplerian angular momentum is $`\lambda _{mb}=4GM/c`$. Matter with a larger angular momentum ($`\lambda >\lambda _{mb}`$) requires a positive energy at infinity in order to enter into a black hole, since the centrifugal barrier otherwise becomes insurmountable (see Fig. 12.3 of ). Thus, normally, for black hole accretion, one is interested in flows with $`\lambda <\lambda _{mb}`$. The centrifugal force
$$F_c\sim \lambda ^2/r^3$$
$`(1a)`$
fights against the gravitational force
$$F_g\sim GM/r^2$$
$`(1b)`$
and in a Keplerian disk (which consists of a collection of closed timelike geodesics) these two forces balance. In exact form, the Keplerian distribution of specific angular momentum in Schwarzschild geometry is given by
$$\lambda _{Kep}=\frac{\sqrt{GMr}}{1-\frac{2GM}{c^2r}}$$
$`(2)`$
With this distribution, there is no centrifugal barrier left, since the two forces exactly cancel each other.
On the other hand, a rotating inflow with a specific angular momentum $`\lambda (r)`$ entering into a black hole will have nearly constant angular momentum $`\lambda `$ close to the black hole for any moderate viscous stress. Physically, this is due to the fact that viscosity transports momentum, and therefore angular momentum, to the outer parts of the disk, and it takes much longer (than the infall time of matter) to do so. The problem with a constant angular momentum flow with $`\lambda _{ms}<\lambda <\lambda _{mb}`$ is that it must be sub-Keplerian \[eq. (2)\] for $`r<r_{mb}`$. A second, and more important, reason why a flow must deviate from a Keplerian disk can be understood in the following way: Consider a perfect fluid with the stress-energy tensor (using $`G=c=M=1`$),
$$T_{\mu \nu }=\rho u_\mu u_\nu +p(g_{\mu \nu }+u_\mu u_\nu )$$
$`(3)`$
where, $`p`$ is the pressure and $`\rho =\rho _0(1+\pi )`$ is the mass density, $`\pi `$ being the internal energy. We assume the vacuum metric around a Kerr black hole to be of the form
$$ds^2=g_{\mu \nu }dx^\mu dx^\nu =-\frac{r^2\mathrm{\Delta }}{A}dt^2+\frac{A}{r^2}(d\varphi -\omega dt)^2+\frac{r^2}{\mathrm{\Delta }}dr^2+dz^2$$
$`(4)`$
Where,
$$A=r^4+r^2a^2+2ra^2;\mathrm{\Delta }=r^2-2r+a^2;\omega =\frac{2ar}{A}$$
Here, $`g_{\mu \nu }`$ are the metric coefficients and $`u^\mu `$ are the components of the four-velocity. In particular,
$$u_t=\left[\frac{\mathrm{\Delta }}{(1-V^2)(1-\mathrm{\Omega }\lambda )(g_{\varphi \varphi }+\lambda g_{t\varphi })}\right]^{1/2}.$$
$`(5)`$
Here, $`\lambda =u_\varphi /u_t`$ is the specific angular momentum and $`\mathrm{\Omega }=u^\varphi /u^t`$ is the angular velocity. On the horizon, $`\mathrm{\Delta }=0`$ for all $`a`$. Since all the other terms behave smoothly, $`V`$ must be unity there, i.e., the velocity of light. For the extreme equation of state $`p=\rho /3`$, the sound speed is $`1/\sqrt{3}`$; thus the Mach number is larger than unity, and the flow must be supersonic on the horizon. A supersonic flow is always sub-Keplerian . It is to be noted that the investigations made so far launch outflows from Keplerian disks only. In the present paper, we investigate outflow formation from more realistic flows, which are necessarily sub-Keplerian.
Going back to equations (1a) and (1b), one notes that $`F_c`$ increases much faster than $`F_g`$ as $`r`$ decreases, and the two become comparable at around $`r_{cb}\sim \lambda ^2/GM`$. (In the rest of the paper, we use $`R_g=2GM/c^2`$ as the length unit, $`c`$ as the unit of velocity, and the mass of the black hole $`M`$ as the unit of mass.) Here (actually, a little farther out, due to thermal pressure) matter starts piling up and produces the centrifugal pressure supported boundary layer (CENBOL for short). Further close to the black hole, gravity always wins and matter enters the horizon supersonically after passing through a sonic point. This centrifugal pressure supported region may or may not have a sharp boundary, depending on whether standing shocks form or not (see for references). Generally speaking, in a polytropic flow, if the polytropic index $`\gamma >1.5`$, then shocks do not form, and if $`\gamma <1.5`$, only a region of the parameter space forms the shock . In this layer (CENBOL) the flow becomes hotter and denser and for all practical purposes behaves as a stellar atmosphere so far as the formation of outflows is concerned. Inflows on neutron stars behave similarly, except that the โ€œhard-surfaceโ€ inner boundary condition dictates that the flow remains subsonic between the CENBOL and the surface rather than becoming supersonic as in the case of a black hole. In cases where the shock does not form, the region around the pressure maximum achieved just outside the inner sonic point would also drive the flow outwards. In the back of our mind, we have the picture of the outflow as obtained by numerical simulations , namely, that the outflow is thermally and centrifugally accelerated but confined by the external pressure of the ambient medium.
At a first glance, it may be astonishing that a black hole, which has no hard surface, should allow a โ€œboundary layerโ€ or CENBOL. Observationally, in the context of spectral properties of black hole candidates, the presence of this boundary layer has been established long ago (see, \[1-2, 16\]). It turns out that most of the hard X-rays from a black hole accretion disk come out of this region . Most interestingly, the CENBOL can also oscillate, similar to the boundary layer of a white dwarf, thus proving beyond doubt the existence of a CENBOL. This oscillation has also been observed recently \[17-19\]. The formation of outflow from this region is clearly seen both for inviscid flow and for viscous flow.
There are two surfaces of utmost importance in flows with angular momentum. One is the โ€œfunnel wallโ€ where the effective potential (sum of the gravitational potential and the specific rotational energy) vanishes. In the case of a purely rotating flow, this is the โ€œzero pressureโ€ surface. Flows cannot enter inside the funnel wall because the pressure would be negative there (Fig. 1). The other surface is called the โ€œcentrifugal barrierโ€. This is the surface
where the radial pressure gradient of a purely rotating flow vanishes; it is located outside the funnel wall simply because the flow pressure is higher than zero on this surface. Flow with inertial pressure easily crosses this โ€œbarrierโ€ and either enters into the black hole or flows out as winds, depending on its initial parameters (a detailed classification of the parameter space is in ). In numerical simulations it is observed that the outflow generally hugs the โ€œfunnel wallโ€ and goes out in between these two surfaces. In this paper we assume precisely this.
Outflow rates from accretion disks around black holes and neutron stars must be related to the properties of the CENBOL which, in turn, depend on the inflow parameters. Subsonic outflows originating from the CENBOL would pass through sonic points and reach large distances as in wind solutions. Assuming a free-falling conical polytropic inflow and an isothermal outflow (as in stellar winds), it is easy to estimate the ratio of the outflowing and inflowing rates:
$$\frac{\dot{M}_{out}}{\dot{M}_{in}}=R_{\dot{m}}=\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}\frac{R}{4}e^{-(f_0-\frac{3}{2})}f_0^{3/2}$$
$`(6)`$
where, $`\mathrm{\Theta }_{out}`$ and $`\mathrm{\Theta }_{in}`$ are the solid angles of the outflow and inflow respectively, and
$$f_0=\frac{(2n+1)R}{2n}.$$
$`(7)`$
Here, $`R`$ is the compression ratio of the inflowing matter at the CENBOL and $`n=1/(\gamma -1)`$ is the polytropic constant. When $`\mathrm{\Theta }_{out}\approx \mathrm{\Theta }_{in}`$, $`R_{\dot{m}}\approx 0.052`$ and $`0.266`$ for $`\gamma =4/3`$ and $`5/3`$ respectively. Assuming a thin inflow and an outflow of $`10^{\circ }`$ conical angle, the ratio $`R_{\dot{m}}`$ becomes $`0.0045`$ and $`0.023`$ respectively.
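Equations (6) and (7) are straightforward to evaluate numerically. The sketch below reproduces the two quoted ratios, taking for $`R`$ the strong-shock compression ratio $`(\gamma +1)/(\gamma -1)`$ for each $`\gamma `$; this choice of $`R`$ is an assumption consistent with the numbers quoted above, not a statement from the derivation itself.

```python
import math

def outflow_ratio(gamma, theta_ratio=1.0):
    """R_mdot from eqs. (6)-(7); R is taken as the strong-shock compression."""
    n = 1.0 / (gamma - 1.0)                  # polytropic constant
    R = (gamma + 1.0) / (gamma - 1.0)        # strong-shock compression (assumed)
    f0 = (2.0 * n + 1.0) * R / (2.0 * n)     # eq. (7)
    return theta_ratio * (R / 4.0) * math.exp(-(f0 - 1.5)) * f0 ** 1.5

for gamma in (4.0 / 3.0, 5.0 / 3.0):
    print(f"gamma = {gamma:.3f}: R_mdot = {outflow_ratio(gamma):.3f}")
# gamma = 1.333 -> 0.052 and gamma = 1.667 -> 0.266, as quoted in the text
```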
The aim of the present paper is to compute the mass loss rate more realistically than has been attempted so far. We calculate this rate as a function of the inflow parameters, such as the specific energy and angular momentum, the accretion rate, the polytropic index, etc. We explore both polytropic and isothermal outflows. Our conclusions show that the outflow rate is sensitive to the specific energy and accretion rate of the inflow. Specifically, when the outflow is not isothermal, the outflow rate generally increases with the specific energy and the polytropic index $`\gamma _o`$ of the outflow, generally decreases with the polytropic index $`\gamma `$ of the inflow, but is somewhat insensitive to the specific angular momentum $`\lambda `$. In the case of an isothermal outflow, however, the mass loss rate is sensitive to the inflow rate, since the inflow rate decides the proton temperature of the advective region of the disk which in turn fixes the outflow temperature. In this case the outflow is at least partially temperature driven. The outflow rate is also found to be anti-correlated with the specific angular momentum $`\lambda `$ of the flow.
The plan of this paper is the following: In the next Section, we describe our model and present the governing equations for the inflow and outflow. In ยง3, we present the solution procedure of the equations. In ยง4, we present the results of our computations. Finally, in ยง5, we draw our conclusions. A preliminary report of this kind has been published elsewhere.
## 2 Model Description and Governing Equations
### 2.1 Inflow Model
For the sake of computation of the inflow quantities, we assume that the inflow is axisymmetric and thin: $`h(r)<<r`$, so that the transverse velocity component can be ignored compared to the radial and azimuthal velocity components. We consider polytropic inflows in vertical equilibrium (otherwise known as 1.5 dimensional flows). We ignore the self-gravity of the flow. We do the calculations using the Paczyล„ski-Wiita potential which mimics the surroundings of a Schwarzschild black hole. The equations (in dimensionless units) governing the inflow are:
(a) Conservation of specific energy is given by,
$$\mathcal{E}=\frac{u_{e}^{2}}{2}+na_{e}^{2}+\frac{\lambda ^2}{2r^2}-\frac{1}{2(r-1)}.$$
$`(8)`$
where $`u_e`$ and $`a_e`$ are the radial and polytropic sound velocities respectively, with $`a_e=(\gamma p_e/\rho _e)^{1/2}`$; $`p_e`$ and $`\rho _e`$ are the pressure and density of the flow. For a polytropic flow, $`p_e=K\rho _e^\gamma `$, where $`K`$ is a constant and is a measure of the entropy of the flow. Here, $`\lambda `$ is the specific angular momentum and $`n=(\gamma -1)^{-1}`$ is the polytropic constant of the inflow, $`\gamma `$ being the polytropic index. The subscript $`e`$ indicates that the quantities are measured on the equatorial plane.
Mass conservation equation, apart from a geometric constant, is given by,
$$\dot{M}_{in}=u_e\rho _erh_e(r),$$
$`(9)`$
where $`h_e(r)`$ is the half-thickness of the flow at radial co-ordinate $`r`$ having the following expression
$$h_e(r)=a_er^{\frac{1}{2}}(r-1)\sqrt{\frac{2}{\gamma }}.$$
$`(10a)`$
Another useful way of writing the mass inflow is to introduce an entropy dependent quantity $`\dot{\mathcal{M}}\equiv \gamma ^nK^n\dot{M}`$ which can be expressed as
$$\dot{\mathcal{M}}=u_ea_{e}^{\alpha }r^{\frac{3}{2}}(r-1)\sqrt{\frac{2}{\gamma }}$$
$`(10b)`$
where $`\dot{\mathcal{M}}`$ is really the entropy accretion rate . When the shock is not present, $`\dot{\mathcal{M}}`$ remains constant in a polytropic flow. When the shock is present, $`\dot{\mathcal{M}}`$ will increase at the shock due to the increase of entropy. Here $`\alpha =(\gamma +1)/(\gamma -1)=2n+1`$. If a centrifugal pressure supported shock is present, the usual Rankine-Hugoniot conditions, namely, conservation of the mass, energy and momentum fluxes across the shock, are to be taken into account in determining the shock location. In the presence of mass loss one must incorporate this effect in the shock condition (see eq. 22 below).
### 2.2 Outflow Models
We consider two types of outflows. In ordinary stellar mass loss computations , the outflow is assumed to be isothermal up to the sonic point. This assumption is probably justified, since copious photons from the stellar atmosphere deposit momentum on the slowly outgoing and expanding outflow and possibly make the flow close to isothermal. This need not be the case for outflows from compact sources. Centrifugal pressure supported boundary layers close to the black hole are very hot (close to the virial temperature) and most of the photons emitted may be swallowed by the black hole itself instead of coming out of the region and depositing momentum onto the outflow. Thus, the outflows could be cooler than isothermal flows. In our first model, we choose polytropic outflows with the same energy as the inflow (i.e., no energy dissipation between the inflow and outflow) but with a different polytropic index $`\gamma _o<\gamma `$. Nevertheless, it may be advisable to study the isothermal outflow to find out the behavior of the extreme case. Thus an isothermal outflow is chosen in our second model. In each case, of course, we include the possibility that the inflow may or may not have a standing shock.
First of all, our assumption of a thin inflow is for the sake of computation of the thermodynamic quantities only, but the flow itself need not be physically thin. Secondly, the funnel wall and the centrifugal barrier are purely geometric surfaces; they exist anyway, and the outflow could be supported even by an ambient medium which is not necessarily part of the disk itself. So, we believe that our assumptions are not unjustified.
#### Polytropic Outflow
In this case, the energy conservation equation takes the form:
$$\mathcal{E}=\frac{\vartheta ^2}{2}+n^{\prime }a^2+\frac{\lambda ^2}{2r_{m}^{2}(r)}-\frac{1}{2(r-1)}$$
$`(11)`$
and the mass conservation in the outflow takes the form:
$$\dot{M}_{out}=\rho \vartheta \mathcal{A}(r).$$
$`(12)`$
Here, $`n^{\prime }=(\gamma _o-1)^{-1}`$ is the polytropic constant of the outflow. The difference between eq. (11) and eq. (8) is that, presently, the rotational energy term contains
$$r_m(r)=\frac{\ell (r)+R(r)}{2},$$
$`(13a)`$
as the mean axial distance of the flow. The expression for $`\ell (r)`$, the local radius of the centrifugal barrier, comes from balancing the centrifugal force against gravity, i.e.,
$$\frac{\lambda ^2}{\ell ^3(r)}=\frac{\ell (r)}{2r(r-1)^2}.$$
$`(13b)`$
We thus obtain,
$$\ell (r)=\left[2\lambda ^2r(r-1)^2\right]^{\frac{1}{4}}$$
$`(14a)`$
The expression for $`R(r)`$, the local radius of the funnel wall, comes from the vanishing of the total effective potential, i.e.,
$$\mathrm{\Omega }_{toteff}(r)=-\frac{1}{2(r-1)}+\frac{\lambda ^2}{2R^2(r)}=0,$$
$$R(r)=\lambda \left[(r-1)\right]^{1/2}.$$
$`(14b)`$
The difference between eq. (12) and eq. (9) is that the area functions are different. Here, $`\mathcal{A}(r)`$ is the area between the centrifugal barrier and the funnel wall (see the Introduction for the motivation). This is computed with the assumption that the outflow is external pressure supported, i.e., the centrifugal barrier is in pressure balance with the ambient medium. Matter, if pushed hard enough, can cross the centrifugal barrier in black hole accretion (the reason why rapidly rotating matter can enter into a black hole in the first place). An outward thermal force (such as provided by the CENBOL) in between the funnel wall and the centrifugal barrier causes the flow to come out. Thus the cross section of the outflow is
$$\mathcal{A}(r)=\pi [\ell ^2(r)-R^2(r)].$$
$`(15)`$
The outflow angular momentum $`\lambda `$ is chosen to be the same as in the inflow, i.e., no viscous dissipation is assumed to be present in the inner region of the flow close to the black hole. Considering that viscous time scales are long compared to the inflow time scale, this may be a good assumption in the disk, but it may not be a very good assumption for the outflows, which are slow prior to the acceleration and are therefore prone to viscous transport of angular momentum. Such a detailed study has not been attempted here, particularly because we know very little about the viscous processes taking place in the pre-jet flow. Therefore, we concentrate only on those cases where the specific angular momentum is roughly constant when inflowing matter becomes part of the outflow, although some estimates of the change in $`R_{\dot{m}}`$ are provided when the average angular momentum of the outflow is lower. A detailed study of the outflow rates in the presence of viscosity and magnetic fields is in progress and will be presented elsewhere.
#### Isothermal Outflow
The integration of the radial momentum equation yields an equation similar to the energy equation (eq. 11):
$$\frac{\vartheta _{iso}^{2}}{2}+C_{s}^{2}\mathrm{ln}\rho +\frac{\lambda ^2}{2r_m^2(r)}-\frac{1}{2(r-1)}=\mathrm{Constant}$$
$`(16)`$
In this case the thermal energy term is different, behaving logarithmically. The constant sound speed of the outflow is $`C_s`$. The mass conservation equation remains the same:
$$\dot{M}_{out}=\rho \vartheta _{iso}\mathcal{A}(r).$$
$`(17)`$
Here, the area function remains the same as above. The subscript โ€œisoโ€ on the velocity $`\vartheta `$ distinguishes it from the velocity in the polytropic case, since the velocities here are computed under completely different assumptions.
In both models of the outflow, we assume that the flow is primarily radial. Thus the $`\theta `$-component of the velocity is ignored ($`\vartheta _\theta \ll \vartheta `$).
## 3 Procedure to solve for disks and outflows simultaneously
Before we go into the details, a general understanding of transonic flows around a black hole is essential. In , all the solution topologies of the polytropic flow in pseudo-Newtonian geometry have been provided. In regions I and O of the parameter space the flow has only one sonic point. Matter with positive energy at a large distance must pass through that point before entering into the black hole supersonically. In regions SA and SW shocks may form in accretion and winds respectively, but no shocks are expected in winds and accretion, respectively, if parameters are chosen from these branches. In NSW and NSA, two saddle type sonic points exist, but no steady shock solutions are possible.
Suppose that matter first enters through the outer sonic point and passes through a shock. At the shock, part of the incoming matter, having a higher entropy density, is likely to return as wind through a sonic point other than the one through which it entered. Thus a combination of topologies, one from the region SA and the other from the region O, is required to obtain a full solution. In the absence of shocks, the flow is likely to bounce back at the pressure maximum of the inflow, and since the outflow would be heated by photons, and thus have a smaller polytropic constant, the flow would leave the system through an outer sonic point different from that of the incoming solution. Thus finding a complete self-consistent solution boils down to finding the outer sonic point of the outflow and the mass flux through it. Below we present the list of parameters used in both of our models and briefly describe the procedure to obtain a satisfactory solution.
### 3.1 Polytropic Outflow
We assume that
(a) In this case, very little of the total energy is assumed to be lost by each bundle of matter as it leaves the disk and joins the jet. The specific energy $`\mathcal{E}`$ remains fixed along the flow trajectory as matter moves from the disk to the jet.
(b) Very little viscosity is present in the flow except at the place where the shock forms, so that the specific angular momentum $`\lambda `$ is constant in both inflows and outflows close to the black hole. At the shock, entropy is generated and hence the outflow is of higher entropy for the same specific energy.
(c) The polytropic indices of the inflow ($`\gamma `$) and outflow ($`\gamma _o`$) are free parameters and, in general, $`\gamma _o<\gamma `$ because of the heating of the outflow (e.g., due to the momentum deposition by photons coming out of the disk surface). In reality $`\gamma _o`$ is directly related to the heating and cooling processes of the outflow. When $`\dot{M}_{in}`$ is high, heating of the outflow by photon momentum deposition is higher, and therefore $`\gamma _o\to 1`$.
Thus a supply of the parameters $`\mathcal{E}`$, $`\lambda `$, $`\gamma `$ and $`\gamma _o`$ makes a self-consistent computation of $`R_{\dot{m}}`$ possible when the shock is present. When the shock is absent, the compression ratio $`R_{comp}`$ of the gas at the pressure maximum between the inflow and outflow is supplied as a free parameter, since it may otherwise be very difficult to compute satisfactorily. In the presence of shocks, such problems do not arise, as the compression ratio is obtained self-consistently.
The following procedure is adopted to obtain a complete solution:
(a) From eqs. (8) and (9) we derive an expression for the derivative,
$$\frac{du}{dr}=\frac{\frac{\lambda ^2}{r^3}+\frac{na^2}{\alpha }\frac{5r-3}{r(r-1)}-\frac{1}{2(r-1)^2}}{u-\frac{2na^2}{\alpha u}}.$$
$`(18)`$
At the sonic point, the numerator and denominator separately vanish, and give rise to the so-called sonic point conditions:
$$a_c=\left[\left(\frac{1}{2(r_c-1)^2}-\frac{\lambda ^2}{r_{c}^{3}}\right)\frac{\alpha (r_c-1)r_c}{n(5r_c-3)}\right]^{1/2}$$
$`(19a)`$
$$u_c=\sqrt{\frac{2n}{\alpha }}a_c$$
$`(19b)`$
where the subscript $`c`$ represents the quantities at the sonic point. The derivative of the flow at the sonic point is computed using Lโ€™Hospitalโ€™s rule. Using the fourth order Runge-Kutta method, $`u(r)`$ and $`a(r)`$ are computed along the flow up to the position where the Rankine-Hugoniot condition is satisfied (if shocks form), and from there on the subsonic branch is integrated for the accretion as usual. With the known $`\gamma _o`$, $`\mathcal{E}`$ and $`\lambda `$, one can compute the location of the outflow sonic point from eqs. (11) and (12),
$$\frac{d\vartheta }{dr}=\frac{\frac{a^2}{\mathcal{A}(r)}\frac{d\mathcal{A}(r)}{dr}+\frac{\lambda ^2}{r_{m}^{3}(r)}\frac{dr_m(r)}{dr}-\frac{1}{2(r-1)^2}}{\vartheta -\frac{a^2}{\vartheta }}$$
$`(20)`$
from which the sonic point conditions at the outflow sonic point $`r_{co}`$ are given by
$$\frac{a_{co}^2}{\mathcal{A}(r_{co})}\frac{d\mathcal{A}(r)}{dr}\bigg|_{co}+\frac{\lambda ^2}{r_{m}^{3}(r_{co})}\frac{dr_m(r)}{dr}\bigg|_{co}-\frac{1}{2(r_{co}-1)^2}=0$$
$`(21a)`$
and
$$\vartheta _{co}=a_{co}.$$
$`(21b)`$
At the outer sonic point, the derivative of $`\vartheta `$ is computed using Lโ€™Hospitalโ€™s rule, and the Runge-Kutta method is used to integrate towards the black hole to compute the velocity of the outflow at the shock location. The density of the outflow at the shock is computed by distributing the post-shock dense matter of the disk into a spherical shell of $`4\pi `$ solid angle. The outflow rate is then computed using eq. (12).
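The first step of this procedure, locating the inflow sonic point, is easy to make concrete. The sketch below evaluates eqs. (19a)-(19b) for a chosen sonic point location and recovers the corresponding specific energy from eq. (8); it is a minimal illustration in the units of the paper, and the particular values of $`r_c`$ and $`\lambda `$ are assumptions chosen for display. In practice one inverts this map numerically to find $`r_c`$ for a given $`\mathcal{E}`$ and $`\lambda `$.

```python
import math

def inflow_sonic_state(rc, lam, gamma=4.0 / 3.0):
    """Sonic-point values from eqs. (19a)-(19b) and the energy from eq. (8).

    A sketch in the units of the paper (2GM = c = 1); rc and lam must keep
    the bracket of eq. (19a) positive.
    """
    n = 1.0 / (gamma - 1.0)
    alpha = (gamma + 1.0) / (gamma - 1.0)
    bracket = 1.0 / (2.0 * (rc - 1.0) ** 2) - lam**2 / rc**3
    ac = math.sqrt(bracket * alpha * (rc - 1.0) * rc / (n * (5.0 * rc - 3.0)))
    uc = math.sqrt(2.0 * n / alpha) * ac                     # eq. (19b)
    energy = uc**2 / 2 + n * ac**2 + lam**2 / (2 * rc**2) - 1 / (2 * (rc - 1))
    return ac, uc, energy

ac, uc, E = inflow_sonic_state(rc=50.0, lam=1.75)
print(f"a_c = {ac:.4f}, u_c = {uc:.4f}, specific energy E = {E:.5f}")
```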
It is to be noted that when the outflows are produced, one cannot use the usual Rankine-Hugoniot relations at the shock location, since mass flux is no longer conserved in accretion, but part of it is lost in the outflow. Accordingly, we use,
$$\dot{M}_+=(1-R_{\dot{m}})\dot{M}_-$$
$`(22)`$
where the subscripts $`-`$ and $`+`$ denote the pre- and post-shock values respectively. Since, due to the loss of matter in the post-shock region, the post-shock pressure goes down, the shock recedes outward for the same values of the incoming energy, angular momentum and polytropic index. The combination of three changes, namely the increase in the cross-sectional area of the outflow, the increase in the launching velocity of the outflow, and the decrease in the post-shock density, decides whether the net outflow rate is increased or decreased relative to the case when the exact Rankine-Hugoniot relation is used.
In the case where the shocks do not form, the procedure is a bit different. It is assumed that the maximum amount of matter comes out from the place of the disk where the thermal pressure of the inflow attains its maximum. The expression for the polytropic pressure for the inflow in vertical equilibrium is,
$$\mathcal{P}_e(r)=\frac{a_{e}^{2(n+1)}\dot{M}_{in}}{\gamma ^{(1+n)}\dot{\mathcal{M}}}$$
$`(23)`$
This is maximized and the outflow is assumed to have the same quasi-conical shape, with the annular cross-section $`\mathcal{A}(r)`$ between the funnel wall and the centrifugal barrier as already defined. In the absence of shocks, the compression ratio between the incoming and outgoing flow at the pressure maximum cannot be computed self-consistently, unlike the case when the shock is present. Thus this ratio is chosen freely. We take the guidance for this number from what was obtained in the case when shocks are formed. However, in this case, even when the mass loss takes place, the location of the pressure maximum remains unchanged. Since the compression ratio $`R_{comp}`$ is a free parameter, $`R_{\dot{m}}`$ remains unchanged for a given $`R_{comp}`$. Let us assume that $`\dot{\mu }_{-}`$ is the actual mass inflow rate and that it would be the same before and after the pressure maximum had the mass loss been negligible. Let $`\dot{\mu }_+`$ be the mass inflow rate after the pressure maximum when the loss due to the outflow is taken into account. Then, by definition, $`\dot{\mu }_{-}=\dot{M}_{out}+\dot{\mu }_+`$ and $`R_{\dot{m}}=\dot{M}_{out}/\dot{\mu }_{-}`$. Thus the actual ratio of the mass outflow rate and the mass inflow rate, when the mass loss is taken into consideration, is given by
$$\frac{\dot{M}_{out}}{\dot{\mu }_+}=\frac{R_{\dot{m}}}{1-R_{\dot{m}}}.$$
$`(24a)`$
However, this static consideration is valid only when $`R_{\dot{m}}<1`$. Otherwise, we must have,
$$\frac{dM_{disk}}{dt}=\dot{\mu }_{-}-\dot{\mu }_+-\dot{M}_{out}$$
i.e.,
$$\frac{dM_{disk}}{dt}=\dot{\mu }_{-}(1-R_{\dot{m}})-\dot{\mu }_+$$
$`(24b)`$
Here, $`M_{disk}`$ is the instantaneous mass of the disk. Since $`R_{\dot{m}}>1`$, the disk has to evacuate. These cases hint that the assumptions of a steady solution break down completely and that the solution may become highly time dependent.
### 3.2 Isothermal Outflow
We assume that
(a) The outflow has exactly the same temperature as that of the post-shock flow, but energy is not conserved as matter goes from the disk to the jet. In other words, the outflow is kept in a thermal bath at the temperature of the post-shock flow.
(b) Same as (b) of ยง3.1.
(c) The post-shock proton temperature is determined from the inflow accretion rate $`\dot{M}_{in}`$ using the consideration of Comptonization of the advective region. The procedure to compute the typical proton temperature as a function of the incoming accretion rate has been adopted from .
(d) The polytropic index of the inflow can be varied but that of the outflow is always unity.
Thus a supply of the parameters $`\mathcal{E}`$, $`\lambda `$ and $`\gamma `$ makes a self-consistent computation of $`R_{\dot{m}}`$ possible when the shock is present. When the shock is absent, the compression ratio $`R_{comp}`$ of the gas at the pressure maximum between the inflow and the outflow is supplied as a free parameter, exactly as in the polytropic case.
The following procedure is adopted to obtain a complete solution:
(a) From eqs. (16) and (17) we derive an expression for the derivative,
$$\frac{d\vartheta }{dr}\bigg|_{iso}=\frac{\frac{C_{s}^{2}}{\mathcal{A}(r)}\frac{d\mathcal{A}(r)}{dr}+\frac{\lambda ^2}{r_{m}^{3}(r)}\frac{dr_m(r)}{dr}-\frac{1}{2(r-1)^2}}{\vartheta _{iso}-\frac{C_{s}^{2}}{\vartheta _{iso}}}.$$
$`(25)`$
At the sonic point, the numerator and denominator separately vanish, and give rise to the so-called sonic point conditions:
$$\frac{C_{s}^{2}}{\mathcal{A}(r_{co})}\frac{d\mathcal{A}(r)}{dr}\bigg|_{co}+\frac{\lambda ^2}{r_{m}^{3}(r_{co})}\frac{dr_m(r)}{dr}\bigg|_{co}-\frac{1}{2(r_{co}-1)^2}=0,$$
$`(26a)`$
and
$$\vartheta _{co}=C_s,$$
$`(26b)`$
where the subscript $`co`$ represents the quantities at the sonic point of the outflow. The derivative of the flow at the sonic point is computed using Lโ€™Hospitalโ€™s rule. The procedure is otherwise similar to that described for the polytropic case and we do not repeat it here.
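Once $`C_s`$ and $`\lambda `$ are fixed, eq. (26a) is a single algebraic condition on $`r_{co}`$ that can be solved by any bracketing root finder. A sketch of that solution is given below, using numerical derivatives of $`\mathcal{A}(r)`$ and $`r_m(r)`$ built from eqs. (13a)-(15); the particular values of $`\lambda `$ and $`C_s`$ are assumptions chosen so that a root exists in the bracket.

```python
import numpy as np
from scipy.optimize import brentq

def sonic_condition(r, lam, cs, eps=1.0e-6):
    """Left-hand side of eq. (26a); its root is the outflow sonic point."""
    ell  = lambda x: (2.0 * lam**2 * x * (x - 1.0) ** 2) ** 0.25   # eq. (14a)
    wall = lambda x: lam * np.sqrt(x - 1.0)                        # eq. (14b)
    area = lambda x: np.pi * (ell(x) ** 2 - wall(x) ** 2)          # eq. (15)
    rm   = lambda x: 0.5 * (ell(x) + wall(x))                      # eq. (13a)
    dA  = (area(r + eps) - area(r - eps)) / (2.0 * eps)            # numerical dA/dr
    drm = (rm(r + eps) - rm(r - eps)) / (2.0 * eps)                # numerical dr_m/dr
    return cs**2 * dA / area(r) + lam**2 * drm / rm(r) ** 3 - 0.5 / (r - 1.0) ** 2

# Illustrative (assumed) parameters: lam = 1.5 and C_s = 0.05 in units of c.
r_co = brentq(sonic_condition, 2.0, 200.0, args=(1.5, 0.05))
print(f"isothermal outflow sonic point at r_co = {r_co:.1f}")
```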
## 4 Results
### 4.1 Polytropic outflow coming from the post-shock accretion disk
Figure 2 shows a typical solution which combines the accretion and the outflow. The input parameters are $`\mathcal{E}=0.0005`$, $`\lambda =1.75`$ and $`\gamma =4/3`$, corresponding to a relativistic inflow. The solid curve with an arrow represents the pre-shock region of the inflow and the long-dashed curve represents
the post-shock inflow which enters the black hole after passing through the inner sonic point (I). The solid vertical line at $`X_{s3}`$ (the leftmost vertical transition) with the double arrow represents the shock transition obtained with the exact Rankine-Hugoniot condition (i.e., with no mass loss). The actual shock location obtained with the modified Rankine-Hugoniot condition (eq. 22) is farther out than the original location $`X_{s3}`$. Three vertical lines connected with the corresponding dotted curves represent three outflow solutions for the parameters $`\gamma _o=1.3`$ (top), $`1.15`$ (middle) and $`1.05`$ (bottom). The outflow branches shown pass through the corresponding sonic points. It is evident from the figure that the outflow moves along solution curves completely different from the "wind solution" of the inflow, which passes through the outer sonic point "O". The mass loss ratio $`R_{\dot{m}}`$ in these cases is $`0.256`$, $`0.159`$ and $`0.085`$ respectively. Figure 3 shows the ratio $`R_{\dot{m}}`$ as $`\gamma _o`$ is varied. Only the range of $`\gamma _o`$ and energy for which the shock solution is present is shown here. The general conclusion is that as $`\gamma _o`$ is increased the ratio also increases non-linearly. When the inflow rate is very low, due to the paucity of photons, the outflow is not heated very much and $`\gamma _o`$ remains higher. The reverse is true when the accretion rate is higher.
Thus, effectively, the ratio $`R_{\dot{m}}`$ goes up as $`\dot{M}_{in}`$ decreases. In passing we remark that with the variation of the inflow angular momentum $`\lambda `$ the result does not change significantly: $`R_{\dot{m}}`$ changes only by a couple of percent at most.
In Fig. 4a, we show the variation of the ratio $`R_{\dot{m}}`$ of the mass outflow rate to the inflow rate as a function of the shock strength (dotted) $`M_{-}/M_+`$ (here $`M_{-}`$ and $`M_+`$ are the Mach numbers of the pre- and post-shock flows respectively), the compression ratio (solid) $`\mathrm{\Sigma }_+/\mathrm{\Sigma }_{-}`$ (here $`\mathrm{\Sigma }_{-}`$ and $`\mathrm{\Sigma }_+`$ are the vertically integrated matter densities in the pre- and post-shock flows respectively), and the stable shock location (dashed) $`X_{s3}`$ (in the notation of ). Other parameters are $`\lambda =1.75`$ and $`\gamma _o=1.05`$. Note that the ratio $`R_{\dot{m}}`$ does not peak near the strongest shocks! Shocks are stronger when they are located closer to the black hole, i.e., for smaller energies. The non-monotonic behavior is seen more clearly in the lowest curve of Fig. 4b, where $`R_{\dot{m}}`$ is plotted as a function of the specific energy $`\mathcal{E}`$ (along the x-axis) and $`\gamma _o`$ (marked on each curve). The specific angular momentum is chosen to be $`\lambda =1.75`$ as before. The tendency of $`R_{\dot{m}}`$ to peak arises primarily because as $`\mathcal{E}`$ is increased, the shock location moves outward, which generally increases the outflowing area $`\mathcal{A}(r)`$ at the shock location. However, the density of the outflow at the shock,
as well as the velocity of the outflow at the shock, decreases. The outflow rate, which is a product of these quantities, thus shows a peak. For the sake of comparison, we present the results for $`\gamma _o=1.05`$ (dashed curve) when the Rankine-Hugoniot relation was not corrected by eq. (22). The result generally remains the same because of two competing effects: the decrease in the post-shock density and the increase in the area from which the outflow is launched (i.e., the area between the black hole and the shock), as well as in the launching velocity of the jet at the shock.
To gain better insight into the behavior of the outflow, we plot in Fig. 5 $`R_{\dot{m}}`$ as a function of the polytropic index of the incoming flow ($`\gamma `$) for $`\gamma _o=1.1`$, $`\mathcal{E}=0.002`$ and $`\lambda =1.75`$. The range of $`\gamma `$ shown is that for which a shock forms in the flow. We also plot the variation of the injection velocity $`\vartheta _{inj}`$, the injection density $`\rho _{inj}`$ and the area $`\mathcal{A}(r)`$ of the outflow at the location where the outflow leaves the disk. The incoming accretion rate has been chosen to be $`0.3`$ (in units of the Eddington rate). These quantities are scaled from the corresponding dimensionless quantities as $`\vartheta _{inj}\rightarrow 0.1\vartheta _{inj}`$, $`\rho _{inj}\rightarrow 10^{22}\rho _{inj}`$ and $`\mathcal{A}\rightarrow 10^4\mathcal{A}`$ respectively, in order to bring them onto the same scale. With the increase in $`\gamma `$, the shock location moves outward, and therefore the cross-sectional area of the outflow goes up.
The injection velocity goes up (albeit very slowly) as the shock recedes, since the injection surface (CENBOL) comes closer to the outflow sonic point. However, the density goes down (the gas becomes less dense). This anti-correlation is reflected in the peak of $`R_{\dot{m}}`$.
So far, we have assumed that the specific angular momentum of the outflow is exactly the same as that of the inflow, while in reality it could be different due to the presence of viscosity. In the outflow, a major source of viscosity is the radiative viscosity, whose coefficient is
$$\eta =\frac{4aT^4}{15\kappa _Tc\rho }\mathrm{\hspace{0.33em}cm}^2\mathrm{sec}^{-1}$$
$`(27)`$
This could be significant, since the temperature of the outflow is high, but the density is low. Assuming that the angular momentum distribution reaches a steady state inside the jet ( and references therein),
$$l_j=C_jR^{n_j}$$
$`(28)`$
where $`C_j`$ and $`n_j`$ are constants, the vanishing of the azimuthal velocity on the axis requires that $`n_j>1`$ inside the jet. The matter distribution in the rotationally dominant region of the "pre-jet" is computed by integrating the Euler equation. It is easy to show that the "hollow" jet thus produced carries most of the matter and angular momentum in its outer layers . In other words, the average angular momentum of the outflow away from the base may remain roughly constant even in the presence of viscosity. This is to be contrasted with the disk, where matter is denser towards the centre while more angular momentum is concentrated towards the outer edge. If, however, the average angular momentum at the base of the outflow goes down due to losses to the ambient medium by, say, a factor of two, we find that the mass loss rate is also reduced by around the same factor. This shows that the outflow is at least partially centrifugally driven.
An important point to note: the ratio of the "specific entropy measure" of the outflow to that of the post-shock inflow is obtained from the definition of the entropy accretion rate $`\dot{\mathcal{M}}`$ :
$$\frac{K_o}{K_+}=\frac{\dot{\mathcal{M}}_{out}^{\gamma _o-1}}{\dot{\mathcal{M}}_+^{\gamma -1}}\left(\frac{1-R_{\dot{m}}}{R_{\dot{m}}}\right)^{\gamma _o-1}\dot{M}_+^{(\gamma -\gamma _o)}\frac{\gamma }{\gamma _o}$$
$`(29)`$
As $`R_{\dot{m}}\rightarrow 1`$, $`\frac{K_o}{K_+}\rightarrow 0`$. Thus, we expect that for a polytropic flow with shocks a hundred percent outflow is impossible, since the outgoing entropy must be higher. For isothermal outflows such simple considerations do not apply.
If we introduce an extra radiation pressure term (via a term like $`\mathrm{\Gamma }/r^2`$ in the radial force equation, where $`\mathrm{\Gamma }`$ is the contribution due to radiative processes), which is particularly important for neutron stars, the outcome is significant. In the inflow, the outward radiation pressure weakens gravity and thus the shock is located farther out. The temperature there is cooler and therefore the outflow rate is lower. If the term is introduced only in the outflow, the effect is not significant.
### 4.2 Polytropic outflow coming from the region of the maximum pressure
In this case, the inflow parameters are chosen from region I (see ) so that shocks do not form. Here, the inflow passes through the inner sonic point only. The outflow is assumed to originate from the region where the polytropic inflow has its pressure maximum. This assumption is justified, since winds are expected to get the maximum kick in this region. Figure 6a shows a typical solution. The arrowed solid curve shows the inflow and the dotted arrowed curves show the outflows for $`\gamma _o=1.3`$
(top), $`1.1`$ (middle) and $`1.01`$ (bottom). The ratio $`R_{\dot{m}}`$ in these cases is $`0.66`$, $`0.30`$ and $`0.09`$ respectively. The specific energy and angular momentum are chosen to be $`\mathcal{E}=0.00584`$ and $`\lambda =1.8145`$ respectively. The pressure maximum occurs outside the inner sonic point, at $`r_p`$, where the flow is still subsonic. Figure 6b shows the variation of the thermal pressure of the flow with radial distance. The peak is clearly visible. Since the pressure maximum occurs very close to the black hole compared to the location of the shock, the area of the outflow is smaller, but the radial velocity as well as the density of matter at the base of the outflow are much higher. As a result the outflow rate is far higher than in the shock case. Figure 7 shows the ratio $`R_{\dot{m}}`$ as a function of $`\gamma _o`$ for various choices of the compression ratio $`R_{comp}`$ of the outflowing gas at the pressure maximum: $`R_{comp}=2`$ for the rightmost curve and $`7`$ for the leftmost curve. We have purposely removed the solutions with $`R_{\dot{m}}>1`$, because in these cases the solution should be inherently time-dependent (see eq. 24b) and a steady-state approach cannot be trusted completely. This is different from the results of §4.1, where shocks are considered, since $`R_{\dot{m}}`$ is non-monotonic in that case.
It is also found that in some range of parameters very high mass flow can take place even for smaller compression ratios, especially when the sonic point of the outflow $`r_o`$ is right outside the pressure maximum. These cases can cause runaway instabilities by rapidly evacuating the disk. They may be responsible for the quiescent states in X-ray novae systems (GS2000+25, GRS 1124-683) \[27-28\] and also in some systems with massive black holes (Sgr A\*) \[29-30\]. Strong winds are suspected to be present in Sgr A\* at our galactic center \[29-30\]. We show that when the inflow rate itself is low (as is the case for Sgr A\*), the mass outflow rate is very high, almost to the point of evacuating the disk. Thus we think that any explanation of the spectral properties of our galactic center (Sgr A\*) should include winds, for instance computed using our model.
Since the location of the maximum pressure is close to the black hole, it may in general be very difficult to generate the outflow from this region. Thus, it is expected that the ratio $`R_{\dot{m}}`$ would be larger when the maximum pressure is located farther out. This is exactly what we see in Fig. 8, where we plot $`R_{\dot{m}}`$ against the location of the pressure maximum (solid curve). Secondly, if our guess that the outflow rate is related to the pressure is correct,
then the rate should increase as the pressure at the maximum rises. That is also observed in Fig. 8. We plot here $`R_{\dot{m}}`$ as a function of the actual pressure at the pressure maximum (dotted curve). The mass loss is found to be strongly correlated with the thermal pressure. Here we have multiplied the non-dimensional thermal pressure by $`1.5\times 10^{24}`$ in order to bring it onto the same scale.
### 4.3 Isothermal outflow coming from the post-shock accretion disk
In this case, the outflow is assumed to be isothermal. The temperature of the outflow is obtained from the proton temperature of the advective region of the disk. The proton temperature is obtained using Comptonization, bremsstrahlung, inverse bremsstrahlung and Coulomb processes ( and references therein). Figure 9 shows the effective proton temperature and the electron temperature of the post-shock advective region as a function of the accretion rate (in units of the Eddington rate, on a logarithmic scale) of the Keplerian component of the disk. The diagram is drawn for a black hole of mass $`10M_{\odot }`$. Similar results can be obtained for a black hole of any mass.
The soft X-ray luminosity of stellar mass black holes, or the UV luminosity of massive black holes, is basically dictated by the Keplerian rate of the disk. It is clear that as the accretion rate of the Keplerian disk is increased, the advective region gets cooler, as expected.
In Fig. 10a, we show the ratio $`R_{\dot{m}}`$ as a function of the accretion rate (in units of the Eddington rate) of the incoming flow for a range of the specific angular momentum. In low luminosity objects the ratio is larger. The angular momentum is varied from $`\lambda =1.7`$ (top curve) through $`1.725`$ (middle curve) to $`1.75`$ (bottom curve). The specific energy is $`\mathcal{E}=0.003`$. Here we have used the modified Rankine-Hugoniot relation as before (eq. 22). The ratio $`R_{\dot{m}}`$ is clearly very sensitive to the angular momentum, since it changes the shock location rapidly and therefore changes the post-shock temperature very much. We also plot the outflux of angular momentum $`F(\lambda )=\lambda \dot{m}_{in}R_{\dot{m}}`$, which has a maximum at intermediate accretion rates. In dimensional units these quantities represent significant fractions of the angular momentum of the entire disk, and therefore the rotating outflow can assist the accretion process. Curves are drawn for different $`\lambda `$ as above. In Fig. 10b, we plot the variation of the ratio directly with the proton temperature of the advective region. The outflow is clearly thermally driven. A hotter flow produces more winds,
as is expected. The angular momentum associated with each curve is the same as before.
### 4.4 Isothermal outflow coming from the region of the maximum pressure
This case produces results very similar to the previous case, except that, as in Section 4.2, the outflow rate becomes very close to a hundred percent of the inflow rate when the proton temperature is very high. Thus, when the accretion rate of the Keplerian flow is very small, the outflow rate becomes very high, close to evacuating the disk. As noted before, this may also be related to the quiescent state of X-ray novae.
## 5 Conclusions
In this paper, we have computed the mass outflow rate from advective accretion disks around galactic and extra-galactic black holes. Since the general physics of advective flows is similar around a neutron star, we believe that the conclusions may remain roughly similar provided the shock
at $`X_{s3}`$ forms, although the boundary layer (which corresponds to $`X_{s1}`$ in the notation of ) of the neutron star, where half of the binding energy could be released, may be more luminous than that of a black hole and may thus affect the outflow rate. We have chosen a limited number of free parameters, just sufficient to describe the inflow, and only one extra parameter for the outflow. We find that the outflow rates can vary from a few percent of the inflow rate to as much as the inflow rate itself (causing almost complete evacuation of the accretion disk), depending on the inflow parameters. For the first time, it has become possible to use the exact transonic solutions for both the disks and the winds and to combine them into a self-consistent disk-outflow system. Although we present results for centrifugally supported boundary layers around a black hole, it is evident that the result is general: if such a barrier is produced by other means, such as pre-heating or pair-plasma pressure , outflows would also be produced \[33-34\].
The basic conclusions of this paper are the following:
a) It is possible that most of the outflows are coming from the centrifugally supported boundary layer (CENBOL) of the accretion disks.
b) The outflow rate generally increases with the proton temperature of CENBOL. In other words, winds are, at least partially, thermally driven. This is reflected more strongly when the outflow is isothermal.
c) Even though the specific angular momentum of the flow increases the size of the CENBOL, so that one would have expected a higher mass flux in the wind, we find that the rate of the outflow is actually anti-correlated with the $`\lambda `$ of the inflow. On the other hand, if the angular momentum of the outflow is reduced by hand, we find that the rate of the outflow is correlated with the $`\lambda `$ of the outflow. This suggests that the outflow is partially centrifugally driven as well.
d) The ratio $`R_{\dot{m}}`$ is generally anti-correlated with the inflow accretion rate. That is, disks of lower luminosity would produce higher $`R_{\dot{m}}`$.
e) Generally speaking, the supersonic region of the inflow does not have pressure maxima. Thus, outflows emerge from the subsonic region of the inflow, whether the shock actually forms or not.
In this paper, we assumed that the magnetic field is absent. Magnetized winds from accretion disks have so far been considered in the context of a Keplerian disk and not in the context of the sub-Keplerian flows on which we concentrate here. Secondly, whereas the entire Keplerian disk was assumed to participate in wind formation, here we suggest that the CENBOL is the major source of outflowing matter. It is not unreasonable to assume that the CENBOL would still form when magnetic fields are present, and since the Alfvén speed is, by definition, higher than the sound speed, the acceleration, and therefore the mass outflow, would also be higher than what we computed here. Such work will be carried out in the future.
In the literature, not many results are available which deal with exact computations of the mass outflow rate. Molteni, Lanzafame & Chakrabarti , in their SPH simulations, found that the ratio could be as high as $`15-20`$ per cent when the flow is steady. In Ryu, Chakrabarti & Molteni , $`10-15`$ per cent of steady outflow is seen and occasionally even $`150`$ per cent of the inflow is found to be ejected in non-stationary cases. Our result shows that a high outflow rate is also possible, especially in the absence of shocks and at low luminosities. In Eggum, Coroniti & Katz , radiation dominated flows showed $`R_{\dot{m}}\sim 0.004`$, which also agrees with our results when we consider high accretion rates (see, e.g., Fig. 9a). Observationally, it is very difficult to obtain the outflow rate from a real system, as it depends on too many uncertainties, such as filling factors and projection effects. In any case, with a general knowledge of the outflow rate, we can now proceed to estimate several important quantities. For example, it had been argued that the composition of the disk changes due to nucleosynthesis in accretion disks around black holes and that these modified isotopes are deposited in the surroundings by outflows from the disks (\[35-36\] and references therein). Similarly, it is argued that outflows deposit magnetic flux tubes from accretion disks into the surroundings . Thus a knowledge of outflows is essential for understanding varied physical phenomena in galactic environments.
Would our solution be affected if radiation pressure is included? A preliminary investigation with a $`\mathrm{\Gamma }/r^2`$ force term (whose effect is to weaken gravity) suggests that for non-zero $`\mathrm{\Gamma }`$ in the inflow the mass-loss rate changes significantly. This is because the shock location moves outward when $`\mathrm{\Gamma }`$ is increased, which in turn reduces the mass loss rate. On the other hand, when $`\mathrm{\Gamma }`$ is non-zero in the outflow, the effect is not very strong, since the outflow rate is generally driven by the thermal effects of the disk and not of the wind. Similarly, we see a significant reduction of the outflow when the average specific angular momentum of the outflow is reduced. This is expected since the outflow is partially centrifugally driven. This effect is stronger when the outflow is isothermal.
An interesting situation arises when the polytropic index of the outflow is large and the compression ratio of the flow is also very high. In this case, the flow virtually bounces back as a wind and the outflow rate can be equal to the inflow rate or even higher, thereby evacuating the disk. In this range of parameters most, if not all, of our assumptions may break down completely, because the situation could become inherently time-dependent. It is possible that some black hole systems, including the one in our own galactic centre, may have undergone such an evacuation phase in the past and gone into a quiescent phase.
So far, we have made the computations around a Schwarzschild black hole. In the case of a Kerr black hole , the shock locations will come closer and the outflow rates should become higher. Similarly, magnetic fields will change the sonic point locations significantly . The mass outflow rates under these conditions are being studied and the results will be reported elsewhere .
## I Introduction
In ultrarelativistic heavy ion collisions in the upcoming experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and Large Hadron Collider (LHC) at CERN, a very high multiplicity of particles will be produced. Most of the models predict the multiplicities of produced hadrons per unit of rapidity in a given event to be of the order of
$`{\displaystyle \frac{dN}{dy}}\sim 10^3,`$ (1)
where the exact number depends on the particular model. For these high multiplicities the scale of statistical fluctuations in one unit of rapidity is expected to be of the order of $`\sigma \sim \sqrt{dN/dy}`$, so that
$`{\displaystyle \frac{\sigma }{dN/dy}}\sim 10^{-1}-10^{-2}.`$ (2)
Thus the multiplicities would not change much within each narrow bin in rapidity, as long as the number of particles produced in that rapidity bin is large. Collisions may therefore be studied on an event by event basis with little ambiguity. The fluctuations of multiplicities in each of the rapidity bins could be observed by examining several different events. One might wonder whether the numbers of particles produced in different rapidity bins are completely independent of each other, or whether there are certain correlations among them. In this paper we are going to address the question of rapidity correlations in the framework of the McLerran-Venugopalan model of particle production by a series of classical emissions .
In the McLerran-Venugopalan model the high multiplicity per unit area gives rise to a dimensionful parameter characterizing the nuclear collision
$`\mathrm{\Lambda }^2={\displaystyle \frac{1}{\pi R^2}}{\displaystyle \frac{dN}{dy}}`$ (3)
with $`R`$ the nuclear radius. It was argued that this scale represents the typical transverse momentum of the particles produced in the early stages of the nuclear collisions . Due to the high multiplicities of Eq. (1) this typical transverse momentum scale of produced partons given by Eq. (3) may become large compared to the QCD scale
$`\mathrm{\Lambda }^2\mathrm{\Lambda }_{QCD}^2.`$ (4)
The QCD coupling at the scale $`\mathrm{\Lambda }`$ is weak, $`\alpha _s(\mathrm{\Lambda })\ll 1`$. Together with the fact that the number of color charge sources in the nuclei is large, this allows us to assume that the gluons produced in a nuclear collision can be described by the classical field of the colliding nuclei . Here by classical field we mean that the gluon field is a solution of the Yang-Mills equations of motion with the nuclei providing the source term for the equations . A renormalization group (RG) approach has been developed recently to account for a series of classical emissions . The procedure invented in is the following: at each step of the evolution in rapidity each nucleus, through classical emissions, produces several partons with soft longitudinal momenta. At the next step of the RG the partons produced in the previous step are included in the source, off which the next generation of even softer partons is emitted. That way the produced soft partons modify the density of color sources, for which one writes a renormalization group equation . The exact form of this equation is not important for our discussion here. We just note that at the lowest order it reproduces the well known Balitsky, Fadin, Kuraev, Lipatov (BFKL) equation and the full equation should be able to resum multiple reggeon exchanges in the structure functions. The crucial assumption which we have to keep in mind is that the series of classical emissions in both nuclei provide us with a set of color charge sources, which give rise to the color field of the produced gluons. This is illustrated in Fig. 1. The QCD evolution, which consists of real terms (classical emissions) and virtual corrections, provides us with the color sources, which are denoted by crosses in Fig. 1.
To construct the classical field one has to solve the Yang-Mills equations treating the color charges generated through the evolution of Fig. 1 in both nuclei as contributions to the source term in the equation. For the field to be purely classical the generated color sources should be separated by a rapidity interval $`\delta y\gtrsim 1/\alpha _s`$. An example of a diagram contributing to the classical field generated this way is given in Fig. 2. The classical field of the colliding nuclei corresponds to gluon production in the approximation where one resums all powers of the parameter $`\alpha _s^2L`$ , where $`L`$ is the number of sources at a given impact parameter. If the total rapidity interval between the colliding nuclei is not very large, $`Y\lesssim 1/\alpha _s`$, so that the quantum evolution has not become important yet, the valence quarks in the nucleons will be the sources of color charge and the resummation parameter will become $`\alpha _s^2A^{1/3}`$ . One would have nucleons instead of the crosses in Fig. 2. Finding the classical field even for this somewhat simplified situation appears to be a complicated task. The problem has been solved analytically only for gluon production in the case of a proton scattering on a nucleus in . Several attempts have been made for the case of gluon production in nucleus-nucleus collisions , giving only the lowest order in $`\alpha _s`$ result. Extensive numerical studies of nucleus-nucleus collisions have also been performed . In this paper we will not be interested in the detailed structure of the field. What will be important to us is the fact that the classical field is boost invariant and, therefore, rapidity independent. Thus if one fixes the configuration of sources in Fig. 2, the classical field in the rapidity interval $`\mathrm{\Delta }y\lesssim 1/\alpha _s`$ would give rise to a rapidity independent distribution of the produced particles within this rapidity interval.
The classical gluon field has another interesting property. In the regime when the number of color sources is large, making the resummation parameter sizeable, $`\alpha _s^2L\sim 1`$, the classical field of Fig. 2 becomes strong, $`A_\mu \sim 1/g`$ . Since gluons are massless bosons and the typical phase space density is large, we expect that the gluon distribution function at momentum scales less than $`\mathrm{\Lambda }`$ will saturate, with occupation number
$`{\displaystyle \frac{1}{\pi R^2}}{\displaystyle \frac{dN}{d^2p_Tdy}}\sim 1/\alpha _s.`$ (5)
That way the field is very strong and non-perturbative, even though it could be obtained in the weak coupling regime by usual perturbative methods. The corresponding multiplicities are large \[Eq. (5)\].
If the field is classical in each collision, the produced particles arise from a fixed source on an event by event basis. Each event is characterized by a certain configuration of the color charges in the source, off which the classical field of Fig. 2 is emitted. The source results from the fluctuations in the color charge density of those quarks and gluons at rapidities larger or smaller than that at which we measure the distribution of produced particles. We know that the spectrum of fluctuations is Poissonian for a coherent state corresponding to a fixed source density which produces the field. This is similar to the results of reggeon calculus: after fixing the sources the classical fields might be considered as reggeons independently producing particles (see Fig. 3). The spectrum therefore will be Poissonian . Moreover, the classical effective action in a random background field describes both the dynamics of the gluon fields and the fluctuations induced by changing the source strength itself. Therefore, using the model of classical emissions we should be able to predict the spectrum of rapidity fluctuations at particle formation time.
The nature of the fluctuations in the source density is complicated by the fact that the source density itself satisfies a renormalization group equation , and is correlated with what the source would have been at other values of rapidity. One of the purposes of this paper is to disentangle this dependence and to predict the general form of the multiplicity fluctuation spectrum. In Sect. II we shall show that as a consequence of the renormalization group behavior, the fluctuation spectrum exhibits the Koba, Nielsen, and Olesen (KNO) scaling . The KNO scaling for hadron-hadron collisions states that the probability of producing a given multiplicity of particles $`N`$ at some given high energy $`E`$, which we denote $`dP/dN`$, multiplied by the average multiplicity of produced particles at this energy $`\overline{N}(E)`$ is described at all energies by the same scaling function of $`N/\overline{N}(E)`$
$`\overline{N}(E){\displaystyle \frac{dP}{dN}}=f(N/\overline{N}(E)),`$ (6)
characteristic of given hadron species. That way the energy dependence of the high energy inclusive cross sections comes in only through the average multiplicity of the produced particles $`\overline{N}(E)`$.
The nature of this scaling is slightly modified in the case of a nucleus, since in nuclear collisions there is another parameter in the problem, the atomic number of the nucleus $`A`$. Many nucleons in the nucleus enhance multiple interactions, changing the shape of the KNO distribution. In this paper we will for simplicity consider only the collisions of identical nuclei having the same atomic number $`A`$. The relative width of the distribution depends on the baryon number of the nucleus, and must scale roughly like $`1/\sqrt{A^\delta }`$, where $`\delta `$ could be $`2/3`$, $`1`$ or $`4/3`$ depending on the saturation model, as will be discussed below in Sect. II. This scaling arises from the fact that nucleons separated by transverse distances greater than or of the order of a Fermi from one another must act independently. Thus for nuclear collisions we predict that the KNO scaling function will depend on $`A`$
$`\overline{N}(E){\displaystyle \frac{dP}{dN}}=f(A,N/\overline{N}(E)).`$ (7)
We will derive this result in Sect. II. It implies that for a nucleus of fixed size $`A`$ one should observe KNO scaling, but the scaling function will differ from nucleus to nucleus. We will also argue that the form of this distribution is
$`f\sim \mathrm{exp}\left[-\mathrm{const}\hspace{0.17em}A^\delta (\sqrt{N/\overline{N}(E)}-1)^2\right]`$ (8)
\[see Eq. (21)\]. This multiplicity distribution has a width
$`\delta N\sim \overline{N}(E)/\sqrt{A^\delta }.`$ (9)
Of course the width $`\delta N`$ is parametrically of the order of $`\sqrt{A^\delta }`$, since, as will be shown in Sect. II, $`\overline{N}(E)\sim A^\delta `$, but the signal for the KNO form is its dependence on energy through the total multiplicity.
Of course, due to the final state interactions of particles, the number of particles in the final state is not expected to be much changed from the initial conditions. We expect therefore that fluctuations in the total particle multiplicity as a function of rapidity on an event by event basis at the particle formation time should be reflected in the final state distribution of produced particles. This should be true at least up to some number such as $`\alpha _s`$ times $`\sqrt{dN/dy}`$. If we look for very rare fluctuations, then fluctuations due to the final state interactions should not obscure a huge initial state fluctuation.
In Sect. III we will derive another feature of the underlying classical dynamics which manifests itself in the two particle distribution function. To compute the two particle distribution, for each event we take the multiplicity in some bin at $`y_1`$ and multiply by the multiplicity at another rapidity bin $`y_2`$. If the classical fields are the ones responsible for the distribution of the produced particles, we shall show that at the leading order in $`A`$ the two particle distribution function factorizes into
$`D(y_1,y_2)={\displaystyle \frac{dN}{dy_1}}{\displaystyle \frac{dN}{dy_2}}.`$ (10)
The factorization is demonstrated in Fig. 3. Of course this factorization would happen if the distributions were uncorrelated at rapidities $`y_1`$ and $`y_2`$. We shall argue that they are in fact very tightly correlated in Sect. IV by introducing a correlation function which would allow one to disentangle the classical effects from other possible mechanisms of particle production.
The classical field responsible for this distribution is constant over a large rapidity interval. Therefore, if we measure the multiplicity fluctuation in a rapidity interval centered around $`y_1`$ which is several sigma away from the average, the multiplicity in a neighboring rapidity interval centered around $`y_2`$ should be roughly the same multiple-sigma fluctuation from the average. This very remarkable correlation is a measure of the classical coherence of the field, and is a direct measure of the underlying string-like dynamics of the gluon field. We shall make this argument firm in Sect. IV by introducing a correlation coefficient at a fixed impact parameter of the nuclei, $`\mathcal{C}(b,y_1,y_2)`$, in Eq. (32). The coefficient $`\mathcal{C}(b,y_1,y_2)`$ is an experimentally measurable observable which is equal to $`1`$ when the particle productions at the rapidities $`y_1`$ and $`y_2`$ are correlated on an event-by-event basis, and is less than one otherwise. We will argue that if the classical picture of emissions is true this correlation coefficient should be $`1`$ over large intervals in rapidity, $`|y_1-y_2|\lesssim 1/\alpha _s`$. It should fall off for wider rapidity intervals. This prediction could be checked experimentally at RHIC and LHC.
In Sect. IV we also discuss the differences and similarities between the results of our classical approach and the conclusions for rapidity correlations which could be drawn from the old reggeon theory.
We summarize our results in Sect. V.
## II Nature of KNO Scaling
In this section, we will study the fluctuation spectra of produced particles. Strictly speaking, we are studying the fluctuation spectrum of produced gluons in the early stages of the collisions. The number of gluons is of course modified by subsequent interactions with other gluons and in their subsequent transmutation into pions. It is expected nevertheless that the number of produced pions is close to the number of initially produced gluons. This is because the gluons are thermalized largely by two body collisions in the early stage of the collision, since the coupling is weak if the typical transverse momentum of the gluons is large. This is expected if the multiplicity per unit area $`\mathrm{\Lambda }^2`$ of Eq. (3) is large compared to $`\mathrm{\Lambda }_{QCD}^2`$, as should be the case for large nuclei at asymptotically high energy. The two body collisions change the transverse momentum distribution of gluons, but preserve the total gluon number. In this paper we are interested in the multiplicities of gluons integrated over all transverse momenta (or coordinates), which do not change under thermalizing two gluon collisions. At later times, after thermalization, we expect that entropy will be approximately conserved. At late stages, the entropy is converted into pion number as the system cools. We expect therefore that $`dN_{gluon}/dy\sim dN_{pion}/dy`$. In general there may be some weak dependence of this proportionality upon multiplicity, but for our purposes this weak dependence will not be important.
Therefore, although we compute the initial multiplicity fluctuations in gluons, this should be reflected in the multiplicity fluctuations of produced pions.
In the McLerran-Venugopalan model of the small x hadronic wave function, one computes a classical field which arises from a source density . This source density is at rapidities much larger than that of the classical field which we compute. Eventually the source density is itself integrated over to be included in the source for the next series of classical emissions , as was described in the introduction.
Let us first compute the fluctuation spectrum of produced particles for a fixed source. In this case, the wavefunction corresponding to the classical field is a coherent state. Coherent state wavefunctions generate Poisson distributions in multiplicity. Therefore the typical fluctuation scale in a unit of rapidity is of order $`\sqrt{dN/dy}`$. We will soon see that this scale is small compared to that generated by the fluctuations in the source itself.
To understand the fluctuations induced by the source of the classical fields, we construct a model which has most of the features of BFKL evolution and fluctuations in the source density. We use intuition developed from understanding the renormalization group structure of the McLerran-Venugopalan model. We use the equation
$`{\displaystyle \frac{dN}{dy}}=\kappa N(y)+\sqrt{\kappa ^{\prime }N(y)}\zeta (y).`$ (11)
In this equation, $`N(y)`$ is the total number of particles in the rapidity range between $`y_{target}`$ and $`y`$,
$`N(y)={\displaystyle \int _{y_{target}}^y}dy^{\prime }{\displaystyle \frac{dN}{dy^{\prime }}}.`$ (12)
The first term on the right hand side of the evolution equation (11) is the toy model analog of the kernel of the BFKL evolution equation (thus $`\kappa =\alpha _P-1=\frac{4\alpha _sN_c\mathrm{ln}2}{\pi }`$, where $`\alpha _P`$ is the intercept of the BFKL pomeron ). For a series of classical emissions one could conclude that the number of produced particles is proportional to the total number of partons off which the new particles are emitted. This is reflected in the first term of Eq. (11). We note that due to the classical emission picture we can write Eq. (11) for particle multiplicities and not just for cross sections (the BFKL equation was originally written for the cross section). The reason for this stems from the fact that at the high energies considered here the total cross sections become independent of energy, thus making the energy dependence of inclusive cross sections and multiplicities identical. If we ignore the second term on the right hand side of Eq. (11) the solution of the equation would be
$`{\displaystyle \frac{dN}{dy}}=N_0e^{\kappa (y-y_{target})},`$ (13)
which is our analog of the solution of the BFKL equation. Comparing Eq. (13) to the usual one-BFKL-pomeron exchange result we note here that $`N_0\sim \alpha _s^2`$. The $`A`$-dependence of $`N_0`$ is determined from the predictions for the multiplicity inside the saturation region. All saturation models agree that the multiplicity of the produced particles in the saturation region is given by
$`{\displaystyle \frac{dN}{dy}}\sim A^{2/3}Q_s^2(y)`$ (14)
with $`Q_s(y)`$ the saturation momentum, which is denoted by $`\mathrm{\Lambda }`$ in Eq. (3). The factor of $`A^{2/3}`$ in Eq. (14) results from the integration over the impact parameters of the nuclei. In the Glauber-Mueller-type saturation models the saturation momentum scales as $`Q_s^2\sim A^{1/3}`$ due to multiple rescatterings within the nucleus. In the approaches based on the BFKL equation $`Q_s^2\sim A^{2/3}`$. In the Reggeon approach $`Q_s^2`$ is independent of $`A`$. We can summarize all these results by writing $`N_0\sim A^\delta `$, with $`\delta `$ dependent on the particular model at hand. In the Glauber-Mueller model $`\delta =1`$, in BFKL-based approaches $`\delta =4/3`$ and in Reggeon calculus $`\delta =2/3`$.
The second term on the right hand side of Eq. (11) is stochastic and in our model describes the fluctuations induced at each stage of the evolution equation. The probability distribution of the fluctuation of the stochastic term $`\zeta `$ is Gaussian
$`Z={\displaystyle \int [d\zeta ]e^{-\frac{1}{2}{\scriptscriptstyle \int dy\zeta ^2(y)}}}.`$ (15)
Its origin is in the renormalization group . At each step a classical field is induced, which, when small fluctuations are computed, gets converted into a source for the next step in the evolution. Of course the classical field itself has Poissonian fluctuations, which near the center of the distribution should be Gaussian. Therefore, the induced source at the next step will have fluctuations built in which are correlated with the fluctuations in the previous step. If we look over an interval of unit width in rapidity, the weight of these fluctuations is of order $`\sqrt{dN/dy}`$. Our stochastic source $`\zeta `$ is weighted by the function $`Z`$ of Eq. (15), so that its fluctuations in one unit of rapidity are also of order one. Using the fact that $`\kappa N\approx dN/dy`$, which is true for the solution (13) of Eq. (11) without the stochastic term and, as we shall see below, is approximately valid for the full Eq. (11), we can replace the weight of the stochastic fluctuation term by $`\sqrt{dN/dy}\approx \sqrt{\kappa ^{\prime }N}`$, arriving at the evolution equation (11). We also note that $`\kappa ^{\prime }\sim \alpha _s`$.
The solution to Eq. (11) for fixed $`\zeta `$ is
$`N(y)=\left[\sqrt{\overline{N}(y)}+{\displaystyle \frac{1}{2}}\sqrt{\kappa ^{\prime }}{\displaystyle \int _0^y}dy^{\prime }e^{\kappa (y-y^{\prime })/2}\zeta (y^{\prime })\right]^2.`$ (16)
In Eq. (16)
$`\overline{N}(y)=e^{\kappa y}N_0`$ (17)
where we will in the future set $`y_{target}=0`$. Note that $`\overline{N}`$ is the typical average multiplicity at fixed $`y`$ which one would have in the limit when fluctuations are turned off.
We can now compute the multiplicity distribution function $`dP/dN`$. To do this we must integrate over the fluctuation fields $`\zeta `$ with the constraint that the total multiplicity is given by Eq. (16). We have
$`{\displaystyle \frac{dP}{dN}}={\displaystyle \frac{1}{Z}}{\displaystyle \int [d\zeta ]\mathrm{exp}\left[-\frac{1}{2}\int dy^{\prime }\zeta ^2(y^{\prime })\right]\delta \left(N(y)-[\sqrt{\overline{N}(y)}+\frac{\sqrt{\kappa ^{\prime }}}{2}\int _0^{\infty }dy^{\prime }e^{\kappa (y-y^{\prime })/2}\zeta (y^{\prime })]^2\right)}`$ (18)
In this integral, we replaced the upper limit of integration in the expression for $`N(y)`$ by infinity, since we are typically interested in large values of $`y`$. This makes the analysis simpler in what follows.
To evaluate the path integral, one should decompose
$`\zeta (y)={\displaystyle \sum _{n=0}^{\infty }}e^{-\kappa y/2}c_nL_n(\kappa y),`$ (19)
where the $`L_n`$'s are the Laguerre polynomials $`L_n^0`$. Using the orthogonality condition for Laguerre polynomials one can see that all the integrals over the $`c_n`$'s in Eq. (18) become Gaussian, with the exception of the integral over $`c_0`$ in the numerator, which is fixed by the delta function. The integrals over the $`c_n`$'s for $`n\geq 1`$ can be done in closed form and are canceled by the same integrals in $`Z`$ in the denominator. The final answer is given by the $`c_0`$ integration.
In the end, we find that
$`\overline{N}(y){\displaystyle \frac{dP}{dN}}=\sqrt{{\displaystyle \frac{\kappa N_0\overline{N}(y)}{2\pi \kappa ^{\prime }N}}}\mathrm{exp}\left[-2{\displaystyle \frac{\kappa N_0}{\kappa ^{\prime }}}\left(1-\sqrt{{\displaystyle \frac{N}{\overline{N}(y)}}}\right)^2\right].`$ (20)
The multiplicity of the produced particles $`N_0`$ is very large. That allows us to expand Eq. (20) around $`N=\overline{N}`$. We obtain
$`\overline{N}(y){\displaystyle \frac{dP}{dN}}=\sqrt{{\displaystyle \frac{\kappa N_0}{2\pi \kappa ^{\prime }}}}\mathrm{exp}\left[-{\displaystyle \frac{\kappa N_0}{2\kappa ^{\prime }}}\left(1-{\displaystyle \frac{N}{\overline{N}(y)}}\right)^2\right].`$ (21)
This result for the multiplicity distribution is simple to understand. The evolution equation connects the produced particles to some initial spectrum of fluctuations. The relative importance of these fluctuations decreases as the multiplicity increases. Therefore the fluctuations within an interval of width $`1/\alpha _s`$ in rapidity set up the fluctuations at higher values. This is the factor of $`\kappa N_0/\kappa ^{\prime }`$ in the exponent of Eq. (21). Remembering that $`\kappa \sim \kappa ^{\prime }\sim \alpha _s`$ and $`N_0\sim \alpha _s^2A^\delta `$ we conclude that the relative width of the multiplicity distribution scales like $`1/(\alpha _s\sqrt{A^\delta })`$. The fluctuations are all correlated all the way down the chain and therefore should only depend upon the ratio $`N/\overline{N}(y)`$. This dependence is the essence of KNO scaling. The only important variable is the multiplicity divided by the average multiplicity. Of course there is also a dependence on $`N_0`$, because there are more independent emitters for a nucleus than for a hadron. This is reflected in the fact that $`N_0\sim A^\delta `$.
Note that the width of this distribution is
$`\delta N^2\sim \overline{N}^2(y)\kappa ^{\prime }/N_0\kappa .`$ (22)
Parametrically $`\delta N^2`$ is linear in $`A^\delta `$, as $`N_0\sim A^\delta `$ and $`\overline{N}\sim N_0\sim A^\delta `$.
In general, the fluctuation spectrum for the first few emitters which generate the fluctuations in the distribution may be different from Eq. (21). The exact shape of the KNO scaling function will probably differ from the one given by Eqs. (20) and (21). Nevertheless, the physical picture we have generated still is true, and therefore we expect that in general the distribution will be of the form
$`\overline{N}(y){\displaystyle \frac{dP}{dN}}=f(N_0,N/\overline{N}(y))`$ (23)
and that the typical width of the distribution will scale with $`A`$ and $`\alpha _s`$ in the same way as the width in our model given by Eq. (22). The physical picture we have of the small $`x`$ gluon distribution functions therefore automatically has KNO scaling \[cf. \].
In terms of the strong coupling constant the relative width is $`\delta N^2/\overline{N}^2(y)\sim 1/\alpha _s^2`$ (see Eq. (22)). It has been argued in the framework of the McLerran-Venugopalan model that in high energy nuclear scatterings the high parton density sets the scale of the running coupling constant . If this assumption is true, then one could conclude that at very high energies $`N_0\sim \alpha _s^2(\mathrm{\Lambda }_{target})`$, since $`N_0`$ is the multiplicity of the particles in the fragmentation region. $`\mathrm{\Lambda }_{target}`$ is the transverse momentum scale characterizing the target nucleus. At the same time $`\kappa \sim \alpha _s(\mathrm{\Lambda })`$, as it determines the evolution of the multiplicity $`N(y)`$ at some large rapidity $`y`$. ($`\mathrm{\Lambda }`$ is given by Eq. (3).) $`\kappa ^{\prime }`$ depends on $`y`$ through its dependence on $`\mathrm{\Lambda }(N(y))`$. However, if we allow $`\kappa ^{\prime }`$ to depend on $`y`$ then, since the integral over $`y^{\prime }`$ in Eq. (16) is dominated by $`y^{\prime }\approx 0`$, one can see that the most important value of $`\kappa ^{\prime }`$ would be in the fragmentation region. Thus $`\kappa ^{\prime }\sim \alpha _s(\mathrm{\Lambda }_{target})`$. In light of the above estimates and using Eq. (22) we conclude that the relative width depends on the average multiplicity
$`{\displaystyle \frac{\delta N^2}{\overline{N}^2(y)}}\sim {\displaystyle \frac{1}{\alpha _s(\mathrm{\Lambda })\alpha _s(\mathrm{\Lambda }_{target})}}\sim {\displaystyle \frac{1}{\alpha _s(N)}}.`$ (24)
The high multiplicity of the produced particles would create a large momentum scale $`\mathrm{\Lambda }`$, which would make the coupling constant $`\alpha _s(\mathrm{\Lambda })`$ small. Thus the relative width of the KNO distribution given by Eq. (24) will get larger. Moreover, since the change in width would depend on the average multiplicity of the produced particles that would violate the KNO scaling. Thus our prediction of KNO scaling will start to break down slowly at very high energies. At the same time since the rate of change of the running coupling constant $`\alpha _s(\mathrm{\Lambda })`$ slows down at large $`\mathrm{\Lambda }`$ the violation of KNO scaling will decrease as the energy gets higher.
## III Classical Nature of the Distribution Function
Another feature of the classical field particle production arises from considering the distribution function $`D(y_1,y_2)`$. We define this to be the two particle distribution function measured in the following way: In each event or class of events, measure the multiplicity of particles at rapidities $`y_1`$ and $`y_2`$ in bins of width $`dy_1`$ and $`dy_2`$. Multiply these multiplicities together to generate
$`d^2N(y_1,y_2)=D(y_1,y_2)dy_1dy_2.`$ (25)
Then average $`D(y_1,y_2)`$ over events or classes of events.
A class of events might be generated by putting a cut on say the total multiplicity of particles in the neighborhood of zero rapidity. In the class of events where this cut is satisfied, one could measure the multiplicity in different regions of rapidity and average over the class of events.
The distribution function $`D`$ may be generated as an expectation value of the number operator
$`D(y_1,y_2)={\displaystyle \int \frac{d^2p_T}{(2\pi )^2}\frac{d^2q_T}{(2\pi )^2}\left\langle a^{\dagger }(y_1,p_T)a(y_1,p_T)a^{\dagger }(y_2,q_T)a(y_2,q_T)\right\rangle }.`$ (26)
In the classical field limit, this expression is of the form
$`a^{\dagger }(y_1,p_T)a(y_1,p_T)a^{\dagger }(y_2,q_T)a(y_2,q_T)\sim A^{i*}(p)A^i(p)A^{i*}(q)A^i(q)`$ (27)
where $`A^i`$ is the classical field produce by the color sources in the colliding nuclei . When one averages over the sources which generate the fields, one finds that
$`D(y_1,y_2)={\displaystyle \frac{dN}{dy_1}}{\displaystyle \frac{dN}{dy_2}}.`$ (28)
This is true up to corrections which are of order $`A^{-2/3}`$, that is, of the order of one over the area of the nuclei. This statement can be understood from Fig. 3. The classical fields producing the gluons connect to several color sources. To leading order in $`A`$ the fields generating gluons at $`y_1`$ and $`y_2`$ connect to different sets of color sources, making Eq. (28) true.
Note that this tells us that the fields are essentially trivially correlated in the longitudinal direction. The connected piece of $`D(y_1,y_2)`$ has vanished entirely, up to corrections which go like one over the area of the nuclei. This lack of correlation is not by itself evidence of a classical field. It could occur if there were for example no correlations at all in the longitudinal space. The structure of the KNO distribution itself suggests that this is not the case. In the next section, we shall see that there is a correlation function which dramatically illustrates the classical correlation.
## IV Classical Correlation
If there are classical fields, then, as we have seen above, due to boost invariance, these classical fields are independent of rapidity over a wide range of rapidity . Therefore the multiplicity should on the average be the same over the range of rapidity where the classical field theory is valid. This is typically of order $`1/\alpha _s`$ in the classical field approach. At the same time $`\alpha _s`$ should be small when the density of produced particles is large , making this correlation length in rapidity large.
This effect can be measured in the following way: Measure the rapidity density in some bin of width $`dy_1`$ around $`y_1`$ and $`dy_2`$ around $`y_2`$. Require that $`dy_1`$ and $`dy_2`$ are large enough so that the statistical fluctuations in the rapidity in a given event are small compared to the total multiplicity. From our picture of classical particle production it follows that the multiplicity around $`y_1`$ should be the same as around $`y_2`$ up to statistical fluctuations.
If we applied this analysis to an average event, the result above would be trivial. On the other hand, if we look at an event where the fluctuation is very rare in the bin around $`y_1`$, we predict that there will also be the same rare fluctuations around $`y_2`$! Rare events tend to fluctuate together over a wide range of rapidity!
A measurement of the width of the correlation length tells us something about the underlying classical dynamics and is interesting in itself.
First we note that in a nuclear collision $`\frac{dN}{dy}`$ is a function of rapidity $`y`$, impact parameter of the nuclei $`b`$, and of the configuration of color charges in the nuclei in this particular collision, which we will symbolically denote $`\rho `$:
$`{\displaystyle \frac{dN}{dy}}={\displaystyle \frac{dN}{dy}}(y,b,\rho ).`$ (29)
If one measures $`\frac{dN}{dy}`$ in a number of events and then takes the average value at a fixed impact parameter of the colliding nuclei $`b`$ the result should correspond to the averaging of the theoretical prediction for $`\frac{dN}{dy}`$ over all configurations of color charges in the colliding nuclei $`\rho `$, which we write as
$`\left\langle {\displaystyle \frac{dN}{dy}}(y,b,\rho )\right\rangle _\rho .`$ (30)
Event by event fluctuations in $`\frac{dN}{dy}(y,b,\rho )`$ are characterized by the variance of that quantity which we will define as
$`V\left[{\displaystyle \frac{dN}{dy}}\right]=\left\langle \left({\displaystyle \frac{dN}{dy}}(y,b,\rho )\right)^2\right\rangle _\rho -\left\langle {\displaystyle \frac{dN}{dy}}(y,b,\rho )\right\rangle _\rho ^2.`$ (31)
Thus we can also introduce the correlation function of the numbers of particles measured at two different rapidity points $`y_1`$ and $`y_2`$ in several events with the same impact parameter (that is for several configurations of color charges $`\rho `$). According to the standard mathematical methods we define the correlation coefficient
$`\mathcal{C}(b;y_1,y_2)={\displaystyle \frac{\left\langle \frac{dN}{dy_1dy_2}(b,\rho )\right\rangle _\rho -\left\langle \frac{dN}{dy_1}(b,\rho )\right\rangle _\rho \left\langle \frac{dN}{dy_2}(b,\rho )\right\rangle _\rho }{\sqrt{V\left[\frac{dN}{dy_1}\right]V\left[\frac{dN}{dy_2}\right]}}}.`$ (32)
We can neglect the $`y`$ dependence of $`\frac{dN}{dy}(y,b,\rho )`$ for rapidity intervals of the size $`1/\alpha _s`$. One can then see that when $`|y_1-y_2|\lesssim 1/\alpha _s`$ we can neglect the variation of $`\frac{dN}{dy}(b,\rho )`$ with $`y`$, which is purely statistical in that interval, and put $`\frac{dN}{dy_1}\approx \frac{dN}{dy_2}`$, which leads to $`\mathcal{C}(b;y_1,y_2)=1`$. This is the prediction of the classical emission picture discussed above and in : the correlation function $`\mathcal{C}(b;y_1,y_2)`$ should be close to one for $`|y_1-y_2|\lesssim 1/\alpha _s`$. When $`|y_1-y_2|\gtrsim 1/\alpha _s`$ the $`y`$ dependence of $`\frac{dN}{dy}`$ in the interval between $`y_1`$ and $`y_2`$ becomes important and $`\frac{dN}{dy_1}`$ and $`\frac{dN}{dy_2}`$ become uncorrelated, which would lead to $`\mathcal{C}(b;y_1,y_2)`$ being smaller than one.
One might note that in the leading-powers-of-$`A`$ approximation both the numerator and the denominator in the expression for $`\mathcal{C}(b;y_1,y_2)`$ given by Eq. (32) are zero. The non-zero terms appear once we include the corrections which are subleading in $`A`$. The first non-zero terms in the numerator and denominator of Eq. (32) are suppressed by $`A^{2/3}`$ compared to the leading contribution in, for instance, $`\frac{dN}{dy_1dy_2}(b,\rho )`$. That is, if we calculate $`\frac{dN}{dy_1dy_2}(b,\rho )`$ in the classical approximation we would obtain the leading term in $`A`$ given by Eq. (28), which is canceled in the numerator of Eq. (32), and the subleading terms, which are suppressed at least by $`A^{2/3}`$, but still arise from the classical approximation. The largest of these subleading terms is suppressed by exactly $`A^{2/3}`$ and results from the situation when both particles at $`y_1`$ and $`y_2`$ are emitted from the same nucleon. Even though it is subleading in $`A`$, this term is nevertheless larger than the correlations induced by quantum corrections connecting the fields that produce the particles at $`y_1`$ and $`y_2`$, since the latter are also suppressed by extra powers of $`\alpha _s`$. The quantum corrections within one of the fields do not violate the factorization of Eq. (28) and, therefore, cancel in the numerator of Eq. (32). Thus the fact that the correlator in Eq. (32) is equal to $`1`$ because of the terms in the numerator and denominator which are subleading in $`A`$ does not influence the fact that the correlations are classical.
In order to understand Eq. (32) better let us consider different possible definitions of the correlation function. For the rapidity correlation function defined as
$`\mathcal{R}_\rho (y_1,y_2)={\displaystyle \frac{\frac{dN}{dy_1dy_2}(b,\rho )-\frac{dN}{dy_1}(b,\rho )\frac{dN}{dy_2}(b,\rho )}{\frac{dN}{dy_1}(b,\rho )\frac{dN}{dy_2}(b,\rho )}}`$ (33)
we expect that
$`_\rho (y_1,y_2)=O\left(A^{\frac{2}{3}}\right).`$ (34)
Note, that in Eq.(32) we fix the value of the impact parameter (transverse distance between centers of colliding nuclei) which is also a characteristics of the event. The $`A^{\frac{2}{3}}`$ \- corrections stem from region of integration $`p_Tq_T`$ in Eq. (27). This is the subleading correction to the factorization of Eq. (28). It could also be viewed as resulting from the contribution when the two fields of Fig. 3 share the same source of color charge. That term would definitely be suppressed by $`A^{2/3}`$. Thus the correlation coefficient of Eq. (33) defined without averaging over events does not reflect the correlations that we are interested in. However, $`R_\rho `$ is not the correlation coefficient usually employed in the experiments. One defines the correlation function $`_{average}`$ by
$`_{average}(y_1,y_2)={\displaystyle \frac{\frac{1}{\sigma _{tot}}\frac{d\sigma }{dy_1dy_2}_{b,\rho }\frac{1}{\sigma _{tot}}\frac{d\sigma }{dy_1}_{b,\rho }\frac{1}{\sigma _{tot}}\frac{d\sigma }{dy_2}_{b,\rho }}{\frac{1}{\sigma _{tot}}\frac{d\sigma }{dy_1}_{b,\rho }\frac{1}{\sigma _{tot}}\frac{d\sigma }{dy_2}_{b,\rho }}}`$ (35)
where $`\sigma _{tot}`$, $`\frac{d\sigma }{dy}`$ and $`\frac{d\sigma }{dy_1dy_2}`$ are total, single and double inclusive cross sections for nuclear collisions that are defined by averaging over all events ($`\rho `$) and integrating over all impact parameters $`(b_t)`$. As was shown long ago in the Pomeron approach function $`_{average}`$ generally is of the order of unity even in the case if have $`_\rho =0`$ . However, we would like to point out that the main correlation which makes $`_{average}`$ large is the simple correlation in impact parameters which has a very simple underlying physics, which states that partons (gluons) are produced independently but at the same value of impact parameter. In McLerran-Venugopalan model one might expect that at high parton densities or better to say in collisions of heavy nuclei the $`b_t`$ \- distribution for all physical observables are the same, namely, $`\mathrm{\Theta }(R_Ab_t)`$. In this case $`_{average}\mathrm{\hspace{0.17em}0}`$ showing a classical field emission even without the event selection ($`b_t`$ fixing). If one does not integrate over the impact parameter in Eq. (35) then without the impact parameter correlations this correlation function will be small again failing to reflect the type of correlations we are interested in.
The general picture of correlations for high parton density QCD turns out to be very similar to pattern of correlations predicted by the Reggeon approach which has been discussed in details for three decades . The first observation is that in both approaches Eqs. (27) and (33) lead to conclusion that the secondary gluons (hadrons) are originated from the independent production of clusters with the mean multiplicity $`N=\frac{dN}{dy}(y,\rho ,b_t)๐y`$. Therefore, the probability to emit $`k\times N`$ gluons or, in other words, $`k`$ clusters, is equal to
$`P_k=e^N{\displaystyle \frac{N^k}{k!}}.`$ (36)
Note, that Eq. (36) is a well known Poisson distribution.
The second observation stems from the classical field emission, namely, $`\frac{dN}{dy}(y,\rho ,b_t)=d=Const(y)1`$ for the rapidity interval of the order of $`1/\alpha _s1`$. We will discuss why $`d1`$ below. Assuming this we can easily see that we can neglect statistic fluctuations in $`N(y,\rho ,b_t)`$, and, therefore, $`N=Yd`$ where $`Y`$ is the accessible rapidity interval ($`Y1/\alpha _s`$ ). We can claim even that $`N(y,\rho ,b_t)`$ which is defined as the number of gluons ( hadrons) in the rapidity interval $`y\mathrm{\Delta }yรทy+\mathrm{\Delta }y`$ is equal to
$`N(y,\rho ,b_t)=k\times d\times \mathrm{\hspace{0.17em}2}\mathrm{\Delta }y.`$ (37)
Therefore, measuring $`N(y,\rho ,b_t)`$ we fix the number of clusters $`k`$ or, in other words, we select a configuration which is produced with probability $`P_k`$ (see Eq. (36)).
Let us recall the main predictions that will hold both in classical field approach ( McLerran-Venugopalan model ) and in the Reggeon description. If we select the events where the multiplicity of the particles in the rapidity bin $`y_1\mathrm{\Delta }yรทy_1+\mathrm{\Delta }y`$ around $`y_1`$ ($`N(y_1)`$) is fixed and average the multiplicity in the rapidity interval $`y_2\mathrm{\Delta }yรทy_2+\mathrm{\Delta }y`$ over these events we predict that it will be equal to
$`N(y_2)_{N(y_1)fixed}=N(y_1).`$ (38)
Also the number of neutral pions in rapidity interval $`y_1\mathrm{\Delta }yรทy_1+\mathrm{\Delta }y`$ ($`N_0(y_1)`$) averaged over the events with a fixed number of charged pions in the same bin should be equal to
$`N_0(y_1)_{N^{ch}(y_1)fixed}={\displaystyle \frac{1}{2}}N^{ch}(y_1),`$ (39)
where $`N^{ch}(y_1)`$ is the number of charged pions. Both models predict that for heavy ion-ion collisions in the kinematic region of saturation we have
$`_{average}(y_1,y_2)\mathrm{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}0}\left({\displaystyle \frac{1}{A^{2/3}}}\right).`$ (40)
One can see that all these prediction are direct consequences of Eq. (36) and Eq. (37). Much more detailed predictions could be found in Refs. .
The high density QCD and soft Reggeon approaches can be distinguished by measuring the transverse momenta distributions or/and transverse momenta correlation between produced particles. Indeed, in high parton density QCD the typical transverse momentum of produced gluons (saturation momentum $`Q_s(A,x)`$ ) is large and depends on $`A`$ while in the Reggeon approach this momentum is constant and rather small โ about 2 GeV, which results from the slope estimates of the soft pomeron. However, at first sight, this main difference does not influence the shape of the rapidity correlations and only increases the multiplicity of produced particles making our predictions more reliable in this case. Indeed, the saturation of the gluon density leads to $`dQ_s^2(W,A)`$ where $`W`$ is the energy in the center of mass frame and $`Q_s^2(W,A)`$ is the saturation scale Eq. (14), which is similar to $`\mathrm{\Lambda }`$ of Eq. (3). Since $`Q_s^2(W,A)`$ increases with $`W`$ and $`A`$ (at least $`Q_s^2A^{\frac{1}{3}}`$ but it could even be proportional to $`A^{\frac{2}{3}}`$ ) we expect $`d1`$ which makes all of our estimates much more accurate than the results of the Reggeon approach . As far as the shape of the rapidity correlations is concerned the saturation of the parton densities leads to sufficiently large correlation length which is proportional to $`1/\alpha _s(Q_s)`$. For heavy nuclei and/or high energies $`Q_s`$ increases and $`\alpha _s0`$. This fact leads to Eq. (37) being better justified in the high parton density QCD than in the case of the Reggeon approach.
## V Conclusions
In this paper we have considered the consequences of the model of classical emissions on the particle production mechanisms in heavy ion collisions . We have argued that the classical fields do not vary significantly over the rapidity intervals of the order of $`1/\alpha _s`$. This led us to conclude that the multiplicities of the particles produced in the early stages of the collisions (gluons) are correlated over $`1/\alpha _s`$ units of rapidity on the event-by-event basis. We then gave an argument demonstrating that the number of gluons generated in a particular collision is proportional to the number of pions produced in the final state. Therefore the correlation in the multiplicities of the produced gluons would reflect themselves in the multiplicities of the final state pions. We have then constructed the correlation function $`๐(b;y_1,y_2)`$ in Eq. (32), which could be measured experimentally and is equal to $`1`$ when the multiplicities of particles at the rapidities $`y_1`$ and $`y_2`$ are correlated and is less than $`1`$ otherwise. Surprisingly this effect comes from the terms in the numerator and denominator of the expression for $`๐(b;y_1,y_2)`$ given by Eq. (32) which are subleading in $`A`$. We predict $`๐(b;y_1,y_2)`$ to be close to $`1`$ when $`|y_1y_2|1/\alpha _s`$ and fall off for larger rapidity intervals.
We have also used the model of classical particle production to explain KNO scaling. In a simple toy model which resulted from trying to mimic the main features of the classical emissions we have reproduced KNO scaling for the multiplicities of produced particles. The result is given in Eq. (21). Even though the exact shape of the KNO distribution function is, probably, different from our toy model prediction, we believe that the model captures its main features. For the case of nuclei we predict the KNO function to depend on the atomic number $`A`$ in addition to the usual dependence on $`N/\overline{N}(y)`$. We predict that at very high energies KNO scaling in nuclear collisions will be violated due to the running of the coupling constant, which would get smaller as parton density increases.
The main results of this paper can be summarized in the following way:
1. We have derived KNO scaling from the classical emission picture of particle production (see Eqs. (20) and (21)).
2. We predict rapidity correlations on the scales $`\delta y1/\alpha _s`$ in the particle (pion) production in heavy ion collisions on the event-by-event basis (Sect. IV).
3. We proposed a correlation coefficient $`๐(b;y_1,y_2)`$ which would allow one to measure the predicted correlations (see Eq. (32)).
## Acknowledgments
The authors would like to acknowledge helpful and encouraging discussions with Errol Gotsman, Miklos Gyulassy, Jamal Jalilian-Marian, Uri Maor, Al Mueller, Mark Strikman, Kirill Tuchin, Raju Venugopalan, and Heribert Weigert.
This work was carried out while E.L. was on sabbatical leave at BNL. E.L. wants to thank the nuclear theory group at BNL for their hospitality and support during that time.
The research of E.L. was supported in part by the Israel Science Foundation, founded by the Israeli Academy of Science and Humanities.
This research has been supported in part by the joint AmericanโIsraeli BSF Grant $`\mathrm{\#}`$ 9800276. This manuscript has been authorized under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. |
no-problem/9912/math9912076.html | ar5iv | text | # Birational Maps of Del Pezzo Fibrations.
## 1. Introduction.
In classical result, it is known that any $`^1`$-bundle over a nonsingular complex curve $`T`$ can be birationally transformed to a $`^1`$-bundle over $`T`$ by an elementary transformation. Here, we can ask if it is also possible in 3-fold case. In other words, is it true that any nonsingular del Pezzo fibration over a nonsingular curve can be transformed to another nonsingular del Pezzo fibration? In this question, we can add more condition on del Pezzo fibrations with some kind of analogue from ruled surface cases, that is, we can assume that their fibers are always nonsingular even though this is not true for any nonsingular del Pezzo fibration.
We ask the same question for local cases. Of course, we can birationally transform any $`^1`$-bundle over a germ of nonsingular complex curve $`(T,o)`$ into another $`^1`$-bundle over $`(T,o)`$. But, in del Pezzo fibrations over $`(T,o)`$, something different happens. In this paper, we will show that any del Pezzo fibration of degree $`d4`$ with nonsingular special fiber cannot be birationally transformed into another del Pezzo fibration with nonsingular special fiber.
Let $`๐ช`$ be a discrete valuation ring such that its residue field $`k`$ is of characteristic zero. We denote $`K`$ the quotient field of $`๐ช`$. Let $`X_K`$ be a variety defined over $`\text{Spec}K`$. A model of $`X_K`$ is a flat scheme $`X`$ defined over $`\text{Spec}๐ช`$ whose generic fiber is isomorphic to $`X_K`$. Fano fibrations are models of nonsingular Fano variety defined over $`K`$. In particular, del Pezzo fibrations of degree $`d`$ are models of nonsingular del Pezzo surfaces of degree $`d`$ defined over $`K`$. Del Pezzo fibrations are studied in \[C96\] and \[K97\]. They constructed โstandard modelโ (\[C96\]) and โsemistable modelโ (\[K97\]) in each paper.
Now, we state the theorem which we will prove in this paper.
###### Main Theorem.
Let $`X`$ and $`Y`$ be del Pezzo fibrations of degree $`d4`$ over $`\text{Spec}๐ช`$. Suppose that each scheme-theoretic special fiber is nonsingular. Then any birational map between $`X`$ and $`Y`$ over $`\text{Spec}๐ช`$ which is identical over generic fiber is a biregular morphism.
We should remark here that even though it is hard to find such examples, there are del Pezzo fibrations of degree $`d4`$ over $`\text{Spec}๐ช`$ with nonsingular special fibers which can be birationally transformed into another del Pezzo fibration over $`\text{Spec}๐ช`$ with reduced and irreducible special fiber. But, as in Minimal model program over 3-folds, we have to allow some mild singularities, such as terminal ones, on them. In the end of this paper, we will give such examples.
From now on, we explain standard definitions and notations for this paper. For more detail, we can refer to \[K92\], \[P99b\], and \[Sh93\].
A variety $`X`$ means an integral scheme of finite type over a fixed field $`k`$. A log pair $`(X,B)`$ is a normal variety $`X`$ equipped with a $``$-Weil divisor $`B`$ such that $`K_X+B`$ is $``$-Cartier. A log variety is a log pair $`(X,B)`$ such that $`B`$ is a subboundary.
The discrepancy of a divisor $`E`$ over $`X`$ with respect to a log pair $`(X,B)`$ will be denoted by $`a(E;X,B)`$. And we will use the standard abbreviation plt, klt, and lc for purely log terminal, Kawamata log terminal, and log canonical, respectively.
Let $`(X,B)`$ be an lc pair and $`D`$ an effective $``$-Cartier divisor on $`X`$. The log canonical threshold (or lc threshold) of $`D`$ is the number
$$lct(X,B,D):=sup\{c|(X,B+cD)islc\}.$$
If $`B=0`$, then we use $`lct(X,D)`$ instead of $`lct(X,0,D)`$.
Finally, we will use V. V. Shokurovโs 1-complement which is a main tool for this paper. Let $`X`$ be a normal variety and let $`D`$ be a reduced and irreducible divisor on $`X`$. A divisor $`K_X+D`$, not necessarily log canonical, is $`1`$-complementary if there is an integral Weil divisor $`D^+`$ such that $`K_X+D^+`$ is linearly trivial, $`K_X+D^+`$ is lc, and $`D^+D`$. The divisor $`K_X+D^+`$ is called a 1-complement of $`K_X+D`$. This is just special case of $`n`$-complements. But, it is enough for this paper. For more detail about complements, we can refer to \[P99b\], \[Sh93\], or \[Sh97\].
Acknowledgments. The author would like to thank Prof. V. V. Shokurov for his invaluable support.
## 2. Properties of certain birational maps.
Let $`๐ช`$ be a discrete valuation ring with local parameter $`t`$. The quotient field and residue field of $`๐ช`$ are denoted by $`K`$ and $`k`$, respectively. We always assume that the field $`k`$ is of characteristic zero. We denote $`T=\text{Spec}๐ช`$. For a scheme $`\pi :ZT`$, its scheme-theoretic special fiber $`\pi ^{}(o)`$ is denoted by $`S_Z`$, where $`o`$ is the closed point of $`T`$. From now on, a birational map is always assumed to be identical when restricted to the generic fibers.
Let $`X/T`$ be a $``$-factorial Gorenstein model of a nonsingular variety defined over $`K`$ which satisfies the following three conditions.
* (Special fiber condition)
The special fiber $`S_X`$ is a reduced and irreducible variety with nonempty anticanonical linear system. Moreover, log pair $`(X,S_X)`$ is plt.
* (1-complement condition)
For any $`C|K_{S_X}|`$, there exists 1-complement $`K_{S_X}+C_X`$ of $`K_{S_X}`$ such that $`C_X`$ does not contain any center of log canonicity of $`K_{S_X}+C`$.
* (Surjectivity condition)
Any 1-complement of $`K_{S_X}`$ can be extended to a 1-complement of $`K_X+S_X`$.
With Special fiber condition, we can easily show that $`X`$ has at worst terminal singularities. Moreover, the special fiber $`S_X`$ is a variety over $`k`$ with Gorenstein canonical singularities.
Let $`\varphi :XY`$ be a birational map over $`T`$, where $`X`$ and $`Y`$ are $``$-factorial Gorenstein models of a nonsingular variety defined over $`K`$ which satisfy above three conditions. Suppose that $`\varphi :XY`$ is not an isomorphism in codimension 1. We fix a resolution of indeterminacy of $`\varphi :XY`$ as follows.
Let $`\stackrel{~}{S_X}`$ and $`\stackrel{~}{S_Y}`$ be proper transformations of $`S_X`$ and $`S_Y`$ by $`f`$ and $`g`$, respectively. Since birational map $`\varphi `$ is not an isomorphism in codimension 1, $`\stackrel{~}{S_X}`$ is a $`g`$-exceptional divisor and $`\stackrel{~}{S_Y}`$ is $`f`$-exceptional.
###### Lemma 2.1
. Let $`K_X+S_X+D_X`$ be a 1-complement of $`K_X+S_X`$. And let $`D_Y=\varphi _{}D_X`$. For any prime divisor $`E`$ over $`X`$,
$$a(E;X,qS_X+D_X)=a(E;Y,\alpha _qS_Y+D_Y),$$
where $`q`$ is any given number and $`\alpha _q=a(\stackrel{~}{S_Y};X,qS_X+D_X)`$. Moreover, log canonical divisor $`K_Y+S_Y+D_Y`$ is linearly trivial.
Proof. Suppose that $`E`$ is a divisor on $`W`$. Note that $`f_{}^1D_X=g_{}^1D_Y=D_W`$. Then we have
$$K_W+q\stackrel{~}{S_X}+D_W=f^{}(K_X+qS_X+D_X)\alpha _q\stackrel{~}{S_Y}+a_iE_i,$$
and
$$K_W+\alpha _q\stackrel{~}{S_Y}+D_W=g^{}(K_Y+\alpha _qS_Y+D_Y)+b\stackrel{~}{S_X}+b_iE_i,$$
where each $`E_i`$ is $`f`$-exceptional and $`g`$-exceptional. From them, we get
$$f^{}(K_X+qS_X+D_X)g^{}(K_Y+\alpha _qS_Y+D_Y)=(q+b)\stackrel{~}{S_X}+(b_ia_i)E_i.$$
Since $`K_X+qS_X+D_X`$ is numerically trivial, we have
$$(q+b)\stackrel{~}{S_X}+(b_ia_i)E_i_g0.$$
By Negativity lemma, $`b=q`$ and $`b_i=a_i`$. This prove the first statement.
Since $`\varphi `$ is identical on generic fiber, it is clear that $`D_Y`$ is linearly equivalent to $`K_Y`$. Thus, the second statement follows from the fact that $`S_Y`$ is linearly trivial. Q.E.D.
###### Lemma 2.2
. There exists 1-complement $`K_{S_X}+C_X`$ (resp. $`K_{S_Y}+C_Y`$) of $`K_{S_X}`$ (resp. $`K_{S_Y}`$) does not contain the center of $`S_Y`$ (resp. $`S_X`$) on $`X`$ (resp. $`Y`$).
Proof. Let $`K_Y+S_Y+L_Y`$ be a 1-complement of $`K_Y+S_Y`$. By lemma 2.1, $`a(\stackrel{~}{S_Y};X,S_X+L_Y)1`$, where $`L_X=\varphi _{}^1L_Y`$. Clearly, the center of $`\stackrel{~}{S_Y}`$ on $`X`$ is contained in $`C=L_X|_{S_X}`$. By inversion of adjunction, the center of $`\stackrel{~}{S_Y}`$ is a center of log canonicity singularities of $`K_{S_X}+C`$. Furthermore, $`K_{S_X}+C`$ is linearly trivial by lemma 2.1. Therefore, 1-complement condition implies the statement. Q.E.D.
###### Lemma 2.3
. There is a 1-complement $`K_X+S_X+D_X`$ (resp. $`K_Y+S_Y+H_Y`$) of $`K_X+S_X`$ (resp. $`K_Y+S_Y`$) such that $`D_X`$ (resp. $`H_Y`$) does not contain the center of $`S_Y`$ (resp. $`S_X`$).
Proof. It immediately follows from lemma 2.2 and Surjectivity condition. Q.E.D.
From now on, we fix 1-complements $`K_X+S_X+D_X`$ and $`K_Y+S_Y+H_Y`$ of $`K_X+S_X`$ and $`K_Y+S_Y`$, respectively, which satisfy the condition in lemma 2.3. We will use the notation $`D_Y`$, $`D_W`$, $`H_X`$ and $`H_W`$ for $`\varphi _{}D_X`$, $`f_{}^1D_X`$, $`\varphi _{}^1H_Y`$ and $`g_{}^1H_Y`$, respectively. Note that $`g_{}^1D_Y=f_{}^1D_X`$ and $`g_{}^1H_Y=f_{}^1H_X`$.
Now, we define the following condition.
* (Total lc threshold condition)
The inequality $`\tau _X+\tau _Y>1`$ holds, where $`\tau _X=\mathrm{min}\{lct(S_X,C):C|K_{S_X}|\}`$ and $`\tau _Y=\mathrm{min}\{lct(S_Y,C):C|K_{S_Y}|\}`$.
###### Theorem 2.4
. Under Total lc threshold condition, birational map $`\varphi `$ is an isomorphism in codimension 1.
Proof. Suppose that $`\varphi `$ is not an isomorphism in codimension 1. We pay attention to the following eight equations;
$$K_W=f^{}(K_X)+a\stackrel{~}{S_Y}+a_iE_i,\stackrel{~}{S_X}=f^{}(S_X)b\stackrel{~}{S_Y}b_iE_i,$$
$$D_W=f^{}(D_X)c_iE_i,H_W=f^{}(H_X)e\stackrel{~}{S_Y}e_iE_i,$$
$$K_W=g^{}(K_Y)+n\stackrel{~}{S_X}+n_iE_i,\stackrel{~}{S_Y}=g^{}(S_Y)m\stackrel{~}{S_X}m_iE_i,$$
$$D_W=g^{}(D_Y)l\stackrel{~}{S_X}l_iE_i,H_W=g^{}(H_Y)r_iE_i.$$
First of all, $`b=m=1`$ since $`S_X`$ and $`S_Y`$ are reduced and irreducible. Since $`D_X`$ does not contain the center of $`\stackrel{~}{S_Y}`$ on $`X`$, we have $`\text{mult}_{\stackrel{~}{S_Y}}D_X=0`$. For the same reason, we also have $`\text{mult}_{\stackrel{~}{S_X}}H_Y=0`$.
By lemma 2.1, we get $`n+al=a(\stackrel{~}{S_X};Y,aS_Y+D_Y)=a(\stackrel{~}{S_X};X,D_X)=0`$ and $`a+ne=a(\stackrel{~}{S_Y};X,nS_X+H_X)=a(\stackrel{~}{S_Y};Y,H_Y)=0`$, and hence $`a+n=l=e`$. Since $`X`$ and $`Y`$ have at worst terminal singularities, $`a+n=l>0`$.
Since $`K_Y+S_Y+D_Y`$ is linearly trivial by lemma 2.1, $`(K_Y+S_Y+D_Y)|_{S_Y}=K_{S_Y}+D_Y|_{S_Y}`$ is linearly trivial. Thus, $`D_Y|_{S_Y}|K_{S_Y}|`$. Consequently, it follows from inversion of adjunction that $`K_X+S_X+\tau _XH_X`$ is lc. By the same reason, $`K_Y+S_Y+\tau _YD_Y`$ is lc.
Now, we have $`a(\stackrel{~}{S_Y};X,S_X+\tau _XH_X)=a1\tau _Xe1`$ and $`a(\stackrel{~}{S_X};Y,S_Y+\tau _YD_Y)=n1\tau _Yl1`$. But, $`l=a+n\tau _Xe+\tau _Yl=(\tau _X+\tau _Y)l>l`$ by Total lc threshold condition. Since $`l>0`$, this is impossible. Q.E.D.
## 3. Lc thresholds on nonsingular del Pezzo surfaces.
Nonsingular del Pezzo surfaces were quite fully studied long time ago. Furthermore, we understand singular del Pezzo surfaces very well. For example, \[BW79\], \[D80\], \[HW81\], and \[R94\] give us rich information. In this section, we will study some classical result on anticanonical linear systems on del Pezzo surfaces with a modern point of view. Strictly speaking, we investigate all possible singular effective anticanonical divisors on nonsingular del Pezzo surfaces. From this investigation, we can get some information on lc thresholds on nonsingular del Pezzo surfaces.
###### Lemma 3.1
. Let $`S`$ be a nonsingular del Pezzo surface of degree $`d4`$. Then, $`K_S+C`$ is lc in codimension 1 for any $`C|K_S|`$.
Proof. Let $`C=_{i=1}^nm_iC_i|K_S|`$, where $`C_i`$โs are distinct integral curves on $`S`$ and each $`m_i1`$.
First, we claim that if $`C`$ is not irreducible, then each $`C_i`$ is isomorphic to $`^1`$. Suppose that $`C_i`$ is not isomorphic to $`^1`$. Then, the self-intersection number of $`C_i`$ is greater than $`0`$. Because $`K_S`$ is ample, $`C`$ is connected. So, we have
$$2p_a(C_i)2=(C_i+K_S)C_i=(1m_i)C_i^2\underset{ij}{}m_jC_jC_i<0,$$
which is contradiction. Thus, each component is a nonsingular rational curve.
Since $`d=C(K_S)=_{i=1}^nm_iC_i(K_S)`$ and $`K_S`$ is ample, we have $`_{i=1}^nm_id`$.
If $`d=1`$, then $`n=1`$ and $`m_1=1`$.
If $`d=2`$, then we have three possibilities $`C_1`$, $`C_1+C_2`$, and $`2C_1`$. But the last case is absurd because the Fano index of $`S`$ is one.
Suppose $`d=3`$. Then possibilities are $`C_1`$, $`C_1+C_2`$, $`C_1+C_2+C_3`$, $`C_1+2C_2`$, $`2C_1`$, and $`3C_1`$. With the Fano index one, we can get rid of the last two cases. For the case of $`C=C_1+2C_2`$, we consider the equation $`3=K_S^2=(C_1+2C_2)^2=C_1^2+4C_1C_2+4C_2^2`$. Since $`(C_1+2C_2)(K_S)=3`$, we have $`C_1(K_S)=C_2(K_S)=1`$, and hence $`C_1^2=C_2^2=1`$. Thus, $`C_1C_2=2`$. But, this implies contradiction $`2=2p_a(C_1)2=C_1(2C_2)=4`$.
Finally, we suppose that $`d=4`$. We have eleven candidates, $`C_1`$, $`C_1+C_2`$, $`C_1+C_2+C_3`$, $`C_1+C_2+C_3+C_4`$, $`C_1+2C_2`$, $`C_1+3C_2`$, $`C_1+C_2+2C_3`$, $`2C_1+2C_2`$, $`2C_1`$, $`3C_1`$, and $`4C_1`$. Again, we can exclude the last four candidates by Fano index. For the case of $`C=C_1+3C_2`$, we consider the equation $`4=K_S^2=(C_1+3C_2)^2=C_1^2+6C_1C_2+9C_2^2`$. As before, we can see $`C_1^2=C_2^2=1`$. So, we have contradiction $`3C_1C_2=7`$. Letโs consider the case of $`C=C_1+2C_2`$. Since $`(C_1+2C_2)(K_S)=4`$, $`C_1^2=0`$ and $`C_2^2=1`$. Then, we have $`4=(C_1+2C_2)^2=4+4C_1C_2`$. But, $`2=p_a(C_1)2=2C_1C_2`$. Finally, we consider $`C=C_1+C_2+2C_3`$. Then, each $`C_i`$ is $`1`$-curve. Since $`4=(C_1+C_2+2C_3)^2=C_1^2+C_2^2+4C_3^2+2(C_1C_2+2C_1C_3+2C_2C_3)`$, we have $`5=C_1C_2+2C_1C_3+2C_2C_3`$. But, $`2=2p_a(C_1)2=(C_2+2C_3)C_1`$, and hence $`3=2C_2C_3`$. But this is impossible. Q.E.D.
Let $`S`$ be a nonsingular del Pezzo surface with Fano index $`r`$. Then, there is an ample integral divisor $`H`$, called fundamental class of $`S`$, such that $`K_S=rH`$. A curve $`C`$ on $`S`$ is called a line (resp. conic and cubic) if $`CH=1`$ (resp. 2 and 3).
###### Proposition 3.2
. Let $`S`$ be a nonsingular del Pezzo surface of degree $`d4`$ and let $`C|K_S|`$. Suppose that $`K_S+C`$ is worse than lc.
1. If $`d=1`$, then $`C`$ is a cuspidal rational curve.
2. If $`d=2`$, then $`C`$ is one of the following;
* $`C=C_1+C_2`$, where $`C_1`$ and $`C_2`$ are lines intersecting tangentially at one point with $`C_1C_2=2`$.
* $`C`$ is a cuspidal rational curve.
3. If $`d=3`$, then $`C`$ is one of the following;
* $`C=C_1+C_2+C_3`$, where $`C_1`$, $`C_2`$, and $`C_3`$ are lines intersecting at one point with $`C_1C_2=C_1C_3=C_2C_3=1`$.
* $`C=C_1+C_2`$, where $`C_1`$ and $`C_2`$ are a line and a conic intersecting tangentially at one point with $`C_1C_2=2`$.
* $`C`$ is a cuspidal rational curve.
4. If $`d=4`$, then $`C`$ is one of the following;
* $`C=C_1+C_2+C_3`$, where $`C_1`$ and $`C_2`$ are lines, and $`C_3`$ is a conic intersecting at one point with $`C_1C_2=C_1C_3=C_2C_3=1`$.
* $`C=C_1+C_2`$, where $`C_1`$ and $`C_2`$ are a line and a cubic intersecting tangentially at one point with $`C_1C_2=2`$.
* $`C=C_1+C_2`$, where $`C_1`$ and $`C_2`$ are conics intersecting tangentially at one point with $`C_1C_2=2`$.
* $`C`$ is a cuspidal rational curve.
Proof. Note that if $`C`$ is irreducible, then arithmetic genus $`p_a(C)`$ of $`C`$ is one. If $`C`$ is not irreducible, then each component is isomorphic to $`^1`$. And we can see the intersection numbers of two different components of $`C`$ are less than or equal to $`2`$.
We can easily check the cases of degree 1 and 2.
Now, we suppose that $`d=3`$. And we suppose that $`C=C_1+C_2+C_3`$. Since $`3=(C_1+C_2+C_3)(K_S)`$, each $`C_i`$ is a line. From $`2=22p_a(C_1)=C_1(C_2+C_3)`$ and $`3=C_1^2+C_2^2+C_3^2+2C_1(C_2+C_3)+2C_2C_3`$, we get $`C_2C_3=1`$. Similarly, we can get $`C_1C_2=C_1C_3=1`$. Since $`K_S+C`$ is not lc, these three lines intersect each other at one point.
If $`C`$ has less than 4 components, then we can show our statement with the same method as above.
The only remaining that we have to show is that $`K_S+C`$ is lc if $`d=4`$ and $`C=C_1+C_2+C_3+C_4`$. Since each $`C_i`$ is a line, we get
$$4=C^2=4+2(C_1C_2+C_1C_3+C_1C_4+C_2C_3+C_2C_4+C_3C_4).$$
And, we have $`C_1(C_2+C_3+C_4)=22p_a(C_1)=2`$, $`C_2(C_1+C_3+C_4)=2`$, $`C_3(C_1+C_2+C_4)=2`$, and $`C_4(C_1+C_2+C_3)=2`$. With these 5 equations and connectedness of $`C`$, we can see that $`C`$ is a normal crossing divisor. Thus, $`K_S+C`$ is lc. Q.E.D.
###### Corollary 3.3
. Let $`S`$ be a nonsingular del Pezzo surface of degree $`d4`$.
* If $`d=1`$, then $`K_S+\frac{5}{6}C`$ is lc for any $`C|K_S|`$.
* If $`d=2`$, then $`K_S+\frac{3}{4}C`$ is lc for any $`C|K_S|`$.
* If $`d=3`$ or $`4`$, then $`K_S+\frac{2}{3}C`$ is lc for any $`C|K_S|`$.
Proof. If $`C`$ is three nonsingular curves intersecting each other at single point transversally, then $`lct(X,C)=\frac{2}{3}`$. If $`C=C_1+C_2`$ where $`C_i`$โs are nonsingular curves intersecting tangentially with $`C_1C_2=2`$, then we have $`lct(X,C)=\frac{3}{4}`$. For the case of a cuspidal rational curve, $`lct(X,C)=\frac{5}{6}`$. Thus, our statement immediately follows from proposition 3.2. Q.E.D.
###### Remark 3.4
. Let $`S`$ be a nonsingular del Pezzo surface of degree $`d`$. Then, we have the maximum number $`r`$ such that $`K_S+rC`$ is lc for any $`C|K_S|`$. It is easy to show that such $`r`$ is $`\frac{1}{3}`$ (resp. $`\frac{1}{2}`$) if $`d=9`$, $`7`$, or $`d=8`$ and Fano index $`1`$ (resp. $`d=5`$, $`6`$ or $`d=8`$ and Fano index $`2`$).
###### Remark 3.5
. If $`S`$ be a nonsingular del Pezzo surface of degree 1, then $`|K_S|`$ has exactly one base point. We can easily check that any element in $`|K_S|`$ is nonsingular at this point.
## 4. Proof of main theorem.
In this section, we will use the same notations as in the second section.
Proof of main theorem. Since $`K_X`$ and $`K_Y`$ are ample over $`T`$, Surjectivity condition follows from \[P99a\]. By the same reason, birational map $`\varphi `$ cannot be an isomorphism in codimension 1 unless it is biregular (see \[C95\]).
It is enough to check 1-complement condition and Total lc threshold condition by theorem 2.4. Total lc threshold condition immediately follows from corollary 3.3. If $`2d4`$, then it is clear that 1-complement condition holds. In the case of degree 1, 1-complement condition can be derived from remark 3.5. Q.E.D.
###### Corollary 4.1
. Let $`X`$ be a del Pezzo fibration over $`T`$ of degree $`4`$ with nonsingular scheme-theoretic special fiber. Then, the birational automorphism group of $`X/T`$ is the same as the biregular automorphism group of $`X/T`$.
Proof. Note that we always assume that birational map is identical on generic fiber. The statement immediately follows from the main theorem. Q.E.D.
As an easy application of theorem 2.4, we can get the following well-known example.
###### Example 4.2
. Let $`Z`$ be a $`^1`$-bundle over $`T`$. Suppose that the special fiber $`S_Z`$ has no $`k`$-rational point. In particular, the residue field $`k`$ is not algebraically closed. Then, there is no birational transform of $`Z`$ into another $`^1`$-bundle over $`T`$, because the special fiber $`S_Z`$ satisfies Total lc condition. If $`S_Z`$ has a $`k`$-rational point, then Total lc condition fails. Moreover, it can be birationally transformed into another $`^1`$-bundle over $`T`$ by elementary transformations.
## 5. Examples.
If we allow some mild singularities on del Pezzo fibrations, then we can find birational maps of del Pezzo fibrations over $`T`$ with reduced and irreducible special fiber. In each example, note that one of two del Pezzo fibrations has terminal singularities. Before taking examples, we will state easy lemma which helps us to understand our examples.
###### Lemma 5.1
. Let $`f(x_1,\mathrm{},x_m,y_1,\mathrm{},y_n)=g(x_1,\mathrm{},x_m)+h(y_1,\mathrm{},y_n)`$ be a holomorphic function near $`0^{m+n}`$ and let $`D_f=(f=0)`$ on $`^{m+n}`$, $`D_g=(g=0)`$ on $`^m`$, and $`D_h=(h=0)`$ on $`^n`$. Then
$$lct(^{m+n},D_f)=min\{1,lct(^m,D_g)+lct(^n,D_h)\}.$$
Proof. See \[Ku99\]. Q.E.D.
###### Example 5.2
. This example comes from \[C96\] and \[K97\]. Let $`X`$ and $`Y`$ be subschemes of $`_๐ช^3`$ defined by equations $`x^3+y^3+z^2w+w^3=0`$ and $`x^3+y^3+z^2w+t^{6n}w^3=0`$, respectively, where $`n`$ is a positive integer. Note that $`X`$ is nonsingular and $`Y`$ has single singular point of type $`cD_4`$ at $`p=[0,0,0,1]`$. Then, we have a birational map $`\rho _n`$ of $`X`$ into $`Y`$ defined by $`\rho _n([x,y,z,w])=[t^{2n}x,t^{2n}y,t^{3n}z,w]`$. Now, we consider a divisor $`D|K_X|`$ defined by $`z=w`$. This divisor $`D`$ has a sort of good divisor because $`K_X+S_X+D`$ is lc and $`D|_{S_X}`$ is a nonsingular elliptic curve on $`S_X`$. But, the birational transform $`\rho _n(D)`$ of $`D`$ by $`\rho _n`$ is worse than before. First, $`\rho _n(D)|_{S_Y}`$ is three lines intersecting each other at single point (Eckardt point) transversally on $`S_Y`$. Furthermore, we can see that $`\rho _n(D)`$ on $`Y`$ is defined by $`z=t^{3n}w`$. And, the log canonical threshold of $`\rho _n(D)`$ is $`\frac{4n+1}{6n}`$ by lemma 5.1, and hence $`K_Y+\rho _n(D)`$ cannot be lc.
###### Example 5.3
. Let $`Z`$ and $`W`$ be subschemes of $`_๐ช^3`$ defined by equations $`x^3+y^2z+z^2w+t^{12m}w^3=0`$ and $`x^3+y^2z+z^2w+w^3=0`$, respectively, where $`m`$ is a positive integer. Here, $`Z`$ has a singular point of type $`cE_6`$ at $`[0,0,0,1]`$ and $`W`$ is nonsingular. We have a birational map $`\psi _m`$ of $`Z`$ into $`W`$ defined by $`\psi _m([x,y,z,w])=[t^{2m}x,t^{3m}y,z,t^{6m}w]`$. Again, we consider a divisor $`H|K_Z|`$ defined by $`z=w`$. For the same reason as above, $`H`$ is a good divisor. But, the log canonical threshold of the birational transform $`\psi _m(H)`$ of $`H`$ by $`\psi _m`$ is $`\frac{5m+1}{6m}`$. Therefore, if $`m>1`$, then $`K_W+\psi _m(H)`$ cannot be lc. Note that $`\psi _m(H)|_{S_W}`$ is a cuspidal rational curve on $`S_W`$.
###### Example 5.4
. We consider birational map $`\phi _m=\psi _m^1`$ from $`W`$ to $`Z`$, where $`W`$, $`Z`$ and $`\psi _m`$ are the same as in example 5.3. And, we pay attention to nonsingular divisor $`L|K_W|`$ on $`W`$ defined by $`x=0`$. Then, we can see that $`\phi _m(L)|_{S_Z}`$ consists of a line and a conic intersecting tangentially each other. And, the log canonical threshold of $`\phi _m(L)`$ is $`\frac{9m+1}{12m}`$, and hence $`K_Z+\phi _m(L)`$ is not lc.
The following two examples were constructed by M. Grinenko. One is a del Pezzo fibration of degree 2, and the other is of degree 1.
###### Example 5.5
. Let $`X`$ and $`Y`$ be subschemes of $`_๐ช^3(1,1,1,2)`$ defined by equations $`w^2+x^3y+x^2yz+z^4+t^4xy^3=0`$ and $`w^2+x^3y+xy^3+x^2yz+t^2z^4=0`$, respectively, where $`w`$ is of weight 2. The map $`\varphi :XY`$ defined by $`\varphi (x,y,z,w)=(x,t^2y,z,tw)`$ is birational. Subscheme $`X`$ has a singular point of type $`cD_5`$ at $`[0,1,0,0]`$. Subscheme $`Y`$ has two singular points of types $`cD_6`$ and $`cA_1`$ at $`[0,0,1,0]`$ and $`[1,0,1,0]`$, respectively.
###### Example 5.6
. Let $`Z`$ be a subscheme of $`_๐ช^3(1,1,2,3)`$ defined by equation $`w^2+z^3+xy^5+t^4x^5y=0`$, where $`z`$ and $`w`$ are of weight 2 and 3, respectively. Then, we have a birational automorphism $`\alpha `$ of $`Z`$ defined by $`\alpha (x,y,z,w)=(y,t^2x,t^2z,t^3w)`$. Note that $`Z`$ has a singular point of type $`cE_8`$ at $`[1,0,0,0]`$.
E-mail address : jhpark@chow.mat.jhu.edu |
no-problem/9912/gr-qc9912029.html | ar5iv | text | # An Efficient Matched Filtering Algorithm for the Detection of Continuous Gravitational Wave Signals
## Introduction
Neutron stars are perhaps the most promising class of gravitational wave (GW) sources, and searches for such GW signals is particularly suited to the characteristics of the GEO600 detector (see Schutz โGetting Ready for GEO600 Dataโ grโqc9910033). However, the instantaneous GW frequency of such a source will evolve due to both intrinsic spindown effects and Doppler modulations induced by the motion of the Earth. Thus because of the large parameter space of likely signals, directly implemented optimal matched filtering is not computationally feasible.
In response to this problem, Schutz and Papa have developed an alternative strategy: the HoughโHierarchical search algorithm (see Schutz and Papa โEndโtoโEnd Algorithm for Hierarchical Area Searches for LongโDuration GW Sources for GEO600โ grโqc9905018). In order to carry out a blind search over a range of intrinsic GW frequencies, the following three stages must be calculated for each point in the parameter space of sky positions and intrinsic spindown parameters:
Stage I: Calculate demodulated Fourier transforms (DeFTs) on an intermediate time baseline (of order 1 day) by combining FFTs of short durations (approximately 30 minutes) of the time series data. In this context demodulated means that if there is a source at the sky position in question, and with the intrinsic spindown parameters in question, then all spindown and modulatory effects will have been correctly removed from the DeFTs: all signal power will be confined to one and the same frequency bin in each DeFT. This frequency is the intrinsic frequency of the source measured at the start of the observing time. It is expected that the total observing time will be of order 4 months, and thus roughly 120 of these DeFTs will be calculated for each point in parameter space.
Stage II: In general source parameters will not coincide exactly with those searched for, and residual frequency evolution and modulation will remain in the DeFTs. Thus, the peak in power associated with a given source may change frequency bins from DeFT to DeFT. Because of the relatively small time baseline of these DeFTs and the resultant poor signalโtoโnoise of any expected continuous GW signal, this evolution will be not directly apparent in the DeFTs, but can be recovered statistically using the Hough Transform algorithm.
Stage III: Calculate DeFTs for candidate sources with the full frequency resolution of the total observation time, by combining the intermediate baseline DeFTs produced in stage I.
Thus, during stage II, regions of the parameter space in which it is statistically unlikely that there are GW sources are eliminated from the search. Thereby, in stage III, the most computationally expensive part of the algorithm, the long time baseline DeFTs are calculated over only a very small fraction of parameter space and over a very small range of frequencies.
In this paper we outline the methods used in the first and third stages of this algorithm in constructing a longer time baseline DeFT from a number of shorter time baseline FFTs or DeFTs.
## The Method
Consider a time series $`x_a`$ of total duration $`T`$, which has been divided into $`M`$ short time series, each having $`N`$ data points. Then the DeFT for a signal with a time independent amplitude and phase $`2\pi \mathrm{\Phi }_{ab}(\stackrel{}{\lambda })`$ is
$$\widehat{x}_b(\stackrel{}{\lambda })=\underset{a=0}{\overset{NM1}{}}x_ae^{2\pi i\mathrm{\Phi }_{ab}(\stackrel{}{\lambda })}=\underset{\alpha =0}{\overset{M1}{}}\underset{j=0}{\overset{N1}{}}x_{\alpha j}e^{2\pi i\mathrm{\Phi }_{\alpha jb}(\stackrel{}{\lambda })},$$
(1)
where the time indices are related by $`N\alpha +j=a`$, and $`b`$ is a long time baseline frequency index. In the following discussion Latin indices $`j,k,l`$ always sum over $`N`$, while Greek indices sum over $`M`$. Note that $`\mathrm{\Phi }_{ab}(\stackrel{}{\lambda })`$ is dependent on a vector $`\stackrel{}{\lambda }`$ of parameters which characterize the signal one is searching for. In searching for GW signals from neutron stars these will include intrinsic spindown parameters, and the position of the source in the sky. If $`\stackrel{~}{x}_{\alpha k}`$ is the matrix formed by carrying out Fourier transforms along the short time index $`j`$ in $`x_{\alpha j}`$, then equation 1 can be written as
$$\widehat{x}_b(\stackrel{}{\lambda })=\underset{\alpha =0}{\overset{M1}{}}\underset{k=0}{\overset{N1}{}}\stackrel{~}{x}_{\alpha k}\left[\frac{1}{N}\underset{j=0}{\overset{N1}{}}e^{2\pi i\left(\mathrm{\Phi }_{\alpha jb}(\stackrel{}{\lambda })\frac{jk}{N}\right)}\right]=\underset{\alpha =0}{\overset{M1}{}}Q_\alpha (b,\stackrel{}{\lambda })\underset{k=0}{\overset{N1}{}}\stackrel{~}{x}_{\alpha k}P_{\alpha k}(b,\stackrel{}{\lambda }),$$
(2)
where the product $`Q_\alpha (b,\stackrel{}{\lambda })P_{\alpha k}(b,\stackrel{}{\lambda })`$ is defined by the terms in square brackets, and $`Q_\alpha (b,\stackrel{}{\lambda })`$ contains all parts of the square brackets independent of the short time index $`j`$ and short frequency index $`k`$.
In equation 2 we have effectively reโwritten equation 1, a long time baseline DeFT in the time domain, as a sum ($`\alpha `$ index) of short time baseline DeFTs in the frequency domain ($`k`$ index), where $`Q_\alpha (b,\stackrel{}{\lambda })P_{\alpha k}(b,\stackrel{}{\lambda })`$ are these frequency domain filters. In the presence of stationary noise with a flat spectrum, equation 2 is the optimal detector. However, through applying various approximations, the detector can be made โacceptably subโoptimalโ, in the sense that only a small fraction of power from a signal is lost in comparison to the optimal case, while achieving vast savings in computational cost.
To illustrate these mathematical approximations it is instructive to discuss equation 2 for a specific case of $`\mathrm{\Phi }_{\alpha jb}(\stackrel{}{\lambda })`$: a linearly varying frequency model, i.e. in the continuum limit $`\mathrm{\Phi }(t,f_0,\dot{f}_0)=f_0t+\dot{f}_0t^2`$, where $`f_0`$ and $`\dot{f}_0`$ are the intrinsic frequency and spindown of the source respectively, and $`t`$ is time. In the case of an actual search for GW signals from pulsars, $`\mathrm{\Phi }_{\alpha jb}(\stackrel{}{\lambda })`$ will not be so simple. However, this model is sufficiently complex to effectively demonstrate all of the approximations to be discussed here.
In discrete form, $`\mathrm{\Phi }(t,f_0,\dot{f}_0)`$ can be written as $`\mathrm{\Phi }_{\alpha j\beta l}(\gamma )=(\beta +Ml)(N\alpha +j)/NM+\gamma (N\alpha +j)^2/N^2M^2`$, where the long time baseline frequency index $`b=\beta +Ml`$. The chosen discretization of the spindown parameter $`\dot{f}_0\gamma /T^2`$ is not practically appropriate. However, in an actual search, a grid of points in spindown parameter space will be chosen to ensure an acceptable loss of power from unresolved signals. Thus, in the following discussion, only searches for resolved $`\dot{f}_0`$ parameters will be considered.
Approximation 1: By Taylor expanding the model phase function $`\mathrm{\Phi }(t)`$ about the middle of each short duration time series (i.e. about $`j=N/2`$) and discarding terms of order $`(j/N)^2t^2`$ and higher, in the limit $`N\mathrm{}`$ the function $`P_{\alpha k}(\beta ,l,\gamma )`$ is Re $`P_{\alpha k}(\beta ,l,\gamma )=\text{sinc }x`$ and Im $`P_{\alpha k}(\beta ,l,\gamma )=(1\mathrm{cos}x)/x`$. In the phase model considered here $`x=2\pi \left(\beta /M+l+(2\alpha +1)\gamma /M^2k\right)`$ and $`Q_\alpha (\beta ,l,\gamma )=\mathrm{exp}\left\{2\pi i(\alpha \beta /M+\alpha \gamma ^2/M^2)\right\}`$.
Approximation 2: Consider the case where the short time baseline is chosen such that the instantaneous model frequency $`f(t)=\dot{\mathrm{\Phi }}(t,f_0,\dot{f}_0,\ddot{f}_0,\mathrm{})`$ does not move by more than one short time baseline frequency bin over the duration of a short time baseline data set, i.e. in the model discussed here $`|\dot{f}_0|T/M<M/T`$. Then for a given $`\alpha `$, the function $`P_{\alpha k}(b,\stackrel{}{\lambda })`$ will be peaked in power about the model frequency averaged over the duration of time associated with the $`\alpha `$th short data set, i.e. about $`x=0`$ (the first three terms in the above definition of $`x`$ are the index of this average model frequency). Thus only a few terms around this model frequency will contribute significantly to the summation over $`k`$ in equation 2.
Approximation 3: The semiโperiodic nature of $`P_{\alpha k}(b,\stackrel{}{\lambda })`$ means that this function can be efficiently evaluated from a lookโup table of values containing the periodic parts, and three further operations: to calculate one instance of $`P_{\alpha k}(b,\stackrel{}{\lambda })`$ will require only 8 floating point operations.
Approximation 4: If one approximates the model frequency parameter $`\beta `$ in the calculation of $`P_{\alpha k}(\beta ,l,\gamma )`$ as a fixed value, for example with $`\beta =\beta _0`$, equation 2 can be calculated as an FFT, i.e.
$$\widehat{x}_{\beta l}(\gamma )=\underset{\alpha =0}{\overset{M1}{}}\left[Q_\alpha ^{^{}}(\gamma )\underset{k}{\overset{n_{term}}{}}\stackrel{~}{x}_{\alpha k}P_{\alpha k}(\beta _0,l,\gamma )\right]e^{2\pi i\frac{\alpha \beta }{M}},$$
(3)
where $`n_{term}`$ relates to approximation 2, $`P_{\alpha k}(\beta ,l,\gamma )`$ is defined above, and for the phase model discussed here $`Q_\alpha ^{^{}}(\gamma )=\mathrm{exp}\left\{2\pi i\left(\alpha ^2\gamma /M^2\gamma /4M^2\right)\right\}`$. Thus for values of $`\beta `$ sufficiently near to $`\beta _0`$, the loss in power due to this approximation will be small. To obtain $`\widehat{x}_{\beta l}(\gamma )`$ for other values of $`\beta `$, the calculation must be repeated using another $`\beta _0`$.
## Results and Discussion
Numerical tests have shown that if one chooses $`10\%`$ as an acceptable loss in power in comparison to the optimal case, then $`N_{FFT}=8`$ and $`n_{term}=16`$ are the preferred parameter combination, if the short time baseline $`T/M`$ is chosen such that in the phase model discussed here $`|\dot{f}_0|T/M<M/T`$. If one decides that only a $`5\%`$ loss in optimal power is acceptable, then this can be achieved with the same parameters, but choosing $`T/M`$ such that $`|\dot{f}_0|T/M<M/2T`$.
The computational cost of calculating one DeFT in stage I in floating point operations is
$$C_{DeFT}5.3\times 10^{10}\left(\frac{B}{300\text{ Hz}}\right)\left(\frac{T}{1\text{ day}}\right)\left(\frac{N_{FFT}}{8}\right)\left(\frac{n_{term.}}{16}\right),$$
(4)
where $`B`$ is the bandwidth of the search. This is comparable to the computational cost of the corresponding steps in the Hierarchical Stack / Slide algorithm of Brady and Creighton (โSearching for Periodic Sources with LIGO: Hierarchical Searchesโ grโqc9812014). The HoughโHierarchical search algorithm also has a number of computational advantages. To calculate a given bandwidth of a DeFT requires only the FFT data from this bandwidth and an additional small overlap. Thus the algorithm can be easily parallelized by distributing data and work by bandwidth; and no communication between processors is required. Also, the complete three stage algorithm can be arranged in such a way that once a bandwidth of FFT data is read from disk by a processor, all computation required on this data can be carried out while this data is held in memory, thus time spent reading data from disk is a negligible fraction of the total computational time: each processor will need to read roughly 40 Mb from disk once every two weeks. Furthermore, little additional memory is required as workspace for stages I and III: less than 100 kb.
The GEO600 data analysis team are currently working on coding this algorithm in a computationally optimal manner, as well as integrating this with the Hough Transform part of the procedure. |
no-problem/9912/astro-ph9912395.html | ar5iv | text | # A Simple Method for Computing the Non-Linear Mass Correlation Function with Implications for Stable Clustering
## Abstract
We propose a simple and accurate method for computing analytically the mass correlation function for cold dark matter and scale-free models that fits N-body simulations over a range that extends from the linear to the strongly non-linear regime. The method, based on the dynamical evolution of the pair conservation equation, relies on a universal relation between the pair-wise velocity and the smoothed correlation function valid for high and low density models, as derived empirically from N-body simulations. An intriguing alternative relation, based on the stable-clustering hypothesis, predicts a power-law behavior of the mass correlation function that disagrees with N-body simulations but conforms well to the observed galaxy correlation function if negligible bias is assumed. The method is a useful tool for rapidly exploring a wide span of models and, at the same time, raises new questions about large scale structure formation.
<sup>2</sup><sup>2</sup>affiliationtext: Dรฉpartement de Physique Thรฉorique, Universitรฉ de Genรจve, CH-1211 Genรจve, Switzerland<sup>3</sup><sup>3</sup>affiliationtext: Copernicus Astronomical Center, 00-716 Warsaw, Poland
Understanding the origin and evolution of the clustering pattern of galaxies is one of the most important goals of cosmology. Until now, this problem has been investigated using a four-fold path: (1) perturbation theory (for a review of recent advances, see Juszkiewicz & Bouchet (1996) and references therein); (2) a kinetic description, adapted from the BBGKY hierarchy, used in plasma physics (Peebles (1980), ยงIV); (3) N-body simulations (e.g., Jenkins et al. (1998), hereafter VIRGO); (4) and semi-analytic fits to N-body results, based on the so-called universal scaling hypothesis (see Hamilton et al. (1991); Jain et al. (1995); Peacock & Dodds (1996); Ma (1998)). The advantages and limitations of these methods are often complementary. For example, applying perturbation theory often leads to analytic results for a wide class of models while the N-body simulations allow a study of only one model at a time. On the other hand, perturbation theory works only in the weakly non-linear regime while N-body experiments describe the fully non-linear dynamics, albeit over a limited dynamical range. The subject of the present study is an analytic ansatz for the evolution of the two-point correlation function of density fluctuations spanning the linear and non-linear regime, which builds on all four methods described above.
Our approach is based on the pair conservation equation, which relates the mean (pair-weighted) relative velocity of a pair of particles to the time evolution of the correlation function in a self-gravitating gas:
$$\frac{a}{3[1+\xi (x,a)]}\frac{\overline{\xi }(x,a)}{a}=\frac{v_{12}(x,a)}{Hr},$$
(1)
where $`a(t)`$ is the expansion factor with $`a=1`$ at present, $`r=ax`$ is the proper separation and $`H(a)`$ is the Hubble parameter (see Davis & Peebles (1977); Peebles (1980)). Here $`\overline{\xi }(x,a)`$ represents the two-point correlation function averaged over a ball of comoving radius $`x`$:
$$\overline{\xi }(x,a)=\frac{3}{x^3}_0^x\xi (y,a)y^2๐y.$$
(2)
The approximate solution of (1) is known in the large separation limit, where $`|\xi |1`$ (linear regime); the stable clustering hypothesis is often invoked to describe the small separation limit, where $`\xi 1`$ (non-linear regime). Hence, equation (1) is โa guide to speculation on the behavior of the correlation functionโ (Peebles (1980), p.268) since an assumed $`v_{12}`$ implies a function $`\xi `$ that should agree with the weak and strong field limits, and interpolates between these limits in a reasonable way.
An approximate universal relation between the pair-wise velocity and the smoothed correlation function has been conjectured by Hamilton et al. (1991) on the basis of N-body simulation results and further explored in Nityananda & Padmanabhan (1994) and Padmanabhan & Engineer (1998). In the past, the relation has been used in attempts to derive a general functional that converts directly from a linear to a non-linear mass correlation. In this paper, we present a simple extension of the relation that applies to both high- and low-density models, but take a different approach to obtaining the non-linear correlation function. Namely, we use the universal relation to close (1); we then evolve the resulting partial differential equation to compute the non-linear correlation function. This turns out to be a fast and surprisingly accurate method that matches N-body results for a wide variety of cold dark matter (CDM) models. As a stand-alone computer program, the algorithm can be adopted on a programmable calculator; or, it can easily be incorporated in more sophisticated programs that predict other cosmological properties, such as CMBFAST (Seljak & Zaldarriaga (1996)).
Figure 1 clearly illustrates the nearly model-independent relationship between the pair-wise velocity and the smoothed correlation function, observed in N-body simulations for a wide range of perturbation spectra. We define this relation as
$$V[f\overline{\xi }]\frac{v_{12}}{Hr}.$$
(3)
Compared to Hamilton et al. (1991), a novel feature is plotting the relation in terms of $`f(\mathrm{\Omega })\overline{\xi }`$, where $`fd\mathrm{ln}D/d\mathrm{ln}a`$ is the standard linear density growth factor, rather than $`\overline{\xi }`$ alone โ a difference that is essential for extending the relation to low-density models. The evolved, non-linear clustering of scale-free spectra with $`n=1,2`$ as well as the CDM family of models produces a very similar relation between $`v_{12}/Hr`$ and $`\overline{\xi }`$. An excellent fit to the functional relation $`V[f\overline{\xi }(x,a)]`$ in Figure 1, based on the $`n=1`$ curve, is given by
$$V[x]=\{\begin{array}{cc}\frac{2}{3}x\hfill & x<0.15\hfill \\ 0.7x\mathrm{exp}(0.31x^{0.61})\hfill & 0.15x<20\hfill \\ 3.3x^{0.17}\hfill & x20\hfill \end{array}$$
(4)
valid for $`x\stackrel{<}{}10^3`$. In this paper, we use this fitting formula, designed for the $`n=1`$ curve, as the expression for $`V[x]`$ in (1) to be applied to all models. We find this to be sufficient to reproduce N-body results to within 10% accuracy over the range of models and scales shown in Figure 2, which extends deep into the non-linear regime. To push to lower density models and improve the accuracy further, it would be simple to modify the algorithm for $`V[x]`$ to include the $`\mathrm{\Omega }`$-dependent rise near $`x\stackrel{>}{}20`$.
Starting with the linear correlation function, and armed only with information about the background cosmological evolution, we propose to dynamically obtain the non-linear $`\xi `$ as a function of separation and time. Here then is our idealized procedure in three steps.
1. Reformulate: We first rewrite the partial differential equation as
$$\frac{\mathrm{ln}\overline{\xi }}{\mathrm{ln}a}=\mathrm{\hspace{0.33em}3}\frac{(1+\xi )}{\overline{\xi }}V[f\overline{\xi }]$$
(5)
where $`V[x]`$ is given in equation (4).
2. Initialize: The initial conditions are set by the linear correlation function at a red shift $`z=1+1/a_i`$ such that $`\xi (x,a_i)`$ is less than unity for all $`x`$ of interest. Our procedure assumes that only the amplitude, and not the shape of the linear correlation function has changed over this interval, as occurs for cold dark matter models with a primordial spectrum of adiabatic density perturbations. Hence, this procedure will not apply to cosmological models with a late-time decaying neutrino, but will apply to hot dark matter models wherein the shape of the linear power spectrum is set by red shift $`z100`$.
3. Evolve: We numerically solve the partial differential equation and evolve $`\overline{\xi }`$. At each step in the evolution, we use $`\xi =\overline{\xi }\times (1\overline{\gamma }/3)`$ with $`\overline{\gamma }d\mathrm{ln}\overline{\xi }/d\mathrm{ln}x`$ to determine the correlation function. The value of $`f`$ is updated at each step in $`a`$ as appropriate for the cosmology.
The remarkable results are shown in Figure 2. Here we see that our simple procedure gives excellent agreement with N-body simulations. Based on the span of behavior in the cosmic time evolution and the shape of correlation function, we expect this procedure should be valid for a wide range of cosmological models, including quintessence (Caldwell, Dave, & Steinhardt (1998)) and models with tilted spectra.
Figure 2 demonstrates that we have obtained a simple and powerful new tool for rapidly and accurately obtaining the non-linear power spectrum for a wide range of models. However, the physical origin of the nearly model-independent relation $`V[f\overline{\xi }]`$ is not understood in detail. In the linear regime, perturbation theory predicts $`v_{12}/Hr=(2/3)f\overline{\xi }`$. In the non-linear regime, Padmanabhan et al. (1996) have suggested that insight may be obtained by comparison to the case of the gravitational collapse of a spherical top hat mass distribution. Using their solution (eqs. (16-19) in their paper), we find that $`v_{12}/Hr`$ is linearly proportional to $`f\overline{\xi }`$ times a slowly decreasing function of $`\overline{\xi }`$ for a surprisingly wide range of $`\overline{\xi }1`$, including $`\overline{\xi }=10`$, the turnover point in Figure 1. In particular, for $`\overline{\xi }=10`$, $`v_{12}/Hr=1.77f(\mathrm{\Omega })`$, which is similar to $`v_{12}/Hr2f(\mathrm{\Omega })`$ in Figure 1.
In the strongly non-linear regime, $`\overline{\xi }10`$, Figure 1 shows a visible difference in the shape of $`V[f\overline{\xi }]`$ between the high density and low density models. This may be due to the suppression of linear growth, which occurs at late times in low density models and leads to the enhanced clustering on small scales relative to large scales. However, this has a negligible effect on the computed non-linear correlation function. For example, using the curve for $`\mathrm{\Lambda }`$CDM shown in Figure 1 as the basis of $`V`$ in our procedure, we find the amplitude of the non-linear correlation function differs by only $`10\%`$ at $`r0.1`$ Mpc/h. For the models shown, this accuracy is comparable to what is obtained by Hamilton et al. (1991) and Peacock & Dodds (1996). The advantage here is that our method can be immediately applied to new types of CDM models (e.g., quintessence cosmologies) without having to run new N-body simulations to fix fitting parameters.
An important issue raised by our ansatz is the validity of the stable clustering hypothesis. The stable clustering regime corresponds to the limit where particle pairs detach from the Hubble flow and $`v_{12}/Hr1`$. Figure 1 shows that $`v_{12}/Hr`$ first overshoots unity by a factor of two and then rebounds towards unity. However, it is not clear whether it converges to unity at $`\overline{\xi }1000`$ or possibly oscillates if the simulations are extended to higher values of $`\overline{\xi }`$.
It is interesting to compare the predictions of our ansatz when the relation between $`v_{12}/Hr`$ and $`\overline{\xi }`$ is modified to enforce more rapid convergence to stable clustering. A ready example is an alternative ansatz based on the pair conservation equation recently proposed by Juszkiewicz et al. (1999) (hereafter JSD). Their ansatz for $`v_{12}`$, based on an interpolation between the behavior predicted by perturbation theory in the weakly non-linear regime and stable clustering in the strongly non-linear regime, is given by
$$v_{12}(x,a)=\frac{2}{3}Hrf\overline{\overline{\xi }}(x,a)\left[1+\alpha \overline{\overline{\xi }}(x,a)\right]$$
(6)
where $`\overline{\overline{\xi }}\equiv \overline{\xi }/(1+\xi )`$ and $`\alpha `$ is a function which controls the strength of the non-linear feedback. Here we use $`\alpha =1.8-1.1\gamma `$, based on perturbation theory (see Scoccimarro & Frieman (1996)), where $`\gamma `$ is the slope of the correlation function at $`\xi =1`$. The key point, as shown in Figure 3, is that the pair-wise velocity rapidly approaches the stable clustering limit by $`\overline{\xi }\approx 10`$, and remains there to within $`20\%`$ on smaller scales in the more strongly non-linear regime. This means particle pairs separated by $`\lesssim 1`$ Mpc/h have the rough behavior of virialized objects, such as clusters and galaxies.
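Transcribed into code (in the positive, infall-velocity convention for $`v_{12}`$ used throughout this paper), the JSD closure reads as follows; this short function is an illustration added here, not code from JSD:

```python
def v12_over_Hr_jsd(xi, xibar, f, gamma):
    """JSD ansatz, Eq. (6), with xibb = xibar/(1 + xi) and
    alpha = 1.8 - 1.1*gamma, gamma being the slope of the
    correlation function at xi = 1."""
    xibb = xibar/(1.0 + xi)
    alpha = 1.8 - 1.1*gamma
    return (2.0/3.0)*f*xibb*(1.0 + alpha*xibb)
```

In the weakly non-linear limit ($`\xi \ll 1`$) this reduces to the perturbative result $`v_{12}/Hr=(2/3)f\overline{\xi }`$ quoted above.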
The correlation function $`\xi `$ obtained by closing the pair conservation equation with (6), as shown in Figure 3, displays a power-law behavior in the non-linear regime with index $`\approx -1.7`$, in disagreement with N-body simulations of CDM but curiously similar to the galaxy correlation function observed in the APM survey (Maddox et al. (1996)). This result is not unique to the JSD ansatz; substituting any shape similar to that shown in the top panel of Figure 3 for $`V[f\overline{\xi }]`$ into our ansatz would produce a similar effect on the mass correlation function. In the past, the disagreement between the dark matter power spectrum observed in simulations, which shows no evidence of power-law behavior, and the simple power-law observed in the galaxy correlation function has been attributed to a scale-dependent bias. The conventional picture is that the bias function, $`b(r)`$, is just such that the non-power-law behavior of the dark matter is translated into a power-law behavior of the galaxy correlation function, $`\xi _g(r)=\xi (r)b^2(r)`$.
Our present findings concerning the dependence of the mass correlation function on the model-independent $`v_{12}/Hr`$ vs. $`f\overline{\xi }`$ relation suggest a radical possibility. Perhaps the bias factor is completely negligible and, instead, CDM models are missing some important physical feature (e.g. mechanics of galaxy formation, or some property of the dark matter) which causes rapid convergence to stable clustering in the non-linear regime ($`v_{12}/Hr\to 1`$ without substantial overshoot) and to power-law behavior of the correlation function. The ansatz illustrated in Figure 3, which enforces stable clustering by fiat, may be implicitly describing a modified CDM model of galaxy formation which incorporates the new physical feature. The difference between N-body simulations and observations, whether due to bias in the conventional picture or to something more radical, such as a modification to CDM, can perhaps be determined empirically by studying red shift distortion on the scales where N-body simulation suggests overshoot in $`v_{12}/Hr`$ and the ansatz of Figure 3 does not.
In sum, our studies have produced a simple recipe for computing the non-linear power spectrum for a wide range of models. (Upon publication of the paper, we will make a program available at feynman.princeton.edu/~steinh.) A key feature of the method is the universal function relating the pair velocity to the mass correlation function, which does not converge rapidly to the stable clustering limit, but, rather, overshoots by a factor of two. This feature is responsible for the fact that the mass correlation function does not approach a power-law. Our studies have also raised several interesting issues in structure formation: why is $`v_{12}/Hr`$ vs. $`f\overline{\xi }`$ so similar for a wide range of models? can the universal relation be derived from theory? for what range of models is the relation model-independent? how does the universal relation ultimately approach the stable clustering limit at small scales (if it does at all)? and, does the success of a universal relation based on the stable clustering hypothesis, as in Figure 3, suggest a viable, alternative explanation for the power-law behavior of the galaxy correlation function?
###### Acknowledgements.
We would like to thank Dick Bond, Marc Davis, Josh Frieman, Andrew Hamilton, Chung-Pei Ma, Jim Peebles, David Spergel, and Roman Scoccimarro for useful conversations. We also thank Bhuvnesh Jain and Volker Springel and the VIRGO collaboration for providing N-body simulation results. This work was carried out, in part, at the Isaac Newton Institute for Mathematical Sciences during their program on Structure Formation in the Universe. We would like to thank the organizers, N. Turok and V. Rubakov, and the staff of the Institute for their support and kind hospitality. The work of RRC and PJS was supported by the US Department of Energy grant DE-FG02-91ER40671. The work of RJ was supported by grants from the Polish Government (KBN grants No. 2.P03D.008.13 and 2.P03D.004.13), the Tomalla Foundation of Switzerland, and by the Poland-US M. Skłodowska-Curie Fund. FRB and RJ were supported by the Franco-Polish collaboration grant (Jumelage).
# Spinodal effect in the natural inflation model
## I Introduction
The concept of inflationary cosmology is very attractive for describing the early stage of the universe. It not only solves the horizon, flatness, and monopole problems that the standard big bang cosmology has to face, but also provides the large-scale density perturbations required for structure formation. Inflation takes place while a scalar field $`\varphi `$ called the inflaton slowly rolls down toward the minimum of its potential $`V(\varphi )`$. The inflationary period ends when the inflaton field begins to oscillate around the minimum of its potential. Elementary particles can be efficiently produced by a nonperturbative process called preheating during the oscillating stage of the inflaton. These particles then decay to other lighter particles and thermalize the universe.
So far, various models of inflation have been proposed; they can be classified in the following way. The first class is the "large field" model, in which the initial value of the inflaton is large and the field rolls down toward the potential minimum. Chaotic inflation is a representative model of this class. The second class is the "small field" model, in which the inflaton is small initially and slowly evolves toward the potential minimum at larger values of $`\varphi `$. New inflation and natural inflation are good examples of this class. In the first class, the second derivative of the potential $`V^{(2)}(\varphi )`$ usually takes positive values, while in the second class $`V^{(2)}(\varphi )`$ can change sign during inflation. The third class is the hybrid inflation model, in which the potential energy is still large at the minimum along the inflaton direction, whereas in the first and second classes the vacuum energy is almost zero at the end of inflation.
Recently, Cormier and Holman have pointed out that fluctuations of the inflaton can grow nonperturbatively during the inflationary stage in the second class of models, when $`V^{(2)}(\varphi )`$ is negative. This idea is remarkable in the sense that particles are efficiently produced by the negative-mass-squared (spinodal) instability even during the slow-roll stage of the inflaton. They call this kind of model spinodal inflation, and investigated the nonperturbative evolution of the inflaton field making use of the Hartree approximation in the toy model with a potential $`V(\varphi )=\frac{3m^4}{2\lambda }-\frac{1}{2}m^2\varphi ^2+\frac{\lambda }{4!}\varphi ^4`$. It was found that low momentum modes of fluctuations are mainly enhanced, and the evolution of the system can be described effectively by two classical scalar fields. Especially when $`\varphi `$ is close to zero initially, the maximum fluctuation becomes so large that a secondary inflation, driven by the produced particles, occurs. It was suggested that the ordinary prediction of the scale invariance of density perturbations generated during inflation would be modified by taking into account this spinodal effect.
As one example of spinodal inflation, we draw attention to the natural inflation model, which was originally proposed by Freese et al. This model is characterized by pseudo Nambu-Goldstone bosons (PNGB) which appear when an approximate global symmetry is spontaneously broken. The PNGB potential is expressed as $`V(\varphi )=m^4\left[1+\mathrm{cos}(\varphi /f)\right]`$, where $`f`$ and $`m`$ are two mass scales which characterize the shape of the potential. Considering the PNGB as the inflaton candidate, $`f`$ and $`m`$ are constrained by the requirement of sufficient inflation and by the primordial density perturbations observed by the Cosmic Background Explorer (COBE) satellite. In the case where the effect of spinodal instability is neglected, these mass scales are found to be $`f\simeq m_{\mathrm{pl}}\simeq 10^{19}`$ GeV and $`m\simeq m_{\mathrm{GUT}}\simeq 10^{16}`$ GeV, respectively. While other inflation models require an extremely weak coupling $`\lambda `$ in order to satisfy the constraint from density perturbations (for example, in the chaotic inflation model with a self-coupling potential, $`\lambda \lesssim 10^{-13}`$), the PNGB inflation model is preferable in the sense that the two mass scales arise naturally in particle physics models. Furthermore, this model has an advantage in the analysis of the spinodal effect. When fluctuations of the inflaton grow significantly, higher order terms of fluctuations play a relevant role in the evolution of the system. As compared with other more complicated spinodal models such as new inflation, we can handle these higher order terms in an analytic way in the natural inflation model. What we are concerned with is how the efficient particle production during natural inflation modifies the dynamics of the system. As we will show later, the secondary inflation due to fluctuations pointed out in Ref. appears, and the evolution is drastically changed if the initial value of the inflaton is close to zero.
This paper is organized as follows. In the next section, basic equations based on the Hartree approximation are introduced in the natural inflation model. In Sec. III, we study how the fluctuation of the inflaton is generated during the inflationary stage. We will show that this effect can significantly alter the dynamics of inflation. We present our discussions and conclusions in the final section.
## II Basic equations
The model we consider is
$`\mathcal{L}=\sqrt{-g}\left[\frac{1}{2\kappa ^2}R-\frac{1}{2}(\nabla \varphi )^2-V(\varphi )\right],`$ (1)
where $`\kappa ^2/8\pi \equiv G=m_{\mathrm{pl}}^{-2}`$ is Newton's gravitational constant, $`R`$ is the scalar curvature, and $`\varphi `$ is a minimally coupled inflaton field. In this paper, we adopt a potential of the so-called natural inflation type
$`V(\varphi )=m^4\left[1+\mathrm{cos}\left({\displaystyle \frac{\varphi }{f}}\right)\right],`$ (2)
where two mass scales $`m`$ and $`f`$ characterize the height and width of the potential, respectively. The typical mass scales are of the order $`fm_{\mathrm{pl}}10^{19}`$ GeV and $`mm_{\mathrm{GUT}}10^{16}`$ GeV for the success of the scenario of the natural inflation. The inflaton field is initially located in the region of $`0<\varphi (0)<\pi f`$, and inflation takes place while inflaton evolves slowly toward the minimum of its potential at $`\varphi =\pi f`$. In order to obtain the sufficient inflation by which the number of $`e`$-folding exceeds $`N60`$, the initial value of inflaton is required to be close to $`\varphi =0`$. For example, in the case of $`f=10^{19}`$ GeV, we need $`\varphi (0)\stackrel{<}{}0.5m_{\mathrm{pl}}`$. The value of $`\varphi `$ when inflation ends ($`=\varphi _F`$) depends on the scale $`f`$, but $`\varphi _F`$ is close to the value $`\pi f`$ for the typical values of $`fm_{\mathrm{pl}}`$. When inflaton begins to oscillate around $`\varphi =\pi f`$, the system enters a reheating stage.
For the potential $`(\text{2})`$, we find that $`V^{(2)}(\varphi )`$ is negative when the inflaton evolves in the region $`0<\varphi <\pi f/2`$. This leads to the enhancement of fluctuations by spinodal instability for realistic initial values of $`\varphi `$. After the inflaton field passes through $`\varphi =\pi f/2`$, where $`V^{(2)}(\varphi )`$ changes sign, fluctuations of the inflaton no longer grow. Hence an important factor for the development of fluctuations is the initial value of the inflaton. The mass scales $`f`$ and $`m`$ also affect the evolution of the system.
Let us obtain basic equations in the natural inflation model. We adopt the flat Friedmann-Robertson-Walker metric
$`ds^2=-dt^2+a^2(t)d\mathbf{x}^2,`$ (3)
where $`a(t)`$ is the scale factor, and $`t`$ is the cosmic time coordinate.
We decompose the quantum scalar field $`\varphi (t,\mathbf{x})`$ into its expectation value $`\varphi _0(t)`$ and the quantum fluctuation $`\delta \varphi (t,\mathbf{x})`$ as
$`\varphi (t,\mathbf{x})=\varphi _0(t)+\delta \varphi (t,\mathbf{x}),`$ (4)
with
$`\varphi _0(t)=\langle \varphi (t,\mathbf{x})\rangle =\frac{\mathrm{Tr}\,\varphi \rho (t)}{\mathrm{Tr}\,\rho (t)},`$ (5)
where $`\rho (t)`$ is the density matrix which satisfies the Liouville equation:
$`i\frac{d\rho (t)}{dt}=[\mathcal{H},\rho (t)].`$ (6)
$`\mathcal{H}`$ is the time-dependent Hamiltonian. The initial condition for the density matrix should be chosen to describe a local thermal equilibrium state. Given the initial condition, the time evolution of $`\rho (t)`$ is known from Eq. $`(\text{6})`$. The expectation value of the energy-momentum tensor is evaluated from the time varying density matrix. Our approach is to solve the semiclassical Einstein equations where the source of the gravitational field is the expectation value of the energy-momentum tensor.
The system we consider is out of equilibrium and nonperturbative, and there are several approximations suitable to describe such a state. In this paper, we adopt the Hartree mean field approximation of nonequilibrium quantum field theory. This is basically a Gaussian variational approximation to the time-dependent density matrix. Related to this issue, several authors have considered the large $`N`$ approximation, which can deal with contributions beyond leading order. Although it is of interest to examine the differences between the two approximations, the analysis based on the large $`N`$ approximation is left for future work.
Performing the Hartree factorization
$`\delta \varphi ^{2n}`$ $`\rightarrow `$ $`\frac{(2n)!}{2^n(n-1)!}\langle \delta \varphi ^2\rangle ^{n-1}\delta \varphi ^2-\frac{(2n)!(n-1)}{2^nn!}\langle \delta \varphi ^2\rangle ^n,`$ (7)
$`\delta \varphi ^{2n+1}`$ $`\rightarrow `$ $`\frac{(2n+1)!}{2^nn!}\langle \delta \varphi ^2\rangle ^n\delta \varphi ,`$ (8)
and making use of the tadpole condition
$`\langle \delta \varphi (t,\mathbf{x})\rangle =0,`$ (9)
the expectation value of the potential, $`\langle V(\varphi _0+\delta \varphi )\rangle `$, can be written as
$`\langle V(\varphi _0+\delta \varphi )\rangle =\sum _{n=0}^{\infty }\frac{1}{2^nn!}\langle \delta \varphi ^2\rangle ^nV^{(2n)}(\varphi _0),`$ (10)
where $`V^{(n)}(\varphi )\equiv \delta ^nV(\varphi )/\delta \varphi ^n`$.
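For the cosine potential $`(\text{2})`$, the series above can be resummed in closed form; this is the origin of the suppression factor $`F`$ that appears in Eq. $`(\text{21})`$ below. A minimal numerical check of the resummation (an illustration added here, with arbitrary test values):

```python
import numpy as np
from math import factorial

m, f = 1.0, 1.0          # units m = f = 1
phi0, s = 0.7, 0.5       # phi_0 and s = <delta phi^2> (arbitrary test values)

def V2n(n):
    # 2n-th derivative of V = m^4*(1 + cos(phi/f)) evaluated at phi_0;
    # for n >= 1 this equals m^4*(-1)^n*cos(phi_0/f)/f^(2n)
    if n == 0:
        return m**4*(1.0 + np.cos(phi0/f))
    return m**4*(-1.0)**n*np.cos(phi0/f)/f**(2*n)

series = sum(s**n/(2.0**n*factorial(n))*V2n(n) for n in range(30))
closed = m**4*(1.0 + np.exp(-s/(2.0*f**2))*np.cos(phi0/f))
print(series, closed)    # the truncated series reproduces the closed form
```

The equation of motion for the $`\varphi _0`$ field then reads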
$`\ddot{\varphi }_0+3H\dot{\varphi }_0+\sum _{n=0}^{\infty }\frac{1}{2^nn!}\langle \delta \varphi ^2\rangle ^nV^{(2n+1)}(\varphi _0)=0,`$ (11)
where a dot denotes a derivative with respect to the cosmic time coordinate and $`H\equiv \dot{a}/a`$ is the Hubble parameter. Expanding the $`\delta \varphi `$ field in Fourier modes as
$`\delta \varphi =\frac{1}{(2\pi )^{3/2}}\int \left(a_k\delta \varphi _k(t)e^{i\mathbf{k}\cdot \mathbf{x}}+a_k^{\dagger }\delta \varphi _k^{*}(t)e^{-i\mathbf{k}\cdot \mathbf{x}}\right)d^3\mathbf{k},`$ (12)
we obtain the following equation for the fluctuation:
$`\delta \ddot{\varphi }_k+3H\delta \dot{\varphi }_k+\left[\frac{k^2}{a^2}+\sum _{n=0}^{\infty }\frac{1}{2^nn!}\langle \delta \varphi ^2\rangle ^nV^{(2n+2)}(\varphi _0)\right]\delta \varphi _k=0,`$ (13)
where the expectation value $`\langle \delta \varphi ^2\rangle `$ is represented by
$`\langle \delta \varphi ^2\rangle =\frac{1}{2\pi ^2}\int k^2|\delta \varphi _k|^2dk.`$ (14)
The evolution of the scale factor is written as
$`\left(\frac{\dot{a}}{a}\right)^2=\frac{\kappa ^2}{3}\left[\frac{1}{2}\dot{\varphi _0}^2+\frac{1}{2}\langle \delta \dot{\varphi }^2\rangle +\frac{1}{2a^2}\langle (\nabla \delta \varphi )^2\rangle +\sum _{n=0}^{\infty }\frac{1}{2^nn!}\langle \delta \varphi ^2\rangle ^nV^{(2n)}(\varphi _0)\right],`$ (15)
where $`\langle \delta \dot{\varphi }^2\rangle `$ and $`\langle (\nabla \delta \varphi )^2\rangle `$ are expressed by
$`\langle \delta \dot{\varphi }^2\rangle =\frac{1}{2\pi ^2}\int k^2|\delta \dot{\varphi }_k|^2dk,`$ (16)
$`\langle (\nabla \delta \varphi )^2\rangle =\frac{1}{2\pi ^2}\int k^4|\delta \varphi _k|^2dk.`$ (17)
The quantities in Eqs. $`(\text{14})`$, $`(\text{16})`$, and $`(\text{17})`$ need to be regulated in order to remove the divergences of the integrals. Several authors have considered renormalization by the methods of adiabatic regularization and dimensional regularization. The former scheme is based on introducing a large momentum cutoff and subtracting the leading adiabatic orders of the fluctuation terms. The latter is a covariant regularization in which the counter terms do not depend on the initial state. However, dimensional regularization has the shortcoming that the energy-momentum tensor has an initial singularity. In this paper, we make use of the scheme of adiabatic regularization as in Ref. , which is suitable for numerical computations.
In the natural inflation potential $`(\text{2})`$, Eqs. $`(\text{11})`$, $`(\text{13})`$, and $`(\text{15})`$ can be rewritten as
$`\ddot{\varphi }_0+3H\dot{\varphi }_0-\frac{m^4}{f}F(\langle \delta \varphi ^2\rangle )\mathrm{sin}\left(\frac{\varphi _0}{f}\right)=0,`$ (18)
$`\delta \ddot{\varphi }_k+3H\delta \dot{\varphi }_k+\left[\frac{k^2}{a^2}-\frac{m^4}{f^2}F(\langle \delta \varphi ^2\rangle )\mathrm{cos}\left(\frac{\varphi _0}{f}\right)\right]\delta \varphi _k=0,`$ (19)
$`\left(\frac{\dot{a}}{a}\right)^2=\frac{\kappa ^2}{3}\left\{\frac{1}{2}\dot{\varphi _0}^2+\frac{1}{2}\langle \delta \dot{\varphi }^2\rangle +\frac{1}{2a^2}\langle (\nabla \delta \varphi )^2\rangle +m^4\left[1+F(\langle \delta \varphi ^2\rangle )\mathrm{cos}\left(\frac{\varphi _0}{f}\right)\right]\right\},`$ (20)
where
$`F(\langle \delta \varphi ^2\rangle )\equiv \mathrm{exp}\left(-\frac{\langle \delta \varphi ^2\rangle }{2f^2}\right).`$ (21)
Note that $`F(\langle \delta \varphi ^2\rangle )=1`$ in the case of $`\langle \delta \varphi ^2\rangle =0`$. When $`\varphi _0`$ is located in the region $`0<\varphi _0<\pi f/2`$ initially, the term in the square bracket in Eq. $`(\text{19})`$ is negative for small $`k`$ at the first stage of inflation. This leads to the enhancement of fluctuations of low momentum modes. Since $`\varphi _0`$ increases for $`0<\varphi _0<\pi f`$, as is found from Eq. $`(\text{18})`$, the term $`\mathrm{cos}(\varphi _0/f)`$ in Eq. $`(\text{19})`$ gradually decreases and the growth of fluctuations terminates after $`\varphi _0`$ becomes larger than $`\pi f/2`$.
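The instability can be made concrete by integrating Eq. $`(\text{19})`$ for a single long-wavelength mode in a toy setting: $`\varphi _0`$ (and hence the tachyonic mass) frozen, $`F=1`$, and a rigid de Sitter background. This simplified sketch is an illustration added here, and it ignores the rolling of $`\varphi _0`$ that eventually shuts the instability off; at late times the mode grows as $`e^{\lambda t}`$ with $`\lambda =[-3H+\sqrt{9H^2-4\mathcal{M}^2}]/2`$, where $`\mathcal{M}^2=-(m^4/f^2)\mathrm{cos}(\varphi _0/f)`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units m = 1; f = m_pl = 10^3 m, as in the typical case studied in Sec. III.
mpl = 1.0e3
f = mpl
phi0 = 0.1*mpl                                   # inflaton frozen at 0.1 m_pl
H = np.sqrt(8.0*np.pi/(3.0*mpl**2)*(1.0 + np.cos(phi0/f)))  # Eq. (20), fluctuations dropped
M2 = -np.cos(phi0/f)/f**2                        # mass term of Eq. (19) with F = 1
k = H                                            # a horizon-scale mode, a(0) = 1

def rhs(t, y):
    dphi, ddphi = y
    a = np.exp(H*t)                              # rigid de Sitter expansion
    return [ddphi, -3.0*H*ddphi - (k**2/a**2 + M2)*dphi]

t_end = 50.0/H                                   # 50 e-foldings
sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], rtol=1e-8, atol=1e-12,
                dense_output=True)
lam = 0.5*(-3.0*H + np.sqrt(9.0*H**2 - 4.0*M2))  # analytic late-time exponent
t1, t2 = 0.6*t_end, t_end
rate = np.log(abs(sol.sol(t2)[0]/sol.sol(t1)[0]))/(t2 - t1)
print(rate, lam)   # measured growth rate matches lambda ~ |M2|/(3H) << H
```

Because $`|\mathcal{M}^2|\ll H^2`$ during slow roll, the growth per $`e`$-fold is small; the large fluctuations discussed below accumulate over many $`e`$-foldings.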
As is found from Eq. $`(\text{20})`$, the evolution of the inflaton can be described effectively by two homogeneous fields with the potential
$`V(\varphi _0,\sigma )\equiv m^4\left[1+\mathrm{exp}\left(-\frac{\sigma ^2}{2f^2}\right)\mathrm{cos}\left(\frac{\varphi _0}{f}\right)\right],`$ (22)
where $`\sigma \equiv \sqrt{\langle \delta \varphi ^2\rangle }`$. This potential is depicted in Fig. 1. When fluctuations do not grow relevantly, so that $`\sigma \ll f`$, the $`\varphi _0`$ field slowly rolls down toward the potential minimum $`\varphi _0=\pi f`$ almost along the $`\varphi _0`$ direction in the usual manner. The inflationary period ends when the $`\varphi _0`$ field begins to oscillate around the minimum of its potential. At this stage, the potential energy becomes small and the expansion rate slows down. However, in the case where $`\sigma `$ grows to the order of $`f`$, the evolution of the system is drastically modified. The inflaton field moves toward the $`\sigma `$ direction rather than the $`\varphi _0`$ direction in Fig. 1. It reaches the flat region $`\sigma \gtrsim 2m_{\mathrm{pl}}`$, and secondary inflation occurs there. In the next section we investigate for what initial values of $`\varphi _0`$ this behavior appears.
Before analyzing the evolution of the system, we mention the initial conditions for the fluctuation. We should choose a conformal adiabatic vacuum state in which the density matrix represents a local thermal equilibrium, which means that the density matrix commutes with the initial conformal Hamiltonian. This corresponds to choosing the mode functions $`\delta \varphi _k`$ as
$`\delta \varphi _k(0)=\frac{1}{\sqrt{2\omega _k(0)}},\delta \dot{\varphi }_k(0)=\left[-i\omega _k(0)-H(0)\right]\delta \varphi _k(0),`$ (23)
with
$`\omega _k^2(0)=k^2+\mathcal{M}^2(0),\mathcal{M}^2(0)=-\frac{m^4}{f^2}F(\langle \delta \varphi ^2\rangle )\mathrm{cos}\left(\frac{\varphi _0(0)}{f}\right)-\frac{R(0)}{6},`$ (24)
where $`\mathcal{M}^2(0)`$ is the initial effective mass squared of the fluctuation, $`R(0)`$ is the initial scalar curvature, and we set $`a(0)=1`$.
In the present model, since $`\omega _k^2`$ becomes negative for small $`k`$, we need to modify the initial frequencies of the low momentum modes. Following the approach of Ref. , we adopt the initial frequency
$`\omega _k^2(0)=k^2+\mathcal{M}^2(0)\mathrm{tanh}\left(\frac{k^2+\mathcal{M}^2(0)}{|\mathcal{M}^2(0)|}\right).`$ (25)
This initial frequency coincides with the conformal vacuum frequency $`(\text{24})`$ for large $`k`$, and becomes positive for small $`k`$. This choice of the initial frequency smoothly interpolates between large and small momentum modes. An alternative way is to choose the initial condition as
$`\omega _k^2(0)=\{\begin{array}{cc}k^2+|\mathcal{M}^2(0)|\hfill & \text{with }k^2<|\mathcal{M}^2(0)|\text{,}\hfill \\ k^2+\mathcal{M}^2(0)\hfill & \text{with }k^2\ge |\mathcal{M}^2(0)|\text{.}\hfill \end{array}`$ (26)
Although there are some subtleties in the choice of initial frequencies, we have numerically checked that these different choices have little effect on the evolution of the system. The qualitative properties of the system are the same for either choice, Eq. $`(\text{25})`$ or Eq. $`(\text{26})`$.
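For concreteness, the two choices can be written as short functions; this is an illustration added here, with `M2` standing for the effective mass squared $`\mathcal{M}^2(0)`$ of Eq. $`(\text{24})`$:

```python
import numpy as np

def omega2_tanh(k2, M2):
    """Smoothed initial frequency, Eq. (25)."""
    return k2 + M2*np.tanh((k2 + M2)/abs(M2))

def omega2_step(k2, M2):
    """Piecewise alternative, Eq. (26)."""
    return np.where(k2 < abs(M2), k2 + abs(M2), k2 + M2)

M2 = -1.0                       # tachyonic initial mass squared (test value)
k2 = np.linspace(0.0, 4.0, 9)
print(omega2_tanh(k2, M2))      # positive even at k^2 = 0
print(omega2_step(k2, M2))      # both tend to k^2 + M2 at large k^2
```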
We investigate the nonperturbative evolution of $`\langle \delta \varphi ^2\rangle `$ with the initial conditions of Eqs. $`(\text{23})`$ and $`(\text{25})`$ as a semiclassical problem.
## III Particle production in the natural inflation model
In this section, we study the out-of-equilibrium dynamics due to the enhancement of fluctuations in the natural inflation model. Let us first consider the typical case of $`f=10^{19}`$ GeV $`\simeq m_{\mathrm{pl}}`$ and $`m=10^{16}`$ GeV $`\simeq 10^{-3}m_{\mathrm{pl}}`$. In order to solve the puzzles of the standard big bang cosmology, the number of $`e`$-foldings
$`N\equiv \mathrm{ln}\left(\frac{a(t_f)}{a(0)}\right),`$ (27)
where $`t_f`$ denotes the time when the slow roll period ends, is required to be $`N\gtrsim 60`$. We need initial values of the inflaton $`\varphi (0)\lesssim 0.5m_{\mathrm{pl}}`$ to obtain $`N\gtrsim 60`$ in the case where the spinodal effect is not included. For example, when $`\varphi (0)=0.5m_{\mathrm{pl}}`$, $`N=71`$; and when $`\varphi (0)=m_{\mathrm{pl}}`$, $`N=39`$. One may consider that the enhancement of fluctuations would lead to a larger number of $`e`$-foldings and relax the constraint on $`\varphi (0)`$ to yield $`N\gtrsim 60`$. However, this is not the case. Fluctuations do not grow relevantly for $`\varphi (0)\gtrsim 0.1m_{\mathrm{pl}}`$. We depict the evolution of the $`\varphi _0`$ field and the fluctuation $`\sigma `$ for the case of $`\varphi (0)=0.1m_{\mathrm{pl}}`$ in Fig. 2. We find that the maximum fluctuation at $`\varphi _0=\pi f/2`$ is $`\sigma _{\mathrm{max}}\simeq 10^{-5}m_{\mathrm{pl}}`$. Since $`\sigma _{\mathrm{max}}^2/(2f^2)\ll 1`$ and $`F(\langle \delta \varphi ^2\rangle )`$ is close to unity, the evolution of the $`\varphi _0`$ field and the scale factor are almost the same as in the case where the growth of the fluctuation is neglected. The $`\varphi _0`$ field evolves toward the potential minimum at $`\varphi _0=\pi f`$ without being affected by the back reaction of produced particles. The inflationary period ends when $`mt\simeq 4.5\times 10^4`$, at which point the value of $`\varphi _0`$ is $`\varphi _0\simeq 3.0f`$.
In the case of $`\varphi (0)\lesssim 0.1m_{\mathrm{pl}}`$, the number of $`e`$-foldings becomes larger with the decrease of $`\varphi (0)`$, since the slow roll period is longer. In addition, fluctuations are enhanced more efficiently. In TABLE I, we show numerical values of $`N`$ and $`\sigma _{\mathrm{max}}`$ for various values of $`\varphi (0)`$. The number of $`e`$-foldings $`N^{\prime }`$ obtained when the spinodal effect is neglected is also presented. We find that both $`N`$ and $`\sigma _{\mathrm{max}}`$ increase with the decrease of $`\varphi (0)`$ for $`10^{-6}m_{\mathrm{pl}}\lesssim \varphi (0)\lesssim 10^{-1}m_{\mathrm{pl}}`$. The growth of fluctuations continues until the $`\varphi _0`$ field reaches $`\varphi _0=\pi f/2`$. With the decrease of $`\varphi (0)`$, the period during which the inflaton field moves in the region of $`V^{(2)}(\varphi )<0`$ becomes longer, which results in a larger maximum fluctuation. In the case of $`10^{-6}m_{\mathrm{pl}}\lesssim \varphi (0)\lesssim 10^{-1}m_{\mathrm{pl}}`$, numerical calculations show that $`\sigma _{\mathrm{max}}`$ can be approximately written as a function of $`\varphi (0)`$:
$`\sigma _{\mathrm{max}}\simeq \left(\frac{\varphi (0)}{10^{-6}m_{\mathrm{pl}}}\right)^{-1}m_{\mathrm{pl}}.`$ (28)
When $`\varphi (0)\gtrsim 10^{-5}m_{\mathrm{pl}}`$, $`\sigma _{\mathrm{max}}`$ does not exceed $`0.1m_{\mathrm{pl}}`$. In this case, since $`\langle \delta \varphi ^2\rangle /2f^2\ll 1`$ in Eq. $`(\text{21})`$, the back reaction effect due to fluctuations can be neglected. Although the inflaton field moves a bit toward the $`\sigma `$ direction in Fig. 1, it slowly rolls down toward the minimum of its potential in the usual manner. As a result, the number of $`e`$-foldings does not change even taking into account the spinodal effect (see TABLE I).
On the other hand, when $`\varphi (0)\lesssim 10^{-6}m_{\mathrm{pl}}`$, the fluctuation reaches $`\sigma _{\mathrm{max}}\gtrsim m_{\mathrm{pl}}`$. In this case, produced fluctuations play a relevant role in the evolution of the system. For example, let us consider the case of $`\varphi (0)=5.0\times 10^{-7}m_{\mathrm{pl}}`$. As is shown in Fig. 3, the fluctuation reaches the maximum value $`\sigma _{\mathrm{max}}\simeq 2.3m_{\mathrm{pl}}`$ at $`mt=2.5\times 10^5`$, where $`\varphi _0=\pi f/2`$. After that, particle production terminates completely because $`V^{(2)}(\varphi )`$ changes sign, and $`\sigma `$ decreases due to the expansion of the universe. We can see this behavior of the inflaton in Fig. 1. In this case, the inflaton field evolves toward the $`\sigma `$ direction rather than the $`\varphi _0`$ direction, and reaches the region around $`\sigma \simeq 2.3m_{\mathrm{pl}}`$ and $`\varphi _0\simeq 1.5m_{\mathrm{pl}}`$. Since this region of the effective potential $`(\text{22})`$ is flatter than the region of $`\sigma \simeq 0`$ and $`\varphi _0\simeq 1.5m_{\mathrm{pl}}`$, the inflaton field moves slowly for some time. After that, it evolves along the valley around $`0<\sigma /m_{\mathrm{pl}}<2.3`$ and $`\varphi _0\simeq \pi f`$, and finally arrives at the minimum of its potential. The amount of inflation is larger than in the case where the spinodal effect is ignored, as is found in TABLE I, because produced fluctuations provide an additional energy density in Eq. $`(\text{20})`$.
This tendency becomes stronger with the decrease of $`\varphi (0)`$. In Fig. 4, we depict the evolution of the $`\varphi _0`$ and $`\sigma `$ fields in the case of $`\varphi _0(0)=3.0\times 10^{-7}m_{\mathrm{pl}}`$. In this case, the fluctuation reaches the maximum value $`\sigma _{\mathrm{max}}\simeq 3.8m_{\mathrm{pl}}`$. One point of difference from the $`\varphi (0)=5.0\times 10^{-7}m_{\mathrm{pl}}`$ case is that the inflaton field stays in flat regions for a long time: $`1\times 10^6\lesssim mt\lesssim 7\times 10^6`$. As is seen in Fig. 1, the effective potential $`V(\varphi _0,\sigma )`$ in the region around $`\sigma \simeq 3.8m_{\mathrm{pl}}`$ and $`\varphi _0\simeq 1.5m_{\mathrm{pl}}`$ is very flat. Since $`F(\langle \delta \varphi ^2\rangle _{\mathrm{max}})`$ is much smaller than unity, $`V(\varphi _0,\sigma )`$ takes the almost constant value $`m^4`$. The third terms in Eqs. $`(\text{18})`$ and $`(\text{19})`$ become very small in this region (since the main contributions to the fluctuation are due to low momentum modes, $`k^2/a^2\approx 0`$), and the inflaton field evolves very slowly in the flat region $`V(\varphi _0,\sigma )\simeq m^4`$. This results in a secondary inflation supported by fluctuations. This behavior was originally pointed out by Cormier and Holman in the model with the potential $`V(\varphi )=\frac{3m^4}{2\lambda }-\frac{1}{2}m^2\varphi ^2+\frac{\lambda }{4!}\varphi ^4`$. We can expect that their results based on the Hartree approximation generally hold in more complex spinodal-type potentials if we choose initial values of the inflaton close to zero. When $`\varphi (0)=3.0\times 10^{-7}m_{\mathrm{pl}}`$, the secondary inflation continues much longer than the first inflation driven by the potential energy around $`\varphi =0`$ at $`0\lesssim mt\lesssim 1.3\times 10^5`$ (see Fig. 5). This means that the number of $`e`$-foldings is modified to be much larger than in the case where particle production due to spinodal instability is neglected. In fact, as is found in TABLE I, the number of $`e`$-foldings is as large as $`N=22178`$. After the secondary inflation ends, the inflaton field rolls down toward the potential minimum around $`\varphi _0=\pi f`$ and $`\sigma =0`$, after which the universe enters the reheating stage.
With the decrease of the initial value of the inflaton to $`\varphi (0)\lesssim 10^{-7}m_{\mathrm{pl}}`$, since the fluctuation grows as $`\sigma \gtrsim 4m_{\mathrm{pl}}`$, the inflaton reaches even flatter regions of the potential $`V(\varphi _0,\sigma )`$. The duration of the secondary inflation becomes very long, and the amount of inflation is enormously large. As long as $`0<\varphi _0<\pi f/2`$, the inflaton field tends to roll down toward the $`\sigma `$ direction, and the secondary inflation continues. However, once $`\varphi _0`$ exceeds the value $`\varphi _0=\pi f/2`$, $`\sigma `$ begins to decrease toward $`\sigma =0`$. The secondary inflation never ends in the extreme case of $`\varphi _0=0`$, as was pointed out in Ref. .
Next, we investigate the case where $`m`$ and $`f`$ are changed. The mass $`m`$ is constrained by the density perturbations observed by COBE. An analytic estimation neglecting particle production due to spinodal instability shows that $`m`$ is constrained as a function of $`f`$:
$`m=\frac{1.7\times 10^{16}}{b^{1/2}}\left[\frac{m_{\mathrm{pl}}}{f}\mathrm{sin}\left(\frac{\varphi (t_f)}{2f}\right)\right]^{1/2}\mathrm{exp}\left(-\frac{15m_{\mathrm{pl}}^2}{8\pi f^2}\right)\mathrm{GeV},`$ (29)
where $`b`$ is an overall bias factor in the range $`0.7<b<1.3`$. The term $`\mathrm{sin}(\varphi (t_f)/2f)`$ is typically of the order of unity. In the case where $`f`$ is of order $`m_{\mathrm{pl}}`$, $`m`$ ranges in the region $`10^{15}\mathrm{GeV}\lesssim m\lesssim 10^{16}\mathrm{GeV}`$. When $`f`$ is smaller than $`m_{\mathrm{pl}}`$ by one order of magnitude, $`m`$ decreases significantly because the exponential term in Eq. $`(\text{29})`$ plays the dominant role in determining the mass scale. If we include the effect of spinodal instability, the constraint on $`m`$ will change, since the instability would produce additional spatial inhomogeneity. In order to study this issue appropriately, we have to investigate the evolution of metric perturbations during inflation. In the present model, however, since metric perturbations may be enhanced up to the nonlinear level by spinodal instability, first-order perturbation theory will not give a correct description of the physics. Although we do not consider these complicated issues in this paper, it would be necessary to include metric perturbations for a complete study of the nonperturbative dynamics.
Let us study the growth of fluctuations by changing the scale $`m`$ for the fixed value $`f=10^{19}`$ GeV $`\simeq m_{\mathrm{pl}}`$. Consider the case of $`m=10^{15}`$ GeV $`\simeq 10^{-4}m_{\mathrm{pl}}`$. When particle creation by spinodal instability is neglected, since the achieved number of $`e`$-foldings is expressed by
$`N={\displaystyle \frac{16\pi f^2}{m_{\mathrm{pl}}^2}}\mathrm{ln}\left[{\displaystyle \frac{\mathrm{sin}(\varphi (t_f)/2f)}{\mathrm{sin}(\varphi (0)/2f)}}\right],`$ (30)
it does not depend on the scale of $`m`$. Although it takes more time for inflation to end for smaller values of $`m`$, we obtain $`N\gtrsim 60`$ for $`\varphi (0)\lesssim 0.5m_{\mathrm{pl}}`$, the same as in the $`m=10^{16}`$ GeV case. As for fluctuations of the inflaton, since $`\langle \delta \varphi ^2\rangle `$ is normalized by the square of the mass $`m`$, the achieved maximum value of $`\sigma `$ becomes smaller as $`m`$ decreases for the same initial value of $`\varphi `$. Particle production occurs relevantly for $`\varphi (0)\lesssim 10^{-3}m_{\mathrm{pl}}`$. Numerical calculations show that the maximum fluctuation $`\sigma _{\mathrm{max}}`$ in the case of $`10^{-8}m_{\mathrm{pl}}\lesssim \varphi (0)\lesssim 10^{-3}m_{\mathrm{pl}}`$ is smaller by two orders of magnitude than in the case of $`m=10^{16}`$ GeV (see TABLE II). Namely, we find the relation
$`\sigma _{\mathrm{max}}\simeq \left(\frac{\varphi (0)}{10^{-6}m_{\mathrm{pl}}}\right)^{-1}\times 10^{-2}m_{\mathrm{pl}}.`$ (31)
When $`\varphi (0)\lesssim 10^{-8}m_{\mathrm{pl}}`$, $`\sigma _{\mathrm{max}}`$ exceeds the order of $`f\simeq m_{\mathrm{pl}}`$, and the secondary inflation occurs as in the case of $`m=10^{16}`$ GeV. The number of $`e`$-foldings becomes larger than in the case where the spinodal effect is neglected, and it depends on the scale of $`m`$. We find that smaller values of $`\varphi (0)`$ are required for the development of fluctuations as $`m`$ decreases.
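Equation $`(\text{30})`$ can be evaluated directly; the short check below is a sketch added here, and it assumes $`\varphi (t_f)\simeq 3.0f`$, the end-of-inflation value quoted earlier for $`f\simeq m_{\mathrm{pl}}`$:

```python
import numpy as np

mpl = 1.0

def efolds(phi0, f, phi_f):
    """Slow-roll number of e-foldings in natural inflation, Eq. (30)."""
    return 16.0*np.pi*f**2/mpl**2 \
        *np.log(np.sin(phi_f/(2.0*f))/np.sin(phi0/(2.0*f)))

f, phi_f = mpl, 3.0*mpl
for phi0 in (0.5*mpl, 1.0*mpl):
    print(phi0, efolds(phi0, f, phi_f))
# gives N ~ 70 and N ~ 37, close to the values N = 71 and N = 39 quoted above
```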
Finally, we comment on the case where the mass scale $`f`$ is changed. The number of $`e`$-foldings is smaller for smaller $`f`$, because the potential $`(\text{2})`$ becomes steeper. Consider the case of $`f=5.0\times 10^{18}`$ GeV $`\simeq 0.5m_{\mathrm{pl}}`$ and $`m=10^{15}`$ GeV $`\simeq 10^{-4}m_{\mathrm{pl}}`$. Even for the initial value $`\varphi (0)=10^{-1}m_{\mathrm{pl}}`$, the number of $`e`$-foldings is only $`N=33`$. In order to obtain sufficient inflation, $`N\gtrsim 60`$, we require initial values $`\varphi (0)\lesssim 10^{-2}m_{\mathrm{pl}}`$. Since the inflaton field rolls down more rapidly in the region of $`V^{(2)}(\varphi )<0`$ compared with the case of $`f=10^{19}`$ GeV, the growth of fluctuations is slower. For example, the maximum fluctuations are $`\sigma _{\mathrm{max}}=6.7\times 10^{-4}m_{\mathrm{pl}}`$ for $`\varphi (0)=10^{-5}m_{\mathrm{pl}}`$, and $`\sigma _{\mathrm{max}}=6.7\times 10^{-2}m_{\mathrm{pl}}`$ for $`\varphi (0)=10^{-7}m_{\mathrm{pl}}`$ (see TABLE III). These values are smaller than in the case of $`f=10^{19}`$ GeV and $`m=10^{15}`$ GeV for the same initial values of $`\varphi `$. The fluctuation grows up to the nonlinear level for $`\varphi (0)\lesssim 5\times 10^{-9}m_{\mathrm{pl}}`$, and the secondary inflation also occurs in this case. For values of $`f`$ which are not much smaller than the Planck scale, we can say that fluctuations are enhanced beyond the perturbative level and can support the total amount of inflation.
## IV Concluding remarks and discussions
In this paper we have investigated the evolution of an inflaton field $`\varphi `$ in the presence of nonperturbative growth of fluctuations due to spinodal instability in the natural inflation model. Since the second derivative $`V^{(2)}(\varphi )`$ of the potential $`V(\varphi )=m^4\left[1+\mathrm{cos}(\varphi /f)\right]`$ is negative for $`0<\varphi <\pi f/2`$, fluctuations of the inflaton can grow even during the inflationary phase.
The strength of the excitation of the fluctuation $`\sigma `$ strongly depends on the initial value of the inflaton, $`\varphi (0)`$. For the typical mass scales $`f=m_{\mathrm{pl}}`$ and $`m=10^{-3}m_{\mathrm{pl}}`$, we have examined the dynamics of the system for various values of $`\varphi (0)`$ by making use of the Hartree approximation. For $`\varphi (0)\lesssim 0.5m_{\mathrm{pl}}`$, we have sufficient inflation, $`N\gtrsim 60`$, as required to solve the cosmological puzzles of the big bang cosmology. When $`\varphi (0)\lesssim 0.1m_{\mathrm{pl}}`$, fluctuations are relevantly enhanced with the decrease of $`\varphi (0)`$ because the duration of the spinodal instability becomes longer. Since long wavelength modes of fluctuations are mainly enhanced, the system can be described effectively by two homogeneous fields with the potential $`(\text{22})`$. The natural inflation model has the advantage that higher order terms of fluctuations can be handled in an analytic way. With the increase of $`\sigma `$, the term $`F(\langle \delta \varphi ^2\rangle )`$ in Eq. $`(\text{21})`$ decreases from unity. This changes the evolution of the system, as is found in Eqs. $`(\text{18})`$-$`(\text{20})`$. Numerical calculations show that the maximum value of the fluctuation is $`\sigma _{\mathrm{max}}\simeq \left(\varphi (0)/10^{-6}m_{\mathrm{pl}}\right)^{-1}m_{\mathrm{pl}}`$ for $`10^{-5}m_{\mathrm{pl}}\lesssim \varphi (0)\lesssim 10^{-1}m_{\mathrm{pl}}`$, which means that $`\sigma _{\mathrm{max}}`$ is less than $`0.1m_{\mathrm{pl}}`$. In this case, since $`\sigma _{\mathrm{max}}`$ is smaller than the scale $`f`$ by at least one order of magnitude, the back reaction effect due to particle production can be neglected. When $`\varphi (0)\lesssim 10^{-6}m_{\mathrm{pl}}`$, however, $`\sigma `$ exceeds the scale $`f`$ and the dynamics of inflation are altered. Since the effective potential $`(\text{22})`$ is flat in the region $`\sigma \gtrsim 2m_{\mathrm{pl}}`$, secondary inflation takes place, driven by the produced fluctuations. As compared with the case where the spinodal effect is neglected, the number of $`e`$-foldings becomes much larger. The secondary inflation continues for a long time in the case of $`\varphi (0)\lesssim 10^{-7}m_{\mathrm{pl}}`$. Once the inflaton exceeds the value $`\varphi _0=\pi f/2`$, it gradually approaches the potential minimum around $`\varphi _0=\pi f`$ and $`\sigma \simeq 0`$, after which the inflationary period terminates.
If we change the two mass scales $`m`$ and $`f`$, we obtain a smaller maximum fluctuation with the decrease of $`m`$ and $`f`$ for fixed initial values of $`\varphi (0)`$. However, if we choose smaller values of $`\varphi (0)`$, close to zero, we find that fluctuations can grow beyond the perturbative level and lead to the secondary inflation. If the spinodal effect is taken into account, the number of $`e`$-foldings depends on the scale of $`m`$ for fixed values of $`f`$ and $`\varphi (0)`$.
We should comment on some points. The influence of resonant particle production on cosmic background anisotropies was studied by several authors in the context of preheating and fermion production during inflation in the chaotic inflation model. In the new inflation scenario, it was found that metric perturbations of super-horizon modes are enhanced by spinodal instability when the inflaton is initially located around $`\varphi =0`$. It is interesting to study how the scale invariance of the Harrison-Zel'dovich spectrum would be modified in the present model by making use of the gauge invariant formalisms of metric perturbations.
Regarding the treatment of the back reaction by produced particles, we relied on the Hartree approximation, which is essentially a mean field approximation. In the present model, this has the advantage of being able to deal with higher order contributions of fluctuations analytically, beyond perturbation theory. On the other hand, there are other approaches related to back reaction issues. One of them is the 2PI formalism by Cornwall et al. Another approach is to add a stochastic noise term due to quantum fluctuations to the field equation, which is based on the closed time path formalism. These formalisms may alter quantitative details obtained in this paper, especially when fluctuations are enhanced significantly.
Although we studied the natural inflation model as one example of spinodal inflation, the nonperturbative evolution of fluctuations which leads to secondary inflation would be expected to occur in other spinodal models. As well as in the "small field" models such as natural inflation and new inflation, in which the inflaton is initially small, potentials with spinodal instability appear in the "large field" models such as higher curvature gravity and the nonminimally coupled scalar field upon performing conformal transformations to the Einstein frame. It is of interest how the dynamics of inflation are modified in these models by taking into account the effect of spinodal instability. These issues are under consideration.
## ACKNOWLEDGMENTS
The authors would like to thank Kei-ichi Maeda for useful discussions. T. T. is thankful for financial support from the JSPS. This work was supported partially by a Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science and Culture (No. 09410217), by a JSPS Grant-in-Aid (No. 094162), and by the Waseda University Grant for Special Research Projects.
Note added.
Very recently, Cormier and Holman considered the dynamics of the spinodal instability in the same model as ours. Their results are consistent with our results obtained in this paper.
Figure Captions
FIG. 1:
The effective two-field potential $`V(\varphi _0,\sigma )\equiv m^4\left[1+\mathrm{exp}\left(-\frac{\sigma ^2}{2f^2}\right)\mathrm{cos}\frac{\varphi _0}{f}\right]`$ in the natural inflation model. $`\varphi _0`$ and $`\sigma `$ are normalized by the scale $`f`$. In the case where the growth of the fluctuation of the inflaton is neglected, this reduces to the one-field potential $`V(\varphi _0)=m^4\left[1+\mathrm{cos}\left(\frac{\varphi _0}{f}\right)\right]`$. However, when the fluctuation grows significantly and evolves toward the $`\sigma `$ direction, the system is described by the two fields $`\varphi _0`$ and $`\sigma `$.
FIG. 2:
The evolution of the $`\varphi _0`$ and $`\sigma `$ fields for the initial value $`\varphi (0)=0.1m_{\mathrm{pl}}`$ in the case of $`f=10^{19}`$ GeV $`\simeq m_{\mathrm{pl}}`$ and $`m=10^{16}`$ GeV $`\simeq 10^{-3}m_{\mathrm{pl}}`$. Both fields are normalized by $`m_{\mathrm{pl}}`$. Although fluctuations grow at the initial stage, the maximum value $`\sigma _{\mathrm{max}}\simeq 10^{-5}m_{\mathrm{pl}}`$ achieved in this case is much smaller than the value of $`f`$. The $`\varphi _0`$ field evolves toward the potential minimum $`\varphi _0=\pi f`$, after which the universe enters the reheating stage. In this case, the enhancement of fluctuations hardly affects the evolution of the $`\varphi _0`$ field and the scale factor.
FIG. 3:
The evolution of the $`\varphi _0`$ and $`\sigma `$ fields for the initial value $`\varphi (0)=5.0\times 10^{-7}m_{\mathrm{pl}}`$ in the case of $`f=10^{19}`$ GeV $`\simeq m_{\mathrm{pl}}`$ and $`m=10^{16}`$ GeV $`\simeq 10^{-3}m_{\mathrm{pl}}`$. The fluctuation $`\sigma `$ reaches the maximum value $`\sigma _{\mathrm{max}}=2.3m_{\mathrm{pl}}`$ when $`\varphi _0=\pi f/2`$. After that, $`\sigma `$ decreases because the spinodal instability is absent. The inflaton field finally rolls down toward the potential minimum with $`\varphi _0=\pi f`$ and $`\sigma \simeq 0`$.
FIG. 4:
The evolution of the $`\varphi _0`$ and $`\sigma `$ fields for the initial value $`\varphi (0)=3.0\times 10^{-7}m_{\mathrm{pl}}`$ in the case of $`f=10^{19}`$ GeV $`\simeq m_{\mathrm{pl}}`$ and $`m=10^{16}`$ GeV $`\simeq 10^{-3}m_{\mathrm{pl}}`$. The fluctuation $`\sigma `$ reaches the maximum value $`\sigma _{\mathrm{max}}=3.8m_{\mathrm{pl}}`$ when $`\varphi _0=\pi f/2`$. The secondary inflation due to fluctuations occurs for $`1\times 10^6\lesssim mt\lesssim 7\times 10^6`$. In this region, since the effective two-field potential is very flat, the inflaton field evolves very slowly. Finally, the inflaton is trapped in the potential minimum $`\varphi _0=\pi f`$ and $`\sigma \simeq 0`$ at $`mt=7.6\times 10^6`$.
FIG. 5:
The evolution of the Hubble parameter $`H`$ for the initial value $`\varphi (0)=3.0\times 10^{-7}m_{\mathrm{pl}}`$ in the case of $`f=10^{19}`$ GeV $`\simeq m_{\mathrm{pl}}`$ and $`m=10^{16}`$ GeV $`\simeq 10^{-3}m_{\mathrm{pl}}`$. The first inflation occurs around $`\varphi _0\simeq 0`$ for $`0\lesssim mt\lesssim 1.3\times 10^5`$, which is followed by the secondary inflation caused by fluctuations for $`1\times 10^6\lesssim mt\lesssim 7\times 10^6`$. Since the duration of this secondary inflation is long, the achieved number of $`e`$-foldings is very large, $`N=22178`$.
## 1 An unknown factor
The relevance of clusters of galaxies in cosmology cannot be overstated. The cosmological virtue of X-ray clusters mostly resides in the fact that the X-ray properties of the Intra Cluster Medium (ICM) can be used as a direct tracer of the total concentration of mass. The luminosity, and in particular the emission-weighted temperature, offer a unique way to probe the power spectrum of primordial density fluctuations and its evolution.
However, ten years ago it was realized that a certain amount of non-gravitational energy input is needed to explain the scaling properties of X-ray halos ranging from clusters to groups. Such a non-gravitational contribution does break the simple relation between the distribution of the ICM and that of the total matter. A recent detection confirmed the existence of such extra energy, which can be conveniently quantified in terms of an entropy excess with respect to the value expected from gravitational processes only. Furthermore, the specific entropy, defined as $`S\propto \mathrm{ln}\left(K\right)=\mathrm{ln}\left(kT/\mu m_p\rho ^{2/3}\right)`$, determines the properties of both local and distant X-ray clusters. This implies that the X-ray evolution is driven both by the dynamics and by the heating history of the gas, which, in turn, may depend on star formation, nuclear activity, etc. In particular, the impact of the non-gravitational processes on the surrounding medium is unknown. Therefore, this unpredictable factor casts a shadow on the virtue of X-ray clusters as tracers of the distribution of matter. From this perspective, a physical model that includes the contribution of a non-gravitational term will restore the reliability of X-ray clusters, and possibly reveal new virtues.
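To fix the units used below, the entropy scale $`K=kT/(\mu m_p\rho ^{2/3})`$ can be evaluated for fiducial ICM conditions. The sketch below is purely illustrative: the temperature, electron density, and molecular weights are assumed values, not numbers taken from this paper.

```python
KEV = 1.602e-9           # 1 keV in erg
M_P = 1.673e-24          # proton mass in g
MU, MU_E = 0.6, 1.14     # assumed mean molecular weights (per particle, per electron)

def K_entropy(kT_keV, ne_cm3):
    """Specific entropy K = kT/(mu m_p rho^(2/3)) in erg cm^2 g^(-5/3)."""
    rho = MU_E*M_P*ne_cm3             # gas mass density from the electron density
    return kT_keV*KEV/(MU*M_P*rho**(2.0/3.0))

# A group-scale shell with kT ~ 1 keV at n_e ~ 1e-4 cm^-3:
print(K_entropy(1.0, 1.0e-4))   # ~ 0.5e34, the order of the K_* preferred below
```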
Here we will show that the entropy is a convenient variable to describe the evolution of the ICM with the inclusion of a heating contribution. As a consequence, looking at the entropy in distant X-ray clusters will give information on the non-gravitational processes that affect the ICM.
## 2 The Best Record of the History of Baryons
Let us describe the formation of X-ray halos as a spherical, smooth accretion of shells of gas (driven by the dark matter component) with an initial excess entropy $`K_{*}`$, which is the only free parameter. We can clearly distinguish three main phases in the thermodynamic evolution of the diffuse baryons:
* adiabatic compression when the gas starts to be collected in the evolving potential well and its temperature grows as $`kT\propto K_{*}\rho ^{2/3}`$;
* shock heating as the infall velocities of the shells become larger than the sound speed; as a consequence, the entropy of the accreted gas shell jumps to higher values;
* further adiabatic compression of the shells enclosed within the shock front; the shells may start to lose entropy due to radiative cooling, especially in the central regions.
What role is played by the initial excess entropy? In Figure 1 we show the entropy history of three baryonic shells (containing 1%, 10% and 50% of the baryonic mass of a cluster of $`10^{15}h^{-1}M_{\odot }`$ total) for two different initial values of $`K_{*}`$. In the first case, with $`K_{*}=0.1\times 10^{34}`$ erg cm$`^2`$ g$`^{-5/3}`$, the gas in the center of the halo becomes dense enough to start cooling early. Consequently, the final entropy in the center is much lower than the initial level. In particular, the inner shells can cool completely and drop out of the diffuse, emitting phase. In the case with $`K_{*}=0.3\times 10^{34}`$ erg cm$`^2`$ g$`^{-5/3}`$, the high initial value of $`K_{*}`$ prevents most of the gas from cooling, and a non-negligible entropy level is preserved in the center. These high entropy regions are responsible for the flat cores in the density distribution, which are more extended going from clusters to groups. This mechanism, by setting the appropriate value of $`K_{*}`$, bends the $`L`$-$`T`$ relation from the self-similar prediction $`L\propto T^2`$ to the observed average $`L\propto T^3`$. Note also that the entropy level at large radii is unaffected by the initial value, since it is dominated by shock heating.
Summarizing, after a proper treatment of shock heating and cooling, the entropy turns out to be the best record of the thermodynamic history of the diffuse baryons at the scale of groups and clusters. In particular, the excess entropy and the cooling processes strongly interfere with each other, in the sense that a non-negligible excess entropy inhibits the radiative cooling. Despite the simplification of assuming a constant and homogeneous value in the external gas, this model can reproduce many scaling properties of X-ray halos if $`K_{*}=\left(0.4\pm 0.1\right)\times 10^{34}`$ erg cm$`^2`$ g$`^{-5/3}`$. Note that the excess entropy can also be generated after the collapse, but this would require a much higher energy budget. The latter scenario is currently under investigation.
## 3 The Virtues of Clusters
At this point, it is clear that the evolution of the $`L`$-$`T`$ relation is affected to a large extent by the amount of the excess entropy and its time evolution. We recall that both the luminosity and the emission-weighted temperature are affected, even if the $`M`$-$`T`$ relation is less dependent on the actual value of $`K_{*}`$. However, once the non-gravitational processes can be included in the excess entropy, the above picture unveils a new virtue of X-ray halos. In fact, the emission properties of clusters and groups reflect both the dynamic and the thermodynamic history of the baryons. Paying the price of a more complex scenario, it will be possible not only to test the cosmology, but also, at the same time, the history of non-gravitational processes like nuclear activity and star formation (e.g., by coupling galaxy formation models with the evolution of X-ray halos).
The simple case of a single value of $`K_{*}`$ in the IGM can explain many scaling properties of local clusters, as shown in Figure 2 for the $`L`$-$`T`$ and the $`K`$-$`T`$ relations (where $`K`$ is estimated at $`r=0.1R_{vir}`$; note also that $`L`$ at the scale of groups is computed within a radius much larger than the $`0.1h^{-1}`$ Mpc used in previous work). In the same Figure 2 we show two different cases for the evolution of the excess entropy. We find that a constant $`K_{*}`$ gives a roughly constant $`L`$-$`T`$, while an evolving $`K_{*}\propto \left(1+z\right)^{-1}`$ gives similar local properties but a higher $`L`$-$`T`$ (lower $`K`$-$`T`$) at $`z\simeq 1`$. The observation of distant clusters, therefore, will unveil a large part of the history of the cosmic baryons, and can be usefully coupled with observations in other spectral bands in order to unambiguously identify the sources, the time scale and the global energy budget of the non-gravitational preheating.
This work has been supported by NASA grant NAG 8-1133.
# Super-horizon perturbations and preheating
## I Introduction
The standard inflationary paradigm is an extremely successful model in explaining observed structures in the Universe (see Refs. for reviews). The inhomogeneities originate from the quantum fluctuations of the inflaton field, which on being stretched to large scales become classical perturbations. The field inhomogeneities generate a perturbation in the curvature of comoving hypersurfaces, and later on these inhomogeneities are inherited by matter and radiation when the inflaton field decays. In the simplest scenario, the curvature perturbation on scales much larger than the Hubble length is constant, and in particular is unchanged during the inflaton decay. This enables a prediction of the present-day perturbations which does not depend on the specific cosmological evolution between the late stages of inflation and the recent past (say, before nucleosynthesis).
It has recently been claimed that this simple picture may be violated if inflation ends with a period of preheating, a violent decay of the inflaton particles into another field (or even into quanta of the inflaton field itself). Such a phenomenon would completely undermine the usual inflationary picture, and indeed the original claim was that large-scale perturbations would be amplified into the non-linear regime, placing them in conflict with observations such as measurements of microwave background anisotropies. Given the observational successes of the standard picture, these claims demand attention.
In a companion paper, we discuss the general criteria under which large-scale curvature perturbations can vary. As has been known for some time, this is possible provided there exist large-scale non-adiabatic pressure perturbations, as can happen for example in multi-field inflation models. Under those circumstances a significant effect is possible during preheating, though there is nothing special about the preheating era in this respect and this effect always needs to be considered in any multi-component inflation model.
In this paper we perform an analysis of the simplest preheating model, as discussed in Ref. . We identify two possible sources of variation of the curvature perturbation. One comes from large-scale isocurvature perturbations in the preheating field into which the inflaton decays; we concur with the recent analyses of Jedamzik and Sigl and Ivanov that this effect is negligible due to the rapid decay of the background value of the preheating field during inflation. However, we also show that in fact a different mechanism gives the dominant contribution, which is second-order in the field perturbations coming from short-wavelength fluctuations in the fields. Nevertheless, we show too that this effect is completely negligible, and hence that preheating in this model has no significant effect on large-scale curvature perturbations.
## II Perturbation evolution
An adiabatic perturbation is one for which all perturbations $`\delta x`$ share a common value for $`\delta x/\dot{x}`$, where $`\dot{x}`$ is the time dependence of the background value of $`x`$. If the Universe is dominated by a single fluid with a definite equation of state, or by a single scalar field whose perturbations start in the vacuum state, then only adiabatic perturbations can be supported. If there is more than one fluid, then the adiabatic condition is a special case, but for instance is preserved if a single inflaton field subsequently decays into several components. However, perturbations in a second field, for instance the one into which the inflaton decays during preheating, typically violate the adiabatic condition.
We describe the perturbations via the curvature perturbation on uniform-density hypersurfaces, denoted $`\zeta `$. (This is the notation of Bardeen, Steinhardt and Turner. General issues of perturbation description and evolution are discussed in a companion paper. The curvature perturbation of comoving spatial hypersurfaces, usually denoted by $`\mathcal{R}`$, is practically the same as $`\zeta `$ well outside the horizon, since the two coincide in the large-scale limit.) In linear theory the evolution of $`\zeta `$ is well known, and arises from the non-adiabatic part of the pressure perturbations. In any gauge, the pressure perturbation can be split into adiabatic and entropic (non-adiabatic) parts, by writing
$$\delta p=c_\mathrm{s}^2\delta \rho +\delta p_{\mathrm{nad}},$$
(1)
where $`c_\mathrm{s}^2\dot{p}/\dot{\rho }`$ and the non-adiabatic part is
$$\delta p_{\mathrm{nad}}\dot{p}\mathrm{\Gamma }\dot{p}\left(\frac{\delta p}{\dot{p}}\frac{\delta \rho }{\dot{\rho }}\right).$$
(2)
The entropy perturbation $`\mathrm{\Gamma }`$, defined in this way, is gauge-invariant, and represents the displacement between hypersurfaces of uniform pressure and uniform density.
On large scales anisotropic stress can be ignored when the matter content is entirely in the form of scalar fields, and in its absence the non-adiabatic pressure perturbation determines the variation of $`\zeta `$, according to the equation
$$\frac{d\zeta }{dN}=3Hc_\mathrm{s}^2\mathrm{\Gamma },$$
(3)
where $`N\mathrm{ln}a`$ measures the integrated expansion and $`H`$ is the Hubble parameter. The uniform-density hypersurfaces become ill-defined if the density is not a strictly decreasing function along worldlines between hypersurfaces of uniform density, and one might worry that this undermines the above analysis. However we can equally well derive this evolution equation in terms of the density perturbation on spatially-flat hypersurfaces, $`\delta \rho _\psi (d\rho /dN)\zeta `$, which remains well-defined. Spatially-flat hypersurfaces are automatically separated by a uniform integrated expansion on large scales, so the perturbed continuity equation in this gauge takes the particularly simple form
$$\frac{d\delta \rho _\psi }{dN}=3(\delta \rho _\psi +\delta p_\psi ).$$
(4)
From this one finds that $`\delta \rho _\psi d\rho /dN`$ for adiabatic perturbations and hence again we recover constant value for $`\zeta `$. However it is clearly possible for entropy perturbations to cause a change in $`\zeta `$ on arbitrarily large scales when the non-adiabatic pressure perturbation is non-negligible.
## III Preheating
During inflation, the reheat field into which the inflaton field decays possesses quantum fluctuations on small scales just like the inflaton field itself. As these perturbations are uncorrelated with those in the inflaton field, the adiabatic condition will not be satisfied, and hence there is a possibility that $`\zeta `$ might vary on large scales. Only direct calculation can demonstrate whether the effect might be significant, and we now compute this effect in the simplest preheating model, as analyzed in Ref. . This is a chaotic inflation model with scalar field potential
$$V(\varphi ,\chi )=\frac{1}{2}m^2\varphi ^2+\frac{1}{2}g^2\varphi ^2\chi ^2,$$
(5)
where $`\varphi `$ is the inflaton and $`\chi `$ the reheat field. Slow-roll inflation proceeds with $`\varphi m_{\mathrm{Pl}}`$ and $`g\chi m`$. The effective mass of the $`\chi `$ field is $`g\varphi `$ and thus will be much larger than the Hubble rate, $`H\sqrt{4\pi /3}m\varphi /m_{\mathrm{Pl}}`$, for $`gm/m_{\mathrm{Pl}}10^6`$. Throughout, we use the symbol โ$``$โ to indicate equality within the slow-roll approximation.
This model gives efficient preheating, since the effective mass of $`\chi `$ oscillates about zero with large amplitude. In most other models of inflation, preheating is less efficient or absent, because the mass oscillates about a nonzero value and/or has a small amplitude.
Any variation of $`\zeta `$ during preheating will be driven by the (non-adiabatic part of) the $`\chi `$ field perturbation. Our calculation takes place in three steps. The first is to compute the perturbations in the $`\chi `$ field at the end of inflation. The second is to compute how these perturbations are amplified during the preheating epoch by the strong resonance. Finally, the main part of the calculation is to compute the change in $`\zeta `$ driven by these $`\chi `$ perturbations.
### A The initial quantum fluctuation of the $`\chi `$-field
Perturbations in the $`\chi `$ field obey the wave equation
$$\ddot{\delta \chi }+3H\dot{\delta \chi }+\left(\frac{k^2}{a^2}+g^2\varphi ^2\right)\delta \chi =0.$$
(6)
The slow-roll conditions ensure that the $`\chi `$ field remains in the adiabatic vacuum state for a massive field
$$\delta \chi _k\frac{e^{i\omega t}}{\sqrt{2\omega }},$$
(7)
where $`\omega ^2=k^2/a^2+g^2\varphi ^2`$. This is a solution provided
$$\nu \frac{m_\chi }{H}\sqrt{\frac{3}{4\pi }}\frac{gm_{\mathrm{Pl}}}{m}1,$$
(8)
where $`m_\chi g\varphi `$ is the effective mass of the $`\chi `$ field.
The power spectrum of a quantity $`x`$, decomposed into Fourier components $`x_๐ค`$, is defined as
$$๐ซ_x\frac{k^3}{2\pi ^2}|x_๐ค|^2,$$
(9)
where $`k=|๐ค|`$ and the average is over ensembles. Hence the power spectrum for long-wavelength fluctuations ($`km_\chi `$) in the $`\chi `$ field simply reduces to the result for a massive field in flat space
$$๐ซ_{\delta \chi }\frac{1}{4\pi ^2m_\chi }\left(\frac{k}{a}\right)^3,$$
(10)
where $`m_\chi `$ is the mass of the field at the required time. Physically, this says that at all times the expansion of the Universe has a negligible effect on the modes as compared to the mass. In particular, at the end of inflation we can write
$$๐ซ_{\delta \chi }|_{\mathrm{end}}\frac{1}{\nu }\left(\frac{H_{\mathrm{end}}}{2\pi }\right)^2\left(\frac{k}{k_{\mathrm{end}}}\right)^3.$$
(11)
The power spectrum has a spectral index $`n_{_{\delta \chi }}=3`$. This is the extreme limit of the mechanism used to give a blue tilt in isocurvature inflation scenarios .
### B Parametric resonance
After inflation, the inflaton field $`\varphi `$ oscillates. Strong parametric resonance may now occur, amplifying the initial quantum fluctuation in $`\chi `$ to become a perturbation of the classical field $`\chi `$. The condition for this is
$$q\frac{g^2\mathrm{\Phi }^2}{4m^2}1,$$
(12)
where $`\mathrm{\Phi }`$ is the initial amplitude of the $`\varphi `$-field oscillations.
We model the effect of preheating on the amplitude of the $`\chi `$ field following Ref. as
$$๐ซ_{\delta \chi }=๐ซ_{\delta \chi }|_{\mathrm{end}}e^{2\mu _km\mathrm{\Delta }t},$$
(13)
and the Floquet index $`\mu _k`$ is taken as
$$\mu _k\frac{1}{2\pi }\mathrm{ln}\left(1+2e^{\pi \kappa ^2}\right),$$
(14)
with
$$\kappa ^2\left(\frac{k}{k_{\mathrm{max}}}\right)^2\frac{1}{18\sqrt{q}}\left(\frac{k}{k_{\mathrm{end}}}\right)^2.$$
(15)
For strong coupling ($`q1`$), we have $`\kappa ^21`$ for all modes outside the Hubble scale after inflation ends $`(kk_{\mathrm{end}})`$. Therefore $`\mu _k\mathrm{ln}3/2\pi 0.17`$ is only very weakly dependent on the wavenumber $`k`$. Combining Eqs. (11) and (13) gives
$$๐ซ_{\delta \chi }\frac{1}{\nu }\left(\frac{H_{\mathrm{end}}}{2\pi }\right)^2\left(\frac{k}{k_{\mathrm{end}}}\right)^3e^{2\mu _km\mathrm{\Delta }t}.$$
(16)
### C Change in the curvature perturbation on large scales
In order to quantify the effect parametric growth of the $`\chi `$ field fluctuations during preheating might have upon the standard predictions for the spectrum of density perturbations after inflation, we need to estimate the change in the curvature perturbation $`\zeta `$ on super-horizon scales due to entropy perturbations on large-scales.
The density and pressure perturbations due to first-order perturbations in the inflaton field on large scales (i.e. neglecting spatial gradient terms) are of order $`g^2\varphi ^2\chi \delta \chi `$. Not only are the field perturbations $`\delta \chi `$ strongly suppressed on large scales at the end of inflation \[as shown in our Eq. (11)\] but so is the background field $`\chi `$. We can place an upper bound on the size of the background field by noting that in order to have slow-roll chaotic inflation (dominated by the $`m^2\varphi ^2/2`$ potential) when any given mode $`k`$ which we are interested in crossed outside the horizon, we require $`\chi m/g`$. The large effective mass causes this background field to decay, just like the super-horizon perturbations, and at the end of inflation we require $`\chi m/g(k/k_{\mathrm{end}})^{3/2}`$ when considering preheating in single-field chaotic inflation. Combining this with Eq. (11) we find that the spectrum of density or pressure perturbations due linear perturbations in the $`\chi `$ field has an enormous suppression for $`kk_{\mathrm{end}}`$:
$$๐ซ_{\chi \delta \chi }|_{\mathrm{end}}\sqrt{\frac{4\pi }{3}}\left(\frac{m}{gm_{\mathrm{Pl}}}\right)^3\left(\frac{m_{\mathrm{Pl}}H_{\mathrm{end}}}{2\pi }\right)^2\left(\frac{k}{k_{\mathrm{end}}}\right)^6$$
(17)
Effectively the density and pressure perturbations have no term linear in $`\delta \chi `$, because that term is multiplied by the background field value which is vanishingly small.
By contrast the second-order pressure perturbation is of order $`g^2\varphi ^2\delta \chi ^2`$ where the power spectrum of $`\delta \chi ^2`$ is given by
$$๐ซ_{\delta \chi ^2}\frac{k^3}{2\pi }_0^{k_{\mathrm{cut}}}\frac{๐ซ_{\delta \chi }(\left|๐ค^{}\right|)๐ซ_{\delta \chi }(\left|๐ค๐ค^{}\right|)}{\left|๐ค\right|^3\left|๐ค๐ค^{}\right|^3}d^3๐ค^{}.$$
(18)
We impose the upper limit $`k_{\mathrm{cut}}k_{\mathrm{max}}`$ to eliminate the ultraviolet divergence associated with vacuum state. Substituting in for $`๐ซ_{\delta \chi }`$ from Eq. (11), we can write
$$๐ซ_{\delta \chi ^2}|_{\mathrm{end}}=\frac{8\pi }{9}\left(\frac{m}{gm_{\mathrm{Pl}}}\right)^2\left(\frac{H_{\mathrm{end}}}{2\pi }\right)^4\left(\frac{k_{\mathrm{cut}}}{k_{\mathrm{end}}}\right)^3\left(\frac{k}{k_{\mathrm{end}}}\right)^3,$$
(19)
Noting that $`H_{\mathrm{end}}m`$ and $`k_{\mathrm{cut}}k_{\mathrm{max}}q^{1/4}k_{\mathrm{end}}`$, it is evident that the second-order effect will dominate over the linear term for $`k<g^{1/2}q^{1/3}k_{\mathrm{end}}`$.
The leading-order contributions to the pressure and density perturbations on large scales are thus
$`\delta \rho `$ $`=`$ $`m^2\varphi \delta \varphi +\dot{\varphi }\dot{\delta \varphi }+{\displaystyle \frac{1}{2}}g^2\varphi ^2\delta \chi ^2+{\displaystyle \frac{1}{2}}\dot{\delta \chi }^2,`$ (20)
$`\delta p`$ $`=`$ $`m^2\varphi \delta \varphi +\dot{\varphi }\dot{\delta \varphi }{\displaystyle \frac{1}{2}}g^2\varphi ^2\delta \chi ^2+{\displaystyle \frac{1}{2}}\dot{\delta \chi }^2.`$ (21)
We stress that we will still only consider first-order perturbations in the metric and total density and pressure, but these include terms to second-order in $`\delta \chi `$. From Eqs. (2), (20) and (21) we obtain
$$\delta p_{\mathrm{nad}}=\frac{m^2\varphi \dot{\delta \chi }^2+\ddot{\varphi }g^2\varphi ^2\delta \chi ^2}{3H\dot{\varphi }},$$
(22)
where the long-wavelength solutions for vacuum fluctuations in the $`\varphi `$ field obey the adiabatic condition $`\delta \varphi /\dot{\varphi }=\dot{\delta \varphi }/\ddot{\varphi }`$. Inserted into Eq. (3), this gives the rate of change of $`\zeta `$.
Note that the non-adiabatic pressure will diverge periodically when $`\dot{\varphi }=0`$ as the comoving or uniform density hypersurfaces become ill-defined. Such a phenomenon was noted in the single-field context by Finelli and Brandenberger , who evaded it by instead using Mukhanovโs variable $`u=a\delta \varphi _\psi `$ which renders well-behaved equations. Linear perturbation theory remains valid as there are choices of hypersurface, such as the spatially-flat hypersurfaces, on which the total pressure perturbation remains finite and small. In particular, we can calculate the change in the density perturbation due to the non-adiabatic part of the pressure perturbation on spatially-flat hypersurfaces from Eq. (4), which yields
$$\mathrm{\Delta }\rho _{\mathrm{nad}}=3\delta p_{\mathrm{nad}}H๐t.$$
(23)
Even though $`\delta p_{\mathrm{nad}}`$ contains poles whenever $`\dot{\varphi }=0`$, the integrated effect remains finite whenever the upper and lower limits of the integral are at $`\dot{\varphi }0`$. From this density perturbation calculated in the spatially-flat gauge one can reconstruct the change in the curvature perturbation on uniform density hypersurfaces
$$\mathrm{\Delta }\zeta =H\frac{\mathrm{\Delta }\rho _{\mathrm{nad}}}{\dot{\rho }}.$$
(24)
Substituting in our expression for $`\delta p_{\mathrm{nad}}`$ we obtain
$$\mathrm{\Delta }\zeta =\frac{1}{\dot{\varphi }^2}\left(1+\frac{2m^2\varphi }{3H\dot{\varphi }}\right)g^2\varphi ^2\left|\delta \chi ^2\right|H๐t,$$
(25)
where we have averaged over short timescale oscillations of the $`\chi `$-field fluctuations to write $`\left|\dot{\delta \chi }^2\right|=g^2\varphi ^2\left|\delta \chi ^2\right|`$. To evaluate this we take the usual adiabatic evolution for the background $`\varphi `$ field after the end of inflation
$$\varphi =\mathrm{\Phi }\frac{\mathrm{sin}(m\mathrm{\Delta }t)}{m\mathrm{\Delta }t},$$
(26)
and time-averaged Hubble expansion
$$H=\frac{2m}{3(m\mathrm{\Delta }t+\mathrm{\Theta })},$$
(27)
where $`\mathrm{\Theta }`$ is an integration constant of order unity. The amplitude of the $`\chi `$-field fluctuations also decays proportional to $`1/\mathrm{\Delta }t`$ over a half-oscillation from $`m\mathrm{\Delta }t=n\pi `$ to $`m\mathrm{\Delta }t=(n+1)\pi `$, with the stochastic growth in particle number occurring only when $`\varphi =0`$. Thus evaluating $`\mathrm{\Delta }\zeta `$ over a half-oscillation $`\mathrm{\Delta }t=\pi /m`$ we can write
$$\mathrm{\Delta }\zeta =\frac{2g^2|\delta \chi ^2|x_n^4}{3m^2}_{x_n}^{x_{n+1}}\left(\frac{1}{x+\mathrm{\Theta }}+\frac{s}{s^{}}\right)\frac{s^2}{x^2}๐x,$$
(28)
where $`x=m\mathrm{\Delta }t`$, $`s(x)=\mathrm{sin}x/x`$, $`x_n=n\pi `$ and a dash indicates differentiation with respect to $`x`$. The integral is dominated by the second term in the bracket which has a pole of order 3 when $`s^{}=0`$. Although $`s/s^{}`$ diverges, it yields a finite contribution to the integral which can be evaluated numerically. For $`x_n1`$ the integral is very well approximated by $`24/x_n^4`$, independent of the integration constant $`\mathrm{\Theta }`$.
This expression gives us the rate of change of the curvature perturbation $`\zeta `$ due to the pressure of the field fluctuations $`\delta \chi ^2`$ over each half-oscillation of the inflaton field $`\varphi `$. Approximating the sum over several oscillations as a smooth integral and using Eq. (13) for the growth of the $`\chi `$-field fluctuations during preheating (neglecting the weak $`k`$-dependence of the Floquet index, $`\mu _k`$, on super-horizon scales) we obtain
$$\zeta _{\mathrm{nad}}=\frac{16g^2}{2\pi \mu }\frac{\left|\delta \chi ^2\right|_{\mathrm{end}}}{m^2}e^{2\mu m\mathrm{\Delta }t}.$$
(29)
The statistics of these second-order fluctuations are non-Gaussian, being a $`\chi ^2`$-distribution. Both the mean and the variance of $`\zeta _{\mathrm{nad}}`$ are non-vanishing. The mean value will not contribute to density fluctuations, but rather indicates that the background we are expanding around is unstable as energy is systematically drained from the inflaton field. We are interested in the variance of the curvature perturbation, and in particular the change of the curvature perturbation power spectrum on super-horizon scales which is negligible if the power spectrum of $`\zeta _{\mathrm{nad}}`$ on those scales is much less than that of $`\zeta `$ generated during inflation, the latter being required to be of order $`10^{10}`$ to explain the COBE observations.
To evaluate the power spectrum for $`\zeta _{\mathrm{nad}}`$ we must evaluate the power spectrum of $`\delta \chi ^2`$ which is given by substituting $`๐ซ_{\delta \chi }`$, from Eq. (16), into Eq. (18). This gives
$$๐ซ_{\delta \chi ^2}=\frac{2}{3\nu ^2}\left(\frac{H_{\mathrm{end}}}{2\pi }\right)^4\left(\frac{k_{\mathrm{max}}}{k_{\mathrm{end}}}\right)^3\left(\frac{k}{k_{\mathrm{end}}}\right)^3I(\kappa ,m\mathrm{\Delta }t),$$
(30)
where
$$I(\kappa ,m\mathrm{\Delta }t)\frac{3}{2}_0^{\kappa _{\mathrm{cut}}}๐\kappa ^{}_0^\pi ๐\theta e^{2(\mu _\kappa ^{}+\mu _{\kappa \kappa ^{}})m\mathrm{\Delta }t}\kappa ^2\mathrm{sin}\theta ,$$
(31)
$`\kappa =k/k_{\mathrm{max}}`$ as defined in Eq. (15), and $`\theta `$ is the angle between $`๐ค`$ and $`๐ค^{}`$. Note that at the end of inflation we have $`I(\kappa ,0)=\kappa _{\mathrm{cut}}^31`$, and $`๐ซ_{\delta \chi ^2}k^3`$. This yields
$$๐ซ_{\zeta _{\mathrm{nad}}}\frac{2^{9/2}3}{\pi ^5\mu ^2}\left(\frac{\mathrm{\Phi }}{m_{\mathrm{Pl}}}\right)^2\left(\frac{H_{\mathrm{end}}}{m}\right)^4g^4q^{1/4}\left(\frac{k}{k_{\mathrm{end}}}\right)^3I.$$
(32)
One might have thought that the dominant contribution to $`\zeta _{\mathrm{nad}}`$ on large scales would come from $`\delta \chi `$ fluctuations on those scales, and that is indeed the presumption of the calculation of Bassett et al. . However, in fact the integral is initially dominated by $`k^{}k_{\mathrm{cut}}`$, namely the shortest scales. The reason for this is the steep slope of $`๐ซ_{\delta \chi }`$; were it much shallower (spectral index less than 3/2), then the dominant contribution would come from large scales.
To study the scale dependence of $`I(\kappa ,m\mathrm{\Delta }t)`$ and hence $`๐ซ_{\zeta _{\mathrm{nad}}}`$ at later times, we can expand $`\mu _{\kappa \kappa ^{}}`$ for $`\kappa \kappa ^{}1`$ as
$$\mu _{\kappa \kappa ^{}}=\mu _\kappa ^{}+\frac{2\kappa ^{}\mathrm{cos}\theta }{2+e^{\pi \kappa ^2}}\kappa +๐ช(\kappa ^2).$$
(33)
We can then write the integral in Eq. (31) as
$$I(\kappa ,m\mathrm{\Delta }t)=I_0(m\mathrm{\Delta }t)+๐ช(\kappa ^2),$$
(34)
where first-order terms, $`๐ช(\kappa )`$, vanish by symmetry and
$$I_0(m\mathrm{\Delta }t)=\frac{3}{2}_0^{\kappa _{\mathrm{cut}}}e^{4\mu _\kappa ^{}m\mathrm{\Delta }t}\kappa ^2๐\kappa ^{}.$$
(35)
Thus the scale dependence of $`๐ซ_{\zeta _{\mathrm{nad}}}`$ remains $`k^3`$ on large-scales for which $`\kappa 1`$.
At late times these integrals become dominated by the modes with $`\kappa ^2(m\mathrm{\Delta }t)^1`$ which are preferentially amplified during preheating. These are longer wavelength than $`k_{\mathrm{cut}}`$, but still very short compared to the scales which give rise to large scale structure in the present Universe. From Eq. (14) we have $`\mu _\kappa ^{}\mu _0\kappa ^2/3`$, for $`\kappa ^21`$, where $`\mu _0=(\mathrm{ln}3)/2\pi `$, which gives the asymptotic behaviour at late times
$$I_00.86(m\mathrm{\Delta }t)^{3/2}e^{4\mu _0m\mathrm{\Delta }t}.$$
(36)
Thus although the rate of growth of $`๐ซ_{\zeta _{\mathrm{nad}}}`$ becomes determined by the exponential growth of the long-wavelength modes, the scale dependence on super-horizon scales remains proportional to $`k^3`$ for $`\kappa (m\mathrm{\Delta }t)^{1/2}`$. This ensures that there can be no significant change in the curvature perturbation, $`\zeta `$, on very large scales before back-reaction on smaller scales becomes important and this phase of preheating ends when $`m\mathrm{\Delta }t100`$ .
Numerical evaluation of Eq. (32) confirms our analytical results, as shown in Fig. 1. For $`kk_{\mathrm{max}}`$, the spectral index remains $`k^3`$ during preheating. Observable scales have $`\mathrm{log}_{10}k/k_{\mathrm{max}}20`$.
Our result shows that because of the $`k^3`$ spectrum of $`\delta \chi `$, which leads to a similarly steep spectrum for $`\zeta _{\mathrm{nad}}`$, there is a negligible effect on the large-scale perturbations before the resonance ceases. The suppression of the large-scale perturbations in $`\delta \chi `$, discussed in Refs. , means that large-scale perturbations in $`\delta \chi `$ are completely unimportant. However, it turns out that they donโt give the largest effect, which comes from the short-scale modes which dominate the integral for $`\zeta _{\mathrm{nad}}`$. Nevertheless, even they give a negligible effect, again with a $`k^3`$ spectrum. Indeed, that result with hindsight can be seen as inevitable; it has long been known that local processes conserving energy and momentum cannot generate a tail shallower than $`k^3`$ (with our spectral index convention) to large scales, which is the Fourier equivalent of realizing that in real space there is an upper limit to how far energy can be transported. Any mechanism that relies on short-scale phenomena, rather than acting on pre-existing large-scale perturbations, is doomed to be negligible on large scales.
## IV Conclusions and discussion
As discussed in detail in a companion paper , large-scale curvature perturbations can vary provided there is a significant non-adiabatic pressure perturbation. This is always possible in principle if there is more than one field or fluid, and since preheating usually involves at least one additional field into which the inflaton resonantly decays, such variation is in principle possible.
In this paper we have focussed on the simplest preheating model, as discussed in Ref. . We have identified the non-adiabatic pressure, and shown that the dominant effect comes from second-order perturbations in the preheating field. Further, the effect is dominated by perturbations on short scales, rather than from the resonant amplification of non-adiabatic perturbations on the large astrophysical scales. Nevertheless, we have shown that the contribution has a $`k^3`$ spectrum to large scales, rendering it totally negligible on scales relevant for structure formation in our present Universe by the time backreaction ends the resonance. Amongst models of inflation involving a single-component inflaton field, this model gives the most preheating, and so this negative conclusion will apply to all such models.
Recently Bassett et al. have suggested large effects might be possible in more complicated models. They consider two types of model. In one kind, inflation takes place along a steep-sided valley, which lies mainly along the direction of a field $`\varphi `$ but with a small component along another direction $`\chi `$. In this case, one can simply define the inflaton to be the field evolving along the valley floor, and the second heavy field lies orthogonal to it. Taking that view, there is no reason to expect the preheating of the heavy field to give rise to a bigger effect than in the simpler model considered in this paper.
In the second kind of model, the reheat field is light during inflation, and this corresponds to a two-component inflaton field. As has long been known, there can indeed be a large variation of $`\zeta `$ in this case, which can continue until a thermalized radiation-dominated universe has been established. Indeed, in models where one of the fields survives to the present Universe (for example becoming the cold dark matter), variation in $`\zeta `$ can continue right to the present. This variation is due to the presence on large scales of classical perturbations in both fields (properly thought of as a multi-component inflaton field) generated during inflation, and the effect of these must always be considered in a multi-component inflation model, with or without preheating.
## Acknowledgments
We thank Bruce Bassett, Lev Kofman, Andrei Linde, Roy Maartens and Anupam Mazumdar for useful discussions. DW is supported by the Royal Society. |
no-problem/9912/cond-mat9912325.html | ar5iv | text | # Dynamic critical behaviors of three-dimensional ๐โข๐ models related to superconductors/superfluids
## Abstract
The dynamic critical exponent $`z`$ is determined from numerical simulations for the three-dimensional $`XY`$ model subject to two types of dynamics, i.e., relaxational dynamics and resistively shunted junction (RSJ) dynamics, as well as for two different treatments of the boundary, i.e., periodic boundary condition (PBC) and fluctuating twist boundary condition (FTBC). In case of relaxational dynamics, finite size scaling at the critical temperature gives $`z2`$ for PBC and $`1.5`$ for FTBC, while for RSJ dynamics $`z1.5`$ is obtained in both cases. The results are discussed in the context of superfluid/superconductors and vortex dynamics, and are compared with what have been found for other related models.
A neutral superfluid like <sup>4</sup>He and a superconductor in the limit of large London penetration depth can be characterized by a complex order parameter, and the $`XY`$ model can be viewed as a discretized version of this type of systems . All these systems are expected to belong to the same universality class for the thermodynamic critical properties of the phase transition. An interesting feature of these models is the presence of thermally generated topological defects. In two dimensions (2D) the topological defects take the form of vortices and give rise to the Kosterlitz-Thouless transition . Also in 3D thermally generated vortex loops are present at the transition and it has been argued that the critical properties, both the static and the dynamic, can be associated with these vortex loops . The low-temperature phase in the 3D case consists of closed vortex loops of finite extent whereas in the high-temperature phase the loops can disintegrate .
In the present Letter we investigate the dynamic critical properties for two simple cases when the static properties are given by the 3D $`XY`$ model. One connection between the vortex loops and the dynamical properties is through the $`2\pi `$ phase slip across the system occurring when a vortex loop expands so much that it leaves the system. The connection is most easily phrased in case of a superconductor: the rate at which the vortex loops expands and leaves the system, when driven by a dc current, is proportional to the voltage across the sample. Consequently the vortex loops are connected to the resistance which for a system in equilibrium can be obtained by the fluctuation-dissipation theorem.
Similarly to the static case one expects universality also for the dynamic critical properties in the sense that the critical dynamics does not depend on the details but rather on general characteristics like conservation laws, spatial dimensions, and the static critical properties . The motion of the vortex loops are associated with a conservation law since the vorticity of a fixed area can only change by vortex segments leaving or entering the area. This conservation law restricts the motion of the vortex loops and consequently one may expect that the longest relaxation time of the system can be associated with the vortex loops. In the dynamic universality classes defined by Hohenberg and Halperin the dynamics of a 3D superfluid belongs to model F characterized by the dynamic critical exponent $`z=3/2`$. A model with purely relaxational dynamics on the other hand belongs to model A with $`z2`$ .
The two cases we study are relaxational dynamics which does not have local current conservation, and the resistively shunted junction (RSJ) dynamics which has local current conservation. The Hamiltonian for the 3D $`XY`$ model on an $`L\times L\times L`$ cubic lattice can be expressed as
$$H=J\underset{ij}{}\mathrm{cos}(\theta _i\theta _j๐ซ_{ij}๐ซ),$$
(1)
where the sum is over all nearest-neighbor pairs, $`\theta _i\theta _j๐ซ_{ij}๐ซ`$ is the difference in the spin direction between the neighboring sites $`i`$ and $`j`$, and $`J`$ is the coupling strength. The twist variable $`๐ซ=(\mathrm{\Delta }_x,\mathrm{\Delta }_y,\mathrm{\Delta }_z)`$ is a vector such that $`L\mathrm{\Delta }_x`$ measures the average rotation of the spin direction when going from one boundary surface to the opposite in the $`x`$ direction and similarly for the other directions, $`๐ซ_{ij}`$ is the unit vector from site $`i`$ to the nearest-neighbor site $`j`$ (the lattice spacing is taken to be unity), and $`\theta _i`$ is a phase angle associated with each site $`i`$ measured with respect to the local spin-direction associated with a uniform twist $`๐ซ`$ across the sample. We use the boundary conditions $`\theta _i=\theta _{i+L\widehat{๐ฑ}}=\theta _{i+L\widehat{๐ฒ}}=\theta _{i+L\widehat{๐ณ}}`$.
In the superfluid/superconductor analogy of the $`XY`$ model, $`\theta _i\theta _j๐ซ_{ij}๐ซ`$ is the total (gauge invariant) phase difference of the order parameter and the twist $`๐ซ`$ may be thought of as the contribution to the gauge-invariant phase from a spatially uniform vector potential.
The usual periodic boundary conditions (PBC) for the $`XY`$ model correspond to periodic spin directions and to $`๐ซ=0`$. However this imposes an unphysical restriction on the topological defects (the vortex loops): the original defect-free state is not regained with PBC, when a defect is created in a defect-free state and then annihilated across the boundary . The additional degrees of freedom introduced by $`๐ซ`$, on the other hand, ensure that the energy associated with a given configuration of topological defects is unique . The more physical boundary condition which includes the fluctuations of the twist is termed the fluctuating twist boundary condition (FTBC) .
In the RSJ case the total current $`i_{ij}`$ from $`i`$ to $`j`$ is the sum of the supercurrent, the normal resistive current, and a thermal noise current:
$$i_{ij}=i_c\mathrm{sin}(\theta _i\theta _j๐ซ_{ij}๐ซ)+\frac{V_{ij}}{r}+\eta _{ij},$$
(2)
where $`i_c2eJ/\mathrm{}`$ is the critical current of a single junction, $`V_{ij}`$ is the potential difference across the junction, $`r`$ is the shunt resistance. The current conservation law at each site, together with the Josephson relation $`d(\theta _i\theta _j๐ซ_{ij}๐ซ)/dt=2eV_{ij}/\mathrm{}`$, allows us to write the equations of motion in the form
$$\dot{\theta }_i=\underset{j}{}G_{ij}\underset{k}{}^{^{}}[\mathrm{sin}(\theta _j\theta _k๐ซ_{jk}๐ซ)+\eta _{jk}],$$
(3)
where the primed summation is over six nearest neighbors of $`j`$, $`G_{ij}`$ is the lattice Green function on the cubic lattice. For convenience we from now on use units such that $`i_c=J=r=\mathrm{}/2e=1`$. The remaining dynamical equation for $`๐ซ`$ is obtained from the local current conservation together with the global current conservation condition that no currents pass through the boundaries :
$$\dot{๐ซ}=\mathrm{\Gamma }_\mathrm{\Delta }\frac{H}{๐ซ}+\eta _๐ซ$$
(4)
with $`\mathrm{\Gamma }_\mathrm{\Delta }=1/L^3`$. In order to ensure the correct thermal equilibrium the noise correlations obey the relations: $`\eta _{ij}(t)=0`$, $`\eta _{ij}(t)\eta _{kl}(0)=2T(\delta _{ik}\delta _{jl}\delta _{il}\delta _{jk})\delta (t)`$, and correspondingly for the three components of $`\eta _๐ซ`$: $`\eta _{\mathrm{\Delta }_m}(t)=0`$, $`\eta _{\mathrm{\Delta }_m}(t)\eta _{\mathrm{\Delta }_n}(0)=(2T/L^3)\delta _{mn}\delta (t)`$ with $`m,n=x,y,z`$.
The RSJ equations defined in this way incorporates local current conservation and the boundary conditions are chosen such that the vorticity for each of the six sides of the cubic lattice is zero at any instant and that there is no current flow through them. The RSJ equations are usually phrased in the superconducting language, however, they also apply to a neutral superfluid; the RSJ equations in this case correspond to a constant mass density and local conservation of mass current.
We also note that eq. (4), which is related to the resistance and hence to the vortex loops, has a relaxational form. However, one should note that the relaxational constant $`\mathrm{\Gamma }_\mathrm{\Delta }=L^3`$ is unusual since it vanishes with the size of the system and that eq. (4) by itself can be viewed as a global current conservation law expressing that the average total current of the system vanishes at each instant.
How important is strict local current conservation for the critical dynamics? We investigate this by comparing the results from the RSJ dynamics to relaxational dynamics, where eq. (3) is replaced by the purely relaxational form:
$$\frac{d\theta _i(t)}{dt}=\mathrm{\Gamma }\frac{H}{\theta _i}+\eta _i(t)$$
(5)
with $`\eta _i(t)=0`$ and $`\eta _i(t)\eta _j(0)=2T\delta _{ij}\delta (t)`$ (we have set $`\mathrm{\Gamma }1`$). Thus the dynamics is in this case given by the two relaxational equations (4) and (5). Superficially one might have guessed that this relaxational dynamics should belong to model A. However, as will be shown below, the $`z`$ value obtained from size scaling at $`T_c`$ is not compatible with this expectation. This suggests that the global conservation law reflected in the size dependent relaxation constant in eq. (4) is enough to slow down the critical dynamics.
The dynamical equations are integrated numerically using the second order algorithm in ref. with a discrete time step $`\mathrm{\Delta }t=0.05`$ for RSJ and $`\mathrm{\Delta }t=0.05`$ and $`0.01`$ for relaxational dynamics, using lattice sizes up to $`L=32`$ and $`24`$, respectively.
The resistance $`R`$ is related to the equilibrium fluctuations of $`๐ซ(t)`$ by the fluctuation-dissipation theorem :
$$R=\frac{L^2}{2T}\frac{1}{\mathrm{\Theta }}[\mathrm{\Delta }_m(\mathrm{\Theta })\mathrm{\Delta }_m(0)]^2,$$
where $`\mathrm{\Theta }`$ is a large enough time interval ($`\mathrm{\Theta }=2000`$ in the present simulation). Near the second order phase transition the intensive quantity $`LR(T,L)`$ obeys the scaling relation:
$$LR(T,L)=L^{(z1)}\stackrel{~}{\rho }[L^{1/\nu }(TT_c)].$$
(6)
Thus if we plot the ratio $`\mathrm{ln}[R(T,L)/R(T,L^{})]/\mathrm{ln}(L/L^{})`$ as a function of $`T`$ for different pairs $`(L,L^{})`$ then the curves should cross at ($`T_c`$, $`z`$. The inset in fig. 1 shows the RSJ result for the pairs $`(L,L^{})=`$$`(4,8)`$, $`(8,16)`$, and $`(4,16)`$. The crossing point gives $`T_c=2.20`$, which is very close to the true critical temperature $`T_c2.202`$ for the 3D $`XY`$ model , and $`z=1.46`$. Figure 1 confirms this determination by aid of the full scaling relation eq. (6) using $`T_c`$ and $`z`$ obtained above together with $`\nu =0.67`$ for the 3D $`XY`$ model ($`\nu 0.671\pm 0.001`$ ). As seen a very good scaling collapse is obtained. We have estimated the precision in the determination by treating $`z`$ and $`T_c`$ as free parameters in the full scaling relation with the result $`z=1.46\pm 0.06`$. One reason for treating $`T_c`$ as a free variable is that in principle the finite time step $`\mathrm{\Delta }t`$ in the integration introduces an uncertainty in $`T_c`$ . Figure 2 gives the corresponding result for relaxational dynamics. In this case the integration turns out to be more sensitive to the choice of $`\mathrm{\Delta }t`$. In order to handle this we calculate $`R`$ at $`T_c=2.20`$ where $`RL^z`$ \[see eq. (6)\] with $`\mathrm{\Delta }t=0.05`$ and $`0.01`$, extrapolate linearly to $`\mathrm{\Delta }t=0`$, and obtain $`z1.5`$.
Our conclusion is that RSJ dynamics and relaxational dynamics have the same size scaling of the resistance at the critical temperature and that the $`z`$ value obtained from this scaling is consistent with $`z1.5`$. This implies that the constraint imposed by the local current conservation is less crucial than one might have thought. We suggest the following hand-waving explanation: Equation (5) describes individual spins relaxing towards a state with a given value of $`๐ซ`$. Each fixed configuration of vortex loops correspond to a twist $`๐ซ`$. Since $`๐ซ`$ has a slow relaxation governed by eq. (4) this suggests that the dynamics is compatible with a situation where the change of a vortex loop configuration is slow compared to the relaxation of the individual spins. From this perspective RSJ dynamics and the relaxational dynamics described above are just two alternative ways of imposing a slow dynamics on the vortex loops.
Next we consider the case when $`๐ซ=0`$ which corresponds to the standard PBC imposed on the spins. Relaxational dynamics is in this case given by eq. (5) with $`๐ซ=0`$. This is compatible with the situation when the spins are relaxing directly towards the global ground state with $`๐ซ=0`$. In this case we cannot use eq. (6) to find $`z`$ (because $`๐ซ=0`$). Instead we resort to the following size scaling relation valid at $`T_c`$:
$$G(t)=\frac{1}{L}h(tL^z),$$
(7)
where $`G(t)=F(t)F(0)/L^3`$ is the time-correlation function, with $`F(t)=_{ij_x}\mathrm{sin}(\theta _i\theta _j)`$ and the sum is over all links in one direction. In the superconductor analogy this is related to the supercurrent correlations. Figure 3 shows that a good scaling is obtained for $`z=2`$. From this we conclude that relaxational dynamics with standard PBC has $`z2`$ consistent with the model A universality class. Our suggested interpretation is that, when the average twist $`๐ซ`$ is not one of the dynamical variables, the global current constraint reflected in eq. (4) is no longer present and the relaxational dynamics becomes of standard relaxational type. We compare this to the RSJ dynamics for the same case, i.e., standard PBC for which $`๐ซ=0`$. Figure 4 shows that a good scaling collapse is obtained for $`z=1.5`$, which is the same as for FTBC. This suggests that local current conservation is a sufficient but not a necessary condition for imposing the slow vortex loop dynamics.
The suggestion, that the dynamics of the vortex loops can be associated with an exponent $`z1.5`$, can be further substantiated in the following way: The Hamiltonian for the $`XY`$ model with FTBC is dual to the lattice vortex loop model with PBC. This means that relaxational dynamics for the $`XY`$ model with FTBC corresponds to relaxational dynamics for the lattice vortex loop model with PBC. In the latter model the only degrees of freedom are the vortex loops and it has been shown that $`z1.5`$ is obtained from size scaling of the resistance for this model. From this perspective it is tempting to conclude that $`z1.5`$ can be associated with pure relaxational dynamics for the vortex loops. In addition one may note that an exponent $`z1.5`$ is also consistent with $`z=1.44`$ in ref. obtained from a theoretical treatment of vortex loops.
Finally we note that in ref. the value $`z=1.5\pm 0.5`$ was determined for the RSJ model in the presence of external currents and that in ref. $`z=1.38\pm 0.05`$ was found from simulations of a version of the $`XY`$ model with spin dynamics which is an alternative dynamics consistent with superfluids. For the lattice vortex loop model with Monte Carlo dynamics $`z=1.45\pm 0.05`$ was obtained in ref. and $`z=1.51\pm 0.03`$ in ref. using the same method as in the inset of fig. 1.
This leaves us with the following two main alternatives: $`z`$ determined from size scaling for 3D $`XY`$ model with relaxational dynamics and FTBC, the lattice vortex loop model with relaxational dynamics and PBC, as well as the 3D $`XY`$ model with RSJ dynamics in all cases gives the value $`z=3/2`$ corresponding to model F. This would then be different from the 3D $`XY`$ model with spin dynamics in ref. and the vortex loop prediction in ref. . The other possibility is that all cases correspond to a vortex loop dynamics with a $`z`$ slightly lower than 3/2. Our present precision is not enough to distinguish between these alternatives.
In conclusion we have from simulations determined the dynamic critical exponent $`z`$ by using size scaling at $`T_c`$ for the 3D $`XY`$ model with relaxational and RSJ dynamics. For relaxational dynamics with PBC we obtain $`z2`$ which is consistent with model A dynamics . However we conclude that the relaxational dynamics with FTBC, the lattice vortex loop model with relaxational dynamics and PBC, as well as RSJ dynamics with both PBC and FTBC all have the value $`z1.5`$. We suggest that the reason for this agreement is that all these models effectively corresponds to relaxational dynamics of the vortex loops. Model F corresponds to $`z=3/2`$ which is consistent with our result although the slightly lower value $`z=1.44`$ for vortex loops in ref. is also consistent.
\***
This work was supported by the Swedish Natural Research Council through contract FU 04040-332. |
no-problem/9912/gr-qc9912100.html | ar5iv | text | # 1 slns
## 1 Radially excited gravitating global monopoles
Following the notation of , we put $`\varphi ^a=f(r)\widehat{r}^a`$ for the Higgs field and
$$ds^2=A^2\mu dt^2+\frac{dr^2}{\mu }+r^2d\mathrm{\Omega }^2$$
(1)
for the spherically symmetric line element in Schwarzschild coordinates. The resulting static field equations are (we prefer to use rationalized units putting $`\overline{f}=\sqrt{4\pi }f`$, $`\overline{\eta }=\sqrt{4\pi }\eta `$, $`\lambda =2\pi `$ as compared to $`f`$ and $`\eta `$ from )
$`\overline{f}^{}`$ $`=`$ $`\psi `$ (2)
$`\psi ^{}`$ $`=`$ $`{\displaystyle \frac{\overline{f}}{r^2\mu }}\left[2+{\displaystyle \frac{r^2}{2}}(\overline{f}^2\overline{\eta }^2)\right]\psi \left[{\displaystyle \frac{2}{r}}+r\psi ^2+{\displaystyle \frac{\mu ^{}}{\mu }}\right]`$ (3)
$`\mu ^{}`$ $`=`$ $`{\displaystyle \frac{1\mu }{r}}r\mu \psi ^2{\displaystyle \frac{2\overline{f}^2}{r}}{\displaystyle \frac{r}{4}}\left(\overline{f}^2\overline{\eta }^2\right)^2`$ (4)
$`A^{}`$ $`=`$ $`r\psi ^2A.`$ (5)
Solutions with a regular origin obey the boundary conditions
$$\overline{f}(r)=ar+O(r^3),\psi (r)=a+O(r^2),\mu (r)=1+O(r^2),A(r)=A_0+O(r^2)$$
(6)
and are uniquely specified by the choice of $`a=\psi (0)`$ and $`A_0`$.
The boundary conditions for black holes at the horizon are
$`\overline{f}(r)`$ $`=`$ $`\overline{f}_\mathrm{h}+O(rr_\mathrm{h}),\psi (r)=\psi _\mathrm{h}+O(rr_\mathrm{h}),`$ (7)
$`\mu (r)`$ $`=`$ $`\mu _\mathrm{h}^{}(rr_\mathrm{h})+O((rr_\mathrm{h})^2),A(r)=A_\mathrm{h}+O((rr_\mathrm{h})^2)`$ (8)
with
$`\mu _\mathrm{h}^{}`$ $`=`$ $`{\displaystyle \frac{1}{r_\mathrm{h}}}\left[12\overline{f}_\mathrm{h}^2{\displaystyle \frac{r_\mathrm{h}^2}{4}}\left(\overline{f}_\mathrm{h}^2\overline{\eta }^2\right)^2\right]>0`$ (9)
$`\psi _\mathrm{h}`$ $`=`$ $`{\displaystyle \frac{\overline{f}_\mathrm{h}}{r_\mathrm{h}^2\mu _\mathrm{h}^{}}}\left[2+{\displaystyle \frac{r_\mathrm{h}^2}{2}}\left(\overline{f}_\mathrm{h}^2\overline{\eta }^2\right)\right].`$ (10)
Solutions with any of these boundary conditions stay finite for increasing $`r`$ as long as $`\mu `$ is non-zero. However, generically $`\mu `$ vanishes for some finite value of $`r`$. Depending on whether $`A`$ and $`\psi `$ stay finite or not at the zero of $`\mu `$ the geometrical significance of such points is different. In the first case the solution has a cosmological horizon, in the latter it has an โequatorโ, i.e. a maximum of $`r`$ considered as a metrical function (compare eq.(1)). The boundary conditions at a cosmological horizon are the same as at a black hole horizon given above with the only difference that $`\mu _\mathrm{h}^{}<0`$.
Solutions with a regular origin resp. black hole boundary conditions and a cosmological horizon can be obtained by fine-tuning the parameter $`a`$ resp. $`\overline{f}_\mathrm{h}`$. An important special case is obtained for $`a=0`$ resp. $`\overline{f}_\mathrm{h}=0`$ yielding the de Sitter resp. Schwarzschild-de Sitter solution with a cosmological constant provided by the Higgs potential for $`\overline{f}0`$. The de Sitter (dS) solution is given by
$$\mu _{\mathrm{dS}}(r)=1\frac{r^2}{r_\mathrm{c}^2}\mathrm{with}r_\mathrm{c}=\frac{2\sqrt{3}}{\overline{\eta }^2},$$
(11)
whereas the Schwarzschild-de Sitter (SdS) solution reads
$$\mu _{\mathrm{SdS}}(r)=1\frac{r^2}{r_\mathrm{c}^2}\frac{r_\mathrm{h}}{r}\left(1\frac{r_\mathrm{h}^2}{r_\mathrm{c}^2}\right).$$
(12)
Obviously the SdS solution goes into the dS solution for $`r_\mathrm{h}0`$. The cosmological horizon of the SdS solution turns out to be located at
$$r_+=\frac{r_\mathrm{h}}{2}+\sqrt{r_\mathrm{c}^2\frac{3}{4}r_\mathrm{h}^2}$$
(13)
As was shown in the de Sitter solution has bounded โzero modesโ $`\phi _K`$ solving the linearized eqs.(2) in the dS background for the discrete values
$$\overline{\eta }_K^2=\frac{3}{(K+2)(2K+1)}\mathrm{for}K=0,1,2,..$$
(14)
At the largest one $`\overline{\eta }_0^2=3/2`$ the de Sitter solution bifurcates with the global monopole solutions possessing a themselves a cosmological horizon for $`\overline{\eta }^2>1/2`$ . The corresponding zero mode is given by $`\phi _0=\overline{f}/a`$, where $`a`$ is the parameter characterizing the solution at $`r=0`$ (compare eq.(6)). As was already mentioned above we may look for โradially excitedโ global monopoles bifurcating with the de Sitter solution at the other zero modes $`\phi _K`$ for $`K>0`$. Since the functions $`\phi _K`$ have $`K`$ zeros, we have to look for solutions of eqs.(2) with $`\overline{f}`$โs with the same property and and a cosmological horizon. The results of a numerical investigation for the cases $`K=1,2`$ are summarized in Fig.(2), in which the values of the fine-tuning parameter $`a`$ are plotted as a function of $`\overline{\eta }^2`$. The graphs of the functions $`a(\overline{\eta }^2)`$ look very similar to the one for the case $`K=0`$ given in and approach the latter for $`\overline{\eta }0`$.
There is an important difference in the global behaviour between the fundamental monopole and the radially excited solutions. Whereas the first ones extend to arbitrarily large values of $`r`$ with $`\overline{f}\overline{\eta }`$ and $`\mu 12\overline{\eta }^2`$, the latter ones have an โequatorโ (maximum of $`r`$) just outside their cosmological horizon and then run back to $`r=0`$, where they become singular. The behaviour beyond the equator is quite similar to the one of โhairyโ black holes inside their horizon . Obviously $`\mu `$ cannot tend to $`12\overline{\eta }^2`$ for $`\overline{\eta }^2<1/2`$ without becoming positive again. This, however, could only be achieved with finite $`\psi `$ by further fine-tuning, for which there is no parameter left for given $`\overline{\eta }`$. One might try to also fine-tune $`\overline{\eta }`$ in order find such solutions, but then one would still need another parameter to suppress the divergent mode of the Higgs field for $`r\mathrm{}`$ to enforce $`\overline{f}\overline{\eta }`$.
Since the Schwarzschild coordinates become singular at the equator, where $`r`$ is stationary one has to use a different radial coordinate. A convenient choice is to take the geodesic distance $`s`$ in the radial direction and determine $`r(s)`$ solving
$$\frac{d}{ds}r=\sqrt{|\mu |}.$$
(15)
Fig.(1) shows the functions $`\overline{f}(x)`$, $`\mu (x)`$ and $`r(x)`$ for particular values of $`\overline{\eta }`$, where the coordinate $`x`$ is equal to $`s`$ up to a normalisation factor chosen such that the cosmological horizon where $`\mu `$ vanishes is situated at $`s=1`$. The equator at a second zero of $`\mu `$ is very close to, but slightly outside the horizon. Since $`\mu `$ stays very small between its two zeros we have inserted enlargements in Fig.(1) in order to make this behaviour visible.
## 2 Black holes with global monopole hair
Next we turn to black holes sitting inside global monopoles. Such solutions have been described already by S. Liebling . According to his results there is a 2-parameter family of such solutions, parametrized by the radius $`r_\mathrm{h}`$ of their bh-horizon and $`\overline{\eta }`$. Encouraged by the results of the previous section, we also look for black holes sitting inside radially excited global monopoles. Indeed, such solutions are readily found numerically solving eqs.(2) with bh boundary conditions eq.(7) and analogous ones for a cosmological horizon at some $`r_\mathrm{c}>r_\mathrm{h}`$.
In view of the experience with hairy black holes studied in the literature , we expect a limited domain of existence in the $`\overline{\eta }r_\mathrm{h}`$-plane for such solutions. The numerical analysis shows that there is a qualitative difference between the fundamental solutions (without zeros of the Higgs field) and the radially excited ones. Let us first discuss the fundamental solutions.
As shown in Fig.(3) there three regions in the $`\overline{\eta }r_\mathrm{h}`$-plane. For $`0<\overline{\eta }^2<1/2`$ black holes with arbitrarily large radius $`r_\mathrm{h}`$ seem to exist. In the interval $`1/2<\overline{\eta }^2<1`$ the $`r_\mathrm{h}`$ values are bounded from above by the curve $`r_\mathrm{h}=2/\overline{\eta }^2`$. Approaching this curve from below the solutions bifurcate with the SdS solution (i.e. $`\overline{f}0`$), such that at the same time $`r_\mathrm{h}`$ tends to $`r_\mathrm{c}`$. Comparing with eq.(13) this yields the relation $`r_\mathrm{h}=r_\mathrm{c}/\sqrt{3}=2/\overline{\eta }^2`$.
In the interval $`1<\overline{\eta }^2<3/2`$ the existence domain is bounded by some curve joining the points $`(1,2)`$ and $`(\sqrt{3/2},0)`$ in the $`\overline{\eta }r_\mathrm{h}`$-plane. Approaching this curve the solution again bifurcates with the SdS solution, but this time with $`r_\mathrm{h}<r_+`$. As in the case of the regular solution the bifurcation points are determined by the requirement that a bounded zero mode of the SdS solution exists. The equation to be solved for $`\phi =r\delta \overline{f}`$ is (using rescaled radial variables $`x=r/r_\mathrm{c}`$ and $`x_\mathrm{h}=r_\mathrm{h}/r_\mathrm{c}`$)
$$\frac{d}{dx}(\mu _{\mathrm{SdS}}\frac{d}{dx}\phi )=\left(\frac{2}{x^2}2\frac{6}{\overline{\eta }^2}\frac{x_\mathrm{h}^3x_\mathrm{h}}{x^3}\right)\phi $$
(16)
with bh boundary conditions at $`x=x_\mathrm{h}`$ and $`\mu _{\mathrm{SdS}}`$ as given by eq.(12). The requirement for $`\phi `$ to stay bounded for $`xr_+/rc`$ then determines the boundary curve $`r_\mathrm{h}(\overline{\eta })`$ shown in Fig.(3) joining the points $`P_0`$ and $`Q_0`$.
The situation is quite similar for the radially excited solutions, the only difference being the absence of the unbounded piece of the domain. The existence domains for solutions with one and two zeros of $`\overline{f}`$ are shown in Fig.(3). The intersection points $`P_i,i=0,1,2`$, etc. of the respective boundary curves can be determined analytically as follows.
From what was said above they are determined by the condition that $`r_\mathrm{h}r_+`$ as one approaches them along the curve determined by the zero mode condition. In order to study this limit we assume $`x_\mathrm{h}=1/\sqrt{3}ฯต`$ with $`ฯต<<1`$ and introduce a rescaled variable $`y`$ defined by $`x=\frac{1}{\sqrt{3}}+ฯตy`$. Keeping only the leading terms as $`ฯต0`$ we obtain from eq.(16)
$$\frac{d}{dy}((1y^2)\frac{d}{dy}\phi )=2(1\frac{1}{\overline{\eta }^2})\phi .$$
(17)
When $`x`$ varies from $`x_\mathrm{h}`$ to $`r_+/rc`$ the variable $`y`$ runs from $`1`$ to $`+1`$. The bounded solutions on this interval are the Legendre polynomials $`P_K(y)`$ obtained for
$$\overline{\eta }_K^2=\frac{2}{K(K+1)+2}K=0,1,\mathrm{}$$
(18)
The first three values are $`\overline{\eta }^2=1,1/2,1/4`$ with corresponding values $`r_\mathrm{h}=2,4,8`$ matching exactly the points $`P_0,P_1`$ and $`P_2`$ of Fig.(3).
## 3 Summary
In section 1 we put forward some arguments for the existence of radial excitations of the static gravitational global monopoles possessing a de Sitter like cosmological horizon studied in . We present some numerical evidence for the existence of solutions with up to two zeros of the Higgs field. In section 2 we study corresponding static hairy black hole solutions, representing black holes sitting inside a global monopole core. In particular, we determine their existence domains as a function of their horizon radius $`r_\mathrm{h}`$.
In a forthcoming publication we shall consider generalisations of these results to gravitational monopoles with a dynamical YM field.
## 4 Acknowledgments
I am indebted to P. Breitenlohner and P. Forgรกcs for frequent discussions on the subject. |
no-problem/9912/hep-ph9912427.html | ar5iv | text | # Neutrino oscillations and neutrinoless double-๐ฝ decay
## Abstract
We consider the scheme with mixing of three neutrinos and a mass hierarchy. We shown that, under the natural assumptions that massive neutrinos are Majorana particles and there are no unlikely fine-tuned cancellations among the contributions of the different neutrino masses, the results of solar neutrino experiments imply a lower bound for the effective Majorana mass in neutrinoless double-$`\beta `$ decay. We also discuss briefly neutrinoless double-$`\beta `$ decay in schemes with mixing of four neutrinos. We show that one of them is favored by the data.
preprint: DFTT 71/99 hep-ph/9912427
Neutrino oscillations have been observed in solar and atmospheric neutrino experiments. The corresponding neutrino mass-squared differences are
$$\mathrm{\Delta }m_{\mathrm{sun}}^210^610^4\mathrm{eV}^2\text{(MSW)},$$
(1)
in the case of MSW transitions, or
$$\mathrm{\Delta }m_{\mathrm{sun}}^210^{11}10^{10}\mathrm{eV}^2\text{(VO)},$$
(2)
in the case of vacuum oscillations, and
$$\mathrm{\Delta }m_{\mathrm{atm}}^210^310^2\mathrm{eV}^2.$$
(3)
These values of the neutrino mass-squared differences and the mixing required for the observed solar and atmospheric oscillations are compatible with the simplest and most natural scheme with three-neutrino mixing and a mass hierarchy:
$$\underset{\mathrm{\Delta }m_{\mathrm{atm}}^2}{\underset{}{\stackrel{\mathrm{\Delta }m_{\mathrm{sun}}^2}{\stackrel{}{m_1m_2}}m_3}}.$$
(4)
This scheme is predicted by the see-saw mechanism , which predicts also that the three light massive neutrinos are Majorana particles. In this case neutrinoless double-$`\beta `$ decay ($`\beta \beta _{0\nu }`$) is possible and its matrix element is proportional to the effective Majorana mass
$$|m|=\left|\underset{k}{}U_{ek}^2m_k\right|,$$
(5)
where $`U`$ is the neutrino mixing matrix and the sum is over the contributions of all the mass eigenstate neutrinos $`\nu _k`$ ($`k=1,2,3`$).
In principle the effective Majorana mass (5) can be vanishingly small because of cancellations among the contributions of the different mass eigenstates. However, since the neutrino masses and the elements of the neutrino mixing matrix are independent quantities, if there is a hierarchy of neutrino masses such a cancellation would be the result of an unlikely fine-tuning, unless some unknown symmetry is at work. Here we consider the possibility that no such symmetry exist and *no unlikely fine-tuning operates to suppress the effective Majorana mass* (5) . In this case we have
$$|m|\underset{k}{\mathrm{max}}|m|_k,$$
(6)
where $`|m|_k`$ is the absolute value of the contribution of the massive neutrino $`\nu _k`$ to $`|m|`$:
$$|m|_k|U_{ek}|^2m_k.$$
(7)
In the following we will estimate the value of $`|m|`$ using the largest $`|m|_k`$ obtained from the results of neutrino oscillation experiments.
Let us consider first $`|m|_3`$, which, taking into account that in the three-neutrino scheme under consideration $`m_3\sqrt{\mathrm{\Delta }m_{31}^2}=\sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}`$, is given by
$$|m|_3|U_{e3}|^2\sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}.$$
(8)
Since the results of the CHOOZ experiment and the Super-Kamiokande atmospheric neutrino data imply that $`|U_{e3}|^2`$ is small ($`|U_{e3}|^25\times 10^2`$ ), the contribution $`|m|_3`$ to the effective Majorana mass in $`\beta \beta _{0\nu }`$ decay is very small . The upper bounds for $`|m|_3`$ as functions of $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ obtained from the present experimental data are shown in Fig. Neutrino oscillations and neutrinoless double-$`\beta `$ decay. The dash-dotted upper limit has been obtained using the 90% CL exclusion curve of the CHOOZ experiment (taking into account that $`|U_{e3}|^2=\frac{1}{2}\left(1\sqrt{1\mathrm{sin}^22\vartheta _{\mathrm{CHOOZ}}}\right)`$, where $`\vartheta _{\mathrm{CHOOZ}}`$ is the two-neutrino mixing angle measured in the CHOOZ experiment), the dashed upper bound has been obtained using the results presented in Ref. of the analysis of Super-Kamiokande atmospheric neutrino data (at 90% CL) and the solid upper limit, that surrounds the shadowed allowed region, has been obtained using the results presented in Ref. of the combined analysis of the CHOOZ and Super-Kamiokande data (at 90% CL). The dotted line in Fig. Neutrino oscillations and neutrinoless double-$`\beta `$ decay represents the unitarity limit $`|m|_3\sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}`$. One can see from Fig. Neutrino oscillations and neutrinoless double-$`\beta `$ decay that the results of the CHOOZ experiment imply that $`|m|_32.7\times 10^2\mathrm{eV}`$, the results of the Super-Kamiokande experiment imply that $`|m|_33.8\times 10^2\mathrm{eV}`$, and the combination of the results of the two experiments drastically lowers the upper bound to
$$|m|_32.5\times 10^3\mathrm{eV}.$$
(9)
Since there is no lower bound for $`|U_{e3}|^2`$ from experimental data, $`|m|_3`$ could be much smaller than the upper bound in Eq. (9).
Hence, the largest contribution to $`|m|`$ could come from $`|m|_2|U_{e2}|^2m_2`$. In the scheme (4) $`m_2\sqrt{\mathrm{\Delta }m_{21}^2}=\sqrt{\mathrm{\Delta }m_{\mathrm{sun}}^2}`$ and, since $`|U_{e3}|^2`$ is very small, $`|U_{e2}|^2\frac{1}{2}\left(1\sqrt{1\mathrm{sin}^22\vartheta _{\mathrm{sun}}}\right)`$ , where $`\vartheta _{\mathrm{sun}}`$ is the two-neutrino mixing angle used in the analysis of solar neutrino data. Therefore, $`|m|_2`$ is given by
$$|m|_2\frac{1}{2}\left(1\sqrt{1\mathrm{sin}^22\vartheta _{\mathrm{sun}}}\right)\sqrt{\mathrm{\Delta }m_{\mathrm{sun}}^2}.$$
(10)
Solar neutrino data imply bounds for $`\mathrm{sin}^22\vartheta _{\mathrm{sun}}`$ and $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$. In particular the large mixing angle MSW solution (LMA) of the solar neutrino problem requires a relatively large $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ and a mixing angle $`\vartheta _{\mathrm{sun}}`$ close to maximal:
$`1.2\times 10^{-5}\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\mathrm{sun}}^2\lesssim 3.1\times 10^{-4}\mathrm{eV}^2,`$ (11)
$`0.58\lesssim \mathrm{sin}^22\vartheta _{\mathrm{sun}}\leq 1,`$ (12)
at 99% CL. The corresponding allowed range for $`|m|_2`$ as a function of $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ is shown in Fig. 2 (the shadowed region limited by the solid line). The dashed line in Fig. 2 represents the unitarity limit $`|m|_2\leq \sqrt{\mathrm{\Delta }m_{\mathrm{sun}}^2}`$. From Fig. 2 one can see that the LMA solution of the solar neutrino problem implies that
$$7.4\times 10^{-4}\mathrm{eV}\lesssim |m|_2\lesssim 6.0\times 10^{-3}\mathrm{eV}.$$
(13)
Assuming the absence of fine-tuned cancellations among the contributions of the three neutrino masses to the effective Majorana mass, if $`|U_{e3}|^2`$ is very small and $`|m|_3\ll |m|_2`$, from Eqs. (6) and (13) we obtain
$$7\times 10^{-4}\mathrm{eV}\lesssim |m|\lesssim 6\times 10^{-3}\mathrm{eV}.$$
(14)
Hence, assuming the absence of an unlikely fine-tuned suppression of $`|m|`$, in the case of the LMA solution of the solar neutrino problem we have obtained a *lower bound* of about $`7\times 10^{-4}\mathrm{eV}`$ for the effective Majorana mass in $`\beta \beta _{0\nu }`$ decay.
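The range (13) can be checked approximately by scanning Eq. (10) over the rectangular bounds (11)–(12). The sketch below is only indicative: the quoted range (13) comes from the correlated 99% CL allowed region, so this corner scan brackets it rather than reproducing it exactly.

```python
import math

def m_eff_2(sin2_2theta, dm2_sun):
    """Eq. (10): |m|_2 ~ (1/2)(1 - sqrt(1 - sin^2 2theta)) sqrt(dm2_sun)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - sin2_2theta)) * math.sqrt(dm2_sun)

# Corners of the rectangular LMA bounds, Eqs. (11)-(12):
low = m_eff_2(0.58, 1.2e-5)    # smallest mixing, smallest dm2
high = m_eff_2(1.00, 3.1e-4)   # maximal mixing, largest dm2
print(f"{low:.1e} eV <~ |m|_2 <~ {high:.1e} eV")   # ~6e-4 ... ~9e-3 eV
```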
Also the small mixing angle MSW (SMA) and the vacuum oscillation (VO) solutions of the solar neutrino problem imply allowed ranges for $`|m|_2`$, but their values are much smaller than in the case of the LMA solution. Using the 99% CL allowed regions obtained from the analysis of the total rates measured in solar neutrino experiments we have
$`5\times 10^{-7}\mathrm{eV}\lesssim |m|_2\lesssim 10^{-5}\mathrm{eV}`$ $`\text{(SMA)},`$ (15)
$`10^{-6}\mathrm{eV}\lesssim |m|_2\lesssim 2\times 10^{-5}\mathrm{eV}`$ $`\text{(VO)}.`$ (16)
If future $`\beta \beta _{0\nu }`$ experiments find $`|m|`$ in the range shown in Fig. 2 and future long-baseline experiments obtain a stronger upper bound for $`|U_{e3}|^2`$, it would mean that $`|m|_2`$ gives the largest contribution to the effective Majorana mass, favoring the LMA solution of the solar neutrino problem. On the other hand, if future $`\beta \beta _{0\nu }`$ experiments find $`|m|`$ in the range shown in Fig. 2 and the SMA or VO solutions of the solar neutrino problem are proved correct by future solar neutrino experiments, it would mean that $`|m|_3`$ gives the largest contribution to the effective Majorana mass and there is a lower bound for the value of $`|U_{e3}|^2`$.
Finally, let us consider briefly the two four-neutrino mixing schemes compatible with all neutrino oscillation data, including the indications in favor of $`\nu _\mu \to \nu _e`$ oscillations found in the short-baseline (SBL) LSND experiment:
(A) $`\underbrace{\overbrace{m_1<m_2}^{\mathrm{\Delta }m_{\mathrm{atm}}^2}<\overbrace{m_3<m_4}^{\mathrm{\Delta }m_{\mathrm{sun}}^2}}_{\mathrm{\Delta }m_{\mathrm{SBL}}^2},`$ (17)
(B) $`\underbrace{\overbrace{m_1<m_2}^{\mathrm{\Delta }m_{\mathrm{sun}}^2}<\overbrace{m_3<m_4}^{\mathrm{\Delta }m_{\mathrm{atm}}^2}}_{\mathrm{\Delta }m_{\mathrm{SBL}}^2}.`$ (18)
Since the mixing of $`\nu _e`$ with the two massive neutrinos whose mass-squared difference generates atmospheric neutrino oscillations is very small, the contribution of the two "heavy" mass eigenstates $`\nu _3`$ and $`\nu _4`$ to the effective Majorana mass (5) is large in scheme A and very small in scheme B. Hence, the effective Majorana mass is expected to be relatively large in scheme A and strongly suppressed in scheme B. In particular, in the scheme A the SMA solution of the solar neutrino problem implies a value of $`|m|`$ larger than the present upper bound obtained in $`\beta \beta _{0\nu }`$ decay experiments and is, therefore, disfavored. Furthermore, since the measured abundances of primordial elements produced in Big-Bang Nucleosynthesis are compatible only with the SMA solution of the solar neutrino problem, we conclude that the scheme A is disfavored by the present experimental data and *there is only one four-neutrino mixing scheme supported by all data*: scheme B. |
no-problem/9912/cond-mat9912480.html | ar5iv | text | # Systematic Study of Magnetic Interactions in Insulating Cuprates
## Abstract
The magnetic interactions in one-dimensional, two-dimensional (2D) and ladder cuprates are evaluated systematically by using small Cu-O clusters. We find that the superexchange interaction $`J`$ between nearest neighbor Cu spins strongly depends on the Cu-O structure through the Madelung potential, and that in 2D and ladder cuprates there is a four-spin interaction $`J_{\mathrm{cyc}}`$ with a magnitude of about 10% of $`J`$. We show that $`J_{\mathrm{cyc}}`$ has a strong influence on the magnetic excitation in the high-energy region of 2D cuprates.
A variety of insulating cuprates affords us an opportunity to study the magnetic properties of low-dimensional systems. Recent experiments for insulating cuprates have revealed interesting characteristics of magnetic interactions: The superexchange interaction between nearest neighbor Cu spins $`J`$ remarkably depends on Cu-O network structure, and additional interactions such as a four-spin (4S) interaction are important for ladder and two-dimensional (2D) cuprates. These characteristics indicate the necessity to establish proper magnetic descriptions for the cuprates. In this paper we perform a systematic study of magnetic interactions for one-dimensional (1D), 2D, and ladder cuprates theoretically.
A starting model to describe the electronic states of cuprates is the $`d`$-$`p`$ model, in which hopping integrals between Cu3$`d`$ and O2$`p`$ orbitals ($`T_{pd}`$) and between O2$`p`$ orbitals ($`T_{pp}`$), an energy-level separation between the Cu3$`d`$ and O2$`p`$ orbitals ($`\mathrm{\Delta }`$), and Coulomb interactions on Cu and O sites are taken into account. $`T_{pd}`$ and $`T_{pp}`$ are obtained by considering not only the bond length dependence but also the effect of the Madelung potential around Cu and O ions. We find that the potential enhances the magnitudes of $`T_{pd}`$ and $`T_{pp}`$ in the 1D cuprates as compared with those in the 2D ones. In the two-leg ladder compounds such as SrCu<sub>2</sub>O<sub>3</sub>, $`T_{pp}`$ along the leg of the ladder is enhanced by the Madelung potential due to adjacent two-leg ladders. These enhancements play an important role in the dependence of $`J`$ on the dimensionality. The $`\mathrm{\Delta }`$ is determined from the difference in the Madelung potential between Cu and O sites.
The magnetic interactions are evaluated by mapping the lowest several eigenstates of small clusters (Cu<sub>2</sub>O<sub>7</sub>, Cu<sub>4</sub>O<sub>12</sub>, and Cu<sub>6</sub>O<sub>17</sub>) for the $`d`$-$`p`$ model onto those of the corresponding Heisenberg-type model. For 2D systems, we take into account not only $`J`$, but also a diagonal interaction $`J_{\mathrm{diag}}`$ and the 4S interaction $`J_{\mathrm{cyc}}`$ in the model: $`H=J\sum _{i,j}\mathbf{S}_i\cdot \mathbf{S}_j+J_{\mathrm{diag}}\sum _{i,j}\mathbf{S}_i\cdot \mathbf{S}_j+J_{\mathrm{cyc}}\sum _{\mathrm{plaquette}}(P_{ijkl}+P_{ijkl}^{-1})`$, where $`\mathbf{S}_i`$ is a spin operator at site $`i`$, the first and second sums run over nearest-neighbor and diagonal pairs, respectively, and $`J_{\mathrm{cyc}}`$ is defined as the coefficient of the 4S cyclic permutation operators $`P_{ijkl}`$ and $`P_{ijkl}^{-1}`$, which can be rewritten by using the two-spin interaction $`(\mathbf{S}_i\cdot \mathbf{S}_j)`$ and the four-spin interactions $`(\mathbf{S}_i\cdot \mathbf{S}_j)(\mathbf{S}_k\cdot \mathbf{S}_l)`$. For ladder systems, we distinguish between the nearest neighbor interactions along the leg ($`J_{\mathrm{leg}}`$) and along the rung ($`J_{\mathrm{rung}}`$) of the ladder.
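To make the operator content of this Hamiltonian concrete, the sketch below builds the spin-1/2 Heisenberg model with the ring-exchange term $`P_{ijkl}+P_{ijkl}^{-1}`$ on a single plaquette and diagonalizes it, using the 2D coupling values quoted later in the text ($`J`$ = 0.146 eV, $`J_{\mathrm{cyc}}`$ = 0.011 eV). This is only an illustration of the spin model itself, not of the actual fitting procedure (which diagonalizes the multi-band $`d`$-$`p`$ clusters and maps their spectra onto this form); the site ordering and other implementation details are our own choices.

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def site_op(op, i, n=4):
    """Embed a single-site operator at site i of an n-site cluster."""
    mats = [op if k == i else id2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def SdotS(i, j, n=4):
    return sum(site_op(s, i, n) @ site_op(s, j, n) for s in (sx, sy, sz))

# Cyclic permutation P of the four spins around the plaquette; since
# P + P^(-1) = P + P.T enters the Hamiltonian, the direction of the
# cyclic relabeling is immaterial.
n, dim = 4, 16
P = np.zeros((dim, dim))
for state in range(dim):
    bits = [(state >> k) & 1 for k in range(n)]
    shifted = bits[-1:] + bits[:-1]                # relabel sites cyclically
    P[sum(b << k for k, b in enumerate(shifted)), state] = 1.0

J, Jcyc = 0.146, 0.011                             # eV, 2D values (Table 1)
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]           # plaquette edges
H = J * sum(SdotS(i, j) for i, j in bonds) + Jcyc * (P + P.T)

print(np.round(np.linalg.eigvalsh(H), 4))          # low-lying spectrum
```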
The calculated results are summarized in Table 1, where we take La<sub>2</sub>CuO<sub>4</sub>, SrCu<sub>2</sub>O<sub>3</sub>, and Sr<sub>2</sub>CuO<sub>3</sub> as typical systems of 2D, ladder and 1D cuprates, respectively (see Refs. 1 and 5 for the parameters used in the calculations).
We find that $`J`$ in the 1D cuprate is larger than that in the 2D one. This is caused by the enhancement of the hopping integrals in 1D cuprates as mentioned above. For 2D cuprates, we obtain $`J`$ to be $`\sim `$0.15 eV, consistent with the experimental values. In addition, we find that $`J_{\mathrm{cyc}}`$ is 7 % of $`J`$, while $`J_{\mathrm{diag}}`$ is zero. These results are consistent with a previous cluster calculation, and a recent analysis of a multimagnon spectrum. For ladder cuprates, we obtain $`J_{\mathrm{leg}}/J_{\mathrm{rung}}`$=1.3. The enhancement of $`T_{pp}`$ along the leg of the ladder is the origin of the relation $`J_{\mathrm{leg}}`$$`>`$$`J_{\mathrm{rung}}`$. $`J_{\mathrm{cyc}}`$ is 10 $`\%`$ of $`J_{\mathrm{leg}}`$. Note that the ratio $`J_{\mathrm{leg}}/J_{\mathrm{rung}}`$ is smaller than that believed so far, i.e., $`J_{\mathrm{leg}}/J_{\mathrm{rung}}\sim 2`$. However, the Heisenberg ladder with the present values of $`J_{\mathrm{leg}}`$, $`J_{\mathrm{rung}}`$, $`J_{\mathrm{cyc}}`$ and $`J_{\mathrm{diag}}`$ reproduces very well the experimental results of the temperature dependence of the magnetic susceptibility (not shown here). Therefore, we consider the values shown in Table 1 to be reasonable. Here we would like to emphasize that $`J_{\mathrm{cyc}}`$ plays a crucial role in obtaining the good agreement between the experimental and theoretical magnetic susceptibilities.
Next, in order to examine the effect of $`J_{\mathrm{cyc}}`$ on the magnetic excitation, we calculate the dynamical spin-correlation function $`S(\mathbf{q},\omega )`$ for 2D cuprates. Figure 1 shows the dispersion and the intensity of $`S(\mathbf{q},\omega )`$ for a 4$`\times `$4 Heisenberg model with $`J`$=0.146 eV and $`J_{\mathrm{cyc}}`$=0.011 eV. For comparison, the results for $`J_{\mathrm{cyc}}`$=0 are also shown. We find that the intensity is not sensitive to $`J_{\mathrm{cyc}}`$, while the dispersion is strongly suppressed in the high-energy region. In particular, it is worth noting that $`\omega (\mathbf{q})`$ at $`\mathbf{q}`$=($`\pi `$/2,$`\pi `$/2) becomes smaller than that at $`\mathbf{q}`$=($`\pi `$,0). This is in contrast with the case of $`J_{\mathrm{cyc}}`$=0, in which the magnetic zone boundary ($`\mathbf{q}`$=($`\pi `$,0)-($`\pi `$/2,$`\pi `$/2)) has a flat dispersion. Therefore, it is desirable that inelastic neutron-scattering experiments in the wide energy region be performed to verify the role of $`J_{\mathrm{cyc}}`$, that is, the suppression of the dispersion at ($`\pi `$/2,$`\pi `$/2).
In summary, we have evaluated the magnetic interactions in various cuprates systematically. We have shown that an ionic nature inherent in insulating cuprates is important for the material dependence of $`J`$. We found that $`J_{\mathrm{cyc}}`$ is $`\sim `$10 % of $`J`$, and greatly influences the magnetic excitation spectra in 2D cuprates.
This work was supported by a Grant-in-Aid for Scientific Research on Priority Areas from the Ministry of Education, Science, Sports and Culture of Japan, CREST and NEDO. The parts of the numerical calculation were performed in the Supercomputer Center in ISSP, University of Tokyo, and the supercomputing facilities in IMR, Tohoku University. |
no-problem/9912/astro-ph9912013.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Clusters of galaxies are the largest virialised structures in the Universe, evolving rapidly at recent times because in hierarchical cosmologies big objects form last. Even at moderate redshifts the number of large dark matter halos in a cold dark matter Universe with a significant, positive cosmological constant is higher than in a standard cold dark matter Universe. It is precisely because both the number density and size of large dark matter halos evolve at different rates in popular cosmological models that observations of galaxy clusters provide an important discriminator between rival cosmologies.
The simulations we have carried out follow 2 million gas and 2 million dark matter particles in a box of side $`100Mpc`$. We have performed both a SCDM and a $`\mathrm{\Lambda }`$CDM simulation with the parameters: $`\mathrm{\Omega }=1.0`$, $`\mathrm{\Lambda }=0.0`$, h=0.5, $`\sigma _8=0.6`$ for the former and $`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$, h=0.7, $`\sigma _8=0.9`$ for the latter. The baryon fraction was set from Big Bang nucleosynthesis constraints, $`\mathrm{\Omega }_bh^2=0.015`$ and we have assumed an unevolving gas metallicity of 0.3 times the solar value. These parameters produce a gas mass per particle of $`2\times 10^9\mathrm{M}_{\odot }`$.
These simulations produce a set of galaxies that fit the local K-band number counts. The brightest cluster galaxies contained within the largest halos are not excessively luminous for a volume of this size, unlike those found in previous work and presumably also in related studies (although the latter do not state a central galaxy mass or galaxy luminosity). The fraction of the baryonic material that cools into galaxies within the virial radius of the large halos in our simulation is typically around 20 percent, close to the observed baryonic fraction in cold gas and stars. This is much less than the unphysically high value of 40 percent reported previously.
## 2 Results
For each of the 20 largest clusters from each simulation we follow earlier work in using the following estimator for the bolometric X-ray luminosity of a cluster,
$$L_X=4\times 10^{32}\sum _i\rho _iT_i^{\frac{1}{2}}\mathrm{erg}\mathrm{s}^{-1}$$
(1)
where the sum is over all the gas particles with temperatures above $`12000K`$ within the specified radius. Temperatures are in Kelvin and densities are relative to the mean gas density in the box. We plot these bolometric luminosities as a function of radius for each of our relaxed clusters in figure 1. For the simulation without cooling the clusters are several times more luminous than those from the cooling run. This contradicts previous results, which all found the X-ray luminosity increased if cooling was turned on. The cooling clusters are less luminous than their counterparts in the simulation without cooling because they have lower central temperatures and similar central densities. Most of the emission from the non-cooling clusters comes from the central regions, with little subsequent rise in the bolometric luminosity beyond 0.3 times the virial radius, whereas for the majority of the cooling clusters the bolometric luminosity continues to rise out to the virial radius.
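In code, the estimator (1), together with an emission-weighted temperature, might look as follows for numpy arrays of particle densities and temperatures. The specific weighting $`w_i=\rho _iT_i^{1/2}`$ used for the emission-weighted temperature is our assumption (it matches the bolometric emissivity entering Eq. (1)); the text does not spell it out.

```python
import numpy as np

def bolometric_luminosity(rho, T, T_floor=1.2e4):
    """Eq. (1): L_X = 4e32 * sum_i rho_i * sqrt(T_i)  [erg/s],
    summed over gas particles hotter than 12000 K inside the chosen
    radius.  rho is in units of the mean gas density of the box."""
    hot = T > T_floor
    return 4e32 * np.sum(rho[hot] * np.sqrt(T[hot]))

def emission_weighted_temperature(rho, T, T_floor=1.2e4):
    # Assumed weighting w_i = rho_i * sqrt(T_i), i.e. the same
    # bolometric emissivity that enters Eq. (1).
    hot = T > T_floor
    w = rho[hot] * np.sqrt(T[hot])
    return np.sum(w * T[hot]) / np.sum(w)
```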
There has been much debate in the literature centering on the X-ray cluster $`L_X`$ versus $`T`$ correlation. The emission weighted mean temperature in keV is plotted against the bolometric luminosity within the virial radius for all our clusters in figure 2. The filled symbols represent the relaxed clusters and the open symbols denote those clusters that show significant substructure. Clearly the simulation without cooling produces brighter clusters at the same temperature. All 3 sets of objects display an $`L_X`$-$`T`$ relation although there are insufficient numbers to tie the trend down very tightly. Also plotted in figure 2 are the observational data. Our clusters are smaller and cooler because they are not very massive (due to our relatively small computational volume) but span a reasonable range of luminosities and temperatures.
## 3 Discussion
Implementing cooling clearly has a dramatic effect on the X-ray properties of galaxy clusters. Without cooling our clusters closely resemble those found by previous authors. These clusters appear to have remarkably similar radial densities and bolometric X-ray luminosity profiles, especially when those with significant substructure are removed.
With cooling implemented the cluster bolometric X-ray luminosity profiles span a broader range. The formation of a central galaxy within each halo acts to steepen the dark matter profile, supporting the conclusion of the lensing studies that the underlying potential that forms the lens only has a small core. For the largest cluster, a significant amount of baryonic material has cooled and built up a large central galaxy. This localised mass deepens the potential well and contains hot gas with a steeply rising density ($`\rho \propto r^{-2.75}`$ in the inner regions). For this cluster around 80 percent of the bolometric X-ray emission comes from the galactic region and this must therefore be viewed as a lower limit as the central emission is unresolved. Such a large central spike to the X-ray emission is already only weakly consistent with the latest observational data. For the remaining 19 clusters the central galaxy is not so dominant and a shallower central potential well is formed. In these cases the slope of the central hot gas density is $`\rho \propto r^{-0.5}`$ and the total X-ray emission is well resolved. In principle, the presence of a large galaxy could resolve the problem of the slope of the X-ray luminosity - temperature relation. In large clusters, large central galaxies are more likely to be present and such a galaxy deepens the local potential well, boosting the emission above the theoretically expected $`L_X\propto T^2`$ regression line. Getting a reasonable amount of material to cool into the central galaxy is seen to be of vital importance.
## 4 Conclusions
We have performed two N-body plus hydrodynamics simulations of structure formation within a volume of side $`100Mpc`$, including the effects of radiative cooling but neglecting star formation and feedback. By repeating one of the simulations without radiative cooling of the gas we can both compare to previous work and study the changes caused by the cooling in detail. A summary of our conclusions follows.
(a) The bolometric luminosity for the clusters with radiative cooling is around five times lower than for matching clusters without it. Except for the largest cluster, where the massive central galaxy produces a deep potential well, the X-ray luminosity profile is less centrally concentrated than in the non-cooling case, with a greater contribution coming from larger radii. This effect assists in convergence as we are less dependent upon the very centre of the cluster profile.
(b) The spread of the X-ray luminosity - temperature relation is well reproduced by our clusters. Our non-cooling clusters lie close to the regression line suggested previously and have a similar slope ($`\rho \propto r^{-2}`$). We suggest that the increasing dominance of a large central galaxy on the local potential may produce the luminosity excess that drives the observed X-ray luminosity - temperature relation away from the theoretically predicted slope. |
no-problem/9912/cond-mat9912257.html | ar5iv | text | # Kinetic glass transition
## 1 Introduction
A long debated problem in glass physics concerns the nature of the dynamical ergodicity breaking and its relation with the existence of an underlying equilibrium phase transition. In mode-coupling theory the glass transition appears as a purely dynamic effect due to an instability of the equation governing the time correlation of density fluctuations. In particular, mean-field disordered models of structural glasses show that glassy features are associated with a rugged free energy landscape and that the origin of the dynamical transition is the existence of a large number of metastable states which trap the system for an infinite time. On the other hand, the lifetime of metastable states in finite dimensional short range models is finite, since it is always possible to nucleate, by a thermally activated process, a droplet of the stable phase. Therefore the dynamical transition appears as an artifact of the mean-field approximation, and in real glasses this transition would be just a finite-time kinetic effect, at least on time scales much smaller than the lifetime of metastable states. Recently, the close connection between the non-trivial structure of Gibbs equilibrium states and the appearance of a persistent glassy dynamics has been established for a certain class of systems. However, since the dynamical universality classes are smaller than the static ones, and since salient features of glasses are essentially of dynamical nature, it is important to understand to what extent glassy behaviour depends on the details of microscopic kinetics. A generic microscopic mechanism leading to slow relaxation phenomena was suggested some time ago by Fredrickson and Andersen. It is based on kinetic rules involving only a selection of the possible configuration changes, compatible with detailed balance and the Boltzmann distribution. A kinetic rule can be so effective that there is no need to introduce an energetic interaction between the particles. Although they are physically motivated (e.g. by the cage effect mechanism) these kinetic models are not intended to describe the realistic dynamics of glasses, but rather to show that the glass transition could be, at least in principle, a purely kinetic or dynamical phenomenon. Taking advantage of this idea we have explored the limit case of a three dimensional lattice gas model defined only by short-range kinetic constraints and by a trivial equilibrium measure. Remarkably, this finite dimensional model exhibits a fragile glass behavior unrelated to the existence of a thermodynamic phase transition (another case, and the related experimental situation, have been discussed elsewhere). It provides a simple example of how the distinction between the ideal (static or dynamic) and the laboratory (i.e. kinetic) glass transition can be very subtle and elusive. In the following we present some numerical results showing that this model reproduces qualitatively some aspects of the glassy phenomenology, such as history dependence, irreversibility effects, power-law approach to the asymptotic state, and simple aging behavior. Several related works on constrained lattice-gas models can be found in the literature.
## 2 The model
Our starting point is a kinetic lattice-gas model introduced by Kob and Andersen. The system consists of $`N`$ particles in a cubic lattice of size $`L^3`$, with periodic boundary conditions. There can be at most one particle per site. Apart from this hard-core constraint there are no other static interactions among the particles. At each time step a particle and one of its neighbouring sites are chosen at random. The particle moves if the three following conditions are all met:
1. the neighbouring site is empty;
2. the particle has less than $`m`$ nearest neighbours;
3. the particle will have less than $`m`$ nearest neighbours after it has moved.
The rule is symmetric in time, detailed balance is satisfied and the allowed configurations have the same statistical weight in equilibrium. Significant results are obtained when the value of $`m`$ is set to $`4`$. With this simple definition one can proceed to study the dynamical behavior of the model at equilibrium. One observes that the dynamics becomes slower and slower as the particle density $`\rho `$ increases; in particular, the diffusion coefficient of the particles, $`D`$, vanishes as the density $`\rho `$ approaches the critical value $`\rho _\mathrm{c}\simeq 0.88`$, with a power law
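A minimal implementation of one elementary move of this constrained dynamics is sketched below; the way candidate moves are sampled (a random site, skipped if empty, is a uniformly random particle) and the lattice size are our own choices and do not affect the rules themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
L, m = 20, 4
lattice = np.zeros((L, L, L), dtype=np.int8)   # 1 = occupied site

def neighbours(x, y, z):
    for d in (-1, 1):
        yield ((x + d) % L, y, z)
        yield (x, (y + d) % L, z)
        yield (x, y, (z + d) % L)

def occupied_neighbours(site):
    return sum(lattice[n] for n in neighbours(*site))

def attempt_move():
    """Pick a particle and a neighbouring site at random and apply
    the three kinetic rules; do nothing if any of them fails."""
    site = tuple(rng.integers(0, L, size=3))
    if lattice[site] == 0:                      # no particle here
        return
    nbrs = list(neighbours(*site))
    target = nbrs[rng.integers(6)]
    if lattice[target] == 1:                    # rule 1: target empty
        return
    if occupied_neighbours(site) >= m:          # rule 2: mobile before move
        return
    lattice[site] = 0                           # tentatively vacate ...
    if occupied_neighbours(target) >= m:        # rule 3: mobile after move
        lattice[site] = 1                       # ... reject and restore
        return
    lattice[target] = 1                         # accept
```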
$`D(\rho )`$ $``$ $`(\rho _\mathrm{c}\rho )^\varphi ,`$ (1)
with an exponent $`\varphi \simeq 3.1`$. Since we are interested in the dynamical approach to the putative equilibrium state we allow the system to exchange particles with a reservoir characterized by a chemical potential $`\mu `$. Therefore, we alternate the ordinary diffusion sweeps with sweeps of creation/destruction of particles on a single layer with the following Monte Carlo rule: we randomly choose a site on the layer; if it is empty, we add a new particle; otherwise we remove the old particle with probability $`\text{e}^{-\beta \mu }`$. The number of particles is no longer fixed and the external control parameter is $`1/\mu `$, which plays the role of the temperature. The equilibrium equation of state $`\rho =\rho _{\mathrm{eq}}(\mu )`$ is then trivially calculated. There is therefore a critical value $`\mu _\mathrm{c}`$ of $`\mu `$ defined by $`\rho _{\mathrm{eq}}(\mu _\mathrm{c})=\rho _\mathrm{c}`$ corresponding to the ideal glass transition of the model. In this way we can prepare the system in a non equilibrium state by a process analogous to a quench, which is represented by a jump in $`1/\mu `$ from above to below $`1/\mu _\mathrm{c}`$. Or, we can let $`1/\mu `$ decrease or increase smoothly as in cooling or heating experiments. The situation becomes analogous to the canonic case in which one controls the temperature, and the energy endeavors to reach its equilibrium value.
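The creation/destruction sweep can be sketched in the same spirit, reusing the `lattice`, `L` and `rng` of the previous snippet; here $`\beta =1`$ is assumed, with $`1/\mu `$ playing the role of the temperature as stated above.

```python
import numpy as np

def reservoir_sweep(mu, beta=1.0, layer=0):
    """One sweep of particle exchange with the reservoir on a single
    layer: fill an empty site with probability 1, empty an occupied
    one with probability exp(-beta*mu).  Detailed balance then gives
    the equilibrium density rho = 1/(1 + exp(-beta*mu)) derived in
    the next section."""
    for _ in range(L * L):
        site = (layer, rng.integers(0, L), rng.integers(0, L))
        if lattice[site] == 0:
            lattice[site] = 1
        elif rng.random() < np.exp(-beta * mu):
            lattice[site] = 0
```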
## 3 Thermodynamics
Before studying the non-equilibrium regime let us consider the static properties. The point is relevant for the question of whether the possible ideal glass transition is purely dynamical or is a consequence of an equilibrium transition of some sort. The Hamiltonian of the model is
$`\mathcal{H}=-\mu {\displaystyle \underset{i=1}{\overset{N}{\sum }}}n_i,`$ (2)
where $`n_i=0,1`$ are occupation site variables and $`\mu `$ is the chemical potential. The corresponding partition function, for a system of volume $`V=L^3`$,
$`Z`$ $`=`$ $`\left(1+\text{e}^{\beta \mu }\right)^V,`$ (3)
would describe correctly the thermodynamics of the system provided that the measure of configurations made inaccessible by the kinetic constraints vanishes in the thermodynamic limit. It is possible to convince oneself that the kinetic rules, which satisfy detailed balance, allow an initially empty lattice to be progressively filled in, leaving only O($`1/L`$) empty sites per unit volume. Indeed, it is always possible to find a path connecting almost any two allowed configurations, if necessary by letting the particles escape one by one by the way they got in. Therefore the Markov process generated by the dynamical evolution rule is irreducible on the full manifold of particle configurations and the static properties of the model are described by Eq. (3). In particular the state equation and the entropy are respectively given by:
$`\rho =1/(1+\mathrm{e}^{-\beta \mu }),`$ (4)
$`S=-\rho \mathrm{log}\rho -(1-\rho )\mathrm{log}(1-\rho ).`$ (5)
Since the static properties of the system are regular as a function of the density or the chemical potential, the possible ideal glass transition should appear as a purely dynamical effect. The critical values of $`\mu `$ and $`S`$ corresponding to the threshold density $`\rho _\mathrm{c}`$ can be estimated from the previous equations and are given by
$`\mu _\mathrm{c}\simeq 2.0,S_\mathrm{c}\simeq 0.36.`$ (6)
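These two numbers follow directly from Eqs. (4) and (5) with $`\beta =1`$ and $`\rho _\mathrm{c}\simeq 0.88`$, as the following check shows:

```python
import numpy as np

rho_c = 0.88
mu_c = np.log(rho_c / (1.0 - rho_c))    # inverting Eq. (4) with beta = 1
S_c = -rho_c * np.log(rho_c) - (1.0 - rho_c) * np.log(1.0 - rho_c)  # Eq. (5)
print(mu_c, S_c)                        # ~1.99 and ~0.367, as in Eq. (6)
```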
## 4 History dependence
A first insight into the nature of the relaxational processes can be gained by studying the behavior of one-time observables (energy, specific volume etc.) in a slow annealing procedure. We consider a compression experiment in which the inverse chemical potential of the reservoir, $`1/\mu `$, is slowly decreased at fixed rate from a value corresponding to a low density equilibrium configuration up to zero. The simulation results presented in the following refer to a system of size $`20^3`$. In Figure 1 the numerical results of the specific volume $`v=1/\rho `$ vs. $`1/\mu `$ are compared, for several annealing rates, $`r`$, with the equilibrium state equation of the system (the smooth curve). In close resemblance with the behavior of real glasses these curves exhibit the characteristic annealing dependence of one-time observables: after a certain value of the inverse chemical potential, $`1/\mu _\mathrm{g}(r)`$, is reached the dynamics become so sluggish that the system is no longer able to follow the annealing procedure; the faster the compression, the sooner the system falls out of equilibrium. The limiting value of $`v`$ reaches a plateau that depends on the compression rate and never seems to cross the critical value $`v_\mathrm{c}=1/\rho _\mathrm{c}`$ (the horizontal dashed line). In the inset of fig. 1 we also show the same plot for a compression experiment, but this time by removing the dynamical constraints. We see that the ordinary lattice gas has no problem in equilibrating at each value of chemical potential $`\mu `$; therefore our "experimental" setup (the way in which the particle reservoir and its connection with the system is realized) provides a suitable representation of the equilibrium properties of the model.
### Kauzmannโs paradox.
Once we have obtained the experimental equation of state, we can evaluate the entropy variation of the reservoir by numerical integration, which is given by:
$`S(\mu _\mathrm{f})=S(\mu _\mathrm{i})-{\displaystyle \int _{\mu _\mathrm{i}}^{\mu _\mathrm{f}}}\mu {\displaystyle \frac{d\rho }{d\mu }}d\mu .`$ (7)
This "calorimetric entropy" in the presence of irreversible effects will be different from the thermodynamical entropy $`S_{\mathrm{eq}}`$. Indeed, in fig. 2 we see that when the relaxation time exceeds the inverse of the annealing rate the numerical data remain consistently above the equilibrium curve $`S=S_{\mathrm{eq}}(\mu )`$. If one were given only the dynamical data of figs. 1 and 2, one would feel tempted to extrapolate the equilibrium specific volume and entropy to lower values of $`1/\mu `$. Then, given that both these quantities are bounded, one could conclude that the Kauzmann "temperature", defined here as
$`{\displaystyle \frac{1}{\mu _\mathrm{K}}}=\underset{r\to 0}{\mathrm{lim}}{\displaystyle \frac{1}{\mu _\mathrm{g}(r)}},`$ (8)
is different from zero and therefore that there has to be a static transition. This is the usual argument, known as Kauzmann's paradox, according to which the glassy state is related to the existence of a thermodynamic phase transition. Of course, here there is no such static transition: in this simple case we have access to the whole equilibrium curves, which are perfectly analytical though they change concavity rather sharply. Irreversibility effects are also evident when we let $`1/\mu `$ perform a cycle: in this case the specific volume appears to follow a hysteresis loop whose area decreases as the compression speed decreases (fig. 3).
## 5 Structural relaxation
We now turn to the behaviour of the system after a sudden quench to a subcritical value $`1/\mu <1/\mu _\mathrm{c}`$. In order to allow the system to reach more rapidly the asymptotic regime we perform a "gentle" quench, i.e., starting from a configuration with density $`0.75`$ corresponding to a chemical potential closer to $`\mu _\mathrm{c}`$.
Figure 4 shows the time relaxation of the particle density after a subcritical quench at $`1/\mu =1/2.2`$. We see that $`\rho `$ never exceeds the threshold $`\rho _\mathrm{c}`$, but rather approaches it like a power law in time:
$`\rho _\mathrm{c}-\rho (t)\propto t^{-z},`$ (9)
where $`t`$ is the time elapsed after the quench and where the exponent $`z\simeq 0.3`$. Therefore the diffusion coefficient $`D`$ of particles after a subcritical quench vanishes as
$`D(t)\propto t^{-\zeta },`$ (10)
with the exponent $`\zeta =z\varphi `$ quite close to one. This is closely related to the aging behavior observed in the two-time mean-squared displacement of particles, $`B(t,s)`$. Indeed, in a simple minded approach such a quantity would be given by
$`B(t,s)={\displaystyle \int _s^t}d\tau D(\tau ),`$ (11)
from which follows the simple logarithmic aging
$`B(t,s)\propto \mathrm{log}(t/s),`$ (12)
in good agreement with the numerical results and the analytical solution of the associated singular diffusion model. Since the size of the system considered here is finite, equilibrium will eventually be reached (since almost any two allowed configurations can be connected by a path of allowed moves), but with times which grow fast as $`L\to \mathrm{\infty }`$.
### Activated processes.
It is interesting to investigate the role of activated hopping processes in the low-temperature phase of glassy systems since they are responsible for restoring the ergodicity broken at the glass transition, and it is important to know the characteristic time scale on which this equilibration process takes place. As pointed out in ref. , the activation processes can be simply implemented in this model by allowing the violation of the kinetic constraint with a given probability $`p`$. Figure 5 shows an example of a $`v`$ vs. $`1/\mu `$ plot in a compression experiment at fixed annealing rate for several values of the activation probability, $`p`$. As expected, in this case the system becomes able to cross the threshold and, after a certain value of $`p`$, $`p^{}`$, it follows the full equilibrium curve; we can also see that, for $`p`$ below $`p^{}`$, the dependence of the relaxation time on $`1/\mu `$ is not affected by $`p`$, since the annealing curves depart from the equilibrium one approximately at the same point. The relation between the activation probability $`p^{}`$ and the equilibration time is better characterized by looking at the behaviour of the density after a sudden quench. In fig. 6 the relaxation curves in the presence of activated processes are compared with the one obtained previously for $`p=0`$ (fig. 4). If we conventionally define the ergodicity time, $`\tau _{\mathrm{erg}}(p)`$, as the time at which the curves with $`p\ne 0`$ depart from the one at $`p=0`$, it appears that this characteristic time follows a power law, $`\tau _{\mathrm{erg}}(p)\propto p^{-\alpha }`$, with an exponent $`\alpha \simeq 1`$. A similar result was obtained by Castellano and Franz (unpublished). This seems to provide further evidence of the existence of a purely dynamical glass transition in this model.
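In terms of the move sketch of Sec. 2, activated processes amount to bypassing each constraint test with probability $`p`$, for example (reusing `m` and `rng` from that snippet):

```python
def constraint_ok(n_occupied, p):
    """Kinetic rule with activated hopping: the condition of having
    fewer than m occupied neighbours is ignored with probability p."""
    return n_occupied < m or rng.random() < p
```

Replacing the two `occupied_neighbours(...) >= m` tests in `attempt_move()` with `not constraint_ok(occupied_neighbours(...), p)` recovers the unconstrained lattice gas for $`p=1`$ and the original dynamics for $`p=0`$.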
## 6 Conclusions
To summarize we have shown that three dimensional lattice-gas models defined by short range kinetic constraints and a trivial equilibrium Boltzmann-Gibbs measure display many features of the fragile glass behaviour. Glassy phenomena may therefore have a purely dynamical or kinetic origin unrelated to an underlying equilibrium phase transition, even in finite dimension and in the absence of metastable states. If this kinetic model exhibits a true dynamical transition it would provide a microscopic realization, in finite dimension, of the mechanism invoked by the ideal mode-coupling theory for the glass transition. Of course, it is hard to establish from numerical simulations the existence of such a dynamical transition. Indeed, a comparison with the backbone percolation problem shows that the linear size-dependence of the critical threshold cannot be faster than:
$`1-\rho _\mathrm{c}(L)\gtrsim 1/\mathrm{log}(\mathrm{log}L),`$ (13)
therefore even if $`\mathrm{lim}_{L\to \mathrm{\infty }}\rho _\mathrm{c}(L)=1`$ (i.e. there is no ideal dynamical glass transition), the length-scale over which such a value would be observable is not experimentally accessible. In this respect, the emergence in purely kinetic models of a well defined macroscopic effective temperature associated with the violation of the fluctuation-dissipation theorem appears quite surprising. Indeed, given the non-holonomic nature of kinetic constraints and the trivial Hamiltonian of the model, it would be interesting to understand whether a statistical mechanics approach based on the calculation of some restricted partition function is able to predict the features of the glassy phase and in particular the value of the so called fluctuation-dissipation ratio. A first step in this direction would consist in defining a kinetic analogue of the metastable states by considering, for example, as metastable those system configurations where all particles are blocked by the kinetic constraint; and then finding a way to count them.
I warmly thank J. Kurchan and L. Peliti for the collaboration leading to the results presented here. I also thank L. Berthier, É. de Campos, S. Franz and W. Kob for interesting discussions. This work is supported by the contract ERBFMBICT983561.
no-problem/9912/astro-ph9912533.html | ar5iv | text | # 1 Method
## 1 Method
In order to estimate the projected cluster shape we diagonalize the inertia tensor ($`\mathrm{det}\left(I_{ij}-\lambda ^2M_2\right)=0`$) where $`M_2`$ is the $`2\times 2`$ unit matrix. The eigenvalues $`(\lambda _1,\lambda _2)`$ with $`\left(\lambda _2>\lambda _1\right)`$ define the ellipticity of the configuration under study: $`\epsilon =1-\lambda _1/\lambda _2`$. Initially the galaxy positions are transformed to the coordinate system of each cluster. Then the discrete galaxy distribution is smoothed using a Gaussian kernel. All cells that have a density above some threshold are used to define the moments of inertia with weight $`w_i=\left(\rho _i-\rho \right)/\rho `$ where $`\rho `$ is the mean projected APM galaxy density. This method is free of the aperture bias and we found that it performs significantly better than using the discrete galaxy distribution.
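A compact implementation of this estimator might read as follows; the smoothing scale, the centring of the cell coordinates on the weighted centroid, and the clipping of numerically negative eigenvalues are our own choices, not specified above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cluster_ellipticity(counts, mean_density, sigma, threshold):
    """Projected ellipticity epsilon = 1 - lambda_1/lambda_2 from the
    weighted inertia tensor of the smoothed galaxy surface density.
    With threshold >= mean_density all weights w are positive."""
    rho = gaussian_filter(counts.astype(float), sigma)
    ys, xs = np.indices(rho.shape)
    sel = rho > threshold
    w = (rho[sel] - mean_density) / mean_density       # cell weights
    x = xs[sel] - np.average(xs[sel], weights=w)
    y = ys[sel] - np.average(ys[sel], weights=w)
    I = np.array([[np.sum(w * x * x), np.sum(w * x * y)],
                  [np.sum(w * x * y), np.sum(w * y * y)]])
    # det(I - lambda^2) = 0, so lambda = sqrt(eigenvalues of I)
    lam = np.sqrt(np.maximum(np.linalg.eigvalsh(I), 0.0))
    return 1.0 - lam[0] / lam[1]
```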
## 2 Results
Inverting a set of integral equations, which relate the projected and real axial ratio distributions, we obtain the distribution of real axial ratios under the assumption that the orientations are random with respect to the line of sight.
According to previous work, if the inverted distribution of axial ratios has significantly negative values, a fact which is unphysical, then this can be viewed as a strong indication that the particular spheroidal model is unacceptable. In figure 1 we present the uncorrected and corrected intrinsic axial ratio distributions. It is evident that the APM cluster shapes are better represented by prolate spheroids (in agreement with earlier findings) than by oblate ones, because the former model provides a distribution of intrinsic axial ratios that is positive over the whole axial ratio range. |
no-problem/9912/cond-mat9912191.html | ar5iv | text | # Low temperature acoustic properties of amorphous silica and the Tunneling Model
## Abstract
Internal friction and speed of sound of a-SiO<sub>2</sub> was measured above 6 mK using a torsional oscillator at 90 kHz, controlling for thermal decoupling, non-linear effects, and clamping losses. Strain amplitudes $`\epsilon _\mathrm{A}=10^{-8}`$ mark the transition between the linear and non-linear regime. In the linear regime, excellent agreement with the Tunneling Model was observed for both the internal friction and speed of sound, with a cut-off energy of $`\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}`$ = 6.6 mK. In the non-linear regime, two different behaviors were observed. Above 10 mK the behavior was typical for non-linear harmonic oscillators, while below 10 mK a different behavior was found. Its origin is not understood.
The low temperature acoustic, thermal, and dielectric properties of amorphous solids have long been successfully described by the phenomenological Tunneling Model (TM). In this model, the low energy localized vibrational excitations, a common feature of amorphous solids, are described by non-interacting two-level defects which are thought to be caused by tunneling of atoms or groups of atoms between nearly degenerate potential minima. The excitation energy between the two lowest states of the double well potential, E = $`\sqrt{\mathrm{\Delta }^2+\mathrm{\Delta }_\mathrm{o}^2}`$, is determined by the asymmetry, $`\mathrm{\Delta }`$, and the tunneling splitting, $`\mathrm{\Delta }_\mathrm{o}`$. Low temperature internal friction and speed of sound measurements on a-SiO<sub>2</sub> using the torsional oscillator technique between 66 and 160 kHz above 50 mK have shown excellent agreement with the TM. However, several acoustic and dielectric experiments have indicated deviations from this model below 100 mK and have been interpreted as evidence for tunneling defect interactions. This discrepancy provided the impetus for the present study in which we extended the acoustic measurements to 6 mK.
In low temperature acoustic measurements, major causes of uncertainty are thermal decoupling caused by spurious heat input and self heating, non-linear effects resulting from moderate strain amplitudes, and lack of knowledge of the influence of mounting. Taking particular care to control these problems, we report here measurements in the extended temperature range which are in excellent agreement with the predictions of the TM with a low energy cut-off in the tunneling state spectrum, $`\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}`$ = 6.6 mK. These results emphasize the extreme care with which low temperature acoustic work must be carried out.
The amorphous silica sample, Suprasil - W, ($`<`$ 5 ppm OH<sup>-</sup> impurities, 4 mm diameter, 22.33 mm long) was mounted in a torsional composite oscillator resonating at $`\sim `$ 90 kHz in a dilution refrigerator (0.07 - 2K) and in a vibrationally isolated dilution refrigerator with a demagnetization stage (0.006 - 0.100 K). In the latter, the sample was surrounded by a Nb tube (5.4 mm i.d., 60 mm long) in order to shield it from residual magnetic fields except for the earth's field ($`<`$ 0.5 G).
Before presenting our results on the acoustic properties, we describe the thermal and strain studies that are crucial for avoiding experimental errors. We begin by considering thermal decoupling of the sample. The temperature of the sample was measured directly by epoxying a 1k$`\mathrm{\Omega }`$ RuO<sub>2</sub> Dale resistor onto the free end of the undriven a-SiO<sub>2</sub> sample, with the leads (76 $`\mu `$m diameter Evanohm) thermally anchored along its length (see Fig. 1 inset). A twin Dale resistor was attached to the base. RuO<sub>2</sub> thick film resistors have been successfully used as thermometers from 0.015 to 80 K in magnetic fields up to 20 T. The resistances for the two resistors, read with a self-balancing resistance bridge (Linear Research, LR700) using a power = 10<sup>-15</sup>W and calibrated against a He<sup>3</sup> melting curve thermometer, were found to have identical temperature dependencies above 10 mK with very little scatter, see Fig. 1a. Below 10 mK, however, irreproducible resistances for both resistors were observed (error bars). The reason for this irreproducibility is not known, nor are we aware of any measurements on these resistors below 15 mK. Because of this irreproducibility, thermal decoupling of the undriven sample can be definitely ruled out only above 10 mK. Heating of the oscillator through vibrational noise in this cryostat below 10 mK is nonetheless considered to be unlikely. The lowest frequency mode of the sample, f $``$ 3 kHz, is that in which the stiff oscillator bends the thin Be:Cu torsion rod (1 mm diameter, 3 mm long). Because of its relatively large metallic thermal conductivity, this heat will be easily removed through the base. In addition, a torsional oscillator in the same refrigerator connected by a hollow Be:Cu torsion rod previously showed no signs of thermal decoupling to 1 mK.
Heating can also occur when the oscillator is driven. The strain amplitude, $`\epsilon _\mathrm{A}`$, is defined as the maximum angular displacement between the two ends of the silica sample. The electrical measurement of $`\epsilon _\mathrm{A}`$ was calibrated by reflecting a laser beam off the sample and measuring its deflection with a photodiode. The power loss from mechanical dissipation is $`Q^{-1}f_\mathrm{o}\frac{1}{2}GV\epsilon _\mathrm{A}^2`$, where $`Q^{-1}`$ is the internal friction, $`f_\mathrm{o}`$ the resonant frequency, $`G`$ the shear modulus, and $`V`$ the volume of the oscillator. The thermal resistance, R<sub>th</sub>, of the oscillator was measured by heating the sample thermometer (inset of Fig. 1) with a controlled power input from the LR700. The data are close to the dashed line which was calculated ignoring the thermal resistance of the glass sample (see Fig. 1 caption). The explanation may be that heat enters the glass over its entire surface by way of the resistor leads. R<sub>th</sub> was calculated by adding the thermal resistance of the glass to that shown as a dashed line, and was used to determine the upper limit of the temperature rise of the driven oscillator for a measured mechanical power loss. For example, at 10 mK, $`\epsilon _\mathrm{A}=10^{-8}`$ would result in $`\mathrm{\Delta }`$T = 40$`\mu `$K. It should be noted, however, that $`\epsilon _\mathrm{A}\approx 10^{-7}`$ would result in a temperature rise of 4 mK at 10 mK in our amorphous sample, a significant source of self heating and error.
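To see the size of this effect, one can put numbers into the dissipation estimate above. In the sketch below the shear modulus is a typical fused-silica value and both Q<sup>-1</sup> and R<sub>th</sub> are order-of-magnitude placeholders (R<sub>th</sub> is simply chosen so that $`\epsilon _\mathrm{A}=10^{-8}`$ reproduces the quoted 40 $`\mu `$K); the point is the $`\epsilon _\mathrm{A}^2`$ scaling of the temperature rise.

```python
import math

f0 = 90e3                             # resonance frequency [Hz]
G = 31e9                              # shear modulus of fused silica [Pa] (assumed)
V = math.pi * (2e-3)**2 * 22.33e-3    # sample volume [m^3]
Q_inv = 1e-4                          # internal friction (order of magnitude, assumed)
R_th = 1e7                            # thermal resistance near 10 mK [K/W] (assumed)

for eps_A in (1e-8, 1e-7):
    P = Q_inv * f0 * 0.5 * G * V * eps_A**2   # dissipated power
    print(f"eps_A = {eps_A:.0e}:  P = {P:.1e} W,  dT = {R_th * P:.1e} K")
```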
The elastic properties of the background were measured by replacing the amorphous sample with a crystal quartz sample of equal size which has negligible internal dissipation. The frequency traces of the background oscillator (quartz on quartz) at different strain amplitudes showed a Lorentzian line shape at all amplitudes and temperatures. Fig. 2a shows two traces taken at nominally 2 mK (i.e. ignoring the $`\mathrm{\Delta }`$T due to self-heating). They show that neither the epoxy, nor the Be:Cu metal base lead to non-linear behavior even when the strain amplitude was varied by a factor of 100. The background internal friction (clamping losses) was found to be slightly temperature dependent while the speed of sound was independent of temperature to within $`\sim `$ 0.1 ppm, as shown in Fig. 3.
Elastic measurements are sensitive to non-linear effects as can be seen in frequency traces of a driven a-SiO<sub>2</sub> at increasing driving voltages (peak power dissipation). These traces are plotted in Fig. 2b-d at 27 mK, 10.6 mK and 7 mK respectively. The dashed curve is the normalized response of the oscillator carrying a crystal quartz sample. Above 10 mK, the lineshapes for a-SiO<sub>2</sub> are clearly resolved from that of the quartz sample (background) as seen in Fig. 2b. In contrast to the background, the frequency response of the oscillator with the amorphous sample shows a variety of non-linear behaviors. Above 10 mK, with increasing peak strain amplitude, the oscillator exhibits behavior typical for a non-linear harmonic oscillator. (This non-linearity is not seen above 50 mK where it is masked by self-heating.) Below 10 mK, however, (see Fig. 2c and 2d) the non-linear behavior differs dramatically. With increasing strain amplitudes, the frequency response does not lean to higher frequencies. Instead the response has jumps, plateaus and oscillations. Since the background oscillator shows no such behavior, we conclude that the tunneling entities in a-SiO<sub>2</sub> are responsible. This is the first evidence that the tunneling entities themselves may change their behavior below 10 mK, although we do not understand the nature of the tunneling entities that can bring about these non-linearities.
The internal friction was only determined when the frequency dependence of the oscillator was clearly in the linear regime, i.e. fit a Lorentzian curve well (solid curves in Fig. 2). In the linear regime, the internal friction was found to be independent of the driving power and $`\mathrm{\Delta }`$T to be negligible. At larger strain amplitudes (in the non-linear regime), the fits yielded higher internal frictions and speeds of sound. As an example, consider Fig 2c; for a peak $`\epsilon _\mathrm{A}`$ = 1.7 $`\times `$ 10<sup>-8</sup> and a power = 3 $`\times `$ 10<sup>-14</sup> W, using a half-width of the non-Lorentzian would lead to an erroneous increase of the internal friction by a factor of 2 and an increase of the speed of sound by $`\sim `$1 ppm. Note that in the most recent investigation of a-SiO<sub>2</sub>, a double paddle oscillator operating at $`\epsilon _\mathrm{A}`$ $`\sim `$10<sup>-7</sup> was used, which should lead to considerable non-linearities at low temperatures, at least in our geometry.
With the previously identified experimental problems avoided, Fig. 3 shows the internal friction, Q<sup>-1</sup>, and the relative change of the transverse speed of sound, $`\frac{\mathrm{\Delta }\mathrm{v}}{\mathrm{v}_\mathrm{o}}`$ (see eq. 1) of a-SiO<sub>2</sub> which were determined from the Lorentzian frequency responses, as shown in Fig. 2 as solid curves. Sample heating was always less than 0.5% of the temperature. The solid and the open circles show the excellent agreement of the data from the two cryostats. Below 10 mK, the internal friction approaches a nearly temperature independent value very close to that measured on the quartz sample (dashed line). This close agreement is also evidenced in Figs. 2c and 2d with the frequency responses obtained on both samples (solid and dashed Lorentzians). We conclude that the internal friction measured on the a-SiO<sub>2</sub> sample below 10 mK is dominated by the background, and use the dashed line to derive the internal friction of the a-SiO<sub>2</sub> without this background, shown as x's in Fig. 3a, following the method outlined in Ref. No such correction needs to be applied to the speed of sound, because the background frequency shift is independent of temperature (dashed line in Fig. 3b). The solid curves in Fig. 3 are fits to the TM using a tunneling strength C = 3.1 $`\times `$ 10<sup>-4</sup> and a crossover temperature T<sub>co</sub> = 0.08 K, both identical to our previously published values based on the measurements above 50 mK. The internal friction shows no deviation from the TM prediction (below 15 mK, separating background from the internal friction by the tunneling states would involve large errors). The speed of sound also agrees with the TM prediction between 25 and 500 mK. The deviation of $`\frac{\mathrm{\Delta }\mathrm{v}}{\mathrm{v}_\mathrm{o}}`$ for T $`>`$ 500 mK is experimentally well established, indicating the existence of channels for defect relaxation other than by single phonon emission. Very remarkable and new, however, is the deviation of $`\frac{\mathrm{\Delta }\mathrm{v}}{\mathrm{v}_\mathrm{o}}`$ from the logarithmic temperature dependence below 25 mK which is unambiguous at least to 10 mK (the uncertainty below 10 mK is only due to the thermometer calibration as discussed earlier). This deviation is consistent within the standard TM assuming a cut-off $`\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}`$ of the energy distribution of the tunneling states, which affects the speed of sound v:
$$\frac{\mathrm{v}(\mathrm{T})-\mathrm{v}_\mathrm{o}}{\mathrm{v}_\mathrm{o}}=\frac{\mathrm{\Delta }\mathrm{v}}{\mathrm{v}_\mathrm{o}}=\mathrm{C}\left(\mathrm{ln}\frac{\mathrm{T}}{\mathrm{T}_\mathrm{o}}+\frac{\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}}{2\mathrm{T}}\right),$$
(1)
where v<sub>o</sub> is the speed of sound at some reference temperature $`\mathrm{T}_\mathrm{o}`$, and C is the tunneling strength. The long-dashed curve, calculated with $`\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}`$ = 6.6 mK, fits the data in Fig. 3b well. Note that a cut-off should not affect the internal friction significantly, in agreement with the experimental findings. Evidence for a cut-off energy has also been obtained recently from heat pulse measurements of a-SiO<sub>2</sub> ($`\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}`$ = 3.1 mK), and from dielectric measurements on a multi-component alumosilicate glass (12.2 mK). While such a cut-off may indeed be caused by interaction between the tunneling defects, our data show no evidence for the effects of such interactions beyond this gap.
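Eq. (1) is easy to evaluate with the quoted parameters (C = 3.1 $`\times `$ 10<sup>-4</sup>, $`\mathrm{\Delta }_{\mathrm{o},\mathrm{min}}/\mathrm{k}_\mathrm{B}`$ = 6.6 mK); note that T<sub>o</sub> only fixes the zero of $`\mathrm{\Delta }`$v/v, so its value in the sketch below is an arbitrary choice:

```python
import numpy as np

C = 3.1e-4          # tunneling strength (quoted above)
cutoff = 6.6e-3     # Delta_o,min / k_B  [K]
T0 = 0.1            # reference temperature [K] (arbitrary choice)

def dv_over_v(T):
    """Eq. (1); the 1/T term is the low-temperature deviation
    produced by the spectral cut-off."""
    return C * (np.log(T / T0) + cutoff / (2.0 * T))

for T in (0.006, 0.01, 0.025, 0.1):
    print(f"T = {1e3*T:5.1f} mK:  dv/v = {dv_over_v(T):+.2e}")
```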
In conclusion, our measurements have shown no evidence below 500 mK for a deviation from the predictions of the standard Tunneling Model provided we include a cut-off energy in the mK range. Our results disagree with the earlier studies in which much weaker temperature dependences of both internal friction and speed of sound than predicted by the model had been observed, although at different frequencies ($`\sim `$ 14 kHz). Those measurements had been interpreted as evidence for defect interactions at temperatures as high as 100 mK. We suggest that an alternative explanation may be based, at least in part, on experimental problems common to acoustic experiments such as self-heating and non-linear responses at large strain amplitudes, which can lead to significant errors as we have shown here. It appears essential that all previous evidence for defect interactions be inspected meticulously in order to convincingly exclude these sources of errors. This is not feasible with the published information. Therefore, the fascinating problem of interactions between the tunneling defects should be left as an open question at this time.
We thank Eric Smith and Ch. L. Spiel for their help and many stimulating discussions and Kris Poduska for the loan of the LR700 resistance bridge. We also thank J. Classen for sending us his preprint. This work was supported by the National Science Foundation, grant No.DMR-970972, and DMR9705295. |
no-problem/9912/astro-ph9912097.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Significant advances in science inevitably occur when the state of the art in instrumentation improves. NASA's newest Great Observatory, the Chandra X-Ray Observatory (CXO), formally known as the Advanced X-Ray Astrophysics Facility (AXAF), launched on July 23, 1999 and represents such an advance. The CXO is designed to study the x-ray emission from all categories of astronomical objects, from normal stars to quasars. Observations with CXO will therefore enhance our understanding of neutron stars and black holes.
CXO has broad scientific objectives and an outstanding capability to provide high-resolution ($`\sim `$ 0.5-arcsec) imaging, spectrometric imaging and high resolution dispersive spectroscopy over the energy band from 0.1 to 10 keV. CXO, together with ESA's XMM, the Japanese-American Astro-E and ultimately the international Spectrum-X mission led by Russia, will usher in a new age in x-ray astronomy and high-energy astrophysics.
NASA's Marshall Space Flight Center (MSFC) manages the Chandra Project, with scientific and technical support from the Smithsonian Astrophysical Observatory (SAO). TRW's Space and Electronics Group was the prime contractor and provided overall systems engineering and integration. Hughes Danbury Optical Systems (HDOS), now Raytheon Optical Systems Incorporated, figured and polished the x-ray optics; Optical Coating Laboratory Incorporated (OCLI) coated the polished optics with iridium; and Eastman Kodak Company (EKC) mounted and aligned the optics and provided the optical bench. Ball Aerospace & Technologies was responsible for the Science Instrument Module (SIM) and the CCD-based aspect camera for target acquisition and aspect determination. The scientific instruments, discussed in some detail below, comprise two sets of objective transmission gratings that can be inserted just behind the 10-m-focal-length x-ray optics, and two sets of focal-plane imaging detectors that can be positioned by the SIM's translation table.
The fully deployed CXO, shown schematically in Figure 1, is 13.8-m long, with a 19.5-m-long solar-array wingspan. The on-orbit mass is about 4500-kg. CXO was placed in a highly elliptical orbit with a 140,000-km apogee and 10,000-km perigee by the Space Shuttle Columbia, Boeing's Inertial Upper Stage, and Chandra's own integral propulsion system. Figure 2 shows photos of the payload and the sequence of events through the deployment from the Shuttle. This particular launch gained some additional notoriety due to the Commander's (Colonel Eileen Collins) gender.
## 2 The X-ray Optics
The heart of the observatory is, of course, the x-ray telescope. Grazing-incidence optics function because x rays reflect efficiently if the angle between the incident ray and the reflecting surface is less than the critical angle. This critical grazing angle is approximately $`10^{-2}(2\rho )^{1/2}/E`$, where $`\rho `$ is the density in g-cm<sup>-3</sup> and E is the photon energy in keV. Thus, higher energy telescopes must have dense optical coatings (iridium, platinum, gold, etc.) and smaller grazing angles. The x-ray optical elements for Chandra and similar telescopes resemble shallow angle cones, and two reflections are required to provide good imaging over a useful field of view; the first CXO surface is a paraboloid and the second a hyperboloid, the classic Wolter-1 design. The collecting area is increased by nesting concentric mirror pairs, all having the same focal length. The wall thickness of the inner elements limits the number of pairs, and designs have tended to fall into two classes: those with relatively thick walls achieve stability, hence angular resolution, at the expense of collecting area; those with very thin walls maximize collecting area but sacrifice angular resolution. NASA's Einstein Observatory (1978), the German ROSAT (1990), and the CXO optics are examples of the high-resolution designs, while the Japanese-American ASCA (1993) and European XMM mirrors are examples of emphasis upon large collecting area.
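Putting numbers into this rule of thumb for an iridium coating (density $`\approx `$ 22.4 g cm<sup>-3</sup>, an assumed handbook value, and taking the result of the formula to be in radians, which the text does not state explicitly):

```python
import math

rho = 22.4                                        # iridium density [g/cm^3] (assumed)
for E_keV in (1.0, 4.0, 10.0):
    theta_c = 1e-2 * math.sqrt(2 * rho) / E_keV   # critical grazing angle [rad]
    print(f"E = {E_keV:4.1f} keV: theta_c ~ {math.degrees(theta_c):.2f} deg")
```

The steep 1/E falloff of the result is the reason, stated above, why higher-energy telescopes need denser coatings and smaller grazing angles.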
The mirror design for CXO includes eight optical elements comprising four paraboloid/hyperboloid pairs which have a common ten meter focal length, element lengths of 0.83-m, diameters of 0.63, 0.85, 0.97, and 1.2-m, and wall thicknesses between 16-mm and 24-mm. Zerodur, a glassy ceramic from Schott, was selected for the optical element material because of its low coefficient of thermal expansion and previously demonstrated capability (ROSAT) of permitting very smooth polished surfaces.
Figure 3 shows the largest optical element being ground at HDOS. Final polishing was performed with a large lap designed to reduce surface roughness without introducing unacceptable lower frequency figure errors. The resulting rms surface roughness over the central 90% of the elements varied between 1.85 and 3.44 $`\AA `$ in the 1 to 1000-mm<sup>-1</sup> band; this excellent surface smoothness enhances the encircled energy performance at higher energies by minimizing scattering.
The mirror elements were coated at OCLI by sputtering with iridium over a binding layer of chromium. OCLI performed verification runs with surrogates before each coating of flight glass; these surrogates included witness samples. The x-ray reflectivities of the witness flats were measured at SAO to confirm that the expected densities were achieved. The last cleaning of the mirrors occurred at OCLI prior to coating, and stringent contamination controls were begun at that time because both molecular and particulate contamination have adverse impacts on the calibration and the x-ray performance. Figure 4 shows the smallest paraboloid in the OCLI handling fixture after being coated.
The final alignment and assembly of the mirror elements into the High Resolution Mirror Assembly (HRMA) was done at, and by, EKC. The completed mirror element support structure is shown in Figure 5. Each mirror element was bonded near its mid-station to flexures previously attached to the carbon fiber composite mirror support sleeves. The four support sleeves and associated flexures for the paraboloids can be seen near the top of the figure, and those for the outer hyperboloid appear at the bottom. The mount holds more than 1000 kg of optics to sub-arcsecond precision.
The mirror alignment was performed with the optical axis vertical in a clean and environmentally controlled tower. The mirror elements were supported to approximate a gravity-free and strain-free state, positioned, and then bonded to the flexures. A photograph taken during the assembly and alignment process is shown in Figure 6. Despite the huge mass of the system and the stringent environmental controls, the heat produced by a 50-watt light bulb at the top of the facility caused some alignment anomalies until detected and resolved.
The HRMA was taken to MSFC for pre-launch x-ray calibration (see O'Dell and Weisskopf (1998) and references therein) in the fall of 1996, and then to TRW for integration into the spacecraft. Testing at MSFC took place in the X-Ray Calibration Facility (XRCF), shown in Figure 7. The calibration facility has a number of x-ray source and detector systems and continues to be used for x-ray tests of developmental optics for such programs as Constellation-X. Details concerning the XRCF may be found in Weisskopf and O'Dell (1997) and references therein.
X-ray testing demonstrated that the CXO mirrors are indeed the largest high-resolution x-ray optics ever made; the nominal effective area (based on the ground calibrations) is shown as a function of energy in the left panel of Figure 8, along with those of their Einstein and ROSAT predecessors. The CXO areas are about a factor of four greater than the Einstein mirrors. The effective areas of CXO and ROSAT are comparable at low energies because the somewhat smaller ROSAT mirrors have larger grazing angles; the smaller grazing angles of CXO yield more throughput at higher energies. The fraction of the incident energy included in the core of the expected CXO response to 1.49-keV x rays is shown as a function of image radius in the right panel of Figure 8, including early in-flight data. The responses of the Einstein and ROSAT mirrors also are shown. The improvement within 0.5-arcsec is dramatic, although it is important to note that the ROSAT mirrors bettered their specification and were well matched to the principal detector for that mission. The excellent surface smoothness achieved for the CXO (and ROSAT) mirrors results in a very modest variation of the performance as a function of energy; this reduces the uncertainties which accrue from using calibration data to infer properties of sources with different spectra, and improves the precision of the many experiments to be performed.
## 3 The Instruments
CXO has two focal plane instruments: the High-Resolution Camera (HRC) and the Advanced CCD Imaging Spectrometer (ACIS). Each of these instruments, in turn, has two detectors, one optimized for direct imaging of x rays that pass through the optics and the other optimized for imaging x rays that are dispersed by the objective transmission gratings when the latter are commanded into place directly behind the HRMA. Each focal-plane detector operates in essentially photon counting mode and has low internal background. A slide mechanism is utilized to place the appropriate instrument at the focus of the telescope. Provision for focus adjustment is also present.
### 3.1 The Focal Plane Instruments
The HRC was produced at SAO; Dr. S. Murray is the Principal Investigator. The HRC-I is a large-format, 100-mm-square microchannel plate, coated with a cesium iodide photocathode to improve x-ray response. A conventional cross-grid charge detector reads out the photo-induced charge cloud, and the electronics determine the arrival time to 16 $`\mu `$s and the position with a resolution of about 18 $`\mu `$m, or 0.37 arcsec. The spectroscopy readout detector (HRC-S) is a 300-mm x 30-mm, 3-section microchannel plate. Sectioning allowed the two outside sections to be tilted in order to conform more closely to the Rowland circle that includes the low-energy gratings.
The ACIS has two charge-coupled device (CCD) detector arrays: ACIS-I is optimized for high-resolution spectrometric imaging; ACIS-S is optimized for readout of the high-energy transmission gratings, although these functions are not mutually exclusive. Prof. G. Garmire of the Pennsylvania State University is the Principal Investigator. The Massachusetts Institute of Technology's Center for Space Research, in collaboration with Lincoln Laboratories, developed the detector system and manufactured the CCDs; Lockheed-Martin integrated the instrument. Stray visible light is shielded by means of baffles and an optical blocking filter (about 1500-$`\AA `$ aluminum on 2000-$`\AA `$ polyimide). The ACIS-I is a 2x2 array of CCDs. The 4 CCDs tilt slightly toward the optics to conform more closely to the focal surface. Each CCD has 1024 x 1024 pixels of 24-$`\mu `$m (0.5-arcsec) size. The ACIS-S is a 1x6 array with each chip tilted slightly to conform to the Rowland circle and includes two back-illuminated CCDs, one of which is at the best focus position. The back-illuminated devices cover a broader bandwidth than the front-illuminated chips and, under certain circumstances, may be the best choice for high-resolution, spectrometric imaging.
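The quoted angular sizes of the detector elements follow directly from the 10-m focal length; a short check (with the usual radians-to-arcseconds conversion) reproduces both numbers.

```python
# Plate-scale check: angle subtended = (element size / focal length) radians.
RAD_TO_ARCSEC = 206_265.0
FOCAL_LENGTH_M = 10.0

def element_arcsec(size_um: float) -> float:
    return (size_um * 1e-6 / FOCAL_LENGTH_M) * RAD_TO_ARCSEC

print(f"HRC 18-um position resolution -> {element_arcsec(18):.2f} arcsec")  # ~0.37
print(f"ACIS 24-um pixel              -> {element_arcsec(24):.2f} arcsec")  # ~0.50
```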
### 3.2 The Transmission Gratings
Both sets of objective transmission gratings consist of hundreds of co-aligned facets mounted to supporting structures on 4 annuli (one for each of the four co-aligned mirror pairs) to intercept the x rays exiting the HRMA. In order to optimize the energy resolution, the grating support structure holds the facets close to the Rowland toroid that intercepts the focal plane. The two sets of transmission gratings, attached to the mounting structure, are shown in Figure 9.
The Low-Energy Transmission Grating (LETG) provides high-resolution spectroscopy at the lower end of the CXO energy range. Dr. A. Brinkman, of the Space Research Organization of the Netherlands, is the Principal Investigator. The LETG was developed in collaboration with the Max Planck Institut für Extraterrestrische Physik, Garching. The LETG has 540 1.6-cm diameter grating facets, 3 per grating module. Ultraviolet contact lithography was used to produce an integrated all-gold facet bonded to a stainless-steel facet ring. An individual facet has 0.43-$`\mu `$m-thick gold grating bars with 50% filling factor and 9920-$`\AA `$ period, resulting in 1.15-$`\AA `$/mm dispersion. The HRC-S is the primary LETG readout.
The High-Energy Transmission Grating (HETG) provides high-resolution spectroscopy at the higher end of the CXO energy range. Prof. C. Canizares of the Massachusetts Institute of Technology Center for Space Research is the Principal Investigator. This group developed the instrument in collaboration with MIT's Nanostructures Laboratory. The HETG has 336 2.5-cm square grating facets. Microlithographic fabrication using laser interference patterns was used to produce the facets, which consist of gold grating bars with 50% filling factor on a polyimide substrate. The HETG uses gratings with 2 different periods which are oriented to slightly different dispersion directions, forming a shallow "X" image on the readout detector as shown in Figure 10.
The Medium-Energy Gratings (MEG) have 0.40-$`\mu `$m-thick gold bars on 0.50-$`\mu `$m-thick polyimide with 4000-$`\AA `$ period, producing 2.85-$`\AA `$/mm dispersion, and are placed behind the outer two CXO mirror pairs. The High-Energy Gratings (HEG), placed behind the inner two CXO mirror pairs, are 0.70-$`\mu `$m-thick gold bars on 1.0-$`\mu `$m-thick polyimide with 2000-$`\AA `$ period, resulting in 5.7-$`\AA `$/mm dispersion. The ACIS-S is the primary readout for the HETG.
## 4 Science with CXO
Given the superb capabilities of the optics and associated instrumentation, the scientific possibilities are almost incredible. The most exciting investigations will no doubt result from the unexpected discoveries that the improved sensitivity, angular resolution, and energy resolution produce. The potential of Chandra is illustrated in Figure 11, which shows the official "first-light" image. The target was the supernova remnant Cas A. This image, based on only a few thousand seconds observing time, was taken with the back-illuminated chip at the best focus position on the ACIS-S. We see, for the first time, that there is a compact, x-ray emitting object at the center of the 300-year-old remnant. Studies are underway to establish that the positional coincidence is no accident and to determine the nature of the compact object, possibly the long-sought-after neutron star or black hole.
Perhaps the neutron-star/black-hole connection and the utility of CXO are best illustrated by an early observation of the Crab Nebula and pulsar taken as part of the HETG calibration. During grating observations, one also obtains an undispersed (zero order) image. The image quality is essentially that of the HRMA/detector combination and not broadened by the insertion of the grating. Figure 12 shows the zeroth-order image of the Crab Nebula. There are numerous new features, especially the inner ellipse with its bright knots. The pulsar itself is so bright that the central region is "piled up" to the point that there are no data; hence the "hole" in the image. Pile-up also is present in the data from the nebula, and the study of the spectral dependence of these features ought to be the subject of a future ASI. The ubiquity of the jet phenomenon clearly points to the importance of angular momentum as a physical key and critical parameter towards unlocking secrets to all or part of the emission mechanisms.
The capability to perform meaningful, high-spectral-resolution observations with the gratings is illustrated in Figure 13, which shows a portion of the x-ray spectrum from Capella around the Fe-L complex. The red is a HEG spectrum and the green spectrum was produced by the MEG. Observations such as these, with CXO, XMM, and Astro-E, will be at the center of new developments in astrophysics in the next century.
## 5 Conclusion
The Chandra X-Ray Observatory will have a profound influence on astronomy and astrophysics. The Observatory is open to use by scientists throughout the world who successfully propose specific investigations. Data will be available through the Chandra X-ray Center (CXC), directed by Dr. H. Tananbaum, and located at the SAO.
## Acknowledgements
I would like to thank the many members of the Chandra team, especially Steve O'Dell, Leon Van Speybroeck, Harvey Tananbaum, Steve Murray, Gordon Garmire, Claude Canizares, and Bert Brinkman.
## 6 AXAF web sites
The following lists several Chandra-related sites on the World-Wide Web (WWW). Most sites are cross-linked to one another. Often you will find that these contain the best and most recent sources of detailed information; hence, the minimal number of entries in the bibliography.
Chandra X-Ray Center (CXC), operated for NASA by the Smithsonian Astrophysical Observatory (SAO).
Chandra Project Science, at the NASA Marshall Space Flight Center (MSFC).
AXAF Mission Support Team (MST), at the Smithsonian Astrophysical Observatory (SAO).
AXAF High-Resolution Camera (HRC) team, at the Smithsonian Astrophysical Observatory (SAO).
Advanced CCD Imaging Spectrometer (ACIS) team at the Pennsylvania State University (PSU).
Advanced CCD Imaging Spectrometer (ACIS) team at the Massachusetts Institute of Technology (MIT).
Chandra Low-Energy Transmission Grating (LETG) team at Space Research Organisation Netherlands (SRON).
Chandra Low-Energy Transmission Grating (LETG) team at the Max-Planck Institut fรผr extraterrestrische Physik (MPE).
Chandra High-Energy Transmission Grating (HETG) team, at the Massachusetts Institute of Technology (MIT). |
no-problem/9912/cond-mat9912276.html | ar5iv | text | Supercooling of the disordered vortex lattice in Bi2Sr2CaCu2O8+δ
## Abstract
Time-resolved local induction measurements near the vortex lattice order-disorder transition in optimally doped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> single crystals show that the high-field, disordered phase can be quenched to fields as low as half the transition field. Over an important range of fields, the electrodynamical behavior of the vortex system is governed by the co-existence of the two phases in the sample. We interpret the results in terms of supercooling of the high-field phase and the possible first order nature of the order-disorder transition at the "second peak".
It is now well accepted that the mixed state in type II superconductors is itself subdivided into different vortex phases. In very clean materials, notably single crystals of the high-$`T_c`$ cuprates YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> and Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub>, the vortex lattice, characterized by long-range translational and orientational order, undergoes a first order transition (FOT) to a flux liquid state without long-range order. The FOT is observed at inductions $`B`$ at which vortex pinning by crystalline defects is negligible and the vortex system can rapidly relax to thermodynamic equilibrium, but which are still much below the upper critical field $`B_{c2}`$. The FOT is prolonged into the low temperature regime of nonlinear vortex response by a transition from the weakly pinned low-field vortex lattice to a strongly pinned, disordered high-field vortex phase. This order-disorder transition is manifest through the so-called "second peak" feature in the magnetic hysteresis loops, a result of the dramatic increase of the sustainable shielding current associated with bulk pinning. It was proposed that the crossover from the FOT to the "second peak" regime constitutes a critical point in the phase diagram, which in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> would lie near $`T\approx 40`$ K. In dirtier type-II superconductors the FOT and the critical point are absent and the critical current "peak effect" is found at temperatures up to $`T_c`$. The peak effect is often accompanied by strongly history-dependent dynamical behavior of the vortex system at fields and temperatures just below it, suggesting that a first order transition lies at its origin.
Among the abovementioned materials the layered superconductor Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> has a specific interest: its high Ginzburg number $`Gi\sim 0.01`$ means that vortex lines are extremely sensitive to thermal and static fluctuations and that the FOT and second-peak lines are depressed to inductions lower than 1 kG. The local induction and flux dynamics around the transition can then be accurately measured using local Hall-array magnetometry and magneto-optics. Recently, the decomposition of the vortex system near the FOT in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> into coexisting lattice and liquid phases was imaged using this latter technique. In this Letter, we image the flux dynamics and the coexistence of the two pinned vortex phases in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> near the disordering transition at the "second peak". In particular, we find that the disordered phase can be quenched to flux densities that are nearly half that at which it exists in equilibrium. We interpret our results in terms of supercooling of the high-field phase. This suggests that the order-disorder transition at the second peak is of first order, and that it is the "true" continuation of the FOT in the regime of slow vortex dynamics. By implication, we propose that a putative critical point lies at a temperature not exceeding 14 K.
The experiments were performed on an optimally doped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> single crystal ($`T_c=90`$ K) of size $`630\times 250\times 35`$ $`\mu `$m<sup>3</sup>, grown at the University of Tokyo using the travelling-solvent floating zone technique, and selected for its uniformity. Previous experiments on this crystal using the Hall-probe array technique have revealed the disordering transition of the vortex lattice to occur at $`B_{sp}=380`$ G. We have visualized the flux density distribution at inductions close to the transition using magneto-optical imaging. A ferrimagnetic garnet film with in-plane anisotropy is placed directly on top of the crystal, and observed using linearly polarized light. The induction component perpendicular to the film induces a perpendicular magnetization and concomitant Faraday rotation of the polarization, which is then visualized using an analyzer. The local induction variations induced by the presence of the superconducting crystal are revealed as intensity variations of the reflected light, the brighter regions corresponding to the greater flux, or vortex, density. The technique is particularly useful for the study of the low-field behavior of oxide superconductors, in which the measurement of the electromagnetic response of the vortex system is easily marred by the presence of surface barriers, the appearance of the Meissner phase, and macroscopic defects. The direct mapping of the flux density allows one to distinguish where currents flow inside the superconductor, to identify inhomogeneous parts of the crystal, and, eventually, to eliminate them.
Figure 1(a) shows a magneto-optical image of the crystal after zero-field cooling to $`T=24.6`$ K and the slow ramp of the applied magnetic field $`H_a`$ to 486 G. There is a bright belt around the crystal edge, corresponding to a region of high field gradient, and, visible under the sawtooth-like magnetic domain wall structure in the garnet, an inner region with little contrast, indicating a plateau in the local induction. The axis of the sawtooth structure is located where the induction component parallel to the garnet film vanishes. This corresponds to the boundary between regions of zero and non-zero screening current in the Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> crystal. We infer that no current flows in the central region of (nearly) constant induction. Conversion of the light intensity to flux density shows that the plateau induction equals that expected at the transition, $`B_{sp}=380`$ G.
The evolution of the flux profiles at successive values of the applied field during the ramp is shown in Fig. 1(b). At small $`H_a`$, one has a comparatively large step in the induction at the crystal edges, and a dome-shaped flux distribution in the crystal interior. Such a flux profile occurs when the edge screening current due to a surface barrier is much greater than the bulk shielding current, which is the result of vortex pinning. The dome-like profile moves up to higher induction values as field is increased; its evolution stops when the induction in the crystal center reaches $`B_{sp}=380`$ G (for $`H_a=427`$ G). As $`H_a`$ is increased further, the flux profile flattens out, i.e. $`B`$ becomes constant throughout the crystal as the high-field vortex phase spreads outwards from the crystal center. As a result, the slope $`dB/dH_a`$ becomes equal to the Meissner slope. At $`H_a=444`$ G, the whole crystal is in the high-field state, and new flux (vortices) penetrates from the edges; it cannot, however, accumulate in the crystal center but adopts the linear gradient characteristic of the pinning-induced critical state. This indicates that, at this temperature, field, and field ramp rate, the pinning current is comparable to or greater than the surface barrier current, giving rise to the "second peak feature" in the magnetic moment.
Figure 2(a) shows the relaxation of the flux profile after a rapid decrease of the applied field from 500 G to 120 G, at $`T=23`$ K. The initial flux profile is similar to the one for $`H_a=488`$ G in Fig. 1: the "critical state" fronts of the high-field phase have not yet penetrated the whole crystal, so that the induction in the crystal center is nearly constant, $`B\approx B_{sp}`$; thus, the internal induction is lower than the applied field because of the combined screening by the surface barrier current and by the high-field phase. When $`H_a`$ is suddenly decreased, the sample initially fully screens the field change ($`t=0.16`$ s). From $`t\approx 0.32`$ s onwards, vortices leave the sample. The flux profiles display three distinct linear sections with different gradients, corresponding to three mechanisms opposing vortex motion and exit. The gradient nearest to the crystal edge corresponds to the surface barrier current; the two gradients in the bulk correspond to the (rapidly decaying) screening current in the low-field vortex lattice phase and the (nearly constant) current in the high-field disordered vortex phase, respectively. The phase transformation line, at which one passes from the low-field to the high-field current, progressively moves to the crystal center, until the whole crystal is in the low-field phase at $`t\approx 10`$ s. We note that these features are not observed if one prepares a similar initial flux profile with a central plateau of $`B<B_{sp}`$ (Fig. 2(b)). There are then only two flux gradients, corresponding to the surface barrier and the screening current in the low-field phase. These results unambiguously demonstrate that the region of constant flux density $`B_{sp}`$ in the sample center, obtained during a slow field ramp (Fig. 1), is in the high-field phase; namely, it responds to an external field perturbation by developing the corresponding, "high-field", screening current. Moreover, the screening current at any point in the crystal depends on the history of the vortex system. This is well seen at e.g. $`x=100`$ $`\mu `$m and $`B=250`$ G: if this induction is attained by a rapid quench from the high-field phase, the screening current is equal to that usually observed for $`B>B_{sp}`$. If, during the experiment, the vortex system did not undergo the phase transformation to the disordered state, a screening current characteristic of the low-field phase is observed.
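The three-gradient structure just described can be summarized in a small schematic: a steep surface-barrier region, a weak gradient for the ordered low-field phase, and a steep gradient for the disordered phase beyond the transformation front. All numerical values below (slopes, positions, edge field) are illustrative placeholders, not fitted to the measured profiles.

```python
# Schematic flux profile B(x) with three linear sections, as described above.
# Slopes are proportional to the local screening current; values illustrative.
def flux_profile(x_um: float,
                 b_edge_G: float = 120.0,     # induction just inside the edge
                 j_low: float = 0.3,          # G/um, ordered (low-field) phase
                 j_high: float = 1.5,         # G/um, disordered phase
                 x_barrier_um: float = 10.0,  # extent of surface-barrier region
                 x_front_um: float = 60.0) -> float:
    """Local induction at depth x (microns) after a sudden field decrease."""
    j_barrier = 10.0 * j_low                  # surface barrier: steepest part
    if x_um <= x_barrier_um:
        return b_edge_G + j_barrier * x_um
    b1 = b_edge_G + j_barrier * x_barrier_um
    if x_um <= x_front_um:                    # low-field vortex lattice
        return b1 + j_low * (x_um - x_barrier_um)
    b2 = b1 + j_low * (x_front_um - x_barrier_um)
    return b2 + j_high * (x_um - x_front_um)  # supercooled high-field phase

print([round(flux_profile(x)) for x in range(0, 130, 20)])
```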
A glance at the flux density scale on Fig. 2 shows that the disordered vortex phase in the center of the crystal, identifiable in Fig. 2(a) by the large screening current it can sustain, has in fact been quenched to inductions nearly half $`B_{sp}`$. A similar situation occurs if one rapidly increases the magnetic field from zero to a value much above $`B_{sp}`$ (Fig. 3). Initially, the crystal again perfectly screens the field change. As vortices enter the crystal, they are initially in the disordered state. Only when they move sufficiently far into the interior does the phase transformation to the ordered vortex lattice state take place. The decrease of the flux density from $`B\approx \mu _0H_a>B_{sp}`$ at the crystal edge to $`B<B_{sp}`$ inside the sample due to formation of the critical state necessitates the presence of the phase transformation line in the sample interior. This is visible in Fig. 3 as the changes in the induction gradient near 80 and 220 $`\mu `$m. The induction at which the transformation takes place is again lower than $`B_{sp}`$, i.e. the high-field phase is now quenched as it penetrates from the sample edge, in this case to an induction $`\approx 200`$ G. As the induction gradient in the high-field phase relaxes due to thermally activated flux motion, the phase transformation line moves from the crystal edges towards the crystal center.
The above observations have important implications for the vortex phase diagram in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> and other layered superconductors. First of all, it is shown that the phase transition line between the low-field lattice phase and the high-field disordered vortex phase is, to within our spatial resolution ($`\approx 10`$ $`\mu `$m), sharp, and that its position can be readily identified by the difference in shielding current density developed by the two phases after a field perturbation. The phase transformation line can, depending on the ratio of these currents, move inwards from the crystal edge, which happens at relatively low temperature or large field sweep rates (Figs. 2 and 3), or outwards from the crystal center, at higher temperatures near the reported "critical point" or during slow field ramps (such as in Fig. 1). In the latter case, any small "external" field perturbation is screened by the high current developed by the disordered vortex phase, so that the induction in the crystal center is held constant and equal to $`B_{sp}`$ (as in Fig. 1). This notably holds for the induction change $`\mathrm{\Delta }B`$ associated with the entropy change $`\mathrm{\Delta }S=-(\mathrm{\Delta }B/4\pi )\partial H_m/\partial T`$ at the FOT ($`H_m`$ is the FOT field). Thus, our results give a natural explanation for the apparent vanishing of $`\mathrm{\Delta }S`$ at a nonzero temperature, without the need to invoke the presence of a critical point in the phase diagram. It is not $`\mathrm{\Delta }S`$ that vanishes at $`T\approx 40`$ K, but the corresponding change $`\mathrm{\Delta }B`$ of the equilibrium flux density which cannot be observed, because it is perfectly screened by the pinning "critical" current developed in the disordered high-field phase. In other words, thermodynamic equilibrium can no longer be achieved, because, for $`T\lesssim 40`$ K, the high-field phase is pinned on the typical experimental time scale. The extra vortices needed to satisfy the constitutive relation $`B(H)`$ cannot enter the region where the high-field phase is present.
Further support for the absence of a critical point near 40 K is given by the quenching experiments of Figs. 2(a) and 3. The flux distributions shown in these plots correspond to the coexistence of the ordered low-field vortex lattice state and the disordered high-field phase. The latter is metastable since it exists at inductions that are much smaller than $`B_{sp}`$. We interpret this observation as supercooling of the disordered state, which in turn suggests that the transition at $`B_{sp}`$ is of first order. Further, the continuity with the high-temperature FOT implies that it is simply the continuation of the "lattice-to-liquid" transition into the regime of slow vortex dynamics. The observation of the present features at temperatures down to 14 K, below which the "second peak" cannot be observed at ordinary experimental timescales, means that if a critical point exists, it should lie below this temperature. This would be in agreement with the high-field vortex glass transition line of Ref. The low-field extrapolation of this line was found to intersect $`B_{sp}`$ around the same temperature.
Finally, we point out the consequences for measurements of flux dynamics. The possibility of phase coexistence should be taken into account in low-field magnetic relaxation experiments, especially those triggered by a decrease in the applied magnetic field. In such experiments, the decay rate of the global magnetic moment and of the local induction will be determined by no less than four contributions: the relaxation of the surface barrier current, flux creep in the low-field and high-field vortex phases, and the rate at which the vortex lattice recrystallizes at the phase transformation line. At temperatures below 20 K, these processes become slow and similarly impede flux transport. Supercooling of a disordered vortex phase has been previously observed in other type-II superconductors such as $`\alpha `$-Nb<sub>3</sub>Ge and NbSe<sub>2</sub>. The anomalous flux dynamics observed in the field regime close to but below the critical current peak may find a natural explanation in the "asymmetric" vortex response and flux profiles introduced in transport measurements by phase coexistence and the supercooling phenomenon.
In conclusion, we have visualized the flux distribution in the "second peak" regime in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub>. The peak effect feature, the fact that $`dM/dH_a=-1`$ below the peak, and the vanishing of $`\mathrm{\Delta }B`$ associated with the FOT at $`T\approx 40`$ K are the result of the pinning current in the high-field phase, which prohibits flux entry into this phase until the phase transformation is complete. We have observed supercooling of the high-field disordered vortex system to fields nearly half the phase transformation field $`B_{sp}`$. The results suggest that the vortex order-disorder transition at the second peak in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> is first order, and that any critical point in the phase diagram lies below 14 K.
We thank N. Motohira for providing the Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> crystal, and M.V. Feigel'man, P.H. Kes, A. Soibel, A. Sudbø, and E. Zeldov for fruitful discussions.
no-problem/9912/cond-mat9912164.html | ar5iv | text | # A Comment on โSuperconducting-Normal Phase Transition in (Ba1-xKx)BiO3, x = 0.40, 0.47โ by B. F. Woodfield, D. A. Wright, R. A. Fisher, N. E. Phillips and H. Y. Tang, Phys. Rev. Lett. 83, 4622 (1999)
In a recent paper by Woodfield et al., comments have been made on our earlier paper, their reference. In this paper it is stated (a) that there is a specific heat discontinuity at the superconducting phase transition in (Ba<sub>1-x</sub>K<sub>x</sub>)BiO<sub>3</sub> (for x = 0.4 and 0.47) of the order of a few mJ/mole-K, (b) that this is what is expected, and (c) that there is no reason to invoke a higher order transition, as we recently suggested.
The logical foundation of our paper, which suggests that the superconducting-normal transition in (Ba<sub>1-x</sub>K<sub>x</sub>)BiO<sub>3</sub> with x = 0.40 may be of order IV, rests on three independent observations. They are: (1) the lack of an observed discontinuity in magnetic susceptibility at the superconducting transition (T<sub>c</sub>), $`\mathrm{\Delta }\chi `$ = 0, (2) near T<sub>c</sub>, the thermodynamic critical field, H<sub>0</sub>(T), obtained by the integration of the magnetization M(H) at different temperatures, depends on temperature as (1-T/T<sub>c</sub>)<sup>2-μ</sup>, where $`\mu `$ is small and $`<`$ 1, and (3) the lower critical field near T<sub>c</sub> fits the expression H<sub>c1</sub>(T) $`\propto `$ (1-T/T<sub>c</sub>)<sup>3</sup>.
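To make the exponent comparison concrete, the short sketch below evaluates the competing temperature dependences near T<sub>c</sub>; the particular values of α and μ are illustrative, chosen only to satisfy the constraints stated above (α small, μ small and less than 1).

```python
# Compare second-order scaling, H0 ~ t^(1-alpha) and Hc1 ~ t, against the
# observed behaviour, H0 ~ t^(2-mu) and Hc1 ~ t^3, with t = 1 - T/Tc.
alpha, mu = 0.05, 0.3   # illustrative exponent values
for t in (0.10, 0.05, 0.01):
    print(f"t={t:4.2f}  second order: H0~{t**(1 - alpha):.3f}, Hc1~{t:.3f}   "
          f"observed: H0~{t**(2 - mu):.4f}, Hc1~{t**3:.6f}")
# The measured fields vanish much faster near Tc than any second-order
# scenario allows, which is the basis for invoking a higher-order transition.
```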
The fact that, in all of the samples we have measured, the M vs. H curves (1) never approach H<sub>c2</sub> linearly, as required for a second order transition from the Abrikosov state, and (2) the slope varies smoothly into the normal state, substantiates our first observation stated above. Contrary to the statement by Woodfield et al. that a discontinuity was observed by Hundley et al. in their constant field temperature dependent susceptibility measurements, they did see a discontinuity at T<sub>c1</sub>, where complete flux exclusion occurs, but no discontinuity at T<sub>c2</sub>, the normal to superconducting transition. Thus, observation (1) is consistent with a vanishing specific heat discontinuity, $`\mathrm{\Delta }`$C = 0, and implies that the transition *cannot be second order*. The assertion for a IV order transition comes from observation (2). In the end, observation (3) is a verification of a model for a IV order phase transition. *All three of the experimental observations are inconsistent with a second order phase transition.* If the transition were II order, then the exponents in (2) and (3) would be, respectively, 1-$`\alpha `$ and 1, where $`\alpha `$ is the small specific heat exponent.
In addition, no finite value of $`\mathrm{\Delta }`$C at T<sub>c</sub> had been measured in the published data of Hundley et al. and Stupp et al., even though their results indicate that, in the normal state, $`\gamma `$ = 150 mJ/mole-K<sup>2</sup>. This large value of $`\gamma `$ made the problem look acute in that the difference between the expected size of the discontinuity and the experimental uncertainty was large. In Woodfield et al., there is no mention of the volume fraction of the samples that become superconducting, making the magnitude of their estimates of $`\gamma \approx `$ 1 mJK<sup>-2</sup> of unknown accuracy even if the transition were second order.
The expectation that $`\mathrm{\Delta }`$C $`\approx `$ 1 mJK<sup>-2</sup> is based on a BCS expression, which assumes that the transition is second order. For example, if we put the observed temperature dependence of H<sub>0</sub>(T) in the expression used by Batlogg et al., we get $`\mathrm{\Delta }`$C = 0. Similarly, the expressions described in Ref. are also suspect because they assume a value for $`\kappa `$ which, in light of its temperature dependence, leads to $`\mathrm{\Delta }`$C = 0.
Interpreting the data of Woodfield et al. is problematic because of the apparent temperature independence of the anomaly. For the highest T<sub>c</sub>, x = 0.4, samples the magnitude of the specific heat anomaly changes with field, but the temperature location of the anomaly changes very little. There is a large amount of scatter in the data presented, but one can argue that the peak in C for H = 1 T is actually higher in temperature than the one for H = 0.5 T. In addition, there appears to be no anomaly at any temperature for fields $`>`$ 3 T, even though in all other measurements the transition is only reduced to about 22 K at 5 T, with H<sub>c2</sub> exceeding 20 T at 4 K. For the x = 0.47 sample, no peak is observed for fields $`>`$ 0.5 T, and we point out that the temperature of the observation of the peaks is in the range 12 - 17 K, where we previously noted anomalies in the critical field vs. temperature curves. *We must conclude that whatever these data may represent, they may not be the onset of superconductivity.*
A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by NSF Cooperative Agreement No. DMR-9527035 and by the State of Florida.
no-problem/9912/astro-ph9912208.html | ar5iv | text | # Untitled Document
THE CERES ASTRONOMICAL DATABASE
E. Xanthopoulos, N. J. Jackson
Jodrell Bank Observatory, United Kingdom
I. Snellen
IoA, University of Cambridge, United Kingdom
J. Dennett-Thorpe
Kapteyn Institute, Groningen, The Netherlands
K.-H. Mack
Istituto di Radioastronomia del CNR, Bologna, Italy
Key words: Astronomical archive - gravitational lenses - active galactic nuclei
ABSTRACT
The CERES Astronomical Archive was created initially in order to make the large amount of data that were collected from the two surveys, the Jodrell-VLA Astrometric Survey (JVAS) and the Cosmic Lens All-Sky Survey (CLASS), easily accessible to the partner members of the Consortium for European Research on Extragalactic Surveys (CERES) through the web. Here we describe the web-based archive of the 15,000 flat spectrum radio sources. There is a wealth of information that one can access, including downloading the raw and processed data for each of the sources. We also describe the search engines that were developed so that a) one can search through the archive for an object or a sample of objects by setting specific criteria and b) one can select new samples. Since new data are continually gathered on the same sources from follow-up programs, an automatic update engine was created so that the new information and data can be added easily to the archive.
INTRODUCTION
The Consortium for European Research on Extragalactic Surveys (CERES) is a TMR research network that was created in order to work on two major surveys, the Jodrell-VLA Astrometric Survey (JVAS; Patnaik et al. 1992; Patnaik 1993; Browne et al. 1998, Wilkinson et al. 1998) and the Cosmic Lens All-Sky Survey (CLASS; Browne et al. 1998), which together contain a total of $`\sim `$15,000 flat spectrum radio sources. Initially the main research objective was to find new gravitational lens systems (17 lenses have been found up to now), and to use the unbiased samples of gravitational lenses for constraints on cosmological parameters. CERES was also set up to study high- and low-redshift AGN.
The survey consisted of observations of all the flat spectrum radio sources in the Northern sky (declination between 0° and 75°, galactic latitude $`\geq `$ 10°, spectral index $`\alpha \geq -0.5`$ based on the flux densities in the GB6 (Gregory et al. 1996) and the NVSS (Cotton et al. 1996) surveys) that had a GB6 5 GHz flux density $`\geq `$ 30 mJy. The survey was done in different epochs (called JVAS 1, 2 and 3 and CLASS 1, 2, 3 and 4), but a recalibration of all the data was performed in 1999 in order to achieve a uniform data reduction and sample. Following the above criteria, 10,499 sources compiled the so-called "statistically complete sample" which forms the basis of the archive, while the rest of the observations that do not follow the above criteria comprise the so-called "supplementary sample". The sources in the "statistically complete sample" were named following the naming convention of the GB6 catalogue (J2000 coordinates). For the supplementary sample the sources are known by names of their original samples, which are based on two different naming systems depending on when they were selected.
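The flat-spectrum cut can be summarized in a few lines. The sketch below computes the two-point spectral index between the NVSS and GB6 frequencies and applies the selection thresholds quoted above; the function names and the example flux densities are ours, purely for illustration.

```python
import math

def spectral_index(s_nvss_mjy: float, s_gb6_mjy: float,
                   nu1_ghz: float = 1.4, nu2_ghz: float = 4.85) -> float:
    """Two-point spectral index alpha, with S ~ nu^alpha."""
    return math.log(s_gb6_mjy / s_nvss_mjy) / math.log(nu2_ghz / nu1_ghz)

def in_complete_sample(s_nvss_mjy: float, s_gb6_mjy: float) -> bool:
    """GB6 flux density >= 30 mJy and spectral index >= -0.5."""
    return s_gb6_mjy >= 30.0 and spectral_index(s_nvss_mjy, s_gb6_mjy) >= -0.5

print(in_complete_sample(60.0, 45.0))   # alpha ~ -0.23 -> True
print(in_complete_sample(120.0, 35.0))  # alpha ~ -0.99 -> False (too steep)
```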
In order to facilitate the use of the initial and follow-up data by the CERES partner members, and by the scientific world when the archive becomes publicly available, an archive/database was created that holds all the data and background information. Publicly available information can be found at the URL address:
http://www.jb.man.ac.uk/~ceres1.
THE DATA ARCHIVE
As soon as one enters the data archive one has the option to link to:
* the statistically complete sample
* the supplementary sample
* the statistically complete sample using the initial survey names
A. The statistically complete sample
10,499 webpages have been created, each containing all the available information for each source in the "statistically complete sample". By clicking on the "statistically complete sample" one has the option to a) download the whole sample webpages, b) download the webpages from sources ranging over a 1 hour RA interval, or c) download a more simple webpage format of the whole sample.
By clicking on a specific GB6 name/source one can then access a webpage of the format shown in Figure 1 (the figure has been created from a combination of the webpages of two sources so that it shows all the possible information that can be available from the archive).
Information and data that are available for each source are as follows:
* General Information: This includes the official GB6 J2000 and B1950 name for each source obtained from the GB6 position in the manner described in Gregory et al. 1996, as well as J2000 and B1950 coordinates from the initial pointing (NVSS, WENSS, TEXAS) that were available at the time of observations, the GB6 5 GHz coordinates and finally the CLASS 8.4 GHz coordinates, with an accuracy of 200 mas, that came out from the recalibration of all the CERES data.
* Flux densities: For each source one can find the WENSS (0.325 GHz), TEXAS (0.365 GHz), NVSS (1.4 GHz) and GB6 (4.85 GHz) flux densities in mJy, if available. One has also the option to download a "flux map". By providing the coordinates of your object, a cross-correlation with all the available catalogues is performed and the "flux map" is returned. This gives all the flux densities known for that source and their positions. This map enables the validity of the identifications of sources in different catalogues to be checked at a glance.
* Radio map information: This entry supplies the information that we have from the initial radio data of the survey, namely a) the 8.4 GHz flux density derived from the VLA data, b) the old observation name by which the source was known at the time of the observations, c) the date of the observations, d) the number of visibilities, e) the CLASS epoch (when the source was observed: JVAS 1, 2 or 3, CLASS 1, 2, 3 or 4), and f) the CLASS tape number (this refers to the archive tapes kept at Jodrell Bank).
* Flags: Other information relevant to the radio properties of the target can be obtained with flags. This includes: a) the 1.4 GHz NVSS map of the source, consisting of a 4 arcmin diameter circle around the GB6 position, which can be downloaded by clicking on the button, b) one can also check whether from the data reduction it was found that the source has multiple components, or c) the number of fields processed in AIPS (these reduced maps can also be downloaded by clicking on the appropriate number), and d) a quality control factor for each map that is defined as the ratio of the peak flux density before the self-calibration divided by the final peak flux density (this is the reliability number; see the sketch after this list). Experience shows that the results on sources with reliability numbers $`>`$2 should be regarded with caution.
* Optical information: There are two main sources from which we get all the optical information: a) our targeted optical follow-up observations of the sources, which give 1. the morphological type, 2. the redshift, 3. the redshift error, 4. the date of observation, 5. the names of the observers and reducers, 6. the lines that were used in the spectra for determining the redshift, and 7. any specific notes for the object from the optical observations; b) from the APM archive we get the following information for each source: 1. the offset of the optical position from our radio position, 2. the APM R ID (galaxy, stellar etc.), 3. the APM PSF in the Red, 4. the R magnitude, 5. the APM B ID, 6. the APM PSF in the Blue, 7. the B magnitude, 8. the colour (B-R), 9. the RA and 10. the DEC J2000 optical coordinates.
* The data: Actual data that can now be downloaded: 1. VLA 8.4 GHz A-configuration raw data (in FITS format), 2. VLA 8.4 GHz maps (in gif format), 3. VLA 8.4 GHz maps (postscript compressed files), 4. APM map (4$`\times `$4 arcmin in gif format), 5. APM list (text file that identifies the sources seen in the APM map). There is also a direct link to the DSS (Digital Sky Survey) with automatic input of the coordinates. One has only to select the type of format of the data and the size of the map and click to download. There are more entries for future MERLIN, HST, optical spectra, X-ray and other wavelength data that will be available to download.
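A minimal version of the map quality control described in the "Flags" item above: the reliability is the ratio of the peak flux density before self-calibration to the final peak flux density, with values above 2 flagged. The threshold follows the text; the function itself is an illustrative sketch, not archive code.

```python
def reliability(peak_before_selfcal_mjy: float, final_peak_mjy: float) -> float:
    """Quality-control factor ("reliability number") for a reduced map."""
    return peak_before_selfcal_mjy / final_peak_mjy

def flag_map(peak_before_selfcal_mjy: float, final_peak_mjy: float) -> str:
    r = reliability(peak_before_selfcal_mjy, final_peak_mjy)
    caution = "  (treat with caution)" if r > 2.0 else ""
    return f"reliability = {r:.2f}{caution}"

print(flag_map(42.0, 40.0))   # ~1.05, acceptable
print(flag_map(95.0, 38.0))   # 2.50, flagged
```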
B. The supplementary sample
A separate webpage archive is created for all the observations that do not fall in the "statistically complete sample" as defined using the criteria above. The webpages have the same format and type of information described above, where available.
THE SEARCH ENGINES
In order to search for what is in the database for specific objects, and also to select new samples, two search engines have been developed:
1. Selection Engine: The web-interface allows the user to select sources on their radio position, flux densities in different surveys, spectral indices, redshift and optical parameters. It selects sources from the masterlist-file, a text file with a collection of all the information that we have for all the sources in our archive. It returns those sources from the masterlist with the specified criteria, giving all the parameters available (class name, GB6 positions, WENSS, NVSS, GB6, CLASS flux densities, APM R and B magnitudes, redshift, morphological type, CLASS and NVSS positions as well as links to available optical spectra or images and VLA maps). In addition it provides links to the individual source-webpages, to NVSS images, DSS images, to other databases like NED and APM, and also the option of a text-file output version.
2. Matching-list Engine: This feature allows the user to see whether a list of objects is in the database, by entering a list of GB6 names, J2000 positions or both. This list is checked against the masterlist (using a user-defined search radius); a minimal sketch of such a positional match is given after this list. The outcome is similar to the above.
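The core of such a positional match is an angular-separation test against the masterlist. The toy masterlist entries, source names and radius below are illustrative only, not archive contents.

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation (arcsec) between two J2000 positions in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(min(1.0, cos_sep))) * 3600.0

masterlist = [("GB6J0000+0000", 0.0000, 0.0000),
              ("GB6J0100+3000", 15.0000, 30.0000)]   # toy entries

def match(ra_deg, dec_deg, radius_arcsec=5.0):
    return [name for name, mra, mdec in masterlist
            if ang_sep_arcsec(ra_deg, dec_deg, mra, mdec) <= radius_arcsec]

print(match(15.0004, 30.0002))  # ~1.5 arcsec away -> matches GB6J0100+3000
```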
AUTOMATIC UPDATING
Since we plan to add new data from radio and optical observations, in order to automate everything, an "update" program has been developed. An authorised user can use this program to add new data to the database. The new data may be one of several classes (e.g. redshift, new optical or radio images, spectra, or comments) which the user is required to declare, and the input formatted accordingly. The program then runs a series of checks and informs the user of any errors, before updating the webpages and associated lists. Because of security problems with web-based programs, we decided not to implement a web version of this program, as originally planned.
References
Browne, I. W. A., Jackson, N. J., Augusto, P., Henstock, D. R., Marlow, D. R., Nair, S., Wilkinson, P. N., 1998. The JVAS/CLASS gravitational lens surveys, in Bremer, M. N., Jackson, N., Pรฉrez-Fournon, I., eds, Observational Cosmology with the new radio surveys, Astrophysics and Space Science Library, Vol. 226. Dordrecht: Kluwer Academic Publishers, p. 323
Browne, I. W. A., Wilkinson, P. N., Patnaik, A. R., Wrobel, J. B., 1998. Interferometer phase calibration sources. II - The region 0$``$ $`\delta _{B1950}`$ $``$$`+`$20 , MNRAS, 293, 257
Cotton, W. D., Condon, J. J., Yin, Q. F., et al., 1996. The NRAO VLA D-Array Sky Survey (NVSS), in proceedings of the 175th Symposium of the International Astronomical Union, eds. Ron D. Ekers, C. Fanti, and L. Padrielli, Kluwer Academic Publishers, p. 503
Gregory, P. C., Scott, W. K., Douglas, K., Condon, J. J., 1996. The GB6 Catalog of Radio Sources, ApJS, 103, 427
Patnaik, A. R., Browne, I. W. A., Wilkinson, P. N., Wrobel, J. M., 1992, MNRAS, 254, 655
Patnaik, A. R., 1993, Proceedings of the 31st Liรจge International Astrophysical Colloquium โGravitational Lenses in the Universeโ, p. 311
Wilkinson, P. N., Browne, I. W. A., Patnaik, A. R., Wrobel, J. M., Sorathia, B., 1998, MNRAS, 300, 790 |
no-problem/9912/nucl-th9912044.html | ar5iv | text | # Pion-Baryon Couplings
## Abstract
We have extended and applied a general QCD parameterization method to the emission of pions from baryons. We use it to calculate the strength and sign of the coupling of pions to the octet and decuplet of baryons. Certain relations between octet and decuplet couplings are pointed out.
In 1989, Morpurgo introduced a parameterization for the properties of hadrons, which expresses masses, magnetic moments, transition amplitudes, and other properties of the baryon octet and decuplet in terms of a few parameters. The method uses only general features of QCD and baryon descriptions in terms of quarks. Recently, Dillon and Morpurgo have shown that the method is independent of the choice of the quark mass renormalization point in the QCD Lagrangian. They have also extended the method to nucleon electromagnetic form factors and radii.
In addition to the electromagnetic properties of baryons, i.e., the interaction with an external photon field, it is possible to consider the interaction of a baryon with an external pion field, and to calculate the pion-baryon couplings. Due to the internal quark structure of the pion, this problem is rather different from the ones treated previously. Despite the additional difficulties due to the pion's size and mass, the Morpurgo method is nevertheless applicable here, as we argue below.
The Morpurgo method is based on the following considerations. For the observable at hand one formally writes the matrix element of a QCD operator $`\mathrm{\Omega }`$ between QCD eigenstates expressed explicitly in terms of quarks and gluons. This matrix element can, with the help of the unitary operator $`V`$, be reduced to a calculation in the basis of auxiliary (model) three-quark states $`\mathrm{\Phi }_B`$
$$\langle B|\mathrm{\Omega }|B\rangle =\langle \mathrm{\Phi }_B|V^{\mathrm{\dagger }}\mathrm{\Omega }V|\mathrm{\Phi }_B\rangle =\langle W_B|\mathcal{O}|W_B\rangle .$$
(1)
Both the unitary operator $`V`$ and the model states $`\mathrm{\Phi }_B`$ are defined in Ref. The $`\mathrm{\Phi }_B`$ are pure $`L=0`$ three-quark states excluding any quark-antiquark or gluon components. $`W_B`$ stands for the standard three-quark $`SU(6)`$ spin-flavor wave functions. The operator $`V`$ dresses the auxiliary states $`\mathrm{\Phi }_B`$ with $`q\overline{q}`$ components and gluons and thereby generates the exact QCD eigenstate. Furthermore, the operator $`V`$ contains a Foldy-Wouthuysen transformation, which transforms the original 4-component Dirac spinor $`B`$ into a two-component Pauli spinor contained in $`W_B`$.
One then writes the most general expression for $`\mathcal{O}`$ compatible with the space-time and inner QCD symmetries. The orbital and color space matrix elements are absorbed in unknown parameters multiplying the various invariants appearing in the expansion of $`\mathcal{O}`$. As an example, for the squared charge radius $`\mathcal{O}`$ we need a scalar operator linear in the quark charge $`Q_i`$. The lowest order one is just $`\sum _iQ_i`$. The next higher order expression is $`\sum _{ij}Q_i\mathbf{\sigma }_i\cdot \mathbf{\sigma }_j`$, where the sum is over all quarks. A three-body term is $`\sum _{ijk}Q_i\mathbf{\sigma }_j\cdot \mathbf{\sigma }_k`$, so that the full expression reads
$$r_B^2=A\sum _iQ_i+B\sum _{ik}Q_i\mathbf{\sigma }_i\cdot \mathbf{\sigma }_k+C\sum _{ijk}Q_i\mathbf{\sigma }_j\cdot \mathbf{\sigma }_k+\dots .$$
The expectation value of the three-quark term is expected to be $`1/3`$ of the two-quark term, which in turn should be $`1/3`$ of the one-body term. The reasons for this hierarchy of expressions are discussed in Ref.
Coming back to the nucleon pion coupling and writing the standard effective pseudovector coupling
$$H=\frac{f}{m_\pi }\overline{\psi }\gamma _\mu \gamma _5\stackrel{}{\tau }\psi \cdot \partial ^\mu \stackrel{}{\varphi },$$
(2)
it appears that, for a nucleon at rest and in the limit of small four-momentum transfer to the pion, the operator $`\mathrm{\Omega }`$ (no matter how complicated) must be such that
$$\langle N|\mathrm{\Omega }|N\rangle =\frac{f}{m_\pi }\mathbf{\sigma }\cdot \mathbf{\nabla }\,\stackrel{}{\tau }\cdot \stackrel{}{\varphi }.$$
(3)
Eq.(3) defines the coupling constant $`f`$ of the (point) pion field $`\stackrel{}{\varphi }`$ to the nucleon, $`\mathbf{\sigma }`$ is the nucleon spin and $`\stackrel{}{\tau }`$ the isospin matrix. It is understood that the right-hand side of Eq.(3) is calculated between the spin-isospin state of the nucleon. In the limit of small four-momentum transfer $`\mathbf{\nabla }\stackrel{}{\varphi }=i\mathbf{k}\stackrel{}{\varphi }`$. As noted in Ref., even if the right-hand side is non-covariant, referring to the rest frame, the theory is relativistically complete. There can be no other spin-structure for the pion-nucleon interaction, in the limit of small $`\mathbf{k}`$.
Because the right-hand side of Eq.(3) has a structure similar to the magnetic moment operator, we proceed as in Refs. As shown in detail by Morpurgo, the most general parameterization of the axial vector coupling operator $`\mathcal{O}`$ for octet and decuplet baryons, Eq.(3), can be classified in terms of one-quark, two-quark, and three-quark terms plus Trace (closed loop) terms. The latter are not present here.
For the case under discussion the one-body operator can be taken as
$$\mathcal{O}_1=A_1\sum _i\stackrel{}{\tau }^i\,\mathbf{\sigma }^i\cdot \mathbf{k},$$
(4)
where the sum is over the 3 quarks present in the auxiliary state $`\mathrm{\Phi }_B`$. For our purpose, we can use
$$\mathcal{O}_1=A_1\sum _i\tau _3^i\sigma _z^ik_z.$$
(5)
With this $`\mathcal{O}_1`$, the most general two-body term is given by
$$\mathcal{O}_2=A_2\sum _{i,j\ne i}\tau _3^i\sigma _z^jk_z.$$
(6)
Although one can make up further two-body terms, e.g., $`(\stackrel{}{\tau }^i\times \stackrel{}{\tau }^j)_3(\mathbf{\sigma }^i\times \mathbf{\sigma }^j)_zk_z`$, Dillon and Morpurgo have shown that they are not independent. One can also make up three-body terms as $`\mathcal{O}_3=A_3\tau _3^i\sigma _z^ik_z\,\mathbf{\sigma }^j\cdot \mathbf{\sigma }^k`$ and others; we shall neglect them here. In that case, the operator we need is
$$\mathcal{O}=\mathcal{O}_1+\mathcal{O}_2.$$
(7)
The auxiliary (model) states $`\mathrm{\Phi }_B`$ used for calculating the expectation values of the operator $`\mathcal{O}`$ coincide with the familiar $`SU(6)`$ states multiplied by a spatial wave function with orbital angular momentum $`L=0`$. The $`SU(6)`$ states are listed, e.g., in Ref. We only need the completely symmetric spin-isospin part, $`W_B`$. The radial and color parts of the matrix elements are absorbed in the constants $`A_1`$ and $`A_2`$.
We present, separately, the quark model matrix elements of the operator in Eq.(7) to first order (one-body terms) and second order corrections (two-body terms) in Table I. $`A_1`$ and $`A_2`$ are constants to be determined below, and $`r`$ is the ratio $`\frac{m_u+m_d}{2m_s}=\frac{m_u}{m_s}`$, where $`m_i`$ is the mass of quark $`i`$. The reason we include $`r`$ in the two-body term is that a two-body gluon exchange between particles 1 and 2 would be inversely proportional to the masses of the two quarks. This approximate way of taking into account SU(3) symmetry breaking works quite well for magnetic moments. We neglect SU(3) symmetry breaking in the one-body term. A more rigorous treatment of SU(3) symmetry breaking requires the introduction of two constants.
In order to obtain from the quark model matrix elements in Table I the conventional pion-baryon couplings, additional overall factors are needed. The pion-octet baryon couplings are defined for spin projection $`m_s=+1/2`$ and maximal isospin projection of the corresponding baryon level operator evaluated between baryon spin and isospin wave functions. Similarly, the $`\mathrm{\Delta }\mathrm{\Delta }\pi `$ coupling is defined for maximal spin and isospin projection.
If we neglect the two-body operators, then the pion coupling to the nucleon is sufficient to fix the unknown constant $`A_1`$ of the theory. When two-body corrections are included, we use the decay of the $`\mathrm{\Delta }(1232)`$ to fix the second constant $`A_2`$. The $`N\mathrm{\Delta }\pi `$ coupling is of the form
$$H_{N\mathrm{\Delta }\pi }=\frac{f_{N\mathrm{\Delta }\pi }}{m_\pi }\mathbf{S}^{\mathrm{\dagger }}\cdot \mathbf{\nabla }\,\stackrel{}{T}^{\mathrm{\dagger }}\cdot \stackrel{}{\varphi }+\mathrm{h}.\mathrm{c}.,$$
(8)
where $`\mathbf{S}^{\mathrm{\dagger }}`$ and $`\stackrel{}{T}^{\mathrm{\dagger }}`$ are transition spin and isospin operators; they are defined such that their matrix elements are simple Clebsch-Gordan coefficients. The coupling $`f_{N\mathrm{\Delta }\pi }`$ is taken to be $`2f`$, which gives the $`\mathrm{\Delta }(1232)`$ its experimental width of about 130 MeV.
Without two-body terms, $`A_1`$ is fixed by $`\frac{f^2}{4\pi }=0.08`$, and $`A_1=(3/5)f`$. The first entry in Table I is equal to $`g_A`$, the axial coupling constant in the additive quark model. In this approximation, one obtains the well-known additive quark model result for the $`N\mathrm{\Delta }\pi `$ coupling $`f_{N\mathrm{\Delta }\pi }^2=(72/25)f^2`$. If we include the two-body terms, then we also need the empirical relation $`f_{N\mathrm{\Delta }\pi }\approx 2f`$ to fix $`A_2`$. In this case, we obtain $`A_1\approx 0.53f`$ and $`A_2\approx -0.18f`$, so that $`A_2/A_1\approx -1/3`$, a quite substantial correction to the additive quark model. For exact SU(3) symmetry, $`r=1`$, but if SU(3) is broken, then $`r\approx 330/550\approx 0.6`$.
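As a cross-check, the two conditions above can be solved numerically. The SU(6) coefficients used below, $`f=(5/3)A_1-(2/3)A_2`$ for the nucleon and $`f_{N\mathrm{\Delta }\pi }=2\sqrt{2}(A_1-A_2)`$ for the transition, are our reading of the standard quark-model matrix elements, since Table I is not reproduced here.

```python
import math

f = 1.0                                   # work in units of the pi0-p coupling
f_ndelta = 2.0 * f                        # empirical input quoted above

diff = f_ndelta / (2.0 * math.sqrt(2.0))  # A1 - A2 from the transition condition
a2 = f - (5.0 / 3.0) * diff               # substitute A1 = A2 + diff into
a1 = a2 + diff                            # (5/3)A1 - (2/3)A2 = f

print(f"A1 = {a1:.2f} f, A2 = {a2:.2f} f, A2/A1 = {a2 / a1:.2f}")
print(f"f_DeltaDelta = {4.0 / 3.0 * (a1 + 2.0 * a2):.2f} f")  # ~0.23 f, as quoted
```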
Table II lists the various couplings in terms of $`f`$, the $`\pi ^0p`$ coupling constant, to first order and to second order with and without the inclusion of $`r`$. We recall that for the decuplet-octet transition couplings, baryon level spin and isospin Clebsch-Gordan coefficients $`(\mathrm{1\hspace{0.17em}0}\,SS_z|S^{\prime }S_z^{\prime })(\mathrm{1\hspace{0.17em}0}\,TT_z|T^{\prime }T_z^{\prime })`$ are needed in order to convert the quark level matrix elements in Table I to the (baryon level) coupling constants listed in Table II. Here, $`S(T)`$ refers to octet, and $`S^{\prime }(T^{\prime })`$ to decuplet baryons. Similarly, in order to obtain the baryon level decuplet-decuplet couplings one uses a conversion factor $`1/(T_z^{\prime }S_z^{\prime })`$. For example, the entry for $`\mathrm{\Delta }^+`$ in Table II contains a factor $`4/3`$ to go from quarks to the $`\mathrm{\Delta }^+`$. Without SU(3) breaking the decuplet-decuplet $`(DD)`$ couplings can then be generically written as $`f_{DD}=(4/3)(A+2B)`$, which implies, e.g., $`f_{\mathrm{\Delta }^{++}\mathrm{\Delta }^{++}\pi ^0}=f_{\mathrm{\Delta }^+\mathrm{\Delta }^+\pi ^0}`$.
Our results satisfy the following relation in the SU(3) symmetric case
$`f_{\pi ^0p}+f_{\pi ^0\mathrm{\Xi }^0}`$ $`=`$ $`f_{\pi ^0\mathrm{\Sigma }^+}.`$ (9)
Note that the second-order correction is especially large for the $`\mathrm{\Xi }^0`$, because the coefficient in front of $`A_2`$ is 4 times as large as that in front of $`A_1`$. The two-quark operator $`๐ช_2`$ changes the $`\pi ^0\mathrm{\Delta }^+\mathrm{\Delta }^+`$ coupling from the first order value $`f_{\pi \mathrm{\Delta }\mathrm{\Delta }}=(4/5)f`$ to the total result $`f_{\pi \mathrm{\Delta }\mathrm{\Delta }}=0.23f`$. We agree with Brown and Weise for those pion couplings they calculated using a one-quark operator, namely to the nucleon and $`\mathrm{\Delta }`$.
The $`\pi \mathrm{\Sigma }\mathrm{\Lambda }`$ and the $`\pi \mathrm{\Sigma }^{}\mathrm{\Lambda }`$ couplings remain unaffected by SU(3) symmetry breaking. Irrespective of the value of $`r`$ our octet-decuplet transition couplings satisfy the sum rule
$$\sqrt{2}f_{\mathrm{\Delta }^+p}=f_{\mathrm{\Xi }^0\mathrm{\Xi }^0}+\sqrt{6}f_{\mathrm{\Sigma }^+\mathrm{\Sigma }^+}f_{\mathrm{\Sigma }^0\mathrm{\Lambda }^0}.$$
(10)
This relation is not new. It has been derived before using SU(3) symmetry and its breaking to first order.
Next, we compare our results for the transition couplings to those obtained in the large $`N_c`$ approach . Because the couplings in are computed for $`\pi ^+`$ emission, we calculate the matrix elements in Eq.(1) with $`\tau _z`$ in Eq.(7) replaced by $`\tau _+`$. We then obtain for the transition $`\mathrm{\Delta }^{++}p\pi ^+:4\sqrt{3}(AB)/3`$, for $`\mathrm{\Sigma }^+\mathrm{\Sigma }^0\pi ^+:2\sqrt{2}(A(2r1)B)/3`$, and finally for $`\mathrm{\Sigma }^+\mathrm{\Lambda }^0\pi ^+:2\sqrt{6}(AB)/3`$. By taking ratios of two transition couplings we get for the case $`r=1`$
$$\frac{\mathrm{\Delta }^{++}p}{\mathrm{\Sigma }^+\mathrm{\Sigma }^0}=\sqrt{6}(3.06),\frac{\mathrm{\Sigma }^+\mathrm{\Sigma }^0}{\mathrm{\Sigma }^+\mathrm{\Lambda }^0}=\frac{1}{\sqrt{3}}(0.46)$$
(11)
The numbers in parentheses include SU(3) symmetry breaking in the two-quark term $`(r=0.6)`$. These results are in agreement with those obtained in the large $`N_c`$ approach , including the next-to-leading order corrections, which is undoubtedly more than a numerical coincidence.
In Table III we compare the couplings we obtain with the inclusion of the two-body terms to those derived by other means. The values of the coupling constants from Stoks and Rijken (SR) are obtained from fits to baryon-nucleon scattering data and one-boson exchange potentials. The difference in sign from SR in the entry to Table III is because they use the coupling for $`\pi ^+\mathrm{\Sigma }^+\mathrm{\Lambda }^0`$. The SR couplings are essentially the same as those of Maessen, Rijken, and de Swart . The columns labeled KDOL are obtained from QCD sum rules with the use of SU(3) and โbeyondโ SU(3) by correcting for mass effects. Our values tend to be closer to those of Stoks and Rijken . The latter also satisfy Eq.(9) in contrast to the SU(3) symmetric values of Ref..
Finally, we point out certain analytical relations between octet and decuplet baryon couplings to pions that emerge from our theory (neglecting three-quark terms)
$`f_{\pi ^0p}{\displaystyle \frac{1}{4}}f_{\pi ^0\mathrm{\Delta }^+\mathrm{\Delta }^+}`$ $`=`$ $`{\displaystyle \frac{\sqrt{2}}{3}}f_{\pi ^0p\mathrm{\Delta }^+}`$ (12)
$`f_{\pi ^0\mathrm{\Sigma }^+}{\displaystyle \frac{1}{2}}f_{\pi ^0\mathrm{\Sigma }^+\mathrm{\Sigma }^+}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{6}}}f_{\pi ^0\mathrm{\Sigma }^+\mathrm{\Sigma }^+}`$ (13)
$`f_{\pi ^0\mathrm{\Xi }^0}{\displaystyle \frac{1}{4}}f_{\pi ^0\mathrm{\Xi }^0\mathrm{\Xi }^0}`$ $`=`$ $`{\displaystyle \frac{1}{3}}f_{\pi ^0\mathrm{\Xi }^0\mathrm{\Xi }^0}.`$ (14)
They are a consequence of the underlying unitary symmetry, and are valid for all values of the strange quark mass. Eq.(12) can be used to predict the elusive decuplet couplings from the experimentally better known octet and decuplet-octet transition couplings. As far as we know, these relations are new.
In summary, we have used the Morpurgo formalism to predict pion-baryon coupling constants. The inclusion of two-body operators leads to significant corrections of the additive quark model values. Finally, we hope that this work will stimulate further research along these lines, such as the inclusion of three-quark operators and a more rigorous treatment of SU(3) flavor breaking.
Acknowledgement: This work has been partially supported by a U.S. DOE grant. We would like to thank Drs. G. Morpurgo and G. Dillon for useful criticism and valuable suggestions. |
no-problem/9912/nucl-th9912055.html | ar5iv | text | # Statistical aspects of nuclear coupling to continuum
\[
## Abstract
Various global characteristics of the coupling between the bound and scattering states are explicitly studied based on realistic Shell Model Embedded in the Continuum. In particular, such characteristics are related to those of the scattering ensemble. It is found that in the region of higher density of states the coupling to continuum is largely consistent with the statistical model. However, assumption of channel equivalence in the statistical model is, in general, violated.
\]
Relating properties of nuclei to the ensembles of random matrices is of great interest. A potential agreement reflects those aspects that are generic and thus do not depend on the detailed form of the Hamiltonian matrix, while deviations identify certain system-specific, non-random properties of the system. On the level of bound states the related issues are quite well explored and documented in the literature . In many cases, however, the nuclear states are embedded in the continuum and the system should be considered as an open quantum system. Applicability of the related scattering ensemble of non-Hermitian random matrices has however never been verified by an explicit calculation due to serious difficulties that such an explicit treatment of all elements needed involves. These include a proper handling of multi-exciton internal excitations, an appropriate scattering asymptotics of the states in continuum and a consistent and realistic coupling among the two. The recently developed advanced computational scheme termed the Shell Model Embedded in the Continuum (SMEC) successfully incorporates such elements and will be used below to study conditions under which the statistical description of the continuum coupling applies.
Constructing the full SMEC solution consists of three steps. In the first step, one solves the many-body problem in the subspace $`Q`$ of (quasi-)bound states. For that one solves the multiconfigurational Shell Model (SM) problem : $`H_{QQ}\mathrm{\Phi }_i=E_i\mathrm{\Phi }_i`$ , where $`H_{QQ}QHQ`$ is the SM effective Hamiltonian which is appropriate for the SM configuration space used. For the continuum part (subspace $`P`$), one solves the coupled channel equations :
$`(E^{(+)}H_{PP})\xi _E^{c(+)}{\displaystyle \underset{c^{^{}}}{}}(E^{(+)}H_{cc^{^{}}})\xi _E^{c^{^{}}(+)}=0,`$ (1)
where index $`c`$ denotes different channels and $`H_{PP}PHP`$. The superscript $`(+)`$ means that boundary conditions for incoming wave in the channel $`c`$ and outgoing scattering waves in all channels are used. The channel states are defined by coupling of one nucleon in the scattering continuum to the many-body SM state in $`(N1)`$-nucleus. Finally one solves the system of inhomogeneous coupled channel equations :
$`(E^{(+)}H_{PP})\omega _i^{(+)}=H_{PQ}\mathrm{\Phi }_iw_i`$ (2)
with the source term $`w_i`$ which is primarily given by the structure of $`N`$ \- particle SM wave function $`\mathrm{\Phi }_i`$ and which couples the wave function of $`N`$-nucleon localized states with $`(N1)`$-nucleon localized states plus one nucleon in the continuum . These equations define functions $`\omega _i^{(+)}`$, which describe the decay of quasi-bound state $`\mathrm{\Phi }_i`$ in the continuum.
The resulting full solution of SMEC equations is then expressed as :
$`\mathrm{\Psi }_E^c=\xi _E^c+{\displaystyle \underset{i,j}{}}(\mathrm{\Phi }_i+\omega _i){\displaystyle \frac{1}{EH_{QQ}^{eff}}}\mathrm{\Phi }_jH_{QP}\xi _E^c,`$ (3)
where
$`H_{QQ}^{eff}=H_{QQ}+H_{QP}G_P^{(+)}H_{PQ}H_{QQ}+W`$ (4)
defines the effective Hamiltonian acting in the space of quasibound states. Its first term reflects the original direct mixing while the second term originates from the mixing via the coupling to the continuum. $`G_P^{(+)}`$ is the Green function for the single particle (s.p.) motion in the $`P`$ subspace. This external mixing is thus energy dependent and consists of the principal value integral and the residuum :
$`W_{ij}(E)`$ $`=`$ $`{\displaystyle \underset{c=1}{\overset{\mathrm{\Lambda }}{}}}{\displaystyle _{ฯต_c}^{\mathrm{}}}๐E^{}{\displaystyle \frac{\mathrm{\Phi }_jH_{QP}\xi _E^c\xi _E^cH_{PQ}\mathrm{\Phi }_i}{EE^{}}}`$ (5)
$``$ $`i\pi {\displaystyle \underset{c=1}{\overset{\mathrm{\Lambda }}{}}}\mathrm{\Phi }_jH_{QP}\xi _E^c\xi _E^cH_{PQ}\mathrm{\Phi }_i.`$ (6)
These two terms prescribe the structure of the real $`W^R`$ (Hermitian) and imaginary $`W^I`$ (anti-Hermitian) parts of $`W`$, respectively. The dyadic product form of the second term allows to express it as
$`W^I={\displaystyle \frac{i}{2}}\mathrm{๐๐}^T,`$ (7)
where the $`M\times \mathrm{\Lambda }`$ matrix $`๐\{V_i^c\}`$ denotes the amplitudes connecting the state $`\mathrm{\Phi }_i`$ ($`i=1,\mathrm{},M`$) to the reaction channel $`c`$ ($`c=1,\mathrm{},\mathrm{\Lambda }`$) .This form of $`W^I`$ constitutes a starting point towards statistical description of the related effects. In the latter case one assumes that the internal dynamics is governed by the Gaussian orthogonal ensemble (GOE) of random matrices. Relation of this assumption to the classical chaotic scattering can also be traced . The orthogonal invariance arguments then imply that the amplitudes $`V_i^c`$ can be assumed to be Gaussian distributed and the channels independent . Assuming, as consistent with the statistical ensemble, the equivalence of the channels one then arrives at the following distribution of the off-diagonal matrix elements of $`W^I`$ for $`\mathrm{\Lambda }`$ open channels :
$`๐ซ_\mathrm{\Lambda }(W_{ij}^I)={\displaystyle \frac{W_{ij}^I^{(\mathrm{\Lambda }1)/2}K_{(\mathrm{\Lambda }1)/2}(W_{ij}^I)}{\mathrm{\Gamma }(\mathrm{\Lambda }/2)\sqrt{\pi }\mathrm{\hspace{0.17em}2}^{(\mathrm{\Lambda }1)/2}}},`$ (8)
with $`(W_{ij}^I)^2=\mathrm{\Lambda }`$. $`K_\lambda `$ denotes here the modified Bessel function.
The physics to be addressed below by making use of the above formalism is that of a nucleus decaying by the emission of one nucleon. As an example, <sup>24</sup>Mg is taken with the inner core of <sup>16</sup>O and the phenomenological $`sd`$-shell interaction among valence nucleons . For the coupling between bound and scattering states a combination of Wigner and Bartlett forces is used, with the spin-exchange parameter $`\beta =0.05`$ and the overall strength coupling $`V_{12}^{(0)}=650\text{MeV}\text{fm}^3`$ . The radial s.p. wave functions in the $`Q`$ subspace and the scattering wave functions in $`P`$ subspace are generated from the average potential of the Woods-Saxon type .
In the above SM space, the <sup>24</sup>Mg nucleus has 325 $`J^\pi =0^+,T=0`$ states. Depending on the particle emission threshold, these states can couple to a number of open channels. Such channels correspond to excited states in the neighboring $`N1`$ nucleus.
When testing validity of the statistical model it is instructive to begin with one open channel and to compare the distribution of the corresponding matrix elements with the formula (8) for $`\mathrm{\Lambda }=1`$. In the example shown in Fig. 1, the open channel corresponds to spin 1/2 and its energy to about the middle of the spectrum. Both the imaginary (left) and real (right) parts of $`W`$ are displayed. The upper part of Fig. 1 involves all 325 $`J^\pi =0^+,T=0`$ states of <sup>24</sup>Mg. Clearly, there are too many large and also too many small matrix elements as compared to the statistical distribution (solid line) with $`\mathrm{\Lambda }=1`$ . This may originate from the fact that many states in the $`Q`$ space are localized stronger than allowed by the GOE. It is actually natural to suspect that this may apply to the states close to both edges of the spectrum. Indeed, by discarding 60 states on both ends of the spectrum (205 remain), the picture changes significantly as illustrated in the lower part of Fig. 1. In this case the statistical distribution provides a good representation, interestingly, also for the real part although applicability of the formula (8) is not directly justifiable as for the imaginary part. Similar behavior is found for majority of channels except for a limited number of them located at the edges of the spectrum. Hence, the assumption about the Gaussian distribution of amplitudes $`V_i^c`$ is well fulfilled in a generic situation.
As for the equivalence of channels, the conditions are expected to be more intricate, especially when different channel quantum numbers are involved, because the effective coupling strength depends on those quantum numbers. In addition, such a coupling strength depends also on energy $`E`$ of the particle in continuum so the proportions among the channels may vary with $`E`$. This is illustrated in Fig. 2 which shows the energy dependence of the standard deviations of distributions (as in Fig. 1) of relevant matrix elements for several different channel spin values, both for real and imaginary part of $`W`$, and their correlation coefficient. It should to be noted however that within a given spin the differences are much smaller. Instead of trying to identify (with the help of Fig. 2) a sequence of $`\mathrm{\Lambda }`$ approximately equivalent channels and to verify the resulting distribution of matrix elements of $`W`$ against formula (8) we find it more informative to make a random selection of such channels. An example for $`\mathrm{\Lambda }=10`$ and two different energies ($`E=20`$ and $`40\text{ MeV}`$) of the particle in the continuum is shown in Fig. 3. Among these 10 randomly selected channels, two correspond to spin $`1/2`$, three to spin $`3/2`$, three to spin $`5/2`$ and two to spin $`7/2`$. The distributions significantly change as compared to those of the lower part of Fig. 1. Moreover, $`๐ซ_{\mathrm{\Lambda }=10}(W_{ij}^{I,R})`$ (Eq. (8) ) (dashed lines) does not provide an optimal representation for these explicitly calculated distributions. For $`E=20\text{ MeV}`$ particle energy (the upper part of Fig. 3), the best fit in terms of the formula (8) is obtained for $`\mathrm{\Lambda }_{\text{eff}}=3.1`$ for the imaginary part and $`\mathrm{\Lambda }_{\text{eff}}=4.4`$ for the real part of $`W`$. At $`E=40\text{ MeV}`$ one obtains $`\mathrm{\Lambda }_{\text{eff}}=4.8`$ and $`\mathrm{\Lambda }_{\text{eff}}=3.1`$, correspondingly. This, first of all, indicates that effectively a smaller number of channels is involved what is caused by the broadening of the width distribution as a result of the non-equivalence of the channels . Secondly, such effective characteristics depend on the energy of particle in the continuum, what in turn is natural in view of the dependences displayed in Fig. 2. It is also interesting to notice that $`W_{ij}^R`$ obeys functionally similar distribution as $`W_{ij}^I`$ although this does not result from Eq. (5) .
The fact that generically $`\mathrm{\Lambda }_{\text{eff}}`$ is much smaller than the actual number of open physical channels can be anticipated from their obvious non-equivalence in majority of combinations as can be concluded from Fig. 2. The global distribution, especially in the tails, is dominated by stronger channels. Due to the separable form of $`W`$, which in terms of $`\mathrm{\Lambda }`$ explicitly expresses its reduced dimensionality relative to $`H_{QQ}`$, an interesting related effect in the eigenvalues of $`H_{QQ}^{eff}`$ may take place. For a sufficiently strong coupling to the continuum one may observe a segregation effect among the states, i.e., $`\mathrm{\Lambda }`$ of them may separate from the remaining $`M\mathrm{\Lambda }`$ states . This effect is especially transparent when looking at the structure of $`W^I`$. For the physical strength $`V_{12}^{(0)}`$ of the residual interaction in $`{}_{}{}^{24}\text{Mg}`$ this effect is negligible, as shown in the upper panel of Fig. 4. Only one state in this case separates from all others by acquiring a larger width. A magnification of the overall strength $`V_{12}^{(0)}`$ of the coupling to the continuum by a constant factor $`f`$ allows further states to consecutively separate. For $`f=7`$, all 10 states become unambigously separated as illustrated in the middle panel of Fig. 4. Their distance from the remaining, trapped states reflects approximately the order of their separation when $`f`$ is kept increasing. This nicely illustrates the degree of non-equivalence of the channels and the fact that $`\mathrm{\Lambda }_{\text{eff}}5`$, as consistent with Fig. 3 at $`E=40\text{ MeV}`$, is an appropriate representation for an effective number of relevant open channels. It needs also to be noticed that the segregation effect takes place also in the direction of the real energy axis, though in this sense only three states uniquely separate (again consistent with $`\mathrm{\Lambda }_{\text{eff}}=3.1`$ of Fig. 3). This direction of the separation originates from the real part of $`W`$. Incorporating an equivalent multiplication factor into $`W^I`$ only, results in a picture as shown in the lower panel of Fig. 4. No separation in energy can now be observed anymore.
In summary, the present study indicates that certain characteristics of the statistical description of nuclear coupling to the continuum, like the distribution of coupling matrix elements for one channel continuum, do indeed apply when the non-generic edge effects are removed. On the other hand, in realistic SMEC calculations we find the generic nonequivalence of channels which contradicts the orthogonal invariance arguments and results in strong reduction of the number of effectively involved channels. The quantitative identification and understanding of this effect may turn out to be helpful in postulating improved scattering ensembles which automatically account for this effect, similarly as various versions of the random matrix ensembles invented in the context of bound states. Up to now the statistical models ignore the real part of the matrix connecting the bound states to the scattering states. The real part of $`H_{QQ}^{eff}`$ is likely to be dominated by $`H_{QQ}`$, therefore, this, in many cases, may be not a bad approximation. Keeping in mind a relatively strong energy dependence of $`W^R`$ (see Fig. 2) this however may not be true in some cases, especially, because the segregation of states in energy (along the real axis) originates from this part. Interestingly, $`W^R`$ is found to obey similar statistical characteristics as $`W^I`$. This does not however yet mean that the two parts of $`W`$ can simply be drawn as independent ensembles. In fact, the individual matrix elements $`W_{ij}^I`$ and $`W_{ij}^R`$ are often strongly correlated and the degree of correlation depends on energy of the particle in the continuum. A more detailed account of such correlations will be presented elsewhere.
We thank K. Bennaceur, E. Caurier, F. Nowacki and M. Wรณjcik for useful discussions. This work was partly supported by KBN Grant No. 2 P03B 097 16 and by the Grant No. 76044 of the French-Polish Cooperation. |
no-problem/9912/astro-ph9912161.html | ar5iv | text | # Chromospherically active binaries members of young stellar kinematic groups
## 1. Introduction
Activity-rotation and activity-age relationships have been found in many studies of late-type stars. The rotation rate moderates the dynamo mechanism which generates and amplifies the magnetic fields in the convective zone, but there is a further relationship between rotation and age. Rotation rates decline with age because stars lose angular momentum through the coupling of the magnetic field and stellar mass loss, and thus there is an indirect trend of decreasing activity with increasing age. Chromospherically active binaries (CAB) are detached binary systems with cool components characterized by strong chromospheric, transition region, and coronal activity. CAB can lose angular momentum, but maintain high rotation rates and activity levels by a decrease in their component separation (synchronization of rotation and orbital periods). Samples of CAB with the same age are of maximum interest to better understand the magnetic activity of these systems.
Some late-type spectroscopic binaries have been identified as members of well known open clusters (Montes 1999, and references therein), but only a few are well known CAB. Stellar kinematic groups (SKG) are kinematically coherent groups of stars that share a common origin, and thus offer another way to compile samples of stars with the same age. The youngest and best documented SKG are: the Hyades supercluster (Eggen 1992b) associated with the Hyades cluster (600 Myr), the Ursa Mayor group (Sirius supercluster) (Eggen 1992a, 1998; Soderblom & Mayor 1993) associated with the UMa cluster (300 Myr), the Local Association or Pleiades moving group associated with the Pleiades and several other young open clusters and associations (age ranges from about 20 to 150 Myr) (Eggen 1992c), the IC 2391 supercluster (35-55 Myr) (Eggen 1995), and the Castor moving group (200 Myr) (Barrado y Navascuรฉs 1998). The existence of these SKG has been rather controversial in the literature, but recent studies (Chereul et al. 1999, Dehnen 1998, Asiain et al. 1999, Skuljan et al. 1999) using astrometric data taken from Hipparcos not only confirm the existence of classical young moving groups, but also detect finer structures in space velocity and age. Well known members of these SKG are mainly early-type stars and few studies have been centered in late-type stars (see Montes et al. 1999, and Montes 2000 (this proceedings)). In this contribution we present a kinematic study of a large sample of CAB in order to determine their membership to representative young disk SKG. Precise measurements of proper motions and parallaxes taken from Hipparcos Catalogue and published radial velocity measurements are used to calculate Galactic space motions (U, V, W).
## 2. Sample of CAB and parameters
A total of 205 CAB with complete kinematic input have been included in this study. The systems have been selected from different sources:
$``$ Previously established members of stellar kinematic groups based in photometric and kinematic properties (several papers by Olin Eggen).
$``$ Possible new candidates found in our previous kinematic study of late-type stars (Montes et al. 1999).
$``$ The 206 CAB included in the โCatalog of Chromospherically Active Binary Starsโ (Strassmeier et al. 1993).
$``$ Some of the CAB included in the candidate list of Strassmeier et al. (1993)
$``$ Other late-type stars recently identified in the literature as CAB, including X-ray/EUV selected stars. (Jeffries et al. 1995, Henry et al. 1995).
In order to determine the membership of this sample to the different stellar kinematic groups we have studied the distribution of stars in the space velocity by calculating the Galactic space-velocity components (U, V , W) in a right-handed coordinated system (positive in the directions of the Galactic center, Galactic rotation, and the North Galactic Pole, respectively). The procedures in Johnson & Soderblom (1987) were used to calculate U, V, W, and their associated errors.
Parallaxes and proper motions are taken from Hipparcos Catalogue (ESA, 1997); PPM (Positions and Proper Motions) Catalogue (Rรถser et al, 1994); ACT Reference Catalog (Urban et al. 1997); and TCR (Tycho Reference Catalogue) (Hog et al. 1998). We have only included in the study stars with significant trigonometric parallaxes ($`\pi `$ $``$ 3$`\sigma `$<sub>ฯ</sub>). In some cases, when trigonometric parallaxes are not available, we adopted spectroscopic parallaxes. Radial velocities are primarily taken from the systemโs center of mass radial velocity listed in Strassmeir et al. (1993) catalog or other more recent orbital determination found in the literature. Some radial velocities are also taken from the WEB (Wilson Evans Batten) compilation (Duflot et al. 1995), and from other references given in SIMBAD.
## 3. (U, V) and (W, V) diagrams
The (U, V) and (W, V) planes (Boettlinger Diagram) for the whole sample are plotted in Fig. 1. All the stars fall in the range of U (-130, 120) and V (-90, 40) except two stars with very large space velocities: CM Dra (U = -105.35, V = -119.35) and Gl 629.2A (U = -88.24, V = -172.06) which result to be old Population II binaries. We have divided the sample in three groups according to their luminosity class (V, IV, and III). The stars of the three groups are plotted in this figure with different simbols and colors. Fig. 2 is an enlargement of the central region of Fig. 1 including the boundaries (dashed line) that determine the young disk population as defined by Eggen (1984, 1989). As it can be seen in this figure a large number of BY Dra stars (luminosity class V) seems to fall inside of the boundaries of the young star region, but a considerable number of subgiants and giants also fall in this region. A detailed kinematic study will be the subject of a future work, for a previous kinematic study see Eker (1992).
In Fig. 3 we have plotted each star with its associated error, in the central region of the (U, V) and (W, V) diagrams. Stars with trigonometric parallaxes have been plotted in black and stars with spectrocopic parallaxes in blue. The uncertainties are in general modest, except some cases with large errors, which correspond to stars with small trigonometric parallaxes.
We focus this contribution in the indentification of a preliminary list of CAB possible members of some of the five young moving groups above mentioned. In base of the concentrations in (U, V) and (W, V) planes around the central position of the different moving groups (see Fig. 4 we have classified the stars of our sample as members of one of these moving groups or as other possible young disk stars if their classification is not clear but it is inside or near the boundaries (dashed line) of the young disk population. In Tables 1 to 5 <sup>1</sup><sup>1</sup>1Tables 1 to 5 available at http://www.ucm.es/info/Astrof/cabs$`\mathrm{\_}`$yskg.html we list the candidate stars for each moving group. We give the name, coordinates (FK5 1950.0), radial velocity (V<sub>r</sub>) and the error in km/s, parallax ($`\pi `$) and the error in milli arc second (mas), proper motions $`\mu `$<sub>ฮฑ</sub> and $`\mu `$<sub>ฮด</sub> and their errors in mas per year (mas/yr), and the U, V, W, calculated components with their associated errors in km/s. In the last column we mark with Y previously established members of the stellar kinematic group and Y? possible new members in base of their position in the (U, V) plane.
## 4. Membership and ages
For some of the CAB listed in Tables 1 to 5, for which accurate determinations of their stellar parameters are available, stellar ages have been obtained (Barrado et al. 1994, B94 hereafter) by using evolutionary tracks. In the following we comment some particular cases for each moving group.
Local Association
Four CABS (V640 Cas AB, EP Eri, HD 102077, V772 Her) have been previously identified as members of the Local Association. LX Per was classified as member of $`\alpha `$ Per open cluster, but the space velocities calculated here indicate it is member of the Hyades supercluster. The ages calculated by B94 for TW Lep (94 Myr) and BM CVn (65 Myr) are compatible with their membership. The B94โs age of the doubtful members xi UMa B (6 Gyr), $`\sigma `$<sup>2</sup> CrB (4 Gyr), and ER Vul (4 Gyr) indicates they are not members. The case of V772 Her is not clear since it seems to be a certain member, Batten et al. (1979) suggest an age as the Pleiades, but the B94โs age is 3 Gyr.
IC 2391
Only five CAB could be included in this group of which TZ For, HD 54371, HD 58738A are previously established members.
Castor moving group
YY Gem (Castor C) is one of the stars that define this moving group and its membership has been confirmed by Barrado y Navascuรฉs (1998). VV Mon was initially included as a possible member but the age of 2.6 Gyr calculated by B94 indicated it is not a member.
Ursa Mayor group
The age calcuted by B94 for $`ฯต`$ UMi (446 Myr) is compatible with its membership, however the evolutionary status of this system is complicated. UV CrB with an age of 5 Gyr (B94) should be rejected as possible member.
Hyades supercluster
Some CAB are previously established members of the Hyades open cluster (V1136 Tau, V818 Tau, HD 27149, HD 27691, V918 Tau, V808 Tau, QY Aur) and are plotted with a different simbol in Fig. 4. Previously established members of the supercluster are: ADS 48A (GJ 4A), V471 Tau, and DH Leo. The age calculated by B94 for 93 Leo (933 Myr) is close to the Hyades ages, but the age of HD 131832 (93 Myr) is too young and the ages of HD 3196 (1.7 Gyr), RZ Eri (2.2 Gyr), and LU Hya (4.3 Gyr) are too old to be members.
Other possible young disk stars
In this group of other possible young disk CAB we found several young stars as calculated by B94, but also some old stars.
### Acknowledgments.
This work has been supported by the Universidad Complutense de Madrid and the Spanish Direcciรณn General de Enseรฑanza Superior e Investigaciรณn Cientรญfica (DGESIC) under grant PB97-0259.
## References
Asiain R., Figueras F., Torra J., Chen B., 1999, A&A 341, 427
Barrado D., et al., 1994, A&A 290, 137
Barrado y Navascuรฉs D., 1998, A&A 339, 831
Batten A.H., Morbey C.L., Fekel F.C., Tomkin J., 1979, PASP 91, 304
Chereul E., Creze M., Bienayme O., 1999, A&AS 135, 5
Dehnen W., 1998, AJ 115, 2384
Duflot M., Figon P., Meyssonnier N., 1995 A&AS 114, 269
Eggen O.J. 1984, ApJS, 55, 597; Eggen O.J. 1989, PASP 101, 366
Eggen O.J., 1992a, AJ 104, 1493; 1992b, AJ 104, 1482; 1992c, AJ, 103, 1302
Eggen O.J., 1995, AJ 110, 2862; Eggen O.J., 1998, AJ 116, 782
Eker Z., 1992, ApJS 79, 481
ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
Henry G.W., Fekel F.C., Hall D., 1995, AJ 110, 2926
Hog E., et al., 1998, A&A 335, 65
Jeffries R.D., Bertram D., Spugeon B.R., 1995, MNRAS 276, 397
Johnson D.R.H., Soderblom D.R., 1987, AJ 93, 864
Montes D., 1999, ASP Conf. Ser., โStellar clusters and associationsโ, R. Pallavicini, et al. eds. (in press), (http://www.ucm.es/info/Astrof/cabs$`\mathrm{\_}`$ocl.html)
Montes D., Latorre A., Fernรกndez-Figueroa M.J., 1999, ASP Conf. Ser., โStellar clusters and associationsโ, R. Pallavicini, et al. eds. (in press), (http://www.ucm.es/info/Astrof/ltyskg.html)
Rรถser S., Bastian U., Kuzmin A., 1994, A&AS 105, 301
Skuljan J., Hearnshaw J.B., Cottrell P.L., 1999, MNRAS 308, 731
Soderblom D.R., Mayor M., 1993, AJ 105, 226
Strassmeier K.G., Hall D.S., Fekel F.C., Scheck M., 1993, A&AS 100, 173
Urban S.E., Corbin T.E., Wycoff G.L., 1997 U.S. Naval Obs., Washington D.C. |
no-problem/9912/astro-ph9912315.html | ar5iv | text | # Environmental Influences in SGRs and AXPs
## I Introduction
Soft gammaโray repeaters (SGRs) are neutron stars whose multiple bursts of gammaโrays distinguish them from other gammaโray burst sourceshurley00 . SGRs are also unusual xโray pulsars in that they have spin periods clustered in the interval $`58`$ s, and they all appear to be associated with supernova remnants (SNRs), which limits their average age to approximately $`20`$ kyrbraun89 . The angular offsets of the SGRs from the apparent centers of their associated supernova remnant shells indicates that SGRs are endowed with space velocities $`>500`$ km s<sup>-1</sup>, which are greater than the space velocities of most radio pulsarscordes98 . Anomalous xโray pulsars (AXPs) are similar to SGRs in that they are radio quiet xโray pulsars with spin periods clustered in the range $`612`$ s, and have similarmer99 persistent xโray luminosities as the SGRs ($`10^{35}`$ ergs s<sup>-1</sup>). Most of the AXPs appear to be associated with supernova remnants, and therefore they are also thought to be young neutron stars like the SGRs. Here we present a new look at environmental evidence which shows that the SGRs and AXPs can not be due to a purely innate property, such as superstrong magnetic fieldsthompson95 .
## II The Environments of SGRs and AXPs
If the unusual properties of SGRs and AXPs were due solely to an intrinsic property of the neutron star, that developed independently of the external environment, then the characteristics of the interstellar medium which surrounded the AXP and SGR progenitors should be typical of that around the massive O and B stars which are progenitors of all neutron stars. Observations clearly show that the majority of neutron stars are formed in โsuperbubblesโ: evacuated regions of the ISM which surround the OB associations in which the massive progenitors of most neutron stars live. The supernovae from the massive O and B stars which form SGRs and AXPs are heavily clustered in space and time and form vast ($`>100`$ pc) HII regions/superbubblesmaclow88 filled with a hot ($`>10^6`$ K) and tenuous ($`n10^3`$ cm<sup>-3</sup>) gas. The occurrence of most supernovae in the hot phase of the ISM is confirmed from observations of nearby galaxiesvandyk96 and from studies of Galactic SNRshigdon80 . It is estimated that $`90\pm 10\%`$ of all core-collapse supernova should occur in this hot and tenuous environmenthigdon98 .
The environments of SGRs and AXPs are probed by the blastwaves of their associated supernova remnants, and from the size of the remnant shell as a function of the age we can constrain the external density. In Table $`1`$ we have listed the $`12`$ known SGRs and AXPs and their associated supernova remnant shellsmarsden00 . The identification of the associated remnants are based on both positional coincidences of the remnant and the SGR/AXP, and on similar distances of the SGR/AXP and its associated remnant. We include the new tentative cline99 SGR candidate 1801โ23, which appears to be associated with the SNR W28. The thin SGR error box passes roughly through the center of the SNR and through the compact, nonthermal xโray sourceandrews83 within the remnant. No associated remnants can be found for AXPs 0720โ3125 and 0142$`+`$615, which is not surprising given the close distance ($`0.1`$) of 0720โ3125haberl97 , and the molecular clouds associated with 0142$`+`$615israel94 . A more detailed discussion and reference list for the sources in Table $`1`$ will be published elsewheremarsden00 .
Most of the SGR/AXP positions are significantly displaced from the apparent centers of their associated SNRs, as can be seen in Table $`1`$ from the ratio of the neutron star angular displacement $`\theta _{}`$ divided by the angular radius $`\theta _{SNR}`$ of the remnant shell. These displacements clearly indicate that the SGR/AXPs have large transverse velocities. There is considerable uncertainty in the actual velocities, however, because the estimated remnant ages are probably uncertain by a factor of two in most cases, which introduces a corresponding uncertainty in the transverse velocities. In addition, the actual space velocities of the SGR/AXPs are larger by an unknown factor dependent on the viewing angle. Nonetheless, the data suggest that the typical SGR/AXPs are of the order of $`1000`$ km s<sup>-1</sup>. Such velocities, while much larger than the typical neutron star velocities, are not unprecedented, as $`10\%`$ of radio pulsars may have space velocities of $`1000`$ km s<sup>-1</sup> or greatercordes98 . We conclude, therefore, that the SGRs and AXPs are a high velocity subset of young neutron stars.
In Figure $`1`$ we have plotted the SNR shell radii as a function of the estimated age of each remnant. Overplotted in solid lines are simple approximations of the evolutionary tracksshull89 of supernova remnant expansion in the wide range of the external ISM densities, and we see that these SNRs are all in the denser ($`>0.1`$ cm<sup>-3</sup>) phases of the ISM which slow their expanding shells to $`<2000`$ km s<sup>-1</sup> in $`<10`$ kyr. Also overplotted are the tracks of neutron stars born at the origin of the supernova explosion with varying velocities, showing the times required for fast (e.g. $`>500`$ km s<sup>-1</sup>) neutron stars to catch up with the slowing supernova ejecta and swept-up matter.
## III Discussion
From the discussion in $`\mathrm{\S }`$ II, we saw that neutron stars should preferentially reside in the diffuse ($`n<0.01`$ cm<sup>-3</sup>) gas which constitutes the hot phase of the interstellar medium. As seen from Figure $`1`$, however, the SGRs and AXPs tend form in denser regions of the ISM. Given the entire sample of AXPs and SGRs, the probability that this is merely due to chance depends on the the ability to detect supernova remnants in the different phases of the interstellar medium. For the SGRs, the detection sensitivity is independent of the interstellar medium, because they are detected via their bright gammaโray/xโray bursts. Therefore, using only the SGRs yields a chance probability of less than $`(0.2)^510^4`$, if one accepts the tentative W28/SGR 1801โ23 association, and $`10^3`$ if one excludes SGR 1801โ23 from the SGR sample. The AXPs are also preferentially in the dense phase, which further lowers the chance probability for the class as a whole. The evidence then suggests that the environments surrounding SGRs and AXPs are significantly different than otherwise normal neutron stars in a way which is inconsistent with the hypothesis that the properties of these sources are the result of an innate characteristic such as a superstrong magnetic field.
These observational facts imply instead that the environment is crucial in the development of SGRs and AXPs. One plausible scenario is that the rapid spin-down of the SGR/AXPs may result from their interaction with co-moving ejecta and swept-up ISM material corbet95 vanparadijs95 . Calculationsmarsden00 indicate that such an interaction scenario, involving the formation of accretion disks by fast ($`>500`$ km s<sup>-1</sup>) neutron stars from co-moving ejecta of supernova remnants slowed to $`<2000`$ km s<sup>-1</sup> by the denser ($`>0.1`$ cm<sup>-3</sup>) phases of the ISM, could spin-down SGRs and AXPs to their present-day spin periods in $`10`$ kyr โ consistent with the estimated ages of these sources โ without requiring the existence of a population of neutron stars with ultrastrong magnetic fields. In addition, such a scenario can explain the clustering of spin periods, present-day spin-down rates, and the number of SGRs and AXPs in our galaxymarsden00 . |
no-problem/9912/astro-ph9912056.html | ar5iv | text | # An X-ray and Optical Study of Matter Distribution in the Galaxy Cluster A 2319
## 1 Introduction
Detailed studies of the matter distribution in clusters of galaxies provide important clues on the growth of condensations and the evolution of the Universe. From X-ray observations it is possible to derive both the gas and the total binding mass distributions, under the assumption of hydrostatic equilibrium. Optical data, i.e. galaxy photometry and redshifts, combined with X-ray observations allow to check the validity of the hydrostatic equilibrium assumption and derive the spatial distribution of the dark matter. Most analyses in the past (Jones and Forman (1984); Cowie, Henriksen and Mushotzky (1987); Hughes, Gorenstein and Fabricant (1988); Hughes (1989); Gerbal et al. (1992); Briel, Henry and Bรถhringer 1992 ; Durret et al. (1994); David, Jones and Forman (1995); Cirimele, Nesci and Trรจvese (1997)) were based on the further simplifying assumption that the gas is isothermal, at least within about 1 h$`{}_{}{}^{1}{}_{50}{}^{}`$ Mpc, possibly with the exclusion of a central cooling flow region (see e.g. Fabian, Nulsen and Canizares (1984); White, Jones and Forman (1997)). This leads to the $`\beta `$-model (Cavaliere and Fusco-Femiano (1976)) which predicts that the dynamical parameter $`\beta _{spec}\frac{\mu m_p\sigma _r^2}{kT}`$, representing the ratio between the energy per unit mass of galaxies and gas, equals the morphological parameter $`\beta _{fit}`$ defined by the fit of the gas density distribution with a King profile. The observations show that on average $`\beta _{fit}<\beta _{spec}`$ (Sarazin (1986); Evrard (1990)). However Bahcall and Lubin (1994) ascribed this โ$`\beta `$-discrepancyโ to the underestimate of the slope of the galaxy density profile, appearing in the hydrostatic equilibrium equation, rather than to a failure of the model. In their X-rayโOptical analysis of a sample of Abell clusters, Cirimele, Nesci, & Trรจvese (1997) (CNT) found that $`\mathrm{log}\rho _{gas}=\beta _{XO}\mathrm{log}\rho _{gal}+C`$ in a wide range of densities, as predicted by the hydrostatic isothermal equilibrium. This allows to define, for each cluster, a morphological parameter $`\beta _{XO}`$, independent of any analytical representation of $`\rho _{gas}`$ and $`\rho _{gal}`$. A comparison of $`\beta _{XO}`$ with $`\beta _{spec}`$ supports the explanation of the โ$`\beta `$-discrepancyโ suggested by Bahcall and Lubin (1994) and the consistency of the โ$`\beta `$-modelโ, at least for several galaxy clusters of regular and relaxed appearance. The gas and binding mass distributions thus derived provide a typical value of the baryon fraction $`f_B`$, of the order of 0.2 within 1-2 h$`{}_{}{}^{1}{}_{50}{}^{}`$ Mpc (Cirimele, Nesci and Trรจvese (1997); Evrard (1997); Ettori & Fabian (1999),Mohr et al., 1999). This relatively high value, compared with the results of primordial nucleosynthesis calculation (Walker et al. (1991); Olive, Steigman (1995) ,but see Burles and Tytler (1998)) implies that either the cosmological parameter is smaller than unity (White et al. (1993)), or $`f_B`$ is not representative of the cosmic value, and galaxy clusters are surrounded by extended halos of non baryonic dark matter (White and Fabian (1995)). The latter hypothesis raises the problem of understanding the mechanisms of a large scale baryonic segregation.
However, recently ASCA data have provided direct evidences of gas temperature gradients in the outer regions of several galaxy clusters (Arnaud (1994); Markevitch et al. (1994, 1996); Ikebe et al. (1996); Markevitch (1996); Markevitch et al. (1998)). According to Markevitch (1996), in the outer regions some clusters show a polytropic index even greater than 5/3, which is inconsistent with the hydrostatic equilibrium conditions. According to Ettori & Fabian (1998), a systematic difference between the electron and the proton temperatures cannot explain the inconsistency, and a real departure from hydrostatic equilibrium must happen in some cases. Even disregarding these extreme cases, once the temperature profiles are available it is worth: i) to check how far from the cluster center the hydrostatic condition can be assumed, specially if strong temperature gradients are present; ii) to estimate the correction to the total mass and baryon fraction implied by non-isothermality. Moreover some clusters show an anomalously high value of the dynamical parameter $`\beta _{spec}`$, possibly suggesting deviations from the equilibrium conditions and requiring a detailed analysis of the velocity distribution. The galaxy cluster A 2319 has been extensively studied in the past, so that many galaxy redshifts are available, ROSAT PSPC images can be retrieved from the public archive and a temperature profile based on ASCA data has been published by Markevitch (1996) (see however Molendi (1998), Molendi et al. (1999)).
In the present work we combine these data with the F band photometry of galaxies (Trรจvese et al. (1992)), and compare the gas and galaxy density distributions. We generalize the definition of the morphological parameter $`\beta _{XO}`$ to verify the hydrostatic equilibrium conditions. The analysis suggests the consistency of the hydrostatic model in the presence of a temperature gradient. Thus we discuss the mass distribution as obtained by adopting a polytropic model or a simple parabolic representation of the temperature profile, and we compare the resulting baryon fraction with the limits provided by the standard nucleosynthesis calculations, then deriving constraints on the large scale baryon segregation and the cosmological parameter $`\mathrm{\Omega }_o`$.
We use $`H_o=50h_{50}kms^1Mpc^1`$.
## 2 The galaxy distribution
The galaxy cluster A 2319 has been studied by several authors in the radio , optical and X-ray bands. It is classified as a BM type II-III and as a richness 1, RS-type cD cluster (see Abell, Corwin and Olowin (1989)). The galaxy velocity dispersion $`\sigma `$1800 km s<sup>-1</sup> is particularly high. However, Faber and Dressler (1977), on the basis of 31 galaxy spectra already suggested that A 2319 is actually two clusters nearly superimposed along the line of sight: the main component A 2319 A with an average redshift $`\overline{v}_A`$=15882 and a velocity dispersion $`\sigma _A=873_{148}^{+131}`$ km s<sup>-1</sup> and the second component A 2319 B, located about $`8^{}`$ NW with $`\overline{v}_B`$=19074 and $`\sigma _B=573_{149}^{+120}`$ km s<sup>-1</sup>.
More recently Oegerle, Hill, and Fitchett (1995)(OHF) measured several new redshifts, applied the โ$`\delta `$-testโof Dressler and Shectman (1988) to locate the A and B components, and assigned the 139 galaxies of known redshift to the component A and B (or to the background/foreground) on the basis of their position and redshift, empirically trying to keep gaussian the velocity distribution of A 2319 B. They found N<sub>A</sub>=100 and N<sub>B</sub>=28 galaxies in the two components with $`\overline{v}_A`$=15727 km s<sup>-1</sup>, $`\sigma _A`$=1324 km s<sup>-1</sup> $`\overline{v}_B`$=18636 km s<sup>-1</sup> $`\sigma _B`$=742 km s<sup>-1</sup> respectively.
To assign individual galaxies to the A and B components we adopted the results of OHF to obtained a first order estimate of the cluster positions, average radial velocities $`\overline{v}^{(i)}`$, and velocity dispersions $`\sigma _{(i)}`$, and we computed the relevant core radii $`R_c^{(i)}`$ of the two components, where i=A,B identifies the component. Then we assumed the following probability distributions of galaxies respect to radial velocities v and projected distance b from the relevant cluster center:
$`P_i(b,v)`$ $``$ $`{\displaystyle \frac{N_i}{N_A+N_B}}f_i(b)g_i(v)`$ (1)
$`f_i(b)`$ $`=`$ $`\left\{{\displaystyle \frac{2\pi R_c^{(i)^2}}{1\beta _{(i)}}}\left(\left[1+\left({\displaystyle \frac{b_{max}^{(i)}}{R_c^{(i)}}}\right)^2\right]^{(1\beta _{(i)})}1\right)\right\}^1\left[1+\left({\displaystyle \frac{b}{R_c^{(i)}}}\right)^2\right]^{\beta _{(i)}}`$
$`g_i(v)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi \sigma _{(i)}^2}}}\mathrm{exp}\left[{\displaystyle \frac{(v\overline{v}_{(i)})^2}{2\sigma _{(i)}^2}}\right]`$
where $`N_{(i)}`$ the first order estimate of the number of galaxies of the relevant component, and the $`b_{max}^{(i)}`$ is the radius of the circle containing the $`N_i`$ observed galaxies. This simple parameterization is independent of any assumption about the distance along the line of sight of the two clusters and their relative motion. Each galaxy is then assigned to the component of higher probability. The 99% confidence volumes are also considered for each cluster, and galaxies outside these volumes are assigned to the background or foreground. We obtain the new values N<sub>A</sub>=96 and N<sub>B</sub>=24 $`\overline{v}_A=15891kms^1`$, $`\sigma _A=1235\pm 90kms^1`$, $`\overline{v}_B=18859kms^1`$, $`\sigma _B=655\pm 97kms^1`$. Since the resulting velocity distributions of the two components do not show strong deviation from gaussian distributions, the reported uncertainties $`\sigma _{\sigma _A}`$ and $`\sigma _{\sigma _B}`$, on $`\sigma _A`$ and $`\sigma _B`$ respectively, have been computed as $`\sigma _{\sigma _i}^2=\sigma _{(i)}^2/[2(N_{(i)}1)]`$ i=A,B. The effect of membership uncertainty can be evaluated as follows. Given a sample of N galaxies with average velocity $`\overline{v}`$ and velocity dispersion $`\sigma `$, the addition of k galaxies with velocity $`v=\overline{v}+\delta `$ produces a new velocity dispersion $`\sigma ^2=(N\sigma ^2+k\delta ^2)/(N+k1)`$. Therefore, to increase $`\sigma _A`$ by more than $`2\sigma _{\sigma _A}`$ it is necessary to include in the sample A more than k=2 galaxies with a recession velocity exceeding $`\overline{v}_B+2\sigma _B\overline{v}_A+3.46\sigma _A`$ .
Although on the sole basis of the โ$`\delta `$ testโ there is a 10% probability that A2319B is not a physical association, the strong clustering of large $`\delta `$ values in a region (see OHF fig.5) corresponding to enhanced X-ray emission suggest that it is a physical entity. Moreover, the analysis of bound orbits of the two components A 2319 A and A 2319 B led Oegerle, Hill, and Fitchett (1995) to the conclusion that โthere is a reasonably high probability that these clusters are not bound and will never mergeโ. The latter conclusion is supported by the discussion of FGB who compare the X-ray images with simulations of cluster collisions (see section 3). The above considerations suggest and legitimate the assumption, adopted in the following, that the two clusters are separate entities.
We have added to the spectroscopic information the F band photometry of A 2319, obtained by Trรจvese et al. (1992) from microdensitometric scans of a Palomar 48 inch Schmidt plates, as part of a systematic study of the morphology and luminosity functions of galaxy clusters (Trรจvese, Cirimele and Flin (1992); Flin et al. (1995); Trรจvese, Cirimele and Appodia (1996); Trรจvese et al. (1997)). Due to the low galactic latitude (b$`13^{}`$) the field of A 2319 is very crowded and the automatic star/galaxy classification is difficult. Thus we have revised the classification and recovered some misclassified object.
To focus our attention on the main component A, we reduced the effect of the B component excluding from our sample all the galaxies classified as B (or background/foreground). We adopted a fixed center $`(\alpha =19^h21^m11.8^s,\delta =+43^o56^{}39^{\prime \prime }(J2000))`$, derived from the centroid of X-ray emission as computed in a small (2 arcmin) circle around the intensity peak. This point is identified with the center of a spherical structure which we assume to represent the A component. We chose a magnitude limit $`m_F=m_3+2`$=16.33 mag and the resulting fraction of galaxies without measured redshift is 0.23. Thus, the fraction of galaxies without measured redshift and belonging to the B component is of the order $`0.23N_B/(N_A+N_B)`$, i.e. 4%, and is not expected to affect significantly the galaxy density profile. We fitted with a maximum likelihood algorithm the unbinned galaxy distribution using both a King profile $`\sigma _{gal}(b)=\sigma _0(1+(b/r_c)^2)^\kappa +\sigma _b`$ and with a de Vaucouleurs profile $`\sigma _{gal}(b)=\sigma _0exp(7.67(b/r_v)^{\gamma _g})+\sigma _b`$, where the background counts $`\sigma _b`$, $`r_v`$, $`\gamma _g`$, $`r_c`$ and $`\beta `$ are free parameters, while $`\sigma _0`$ is determined by the normalization to the total number of observed galaxies and $`b`$ is the projected distance from the cluster center. The Kolmogorov-Smirnov (KS) test has been applied in both cases and the results are reported in Table 1, where the errors reported represent one-sigma uncertainties derived from Monte Carlo simulations described in section 4, and $`P_{KS}(>D)`$ is the probability of the null hypothesis that deviations larger than D are produced by random noise.
The surface density and the fitting profiles are shown in Figure 1. In this case the King profile has a slightly higher probability and will be adopted in the following to derive the volume distribution by numerical inversion. However, the differences between the two fitting curves, specially for $`b>0.1h_{50}^1Mpc`$, cannot affect significantly the subsequent discussion of the hydrostatic equilibrium conditions.
To obtain the total luminosity of the cluster, $`L_{tot}(r)`$, we fitted with a Schechter (1976) function the unbinned luminosity distribution excluding the brightest member, using a maximum likelihood algorithm, adopting a constant $`\alpha =1.25`$ and $`M^{}`$ as a free parameter, as in Trรจvese, Cirimele and Appodia (1996). This gives:
$$L_{tot}(r)=10^{0.4(M_F^{}+28.43)}\frac{\mathrm{\Gamma }(2+\alpha )4\pi }{\mathrm{\Gamma }(1+\alpha ,\frac{L_{lim}}{L^{}})}_0^r\rho _{gal}(r^{})r^2๐r^{}$$
(2)
where $`L_{tot}`$ is expressed in units of $`10^{13}L_{}`$, $`r`$ in $`kpc`$ and $`L_{lim}`$ is the limiting luminosity corresponding to the magnitude limit ($`M_F=21.56`$ mag) of the galaxy sample adopted to derive $`\rho _{gal}`$. The value of $`M^{}`$ changes by less than 1% considering a sample which includes the galaxies of A 2319 B. The galaxy mass $`M_{gal}(r)`$ is then obtained from the total luminosity assuming an average mass-to-light ratio $`M/L_R=3.32\pm 0.14M_{}/L_R`$ from van der Marel (1991), adopting $`FR`$ for bright ellipticals (see Lugger (1989)) and computing $`F_{}`$ from the relation V-F=0.40 (B-V) (Kron (1980)).
To estimate the virial mass we have evaluated the r.m.s. velocity dispersion $`\sigma _r`$ in four concentric rings each containing 1/4 of the galaxies of known redshift of A 2319 A. The four values are 1148 km s<sup>-1</sup>, 1415 km s<sup>-1</sup>, 1327 km s<sup>-1</sup>, 1193 km s<sup>-1</sup> with a r.m.s uncertainty of about 200 km s<sup>-1</sup>. Thus in the following we assume a constant dispersion, derived from the entire A 2319 A sample, $`\sigma _r=1235\pm 90`$km s<sup>-1</sup>
The resulting virial mass $`M_V=3\pi b_G\sigma _r^2/2G`$ is $`M_V=2.89\times 10^{15}h_{50}^1_{}`$, namely only 2% less than the value given by OHF, since the decrease of $`\sigma _r^2`$ is almost entirely compensated by a slight increase of the projected virial radius $`b_G=21/b^1`$ (Sarazin (1988)), which in our case is $`b_G=1.736h_{50}^1`$ Mpc.
## 3 The gas distribution
From the ROSAT public archive we extracted the available Position Sensitive Proportional Counter (PSPC) images corresponding to two observations of A 2319, on March 1991 and November 1992, which cover a 128x128 arcmin<sup>2</sup> field with pixel size of 15x15 arcsec<sup>2</sup> and an effective resolution of about $`25^{\prime \prime }`$ FWHM, in the energy band 0.5-2.0 keV. The exposure maps (Snowden et al. (1992); Plucinsky et al. (1993)), providing the effective exposure time of each pixel, are also available at the ROSAT public archive for each observation. We divided each image for the relevant exposure map and combined the two images with weights proportional to the maxima of the exposure map (1514.6 s and 3200.8 s respectively). The resulting image is shown in Figure 2.
Feretti, Giovannini & Bรถhringer (1997) (FGB) discuss the radio structure of A2319, which shows a powerful radio halo. They also analyze two substructural features in the X-rays, one corresponds to the E-W elongation in the very center of the A component, detected in the image obtained with the ROSAT High Resolution Imager, and is interpreted by FGB as an evidence of a recent merging process, likely providing the energy for the radio halo . This feature is confined within the inner 5 arcmin and does not affect the analysis of the hydrostatic equilibrium, particularly in the region where the temperature is not constant, i.e. for $`r>5`$ arcmin. In the outer region the cluster structure is rather regular, as for most cD clusters, except for an elongation in the direction of the B component. According to FGB, the B component is in a pre-merger state and has not yet proceed far enough to disturb the bulk of the gas, as can be argued from a comparison with the cluster collision simulation by Schimu93 (Schindler and Mรผller 1993).
Thus we analyzed A 2319 A as a separate entity, as discussed in section 2. As a first approximation we ignored the presence of the B component, we assumed spherical symmetry and, as in CNT, we derived the volume density $`\rho _{gas}(r)`$ of the gas by both numerical inversion of the projection equation, and by fitting the observed surface brightness with a โ$`\beta `$-modelโ (Cavaliere and Fusco-Femiano (1976)) $`I(b)=I_0[1+(b/r_c)^2]^{3\beta +1/2}+I_b`$, obtaining consistent results. The results are also consistent with FGB and with a more recent analysis of Mohr et al. (1999). Then we applied the same procedure after the exclusion of the northern half ($`\delta >43^o56^{}39^{\prime \prime }(J2000)`$) of the image, to eliminate the effect of the B component. We have evaluated the constant background value $`I_b=8.86\times 10^4`$ cts s<sup>-1</sup> arcmin<sup>-2</sup> in the region $`b>3h_{50}^1`$ Mpc. The fitting parameters are reported in Table 2, together with the central proton density $`n_o`$. The one-sigma uncertainties have been evaluated by Monte Carlo simulations, described in section 4. The central proton density is obtained from the relation:
$$I_0[1+(b/r_c)^2]^{-3\beta +1/2}=EM\int _{\nu _{min}}^{\nu _{max}}\mathrm{\Lambda }(\nu ,T)d\nu $$
(3)
where $`\mathrm{\Lambda }(\nu _{rest},T)`$ is the rest-frame cooling function corrected for the cosmological dimming factor $`(1+z)^4`$ and computed with the code of Mewe et al. (1986) with 0.3 solar abundance corresponding to $`\frac{n_e}{n_p}=1.2`$, $`\nu _{min}`$ and $`\nu _{max}`$ define the observing band in the rest-frame and $`EM\equiv \int _0^{\mathrm{}}(\frac{n_e}{n_p})n_o^2(1+\frac{r^2}{r_c^2})^{-3\beta }dl`$ (see Sarazin (1988)), and a correction for absorption with $`N_H=8.89\times 10^{20}\mathrm{cm}^{-2}`$ (Stark et al., (1992)) has been applied to $`I_o`$.
Again our results are consistent with the analysis of FGB. In particular we also find a smaller value of the core radius $`r_c`$ of the A component when the northern half is excluded to reduce the effect of the B component. The observed surface brightness profile and the fitted $`\beta `$-model are shown in Figure 3.
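For readers who wish to reproduce this kind of fit, the following is a minimal sketch of the “$`\beta `$-model” fitting step. It is illustrative only (not the code used in this work): the radii, surface-brightness values and noise level below are invented placeholders standing in for the azimuthally averaged PSPC profile.

```python
# Minimal beta-model fit sketch (illustrative placeholders only,
# not the actual A 2319 PSPC profile).
import numpy as np
from scipy.optimize import curve_fit

def beta_model(b, I0, rc, beta, Ib):
    # I(b) = I0 * [1 + (b/rc)^2]^(-3*beta + 1/2) + Ib
    return I0 * (1.0 + (b / rc) ** 2) ** (-3.0 * beta + 0.5) + Ib

rng = np.random.default_rng(1)
b = np.linspace(0.05, 3.0, 60)                      # projected radius (Mpc)
I_obs = beta_model(b, 1.0e-1, 0.4, 0.6, 8.86e-4)    # toy profile ...
I_obs = I_obs + rng.normal(0.0, 2.0e-4, b.size)     # ... plus measurement noise

popt, pcov = curve_fit(beta_model, b, I_obs, p0=[1.0e-1, 0.5, 0.7, 1.0e-3])
I0, rc, beta, Ib = popt
print(f"I0 = {I0:.3e}  rc = {rc:.3f} Mpc  beta = {beta:.3f}  Ib = {Ib:.3e}")
```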
For consistency with the optical analysis, where we eliminated the galaxies assigned to the B component, in the following we adopt the fit obtained after the exclusion of the northern half of the cluster image. The corresponding gas density profile $`\rho _{gas}(r)=\rho _{gas}^o[1+(r/r_c)^2]^{-3\beta /2}`$ is obtained assuming a constant temperature equal to the emission-weighted temperature $`T_X=10.0\pm 0.7`$ keV (Markevitch (1996)). The non-isothermality does not significantly affect the results (Markevitch et al. (1996)), due to the weak dependence of the emissivity on temperature in the adopted band (see next section). The gas density profile has been computed also by a non-parametric numerical deprojection as in CNT, obtaining consistent results.
## 4 Hydrostatic model, mass distributions and baryon fraction
As pointed out in CNT, it is possible to check the hydrostatic equilibrium condition in a non-parametric way, by a direct comparison of the density profiles of gas and galaxies. Assuming spherical symmetry, the equilibrium condition implies (Bahcall and Lubin (1994)):
$$\frac{\mu m_p\sigma _r^2}{kT}=\frac{d\mathrm{ln}\rho _{gas}(r)/d\mathrm{ln}r+d\mathrm{ln}T/d\mathrm{ln}r}{d\mathrm{ln}\rho _{gal}(r)/d\mathrm{ln}r+d\mathrm{ln}\sigma _r^2/d\mathrm{ln}r+2A}$$
(4)
where $`\sigma _r`$ is the radial galaxy velocity dispersion, $`A=1-\left(\frac{\sigma _t}{\sigma _r}\right)^2`$ measures the anisotropy of the velocity distribution and $`\mu `$ is the average molecular weight, which we assume equal to 0.58 (Edge and Stewart (1991)). For constant $`\sigma _r`$ and $`T`$, and $`A=0`$, this implies $`\rho _{gas}\propto \rho _{gal}^\beta `$ where $`\beta _{spec}=\frac{\mu m_p\sigma _r^2}{kT}`$ is a constant representing the ratio between the energy per unit mass in the galaxies and the gas respectively (Cavaliere and Fusco-Femiano (1976)). In this case it is possible to define the morphological parameter $`\beta _{XO}\equiv d\mathrm{ln}\rho _{gas}(r)/d\mathrm{ln}\rho _{gal}`$ to be compared with $`\beta _{spec}`$, which is obtained from the spectroscopic observation of galaxies and the gas temperature derived from X-ray spectra. As already discussed in CNT, the very existence of a wide range of densities where $`\beta _{XO}`$ is constant supports, in many cases, the validity of the isothermal model. However in the presence of a temperature gradient, as in the case of A 2319 A, both $`\beta _{spec}`$ and $`\beta _{XO}`$ depend on radius and the equilibrium equation reads:
$$\beta _{spec}=\beta _{XO}+\beta _{TO},$$
(5)
where $`\beta _{TO}\equiv d\mathrm{ln}T/d\mathrm{ln}\rho _{gal}`$ and we still assume that the galaxy velocity distribution is isotropic and $`\sigma _r`$ is constant. To check the validity of equation 5 we used the temperatures obtained by Markevitch (1996). We computed $`\beta _{spec}`$, $`\beta _{XO}`$ and $`\beta _{TO}`$ at three projected radii of 4, 10 and 20 arcmin, corresponding to the boundaries of the four rings whose temperatures are given by Markevitch (1996). With the constant velocity dispersion $`\sigma _r=1235\pm 90`$ km s<sup>-1</sup> derived in section 2, $`\beta _{spec}`$ ranges from 0.86 $`\pm `$ 0.15 at the first point to 1.52 $`\pm `$ 0.47 at the outermost point, i.e. the ratio between the energy per unit mass in gas and galaxies is close to unity within a central isothermal region of about 0.5 $`h_{50}^{-1}`$ Mpc, while it decreases in the outer regions.
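For orientation (an illustrative evaluation added here, using only quantities quoted in this paper), the emission-weighted temperature $`T_X=10.0`$ keV of section 3 and the dispersion above give

$$\beta _{spec}=\frac{\mu m_p\sigma _r^2}{kT}=\frac{0.58\times 1.67\times 10^{-27}\mathrm{kg}\times (1.235\times 10^6\mathrm{m}\mathrm{s}^{-1})^2}{10.0\times 1.60\times 10^{-15}\mathrm{J}}\simeq 0.9,$$

consistent with the near-unity values found in the central, nearly isothermal region; the growth of $`\beta _{spec}`$ at large radii simply reflects the declining ring temperatures.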
Figure 4 shows $`\mathrm{ln}\rho _{gas}`$, obtained by numerical deprojection, versus $`\mathrm{ln}\rho _{gal}`$. From the slope of the curve it is possible to derive a value of $`\beta _{XO}(r)`$ at each radius.
The values of $`\beta _{XO}(r)`$ range from $`0.59\pm 0.06`$ in the first ring, to $`0.64\pm 0.06`$ in the outermost ring. The one-sigma uncertainties are evaluated from Monte Carlo simulations described below. We also computed $`\beta _{TO}(r)`$, which is close to zero in the cluster center and increases up to 0.58 $`\pm 0.44`$ at $`r\simeq 1h_{50}^{-1}`$ Mpc. The one-sigma uncertainties are derived from errors on the temperature values reported in Figure 2 of Markevitch (1996). Figure 5 shows $`\beta _{spec}(r)`$ as a function of the RHS of equation 5, ($`\beta _{XO}+\beta _{TO}`$).
We stress that the relation between the two quantities on the x and y axes is not automatically implied by their definition, since the galaxy density $`\rho _{gal}(r)`$ only appears on the RHS and is observationally independent of the quantities on the LHS.
The value of $`\beta _{spec}(r)`$ is slightly larger than $`\beta _{XO}+\beta _{TO}`$. This could indicate that $`\sigma `$ is still slightly overestimated due to the presence of the background component A 2319 B. However, the intrinsic statistical uncertainty does not allow this level of accuracy. Furthermore, even small deviations from spherical symmetry, or from isotropy of the velocity distribution, could have comparable effects.
All we can safely say is that there is an increase of $`\beta _{spec}`$ versus ($`\beta _{XO}+\beta _{TO}`$), which is consistent with a straight line of unit slope. Thus, within the present uncertainties, the data are consistent with equation 5, namely with the validity of the hydrostatic equilibrium, even in the outer region of the cluster where the temperature declines.
The accuracy of this type of check, as applied to a single cluster, is limited by various factors. Although future X-ray data will provide a much higher signal-to-noise ratio, the uncertainty on the galaxy density profile is intrinsically limited by Poisson noise on galaxy counts. Moreover, subclustering and unknowable deviations from spherical symmetry will always produce an uncertainty on the galaxy density deprojection.
Nevertheless, the systematic application of the method described to all the clusters with measured temperature distributions (Markevitch et al. (1998)) will likely provide a statistical indication of the validity of, or the deviation from, the equilibrium conditions in the outer parts of galaxy clusters.
Assuming that our results indicate the validity of hydrostatic equilibrium, we can derive the distribution of the total binding mass $`M_{tot}(r)`$ of A 2319 A:
$$M_{tot}(r)=-\frac{kT}{\mu m_pG}\left(\frac{d\mathrm{ln}\rho _{gas}(r)}{d\mathrm{ln}r}+\frac{d\mathrm{ln}T}{d\mathrm{ln}r}\right)r$$
(6)
The slopes of the temperature profiles are crucial in establishing whether the non-isothermality causes an increase or decrease of the mass estimate, with respect to the isothermal $`\beta `$-model.
In fact, from equation 6, indicating with $`M_{tot}^{isot}(r)`$ the mass derived by an isothermal $`\beta `$-model with temperature $`T_{isot}`$, the fractional change in the mass estimate is:
$$\mathrm{\Delta }(r)\equiv \frac{M_{tot}(r)-M_{tot}^{isot}(r)}{M_{tot}^{isot}(r)}=\frac{T(r)-T_{isot}}{T_{isot}}+\frac{T(r)}{T_{isot}}\frac{d\mathrm{ln}T}{d\mathrm{ln}\rho _{gas}}$$
(7)
where the two terms on the r.h.s. of the equation can be of the same order.
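For completeness, equation 7 follows directly from equation 6 (a short derivation added here for clarity): since $`M_{tot}\propto T\left(\frac{d\mathrm{ln}\rho _{gas}}{d\mathrm{ln}r}+\frac{d\mathrm{ln}T}{d\mathrm{ln}r}\right)r`$ while $`M_{tot}^{isot}\propto T_{isot}\frac{d\mathrm{ln}\rho _{gas}}{d\mathrm{ln}r}r`$, their ratio gives

$$\mathrm{\Delta }(r)=\frac{T}{T_{isot}}\left(1+\frac{d\mathrm{ln}T/d\mathrm{ln}r}{d\mathrm{ln}\rho _{gas}/d\mathrm{ln}r}\right)-1=\frac{T-T_{isot}}{T_{isot}}+\frac{T}{T_{isot}}\frac{d\mathrm{ln}T}{d\mathrm{ln}\rho _{gas}},$$

using $`(d\mathrm{ln}T/d\mathrm{ln}r)/(d\mathrm{ln}\rho _{gas}/d\mathrm{ln}r)=d\mathrm{ln}T/d\mathrm{ln}\rho _{gas}`$.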
It is customary to adopt a polytropic gas distribution as the simplest analytic representation of a non-isothermal gas in hydrostatic equilibrium. In this case $`T\propto \rho _{gas}^{\gamma -1}`$, with the polytropic index $`\gamma `$ ranging from unity to 5/3 for an isothermal or adiabatic equilibrium respectively. The polytropic model implies (Cowie, Henriksen and Mushotzky (1987)):
$`\rho _{gas}(r)=\rho _{gas}^o[1+(r/r_c)^2]^{-\delta },T(r)=T_o[1+(r/r_c)^2]^{-\alpha },`$ (8)
$`\delta ={\displaystyle \frac{3\beta /2}{1+\eta (\gamma -1)/2}},\alpha =\delta (\gamma -1)`$
where $`\eta \equiv d\mathrm{ln}\epsilon /d\mathrm{ln}T`$ and $`\epsilon `$ is the emissivity of the gas, integrated in the adopted band. We can fit the temperature profile of A 2319 provided by Markevitch (1996), neglecting the effect of projection and adopting $`\eta \simeq -0.2`$, to determine the polytropic index $`\gamma `$. Notice that, due to the small value of $`\eta `$, the result is not significantly different if we simply assume $`\delta =3\beta /2`$ and $`\eta =0`$ in the above expressions. The slope $`\left|dT/dr\right|`$ of the temperature profile reaches a maximum at $`r/r_c=\alpha +\sqrt{\alpha ^2+1}`$ and progressively decreases in the outer regions.
As a result, for $`T_{isot}=T_o`$, the quantity $`\mathrm{\Delta }(r)`$ is positive only for $`r/r_c<x_\gamma \equiv \sqrt{\gamma ^{1/\alpha }-1}`$. This limit decreases for increasing $`\beta `$ and $`\gamma `$, e.g. $`x_\gamma \simeq 0.78`$ for $`\beta =1`$, $`\gamma =5/3`$ and $`\eta =-0.2`$, while $`x_\gamma =\sqrt{e^2-1}\simeq 2.53`$ for $`\beta =1/3`$ and $`\gamma =1`$. We stress that $`\mathrm{\Delta }(r)`$ is always negative at large radii, and this implies an enhancement of the “baryon catastrophe”. The best fit value is $`\gamma =1.091`$ and the resulting temperature profile is shown in Figure 6.
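To make the quoted numbers explicit (an illustrative evaluation added here): for $`\beta =1`$, $`\gamma =5/3`$ and $`\eta =-0.2`$, equation 8 gives $`\delta =\frac{3/2}{1-0.2/3}\simeq 1.61`$ and $`\alpha =\delta (\gamma -1)\simeq 1.07`$, so that

$$x_\gamma =\sqrt{(5/3)^{1/1.07}-1}\simeq \sqrt{0.61}\simeq 0.78,$$

while for $`\gamma \to 1`$ and $`\delta =3\beta /2=1/2`$ the factor $`\gamma ^{1/\alpha }\to e^{1/\delta }=e^2`$, reproducing $`x_\gamma =\sqrt{e^2-1}\simeq 2.53`$.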
The quality of the fit is quite poor, and there is a probability of 91% that the deviations are non-random. The increase of the mass in the central region with respect to the isothermal model, and the enhancement of the baryonic catastrophe obtained with a polytropic model, are a mere artifact of the particular shape of the polytropic temperature profile, which is not a good representation of the data, at least in the case of A 2319 A.
More specifically, the data seem to indicate an almost isothermal central region and an increasing slope of the temperature profile for increasing radius.
This behavior is consistent with a change of the polytropic index with radius from an isothermal ($`\gamma =1`$) towards an adiabatic ($`\gamma =5/3`$) hydrostatic equilibrium (see Sarazin (1988)). A fit with the law $`T(r)=T_o-ar^2`$ is shown in Figure 6. In this case the probability is P($`>\chi ^2`$)=0.99.
Figure 7 shows $`M_{tot}^{isot}(r)`$, corresponding to the isothermal model, and $`M_{tot}(r)`$ as obtained with both the polytropic model and the quadratic interpolation of the temperature profile. On the basis of the above discussion we assume the latter as the best representation of the mass distribution.
In the same figure, galaxy and gas masses, $`M_{gal}(r)`$ and $`M_{gas}(r)`$ are also shown. The latter is computed both in the isothermal approximation and using the temperature profile: the result is only weakly dependent on the temperature changes.
$`M_{gas}(r)`$ is steeper than $`M_{gal}(r)`$ and $`M_{tot}(r)`$, which have more similar slopes. As a consequence the gas mass dominates over the galaxy mass at large radii. These results are consistent with previous findings of CNT.
The statistical uncertainties have been evaluated by Monte Carlo simulations of the entire reduction process. For the X-ray data, starting from a “$`\beta `$-model” corresponding to the fitting parameters, we generated 500 random sets representing the photon counts in each radial ring with Poisson noise, and we extracted 500 random background values, with a standard deviation estimated from the intensity fluctuations of the surface brightness outside $`3h_{50}^{-1}`$ Mpc, where the background value has been measured. Then we fitted each intensity profile with a “$`\beta `$-model”, obtaining the statistical distribution of the fitting parameters. Finally we extracted 500 values of the temperature in each of the four rings corresponding to the data of Markevitch (1996), with the relevant standard deviations, and we fitted the temperature profile with a parabolic law. Then we applied to the simulated data the same algorithms applied to the real data for the evaluation of the mass profiles. This procedure allows us to define a one-sigma confidence interval for the gas mass $`M_{gas}`$ and for the total mass $`M_{tot}`$ as a function of radius. A similar procedure was adopted for the galaxy distribution. Then we extracted, for each cluster simulation, a random value of $`M/L_F`$ (in solar units) from a gaussian distribution with mean value and standard deviation obtained from van der Marel (1991). The one-sigma confidence intervals are reported as shaded areas in Figure 7.
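A schematic version of this Monte Carlo loop is sketched below. It is illustrative only: the fiducial parameters, counts normalization and background r.m.s. are invented placeholders, not the values of the actual A 2319 reduction.

```python
# Schematic Monte Carlo error propagation for the beta-model parameters
# (illustrative placeholders only, not the actual A 2319 reduction).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def beta_model(b, I0, rc, beta, Ib):
    return I0 * (1.0 + (b / rc) ** 2) ** (-3.0 * beta + 0.5) + Ib

b = np.linspace(0.05, 3.0, 60)               # projected radius (Mpc)
p_fid = (1.0e-1, 0.4, 0.6, 8.86e-4)          # fiducial fit parameters
expo = 500.0                                 # hypothetical counts normalization
lam = expo * beta_model(b, *p_fid)           # expected counts per radial ring
sigma_bkg = 1.0e-4                           # r.m.s. of the background level

fits = []
for _ in range(500):
    I_sim = rng.poisson(lam) / expo              # Poisson-noised profile
    I_sim = I_sim + rng.normal(0.0, sigma_bkg)   # random background offset
    try:
        popt, _ = curve_fit(beta_model, b, I_sim, p0=p_fid, maxfev=5000)
        fits.append(popt)
    except RuntimeError:
        continue                                 # discard non-converged fits

fits = np.array(fits)
for name, col in zip(("I0", "rc", "beta", "Ib"), fits.T):
    print(f"{name}: {col.mean():.3e} +/- {col.std():.1e}")
```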
We define the luminous mass as the sum of the gas mass and the galaxy mass as deduced using an average stellar mass-to-light ratio (see section 2): $`M_{lum}=M_{gas}+M_{gal}`$. The above results imply that the dark matter $`M_{dark}=M_{tot}-M_{lum}`$ has a distribution similar to $`M_{gal}(r)`$. This provides a constraint on the mechanism of galaxy and cluster formation.
Since an unknown fraction of the dark matter is baryonic, $`M_{lum}/M_{tot}`$ represents a lower limit on the baryon fraction $`f_b`$. Figure 8 shows this lower limit as a function of radius, as computed in the isothermal approximation and taking into account the temperature gradient by the polytropic model and by the quadratic interpolation. We can compare our estimate of $`f_b`$ with the results of Mohr et al. (1999), who give the values $`f_b=0.213\pm 0.004`$ and $`f_b=0.297\pm 0.027`$ at $`1h_{50}^{-1}`$ Mpc and at $`1.91h_{50}^{-1}`$ Mpc respectively, the latter distance representing $`r_{500}`$, namely the radius within which the mean density is 500 times the critical density $`\rho _{crit}=3H_o^2/8\pi G`$. Our isothermal values $`f_b=0.180\pm 0.017`$ and $`f_b=0.252\pm 0.045`$ are slightly lower, because they were derived from a fit of the southern part of the X-ray image, to exclude the effect of the B component.
At 2 $`h_{50}^{-1}`$ Mpc, $`f_b`$, as computed with the quadratic interpolation, becomes respectively 66 % and 56 % of the isothermal and polytropic values, thus mitigating the “baryon catastrophe”.
Assuming $`f_b\simeq 0.2`$ as typical of galaxy clusters, the residual discrepancy between $`\mathrm{\Omega }_b=f_b\mathrm{\Omega }_o`$ and the corresponding value derived from nucleosynthesis calculations $`\mathrm{\Omega }_b^{nucl}\simeq (0.076\pm 0.004)h_{50}^{-2}`$ (Burles and Tytler (1998)), can be reconciled assuming $`\mathrm{\Omega }_o<\mathrm{\Omega }_b^{nucl}/f_b\lesssim 0.4`$.
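Numerically (an explicit evaluation added here, taking $`h_{50}=1`$ and $`f_b=0.2`$ at face value):

$$\mathrm{\Omega }_o\lesssim \frac{\mathrm{\Omega }_b^{nucl}}{f_b}=\frac{0.076}{0.2}\simeq 0.4.$$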
## 5 Summary and Conclusions
We have performed a new analysis of the Abell cluster A 2319, and assigned the individual galaxies to the A 2319 A and A 2319 B components by an objective criterion taking into account both position and redshift. The resulting velocity dispersion of the A component is slightly smaller than, but consistent with, the previous determination of OHF.
We have obtained photographic F band photometry of the cluster galaxies, which allows us to construct the cluster luminosity function and galaxy density profile.
We have analyzed archival ROSAT PSPC images, separating the A component on the basis of the optical information, and we have obtained a gas density profile of the A component. The result is consistent with the recent studies of Feretti, Giovannini and Böhringer (1997) and Mohr et al. (1999).
Since, according to Markevitch (1996), A 2319 A shows a radial gas temperature decrease, we have generalized the method introduced by Cirimele, Nesci & Trèvese (1997, CNT), in order to check the validity of the hydrostatic equilibrium in the case of a non-isothermal gas.
We have derived the total mass profile $`M_{tot}(r)`$ through the non-isothermal hydrostatic equation, adding new evidence in favor of the results of CNT that the total mass and the galaxy mass have similar radial distributions, more concentrated with respect to the gas component.
Polytropic models imply smaller masses at large radii with respect to the isothermal model, i.e. a higher value of the baryon fraction. Thus, the use of a polytropic model would enhance the baryon catastrophe.
However we have shown that the polytropic model is inconsistent with the observed temperature profile, at least in the specific case of A 2319 A. A parabolic representation of the temperature profile gives, instead, a total mass larger than that computed in the isothermal approximation, mitigating the baryon catastrophe.
In any case, $`f_b`$ is larger than $`\mathrm{\Omega }_b^{nucl}\simeq (0.076\pm 0.004)h_{50}^{-2}`$, resulting from nucleosynthesis calculations and recent measures of the deuterium to hydrogen ratio (D/H) in high resolution studies of the $`Ly\alpha `$ forest (Burles and Tytler (1998)). Under the assumption that the value of $`f_b\simeq 0.2`$ found for A 2319 A, which is consistent with other estimates (Evrard (1997), CNT, Ettori & Fabian (1999), Mohr et al., 1999), is typical of galaxy clusters, it is possible to derive the following conclusions. If the $`\mathrm{\Omega }_o=1`$ assumption is kept, then the baryonic fraction within galaxy clusters is not representative of the cosmic value and clusters must be surrounded by dark matter halos (White and Fabian (1995)). If, on the other hand, beyond $`r\simeq 2h_{50}^{-1}`$ Mpc the material has not yet fallen into the cluster, as infall models indicate, then $`f_b`$ is representative of the cosmic value and the cosmological parameter must be $`\mathrm{\Omega }_o=\mathrm{\Omega }_b^{nucl}/f_b`$ (White et al. (1993) and refs. therein), i.e. in our case $`\mathrm{\Omega }_o\lesssim 0.4`$, where the uncertainty on the latter value depends on the intrinsic spread of the baryon fraction in the presently observable volumes around galaxy clusters.
Recently Markevitch et al. (1998) have collected temperature profiles for 30 galaxy clusters based on ASCA observations. However, only a minority of clusters is regular enough, especially in the outer regions, to allow an X-ray and optical check of the equilibrium conditions as suggested in the present paper. This limits the accuracy and the reliability of a “measure” of the cosmological parameter $`\mathrm{\Omega }_o`$ based on a comparison between the baryon fraction $`f_b`$ and $`\mathrm{\Omega }_b^{nucl}`$. Moreover, future cosmic microwave background experiments are expected to provide tighter constraints on $`\mathrm{\Omega }_o`$ (Mandolesi et al. (1995)) as compared with the results derived from the analysis of the matter distribution in galaxy clusters.
On the basis of this new estimate of $`\mathrm{\Omega }_o`$, once the equilibrium conditions of the non-isothermal regions are verified by the systematic application of the analysis outlined in the present paper, it will be possible to extend the (otherwise questionable) estimates of the general distribution of luminous and dark matter based on the hydrostatic model to the outer regions of galaxy clusters of a statistical sample, thus providing new constraints for the models of cluster formation and the physics of large-scale baryon segregation.
We are grateful to the anonymous referee and to the editor for comments and suggestions. This work has been partly supported by the Ministero dell’Università e della Ricerca Scientifica e Tecnologica (MURST).
# Physics in 2006
## Introduction
Physicists are presently faced with a quandary. Plans and designs for the next generation of accelerators need to be formulated in the near future, since the construction of these machines spans many years. The “best” machine to build for the post-LHC era will only become clear, however, when we see what surprises the LHC holds. Faced with our imperfect knowledge, we must examine possible scenarios for LHC physics and determine the machine most likely to answer the physics questions remaining in the years following the completion of the LHC and the fulfillment of its physics promise.
Here, I will consider the physics of electroweak symmetry breaking at a high energy $`e^+e^{}`$ collider.<sup>1</sup><sup>1</sup>1For values of the Higgs boson mass near and slightly above $`100GeV`$, a $`\mu ^+\mu ^{}`$ collider operating at the Higgs resonance can make extremely precise measurements of the Higgs mass and couplings. I will not discuss the physics potential of a muon collider here \[muon\]. This discussion must be made in the context of potential discoveries at the Tevatron and the LHC. I begin by reviewing the current experimental status of electroweak symmetry breaking, both from direct Higgs boson searches at LEP2 and from precision electroweak measurements. Theoretical expectations for the Higgs boson mass are then reviewed, with emphasis on the implications for physics at higher mass scales.
Next, I review the discovery prospects for the Higgs boson at both the Tevatron and the LHC. The working hypothesis is that a weakly interacting Higgs boson will be discovered, if it exists, at the Tevatron or the LHC, and so the role of the next generation of accelerators will be to study the properties of a Higgs boson. Precision measurements of the mass, decay widths, and production rates will all be necessary in order to verify that a particle is the Higgs boson of the Standard Model.
Aside from the couplings to fermions and gauge bosons, we would also like to know that the Higgs boson self-interactions result from the spontaneously broken scalar potential of the Standard Model. In order to do this, the three- and four-point self-couplings of the Higgs boson must be measured. These couplings can only be probed by multi-Higgs production, which has extremely small rates, both at the LHC and at a high energy $`e^+e^{}`$ collider.
The focus in this note is on verifying the properties of the Standard Model Higgs boson. In order to do this, it is helpful to compare with the predictions of a supersymmetric model since these predictions may be quite different from those of the Standard Model. Distinguishing between the Standard Model and a supersymmetric model is an important test of our understanding of the electroweak sector.
If a Higgs boson is not found at the Tevatron or the LHC, the electroweak symmetry breaking sector must be strongly interacting. I end with a brief discussion of strong electroweak symmetry breaking and a view towards the future.
## Inferences from the Standard Model
The Standard Model of electroweak interactions has been verified to the $`0.1\%`$ level through precision measurements at LEP and SLD \[prerev\]. In fact, the mechanism of electroweak symmetry breaking remains the only unconfirmed area of the Standard Model. The Standard Model predicts the existence of a physical scalar particle, termed the Higgs boson. The search for this particle is therefore a fundamental goal of all current and future accelerators, since its discovery is needed to complete our knowledge of the electroweak sector. The mass is a free parameter of the theory and so the Higgs boson must be systematically sought in all mass regions.
The couplings of the scalar Higgs boson, however, are completely specified in terms of the Higgs vacuum expectation value, $`v=246GeV`$. Hence branching ratios and production rates can be computed unambiguously in terms of the mass. Measurements of ratios of branching rates can then be used to test the validity of the model.
Since the Higgs boson contributes to electroweak radiative corrections at one loop, precision measurements from LEP and SLD can be used to infer a preferred value for the Higgs mass. The contribution of the Higgs boson to electroweak observables is logarithmic and so the limit on the Higgs mass is not nearly as precise as the indirect limit on the top quark mass from precision measurements. The current $`95\%`$ confidence level limit is \[prerev\]:
$$M_h<230GeV,\text{Precision Measurements}.$$
(1)
It is important to understand that this limit assumes the validity of the Standard Model. Quantum loops containing new particles can change this limit, as can new operators beyond those of the Standard Model. If there is new physics at the $`TeV`$ scale, the limit of Eq. 1 can be evaded \[alam\].
The Higgs boson mass is the only free parameter of the electroweak theory. Although we cannot compute its mass, there are certain theoretical restrictions following from the consistency of the theory. The scalar potential for an $`SU(2)`$ scalar doublet $`\mathrm{\Phi }`$ is,
$$V=-\mu ^2\mathrm{\Phi }^2+\lambda (\mathrm{\Phi }^2)^2.$$
(2)
After the electroweak symmetry breaking has occurred, there remains the physical scalar Higgs boson $`h`$. The quartic coupling, $`\lambda `$, is related to the Higgs boson mass,
$$\lambda =\frac{M_h^2}{2v^2}.$$
(3)
Now $`\lambda `$ is not a fixed parameter, but scales with the relevant energy, $`Q`$, and so Eq. 2 is the potential at the electroweak scale. If $`\lambda `$ is large (corresponding to a heavy Higgs boson), then at a scale $`Q`$ \[quiros\],
$$Q^2\frac{d\lambda }{dQ^2}=\frac{3}{4\pi ^2}\lambda ^2,$$
(4)
which can be solved to obtain,
$$\frac{1}{\lambda (\mathrm{\Lambda })}=\frac{1}{\lambda (M_h)}-\frac{3}{4\pi ^2}\mathrm{log}\left(\frac{\mathrm{\Lambda }^2}{M_h^2}\right).$$
(5)
A sensible theory will have $`\lambda (\mathrm{\Lambda })`$ finite at all scales ($`\lambda (\mathrm{\Lambda })\to \mathrm{}`$ is termed the Landau pole), or correspondingly $`\frac{1}{\lambda (\mathrm{\Lambda })}>0`$. This yields an upper bound on $`\lambda `$ and hence on $`M_h^2`$,
$$M_h^2<\frac{8\pi ^2v^2}{3\mathrm{log}(\mathrm{\Lambda }^2/M_h^2)}.$$
(6)
If the Standard Model is valid to the GUT scale, $`\mathrm{\Lambda }\simeq 10^{16}GeV`$, then we have an approximate upper bound on the Higgs mass \[quiros, chiv\],
$$M_h<170GeV.$$
(7)
For any given value of $`\mathrm{\Lambda }`$, there is a corresponding upper bound on $`M_h`$. $`\mathrm{\Lambda }`$ is often termed the โscale of new physicsโ since above this scale, the Standard Model interactions are not valid. This bound is the upper curve on Figure 1.
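As a rough illustration of where this number comes from (a naive one-loop evaluation, added for orientation): at $`\mathrm{\Lambda }=10^{16}GeV`$ the logarithm in Eq. 6 is $`\mathrm{log}(\mathrm{\Lambda }^2/M_h^2)=2\mathrm{log}(10^{16}GeV/M_h)\simeq 63`$ for $`M_h`$ near the bound, so that

$$M_h\lesssim \sqrt{\frac{8\pi ^2(246GeV)^2}{3\times 63}}\simeq 160GeV,$$

within roughly 10% of the quoted $`170GeV`$; the difference is absorbed by the refinements of the naive bounds mentioned at the end of this section.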
There is also a theoretical lower bound on $`M_h`$. If $`\lambda `$ is small (light $`M_h`$), then
$$Q\frac{d\lambda }{dQ}\simeq \frac{1}{16\pi ^2}\left(B-12g_t^4\right),$$
(8)
where $`g_t`$ is the Higgs-top quark Yukawa coupling, $`g_t=M_t/v`$, and $`B`$ is a function of the gauge coupling constants, $`B=\frac{3}{16}(2g^4+(g^2+g^{\prime 2})^2)`$. We see that the large top quark mass tends to drive $`\lambda `$ negative. Solving Eq. 8,
$$\lambda (\mathrm{\Lambda })=\lambda (M_h)+\frac{B-12g_t^4}{16\pi ^2}\mathrm{log}\left(\frac{\mathrm{\Lambda }}{M_h}\right).$$
(9)
Requiring $`\lambda (\mathrm{\Lambda })>0`$ gives the lower bound on $`M_h`$,
$$\frac{M_h^2}{2v^2}>\frac{12g_t^4-B}{16\pi ^2}\mathrm{log}\left(\frac{\mathrm{\Lambda }}{M_h}\right).$$
(10)
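Numerically (an illustrative evaluation added here, not in the original text): with $`g\simeq 0.65`$, $`g^{\prime }\simeq 0.35`$ and $`g_t=M_t/v\simeq 175/246\simeq 0.71`$, one finds $`B\simeq 0.12`$ while $`12g_t^4\simeq 3.1`$, so $`B-12g_t^4<0`$ and the top quark loop indeed dominates the running of $`\lambda `$ at small coupling.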
For large $`M_t`$, this relation changes sign and the two-loop renormalization group corrections are important to obtain the numerical bound. Requiring that the Standard Model be valid to the GUT scale, $`\mathrm{\Lambda }=10^{16}GeV`$, gives the restriction on the Higgs boson mass \[shere\],
$$M_h>130GeV.$$
(11)
This is shown as the lower curve in Figure 1. There have been many theoretical improvements to the naive bounds presented above, but the bottom line is the same: If the Standard Model is valid to the GUT scale, then
$$130GeV<M_h<180GeV,\mathrm{\Lambda }\simeq 10^{16}GeV.$$
(12)
A Higgs boson outside this mass region would be a signal for new physics at the corresponding scale $`\mathrm{\Lambda }`$. The mass region of Eq. 12 is particularly interesting since it could potentially be probed at the Tevatron with an upgraded luminosity.
There are also absolute bounds on the Higgs boson mass which are independent of the scale, $`\mathrm{\Lambda }`$. Unitarity of the $`WW`$ elastic scattering amplitudes requires $`M_h<800GeV`$, while lattice calculations obtain a similar bound, $`M_h<700GeV`$ \[latt\]. All of the theoretical bounds of this section predict a Higgs boson comfortably within the discovery range of the LHC and so, if the Standard Model is correct, a Higgs boson discovery should be just around the corner.
## Prospects for Discovery
The current $`95\%`$ confidence level limit on the Higgs boson mass from direct searches at LEP2, using data from $`\sqrt{s}=189-202GeV`$, is \[blondel\]
$$M_h>106GeV,LEP2.$$
(13)
This limit is not expected to improve substantially with further running at LEP2.
The minimal supersymmetric model has two neutral Higgs bosons, $`h^{SUSY}`$ and $`H^{SUSY}`$, a charged Higgs, $`H^\pm `$, and a pseudoscalar, $`A`$. The structure of the supersymmetric potential dictates that at lowest order all the couplings can be expressed in terms of two parameters, which are typically taken to be the pseudoscalar mass, $`M_A`$, and the ratio of Higgs vacuum expectation values, $`\mathrm{tan}\beta `$. All masses can then be expressed in terms of these two parameters \[susyrev\].
The experimental limit on the Higgs boson mass in a supersymmetric theory typically depends on $`\mathrm{tan}\beta `$. If we require that the limit be valid for all $`\mathrm{tan}\beta `$, there is a slightly lower $`95\%`$ confidence level limit than for the Standard Model Higgs boson \[blondel\],
$$M_h^{SUSY}>90GeV,LEP2.$$
(14)
The minimal supersymmetric theory has the remarkable feature that there is an upper bound on the lightest Higgs boson resulting from the structure of the scalar potential. This bound is roughly
$$M_h^{SUSY}<110-130GeV,$$
(15)
where the exact value depends on assumptions about the parameters of the theory \[carena\]. This is tantalizingly close to the experimental limit of Eq. 14. We see that there is no overlap between the expected mass of the lightest Higgs boson of a supersymmetric model and that of the Standard Model Higgs boson when $`\mathrm{\Lambda }\simeq M_{GUT}`$. Hence an observation of the Higgs boson, with even an imprecise value for its mass, will help to distinguish between the Standard Model and its minimal supersymmetric extension.
A Standard Model Higgs boson should be discovered at the Tevatron or the LHC. Due to the small rate, the Higgs boson will be extraordinarily difficult to observe at the Tevatron. The signal with the best signature is associated production with a $`W^\pm `$. For $`M_h\simeq 120GeV`$, the cross section at $`\sqrt{s}=2TeV`$ is $`\sigma (p\overline{p}\to W^\pm h)\simeq 0.3`$ pb. Even with $`10fb^{-1}`$, the $`5\sigma `$ discovery level is only $`M_h\simeq 100GeV`$, below the current LEP2 limit. This underscores the need for the highest possible luminosity.
Figure 2 illustrates the discovery potential for a Standard Model Higgs boson at the Tevatron \[hobbs\]. For $`M_h<140GeV`$, the dominant signal results from $`p\overline{p}\to Wh`$, $`h\to b\overline{b}`$, while at higher Higgs masses the decay $`h\to WW^{*}`$ becomes the most important. The discovery reach plot combines small signals from many different channels. In fact, the maximum $`S/\sqrt{B}`$ in any channel is $`0.9`$ for $`\mathcal{L}=1fb^{-1}`$. A Standard Model Higgs discovery at the Tevatron will almost certainly require the full $`25-30fb^{-1}`$ of upgraded luminosity.
The LHC, on the other hand, should discover a Standard Model Higgs boson in any mass region below $`1TeV`$, as illustrated in Figure 3, even with only $`30fb^{-1}`$ \[atlastdr\]. From $`M_h\simeq 120GeV`$ all the way up to $`M_h\simeq 700`$ GeV, the Higgs boson can be observed through the decay $`h\to ZZ\to 4l`$. The discovery reach can be extended up to $`M_h\simeq 1TeV`$ through the channels $`h\to ZZ\to l^+l^{}\nu \overline{\nu }`$ and $`h\to W^+W^{}\to l\nu +jetjet`$. With the full luminosity of $`100fb^{-1}`$, the LHC will see a Higgs signal in multiple channels for all possible masses. The observation in multiple channels will allow preliminary measurements of the Higgs coupling constants, as discussed in the next section.
The Standard Model points to a Higgs boson in the $`100-200GeV`$ mass range, while its minimal supersymmetric extension suggests that the lightest Higgs boson is just above the current experimental limit. In either case, such a light Higgs boson would be kinematically accessible through the process $`e^+e^{}\to hZ`$ at an $`e^+e^{}`$ collider with $`\sqrt{s}=350-500GeV`$. The rates for Higgs production at an $`e^+e^{}`$ collider are shown in Fig. 4. For an $`e^+e^{}`$ collider with $`\sqrt{s}=500GeV`$, the dominant production mechanism is $`e^+e^{}\to Zh`$ for $`M_h\lesssim 200GeV`$. At higher energy, say $`\sqrt{s}=1TeV`$, the largest rate is from $`e^+e^{}\to \nu \overline{\nu }h`$. In the next section we examine the capabilities and the required luminosities for linear colliders to measure the Higgs properties and contrast these potential future measurements with what we will know from the LHC.
## Precision measurements of mass, couplings, and branching ratios
### Higgs Mass Measurements
There are two complementary approaches to measuring the Higgs boson mass. The first is through the direct observation of the Higgs boson. For most values of $`M_h`$, with an integrated luminosity of $`\mathcal{L}=300fb^{-1}`$, the LHC will measure $`\frac{\delta M_h}{M_h}\simeq 10^{-3}`$, as shown in Fig. 5. Even at $`M_h\simeq 800GeV`$, the expected precision is $`\frac{\delta M_h}{M_h}\simeq 10^{-2}`$.
At a high energy $`e^+e^{}`$ collider, the cross section for $`e^+e^{}\to Zh`$ is a sensitive function of the Higgs boson mass and we could hope to obtain an extremely precise measurement of the mass. By measuring the rate as a function of $`\sqrt{s}`$, a measurement of order \[tesla\]
$$\delta M_h\simeq 60MeV\sqrt{\frac{100fb^{-1}}{\mathcal{L}}}$$
(16)
could be obtained for a Higgs boson in the $`100GeV`$ region. An alternate method is to measure the recoil spectrum against the lepton pair in the process $`e^+e^{}\to Zh`$, with $`Z\to e^+e^{}`$ or $`Z\to \mu ^+\mu ^{}`$. This would yield a precision of,
$$\delta M_h\simeq 300MeV\sqrt{\frac{100fb^{-1}}{\mathcal{L}}},$$
(17)
again for a Higgs boson in the $`100GeV`$ region. With $`1000fb^{-1}`$, the precision on $`\delta M_h`$ for a light Higgs boson at an $`e^+e^{}`$ collider could be considerably better than at the LHC, using either the excitation spectrum or the recoil spectrum of the $`e^+e^{}\to Zh`$ process.
Precise measurements of $`M_W`$ and $`M_t`$ at future colliders will allow a value of $`M_h`$ to be inferred \[tesla\], as shown in Table 1. (The $`e^+e^{}`$ numbers in this table assume $`\mathcal{L}=1000fb^{-1}`$.) Since the Higgs boson contributes only logarithmically to electroweak observables, the precision is significantly less than that of the direct measurement. Consistency between the direct and the indirect measurements will provide an important check of the theory at the quantum level, however.
### Measurements of Higgs Couplings
The measurement of the Higgs boson couplings is important to differentiate between the Standard Model and other possibilities. In a supersymmetric model, the Higgs couplings to both fermions and gauge bosons can be quite different from those of the Standard Model, as illustrated in Fig. 6 for an arbitrary choice of input parameters. The total decay width can differ by more than an order of magnitude between the Standard Model and a supersymmetric model.
The total Higgs boson width can be measured from the reconstructed Higgs peak at the LHC. This direct measurement is only possible for $`M_h>200GeV`$. Below this mass, the width of the resonance is narrower than the experimental resolution. For $`M_h>200GeV`$, the Higgs can be observed through the decay $`h\to ZZ\to 4l`$ and the resulting measurement of the total width is shown in Fig. 7. With $`\mathcal{L}=300fb^{-1}`$, the LHC can measure $`\mathrm{\Delta }\mathrm{\Gamma }_h/\mathrm{\Gamma }_h<10^{-1}`$ for $`300GeV<M_h<800GeV`$.
Measurements of specific branching ratios are probably the most useful quantities for distinguishing between the Standard Model and other models. As an example, I discuss the coupling of the Higgs boson to the top quark. In the Standard Model the Yukawa coupling is given by,
$$g_t=\frac{M_t}{v},$$
(18)
while in the minimal supersymmetric model the coupling is modified by the factor $`C_{tth}`$,
$$g_t=C_{tth}\frac{M_t}{v}.$$
(19)
For some values of $`\mathrm{tan}\beta `$ and $`M_A`$, $`C_{tth}`$ can be quite different from 1, as shown in Fig. 8. Fig. 8 also shows the coupling of the heavier neutral Higgs boson of a supersymmetric theory, $`H^{SUSY}`$, to the top quark. Again, the coupling can be far from the Standard Model coupling. Note that for $`M_A\to \mathrm{}`$, $`C_{tth}\to 1`$, $`C_{ttH}\to 0`$ and the Standard Model coupling is recovered.
At the LHC, the $`t\overline{t}h`$ coupling can be measured to roughly $`20\%`$ through the process $`pp\to t\overline{t}h`$ in the mass region $`M_h\simeq 120GeV`$ \[atlastdr\]. (For higher Higgs masses, the cross section becomes quite small.) A similar mass region can be probed at an $`e^+e^{}`$ collider. The signal decays predominantly to $`W^+W^{}b\overline{b}b\overline{b}`$ and so will be spectacular. A study of the signal and background showed that the signal could be extracted from the background using both the semi-leptonic and the hadronic decays of the $`W`$’s and a measurement of $`g_t`$ obtained \[morr, bdr\]. Table 2 shows the expected precision for the measurement of $`g_t`$ at $`\sqrt{s}=500GeV`$ and $`1TeV`$ \[bdr\]. The message is clear: a precision measurement of $`C_{tth}`$ requires high energy and high luminosity ($`\mathcal{L}=1000fb^{-1}`$) in order to improve on the LHC’s measurement.
The total rate for Higgs production in the process $`e^+e^{}\to Zh`$ can be found by measuring the recoil mass of the lepton pair, $`M_{ll}`$, from the decay $`Z\to l^+l^{}`$. This measurement is independent of the Higgs boson decay mode. Once the total rate is known, the Higgs branching ratios can be measured by flavor tagging of the Higgs decay final states.
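A toy numerical illustration of the recoil-mass idea follows (the four-momenta below are invented for illustration and tuned so that the lepton pair has $`M_{ll}\simeq M_Z`$; this is not code from any of the cited analyses):

```python
# Toy recoil-mass calculation for e+e- -> Zh, Z -> l+l-:
# M_rec^2 = s - 2*sqrt(s)*E_ll + M_ll^2, independent of the Higgs decay.
import math

def recoil_mass(sqrt_s, p1, p2):
    """p1, p2: lepton four-momenta (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    m_ll2 = E ** 2 - px ** 2 - py ** 2 - pz ** 2   # dilepton invariant mass^2
    m_rec2 = sqrt_s ** 2 - 2.0 * sqrt_s * E + m_ll2
    return math.sqrt(max(m_rec2, 0.0))

# Invented lepton pair at sqrt(s) = 500 GeV with M_ll ~ M_Z; the recoil
# then reconstructs a Higgs-like mass near 120 GeV.
p_lep1 = (156.2, 43.5, 0.0, 150.0)
p_lep2 = (87.7, -43.5, 0.0, 76.2)
print(f"M_rec = {recoil_mass(500.0, p_lep1, p_lep2):.1f} GeV")
```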
The measurements of the Higgs couplings to the lighter quarks can be done with a precision of $`5-10\%`$ with $`500fb^{-1}`$ at an $`e^+e^{}`$ collider. Ref. \[batt\] found roughly equivalent results for $`\sqrt{s}=350GeV`$ and $`\sqrt{s}=500GeV`$. The error on the measurements of the Higgs Yukawa couplings of Ref. \[batt\] is dominated by theoretical uncertainty due to the measured input values of $`\alpha _s`$, $`m_c`$, and $`m_b`$, not by systematic or statistical errors.
Armed with measurements of the Higgs boson branching ratios, we can ask over what region of parameter space the minimal supersymmetric model can be distinguished from the Standard Model. The answer is shown in Figure 9, taken from Ref. \[batt\]. First, the $`95\%`$ confidence level value of the branching ratio for the Standard Model was computed. Ref. \[batt\] then scans over the parameter space of the minimal supersymmetric model, taking $`\mathrm{tan}\beta <60`$ and the mass parameters to be less than $`1-1.5TeV`$. For a given set of parameters, the Higgs branching ratio was then computed. In Fig. 9, the region to the right of the curves (going from left to right on the figure) has more than $`68`$, $`90`$ or $`95\%`$ of the supersymmetric model solutions outside of the Standard Model $`95\%`$ confidence level region. With $`500fb^{-1}`$, an $`e^+e^{}`$ collider can distinguish between the Standard Model and the minimal supersymmetric model up to $`M_A\simeq 550GeV`$, while with $`1000fb^{-1}`$ the sensitivity is increased to $`M_A\simeq 730GeV`$ \[batt\]. This is remarkable given the decoupling of the Higgs sector of the minimal supersymmetric model for large $`M_A`$.
At the LHC, measurements of the Higgs couplings are less clearcut than at an $`e^+e^{}`$ collider. At the LHC, measurements involving the Higgs boson typically involve combinations of Higgs couplings. For example, a measurement of the ratio of the $`h\to \gamma \gamma `$ and $`h\to ZZ\to 4l`$ rates would give the ratio of the $`h\to \gamma \gamma `$ and $`h\to ZZ`$ branching ratios, but not the absolute couplings. A study of the combinations of Higgs couplings which can be measured at the LHC is given in Ref. \[snow1\].
## Verifying the structure of the Higgs potential
Once a Higgs particle is found, it will be necessary to investigate its self-couplings in order to reconstruct the Higgs potential and to verify that the observed particle is indeed the Standard Model Higgs boson which results from spontaneous symmetry breaking. A first step in this direction is the measurement of the trilinear self-couplings of the Higgs boson which are uniquely specified by the scalar potential of Eq. 2.
After the symmetry breaking, the self-couplings of the Higgs boson are uniquely determined by $`M_h`$,
$$V=\frac{M_h^2}{2}h^2+\frac{M_h^2}{2v}h^3+\frac{M_h^2}{8v^2}h^4.$$
(20)
In extensions of the Standard Model, such as models with an extended scalar sector, with composite particles or with supersymmetric partners, the self-couplings of the Higgs boson may be significantly different from the Standard Model predictions.
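For reference (a standard step, spelled out here for convenience), equation 20 follows from equation 2: using the minimization condition of the potential, which fixes $`\mu ^2`$ in terms of $`\lambda v^2`$, and substituting $`\mathrm{\Phi }^2=(v+h)^2/2`$, the $`h`$-dependent part of the potential becomes

$$V(h)=\lambda v^2h^2+\lambda vh^3+\frac{\lambda }{4}h^4=\frac{M_h^2}{2}h^2+\frac{M_h^2}{2v}h^3+\frac{M_h^2}{8v^2}h^4,$$

where the last equality uses $`\lambda =M_h^2/2v^2`$ from Eq. 3.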
In order to probe the three- and four-point Higgs couplings, it is necessary to measure multi-Higgs production. Higgs boson pairs can be produced by several mechanisms at hadron colliders:
* Higgs-strahlung $`W^{*}/Z^{*}\to hhW/Z`$,
* vector-boson fusion $`WW,ZZ\to hh`$,
* Higgs radiation off top and bottom quarks $`gg,q\overline{q}\to Q\overline{Q}hh`$,
* gluon-gluon collisions $`gg\to hh`$.
At the LHC, gluon fusion is the dominant source of Higgs-boson pairs in the Standard Model; it arises from quark loops, with the dominant contribution coming from top quark loops. The rate, even at the LHC, is quite small, as can be seen in Fig. 10. Although the rate is sensitive to the tri-linear coupling, the variation is probably too small to be observed \[dds\]. A detailed study of the signal and background gives the results shown in Fig. 11 \[bely\]. This study computed the minimum rate necessary for a $`5\sigma `$ discovery of $`hh`$ production. It is clear that in the Standard Model this physics will have to wait for the next generation of accelerators.
In a supersymmetric model, the $`b`$-quark contribution to $`hh`$ production will be enhanced for large $`\mathrm{tan}\beta `$. Even so, in the absence of large squark loop contributions, with $`25fb^{-1}`$ the Tevatron can only exclude a small region of parameter space with $`M_A<150GeV`$ and $`\mathrm{tan}\beta >80`$. The LHC will be able to exclude an even larger region of $`M_A`$ and $`\mathrm{tan}\beta `$ space \[bely\]. However, the situation changes dramatically for light squarks and with the parameters chosen to maximize the squark tri-linear couplings. In this case it is possible to obtain a significant enhancement of the rate, largely due to resonance effects. This is shown in Fig. 12 for the Tevatron \[bely\]. In this very special situation, even the Tevatron will be extremely sensitive to double Higgs production.
At a high energy $`e^+e^{}`$ collider, Higgs pairs are produced through similar mechanisms as in hadronic collisions. At intermediate energies, $`\sqrt{s}\simeq 500GeV`$, the dominant mechanism is $`e^+e^{}\to Zhh`$, while at TeV scale energies the process $`e^+e^{}\to \nu \overline{\nu }hh`$ is dominant. Just above the kinematic threshold, the sensitivity to the trilinear coupling is maximal in the $`e^+e^{}\to Zhh`$ process. With $`2000fb^{-1}`$, the tri-linear coupling can be measured to $`15\%`$ \[zerwas\]. The cross sections for all sources of double Higgs production at an $`e^+e^{}`$ collider are small, on the order of a few femtobarn or less for $`M_h<200GeV`$ \[zerwas\]. This is clearly a measurement which requires the highest possible luminosity in order to isolate the signal from the background and make a measurement of the tri-linear Higgs coupling.
At present, it does not appear possible to measure the Higgs boson four-point coupling. In principle, it could be measured in triple Higgs production, but the rate is minuscule.
## Strongly Interacting Symmetry Breaking
If a Higgs boson is not found at the LHC, then the electroweak symmetry breaking is strongly interacting. Without the addition of some new type of physics, $`WW`$ scattering will violate unitarity at an energy scale somewhere below $`3TeV`$. There are two classes of effects which could potentially be observed in this scenario.
The first possibility is that whatever new physics unitarizes the $`WW`$ scattering is at too high an energy scale to be observed at either the LHC or an $`e^+e^{}`$ collider with $`\sqrt{s}=500GeV-1TeV`$. In this case the only effects which can be observed are small deviations in absolute rates. The Lagrangian can be written as
$$\mathcal{L}=\mathcal{L}_{SM}+\underset{i}{\sum }\frac{f_i}{\mathrm{\Lambda }^2}\mathcal{O}_i,$$
(21)
where $`\mathcal{L}_{SM}`$ is the Lagrangian of the Standard Model with the Higgs boson removed. Without the Higgs boson, the Lagrangian can be written in terms of an expansion in powers of $`\frac{s}{\mathrm{\Lambda }^2}`$, where $`\mathrm{\Lambda }`$ is the scale of new physics. The $`f_i`$ are dimensionless coefficients of the new operators, $`\mathcal{O}_i`$. A complete set of operators at order $`s/\mathrm{\Lambda }^2`$ can be found in Ref. \[ab\]. The goal of the LHC or a high energy $`e^+e^{}`$ collider in this scenario would be to measure the $`f_i`$ and attempt to distinguish between models. At the LHC, there will be a very small number of events \[atlastdr\] and it is doubtful if it will be possible to tell the difference between the various possible models. An $`e^+e^{}`$ collider with $`\sqrt{s}\simeq 1.5TeV`$ could measure some of the $`f_i`$ to $`\mathcal{O}(10^{-3})`$, but a complete set of measurements will take still higher energy.
In the second case, the new physics which unitarizes the $`WW`$ scattering amplitudes produces resonances which can be observed. Numerous studies have found that an $`e^+e^{}`$ collider with $`\sqrt{s}\simeq 1.5TeV`$ has roughly the same sensitivity to $`TeV`$ scale resonances as does the LHC \[snow1\]. Both machines will be sensitive to resonances on the order of $`1.5TeV`$.
## Conclusion
Even after the LHC has successfully run for a few years, there will still be unanswered physics questions. If a weakly interacting Higgs boson exists, either from a supersymmetric model or the Standard Model, it will be observed at the LHC. The LHC will make preliminary measurements of the Higgs boson mass and couplings, but a high energy $`e^+e^{}`$ collider with high luminosity will significantly improve on the precision. Precise measurements of the Higgs width are particularly important for differentiating between models. Measurements of double Higgs production and strong symmetry breaking in particular will require the highest possible energy and luminosity.
This note has considered only electroweak symmetry breaking. There will of course be many exciting questions to be answered in other areas of particle physics such as supersymmetry, QCD, CP violation, etc. Interesting times await us!
# The nature of 1WGA J1958.2+3232: A new intermediate polar
## 1 Introduction
There are several types of X-ray sources which display significant modulation in their X-ray lightcurves, among which are isolated neutron stars, anomalous X-ray pulsars and two types of well characterised binary systems: accreting X-ray pulsars (accreting neutron stars with strong magnetic fields $`B\gtrsim 10^{11}\mathrm{G}`$) and magnetic cataclysmic variables (accreting white dwarfs with moderate magnetic fields $`B\gtrsim 10^5\mathrm{G}`$). Recent systematic analysis of ROSAT observations has resulted in the detection of several new such sources. However, given the limited spectral information of the ROSAT data and the impossibility of determining the intrinsic luminosity of the sources, the classification of these objects depends on the identification of their optical counterpart. X-ray pulsators are generally part of a high mass X-ray binary (HMXRB) and their optical spectra are those of the massive companion, without any significant contribution from the vicinity of the neutron star. In cataclysmic variables (CVs), on the other hand, the white dwarf is accreting from a late-type unevolved star and the optical spectrum is dominated by emission from the accretion disc (if present) or the accretion stream (when the magnetic field is too strong to allow the formation of an accretion disc). In polars (AM Her stars), the magnetic field ($`B\gtrsim 5\times 10^6\mathrm{G}`$) is dominant: there is no accretion disc and the orbit and spin periods are synchronised. In magnetic CVs with weaker magnetic fields (intermediate polars), rotation is not synchronous and an accretion disc, an accretion stream or both can be present.
Strong modulation (at the 80% level) was discovered in the X-ray signal from the ROSAT PSPC source 1WGA J1958.2+3232 by Israel et al. (1998). The pulse period was poorly determined at 721$`\pm `$14 s, though a later ASCA observation allowed the derivation of a much more accurate value, 734$`\pm `$1 s (Israel et al. 1999). The energy spectrum was fitted by a simple power law, giving a photon index $`\mathrm{\Gamma }=0.8_{-0.6}^{+1.2}`$ and a column density $`N_\mathrm{H}=\left(6_{-5}^{+24}\right)\times 10^{20}\mathrm{cm}^{-2}`$. Given these parameters and the fact that the source was close to the Galactic plane, Israel et al. (1998) were unable to decide whether the source was a low-luminosity persistent Be/X-ray binary (see Negueruela 1998; Reig & Roche 1999) or an intermediate polar (see Patterson 1994).
Later Israel et al. (1999) located a $`V=15.7`$ emission line object inside the 30$`^{\prime \prime }`$ X-ray error circle, which is the optical counterpart. Based on a low signal-to-noise spectrum, Israel et al. (1999) classified the object as a Be star, in spite of the evident presence of strong He ii $`\lambda `$4686Å emission, which is never seen in classical Be stars (in Be/X-ray binaries, if at all present, it only shows as some in-filling in the photospheric line). Based on some features that they identified as interstellar lines, they speculated that the optical counterpart was a slightly reddened B0Ve star at a distance of 800 pc. However, if a B0V star were slightly reddened, it should have an apparent magnitude $`V\sim 6`$ rather than $`\sim 16`$, and a very large reddening is unlikely given that the extinction in that direction has been measured to be small (Neckel & Klare 1980). This led us to obtain higher resolution spectra of the source.
## 2 Observations
### 2.1 Optical spectroscopy
We observed the optical counterpart to 1WGA J1958.2+3232 on July 12, 1999, using the Intermediate Dispersion Spectroscopic and Imaging System (ISIS) on the 4.2-m William Herschel Telescope (WHT), located at the Observatorio del Roque de los Muchachos, La Palma, Spain. The blue arm was equipped with the R300B grating and the EEV#10 CCD, which gives a nominal dispersion of $`\sim 0.9`$ Å/pixel over $`\sim 3500`$ Å. The red arm was equipped with the R1200R grating and the Tek4 CCD, which gives a nominal dispersion of $`\sim 0.4`$ Å/pixel at H$`\alpha `$. The exposure time was $`1500\mathrm{s}`$. The data were processed using the Starlink packages ccdpack (Draper 1998) and figaro (Shortridge et al. 1997). The extracted spectra are displayed in Figures 1 and 2.
We obtained lower resolution spectroscopy using the 1.3-m Telescope at the Skinakas Observatory (Crete, Greece) on July 26, 1999. The telescope is an f/7.7 Ritchey-Chrétien and was equipped with a 2000 $`\times `$ 800 ISA SITe chip CCD. This camera has 15$`\mu `$m pixels and reaches maximum efficiency ($`\sim `$90%) in the red, at around H$`\alpha `$. The spectrum (an 1800-s exposure) was taken with a 1300 line mm<sup>-1</sup> grating and a 320 $`\mu `$m width slit (6<sup>′′</sup>.7), which gave a dispersion of 1 Å pixel<sup>-1</sup>. The spectrum, which is displayed in Figure 3, was reduced using figaro.
### 2.2 Optical photometry
We obtained Strรถmgren photometry of the field using the 1.3-m Telescope at Skinakas Observatory on August 16, 1999 (JD 2,451,407). The telescope was equipped with a 1024 $`\times `$ 1024 pixel SITe CH360 CCD. The size of the pixels was 24$`\mu `$m, representing approximately 0<sup>โฒโฒ</sup>.5 on the sky. The source was observed through standard $`u`$, $`v`$, $`b`$, $`y`$ filters with exposure times of 1200, 900, 600 and 300 seconds, respectively. A sufficient number of standards were observed in order to compute the atmospheric extinction coefficients and allow the transformation to the standard system.
The results are displayed in Table 1. We have also obtained measurements for the only other star of similar brightness which was inside both the ASCA and ROSAT error circles โ dubbed โcandidate Aโ by Israel et al. (1999). As can be seen, the values of $`y`$ for the proposed optical counterpart and candidate A are compatible with the $`V`$ values obtained by Israel et al. (1999) โ 15.7$`\pm `$0.2 and 15.4$`\pm `$0.2 respectively.
## 3 Discussion
The blue spectrum of the optical counterpart to 1WGA J1958.2+3232 is displayed in Figure 1. The spectrum is typical of a cataclysmic variable, with no obvious stellar absorption features and strong emission in all Balmer lines (down to H12). The absence of photospheric features rules out the possibility that 1WGA J1958.2+3232 is a Be/X-ray binary — see, for example, Steele et al. (1999), where it is shown that even for the Be stars with the strongest emission veiling, the photospheric features allow spectral classification to the spectral subtype.
In the spectrum of 1WGA J1958.2+3232, on the other hand, as is typical in intermediate polars, He ii $`\lambda `$4686Å and the Bowen complex are strongly in emission. Many other He i and He ii transitions are also in emission. The Balmer lines are all double-peaked and asymmetric with a stronger blue peak (note that the profile of H$`\epsilon `$ is modified by the interstellar Ca ii $`\lambda `$3968Å line). The asymmetry is still stronger in the He ii lines and can be seen in the weaker He i lines. The centroids of the emission lines (determined by fitting a single Gaussian to the profile) show no displacement from the rest wavelength within the resolution achieved. The blue peaks of the H i and He ii lines are displaced by $`\sim -250`$ km s<sup>-1</sup>.
Figure 2 displays H$`\alpha `$ and He i $`\lambda `$6678Å at higher resolution. The double-peaked shape can be seen in greater detail in the H$`\alpha `$ line. This is evidence for the presence of an accretion disc surrounding the white dwarf. The exact shape of the lines must depend on the orbital phase at which the observation was taken. Given that the X-ray flux is strongly pulsed and an accretion disc is present, the object must be an intermediate polar. Therefore the observed X-ray variation should represent the spin period of the cataclysmic variable or the beat period between the spin and orbital periods, since it should be an asynchronous system. The sharpness of the peaks indicates that the 25-min exposure does not represent a significant portion of the orbit (otherwise the peaks would be blurred). This is consistent with expected orbital periods of a few hours.
In the lower resolution spectrum taken two weeks later (Fig. 3), H$`\alpha `$ and the He i lines are single-peaked and red-dominated, indicating that the source was observed at a different orbital phase. Even though the resolution is rather lower than in the WHT spectrum, a peak separation similar to that measured in the first spectrum ($`v\simeq 375`$ km s<sup>-1</sup>) should have been resolved. The interstellar Na i lines are not detectable above the noise level. Due to their weakness and the irregularity of the continuum, no diffuse interstellar bands (DIB) can be measured even in the higher resolution spectra. We set upper limits for the Equivalent Width (EW) of the DIBs at $`\lambda `$4430Å and $`\lambda `$6613Å as EW $`<400`$ mÅ and $`<50`$ mÅ, both of which are consistent with $`E(B-V)<0.2`$ (Herbig 1975). This is in accordance with the measurements of interstellar absorption in this direction ($`l=69\mathrm{deg}`$, $`b=1.7\mathrm{deg}`$) by Neckel & Klare (1980), who find $`A_V<0.5`$ mag and $`A_V<1.0`$ mag at 1 kpc for the two fields between which 1WGA J1958.2+3232 approximately lies.
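To make the resolution argument quantitative (simple arithmetic added here for clarity): at H$`\alpha `$, a velocity splitting of 375 km s<sup>-1</sup> corresponds to $`\mathrm{\Delta }\lambda =\lambda v/c\simeq 6563\times 375/(3\times 10^5)\simeq 8`$ Å, several times the 1 Å pixel<sup>-1</sup> dispersion of the Skinakas spectrum, so a genuinely double-peaked profile would indeed have been resolved.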
In the WHT observations, we set the slit in such a way as to also observe the nearby star dubbed "Candidate A" by Israel et al. (1999), which is about $`40^{\prime \prime }`$ away from the optical counterpart to 1WGA J1958.2+3232 and therefore could provide some information on the reddening in that direction. Even though Israel et al. (1999) claim that this object is an early-type star, comparison with the spectra of several stars taken from the electronic database of Leitherer et al. (1996) shows that its spectral type is F8V (see Fig. 4). We cannot see the $`\lambda `$4430Å DIB down to the level of the many weak features in the spectrum, which gives an upper limit of $`\mathrm{EW}<300`$ mÅ. From the measured $`(b-y)=0.59\pm 0.07`$ and the intrinsic $`(b-y)_0=0.350`$ for an F8V star (Popper 1980) we obtain an interstellar reddening $`E(b-y)=0.24`$. Using the relation of Crawford & Mandwewala (1976), $`E(B-V)=1.35E(b-y)`$, this implies $`E(B-V)=0.32`$, significantly higher than the upper limit derived from the interstellar $`\lambda `$4430Å DIB, which implies $`E(B-V)<0.13`$ according to the relation of Herbig (1975). Assuming $`M_V=+4.2`$ for a main-sequence F8 star (Deutschman et al. 1976) and the standard reddening law with $`R=3.1`$, this star is situated at a distance $`d\sim 0.9\mathrm{kpc}`$.
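The distance for the F8V star follows from the reddening-corrected distance modulus. In the sketch below the apparent magnitude V = 14.9 is a hypothetical value chosen only to reproduce d of about 0.9 kpc; it is not a measurement reported here.

```python
def photometric_distance_pc(v_app, m_abs, ebv, r_v=3.1):
    """Distance [pc] from the distance modulus V - M_V = 5*log10(d/10pc) + A_V.

    v_app : apparent V magnitude (hypothetical here)
    m_abs : absolute magnitude M_V
    ebv   : colour excess E(B-V)
    r_v   : total-to-selective extinction ratio (standard value 3.1)
    """
    a_v = r_v * ebv                       # A_V = R * E(B-V)
    return 10.0 ** ((v_app - m_abs - a_v + 5.0) / 5.0)

# M_V = +4.2 (Deutschman et al. 1976); E(B-V) = 0.32 from the Stromgren colours
d = photometric_distance_pc(v_app=14.9, m_abs=4.2, ebv=0.32)
print(f"d ~ {d / 1000.0:.2f} kpc")        # ~0.9 kpc
```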
Given its brightness, 1WGA J1958.2+3232 should be located at a distance of 1–1.5 kpc (see Israel et al. 1998), i.e., farther away than the F8V star, and therefore should have a higher reddening. If the reddening is $`E(B-V)>0.3`$, the soft X-ray flux could be absorbed, which would explain the relatively low $`L_\mathrm{x}/L_{\mathrm{opt}}`$ of the source when compared to less distant intermediate polars (see Israel et al. 1998). We note that the interstellar lines indicate a lower reddening, but for the F8V star, too, this line-based estimate is rather lower than the photometric determination of the reddening.
With a pulse period of $`734\mathrm{s}`$, this system falls in between the two groups of short- and long-period intermediate polars defined by Norton et al. (1999), which are characterised by different X-ray pulse shapes. Clearly, further X-ray observations of the source are needed; either RXTE or Chandra could provide more detailed timing observations. Future time-resolved photometric and spectroscopic observations are also needed in order to determine the orbital period and to establish whether the observed X-ray pulsations correspond to the spin period.
## 4 Conclusions
Based on intermediate-resolution spectroscopy, we conclude that 1WGA J1958.2+3232 is an intermediate polar rather than a Be/X-ray binary. From the magnitudes measured for the object and a very nearby F8V star, we estimate that 1WGA J1958.2+3232 is situated at a distance of 1–1.5 kpc and is moderately reddened, with $`E(B-V)\sim 0.3`$.
## Acknowledgements
The WHT is operated on the island of La Palma by the Royal Greenwich Observatory in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias. The observations were taken as part of the ING service observing programme. Skinakas Observatory is a collaborative project of the University of Crete, the Foundation for Research and Technology-Hellas and the Max Planck Institut für Extraterrestrische Physik. The authors would like to thank Dr. GianLuca Israel for his help with this work and Drs D. di Martino and A. J. Norton for their helpful comments on the draft. We are also grateful to Drs E. V. Paleologou and I. E. Papadakis for helping with the spectroscopic and photometric observations at Skinakas Observatory, respectively. IN is supported by an ESA external fellowship. PR acknowledges partial support via the European Union Training and Mobility of Researchers Network Grant ERBFMRX/CT98/0195. JSC is supported by a PPARC research assistantship.
# On extensions of representations for compact Lie groups
## 1. Introduction
One of the classical problems in finite group theory is to characterize extensions of representations. We mean an extension of a representation in the following way: Given a normal subgroup $`H`$ of a group $`G`$, a (complex) representation $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ is called *extendible to $`G`$* if there exists a representation $`\stackrel{~}{\rho }:G\to \mathrm{GL}(n,\mathbb{C})`$ (called a *$`G`$-extension*) such that $`\rho =\stackrel{~}{\rho }`$ on $`H`$. Note that the dimension $`n`$ is not changed; indeed, $`\rho `$ always occurs as a sub-representation of the restriction to $`H`$ of the representation of $`G`$ induced from $`\rho `$.
In the case of finite $`G`$, it is well known that every complex irreducible representation of $`H`$, which is $`G`$-invariant under conjugation (see Section 2 for the definition), is extendible to $`G`$ if the second group cohomology $`H^2(G/H,\mathbb{C}^{*})`$ vanishes \[Isa76, Theorem 11.7\]. On the other hand the extension problem for infinite groups has not been extensively studied. In this article we study the problem for compact Lie groups when $`G/H`$ is connected. Our main result is a necessary and sufficient condition for every complex representation of $`H`$ to be extendible to $`G`$. It is also shown that the condition is related to a topological invariant, the fundamental group of $`G/H`$.
For any group $`G`$, let $`G^{\prime }`$ denote the commutator subgroup of $`G`$.
###### Theorem 1.1.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is connected. Then every complex representation of $`H`$ is extendible to $`G`$ if and only if $`H`$ is a direct summand of $`G^{\prime }H`$.
###### Corollary 1.2.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is connected. Then every complex representation of $`H`$ is extendible to $`G`$ if the fundamental group $`\pi _1(G/H)`$ is torsion free, or equivalently if $`(G/H)^{\prime }`$ is simply connected.
Our theorem provides a complete characterization of the triviality of complex $`G`$-vector bundles over the homogeneous space $`G/H`$. Let $`E`$ be a complex $`G`$-vector bundle over $`G/H`$. We recall that $`E`$ is *trivial* if it is isomorphic to the product bundle $`G/H\times V`$ for some complex $`G`$-module $`V`$. Since $`E`$ is uniquely determined by the fiber at the identity element of $`G/H`$ (say $`E_0`$), the bundle $`E`$ is trivial if and only if $`E_0`$ as a complex representation of $`H`$ is extendible to $`G`$. Theorem 1.1 leads us to the following corollary.
###### Corollary 1.3.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is connected. Then every complex $`G`$-vector bundle over the homogeneous space $`G/H`$ is trivial if and only if $`H`$ is a direct summand of $`G^{\prime }H`$. ∎
The existence of $`G`$-extensions plays an important role even in equivariant $`K`$-theory. Let $`X`$ be a connected topological space with a compact Lie group $`G`$ action. Let $`H`$ be the normal subgroup of $`G`$ which consists of all elements of $`G`$ acting trivially on $`X`$. Then the projection $`G\to G/H`$ induces the canonical homomorphism $`\varphi :K_{G/H}(X)\to K_G(X)`$ which sends a $`G/H`$-vector bundle over $`X`$ to the same bundle viewed as a $`G`$-vector bundle with the trivial $`H`$-action.
On the other hand, suppose that every complex irreducible representation of $`H`$ is extendible to $`G`$. Then there is an injective group homomorphism $`e:R(H)\to R(G)`$ between the two representation rings, defined as follows. For each irreducible complex $`H`$-module $`U`$ choose a $`G`$-extension $`U_G`$, and define $`e([U])=[U_G]`$, where $`[\cdot ]`$ denotes classes in the representation rings. Then extend the definition of $`e`$ to all of $`R(H)`$ so that it defines a homomorphism $`R(H)\to R(G)`$. For each complex $`G`$-module $`V`$ we can associate the trivial complex $`G`$-vector bundle $`\underline{V}=X\times V`$, which defines the natural homomorphism $`t:R(G)\to K_G(X)`$. We now define a group homomorphism
(1)
$$\mu :R(H)\otimes K_{G/H}(X)\to K_G(X),\qquad (V,\xi )\mapsto te(V)\varphi (\xi ).$$
This homomorphism is an isomorphism. Indeed, the inverse is given as follows. Let $`\mathrm{Irr}(H)`$ denote the set of all isomorphism classes of complex irreducible representations of $`H`$. For each $`[\chi ]\in \mathrm{Irr}(H)`$ choose a $`G`$-extension of $`\chi `$, and let $`V_\chi `$ be the $`G`$-module corresponding to the chosen $`G`$-extension. For a complex $`G`$-vector bundle $`E`$ over $`X`$, the canonical isomorphism
$$E\stackrel{\cong }{\to }\underset{[\chi ]\in \mathrm{Irr}(H)}{\bigoplus }\underline{V_\chi }\otimes \mathrm{Hom}_H(\underline{V_\chi },E)$$
induces a group homomorphism $`K_G(X)\to R(H)\otimes K_{G/H}(X)`$ which is the desired inverse (see \[CKMS99, Section 2\] for more general arguments). Therefore we have a generalization of Proposition 2.2 in \[Seg68\], which deals with the extreme case when $`G`$ acts trivially on $`X`$.
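Written out on generators, the inverse takes the following form (the label $`\nu `$ is our notation for the map just described, not notation used elsewhere in this paper):
$$\nu :K_G(X)\to R(H)\otimes K_{G/H}(X),\qquad [E]\mapsto \underset{[\chi ]\in \mathrm{Irr}(H)}{\sum }[V_\chi ]\otimes [\mathrm{Hom}_H(\underline{V_\chi },E)],$$
where each $`\mathrm{Hom}_H(\underline{V_\chi },E)`$ is a $`G/H`$-vector bundle because $`H`$ acts trivially on it.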
###### Corollary 1.4.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is connected. Let $`X`$ be a connected $`G`$-space such that $`H`$ acts trivially on $`X`$. If $`H`$ is a direct summand of $`G^{\prime }H`$, then the map $`\mu :R(H)\otimes K_{G/H}(X)\to K_G(X)`$ in (1) can be defined, and it is a group isomorphism. ∎
This article is organized as follows. In Section 2 we give some basic notions and then show that a complex irreducible representation of $`H`$ which is $`G`$-invariant under conjugation induces an associated projective representation of $`G`$, which may be viewed as a $`G`$-extension at the projective representation level. Section 3 is devoted to proving that every complex representation of $`H`$ has a $`G`$-extension when $`G/H`$ is connected and abelian. In Section 4 we continue the study in the case that $`G/H`$ is semisimple and connected. After showing that the extension problem can be reduced to this case, we prove Theorem 1.1.
The authors wish to thank Professor Mikiya Masuda of Osaka City University for valuable discussions on the overall contents of the article. The authors also wish to thank Professor I. Martin Isaacs of University of Wisconsin and Professor Hi-joon Chae of Hong-Ik University for helpful discussions on finite and Lie group representations.
## 2. Associated projective representations
Let $`G`$ be a topological group and $`H`$ a closed normal subgroup of $`G`$. By a (complex) *representation* of $`G`$ we shall mean a continuous homomorphism of $`G`$ into the general linear group $`\mathrm{GL}(n,\mathbb{C})`$ of nonsingular $`n\times n`$ matrices over the field $`\mathbb{C}`$ of complex numbers. A representation $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ is called *extendible to $`G`$* if there exists a representation $`\stackrel{~}{\rho }:G\to \mathrm{GL}(n,\mathbb{C})`$ (called a *$`G`$-extension* of $`\rho `$) such that $`\rho (h)=\stackrel{~}{\rho }(h)`$ for all $`h\in H`$.
Moreover, to obtain a $`G`$-extension of $`\rho `$ it is enough to find a representation $`\stackrel{~}{\rho }:G\to \mathrm{GL}(n,\mathbb{C})`$ whose restriction to $`H`$ is isomorphic (or similar) to $`\rho `$, i.e., such that there exists a matrix $`M\in \mathrm{GL}(n,\mathbb{C})`$ with $`M^{-1}\stackrel{~}{\rho }(h)M=\rho (h)`$ for all $`h\in H`$.
Given a representation $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$, the map $`{}_{}{}^{g}\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ defined by conjugation, $`{}_{}{}^{g}\rho (h)=\rho (g^{-1}hg)`$, is a representation of $`H`$ for each $`g\in G`$. We say that $`\rho `$ is *$`G`$-invariant* if it is isomorphic to the conjugate representation $`{}_{}{}^{g}\rho `$ for all $`g\in G`$, which is a necessary condition for $`\rho `$ to be extendible to $`G`$.
In the following we assume that a representation $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ is irreducible and $`G`$-invariant. Then for each $`g\in G`$ there exists a matrix $`M_g\in \mathrm{GL}(n,\mathbb{C})`$ such that $`M_g^{-1}\rho (h)M_g={}_{}{}^{g}\rho (h)=\rho (g^{-1}hg)`$ for all $`h\in H`$. Since $`\rho `$ is irreducible, Schur's lemma implies that $`M_g`$ is unique up to multiplication by a nonzero constant in $`\mathbb{C}^{*}=\mathbb{C}\setminus \{0\}`$. So we are able to define a function $`\rho ^{\prime }`$ from $`G`$ into the projective linear group $`\mathrm{PGL}(n,\mathbb{C})=\mathrm{GL}(n,\mathbb{C})/\mathbb{C}^{*}`$ by $`\rho ^{\prime }(g)=[M_g]`$ for each $`g\in G`$, where $`[M_g]`$ denotes the image of $`M_g`$ under the canonical projection $`\pi :\mathrm{GL}(n,\mathbb{C})\to \mathrm{PGL}(n,\mathbb{C})`$.
###### Lemma 2.1.
Let $`G`$ be a topological group and $`H`$ a compact normal subgroup of $`G`$. Given a complex irreducible representation $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ which is $`G`$-invariant, the function $`\rho ^{\prime }:G\to \mathrm{PGL}(n,\mathbb{C})`$ defined above is a continuous homomorphism, called the projective representation of $`G`$ associated with $`\rho `$. Moreover, the image of $`\rho ^{\prime }`$ is contained in $`U(n)/S^1\subseteq \mathrm{PGL}(n,\mathbb{C})`$ if $`\rho `$ is a unitary representation of $`H`$.
###### Proof.
It is immediate that $`\rho ^{\prime }`$ is a homomorphism. Since $`H`$ is compact we may assume that $`\rho `$ is a unitary representation of $`H`$, i.e., the image of $`\rho `$ is contained in the unitary group $`U(n)`$. Then $`M_g`$ is a constant multiple of a matrix in $`U(n)`$, so that $`\rho ^{\prime }(g)`$ is contained in $`U(n)/S^1`$ for all $`g\in G`$. For the continuity of $`\rho ^{\prime }`$ it suffices to show that the graph of $`\rho ^{\prime }`$ in $`G\times \mathrm{PGL}(n,\mathbb{C})`$ is closed, since $`U(n)/S^1`$ is a compact Hausdorff space.
Consider the family of continuous maps $`\mathrm{\Phi }_h:G\times \mathrm{GL}(n,\mathbb{C})\to \mathrm{GL}(n,\mathbb{C})`$ for each $`h\in H`$ given by $`(g,M)\mapsto \rho (h)M\rho (g^{-1}hg)^{-1}M^{-1}`$. Then the set
$$\underset{h\in H}{\bigcap }\mathrm{\Phi }_h^{-1}(I)=\underset{g\in G}{\bigcup }\{(g,M)\in G\times \mathrm{GL}(n,\mathbb{C})\mid M\in \pi ^{-1}(\rho ^{\prime }(g))\},$$
is the inverse image of the graph of $`\rho ^{\prime }`$ in $`G\times \mathrm{PGL}(n,\mathbb{C})`$ under the canonical projection $`1\times \pi :G\times \mathrm{GL}(n,\mathbb{C})\to G\times \mathrm{PGL}(n,\mathbb{C})`$, which is obviously closed in $`G\times \mathrm{GL}(n,\mathbb{C})`$. Therefore the graph of $`\rho ^{\prime }`$ is also closed in $`G\times \mathrm{PGL}(n,\mathbb{C})`$. ∎
We may say that $`\rho `$ is extendible to $`G`$ at the projective representation level, since $`\rho ^{\prime }(h)=[\rho (h)]`$ for all $`h\in H`$, i.e., $`\rho ^{\prime }=\pi \rho `$ on $`H`$.
Note that any $`G`$-extension $`\stackrel{~}{\rho }`$ of $`\rho `$ (if one exists) is a lifting homomorphism of $`\rho ^{\prime }`$, i.e., $`\rho ^{\prime }=\pi \stackrel{~}{\rho }`$, since $`\rho ^{\prime }(g)=[\stackrel{~}{\rho }(g)]`$ for all $`g\in G`$.
###### Remark.
In case that $`G`$ is finite, choose a transversal $`T`$ containing $`e`$ for $`H`$ in $`G`$ and set $`M_e=I`$, the identity matrix in $`\mathrm{GL}(n,\mathbb{C})`$. For each $`t\in T`$ and $`h\in H`$, the map $`\rho ^{\prime \prime }:G\to \mathrm{GL}(n,\mathbb{C})`$ sending $`th\mapsto M_t\rho (h)`$ is a lifting (not necessarily a homomorphism) of $`\rho ^{\prime }`$, i.e., $`\pi \rho ^{\prime \prime }=\rho ^{\prime }`$, and it determines a cocycle $`\beta `$ in the second group cohomology $`H^2(G/H,\mathbb{C}^{*})`$, which depends only on $`\rho `$. Moreover, $`\rho `$ is extendible to $`G`$ if and only if $`\beta `$ is trivial; see \[Isa76, Theorem 11.7\] for more details.
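Concretely, the cocycle measures the failure of the lifting to be multiplicative; in the standard formulation (stated here for convenience, using the notation above),
$$\rho ^{\prime \prime }(g)\rho ^{\prime \prime }(g^{\prime })=\beta (gH,g^{\prime }H)\rho ^{\prime \prime }(gg^{\prime }),\qquad \beta (gH,g^{\prime }H)\in \mathbb{C}^{*},$$
and $`\rho `$ is extendible to $`G`$ exactly when $`\beta `$ can be rescaled away, i.e., when its class in $`H^2(G/H,\mathbb{C}^{*})`$ vanishes.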
## 3. Extensions when $`G/H`$ is connected abelian
In this section we shall prove that every complex representation of $`H`$ is extendible to $`G`$ when $`G/H`$ is compact, connected, and abelian, that is, a torus. We begin with a general result on extensions of representations in the special case when $`G=SH`$ for some closed subgroup $`S`$ of $`G`$.
###### Lemma 3.1.
Let $`G`$ be a compact topological group such that $`G=SH`$ for a closed subgroup $`S`$ and a closed normal subgroup $`H`$ of $`G`$. Then a complex representation $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ is extendible to $`G`$ if and only if there exists a representation $`\phi :S\to \mathrm{GL}(n,\mathbb{C})`$ such that
1. $`\phi =\rho `$ on $`S\cap H`$, and
2. $`\phi (s)^{-1}\rho (h)\phi (s)=\rho (s^{-1}hs)`$ for all $`s\in S`$ and $`h\in H`$.
###### Proof.
The necessity is obvious so we prove the sufficiency. Define a function $`\stackrel{~}{\rho }:G\to \mathrm{GL}(n,\mathbb{C})`$ by $`\stackrel{~}{\rho }(sh)=\phi (s)\rho (h)`$ for $`s\in S`$ and $`h\in H`$. It is immediate that $`\stackrel{~}{\rho }=\rho `$ on $`H`$. In this proof we shall use the symbols $`s,s^{\prime }`$ and $`h,h^{\prime }`$ for elements in $`S`$ and $`H`$, respectively.
*Claim: $`\stackrel{~}{\rho }`$ is well-defined.* If $`sh=s^{\prime }h^{\prime }\in G`$, then $`(s^{\prime })^{-1}s=h^{\prime }h^{-1}\in S\cap H`$. Then the condition (1) implies that $`\phi (s^{\prime })^{-1}\phi (s)=\rho (h^{\prime })\rho (h)^{-1}`$ and thus $`\stackrel{~}{\rho }(sh)=\phi (s)\rho (h)=\phi (s^{\prime })\rho (h^{\prime })=\stackrel{~}{\rho }(s^{\prime }h^{\prime })`$.
*Claim: $`\stackrel{~}{\rho }`$ is a homomorphism.* For $`sh,s^{\prime }h^{\prime }\in G`$, the condition (2) implies that
$`\stackrel{~}{\rho }((s^{\prime }h^{\prime })(sh))`$ $`=\phi (s^{\prime })\phi (s)\rho (s^{-1}h^{\prime }s)\rho (h)`$
$`=\phi (s^{\prime })\phi (s)\phi (s)^{-1}\rho (h^{\prime })\phi (s)\rho (h)`$
$`=\stackrel{~}{\rho }(s^{\prime }h^{\prime })\stackrel{~}{\rho }(sh),`$
since $`(s^{\prime }h^{\prime })(sh)=(s^{\prime }s)(s^{-1}h^{\prime }s)h`$ and $`s^{-1}h^{\prime }s\in H`$.
*Claim: $`\stackrel{~}{\rho }`$ is continuous.* The map $`p:S\times H\to G`$ sending $`(s,t)\mapsto st`$ is a continuous surjection. Since both $`S`$ and $`H`$ are compact, $`p`$ is a closed map, so that $`G`$ has the quotient topology induced by $`p`$.
Then the continuity of $`\stackrel{~}{\rho }`$ follows from the universal property of the identification map $`p`$, since the composition $`\stackrel{~}{\rho }p:S\times H\to \mathrm{GL}(n,\mathbb{C})`$ sending $`(s,t)\mapsto \phi (s)\rho (t)`$ is continuous. ∎
###### Remark.
In case that $`\rho `$ is irreducible, the condition (2) in Lemma 3.1 implies that $`\phi `$ is a lifting homomorphism of the associated projective representation $`\rho ^{\prime }`$ (defined in the previous section) over $`S`$, i.e., $`\pi \phi =\rho ^{\prime }`$ on $`S`$. On the other hand, any lifting homomorphism $`\phi `$ of $`\rho ^{\prime }`$ over $`S`$ satisfies the condition (2).
Our main concern in this paper is to study extensions of representations when $`G`$ is a compact Lie group and $`H`$ is a closed normal subgroup of $`G`$ such that $`G/H`$ is connected. In this case every complex representation $`\rho `$ of $`H`$ is $`G`$-invariant. Indeed, for each $`g\in G`$, there is a continuous path $`g_t`$ in $`G`$ from $`g`$ to an element $`h\in H`$, since every connected component of $`G`$ contains an element of $`H`$. The path $`g_t`$ then induces a continuous family of conjugate representations $`{}_{}{}^{g_t}\rho `$, so that all the representations $`{}_{}{}^{g_t}\rho `$ are isomorphic (see \[CF64, Lemma 38.1\] for a more general result). In particular, $`{}_{}{}^{g}\rho ={}_{}{}^{g_0}\rho `$ and $`\rho ={}_{}{}^{h}\rho ={}_{}{}^{g_1}\rho `$ are isomorphic.
Let $`\rho `$ be a complex irreducible representation of $`H`$. Since $`\rho `$ is always $`G`$-invariant, the associated projective representation $`\rho ^{\prime }`$ exists by Lemma 2.1. To get a $`G`$-extension of $`\rho `$ we shall first find a closed subgroup $`S`$ of $`G`$ such that $`G=SH`$, and then construct a lifting homomorphism $`\phi `$ of $`\rho ^{\prime }`$ over $`S`$ (so that the condition (2) is satisfied). Finally, modifying $`\phi `$ a little to satisfy the condition (1), we may get a $`G`$-extension of $`\rho `$.
###### Lemma 3.2.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H\cong S^1`$. Then there exists a circle subgroup $`S`$ of $`G`$ such that $`G=SH`$ and $`S\cap H`$ is finite cyclic.
###### Proof.
Let $`G_0`$ denote the identity component of $`G`$. Since the canonical projection $`p:G\to G/H`$ is open and closed, $`p(G_0)`$ is a connected component of $`G/H`$, so that $`p(G_0)=G/H`$. It is well known in Lie group theory \[HM98, Theorem 6.15\] that $`G_0=Z_0G_0^{\prime }`$, where $`Z_0`$ is the identity component of the center of $`G_0`$, which is a torus, and $`G_0^{\prime }`$ is the commutator subgroup of $`G_0`$. Then $`G_0^{\prime }\subseteq G_0\cap H\subseteq H`$ since $`G/H=G_0/(G_0\cap H)`$ is abelian, and thus $`p(Z_0)=G/H`$. Using the isomorphism $`G/H\cong U(1)`$ we may view $`p|_{Z_0}`$ as a one-dimensional unitary representation of the torus $`Z_0`$. It is elementary in representation theory that there exists a circle subgroup $`S\subseteq Z_0`$ such that $`p(S)=G/H`$. Therefore $`G=SH`$ and, furthermore, the proper subgroup $`S\cap H`$ of the circle group $`S`$ is finite cyclic. ∎
###### Lemma 3.3.
Let $`T`$ be a maximal torus in $`U(n)`$. Then the exact sequence $`0\to S^1\to T\to T/S^1\to 0`$ splits. Here $`S^1`$ is identified with the subgroup of $`U(n)`$ consisting of the constant multiples $`zI`$ for $`z\in S^1`$, where $`I`$ denotes the identity matrix.
###### Proof.
Since any maximal torus $`T`$ in $`U(n)`$ is conjugate to the subgroup $`\mathrm{\Delta }(n)\subseteq U(n)`$ of diagonal matrices
$$D(z_1,\dots ,z_n)=\left(\begin{array}{ccc}z_1& & \\ & \ddots & \\ & & z_n\end{array}\right),\qquad z_i\in S^1,$$
it suffices to show that the exact sequence $`0\to S^1\to \mathrm{\Delta }(n)\to \mathrm{\Delta }(n)/S^1\to 0`$ splits. But the splitting is immediate because of the homomorphism $`\mathrm{\Delta }(n)\to S^1`$ mapping a diagonal matrix $`D(z_1,\dots ,z_n)`$ to the constant multiple $`z_1I\in S^1`$. ∎
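An explicit section can be read off from the proof above (a natural choice, in our notation): dividing out the first diagonal entry gives
$$s:\mathrm{\Delta }(n)/S^1\to \mathrm{\Delta }(n),\qquad [D(z_1,\dots ,z_n)]\mapsto D(1,z_2z_1^{-1},\dots ,z_nz_1^{-1}),$$
which is well defined on classes modulo $`S^1`$, is a homomorphism, and satisfies $`\pi s=\mathrm{id}`$.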
###### Proposition 3.4.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H\cong S^1`$. Then every complex representation of $`H`$ is extendible to $`G`$.
###### Proof.
Let $`\rho :H\to \mathrm{GL}(n,\mathbb{C})`$ be a given representation. Since $`H`$ is compact, we may assume that all the images of $`\rho `$ are contained in $`U(n)\subseteq \mathrm{GL}(n,\mathbb{C})`$. Moreover, it is enough to prove the case that $`\rho `$ is irreducible. Since $`G/H\cong S^1`$ is connected, $`\rho `$ is $`G`$-invariant, so that the associated projective representation $`\rho ^{\prime }:G\to U(n)/S^1\subseteq \mathrm{PGL}(n,\mathbb{C})=\mathrm{GL}(n,\mathbb{C})/\mathbb{C}^{*}`$ exists by Lemma 2.1. From Lemma 3.2 we can choose a circle subgroup $`S`$ of $`G`$ such that $`G=SH`$ and $`S\cap H`$ is finite cyclic.
We shall find a lifting homomorphism $`\phi _0:S\to U(n)`$ of $`\rho ^{\prime }`$ over $`S`$. Since $`\rho ^{\prime }(S)`$ is compact, connected, and abelian, it is a torus in $`U(n)/S^1`$. Note that every maximal torus in $`U(n)/S^1`$ has the form $`T/S^1`$ for some maximal torus $`T`$ of $`U(n)`$ \[BtD85, Theorem 2.9, Chapter IV\]. Choose a maximal torus $`T`$ of $`U(n)`$ such that $`\rho ^{\prime }(S)\subseteq T/S^1`$. By Lemma 3.3 the exact sequence $`0\to S^1\to T\stackrel{\pi }{\to }T/S^1\to 0`$ splits, i.e., the canonical projection $`\pi :T\to T/S^1`$ has a continuous section (homomorphism) $`s:T/S^1\to T`$ such that the composition $`\pi s`$ is the identity map of $`T/S^1`$. Then $`\phi _0=s\rho ^{\prime }|_S`$ is a desired lifting homomorphism of $`\rho ^{\prime }`$ over $`S`$.
Let $`t_0`$ denote a generator of the finite cyclic group $`S\cap H`$. Since $`\pi \phi _0=\rho ^{\prime }=\pi \rho `$ on $`S\cap H`$, $`\phi _0(t_0)=\xi \rho (t_0)`$ for some constant $`\xi \in S^1\subseteq \mathbb{C}^{*}`$. Note that $`\xi `$ is an $`n`$-th root of unity, where $`n`$ is the order of $`S\cap H`$. So it is possible to choose a one-dimensional unitary representation $`\tau `$ of the circle group $`S`$ such that $`\tau (t_0)=\xi ^{-1}`$. Then the unitary representation $`\phi =\tau \phi _0`$ satisfies the conditions (1) and (2) in Lemma 3.1. ∎
###### Corollary 3.5.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is connected and abelian. Then every complex representation of $`H`$ is extendible to $`G`$.
###### Proof.
Since $`G/H`$ is compact, connected, and abelian, it is isomorphic to a torus. So we have a finite chain of subgroups
$$H=H_0\subset H_1\subset \cdots \subset H_{n-1}\subset H_n=G$$
such that $`H_i`$ is normal in $`H_{i+1}`$ and $`H_{i+1}/H_i\cong S^1`$. Applying Proposition 3.4 inductively, any representation of $`H`$ is extendible to $`G`$. ∎
## 4. Extensions when $`G/H`$ is connected
In this section we consider the general case, so $`G/H`$ will be assumed to be connected (not necessarily abelian). In this case the commutator subgroup $`(G/H)^{\prime }=G^{\prime }H/H`$ of $`G/H`$ is semisimple and connected \[HM98, Theorem 6.18\]. The following proposition reduces the extension problem to the case that $`G/H`$ is semisimple and connected.
###### Proposition 4.1.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup of $`G`$ such that $`G/H`$ is connected. A complex representation of $`H`$ is extendible to $`G`$ if and only if it is extendible to $`G^{\prime }H`$.
###### Proof.
The necessity is obvious, and the sufficiency follows from Corollary 3.5, since the factor group $`G/G^{\prime }H\cong (G/H)/(G^{\prime }H/H)=(G/H)/(G/H)^{\prime }`$ is compact, connected, and abelian, that is, a torus. ∎
In the case that $`G/H`$ is semisimple connected, the following result is well known in Lie group theory (see for instance, \[HM98, Proposition 6.14\]).
###### Lemma 4.2.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is semisimple and connected. Then there is a semisimple connected closed normal subgroup $`S`$ in $`G`$ such that $`G=SH`$ and the map $`S\times H\to G`$ sending $`(s,h)\mapsto sh`$ is a homomorphism with a discrete kernel isomorphic to $`S\cap H`$. ∎
###### Remark.
Proposition 6.14 in \[HM98\] deals with the case when $`G`$ is connected. However, the same proof holds even if $`G`$ is not connected, since $`G/H`$ is connected. Moreover, one finds in the course of that proof that $`S`$ is semisimple and connected.
The following result implies that, when $`G/H`$ is semisimple and connected, the existence of a $`G`$-extension is completely determined by the restriction of the given representation to $`S\cap H`$.
###### Proposition 4.3.
Under the hypotheses of Lemma 4.2, a complex irreducible representation $`\rho `$ of $`H`$ is extendible to $`G`$ if and only if $`\rho `$ is trivial on $`S\cap H`$, i.e., $`\rho (g)=I`$, the identity matrix, for all $`g\in S\cap H`$.
###### Proof.
It is immediate that $`S`$ commutes with $`H`$, since the map $`S\times H\to G`$ sending $`(s,h)\mapsto sh`$ is a homomorphism. To prove the sufficiency, it is enough to choose the trivial representation $`\phi `$ of $`S`$, i.e., $`\phi (s)=I`$ for all $`s\in S`$. Since $`S`$ commutes with $`H`$, the two conditions (1) and (2) in Lemma 3.1 are satisfied immediately.
On the other hand, suppose $`\stackrel{~}{\rho }`$ is a $`G`$-extension of $`\rho `$. Since $`S`$ commutes with $`H`$, we have $`\stackrel{~}{\rho }(s)^{-1}\rho (h)\stackrel{~}{\rho }(s)=\rho (h)`$ for all $`s\in S`$ and $`h\in H`$. Then Schur's lemma implies that $`\stackrel{~}{\rho }(s)`$ is a scalar matrix for each $`s\in S`$, so we may view the restriction $`\stackrel{~}{\rho }|_S`$ as a one-dimensional complex representation of $`S`$. Since semisimple Lie groups have no nontrivial abelian factor groups, the trivial representation is the unique one-dimensional complex representation of $`S`$. Therefore $`\stackrel{~}{\rho }`$ is trivial on $`S`$ and, in particular, on $`S\cap H`$. ∎
###### Remark.
Note that the number of $`G`$-extensions (if any exist) is exactly one, since every $`G`$-extension must be trivial on $`S`$.
###### Corollary 4.4.
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is semisimple and connected. Every complex representation of $`H`$ is extendible to $`G`$ if and only if $`H`$ is a direct summand of $`G`$, i.e., $`G\cong S\times H`$ for some subgroup $`S`$ of $`G`$.
###### Proof.
The sufficiency is obvious so we prove the necessity. If $`H`$ is not a direct summand of $`G`$, then $`S\cap H`$ in Lemma 4.2 contains a nontrivial element, say $`s_0`$. Since a faithful representation of $`H`$ always exists \[BtD85, Theorem 4.1, Chapter III\], we can choose an irreducible sub-representation $`\rho `$ of it such that $`\rho (s_0)`$ is not trivial. Then $`\rho `$ does not extend to a representation of $`G`$ by Proposition 4.3. ∎
We shall now prove the main result of this paper. For the second statement of Theorem 1.1 we need the following lemma, which relates the subgroup $`S\cap H`$ in Lemma 4.2 to the fundamental group of $`G/H`$.
###### Lemma 4.5.
Under the hypotheses of Lemma 4.2, there exists a surjective homomorphism $`\pi _1(G/H)\to S\cap H`$.
###### Proof.
Since $`S/(S\cap H)\cong G/H`$, the restriction of the canonical projection $`p:G\to G/H`$ to $`S`$ is surjective and its kernel $`S\cap H`$ is discrete. It follows that $`p|_S`$ is a covering homomorphism onto $`G/H`$. From the uniqueness of the universal covering homomorphism $`\stackrel{~}{q}:\stackrel{~}{G/H}\to G/H`$, there exists a covering homomorphism $`q:\stackrel{~}{G/H}\to S`$ such that $`p|_Sq=\stackrel{~}{q}`$, i.e., the evident triangle of covering homomorphisms commutes (compare with \[HM98, Proposition 9.12\]). Since $`S\cap H=\mathrm{ker}p|_S=q(\mathrm{ker}\stackrel{~}{q})`$ and $`\mathrm{ker}\stackrel{~}{q}`$ is isomorphic to $`\pi _1(G/H)`$, we have a surjective homomorphism of $`\pi _1(G/H)`$ onto $`S\cap H`$. ∎
###### Theorem 1.1 (rephrased).
Let $`G`$ be a compact Lie group and $`H`$ a closed normal subgroup such that $`G/H`$ is connected. Then every complex representation of $`H`$ is extendible to $`G`$ if and only if $`H`$ is a direct summand of $`G^{\prime }H`$.
###### Proof.
Since the factor group $`G^{\prime }H/H=(G/H)^{\prime }`$ is semisimple and connected, the theorem follows immediately from Proposition 4.1 and Corollary 4.4. ∎
###### Proof of Corollary 1.2.
We claim that $`\mathrm{Tor}(\pi _1(G/H))`$, the torsion subgroup of $`\pi _1(G/H)`$, is isomorphic to $`\pi _1((G/H)^{\prime })`$. Denote by $`T`$ the torus $`(G/H)/(G/H)^{\prime }`$. Then the homotopy exact sequence of the fibration $`(G/H)^{\prime }\to G/H\to T`$ implies that $`\pi _1(G/H)\cong \pi _1((G/H)^{\prime })\times \pi _1(T)`$, since the second homotopy group of a compact Lie group vanishes; see \[BtD85, Proposition 7.5, Chapter V\]. Since $`(G/H)^{\prime }`$ is semisimple, $`\pi _1((G/H)^{\prime })`$ is finite \[BtD85, Remark 7.13, Chapter V\], so that it is isomorphic to $`\mathrm{Tor}(\pi _1(G/H))`$ as we claimed. Therefore, the condition that $`\pi _1(G/H)`$ be torsion free is equivalent to $`(G/H)^{\prime }`$ being simply connected.
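For reference, the relevant segment of the homotopy exact sequence of this fibration reads
$$\pi _2(T)\to \pi _1((G/H)^{\prime })\to \pi _1(G/H)\to \pi _1(T)\to \pi _0((G/H)^{\prime }),$$
with $`\pi _2(T)=0`$ and $`\pi _0((G/H)^{\prime })=0`$; since $`\pi _1(T)`$ is free abelian, the resulting short exact sequence splits.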
By Lemmas 4.2 and 4.5, $`G^{\prime }H=SH`$ for some semisimple connected closed normal subgroup $`S`$ in $`G^{\prime }H`$, and there is a surjective homomorphism $`\pi _1(G^{\prime }H/H)=\pi _1((G/H)^{\prime })\to S\cap H`$. Therefore, if $`(G/H)^{\prime }`$ is simply connected, then $`\pi _1((G/H)^{\prime })=\{e\}`$ and hence $`S\cap H=\{e\}`$, so that $`H`$ is a direct summand of $`G^{\prime }H`$. ∎
# The Neutral Hydrogen Distribution in Merging Galaxies: Differences between Stellar and Gaseous Tidal Morphologies
## 1. Introduction
Nearly 30 years ago, Toomre & Toomre (1972) elegantly demonstrated that the tails and bridges emanating from many peculiar galaxies may arise kinematically from dynamically cold disk material torn off of the outer regions of galaxies experiencing strong gravitational interactions. Early spectroscopic studies of gas within the tidal tails of merging galaxies provided observational support for this hypothesis by showing the tails to have the kinematics expected for a gravitational origin (e.g. Stockton 1974a,b). H i mapping studies are particularly well suited to such studies, as the tidally ejected disk material is usually rich in neutral hydrogen and can be traced to very large distances from the merging systems (e.g. van der Hulst 1979; Simkin et al. 1986; Appleton et al. 1981, 1987; Yun et al. 1994). Once mapped, the tidal kinematics can be used either alone, to disentangle the approximate spin geometry of the encounter (Stockton 1974a,b; Mihos et al. 1993; Hibbard & van Gorkom 1996, hereafter HvG96; Mihos & Bothun 1998), or in concert with detailed numerical models, to constrain the full encounter geometry (e.g. Combes 1978; Combes et al. 1988; Yun 1992, 1997; Hibbard & Mihos 1995; Gardiner & Noguchi 1996).
However, not all systems can be easily explained by purely gravitational models such as those used by Toomre & Toomre. For example, gravitational forces by themselves should not lead to differences between stellar and gaseous tidal components. Numerical models which include hydrodynamical effects do predict a decoupling of the dissipative gaseous and non-dissipative stellar components (e.g. Noguchi 1988; Barnes & Hernquist 1991, 1996; Weil & Hernquist 1993; Mihos & Hernquist 1996; Appleton, Charmandaris & Struck 1996; Struck 1997), but only in the inner regions or along bridges where gas orbits may physically intersect (see e.g. Fig. 4 of Mihos & Hernquist 1996). Decoupling of the gaseous and stellar components within the tidal tails is not expected.
Nonetheless, differences between the optical and gaseous tidal morphologies have been observed. These differences can be subtle, with the peak optical and H i surface brightnesses simply displaced by a few kpc within the tails (e.g. NGC 4747, Wevers et al. 1984; NGC 2782 Smith 1994; NGC 7714/4 Smith et al. 1997; Arp 295A, NGC 4676B, and NGC 520 Southern tail, Hibbard 1995, HvG96), or they can be extreme, with extensive H i tidal features apparently decoupled from, or even anti-correlated with, the optical tidal features. It is this latter category of objects that we wish to address in this paper. In particular, we address the morphology of the tidal gas and starlight in the merging systems NGC 520 (Arp 157), Arp 220, and Arp 299 (NGC 3690).
The three systems were observed as part of our on-going studies on the tidal morphologies of optically and IR selected mergers (Hibbard 1995, HvG96, Hibbard & Yun 1996 and in prep.). These studies involve moderate resolution ($`\theta _{FWHM}\sim 15^{\prime \prime }`$) VLA H i spectral-line mapping observations and deep optical $`B`$ and $`R`$ broad-band imaging with large format CCDs using the KPNO 0.9m (NGC 520) and the University of Hawaii 88″ telescopes.
## 2. Observed Stellar and Gaseous Tidal Morphologies
Figures 1–3 show the optical and atomic gas morphologies of each of the three systems discussed here. For NGC 520 and Arp 220 only the inner regions are shown in order to highlight the differences we wish to address. Panel (a) presents a greyscale representation of the optical morphology of each system with features of interest labeled. Panel (b) shows the H i distribution. Contours indicate the distribution of H i mapped at low resolution ($`\theta _{FWHM}\sim 30^{\prime \prime }`$), whereas the greyscales show the H i mapped at higher resolution ($`\theta _{FWHM}\sim 15^{\prime \prime }`$). The former is sensitive to diffuse low column density ($`N_{HI}`$) neutral hydrogen, while the latter delineates the distribution of the higher column density H i. The central region of each H i map appears to have a hole (indicated by the dotted contours), which is due to H i absorption against the radio continuum associated with the disk-wide starbursts taking place in each galaxy (see Condon et al. 1990). In panel (c), we again present the optical morphology in greyscales, and the higher resolution H i distribution as contours. Finally, panel (d) presents a smoothed, star-subtracted $`R`$-band image contoured upon a greyscale representation of the high-resolution H i map.
In the final panels of Figures 1–3, dashed lines labeled "Slice" indicate the locations from which H i and optical intensity profiles have been extracted; these profiles are plotted in Figure 4. Arrows labeled "Superwind" indicate the position angle (P.A.) of H$`\alpha `$ or soft X-ray plumes, believed to arise from a starburst-driven outflow or galactic superwind in each system. Such outflows are common in other IR-bright starbursts (e.g. Heckman, Armus & Miley 1987, 1990, hereafter HAM90; Armus, Heckman, & Miley 1990; Lehnert & Heckman 1996), and are thought to arise when the mechanical energy from massive stars and supernovae in the central starburst is sufficient to drive the dense interstellar medium outward along the minor axis (e.g. Chevalier & Clegg 1985; Joseph & Wright 1985; Suchkov et al. 1994). Often, such starbursts are powerful enough to drive a freely expanding wind of hot plasma completely out of the galaxy ("blowout"; HAM90).
In the following subsections we briefly discuss what is known about the dynamical state of each system, and describe the differences between the stellar and gaseous tidal morphologies. Throughout this paper distances and other physical properties are calculated assuming $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$.
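The angular-to-physical scale conversions quoted below (e.g. 1″ = 145 pc at $`D`$ = 30 Mpc) follow from the small-angle relation; a minimal sketch:

```python
import math

ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def pc_per_arcsec(distance_mpc):
    """Projected physical scale [pc] per arcsecond at a given distance."""
    return distance_mpc * 1e6 * ARCSEC_IN_RAD

# distances adopted in the text (for H0 = 75 km/s/Mpc)
for name, d_mpc in [("NGC 520", 30.0), ("Arp 299", 48.0), ("Arp 220", 79.0)]:
    print(f"{name}: 1 arcsec = {pc_per_arcsec(d_mpc):.0f} pc")
# -> 145, 233 and 383 pc; 0.9 arcsec at 79 Mpc gives the 345 pc quoted for Arp 220
```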
### 2.1. NGC 520
NGC 520 (Arp 157, UGC 966) is an intermediate-stage merger, with the two progenitor nuclei separated by 40″ (5.8 kpc for $`D`$ = 30 Mpc, 1″ = 145 pc) and embedded within a common luminous envelope (HvG96 and references therein; see Fig. 1a). There is a bright optical tidal tail stretching 24 kpc to the southeast (henceforth referred to as the S Tail) which bends sharply eastward and connects onto a broad optical plume (following the naming convention of Schombert et al. 1990, we refer to tidal features with flat intensity profiles as plumes and to ones with Gaussian profiles as tails). This plume continues to the north and west for 60 kpc before it appears to connect onto extended light surrounding the dwarf galaxy UGC 957 (outside the region plotted in Fig. 1a; see Stockton & Bertola 1980).
The primary nucleus (the easternmost nucleus in Fig. 1a) possesses a massive ($`5\times 10^9M_{\odot }`$) 1 kpc-scale rotating molecular gas disk (Sanders et al. 1988; Yun & Hibbard 1999). An H i disk is kinematically centered on this molecular disk, and extends to a radius of $`\sim `$ 20 kpc (labeled "Inner Disk" in Fig. 1b). Beyond this there is an intermediate ring of H i with a mean radius of $`\sim `$ 30 kpc (i.e., the material which contains the feature labeled "N Clump" in Fig. 1b), and a nearly complete outer ring of H i with a mean radius of 60 kpc, which extends smoothly through the dwarf galaxy UGC 957 (only partially seen in Fig. 1b). The H i kinematics show that UGC 957 is kinematically associated with this outer gas ring, although it may lie slightly above or below it; rudimentary numerical modeling has shown that it is unlikely that UGC 957 is responsible for the main optical features of the NGC 520 system (Stanford & Balcells 1991), and it is unclear whether this system is an interloper or was recently assembled from the surrounding ring of gas. There is a kinematic and morphological continuity between the molecular gas disk, the inner H i disk, and the outer H i ring (Yun & Hibbard 1999), which suggests that all of this material is associated with the primary nucleus.
The observations suggest that the NGC 520 interaction involved a prograde-retrograde or prograde-polar spin geometry: the linear morphology of the optical tail-to-plume system is typical of features produced by a disk experiencing a prograde encounter (i.e., the disk rotates in the same direction as the merging systems orbit each other). The disk-like morphology and rotational kinematics of the large-scale H i and the lack of any aligned linear tidal features, on the other hand, are more typical of polar or retrograde encounter geometries (i.e., disk rotation either perpendicular to or opposite the direction of orbital motion). Such encounters fail to raise significant tails (Toomre & Toomre 1972; Barnes 1988), and much of the disk material remains close to its original rotational plane.
Neither the intermediate nor the outer H i ring has an optical counterpart ($`\mu _R>27`$ mag arcsec$`^{-2}`$). Despite smooth rotational kinematics, this outer H i has a very clumpy and irregular morphology, with notable gaps near the optical minor axis (labeled "NE gap" and "SW gap" in Fig. 1c). This figure shows that the outer H i and optical structures are anti-correlated, with the peak H i column densities (associated with the N clump) located to one side of the optical plume. In Fig. 1d the H i clump appears to be bounded on three sides by the optical contours. In Fig. 4a we present an intensity profile at the location indicated by the dotted line in Fig. 1d, showing that the gas column density increases precisely where the optical light decreases.
While the H i features exhibit a clear rotational kinematic signature, the well-defined edges and nearly linear structure of the optical plume suggest that its constituent stars are moving predominantly along the plume, rather than in the plane of the sky: any substantial differential rotation would increase the width of the plume and result in a more disk-like morphology. We therefore conclude that the gas rings and the optical plume are both morphologically and kinematically distinct entities. This suggests that the observed gas/star anti-correlation is either transient (and fortuitous) or actively maintained by some process.
A deep H$`\alpha `$ image of NGC 520 shows plumes of ionized gas emerging both north and south along the minor axis and reaching a projected height of 3 kpc from the nucleus (HvG96). It has been suggested that this plume represents a starburst-driven outflow of ionized gas (HvG96, Norman et al. 1996). The position angle of this plume is indicated by an arrow in Fig. 1d (P.A. = 25°). This direction corresponds to the most dramatic H i/optical anti-correlations mentioned above, and in the following we suggest that this region of the optical tail actually lies directly in the path of the out-flowing wind.
### 2.2. Arp 299
Arp 299 (NGC 3690/IC 694, UGC 6471/2, Mrk 171, VV 118) is also an intermediate-stage merger, with two disk systems (IC 694 to the east, NGC 3690 to the west; see Figure 2a) in close contact but with their respective nuclei separated by 20″ (4.7 kpc for $`D`$ = 48 Mpc, 1″ = 233 pc). A long, narrow, faint ($`\mu _R\sim 26`$ mag arcsec$`^{-2}`$) tidal tail stretches to the north to a radius of $`\sim `$ 125 kpc. H i imaging of this system by HY99 (see also Nordgren et al. 1997) shows a rotating gas-rich disk within the inner regions, and a pair of parallel H i filaments extending to the north. From the H i morphology and kinematics, HY99 deduce that Arp 299 is the result of a prograde-retrograde or prograde-polar encounter between two late-type spirals, with the inner H i disk associated with the retrograde disk of IC 694, and the northern optical tail and tidal H i filaments ejected by the prograde disk of NGC 3690.
The parallel-filament or bifurcated morphology of the tidal H i is quite unlike that of the optical tail. The inner H i filament (so labeled in Fig. 2b) is of lower characteristic column density ($`N_{HI}\sim 8\times 10^{19}\mathrm{cm}^{-2}`$) and is associated with the low surface brightness stellar tail (Fig. 2c). The gas in this filament has a more irregular morphology than that in the outer filament (e.g., the "gap" and "knot" in Fig. 2b), and much of this material is detectable only after a substantial smoothing of the data (Fig. 2b). The outer filament is characterized by a higher H i column density ($`N_{HI}\sim 1.5\times 10^{20}\mathrm{cm}^{-2}`$) but has no optical counterpart ($`\mu _R>27.5`$ mag arcsec$`^{-2}`$). This filament is displaced by approximately 20 kpc (in projection) to the west of the inner filament for much of its length, after which the filaments merge together in a feature labeled the "N Clump" in Fig. 2b. The parallel filaments have nearly identical kinematics along their entire lengths, and join smoothly at the N Clump. This implies that these features form a single physical structure.
Based on preliminary numerical simulations, HY99 suggest that a bifurcated morphology can arise quite naturally during tail formation. This occurs when the optically faint, gas-rich outer regions of the progenitor disk are projected adjacent to optically brighter regions coming from smaller initial radii (see also Mihos 2000); to produce filaments as well separated as those found in Arp 299, HY99 suggest that the bifurcation is exacerbated by a pre-existing gaseous warp in the progenitor disk. However, this scenario does not explain why the inner filament, presumably drawn from optically bright but still gas-rich material within the optical disk of the progenitor, should lack accompanying H i.
As in NGC 520, there is an anti-correlation between the optical and gaseous column densities across the N clump, with the highest gas column densities (2–3$`\times 10^{20}\mathrm{cm}^{-2}`$) located on either side of the optical tail. This is illustrated in Fig. 4b, where we plot a profile along the position indicated by the dotted line in Fig. 2d. The optical tail emerges above the N clump, and appears to curve exactly around the northern edge of the N clump (labeled "hook" in Fig. 2c). Also labeled in Fig. 2c are the three regions with anomalously high H i velocity dispersions noted by HY99 ($`\sigma _{HI}\sim `$ 13–20 km s$`^{-1}`$ compared with $`\sigma _{HI}\sim `$ 7–10 km s$`^{-1}`$ for the remainder of the tail; see Fig. 7d of HY99); we will refer to these regions in the discussion (§3.4.2).
Within the main body of Arp 299, vigorous star formation is taking place, with an inferred star formation rate (SFR) of 50 $`M_{\odot }\mathrm{yr}^{-1}`$ (HAM90). Recent X-ray observations reported by Heckman et al. (1999) show evidence for hot gas emerging from the inner regions and reaching 25 kpc to the north, which the authors interpret as evidence for a hot, expanding superwind. The position angle of this feature (P.A. = 25°) is indicated by the arrow in Fig. 2d, and points towards the inner tidal filament and N clump.
### 2.3. Arp 220
Arp 220 (UGC 9913, IC 4453/4) is the prototypical ultraluminous infrared galaxy, with $`L_{8-1000\mu m}=1.5\times 10^{12}L_{\odot }`$ (Soifer et al. 1984). It is an advanced merger system with two radio and infrared nuclei separated by 0.9″ (345 pc for $`D`$ = 79 Mpc), and a bright optical plume extending 35 kpc to the NW (Fig. 3a). Each of the two nuclei has its own compact molecular disk. The two nuclear disks are in turn embedded in one larger 1 kpc-scale molecular gas disk (see Scoville, Yun, & Bryant 1997, Downes & Solomon 1998, Sakamoto et al. 1999 and references therein). The spin axis of the eastern nucleus is aligned with that of the kpc-scale disk, while the western nucleus rotates in the opposite direction. These observations suggest that Arp 220 is the product of a prograde-retrograde merger of two gas-rich spiral galaxies (Scoville, Yun, & Bryant 1997).
An irregular disk-like distribution of neutral hydrogen extends over a 100 kpc diameter region surrounding the optical galaxy (Yun & Hibbard 1999a). The overall H i kinematics indicates that this material has a component of rotation in the same sense as that for the eastern nucleus and the molecular gas disk, and opposite the rotation of the western nucleus. This suggests that the H i disk and eastern nucleus originated from the retrograde progenitor, while the the western nucleus and NW optical plume (Fig. 3a) arose from the prograde progenitor.
Because of the vigorous star formation occurring within Arp 220 (SFR = 340 $`M_{\odot }\mathrm{yr}^{-1}`$, HAM90), much of the H i within the optical body of the system is seen only in absorption against the bright radio continuum emission from the central starburst. Beyond this, the H i has high column densities ($`N_{HI}\sim 1.5\times 10^{20}\mathrm{cm}^{-2}`$), but only to the NE and SW. Most notably, there are local H i minima to the NW and SE (see gaps in Fig. 3b). Comparison of the H i map with the optical image (Fig. 3c) shows that the NW gap occurs exactly at the location of the optical tail. The relationship between the optical and H i surface brightness levels across this feature is illustrated by an intensity profile measured along the dotted line shown in Fig. 3d, and plotted in Fig. 4c. As in NGC 520 and Arp 299, the gas column density increases precisely where the optical light from the tail begins to fall off. There is a similar H i gap to the SE, but in this case there is no corresponding optical feature associated with it. At even larger radii, the H i is more diffuse ($`N_{HI}\sim 3\times 10^{19}\mathrm{cm}^{-2}`$) and has no optical counterpart down to $`\mu _R`$ = 27 mag arcsec$`^{-2}`$.
An X-ray image obtained with the ROSAT HRI camera (Heckman et al. 1996) reveals an extended central source that is elongated along P.A. = 135° (indicated by arrows in Fig. 3). A deep H$`\alpha `$ \+ \[N ii\] image of Arp 220 reveals ionized gas with a bright linear morphology at this same position angle (Heckman, Armus & Miley 1987). The optical emission line kinematics are suggestive of a bipolar outflow (HAM90), and the physical properties of the warm and hot gas strongly support the superwind scenario for this emission (HAM90, Heckman et al. 1996). As in NGC 520 and Arp 299, the position angle of the putative expanding superwind is in the same direction as the H i minima, i.e. NW and SE.
## 3. Discussion
Figures 1–4 provide evidence for both small- and large-scale differences in the distributions of the tidal gas and stars in these three systems. The small-scale differences are of the type illustrated in Fig. 4, whereby the gas column density falls off just as the optical surface brightness increases at various edges of the tidal features. In NGC 520 and Arp 220, the large-scale differences are between the outer H i rings and disks (which have no associated starlight) and the optical tails and plumes (which have no associated H i). Although these features are kinematically decoupled at present (with the gas rings and disks predominantly in rotation and the optical tails and plumes predominantly in expansion), it is possible that they had a common origin and have subsequently decoupled and evolved separately. In Arp 299, on the other hand, the H i filaments and optical tail have similar morphologies and continuous kinematics and are therefore part of the same kinematic structure. In this system we believe the bifurcated tidal morphology results from a progenitor with a warped gaseous disk (§2.2 & HY99), and we seek to understand why the inner filament is gas-poor, given that its progenitor was obviously gas-rich.
In this section we investigate a number of possible explanations for these observations. In particular, we discuss the possible role played by: differences in the initial radial distribution of the gas and stars (§3.1), dust obscuration (§3.2), kinematic decoupling of the gas due to collisions within the developing tidal tail (§3.3), ram pressure stripping of the gas, either by a halo or by a galactic scale wind (§3.4), and photoionization of the gas, either by the starburst or by local sources (§3.5).
### 3.1. Differences in the Radial Distribution of Gas and Starlight
In interacting systems the H i is often more widely distributed than the optical light (see, e.g. the H i map of the M81 system by Yun et al. 1994; see also van der Hulst 1979; Appleton, Davies & Stephenson 1981). These gas-rich extensions frequently have no associated starlight down to very faint limits (e.g. Simkin et al. 1986; HvG96). A natural explanation is that such features arise from the H i-rich but optically faint outer radii of the progenitor disks. The relatively short lifetimes of luminous stars and the larger velocity dispersions of less luminous stars, especially with respect to the gas, will further dilute the luminous content of this material, and the H i-to-light ratio of the resulting tidal features will increase with time (Hibbard et al. 1994). Gaseous tidal extensions with very little detectable starlight would seem to be the natural consequence. The outer H i rings in NGC 520 and Arp 220 and the gas-rich outer filament in Arp 299 are all likely to have arisen in this manner.
However, gas-rich outer disks cannot give rise to gas-poor optical structures, such as the optical plume in NGC 520, the optical tail in Arp 220, or the inner filament in Arp 299. Since these features presumably arise from optically brighter regions of the progenitor disks (regions which are characterized by H i column densities higher than those of the outer disks), one would have expected a priori that these features should also be gas-rich. It is possible that the disks which gave rise to the plumes in NGC 520 and Arp 220 were gas-poor at all radii. However, this would not account for the discontinuities in the outer gaseous features that project near these optical features (i.e., the NE gap in NGC 520 and the NW gap in Arp 220). We therefore seek other explanations for these structures.
### 3.2. Effects of Dust Obscuration
The correspondence between rising gas column density and falling optical surface brightness (Fig. 4) suggests that dust associated with the cold gas may attenuate the optical light. To address this possibility, we calculate the expected extinction in the $`R`$-band for a given column density of H i. We adopt the Milky Way dust-to-gas ratio determined by Bohlin, Savage, & Drake (1978; $`N_{HI}/E(B-V)=4.8\times 10^{21}\mathrm{cm}^{-2}`$ mag$`^{-1}`$), which is supported by direct imaging of the cold dust in the outer regions of eight disk galaxies (Alton et al. 1998). This is combined with the Galactic extinction law of O'Donnell (1994; $`A_R/E(B-V)=2.673`$, from Table 6 of Schlegel et al. 1998) to yield an expected extinction in the $`R`$-band of $`A_R=\frac{N_{HI}}{1.8\times 10^{21}\mathrm{cm}^{-2}}`$ mag.
From Fig. 4, the peak H i column densities on either side of the optical features are $`\sim 3\times 10^{20}\mathrm{cm}^{-2}`$. The predicted extinction is therefore of order 0.2 mag in the $`R`$-band. From Fig. 4 we see that the mean light level drops by about 1.0 mag arcsec$`^{-2}`$ for Arp 299 (from 26.5 mag arcsec$`^{-2}`$ to below 27.5 mag arcsec$`^{-2}`$), about 1.5 mag arcsec$`^{-2}`$ for NGC 520 (from 25 mag arcsec$`^{-2}`$ to below 26.5 mag arcsec$`^{-2}`$), and by about 2.5 mag arcsec$`^{-2}`$ for Arp 220 (from 23.5 mag arcsec$`^{-2}`$ to below 26 mag arcsec$`^{-2}`$) along the extracted slices. To produce this amount of extinction, the tidal gas would have to have a dust-to-gas ratio that is ten times that in the Milky Way.
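To make the arithmetic explicit, the sketch below evaluates the adopted conversion at the peak tidal column density, and the dust-to-gas enhancement that would be needed to reproduce the observed dimming (illustrative only; the input numbers are those quoted in this section):

```python
def a_r_mag(n_hi_cm2, dust_factor=1.0):
    """R-band extinction A_R = N_HI / 1.8e21 mag (Galactic dust-to-gas),
    optionally scaled by a dust-to-gas enhancement factor."""
    return dust_factor * n_hi_cm2 / 1.8e21

n_hi_peak = 3e20   # peak column density on either side of the tails [cm^-2]
print(f"A_R ~ {a_r_mag(n_hi_peak):.2f} mag")  # ~0.17 mag for Galactic dust

# enhancement needed to explain the observed surface-brightness drops
for system, drop_mag in [("Arp 299", 1.0), ("NGC 520", 1.5), ("Arp 220", 2.5)]:
    print(f"{system}: {drop_mag} mag needs ~{drop_mag / a_r_mag(n_hi_peak):.0f}x "
          "Galactic dust-to-gas")
# -> factors of ~6-15, i.e. of order ten, as stated above
```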
The above analysis assumes that the measured neutral gas column density represents the total gas column density. However, the sharp drop in H i column density observed in many tidal features (HvG96, Hibbard & Yun in preparation) suggests that the tidal gas may be highly ionized by the intergalactic UV field (see also references in §3.5). Since large dust grains should survive in the presence of this ionizing radiation, the opacity per atom of neutral hydrogen ($`A_R/N_{HI}`$) should increase in regions of increasing ionization fraction. Observations of NGC 5018 (Hilker & Kissler-Patig 1996), in which blue globular clusters are absent in a region underlying an associated H i tidal stream, may support a high $`A_R/N_{HI}`$ ratio for tidal gas. Nevertheless, the lack of obvious reddening of the $`B-R`$ colors along the slices in Arp 299 and NGC 520 (Hibbard 1995; HY99) argues against a much higher extinction in these regions.
We conclude that extinction might be important for shaping the morphology of the faintest optical features (e.g., the "Hook" and the end of the optical tail of Arp 299, Fig. 2, which has $`\mu _R`$ near the detection limit of 28 mag arcsec$`^{-2}`$), but is insufficient to greatly affect the overall tidal morphology. However, an anomalously high tidal dust-to-gas ratio remains a possibility. This question could be resolved by the direct detection of cold dust in tidal tails with sub-millimeter imaging.
### 3.3. Collisions within Developing Tidal Tails
During the tail formation process, the leading-edge of the tail is decelerated with respect to the center of mass of the progenitors, while the trailing-edge is accelerated, and the two edges move towards each other (see Toomre & Toomre 1972, Fig. 3). Eventually, the two edges appear to cross, forming a caustic (Wallin 1990; Struck-Marcell 1990). In most cases, the caustics are simply due to projection effects. Only for low-inclination encounters will these crossings correspond to physical density enhancements, and numerical experiments suggest that in these cases the density will increase by factors of a few (Wallin 1990). It has been suggested that collisions experienced by the crossing tidal streams in such low-inclination encounters may lead to a separation between the dissipational (gas) and non-dissipational (stellar) tidal components (Wevers 1984; Smith et al. 1997).
The present data do not allow us to directly address this question, since the kinematic decoupling presumably took place long ago. However, several arguments lead us to suspect that this collisional process is not important in tidal tails: (1) large scale decoupling between the stellar and gaseous tidal morphologies is not seen in many systems known to have experienced low-inclination encounters (e.g. NGC 4038/9, "The Antennae", Hibbard et al. in preparation; NGC 7252 "Atoms for Peace", Hibbard et al. 1994; NGC 4676 "The Mice", HvG96); (2) the broad plume-like morphologies of the optical features in Arp 220 and NGC 520 suggest rather inclined encounters (§§2.1, 2.3), in which case wide-spread collisions are not expected; and (3) the parallel filaments in the Arp 299 tail have identical kinematics, whereas one would expect kinematic differences between the stripped and unstripped material.
Therefore while gaseous collisions and dissipation might result in differences between gas and stars during tidal development (particularly along tidal bridges, where the gas streamlines are converging; e.g. Struck 1997; NGC 7714/5 Smith et al. 1997; Arp 295 HvG96), we believe that they are not likely to lead to a wide-spread decoupling in the outer regions.
### 3.4. Ram Pressure Stripping
If the tidal features pass through a diffuse warm or hot medium, or if such a medium passes through the tidal features, it is possible that the tidal gas exchanges energy and momentum with this medium due to collisions. Such effects have been proposed to explain the stripping of the cool interstellar medium from spiral galaxies as they move through the hot IGM in clusters (Gunn & Gott 1972), and is referred to as Ram Pressure Stripping (RPS). Tidal features should be relatively easily stripped, as they lack the natural restoring forces present in disk galaxies, except possibly at a small number of self-gravitating regions. In this case, the momentum imparted due to ram pressure is simply added to or subtracted from the momentum of the gaseous tidal features, and a separation of stellar and gaseous components might be expected.
In the next two subsections, we investigate two possible sources for ram pressure: an extended halo associated with the progenitors (§3.4.1); and an expanding starburst driven superwind (§3.4.2).
#### 3.4.1 RPS from Extended Halo Gas
Our own galaxy is known to have an extended halo of hot gas (Pietz et al. 1998). The existence of similar halos around external galaxies has been inferred from observations of absorption line systems around bright galaxies (e.g. Lanzetta et al. 1995). These halos may have sufficient density to strip any low column density gas moving through them. Several investigators have suggested that such stripping is responsible for removing gas from the Magellanic Clouds as they orbit through the Galaxy's halo, producing the purely gaseous Magellanic Stream (e.g. Meurer, Bicknell & Gingold 1985; Sofue 1994; Moore & Davis 1994). Sofue & Wakamatsu (1993) and Sofue (1994) specifically stress that stripping by galaxy halos should also play an important role in the evolution of H i tidal tails.
The tidal features in each of our systems have H i column densities and velocities similar to those assumed in the numerical models of Sofue (1994) and Moore & Davis (1994), which resulted in rather extreme stripping of the H i clouds. Although this may seem to provide an immediate explanation for our observations of gas/star displacements, we point out that these column densities and velocities are typical of all of the tails thus far imaged in H i, the great majority of which do not show the extreme displacements we describe here. There is no reason to believe that the halo properties of NGC 520, Arp 299 and Arp 220 are any different from, or that the encounters were any more violent than similar mergers which do not exhibit such dramatic displacements (e.g., NGC 4038/9, NGC 7252, NGC 4676 HvG96; NGC 3628 Dahlem et al. 1996; NGC 2623, NGC 1614, Mrk 273 Hibbard & Yun in preparation; NGC 3256 English et al. 1999). In fact, in light of the stripping simulations mentioned above, one wonders why such displacements are not more common.
A possible solution to this puzzle is suggested by the results of numerical simulations of major mergers. In these simulations, the material distributed throughout the halos of the progenitors is tidally distended along with the tails, forming a broad sheath around them (see e.g. the video accompanying Barnes 1992). This sheath has kinematics similar to those of the colder tail material, resulting in much lower relative velocities than if the tail were moving through a static halo, thereby greatly reducing any relative ram pressure force.
In summary, while halo stripping might be effective for discrete systems moving through a static halo (such as the LMC/SMC through the halo of the Galaxy, or disks through a hot cluster IGM), the lack of widespread H i/optical decoupling in mergers suggests that it is not very effective for removing gas from tidal tails, and it does not appear to be a suitable explanation of the present observations.
#### 3.4.2 RPS from Expanding Superwind
The three systems under discussion host massive nuclear starbursts with associated powerful outflows or “superwinds”. Optical emission lines and/or X-ray emission reveal that the observed outflows extend for tens of kpc from the nuclear regions. Theoretical calculations suggest that the observed gas plumes represent just the hottest, densest regions of a much more extensive, lower density medium (Wang 1995). In each of these three systems, the most extreme gaps in the H i distribution appear along the inferred direction of the expanding hot superwind (Figs. 1d, 2d, 3d). A very similar anti-correlation between an out-flowing wind and tidally disrupted H i has been observed in the M82 system (Yun et al. 1993; Strickland et al. 1997; Shopbell & Bland-Hawthorn 1998), the NGC 4631 system (Weliachew et al. 1978; Donahue et al. 1995; Wang et al. 1995; Vogler & Pietsch 1996), and possibly the NGC 3073/9 system (Irwin et al. 1987; Filippenko & Sargent 1992). It has been suggested that this anti-correlation is due to an interaction between the blown-out gas of the superwind and the cold gaseous tidal debris, either as the wind expands outward into the debris, or as the tidal debris passes through the wind (Chevalier & Clegg 1985; Heckman, Armus & Miley 1987, and references above).
Figure 5 presents the suggested geometry for the case of Arp 299. This figure is constructed from our preliminary efforts to model the northern tail of Arp 299 using N-body simulations similar to those presented in Hibbard & Mihos (1995; i.e. no hydrodynamical effects are included). We found that we could not match the morphology and kinematics of both filaments simultaneously, but could match either one separately. Fig. 5 presents the results of combining these two solutions. In this sense this figure is not a self-consistent fit to the data, but simply a cartoon which illustrates the proposed relative placement of the tidal tail and the expanding wind. In this figure, the wind opening angle is illustrated by the cone, the gas-rich regions of the tails are represented by dark and light grey circles, and the gas-poor regions of the tails are represented by white circles. The figure illustrates how the restricted opening angle of such a wind (or of an ionization cone, cf. §3.5.1) may intersect only a portion of the ribbon-like tail.
Here we estimate whether ram pressure stripping by the nuclear superwind can exert sufficient pressure on gas at the large distances typical of tidal tails. We use equation (5) from Heckman, Lehnert & Armus (1993; see also Chevalier & Clegg 1985) to calculate the expected ram pressure ($`P_{RPS}`$) of the superwind far from the starburst as a function of its bolometric luminosity ($`L_{bol}`$):
$$P_{RPS}(r)=4\times 10^{-10}\mathrm{dyne}\mathrm{cm}^{-2}\left(\frac{L_{bol}}{10^{11}L_{\odot }}\right)\left(\frac{1\mathrm{kpc}}{r}\right)^2$$
(1)
This equation has been shown to fit the pressure profile derived from X-ray and optical emission line data of Arp 299 (Heckman et al. 1999). It should provide a lower limit to the ram pressure, since it assumes that the wind expands spherically, while observations suggest that the winds are limited in solid angle.
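For orientation (the luminosity and radius in this example are illustrative placeholders, not values from Table 1), a starburst with $`L_{bol}=5\times 10^{11}L_{\odot }`$ would exert, at a tail radius of $`r=50`$ kpc,

$$P_{RPS}(50\mathrm{kpc})=4\times 10^{-10}\times 5\times \left(\frac{1}{50}\right)^2\mathrm{dyne}\mathrm{cm}^{-2}\approx 8\times 10^{-13}\mathrm{dyne}\mathrm{cm}^{-2},$$

roughly forty times the fiducial tidal pressure derived in eqn. (3) below.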
This pressure can be compared with the pressure of the ambient medium in the tidal tail ($`P_{tidal}`$), given by the energy per unit volume:
$$P_{tidal}=C\times \rho _{gas}\sigma _{gas}^2$$
(2)
where $`C`$ is a constant, $`\rho _{gas}`$ is the mass density of the gas, and $`\sigma _{gas}`$ is the velocity dispersion of the gas. For an equation of state of the form $`P\rho ^\gamma `$, the constant $`C`$ is equal to $`\gamma ^{-1}=\frac{3}{5}`$, and $`\sigma _{gas}`$ corresponds to gas sound speed, while for a self-gravitating cloud, $`C=\frac{3}{2}`$ and $`\sigma _{gas}`$ is the one-dimensional velocity dispersion of the cloud. In both cases, we assume that the observed line-of-sight velocity dispersion of the H i is a suitable measure of $`\sigma _{gas}`$.
The mass density of the tidal gas is given by $`\rho _{gas}=1.36\times m_H\times n_{HI}`$, where the numerical constant accounts for the presence of He, $`m_H`$ is the mass of a hydrogen atom, and $`n_{HI}`$ is the number density of atomic hydrogen. If we assume the gas is uniformly distributed<sup>4</sup><sup>4</sup>4Clearly the results will be very different if the tidal gas is mainly in dense clouds, a point that can be tested with higher-resolution VLA observations. For now we calculate the ram pressure effect on the diffuse gas. with a column density $`N_{HI}`$ along a length $`dL`$, we have $`n_{HI}=6.5\times 10^{-3}\mathrm{cm}^{-3}(\frac{N_{HI}}{2\times 10^{20}\mathrm{cm}^{-2}})(\frac{10\mathrm{kpc}}{dL})`$, where the fiducial values are typical of tidal features (HvG96, Hibbard & Yun in preparation). We rewrite eqn. (2) in terms of the observables:
$`P_{tidal}=2\times 10^{-14}\mathrm{dyne}\mathrm{cm}^{-2}\left({\displaystyle \frac{C}{1.5}}\right)`$ (3)
$`\left({\displaystyle \frac{N_{HI}}{2\times 10^{20}\mathrm{cm}^{-2}}}\right)\left({\displaystyle \frac{10\mathrm{kpc}}{dL}}\right)\left({\displaystyle \frac{\sigma _{HI}}{10\mathrm{km}\mathrm{s}^{-1}}}\right)^2`$
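The fiducial density entering this expression is just $`n_{HI}=N_{HI}/dL`$: with 10 kpc $`=3.1\times 10^{22}`$ cm,

$$n_{HI}=\frac{2\times 10^{20}\mathrm{cm}^{-2}}{3.1\times 10^{22}\mathrm{cm}}\approx 6.5\times 10^{-3}\mathrm{cm}^{-3},$$

which is where the $`2\times 10^{-14}`$ dyne cm<sup>-2</sup> normalization of eqn. (3) comes from (this arithmetic is ours, spelling out the coefficients quoted above).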
The maximum radius out to which we expect material to be stripped ($`R_{RPS}`$) is then given by the requirement that $`P_{RPS}(r)=P_{tidal}(r)`$ at $`r=R_{RPS}`$. We replace $`L_{bol}`$ in eqn. (1) by the IR luminosity ($`L_{IR}`$), under the assumption that the IR luminosity arises from reprocessed UV photons from the starburst<sup>5</sup><sup>5</sup>5This assumes that the IR luminosity is not enhanced due to the presence of AGN. There is no evidence for an energetically important AGN in any of these three systems. (Lonsdale, Persson & Matthews 1984, Joseph & Wright 1985). The very high IR luminosities of these systems ($`L_{IR}/L_B>10`$) make it likely that this is indeed the case, and we are probably making an error of $`\sim `$ 10% (e.g. Heckman, Lehnert & Armus 1993). Equating eqns. (1) and (3) we find:
$`R_{RPS}=140\mathrm{kpc}\left({\displaystyle \frac{1.5}{C}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{L_{IR}}{10^{11}L_{\odot }}}\right)^{\frac{1}{2}}`$ (4)
$`\left({\displaystyle \frac{2\times 10^{20}\mathrm{cm}^{-2}}{N_{HI}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{dL}{10\mathrm{kpc}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{10\mathrm{km}\mathrm{s}^{-1}}{\sigma _{HI}}}\right)`$
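The 140 kpc coefficient follows directly from setting eqn. (1) equal to eqn. (3) at the fiducial values (recall that eqn. 1 is normalized at 1 kpc):

$$R_{RPS}=\left(\frac{4\times 10^{-10}}{2\times 10^{-14}}\right)^{1/2}\mathrm{kpc}=\sqrt{2\times 10^4}\mathrm{kpc}\approx 141\mathrm{kpc}.$$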
In Table 1 we provide estimates of $`R_{RPS}`$ for the systems considered here. In calculating $`R_{RPS}`$ we made the very conservative assumption that the values of $`N_{HI}`$ and $`\sigma _{HI}`$ for the stripped gas are equal to the maximum values found within the tidal tails (see Table 1). The results of these calculations indicate that, in all cases, $`R_{RPS}`$ is larger than the radii of the observed gaps in the tidal H i distributions. Therefore, in principle, the wind should be able to strip the gas from any tidal material in its path.
The above derivation assumes that the tidal gas is at rest with respect to the wind. It can be easily generalized to the case of a wind impacting an expanding tidal feature by reducing the wind ram pressure by a factor $`(\frac{V_{wind}-V_{tidal}}{V_{wind}})^2`$. For NGC 520 and Arp 220, the tidal gas is primarily in rotation (i.e., moves perpendicular to superwind), so we expect the gas to feel the full ram pressure given above. For Arp 299, Heckman et al. (1999) find $`V_{wind}`$ = 800 km s<sup>-1</sup>, and we estimate a maximum $`V_{tidal}`$ = 240 km s<sup>-1</sup> (HY99). Therefore, the ram pressure could be reduced by 50%, reducing $`R_{RPS}`$ to 70% of that listed in Table 1, i.e. $`R_{RPS}\sim 100`$ kpc for Arp 299. This is still large enough to reach to the region of the N clump in Fig. 2. The regions of high H i velocity dispersion indicated in Fig. 2c may be due to the influence of such a wind. We note that these regions occur on the side of the H i features that face the starburst region. However, no such kinematic signatures are visible in the gas near the wind axis in NGC 520 or Arp 220.
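For definiteness, the Arp 299 reduction works out as follows (this arithmetic is ours):

$$\left(\frac{800-240}{800}\right)^2=0.49,$$

so the effective ram pressure is roughly halved; and since eqn. (4) gives $`R_{RPS}\propto P_{RPS}^{1/2}`$, the stripping radius scales by $`\sqrt{0.49}=0.7`$.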
The lack of gaseous/stellar displacements in the tidal tails of many superwind systems might seem to provide a strong argument against the scenario outlined above. However, there are two conditions needed to produce wind-displaced tidal features: the starburst must be of sufficient energy and duration to achieve “blowout”, and the tidal H i must intersect the path of the expanding wind material. The second condition is not met for the blowout systems NGC 4676, NGC 3628, NGC 2623, NGC 1614, and NGC 3256 (references given in §3.4.1). In these systems the tidal tails appear to lie at large angles with respect to the blowout axis, and their tidal tails should not intersect the wind. Both conditions are met for M82 and NGC 4631, which both show extreme H i/optical displacements. For these two systems, the high-latitude H i appears to be accreted from nearby disturbed companions (M81 and NGC 4656, respectively; Yun et al. 1993, Weliachew et al. 1978), while for the three major mergers under study here the H i appears to intercept the path of the wind as a result of a highly inclined encounter geometry. A high inclination encounter geometry is therefore a prerequisite for such displaced morphologies, and the host of the superwind should be the disk with a retrograde or polar spin geometry.
Nevertheless, ram pressure stripping cannot provide a complete explanation of the observations. Since the stars are unaffected by the wind, it would be an unusual coincidence for the edges of the optical plumes to correspond with the edge of the cold gas which is presently being ablated. Nor does it seem likely that the wind could be sufficiently collimated to “bore” into the northern H i clump in Arp 299 just where the optical tail appears projected upon it. Therefore a second process is still needed to explain the small-scale anti-correlations. In conclusion, the expanding wind should affect any tidal H i in its path; however this effect alone cannot explain all the details of the observations.
### 3.5. Photoionization
Disk galaxies are known to exhibit a precipitous drop in neutral hydrogen column density beyond column densities of a few times $`10^{19}`$ cm<sup>-2</sup> (Corbelli, Schneider & Salpeter 1989; van Gorkom 1993; Hoffman et al. 1993). This drop has been attributed to a rapid change in the ionization fraction of the gas due to influence of the intergalactic UV field (Maloney 1993, Corbelli & Salpeter 1993, Dove & Shull 1994; see also Felten & Bergeron 1969, Hill 1974), rather than a change in the total column density of H.
Tidal tails are assembled from the outermost regions of disk galaxies. Since this material is redistributed over a much larger area than it formerly occupied, its surface brightness must decrease accordingly. Therefore, if the progenitors were typical spirals, with H i disks extending to column densities of a few times $`10^{19}`$ cm<sup>-2</sup>, then the resulting tidal tail must have gas at much lower column densities. However, tidal tails exhibit a similar edge in their column density distribution at a similar column density (HvG96; Hibbard & Yun in preparation). This is one of the most compelling pieces of evidence for an abrupt change in the phase of the gas at low H i column density. The outer tails mapped in H i should therefore be the proverbial “tip of the iceberg” of a lower column density, mostly ionized medium. With the tidal gas in this very diffuse state, fluctuations in the incident ionizing flux might be expected to produce accompanying fluctuations in the neutral gas fraction.
Given these considerations, we examine the possibility that the total hydrogen column density does not change at the regions illustrated in Figs. 1–4, but that the neutral fraction does, i.e. that the gas in the regions under study has a higher ionization fraction than adjacent regions. The intergalactic UV field should be isotropic, and would not selectively ionize certain regions of the tails. Here we examine the possibility of two non-isotropic sources of ionizing flux: (1) leakage of UV flux from the circumnuclear starburst; (2) ionization by late B stars and white dwarfs associated with the evolved stellar tidal population.
Our procedure is to compare the expected ionizing flux density shortward of 912 Å to the expected surface recombination rate of the gas. We assume that in the area of interest the gas is at a temperature of $`10^4`$ K, for which a case-B recombination coefficient of $`\alpha _B=2.6\times 10^{-13}\mathrm{cm}^3\mathrm{s}^{-1}`$ is appropriate (Spitzer 1956). We assume that the hydrogen is almost completely ionized, so $`n_e\approx n_H`$. We further assume that the density of ionized gas is the same as the density of the neutral gas in the adjacent regions, $`n_H\approx n_{HI}`$, where $`n_{HI}`$ is calculated as above ($`n_H\approx N_{HI}/dL`$, §3.4.2). The detailed ionization state will depend sensitively on the clumpiness of the gas, but a full treatment of this problem is beyond the scope of this paper. Here we wish to investigate if these processes are in principle able to create effects similar to those observed.
#### 3.5.1 Photoionization by the Starburst
Here we consider the case that the superwind does not affect the tidal gas by a direct interaction, but influences it by providing a direct path from the tidal regions to the starburst, free of dust and dense gas (see also Fig. 5). Through these holes, ionizing photons from the young hot stars stream out of the nuclear regions and are quickly absorbed by the first neutral atoms they encounter. Following Felten & Bergeron (1969; see also Maloney 1993), we solve the equation:
$$n_{HI}^2\alpha _BdL=I=\frac{1}{4\pi r^2}\int _{h\nu =13.6\mathrm{eV}}^{\mathrm{\infty }}\frac{L_\nu }{h\nu }d\nu $$
(5)
The right hand side of this equation represents the total ionizing radiation escaping the starburst region along a direction that has been cleared of obscuring material by the superwind. We express this in terms of the total ionizing flux of a completely unobscured starburst of a given bolometric luminosity $`L_{bol}`$ by introducing the factor $`f_{esc}(\mathrm{\Omega }_{wind})`$ to account for the fact that only a fraction of the photons emitted into a solid angle $`\mathrm{\Omega }_{wind}`$ find their way out of the starburst region<sup>6</sup><sup>6</sup>6It is important to differentiate $`f_{esc}`$, the total fraction of ionized photons emerging from a starburst, and $`f_{esc}(\mathrm{\Omega }_{wind})`$, the fraction emerging along a particular sightline. $`f_{esc}`$ is the total angle averaged fraction, i.e. $`f_{esc}`$ is the integral of $`f_{esc}(d\mathrm{\Omega })`$ over all solid angles, while $`f_{esc}(\mathrm{\Omega }_{wind})`$ is the integral over a solid angle cleared by the wind. Most studies in the literature quote values for $`f_{esc}`$.. The expected ionizing flux for a starburst of a given bolometric luminosity $`L_{bol}`$ is calculated from the population synthesis models of Bruzual & Charlot (1993; 1995), assuming continuous star formation with a duration longer than 10 Myr (long enough for the burst to achieve blowout), a Salpeter IMF with $`M_{lower}=0.1M_{\odot }`$ and $`M_{upper}=125M_{\odot }`$, and solar metallicity. This yields<sup>7</sup><sup>7</sup>7With similar assumptions, the “Starburst99” models of Leitherer et al. (1999) give approximately the same numerical coefficient. $`I=f_{esc}(\mathrm{\Omega }_{wind})\times \frac{1.83\times 10^{54}\mathrm{photons}\mathrm{s}^{-1}}{4\pi r^2}\times \frac{L_{bol}}{10^{11}L_{\odot }}`$. Again making the standard assumption that most of the starburst luminosity is emitted in the far infrared (i.e. $`L_{bol}=L_{IR}`$, cf. §3.4.2), we rearrange eqn. (5) to solve for the radius, $`R_{ionized}`$, out to which the starburst is expected to ionize a given column density of H i of thickness $`dL`$:
$`R_{ionized}`$ $`=`$ $`66\mathrm{kpc}\left({\displaystyle \frac{f_{esc}(\mathrm{\Omega }_{wind})}{0.10}}\right)^{1/2}\left({\displaystyle \frac{L_{IR}}{10^{11}L_{\odot }}}\right)^{1/2}`$ (6)
$`\left({\displaystyle \frac{2\times 10^{20}\mathrm{cm}^{-2}}{N_{HI}}}\right)\left({\displaystyle \frac{dL}{10\mathrm{kpc}}}\right)^{1/2}`$
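The 66 kpc coefficient can be checked against the same fiducial numbers used in §3.4.2 (this check is ours): with $`f_{esc}(\mathrm{\Omega }_{wind})=0.10`$ the escaping rate is $`1.83\times 10^{53}`$ photons s<sup>-1</sup>, while the recombination rate per unit area is $`n_{HI}^2\alpha _BdL\approx (6.5\times 10^{-3})^2\times 2.6\times 10^{-13}\times 3.1\times 10^{22}\approx 3.4\times 10^5`$ cm<sup>-2</sup> s<sup>-1</sup>, so

$$R_{ionized}=\left(\frac{1.83\times 10^{53}}{4\pi \times 3.4\times 10^5}\right)^{1/2}\mathrm{cm}\approx 2.1\times 10^{23}\mathrm{cm}\approx 66\mathrm{kpc}.$$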
Resulting values for $`R_{ionized}`$ are listed in Table 1. For this computation, we have adopted a value of 10% for $`f_{esc}(\mathrm{\Omega }_{wind})`$. This is equal to the total fraction of ionizing photons, $`f_{esc}`$, escaping from a normal disk galaxy as calculated by Dove, Shull & Ferrara (1999). Even higher values of $`f_{esc}`$ are expected in starburst systems (Dove et al. 1999). Since we stipulate that a higher fraction of ionizing photons escape along sightlines above the blowout regions than are emitted along other directions, it follows that $`f_{esc}(\mathrm{\Omega }_{wind})>f_{esc}`$, and as a result the values of $`R_{ionized}`$ calculated in Table 1 should be conservative estimates.
Table 1 shows that under these simplified conditions, $`R_{ionized}`$ is of the order of, or larger than, the tidal radii of interest. We therefore conclude that the starburst seems quite capable of ionizing tidal H i, if indeed there is an unobstructed path from the starburst to the tidal regions. This might explain the lack of H i along the wind axis in NGC 520 and Arp 220, and the absence of H i along the optical tidal tail in Arp 299.
This process is especially attractive since it can potentially explain the lack of H i at the bases of otherwise gas-rich tidal tails in NGC 7252 (Hibbard et al. 1994), Arp 105 (Duc et al. 1997), and NGC 4039 (Hibbard, van der Hulst & Barnes in preparation). These systems do not show evidence for expanding superwinds, which rules out the possibility that RPS is playing a role. And each of these systems possesses a level of star formation that, according to eqn. 6, is capable of ionizing gas out to the necessary radii.
However, photoionization by the central starburst does not seem capable of explaining all of the observations. As with the wind hypothesis above, it would be an unusual coincidence for the edges of the optical plumes to correspond with the edge of ionization cone. Therefore a second process is still needed to explain the small-scale anti-correlations.
#### 3.5.2 Photoionization by the Optical Tails
The fact that the H i column density falls off just as the optical surface brightness increases at the edges of various tidal features (Fig. 4) leads us to suspect that there may be local sources of ionization within the stellar features themselves. For NGC 520 and Arp 220, we believe the outer H i is in a disk structure which is intersected by the tidal stellar plumes and we wish to investigate whether ionization by evolved sources within the stellar plumes, such as late B stars and white dwarfs, could be responsible for decreasing the neutral fraction of the diffuse outer H i. For Arp 299 the geometry is more complicated, and we refer the reader to Fig. 5. Here we suggest that part of the purely gaseous tidal filament (the light grey filament in Fig. 5) is ionized by evolved sources near the end of the stellar tail (the dark grey filament in Fig. 5, especially those regions nearest the gas-rich filament in the right hand panel of this figure).
As in the previous section, we balance the surface recombination rate with the expected ionizing flux density (eqn. 5). In this case, we calculate the ionizing flux density for an evolved population of stars of a given $`R`$-band luminosity density ($`\mathrm{\Sigma }_R`$, in $`L_{\odot }\mathrm{pc}^{-2}`$).
In order to approximate the stellar populations in the tidal tails, we assume that the tails arise from the outer edges of an Sbc progenitor, and that star formation ceased shortly after the tails were launched. We again use the models of Bruzual & Charlot (1993, 1995) for a Salpeter IMF over the mass range 0.1–125 $`M_{\odot }`$, and adopt an exponentially decreasing SFR with a time constant of 4 Gyr (typical of an Sbc galaxy, Bruzual & Charlot 1993), which is truncated after 10 Gyr and allowed to age another 500 Myr. This simulates the situation in which star formation within the disk is extinguished as the tail forms, and the ejected stellar population passively fades thereafter. While tidal tails frequently exhibit in-situ star formation (e.g. Schweizer 1978; Mirabel, Lutz & Maza 1991), it is usually not widespread. Under these assumptions, a population with a projected $`R`$-band surface brightness of 1 $`L_{R,\odot }`$ pc<sup>-2</sup> should produce an ionizing flux of $`2.36\times 10^4\mathrm{ph}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$. Therefore eqn. (5) becomes $`n_{HI}^2\alpha _BdL<2.36\times 10^4\mathrm{ph}\mathrm{s}^{-1}\mathrm{cm}^{-2}\times \mathrm{\Sigma }_R`$, which can be rewritten as:
$`\mathrm{\Sigma }_R`$ $`>`$ $`14.28L_{R,\odot }\mathrm{pc}^{-2}\left({\displaystyle \frac{N_{HI}}{2\times 10^{20}\mathrm{cm}^{-2}}}\right)^2\left({\displaystyle \frac{10\mathrm{kpc}}{dL}}\right)`$ (7)
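The coefficient here is simply the ratio of the fiducial recombination rate per unit area (about $`3.4\times 10^5`$ cm<sup>-2</sup> s<sup>-1</sup>, as evaluated in §3.5.1) to the ionizing flux per unit of $`R`$-band surface brightness (this arithmetic is ours):

$$\mathrm{\Sigma }_R>\frac{3.4\times 10^5\mathrm{cm}^{-2}\mathrm{s}^{-1}}{2.36\times 10^4\mathrm{ph}\mathrm{s}^{-1}\mathrm{cm}^{-2}}L_{R,\odot }\mathrm{pc}^{-2}\approx 14L_{R,\odot }\mathrm{pc}^{-2},$$

reproducing the coefficient of eqn. (7) up to rounding.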
Noting that 1 $`L_{\odot }\mathrm{pc}^{-2}`$ corresponds to $`\mu _R`$=25.9 mag arcsec<sup>-2</sup>, we rewrite this as a condition on the surface brightness of the tidal features:
$`\mu _R`$ $`<`$ $`23.0\mathrm{mag}\mathrm{arcsec}^{-2}`$ (8)
$`-2.5\times log\left[\left({\displaystyle \frac{N_{HI}}{2\times 10^{20}\mathrm{cm}^{-2}}}\right)^2\left({\displaystyle \frac{10\mathrm{kpc}}{dL}}\right)\right]`$
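The 23.0 mag arcsec<sup>-2</sup> fiducial is just the conversion of the coefficient in eqn. (7) using the zero point quoted above:

$$\mu _R=25.9-2.5\mathrm{log}(14.28)=25.9-2.9=23.0\mathrm{mag}\mathrm{arcsec}^{-2}.$$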
Referring to Fig. 4, we see that only the northern plume of Arp 220 is bright enough to ionize nearby tidal H i at the appropriate column densities. Neither the optical plume in the NGC 520 system nor the northern tail in the Arp 299 system appears bright enough to ionize the necessary columns of hydrogen unless the tidal features are unreasonably thick ($`\sim `$60 kpc). However, since we have no other explanation for the small scale H i/optical differences illustrated in Fig. 4, we are hesitant to abandon this explanation too quickly.
A possible solution is to invoke continued star formation even after the tails are ejected. For instance, if we do not truncate the star formation rate after 10 Gyr, instead allowing the star formation rate to continue its exponential decline as the tail expands, then the ionizing flux per 1 $`L_{R,\odot }`$ pc<sup>-2</sup> is 70 times higher than the value of $`2.36\times 10^4\mathrm{ph}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ used above. This would lower the fiducial surface brightness in eqn. (8) from 23.0 mag arcsec<sup>-2</sup> to 27.5 mag arcsec<sup>-2</sup>, in which case the faint tidal features in NGC 520 ($`\mu _R\sim `$ 25 mag arcsec<sup>-2</sup>) and Arp 299 ($`\mu _R\sim `$ 26.5 mag arcsec<sup>-2</sup>) could indeed ionize the necessary column densities of adjoining H i.
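The shift of the fiducial limit corresponds directly to the factor of 70 in ionizing flux: $`2.5\mathrm{log}(70)\approx 4.6`$ mag, so $`23.0+4.6\approx 27.5`$ mag arcsec<sup>-2</sup>, up to rounding.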
The observed broad-band colors of the tidal tails are not of sufficient quality to discriminate between these two star formation histories, since the expected color differences are only of order $`B-R`$=0.1 mag. However, whether or not the gas is more highly ionized in the regions of interest can be addressed observationally. The expected emission measure ($`EM=\int n_e^2dl`$) can be parameterized as:
$$EM=0.42\mathrm{cm}^{-6}\mathrm{pc}\left(\frac{N_{HI}}{2\times 10^{20}\mathrm{cm}^{-2}}\right)^2\left(\frac{10\mathrm{kpc}}{dL}\right)$$
(9)
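For orientation, the coefficient follows from the uniform-density assumption of §3.5 ($`n_e\approx n_{HI}`$), since $`EM=n_e^2dL`$ at the fiducial values:

$$EM\approx (6.5\times 10^{-3}\mathrm{cm}^{-3})^2\times 10^4\mathrm{pc}\approx 0.42\mathrm{cm}^{-6}\mathrm{pc}.$$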
Since emission measures of order 0.2 cm<sup>-6</sup> pc have been detected with modern CCD detectors (e.g. Donahue et al. 1995; Hoopes, Walterbos & Rand 1999), there is some hope of being able to observationally determine if regions of the tidal tails are significantly ionized. If the gas has a clumpy distribution, then there should be some high density peaks which might be sufficiently bright to yield reliable emission line ratios. Such ratios would allow one to determine the nature of the ionizing source, e.g. photoionization vs. shocks. Therefore, while we cannot assert unequivocally that photoionization plays a role in shaping the outer tidal morphologies, it is possible to test this hypothesis with future observations.
The hypothesis that ionizing flux from a stellar tidal feature may ionize gas in a nearby gaseous tidal feature is not necessarily at odds with the observations that many stellar tails are gas-rich. This is because tails with cospatial gas and stars arise from regions originally located within the stellar disk of the progenitors, while the optical faint gas-rich tidal features likely arise from regions beyond the optical disk (§ 3.1). In normal disk galaxies, the H i within the optical disk is dominated by a cooler component with a smaller scale height and velocity dispersion, while the H i beyond the optical disk is warmer and more diffuse, with a larger scale height and velocity dispersion (Braun 1995, 1997). As a result, $`dL`$ should be considerably larger for purely gaseous tidal features than for optically bright tidal features.
## 4. Conclusions
In this paper we have described differences between gaseous and stellar tidal features. There are large-scale differences, such as extensive purely gaseous tidal features (the outer disks in NGC 520 and Arp 220 and the outer filament in Arp 299) and largely gas-poor optical features (tidal plumes in NGC 520 and Arp 220 and the inner filament in Arp 299). And there are smaller-scale differences: the anti-correlation between the edges of gaseous and optical features depicted in Fig. 4. A similar anti-correlation is observed between H i and optical shells in shell galaxies (Schiminovich et al. 1994a,b, 1999), many of which are believed to be more evolved merger remnants.
We have examined a number of possible explanations for these observations, including dust obscuration, differences in the original distribution of gas and starlight in the progenitor disks, gas cloud collisions within the developing tails, ram pressure stripping due to an extensive hot halo or an expanding superwind, and photoionization by either the central starburst or evolved sources in the tidal tails themselves. However, no one model easily and completely explains the observations, and it is conceivable that all explanations are playing a role at some level.
The most likely explanation for the lack of starlight associated with the outer tidal H i is that such features arise from the H i-rich but optically faint outer radii of the progenitor disks. The relatively short lifetimes of luminous stars and the large velocity dispersions of less luminous stars, especially with respect to the gas, will further dilute the luminous content of this material, and the H i-to-light ratio of the resulting tidal features will increase with time (Hibbard et al. 1994). Gaseous tidal extensions with very little detectable starlight would seem to be the natural consequence. The outer H i rings in NGC 520 and Arp 220 and the gas-rich outer filament in Arp 299 are all likely to arise from these gas-rich regions of their progenitor disks.
For the gas-poor tidal features we suggest that the starburst has played an important role in shaping the gaseous morphology, either by sweeping the features clear of gas via a high-pressure expanding superwind, or by excavating a clear sightline towards the starburst and allowing ionizing photons to penetrate the tidal regions. The primary supporting evidence for this conclusion is rather circumstantial: the five galaxies with the most striking H i/optical displacements (the three systems currently under study here, and the H i accreting starburst systems M82 and NGC 4631) host massive nuclear starbursts with associated powerful outflows or superwinds aligned with the direction of the most extreme H i/optical displacements.
NGC 520, Arp 299, and Arp 220 each experienced prograde/polar or prograde/retrograde encounters. This relative geometry may be a pre-requisite for the morphological differences reported here. Retrograde and polar encounters do not raise extensive tidal tails (e.g. Barnes 1988), leaving large gaseous disks in the inner regions. These disks should help collimate and “mass-load” the superwind (Heckman, Lehnert & Armus 1993; Suchkov et al. 1996), which in turn leads to denser and longer-lived winds. Simultaneously, the combination of opposite spin geometries provides the opportunity for the tidal tail from the prograde system to rise above the starburst region in the polar or retrograde system, where it may intersect the escaping superwind or UV radiation. If this suggestion is correct, only systems hosting a galactic superwind and experiencing a high-inclination encounter geometry should exhibit such extreme differences between their H i and optical tidal morphologies.
The observations do not allow us to discriminate between the RPS and the photoionization models: simple calculations suggest that either is capable of affecting the diffuse outer gas if the geometry is right. There might be some evidence for the effects of an impinging wind on the outer material in Arp 299 from the increased velocity dispersion at several points (HY99); however NGC 520 and Arp 220 show no such signatures. Photoionization is an attractive solution, as it offers a means of explaining the lack of tidal H i found at the base of otherwise gas-rich tidal tails in mergers which show no evidence of a superwind (e.g. NGC 7252, Arp 105, NGC 4039; see § 3.5.1).
Since any ionized hydrogen will emit recombination lines, both explanations can be checked observationally. The expected emission measure is given by eqn. (9), which predicts detectable features at the column densities of interest. The morphology of the ionized gas should reveal the nature of the ionizing source: photoionized gas should be smoothly distributed, while gas excited by RPS should be concentrated in dense shocked regions on the edges of the H i that are being compressed by the superwind, i.e. on the edges nearest the wind axis in Figs. 1d, 2d & 3d. If the gas is clumpy, there may be regions bright enough to allow line ratios to be measured, which should further aid in discriminating between photoionization or shock excitation.
Only two scenarios are offered to explain the small-scale anti-correlations: dust obscuration and photoionization due to evolved sources in the optical tails. Dust obscuration likely affects the apparent tidal morphologies at the lowest light levels, but we suspect that the dust content is too low to significantly obscure the brighter tidal features. However, if the tidal tails are highly ionized, with the neutral gas representing only a small fraction of the total hydrogen column density, it is possible that we are grossly underestimating the expected amount of absorption. This question can be investigated directly with submm imaging of the cold dust in tidal tails.
The other possibility is that the UV flux from evolved sources in the optical tails is responsible for ionizing nearby diffuse outer H i. A simple calculation suggests that the tidal tail in Arp 220 is bright enough to ionize nearby H i, but the expected ionization flux from the optical tails in NGC 520 and Arp 299 is too low to explain the observed differences, unless significant star formation continued within these features after their tidal ejection. If this is indeed the case, then the regions where the neutral gas column density drops rapidly (see Fig.4) should contain ionized gas which would emit recombination radiation. The expected levels of emission should be observable with deep imaging techniques (see above). This situation requires that the gas and stellar features are physically close, and not just close in projection, which can be tested with detailed numerical simulations.
We would like to thank Lee Armus and Tim Heckman for sharing unpublished results, and Rhodri Evans, Jacqueline van Gorkom, Dave Schiminovich, and Josh Barnes for useful discussions. We thank the referee, Chris Mihos, for a thorough and useful report.
no-problem/9912/hep-ph9912442.html | ar5iv | text | # Electromagnetic Corrections to Low-Energy $`\pi \pi `$ Scattering
## INTRODUCTION
The Chiral Perturbation Theory (ChPT) analyses, in both the generalized and the standard frameworks, of two-loop effects in low-energy $`\pi \pi `$ scattering lead to strong interaction corrections which are rather small as compared to the leading order and one-loop contributions, of the order of 5% in the case of the S wave scattering lengths $`a_0^0`$ and $`a_0^2`$, for instance. This leads one to expect that higher order corrections to these quantities are well under control and can be safely neglected. However, these calculations were undertaken without taking into account isospin breaking effects, coming either from the mass difference between the $`u`$ and $`d`$ quarks, or from the electromagnetic interaction. The smallness of the two-loop corrections naturally raises the question of how they compare to these isospin breaking effects. The quark mass difference induces corrections of the order $`\mathcal{O}\left((m_d-m_u)^2\right)`$, which are expected to be negligible, as already known to be the case for the pion mass difference $`M_{\pi ^\pm }-M_{\pi ^0}`$, the latter being in fact dominated by electromagnetic effects due to the virtual photon cloud .
In the present contribution, we shall review the status of radiative corrections to the amplitude $`\pi ^+\pi ^-\to \pi ^0\pi ^0`$ . The reason why we focus on the latter comes from the fact that it directly appears in the expression of the lifetime of the $`\pi ^+\pi ^-`$ dimeson atom (see in particular the last of these references), that will be measured by the DIRAC experiment at CERN .
## VIRTUAL PHOTONS IN ChPT: THE GENERAL FRAMEWORK
The general framework for a systematic study of radiative corrections in ChPT has been described in . It consists in writing down a low-momentum representation for the generating functional of QCD Green's functions of quark bilinears in the presence of the electromagnetic field,
$$e^{i\mathcal{Z}[v_\mu ,a_\mu ,s,p,Q_L,Q_R]}=\int \mathcal{D}[\mu ]_{QCD}\,\mathcal{D}[A_\mu ]\,e^{i\int d^4x\,\mathcal{L}},$$
(1)
with
$$\mathcal{L}=\mathcal{L}_{QCD}^0+\mathcal{L}_\gamma ^0+\overline{q}\gamma ^\mu [v_\mu +\gamma _5a_\mu ]q-\overline{q}[s-i\gamma _5p]q+A_\mu [\overline{q_L}\gamma ^\mu Q_Lq_L+\overline{q_R}\gamma ^\mu Q_Rq_R].$$
(2)
Here $`\mathcal{L}_{QCD}^0`$ is the QCD lagrangian with $`N_f`$ flavours of massless quarks, while $`\mathcal{L}_\gamma ^0`$ is the Maxwell lagrangian of the photon field. The coupling of the latter to the left-handed and right-handed quark fields, $`q_{L,R}=\frac{1\mp \gamma _5}{2}q`$, occurs via the spurion sources $`Q_{L,R}(x)`$. Under local $`SU(N_f)_L\times SU(N_f)_R`$ chiral transformations $`(g_L(x),g_R(x))`$, they transform as (the transformation properties of the vector ($`v_\mu `$), axial ($`a_\mu `$), scalar ($`s`$) and pseudoscalar ($`p`$) sources can be found in Ref. )
$$q_I(x)\to g_I(x)q_I(x),\qquad Q_I(x)\to g_I(x)Q_I(x)g_I(x)^+,\qquad I=L,R,$$
(3)
so that the generating functional $`\mathcal{Z}`$ remains invariant (up to the usual Wess-Zumino term). Thus, although the electromagnetic interaction represents an explicit breaking of chiral symmetry, this breaking occurs in a well defined way, which is precisely the information encoded in the transformation properties of Eq. 3, much in the same way as the transformation properties of the scalar source $`s(x)`$ convey the information on how the quark masses break chiral symmetry. At the end of the calculation, the sources $`v_\mu (x)`$, $`a_\mu (x)`$ and $`p(x)`$ are set to zero, $`s(x)`$ becomes the diagonal quark mass matrix, while the electromagnetic spurions are turned into the diagonal charge matrix of the quarks. Additional symmetries of $`\mathcal{Z}`$ consist of the discrete transformations like parity and charge conjugation. Finally, $`\mathcal{L}`$ is invariant under an additional charge conjugation type symmetry, which however affects only the photon field and the electromagnetic spurion sources,
$$Q_{L,R}(x)\to -Q_{L,R}(x),\qquad A_\mu (x)\to -A_\mu (x).$$
(4)
The low-energy representation of $`\mathcal{Z}`$ is constructed systematically in an expansion in powers of momenta, of quark masses and of the electromagnetic coupling, by computing tree and loop graphs with an effective lagrangian $`\mathcal{L}_{\text{eff}}`$ involving the $`N_f\times N_f`$ matrix $`U(x)`$ of pseudoscalar fields, and constrained by the chiral symmetry properties as well as the above discrete symmetries.
At lowest order, in the counting scheme where the electric charge $`e`$ and the spurions $`Q_{L,R}(x)`$ count as $`\mathcal{O}(\text{p})`$, the effective lagrangian is thus simply given by (for the notation, we follow )
$$\mathcal{L}_{\text{eff}}^{(2)}=\frac{F^2}{4}\langle d^\mu U^+d_\mu U+\chi ^+U+U^+\chi \rangle -\frac{1}{4}F^{\mu \nu }F_{\mu \nu }+C\langle Q_RUQ_LU^+\rangle .$$
(5)
The effect of the electromagnetic interaction is contained in the covariant derivative $`d_\mu `$, defined as $`d_\mu U=\partial _\mu U-i(v_\mu +Q_RA_\mu +a_\mu )U+iU(v_\mu +Q_LA_\mu -a_\mu )`$, and in the low-energy constant $`C`$, which at this order is responsible for the mass difference of the charged and neutral pions,
$$\mathrm{\Delta }_\pi \equiv M_{\pi ^\pm }^2-M_{\pi ^0}^2=2Ce^2/F^2.$$
(6)
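As a rough numerical orientation (this estimate is ours, using the physical pion masses, $`e^2=4\pi \alpha `$, and the approximation $`F\approx F_\pi =92.4`$ MeV, which is legitimate only at this order), the observed splitting $`\mathrm{\Delta }_\pi \approx 1.26\times 10^{-3}`$ GeV<sup>2</sup> fixes the size of the counterterm:

$$C=\frac{F^2\mathrm{\Delta }_\pi }{2e^2}\approx \frac{(0.0924)^2\times 1.26\times 10^{-3}}{2\times 0.092}\mathrm{GeV}^4\approx 5.9\times 10^{-5}\mathrm{GeV}^4.$$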
In fact, for the case of two light flavours ($`N_f=2`$), to which we restrict ourselves from now on, this is the only direct effect induced by this counterterm. Of course, this mass splitting will in turn modify the kinematics of the low-energy $`\pi \pi `$ amplitudes and the corresponding scattering lengths. The details of this lowest order analysis can be found in Ref. . Here, we shall rather consider the structure of the effective theory at next-to-leading order. Besides the counterterms described by the well known low-energy constants $`l_i`$ , there are now, if we restrict ourselves to constant spurion sources, 11 additional counterterms at order $`\mathcal{O}(e^2\text{p}^2)`$, and three more at order $`\mathcal{O}(e^4)`$. The latter contribute only to the scattering amplitudes involving charged pions alone. The complete list of these counterterms $`k_i`$, $`i=1,\dots ,14`$ and of their $`\beta `$-function coefficients can be found in Refs. .
## RADIATIVE CORRECTIONS TO THE ONE LOOP $`\pi ^+\pi ^-\to \pi ^0\pi ^0`$ AMPLITUDE
The computation of the amplitude $`\mathcal{A}^{+-;00}(s,t,u)`$ for the process $`\pi ^+\pi ^-\to \pi ^0\pi ^0`$, including corrections of order $`\mathcal{O}(e^2\text{p}^2)`$ and of order $`\mathcal{O}(e^4)`$, is then a straightforward exercise in quantum field theory. The explicit expressions can be found in Ref. and will not be reproduced here. Let us rather discuss some features of the one-photon exchange graph of Fig. 1, which induces an electromagnetic correction to the strong vertex. This graph contains both the long range Coulomb interaction between the charged pions, and an infrared singularity. The latter is treated in the usual way, the physical, infrared finite, observable being the cross section for $`\pi ^+\pi ^-\to \pi ^0\pi ^0`$ with the emission of soft photons (one soft photon is enough at the order at which we are working here). The Coulomb force leads to a singular behaviour of the amplitude $`\mathcal{A}^{+-;00}(s,t,u)`$ at threshold ($`q`$ denotes the momentum of the charged pions in the center of mass frame),
$$Re\mathcal{A}^{+-;00}(s,t,u)=-\frac{4M_{\pi ^\pm }^2-M_{\pi ^0}^2}{F_\pi ^2}\frac{e^2}{16}\frac{M_{\pi ^\pm }}{q}+Re\mathcal{A}_{\text{thr}}^{+-;00}+O(q),$$
(7)
with
$`Re\mathcal{A}_{\text{thr}}^{+-;00}`$ $`=`$ $`32\pi \left[-{\displaystyle \frac{1}{3}}(a_0^0)_{\text{str}}+{\displaystyle \frac{1}{3}}(a_0^2)_{\text{str}}\right]-{\displaystyle \frac{\mathrm{\Delta }_\pi }{F_\pi ^2}}+{\displaystyle \frac{e^2M_{\pi ^0}^2}{32\pi ^2F_\pi ^2}}(30-3\mathcal{K}_1^{\pm 0}+\mathcal{K}_2^{\pm 0})`$ (8)
$`-{\displaystyle \frac{\mathrm{\Delta }_\pi }{48\pi ^2F_\pi ^4}}\left[M_{\pi ^0}^2(1+4\overline{l}_1+3\overline{l}_3-12\overline{l}_4)-6F_\pi ^2e^2(10-\mathcal{K}_1^{\pm 0})\right]`$
$`+{\displaystyle \frac{\mathrm{\Delta }_\pi ^2}{480\pi ^2F_\pi ^4}}\left[212-40\overline{l}_1-15\overline{l}_3+180\overline{l}_4\right],`$
and $`(a_0^0)_{\text{str}}`$ and $`(a_0^2)_{\text{str}}`$ denote the S wave scattering lengths in the presence of the strong interactions only, but expressed, for convention reasons, in terms of the charged pion mass ,
$`(a_0^0)_{\text{str}}`$ $`=`$ $`{\displaystyle \frac{7M_{\pi ^\pm }^2}{32\pi F_\pi ^2}}\left\{1+{\displaystyle \frac{5}{84\pi ^2}}{\displaystyle \frac{M_{\pi ^\pm }^2}{F_\pi ^2}}\left[\overline{l}_1+2\overline{l}_2-{\displaystyle \frac{3}{8}}\overline{l}_3+{\displaystyle \frac{21}{10}}\overline{l}_4+{\displaystyle \frac{21}{8}}\right]\right\}`$
$`(a_0^2)_{\text{str}}`$ $`=`$ $`-{\displaystyle \frac{M_{\pi ^\pm }^2}{16\pi F_\pi ^2}}\left\{1-{\displaystyle \frac{1}{12\pi ^2}}{\displaystyle \frac{M_{\pi ^\pm }^2}{F_\pi ^2}}\left[\overline{l}_1+2\overline{l}_2-{\displaystyle \frac{3}{8}}\overline{l}_3-{\displaystyle \frac{3}{2}}\overline{l}_4+{\displaystyle \frac{3}{8}}\right]\right\},`$ (9)
corresponding to the numerical values (we use $`F_\pi =92.4`$ MeV) $`(a_0^0)_{\text{str}}=0.20\pm 0.01`$ and $`(a_0^2)_{\text{str}}=-0.043\pm 0.004`$, respectively . The quantity $`Re\mathcal{A}_{\text{thr}}^{+-;00}`$, which is by itself free of the infrared divergence mentioned above, appears directly in the lifetime of the pionium atom , the long range Coulomb interaction being, in that case, absorbed by the bound state dynamics. The contributions of the low-energy constants $`k_i`$ are contained in the two quantities $`\mathcal{K}_1^{\pm 0}`$ and $`\mathcal{K}_2^{\pm 0}`$. Naive dimensional estimates lead to $`(e^2F_\pi ^2/M_{\pi ^0}^2)\mathcal{K}_1^{\pm 0}=1.8\pm 0.9`$ and $`(e^2F_\pi ^2/M_{\pi ^0}^2)\mathcal{K}_2^{\pm 0}=0.5\pm 2.2`$. With these estimates, one obtains
$$\frac{1}{32\pi }Re\mathcal{A}_{\text{thr}}^{+-;00}-\left[-\frac{1}{3}(a_0^0)_{\text{str}}+\frac{1}{3}(a_0^2)_{\text{str}}\right]=(1.2\pm 0.7)\times 10^{-3},$$
(10)
whereas the two-loop correction to the same combination of scattering lengths appearing between brackets amounts to $`4\times 10^{-3}`$. For a more careful evaluation of the contribution of the counterterms $`k_i`$ to $`Re\mathcal{A}_{\text{thr}}^{+-;00}`$, see .
Fig. 1. The one photon exchange electromagnetic correction to the strong vertex.
## CONCLUSION
We have evaluated the radiative corrections of order $`\mathcal{O}(e^2\text{p}^2)`$ and of order $`\mathcal{O}(e^4)`$ to the amplitude $`\mathcal{A}^{+-;00}(s,t,u)`$, which is relevant for the description of the pionium lifetime. Similar results for the scattering process involving only neutral pions can be found in Refs. . The formalism presented here for the case $`N_f=2`$ has also been applied to the study of radiative corrections to the pion form factors . Radiative corrections for the scattering amplitudes involving only charged pions, which would be relevant for the $`2p-2s`$ level-shift of pionium, for instance, have however not been worked out so far.
Information on the low-energy scattering of pions can only be obtained in an indirect way, either from the pionium lifetime, or from $`K_{\ell 4}`$ decays. This last process, however, has electromagnetic corrections of its own, which are only partly covered by the present analysis. A systematic framework devoted to the study of radiative corrections for the semi-leptonic processes has been presented in Refs. .
no-problem/9912/hep-ex9912021.html | ar5iv | text | # Review of Charm Lifetimes
## 1 Introduction
### 1.1 Motivation
The study of charm particle lifetimes is broadly motivated by two main goals. The first is to enable the conversion of relative branching fractions to partial decay rates and the second is to learn more about the strong interaction.
Experimental data on charm decays are normally obtained by measuring decay fractions, e.g. $`\mathrm{\Gamma }(D^0\to K^-\pi ^+)/\mathrm{\Gamma }_{tot}(D^0)`$, whereas theory calculates the partial decay rate, $`\mathrm{\Gamma }(D^0\to K^-\pi ^+)`$. The lifetime of the particle, $`\tau =\hbar /\mathrm{\Gamma }_{tot}(D^0)`$, is needed in order to convert the experimentally measured decay fractions into decay rates. Not only does this allow tests of theoretical predictions but it also enables the extraction of Standard Model parameters if the theoretical calculations are reliable, e.g. a comparison of $`D`$ semileptonic decay rates may allow a direct extraction of $`|V_{cs}|`$ and $`|V_{cd}|`$ allowing a test of the unitarity of the CKM matrix.
The second motivation for the study of lifetimes is that they are interesting in their own right. They allow us to learn more about the “Theoretically-Challenged” part of the Standard Model, i.e. non-perturbative QCD. This is one of the few areas of the Standard Model where experimental data and theoretical ideas closely interact and is thus intellectually interesting. For example, even though we have some models, we have little idea about exactly how quarks turn into hadrons and we are still learning about the importance of different contributions to quark decays. Calculations using Lattice QCD are only just now being used to study the dynamics of decays and reliable results are still being eagerly awaited .
### 1.2 Decay Diagrams
The lifetime of a particle is given by the following expression:
$$\tau =\frac{\hbar }{\mathrm{\Gamma }_{SL}+\mathrm{\Gamma }_{NL}+\mathrm{\Gamma }_{PL}}$$
(1)
where $`\mathrm{\Gamma }_{SL}`$ is the semileptonic decay rate, (e.g. $`\mathrm{\Gamma }(D^+\to \ell ^+\nu _{\ell }X)`$), $`\mathrm{\Gamma }_{NL}`$ is the non-leptonic or hadronic decay rate, (e.g. $`\mathrm{\Gamma }(D^+\to \mathrm{hadrons})`$), and $`\mathrm{\Gamma }_{PL}`$ is the purely leptonic decay rate, (e.g. $`\mathrm{\Gamma }(D^+\to \ell ^+\nu _{\ell })`$). Compared to the total rate, the purely leptonic decay rate is normally very small due to helicity suppression.<sup>1</sup><sup>1</sup>1The $`D`$ meson has spin $`0`$ so that in the decay, the resulting lepton (anti-lepton) and anti-neutrino (neutrino) must both be either left-handed or both right-handed in order to conserve angular momentum. However the $`V-A`$ nature of the weak interaction requires left-handed particles and right-handed anti-particles. In addition current data for $`D`$ meson decays indicate that the semileptonic rates for $`D^+`$ and $`D^0`$ are equal to within at least about 10% if not better.<sup>2</sup><sup>2</sup>2The semileptonic decay rate is given by the ratio of the semileptonic branching ratio to the lifetime. Using the world average values for these compiled by the Particle Data Group , $`\mathrm{\Gamma }_{SL}(D^+)=(1.071\pm 0.119)\times 10^{-13}`$ GeV and $`\mathrm{\Gamma }_{SL}(D^0)=(1.067\pm 0.041)\times 10^{-13}`$ GeV. This means that the large difference between the observed $`D^+`$ and $`D^0`$ lifetimes ($`\tau (D^+)/\tau (D^0)=2.55\pm 0.04`$) is due to a large difference in the hadronic decay rates for the $`D^+`$ and the $`D^0`$. Thus in contrast to the spectator model which only has the free charm quark decay diagram and predicts equal $`D^+`$ and $`D^0`$ lifetimes, we need to take into account spectator quark effects. This entails taking into account other decay diagrams like those in figure 1 and any interferences between them.
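To make the footnoted numbers concrete (the lifetime and branching fraction used here are rounded PDG-era inputs, quoted only for illustration), the semileptonic width is the product of the semileptonic branching fraction and the total width $`\mathrm{\Gamma }_{tot}=\hbar /\tau `$. For the $`D^0`$, with $`\tau \approx 0.415`$ ps and $`B_{SL}\approx 6.8`$%,

$$\mathrm{\Gamma }_{SL}(D^0)=B_{SL}\frac{\hbar }{\tau }\approx 0.068\times \frac{6.58\times 10^{-25}\mathrm{GeV}\mathrm{s}}{4.15\times 10^{-13}\mathrm{s}}\approx 1.1\times 10^{-13}\mathrm{GeV},$$

in good agreement with the value quoted in the footnote.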
The conventional wisdom used to explain the smaller hadronic width of the $`D^+`$ relative to the $`D^0`$ is that in the $`D^+`$ Cabibbo-allowed decays ($`c\overline{d}\to s(u\overline{d})\overline{d}`$), there exist identical quarks in the final state unlike for $`D^0`$, so there are additional (destructive) interference contributions for the $`D^+`$. Or, we can talk about a model where one views the interference as that occurring between the external spectator and internal spectator decay diagrams of figure 1 which can lead to the same exclusive final state. Ignoring the more complicated soft gluonic exchanges, it is relatively easy in this model to roughly show that the additional interference for inclusive hadronic decays for $`D^+`$ is destructive and can lead to a lifetime ratio of $`\tau (D^+)/\tau (D^0)\sim 2.0`$. However it is difficult to determine exactly how large a ratio of $`\tau (D^+)/\tau (D^0)`$ interference effects can accommodate and therefore how large is the additional contribution of Cabibbo-allowed W-exchange decays needed for the $`D^0`$. One has to take care in calculating the size of the Pauli interference since naive calculations can produce too large a value resulting in a negative total decay rate for the $`D^+`$ . Cabibbo-allowed W-exchange decay is expected to contribute to lowering the $`D^0`$ lifetime but this contribution is wavefunction and helicity suppressed ($`|f_D|^2m_s^2/m_c^4\sim 10^{-3}`$ without gluon exchange) and is difficult to calculate reliably.
Clearly a better understanding of charm inclusive decays is necessary. Experimental data on lifetimes from all the charm particles will allow us to learn more about how they decay and in turn use the data to extract Standard Model parameters like quark masses and the CKM matrix elements $`|V_{cs}|`$ and $`|V_{cd}|`$.
### 1.3 Theoretical Overview
A systematic approach now exists for the treatment of inclusive decays that is based on QCD and consists of an operator product expansion in the Heavy Quark Mass . In this approach the decay rate is given by:
$$\mathrm{\Gamma }_{H_Q}=\frac{G_F^2m_Q^5}{192\pi ^3}\mathrm{\Sigma }f_i|V_{Qq_i}|^2\left[A_1+\frac{A_2}{\mathrm{\Delta }^2}+\frac{A_3}{\mathrm{\Delta }^3}+\dots \right]$$
(2)
where the expansion parameter $`\mathrm{\Delta }`$ is often taken as the heavy quark mass and $`f_i`$ is a phase space factor. $`A_1=1`$ gives the spectator model term and the $`A_2`$ term produces differences between the baryon and meson lifetimes. The $`A_3`$ term includes the non-spectator W-annihilation and Pauli interference effects. For meson decays, parts of these terms can be related to certain observables whereas for baryons one relies solely on particular quark models or QCD sum rules to determine the parameters fully. The importance of higher order terms is not really known though some studies have pointed to possibly large higher order contributions .
A theoretical review is outside the scope of this article and the reader is referred to other reviews .
## 2 Review of Experimental Results
There have been new measurements of charm lifetimes since the 1998 review performed by the PDG . Some are results published in journals while others were presented at conferences this year. Table 1 shows the experiments that have shown new lifetime measurements.
### 2.1 Experimental Method
Unlike the lifetime measurements for the $`b`$ particles, the methods used for measurements of the charm particle lifetimes are more straightforward. Firstly, the number of reconstructed charm decays is large enough that only exclusive decays are used; inclusive methods are not needed. This means that the charm particle momentum is fully measured.
For the fixed target experiments, the resolutions of the production and decay vertices are about 10 $`\mu `$m in each of the two directions transverse to the beam direction and about 400–600 $`\mu `$m along the beam direction. The resolution varies with the multiplicity of charged tracks in the vertices as well as on the momenta of the charged tracks. Since the boost is typically large ($`\beta \gamma \sim 40`$–$`100`$) the full 3-dimensional decay length ($`\ell `$) is used to measure the proper time for the decay, $`t=\ell /\gamma \beta c=(\ell /c)\times (m_D/p_D)`$, where $`p_D`$ and $`m_D`$ are the momentum and rest mass of the charm particle respectively. The typical proper time resolution is about 40–60 fs for E791 and FOCUS and is smaller, $`\sim `$20 fs, for SELEX due to their much larger average $`D`$ momentum. To eliminate background, charm candidates are selected that have a large separation between the production and decay vertices, typically by many $`\sigma _{\ell }`$, i.e. $`\ell >N\sigma _{\ell }`$. This selection drastically reduces the acceptance of candidates with short lifetimes and the acceptance as a function of proper time is rapidly varying at short proper times. In order to reduce the systematic uncertainty that would be associated with having to know this acceptance function accurately, one uses the reduced proper time, $`t^{\prime }=t-(N\sigma _{\ell }/c)\times (m_D/p_D)`$. The acceptance as a function of $`t^{\prime }`$ is quite flat and therefore only small acceptance corrections are necessary. The effect of using the reduced proper time is to start the clock at a different point for each charm candidate event, determined by $`\sigma _{\ell }`$. One assumes, and can check, that there is no drastic bias in $`\sigma _{\ell }`$ that could affect the $`t^{\prime }`$ distribution from following a pure exponential decay. Any bias would have to be correctly simulated in the Monte Carlo.
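For orientation (the momentum and decay length in this example are invented, merely representative of fixed target kinematics): a $`D^0`$ with $`m_D=1.865`$ GeV/c<sup>2</sup> and $`p_D=70`$ GeV/c that travels $`\ell =1`$ cm has

$$t=\frac{\ell }{c}\times \frac{m_D}{p_D}=\frac{1\mathrm{cm}}{3\times 10^{10}\mathrm{cm}\mathrm{s}^{-1}}\times \frac{1.865}{70}\approx 0.89\mathrm{ps},$$

i.e. roughly two $`D^0`$ lifetimes, so such decays are cleanly separated from the production vertex.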
Even with the relatively small boost ($`\beta \gamma \sim 1.7`$) for charm mesons produced in an $`e^+e^-`$ collider running at the $`\mathrm{\Upsilon }(4S)`$, data from CLEO-II.5 can be used to measure lifetimes. This is possible due to a newly installed silicon vertex detector, which enabled CLEO to obtain a resolution on the decay vertex of 80–100 $`\mu `$m in the D flight direction in the $`xy`$ plane. This corresponds to relatively poor proper time resolutions of about 140–200 fs, but is however sufficient to competitively measure the lifetimes of the charm mesons as these are longer lived than the charm baryons. Due to the detector and magnetic field arrangement of CLEO, the decay length and momentum of the charm meson is measured in the $`xy`$ plane (which is transverse to the beam direction). The inherently smaller backgrounds in $`e^+e^-`$ collisions allow selection of charm signals without any vertex detachment selection criteria. This means that the absolute proper time $`t=(\ell ^{xy}/c)\times (m_D/p_D^{xy})`$ can be used, thus eliminating one contribution to the acceptance uncertainty. However, the relatively large proper time resolution requires good knowledge of this resolution including non-Gaussian tails which could bias the fitted lifetime. Although the new silicon tracker in CLEO-II.5 has enabled them to measure lifetimes to a precision rivaling the fixed target-dominated world averages, the next generation fixed target experiment FOCUS will be overwhelming, with a huge sample of fully reconstructed charm decays.
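The quoted proper time resolution follows directly from the vertex resolution: taking $`\sigma _{\ell }\approx 100`$ $`\mu `$m and a representative $`p_D^{xy}\approx 3.2`$ GeV/c (i.e. $`\beta \gamma \approx 1.7`$ for $`m_D\approx 1.87`$ GeV/c<sup>2</sup>; these are rough numbers of ours, used only to check the scale),

$$\sigma _t\approx \frac{\sigma _{\ell }}{c}\times \frac{m_D}{p_D^{xy}}=\frac{10^{-2}\mathrm{cm}}{3\times 10^{10}\mathrm{cm}\mathrm{s}^{-1}}\times \frac{1.87}{3.2}\approx 190\mathrm{fs},$$

consistent with the 140–200 fs range quoted above.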
The lifetimes are usually extracted using a maximum likelihood fit. Either a binned (proper time) likelihood or an unbinned (candidate-by-candidate) likelihood is used. For the binned likelihood, events are taken from the mass peak region with events from mass sidebands giving an estimate of the background lifetime distribution. For the unbinned likelihood, the mass as well as the proper time for each charm candidate is used where candidates from a wide mass region are selected. As well as fitting for the lifetime, the fraction of background is also usually varied in the fits. The details of each fit are different for each lifetime measurement.
### 2.2 Measurements of Charm Lifetimes
The world average lifetimes for the weakly decaying charm particles are dominated by measurements from Fermilab E687 published in 1993-1995. These are beginning to be superseded by updates this year to the $`D`$ meson lifetimes as well as to the $`\mathrm{\Lambda }_c^+`$ lifetime.
The CLEO collaboration has published their measurements for the lifetimes of the $`D^+`$, $`D^0`$ and $`D_s^+`$ . The modes used were $`D^0K^{}\pi ^+`$, $`K^{}\pi ^+\pi ^0`$, $`K^{}\pi ^+\pi ^{}\pi ^+`$, $`D^+K^{}\pi ^+\pi ^+`$, and $`D_s^+\varphi \pi ^+`$ with $`\varphi K^+K^{}`$. Besides the usual vertexing requirements, to additionally suppress backgrounds they required that the $`D^0`$ and $`D^+`$ come from $`D^+`$ decays to $`D^0\pi ^+`$ and $`D^+\pi ^0`$ respectively. The momentum of the $`\pi ^0`$ in the decay $`D^0K^{}\pi ^+\pi ^0`$ is required to be $`>`$ 100 MeV/c and the $`D^+`$ and $`D_s^+`$ mesons are required to have momenta larger than 2.5 GeV/c. A seven parameter fit is used to extract the lifetime for each mode before any averaging is done. Three proper time resolutions are used in the fit, two of them to model underestimates of the mismeasurement errors. Two backgrounds are fitted, one with zero lifetime and another component with a finite lifetime. An unbinned likelihood is used but with the probability associated with the candidate mass determined in a separate (mass) fit. The CLEO measurements are shown in table 2, and the figures are available in their publication .
E791 is a hadroproduction experiment that took data in 1990โ1991 at Fermilab and new measurements using these data have recently been published for the lifetimes of the $`D_s^+`$ and the $`D^0`$ . Figures of the signals and lifetime fits are available in these publications. For the $`D_s^+\varphi \pi ^+`$ measurement, due to the requirement of a resonance $`\varphi K^+K^{}`$, only a loose ฤerenkov particle ID requirement is made on the kaon with the same sign as the pion. However any possible background from $`D^+K^{}\pi ^+\pi ^+`$ where one of the pions is misidentified as a kaon is eliminated by removing candidates that have a $`K^{}\pi ^+\pi ^+`$ mass within $`\pm `$30 MeV/c<sup>2</sup> of the $`D^+`$ mass. This selection requires that the background mass distribution be modeled with a piecewise linear function with a discontinuity fixed at 1.95 GeV/c<sup>2</sup>. An unbinned likelihood fit is performed over the whole mass range that extracts the $`D^+`$ lifetime as well as the $`D_s^+`$ lifetime for this mode. In order to reduce any uncertainty in the acceptance, the acceptance is not obtained using only a Monte Carlo simulation, instead $`D^+K^{}\pi ^+\pi ^+`$ data are used together with the ratio of Monte Carlo $`\varphi \pi `$ and $`K\pi \pi `$ acceptances. The acceptance for $`K\pi \pi `$ is obtained by dividing the data distribution by a pure exponential with the world average $`D^+`$ lifetime, $`ฯต_{data}(K\pi \pi )`$. The acceptance for $`\varphi \pi `$ is then given by $`ฯต_{data}(K\pi \pi )\times \left(ฯต_{MC}(\varphi \pi )/ฯต_{MC}(K\pi \pi )\right)`$ for each $`t^{}`$ bin. The lifetime results are shown in table 2. Results are also shown in the table for the $`D^0`$ lifetime measured using the $`K^{}\pi ^+`$ decay mode. This measurement was performed together with a lifetime measurement in the $`K^+K^{}`$ decay mode . Here, a different technique was used to extract the lifetime since the $`K^+K^{}`$ sidebands do not accurately reflect the background under the $`D^0`$ mass peak. Events were split into reduced proper time bins and the number of $`D^0`$ signal events was found from a mass fit using a Gaussian with mean and sigma fixed to that obtained in a fit to all events. A fit to these signal events as a function of $`t^{}`$ using a single exponential after particle identification weighting and acceptance corrections gives the extracted lifetime.
The Fermilab FOCUS photoproduction experiment took data in 1996โ1997 and is the follow-on experiment to E687 with significant improvements to the data quality as well as having collected charm samples 15โ20 times larger than the E687 sample . A preliminary measurement of the $`D_s^+`$ lifetime has been made using 50% of the data sample in the decay mode $`D_s^+\varphi \pi ^+`$ . The signal and selection regions are shown in figure 2. As well as a cut on the $`K^+K^{}`$ mass to select a $`\varphi `$, a cut is also made on the helicity angle of the decay. Since the $`D_s^+`$ and $`\pi ^+`$ each have spin $`0`$ and the $`\varphi `$ has spin $`1`$, to conserve angular momentum the $`\varphi `$ and $`\pi ^+`$ must be in an orbital angular momentum $`L=1`$ state. Hence the distribution of the angle between the $`\pi ^+`$ and one of the kaons in the $`\varphi `$ centre-of-mass should vary as $`(Y_{L=1}^{m=0})^2cos^2\phi `$ for signal candidates whereas the yield of background candidates are expected to be independent of $`\phi `$. This allows a selection for candidates with $`|\phi |>0.3`$ to increase signal-to-noise. The result of the lifetime fit is shown in figure 3. The preliminary result on the measured lifetime using 50% of the FOCUS data is given in table 2. Also shown in table 2 is the preliminary measurement of the lifetime of the $`\mathrm{\Lambda }_c^+`$ using 80% of the FOCUS data. The $`\mathrm{\Lambda }_c^+`$ is reconstructed using the $`pK^{}\pi ^+`$ decay mode and the signal and results of the lifetime fit are shown in figure 4. For both measurements a binned likelihood is used, taking events from the sidebands as the model for the lifetime distribution for background events under the charm mass peak. The acceptance is taken from Monte Carlo simulations. The acceptance correction is small, being larger for $`D_s^+`$ than for $`\mathrm{\Lambda }_c^+`$.
SELEX is another Fermilab experiment that collected data in 1996-1997. The data were taken using a 600 GeV $`\mathrm{\Sigma }^{}`$ beam and a $`\pi ^{}`$ beam. The experiment was designed for good acceptance in the forward region and to produce larger fractions of charm-strange baryons. Shown in table 2 is a preliminary measurement of the $`\mathrm{\Lambda }_c^+`$ lifetime using 100% of the SELEX data in the $`\mathrm{\Lambda }_c^+pK^{}\pi ^+`$ mode . The acceptance correction was obtained using $`D^0`$ data and checked with $`K_s^0`$ decays which occur near the interaction region. The signal and fit are published elsewhere .
The measurements and new world averages are shown in figure 5. The most significant result of these new measurements is that $`\tau (D_s^+)`$ is conclusively larger than $`\tau (D^0)`$. The world average is now $`\tau (D_s^+)/\tau (D^0)=1.211\pm 0.017`$ using the FOCUS measurement with statistical error only, this can be compared to the earlier PDG98 value of $`1.125\pm 0.042`$.
## 3 Lifetimes and Theory
### 3.1 $`๐ซ^\mathrm{๐}`$ and $`๐ซ_๐^\mathbf{+}`$ Lifetimes
The $`D_s^+`$ lifetime is now conclusively measured to be above the $`D^0`$ lifetime, $`\tau (D_s^+)/\tau (D^0)=1.211\pm 0.017`$. Bigi and Uraltsev have used the QCD-based operator product expansion method to analyze this lifetime difference and have concluded that $`\tau (D_s^+)/\tau (D^0)=1.00`$$`1.07`$ is possible without W-annihilation or W-exchange contributions . The $`D_s^+`$ lifetime is reduced by $`3`$% due to $`D_s^+\mathrm{}^+\nu _{\mathrm{}}`$; Pauli interference in Cabibbo-suppressed $`D_s^+`$ decays increase the $`D_s^+`$ lifetime by $`4`$% ; and $`SU(3)_f`$ breaking in the โFermi motionโ of the $`c`$ quark is expected to increase the $`D_s^+`$ lifetime by $`4`$%, (one can view the quarks in the $`D_s^+`$ as more confined since $`f_{D_s}`$ and hence the wavefunction at the original is larger for $`D_s^+`$ than for $`D^0`$.) Any difference in the measured $`D_s^+`$ and $`D^0`$ lifetimes larger than 7% must be attributed to sizable W-annihilation or W-exchange (WA/WX) effects.
With their estimation of the WA/WX contribution, Bigi and Uraltsev conclude that the ratio $`\tau (D_s^+)/\tau (D^0)=1.00`$$`1.27`$, though $`0.8`$$`1.27`$ is possible since the sign could change when one allows for interference between the WA/WX and the spectator contributions .
In a recent paper Cheng and Yang have also examined the $`D_s^+`$ and $`D^0`$ lifetime difference using the QCD-based operator product expansion technique together with the QCD sum rule approach to estimate the hadronic matrix elements . They obtained $`\tau (D_s^+)/\tau (D^0)1.08\pm 0.04`$ including their estimation of WX/WA contributions to both $`D^0`$ and $`D_s^+`$ decays. For the size of the WX/WA they calculate $`\mathrm{\Gamma }_{WX}(D^0)/\mathrm{\Gamma }_{NL}^{Spect}=0.10\pm 0.06`$ and $`\mathrm{\Gamma }_{WA}(D_s^+)/\mathrm{\Gamma }_{NL}^{Spect}=0.04\pm 0.03`$, where $`\mathrm{\Gamma }_{NL}^{Spect}`$ is the spectator decay contribution to the non-leptonic width.
### 3.2 Phenomenological Extraction of the W-exchange/W-annihilation in Inclusive $`๐ซ^\mathrm{๐}`$ and $`๐ซ_๐^\mathbf{+}`$ Decays
With the currently available large charm samples and more precise measurements of rare branching fractions, one may be able to do more phenomenological extractions from the data. As an illustration, a phenomenological extraction of the strength of the W-exchange/W-annihilation contribution to inclusive $`D^0`$ and $`D_s^+`$ decays can be done using some simple assumptions. The extraction is made possible by a now fairly precise measured value for
$$r_{DCSD}=\frac{\mathrm{\Gamma }(D^+K^+\pi ^{}\pi ^+)}{\mathrm{\Gamma }(D^+K^{}\pi ^+\pi ^+)}=(6.8\pm 0.9)\times 10^3$$
(3)
This is an average of the value obtained by the PDG and a preliminary FOCUS measurement of $`r_{DCSD}=(6.5\pm 1.1)\times 10^3`$ which includes statistical errors only .
I make the following assumptions:
1. $`\mathrm{\Gamma }(D^+K^+\pi ^{}\pi ^+)tan^4\theta _c\mathrm{\Gamma }_{NL}^{Spect}+\mathrm{\Gamma }_{WA}^{D^+}`$;
2. $`\mathrm{\Gamma }(D^+K^{}\pi ^+\pi ^+)\mathrm{\Gamma }_{NL}^{PI}`$;
3. $`\mathrm{\Gamma }_{WA}^{D^+}<<tan^4\theta _c\mathrm{\Gamma }_{NL}^{Spect}`$; and
4. No interference between the WA/WX contribution and the spectator contribution.
Assumptions 1 and 2 make a possibly dubious relationship between an exclusive decay rate and a part of the inclusive rate. This could be approximately accurate if the effects of resonances and final state interactions in these decays are small enough to allow this assumption. With assumption 3, one can set $`\mathrm{\Gamma }_{WA}^{D^+}=0`$ in assumption 1. Finally assumption 4 gives $`\mathrm{\Gamma }_{tot}(D^0)=\mathrm{\Gamma }_{NL}^{Spect}+\mathrm{\Gamma }_{SL}+\mathrm{\Gamma }_{WX}`$ and $`\mathrm{\Gamma }_{tot}(D^+)=\mathrm{\Gamma }_{PI}+\mathrm{\Gamma }_{SL}`$.
Using $`r_{DCSD}=(6.8\pm 0.9)\times 10^3`$ together with $`\tau (D^+)/\tau (D^0)=2.55\pm 0.04`$ and $`\mathrm{\Gamma }_{SL}/\mathrm{\Gamma }_{tot}=0.135\pm 0.006`$ obtained from the measured value of $`BR_{SL}(D^0X\mathrm{}\nu _{\mathrm{}})`$, the value for the strength of the W-exchange contribution can be extracted:
$$\frac{\mathrm{\Gamma }_{WX}}{\mathrm{\Gamma }_{NL}^{Spect}}=0.29\pm 0.17$$
(4)
where the error is just from the measured quantities and does not of course include uncertainties implicit in the assumptions of this model. The error is dominated by the error in $`r_{DCSD}`$.
In addition, using $`\tau (D_s^+)/\tau (D^0)=1.211\pm 0.017`$, $`\mathrm{\Gamma }_{NL}^{Spect}(D_s^+)=\alpha \mathrm{\Gamma }_{NL}^{Spect}`$ and together with $`\mathrm{\Gamma }_{WA}(D_s^+)=\beta \mathrm{\Gamma }_{WX}`$, the relative strength of the W-annihilation in $`D_s^+`$ decays to W-exchange in $`D^0`$ decays can be extracted to be $`\beta =0.33`$ and thus:
$$\frac{\mathrm{\Gamma }_{WA}(D_s^+)}{\mathrm{\Gamma }_{NL}^{Spect}}=0.10$$
(5)
The value of $`\alpha `$ has been taken to be $`1/1.07`$ to account for the differences between the $`D_s^+`$ and $`D^0`$ non-spectator decay contributions mentioned in the previous section.
This illustration only serves to give a somewhat more quantitative measure of the unexpectedly large size of the W-exchange/W-annihilation contributions. The phenomenologically extracted values of these are 2โ3 times larger than those calculated by Cheng and Yang . A more detailed model treatment is limited by the large uncertainties on some of the measured quantities used.<sup>3</sup><sup>3</sup>3If one sets $`\mathrm{\Gamma }_{WA}^{D^+}=tan^4\theta _c\times \mathrm{\Gamma }_{WX}^{D^0}`$ we would get a non-sensible result of $`\mathrm{\Gamma }_{WA}^{D^+}=\mathrm{\Gamma }_{NL}^{Spect}`$. A more reasonable assumption may be to set $`\mathrm{\Gamma }_{WA}^{D^+}=tan^4\theta _c\times \mathrm{\Gamma }_{WA}^{D_s^+}`$. However other problems arise here too, either because the assumptions are too simplistic or the measured quantities are still not yet measured precisely enough for a more sophisticated model. Note that we expect $`\mathrm{\Gamma }_{WX}^{D^0}`$ to be different from $`\mathrm{\Gamma }_{WA}^{D_s^+}`$ since the former is colour-suppressed whereas the latter is colour-allowed, but also since this in itself would predict the wrong sign for this difference, there must be more complicated processes, for example in the gluon exchanges in the two cases.
## 4 Conclusions
A number of new charm particle lifetime measurements have been published or were shown at conferences this year. The most significant update is that the $`D_s^+`$ lifetime is now conclusively measured to be above the $`D^0`$ lifetime. The ratio $`\tau (D_s^+)/\tau (D^0)=1.191\pm 0.024`$ using published measurements. Using the FOCUS preliminary measurement gives $`\tau (D_s^+)/\tau (D^0)=1.211\pm 0.017`$. This lifetime ratio is now large enough for one to conclude that the W-exchange contribution in $`D^0`$ decays is large, estimated to be about 30% of the non-leptonic spectator contribution using a simple phenomenological model. The W-exchange contribution appears to be at the limit of or larger than the values calculated using the QCD-based operator production expansion techniques. More precise charm data, for example in semileptonic decays, is needed to extract the size of the matrix elements used in these techniques to control the weight of WA/WX in $`D`$ decays . Note that this is in contrast to studies of W-exchange contributions in exclusive $`D^0`$ decays which is always complicated by final-state interactions, e.g. $`D^0\varphi K_s^0`$. If the W-exchange contribution is as large as the lifetime measurements suggest, then it must appear somewhere in the exclusive decays. However, conclusive evidence of W-exchange contributions in exclusive $`D^0`$ decays is still missing. Where are they?
We can look forward to more precise charm particle lifetimes from the Fermilab FOCUS and SELEX experiments, for both charm baryons and mesons. This should ensure continued theoretical interest in the physics of charm lifetimes. |
no-problem/9912/hep-ph9912526.html | ar5iv | text | # BUHEP-99-30 hep-ph/9912526 Technicolor Signatures at the High Energy Muon Collider
## 1. Introduction
It is a real pleasure to talk at a workshop in which the theorists are downโtoโearth participants and the machine physicists are wildโeyed dreamers. Here is an e-mail exchange between between my session organizer and me:
* Joe โ
I just realized that the workshop title refers to muon colliders at 10-100 TeV (!). I donโt have a hell of a lot in the way of TC signals at those energies. How seriously should I take that energy range as a charge??
Ken
* You can completely ignore the 10 TeV stuff - that is for the accelerator people (i.e. what is the highest energy muon collider one could ever have any hope of building).
โJoe
Accordingly, I prepared a talk that discusses TC signatures at 1โ4 TeV: The technivector mesons $`\rho _T`$ and $`\omega _T`$ of the minimal, oneโdoublet TC model tc ; the $`Z^{}`$ and higherโdimensional electroweak singlet technifermions of topcolorโassisted technicolor tctwohill ; and the electroweakโ$`SU(2)`$ singlet fermions of the top seesaw model seesaw .
In the course of this, however, I recalled an old idea that would give the HEMC physics to do all the way from 1 TeV to 100 TeV. This has to do with the fact that walking technicolor wtc , an essential ingredient of any viable TC model, implies that the spectrum of technivector mesons cannot be QCDโlike tasi ; ichep94 ; eduardo . It must extend in some sense to 100 TeV and beyond. This idea is so intriguing that I will emphasize it here. I hope someone will be able to decide whether it makes sense.
The rest of this paper is organized as follows: Section 2 presents a summary of the dynamical approach to electroweak and flavor symmetry breaking: technicolor tc , extended technicolor (ETC) etcsd ; etceekl , and all that. This scenarioโs signatures at the HEMC are discussed in Section 3, with emphasis on the technivector spectrum in walking technicolor models.
## 2. Overview of Technicolor
Technicolorโa strong interaction of fermions and gauge bosons at the scale $`\mathrm{\Lambda }_{TC}1\mathrm{TeV}`$โinduces the breakdown of electroweak symmetry to electromagnetism without elementary scalar bosons tc . Technicolor has a strong precedent in QCD. There, the chiral symmetry of massless quarks is spontaneously broken by strong QCD interactions, resulting in the appearance of massless Goldstone bosons, $`\pi `$, $`K`$, $`\eta `$<sup>1</sup><sup>1</sup>1The hard masses of quarks explicitly break chiral symmetry and give mass to $`\pi `$, $`K`$, $`\eta `$, which are then referred to as pseudo-Goldstone bosons. In fact, if there were no Higgs bosons, this chiral symmetry breaking would itself cause the breakdown of $`SU(2)U(1)`$ to electromagnetism. Furthermore, the $`W`$ and $`Z`$ masses would be given by $`M_W^2=M_Z^2\mathrm{cos}^2\theta _W=\frac{1}{8}g^2N_Ff_\pi ^2`$, where $`g`$ is the weak $`SU(2)`$ coupling, and $`N_F`$ the number of massless quark flavors. Alas, the pion decay constant $`f_\pi `$ is only $`93\mathrm{MeV}`$ and the $`W`$ and $`Z`$ three orders of magnitude too light.
In its simplest form, technicolor is a scaled up version of QCD, with massless technifermions whose chiral symmetry is spontaneously broken at $`\mathrm{\Lambda }_{TC}`$. If left and right-handed technifermions are assigned to weak $`SU(2)`$ doublets and singlets, respectively, then $`M_W=M_Z\mathrm{cos}\theta _W=\frac{1}{2}gF_\pi `$, where $`F_\pi =246\mathrm{GeV}`$ is the weak technipion decay constant. <sup>2</sup><sup>2</sup>2In the minimal model with one doublet $`(U,D)`$ of technifermions, there are just three technipions. They are the linear combinations of massless Goldstone bosons that become, via the Higgs mechanism, the longitudinal components $`W_L^\pm `$ and $`Z_L^0`$ of the weak gauge bosons. In non-minimal technicolor, the technipions include the longitudinal weak bosons as well as additional Goldstone bosons associated with spontaneous technifermion chiral symmetry breaking. The latter must and do acquire massโfrom the extended technicolor interactions discussed below.
In the standard model and its extensions, the masses of quarks and leptons are produced by their Yukawa couplings to the Higgs bosonsโcouplings of arbitrary magnitude and phase that are put in by hand. This option is not available in technicolor because there are no elementary scalars. Instead, this explicit breaking of quark and lepton chiral symmetries must arise from gauge interactions alone. The most economical approach employs extended technicolor etcsd ; etceekl . In its proper formulation etceekl , the ETC gauge group contains technicolor, color, and flavor as subgroups and there are very stringent restrictions on the representations to which technifermions, quarks, and leptons belong: Specifically, they must be combined into the same few large representations of ETC. Otherwise, unbroken chiral symmetries lead to axionโlike particles. Quark and lepton hard masses are generated by their coupling (with strength $`g_{ETC}`$) to technifermions via ETC gauge bosons of generic mass $`M_{ETC}`$:
$$m_q(M_{ETC})m_{\mathrm{}}(M_{ETC})\frac{g_{ETC}^2}{M_{ETC}^2}\overline{T}T_{ETC},$$
(1)
where $`\overline{T}T_{ETC}`$ and $`m_{q,\mathrm{}}(M_{ETC})`$ are, respectively, the technifermion condensate and quark and lepton masses renormalized at the scale $`M_{ETC}`$.
Technicolor is an asymptotically free gauge interaction. If it is like QCD, with its running coupling $`\alpha _{TC}`$ rapidly becoming small above its characteristic scale $`\mathrm{\Lambda }_{TC}1\mathrm{TeV}`$, then $`\overline{T}T_{ETC}\overline{T}T_{TC}\mathrm{\Lambda }_{TC}^3`$. To obtain quark masses of a few GeV thus requires $`M_{ETC}/g_{ETC}<30\mathrm{TeV}`$. This is excluded: Extended technicolor boson exchanges also generate four-quark interactions which, generically, include $`|\mathrm{\Delta }S|=2`$ and $`|\mathrm{\Delta }B|=2`$ operators. For these not to conflict with $`K^0`$-$`\overline{K}^0`$ and $`B_d^0`$-$`\overline{B}_d^0`$ mixing measurements, $`M_{ETC}/g_{ETC}`$ must exceed several hundred TeV etceekl . This implies quark and lepton masses no larger than a few MeV, and technipion masses no more than a few GeV.
Because of this conflict between constraints on flavor-changing neutral currents and the magnitude of ETC-generated quark, lepton and technipion masses, classical QCDโlike technicolor was superseded long ago by โwalkingโ technicolor wtc . Here, the strong technicolor coupling $`\alpha _{TC}`$ runs very slowly, or walks, for a large range of momenta, possibly all the way up to the ETC scale of several hundred TeV. The slowly-running coupling enhances $`\overline{T}T_{ETC}/\overline{T}T_{TC}`$ by almost a factor of $`M_{ETC}/\mathrm{\Lambda }_{TC}`$. This, in turn, allows quark and lepton masses as large as a few GeV and $`M_{\pi _T}>100\mathrm{GeV}`$ to be generated from ETC interactions at $`M_{ETC}=๐ช(100\mathrm{TeV})`$.
In almost all respects, walking technicolor models are very different from QCD with a few fundamental $`SU(3)`$ representations. One example is that integrals of weak-current spectral functions and their moments converge much more slowly than they do in QCD. The consequence of this for the HEMC will be discussed in Section 3. Meanwhile, this and other calculational tools based on naive extrapolation from QCD and on large-$`N_{TC}`$ arguments are suspect. It is not yet possible to predict with confidence the influence of technicolor degrees of freedom on precisely-measured electroweak quantitiesโthe $`S,T,U`$ parameters to name a frequently discussed example pettests .
Another major development in technicolor was motivated by the discovery of the top quark at Fermilab toprefs . Theorists have concluded that ETC models cannot explain the top quarkโs large mass without running afoul of either experimental constraints from the $`\rho `$ parameter and the $`Z\overline{b}b`$ decay rate zbbth โthe ETC mass must be about 1 TeV to produce $`m_t=175\mathrm{GeV}`$; see Eq. (1)โor of cherished notions of naturalnessโ$`M_{ETC}`$ may be higher, but the coupling $`g_{ETC}`$ then must be fine-tuned near to a critical value. This state of affairs led to the proposal of โtopcolor-assisted technicolorโ (TC2) tctwohill .
In TC2, as in many top-condensate models of electroweak symmetry breaking topcondref , almost all of the top quark mass arises from a new strong โtopcolorโ interaction topcref . To maintain electroweak symmetry between (left-handed) top and bottom quarks and yet not generate $`m_bm_t`$, the topcolor gauge group under which $`(t,b)`$ transform is usually taken to be $`SU(3)U(1)`$. The $`U(1)`$ provides the difference that causes only top quarks to condense. Then, in order that topcolor interactions be naturalโi.e., that their energy scale not be far above $`m_t`$โwithout introducing large weak isospin violation, it is necessary that electroweak symmetry breaking is still due mostly to technicolor interactions tctwohill .
Extended technicolor interactions are still needed in TC2 models to generate the masses of light quarks and the bottom quark, to contribute a few GeV to $`m_t`$<sup>3</sup><sup>3</sup>3Massless Goldstone โtop-pionsโ arise from top-quark condensation. This ETC contribution to $`m_t`$ is needed to give them a mass in the range of 150โ250 GeV. and to give mass to technipions. The scale of ETC interactions still must be hundreds of TeV to suppress flavor-changing neutral currents and, so, the technicolor coupling still must walk. In TC2 there is no need for large technifermion isospin splitting associated with the top-bottom mass difference. Thus, for example, $`\omega _T`$ and $`\rho _T`$ partners are nearly degenerate $`\overline{U}U\pm \overline{D}D`$ states.
Another, more recent, variant of topcolor models is the โtop seesawโ mechanism seesaw . Its motivation is to realize the original topโcondensate idea of the Higgs boson as a fermionโantifermion bound state. This failed for the top quark because it turned out to be too light! In top seesaw models, an electroweak singlet fermion $`F`$ acquires a dynamical mass of several TeV. Through mixing of $`F`$ with the top quark, it gives the latter a much smaller mass (the seesaw) and the scalar $`\overline{F}F`$ bound state acquires a component with an electroweak symmetry breaking vacuum expectation value.
This completes our brief summary of technicolor. We turn now to the technicolor signatures for which a high energy muon collider is wellโsuited.
## 3. Technicolor Signatures at the HEMC
The principal signals of technicolor are discussed in a number of places lowsigs . Most of them are accessible at low energiesโat the Tevatron in Run II, certainly at the LHC, and, possibly, even at LEP. In the minimal technicolor model, with just one technifermion doublet, the only prominent signals in a TeVโscale collider are modest enhancements in longitudinally-polarized weak boson production. These are the $`s`$โchannel colorโsinglet technirho resonances near 1.5โ2 TeV: $`\rho _T^0W_L^+W_L^{}`$ and $`\rho _T^\pm W_L^\pm Z_L^0`$. The $`๐ช(\alpha ^2)`$ cross sections of these processes are quite small at such masses. This and the difficulty of reconstructing weak-boson pairs with reasonable efficiency make observing these enhancements a challenge. These states would be more easily seen in a lepton colliderโif one can be built with $`\sqrt{s}=1.5`$$`2\mathrm{TeV}`$ at an affordable cost. Nonminimal technicolor models are much more accessible in a hadron collider because they have a rich spectrum of lower mass technirho vector mesons and technipion states into which they may decay.
If technicolor is the basis for electroweak symmetry breaking, it will have been discovered once the LHC has acquired and analyzed $`10\mathrm{fb}^1`$ of data. The question we address here is what the HEMC can do to add to our understanding of this new dynamics.
### 3.1 The Technivector Spectrum of Walking Technicolor
The slow decrease with energy of the coupling $`\alpha _{TC}`$ in walking technicolor means that the $`\mu ^+\mu ^{}`$ cross section approaches asymptotia only near the extended technicolor scale, probably even above the reach of the HEMC. This is most directly seen by considering the integrals in Weinbergโs spectral function sum rules for the weakโisospin vector and axial vector currents sfsr . These sum rules are
$`{\displaystyle _0^{\mathrm{}}}๐s\left[\rho _V(s)\rho _A(s)\right]=F_\pi ^2`$
$`{\displaystyle _0^{\mathrm{}}}๐ss\left[\rho _V(s)\rho _A(s)\right]=0,`$ (2)
where $`F_\pi =246\mathrm{GeV}`$. Here, the spectral functions $`\rho _V`$ and $`\rho _A`$ are analogs for the weakโisospin currents of the ratio of cross sections, $`R(s)=\sigma (e^+e^{}\mathrm{hadrons})/\sigma (e^+e^{}\mu ^+\mu ^+)`$. In QCD, the sum rules corresponding to Eq. (3.1 The Technivector Spectrum of Walking Technicolor) are saturated by the lowest lying spinโone resonances, $`\rho `$ and $`A_1`$, and the sum rules converge rapidly above the $`A_1`$ mass. Similarly, in technicolor without a walking coupling, the sum rules would be saturated by the lowest $`\rho _T`$ and $`A_{1T}`$ and the difference $`\rho _V\rho _A1/s^3`$ for $`s>M_{A_{1T}}^21\mathrm{TeV}^2`$. In walking technicolor, the slow running of $`\alpha _{TC}(s)`$ implies that $`\rho _V\rho _A1/s^2`$ below $`sM_{ETC}^2`$ and $`1/s^3`$ above. Thus, the spectral functions cannot be saturated by a single pair of lowโlying resonances. Either there must be a tower of resonances above $`\rho _T`$ and $`A_{1T}`$, all of which contribute significantly to the spectral integrals (see Ref. tasi ; ichep94 ; also Ref. eduardo for an explicit attempt to realize this), or the spectral functions are smooth but anomalously slowly decreasing up to $`M_{ETC}`$. The same alternative applies to the $`\mu ^+\mu ^{}`$ cross section. Moreover, the isoscalar state $`\omega _T`$ and its excitations appear there. Thus, exploration of the 1โ100 TeV region of $`\mu ^+\mu ^{}`$ annihilation is bound to reveal crucial information on the dynamics of a walking gauge theory, dynamics on which we theorists can only speculate.
In the minimal oneโdoublet model of technicolor, it has always been assumed that the lowest lying $`\rho _T`$, $`\omega _T`$, and $`A_{1T}`$ decay mainly into two and three longitudinallyโpolarized weak bosons, $`W_L^\pm `$ and $`Z_L^0`$. In the minimal model, however, $`M_{\rho _T}M_{A_{1T}}=1`$$`2\mathrm{TeV}`$, and this is so far above $`2M_W`$ that it is possible that decay modes with more than two or three weak bosons are important if not dominant. <sup>4</sup><sup>4</sup>4The QCD $`2^3S_1`$ state $`\rho ^{}(1700)`$ decays predominantly to four, not two pions, presumably because the twoโpion mode is suppressed by an exponential form factor and/or a node in the decay amplitude. Thus, in the minimal walking technicolor model, there may be a tower of vector and axial vector mesons in the $`s`$โchannel of $`\mu ^+\mu ^{}`$ annihilation which decay to many $`W`$ and $`Z`$ bosons. It is an open question how narrow and discernible these resonances will be.
In nonminimal models, the spectrum of technihadrons is quite rich and the scale of their masses is lower (roughly as the square root of the number of technifermion doublets). There are technipions $`\pi _T`$ as well as weak bosons for the $`\rho _T`$, $`\omega _T`$, and $`A_{1T}`$ to decay into. These $`\pi _T`$ may be color singlets and, if colored technifermions exist, octets and triplets (โleptoquarksโ). Technipions are expected to have masses in the range 100โ500 GeV and to decay into the heaviest fermion pairs allowed. The large value of $`\overline{T}T_{ETC}/\overline{T}T_{TC}`$ in walking technicolor significantly enhances technipion masses. Thus, for example, $`\rho _T\pi _T\pi _T`$ decay channels may be closed for the lowestโlying state. Instead, $`\rho _TW_LW_L`$, $`W_L\pi _T`$, and $`\gamma \pi _T`$ lowsigs . The excited states should be able to decay into pairs of technipions. The $`\rho _T`$, $`\omega _T`$, and $`A_{1T}`$ that lie above multiโ$`\pi _T`$ threshold are likely to be wider than their counterparts in the minimal model. Still, the structure of $`\mu ^+\mu ^{}`$ annihilation up to 100 TeV will provide valuable insight to walking gauge dynamics.
### 3.2 TopcolorโTechnicolor Signals
As I said above, topcolorโassisted technicolor generally employs an extra โhyperchargeโ $`U(1)`$ to help induce a large condensate for the top, but not the bottom quark. This additional $`U(1)`$ is broken, leading to a $`Z^{}`$ boson which is strongly coupled to at least the third generation. In the models of Ref. tctwoklee , it is strongly coupled to all fermions. Some of the lower energy phenomenology of this $`Z^{}`$ was studied in Refs. bonini ; rador . Its nominal mass, in the range 1โ4 TeV, and potentially strong coupling to muons make it a target of opportunity for the HEMC. <sup>5</sup><sup>5</sup>5Top seesaw models also have an extra $`U(1)`$ gauge symmetry, broken spontaneously. There, the $`Z^{}`$ boson mass is expected to be roughly 5 TeV. Unfortunately, its strong couplings and many decay channels to ordinary fermions and technifermions may also make the $`Z^{}`$ so broad that it is difficult discover and study in any collider.
An intriguing feature of this $`Z^{}`$ is that it must acquire its mass from condensation of a technifermion $`\psi `$ tctwoklee . The $`Z^{}`$ mass of several TeV implies that the $`\psi `$โfermionโs mass is 1โ2 TeV. Thus, $`\psi `$ must transform according to a higherโthanโfundamental representation of the technicolor gauge group. In order that its condensation not break electroweak $`SU(2)U(1)`$, $`\psi `$ must either be a singlet or transform vectorially under this symmetry. The obvious way to access it is via $`Z^{}\overline{\psi }\psi `$ in the $`s`$โchannel of the HEMC. The phenomenology of these higher representation technifermions has not been studied in detail. One crucial question is whether $`\psi `$ is stable. If not, how does it decay? If it is, what are the cosmological consequences?
Finally, there is the $`SU(2)`$ singlet, chargeโ2/3 quark $`F`$ of top seesaw models. This fermion also has a mass of several TeV and may be pair produced via $`\gamma ,Z,Z^{}`$ at the HEMC. It decays by virtue of its mixing with the top quark as $`FtWb`$, a striking signature indeed.
## 4. Conclusions and Acknowledgements
The HEMC technicolor signatures that I have presented here are, quite obviously, at a primitive stage of development. I think all of them deserve further thought because they bear directly on unfamiliar dynamics such as walking technicolor and stronglyโcoupled topcolor. Corresponding uncertainties face the design of the HEMC. Again, the particle theorists and the accelerator theorists are in the same boat. The need to go on to higher energies remains and it always will. This was said very well by an Amherst poet long ago:
> โFaithโ is a fine invention
> When Gentlemen can see โ
>
> But Microscopes are prudent
> In an Emergency.
>
> Emily Dickinson, 1860
I thank the organizers, especially Bruce King and Joe Lykken for inviting me to this stimulating workshop and for the wonderful opportunities to explore Montauk and Block Island. Kathleen Tuohy ran a perfect workshop and I send her my gratitude. I am grateful to my fellow participants in the joint Physics and Detector Working Group. They provided the mental stimulation that led to my contribution. I am also indebted to Sekhar Chivukula for discussions about top seesaw models and for reading this manuscript. This research was supported in part by the Department of Energy under Grant No. DEโFG02โ91ER40676. |
no-problem/9912/cond-mat9912491.html | ar5iv | text | # Finite temperature effects in Coulomb blockade quantum dots and signatures of spectral scrambling
## Abstract
The conductance in Coulomb blockade quantum dots exhibits sharp peaks whose spacings fluctuate with the number of electrons. We derive the temperature-dependence of these fluctuations in the statistical regime and compare with recent experimental results. The scrambling due to Coulomb interactions of the single-particle spectrum with the addition of an electron to the dot is shown to affect the temperature-dependence of the peak spacing fluctuations. Spectral scrambling also leads to saturation in the temperature dependence of the peak-to-peak correlator, in agreement with recent experimental results. The signatures of scrambling are derived using discrete Gaussian processes, which generalize the Gaussian ensembles of random matrices to systems that depend on a discrete parameter โ in this case, the number of electrons in the dot.
A quantum dot is a sub-micron-sized conducting device containing up to several thousand electrons. In closed dots, the coupling between the dot and the leads is weak and the charge on the dot is quantized . The addition of an electron into the dot requires a charging energy of $`E_C=e^2/C`$ (where $`C`$ is the capacitance of the dot). This charging energy can be compensated by varying the gate voltage $`V_g`$, leading to Coulomb blockade oscillations of the conductance versus $`V_g`$. In the quantum regime (i.e., for temperatures below the mean level spacing $`\mathrm{\Delta }`$), conductance occurs by resonant tunneling, and sharp conductance peaks are observed as a function of $`V_g`$.
Dots can be fabricated with little disorder such that the electron dynamics in the dot is ballistic. Larger dots are often characterized by irregular shape, resulting in chaotic classical dynamics of the electrons. Such dots are expected to exhibit universal mesoscopic fluctuations which are the signature of quantum chaos. In particular, the distributions of the conductance peak heights in Coulomb blockade quantum dots at $`T\mathrm{\Delta }`$)) were predicted to be universal, depending only on the underlying space-time symmetries . The measured distributions were found to agree well with theory. The statistics of the peak heights at finite temperatures ($`T\mathrm{\Delta }`$) were also derived recently using random matrix theory (RMT) . The measured distributions become narrower and less asymmetric with increasing temperature, in qualitative agreement with theory, although significant deviations were observed at higher temperatures, presumably due to dephasing.
Another quantity whose statistics was recently studied both experimentally and theoretically is the spacing between successive conductance peaks. In the simple constant interaction (CI) model (where Coulomb interactions are included only as an average charging energy) and for $`T\mathrm{\Delta }`$, a (shifted) Wigner-Dyson peak spacing distribution is expected, but the observed distributions are Gaussian-like . This has been explained as an interaction effect by numerical diagonalization of a small Anderson model with Coulomb interactions . The temperature dependence of the peak spacing statistics was also measured recently . In this paper we use the finite-temperature theory plus RMT to study the peak spacing fluctuations at temperatures $`T\mathrm{\Delta }`$. We find a rapid decrease of the fluctuations above $`T/\mathrm{\Delta }0.5`$, in agreement with the experimental results.
Interaction effects beyond the charging energy were not included in the finite temperature theory. They can be treated in a single-particle framework within a mean-field approximation (e.g., Hartree-Fock). Due to charge rearrangement, we expect the spectrum to change or โscrambleโ upon the addition of an electron into the dot . The effects of scrambling on the statistics can be described by a random matrix model that depends on a discrete parameter: the number of electrons on the dot. The theory of discrete Gaussian processes can then be used to analyze the finite temperature statistics of peak spacings and peak heights for various degrees of scrambling. A rescaled parametric distance controls how fast the spectrum is changing, and will be referred to as the scrambling parameter. It was shown that spectral scrambling can lead to nearly Gaussian peak spacing distributions at low temperatures . In this paper we derive two main signatures of a changing spectrum in the finite temperature statistics: the less rapid decrease of the spacing fluctuations with temperature for $`T/\mathrm{\Delta }0.5`$, and the saturation of the number of correlated peaks at higher temperatures . The first effect has not been experimentally observed while the second has been qualitatively suggested and observed in Ref. . We also derive a simple expression for the scrambling parameter in terms of the dotโs properties.
At $`T\mathrm{\Delta }`$, several resonances contribute to each conductance peak
$$G(T,\stackrel{~}{E}_F)=\frac{e^2}{h}\frac{\pi \overline{\mathrm{\Gamma }}}{4kT}g=\underset{\lambda }{}w_\lambda (T,\stackrel{~}{E}_F)g_\lambda ,$$
(1)
where $`g`$ is the dimensionless conductance expressed as a thermal average over individual level conductances $`g_\lambda =2\overline{\mathrm{\Gamma }}^1\mathrm{\Gamma }_\lambda ^l\mathrm{\Gamma }_\lambda ^r/(\mathrm{\Gamma }_\lambda ^l+\mathrm{\Gamma }_\lambda ^r)`$. The thermal weight $`w_\lambda (T,\stackrel{~}{E}_F)`$ of a level $`\lambda `$ (for $`TE_C`$) is given by $`w_\lambda =4f(\mathrm{\Delta }F_๐ฉ\stackrel{~}{E}_F)n_\lambda __๐ฉ[1f\left(E_\lambda \stackrel{~}{E}_F\right)]`$. $`\mathrm{\Delta }F_๐ฉF(๐ฉ)F(๐ฉ1)`$ where $`F_๐ฉ`$ is the canonical free energy of $`๐ฉ`$ non-interacting electrons on the dot, $`\stackrel{~}{E}_F=E_F+e\alpha V_g(๐ฉ1/2)E_C`$ is an effective Fermi energy ($`\alpha `$ is the ratio between the plunger gate to dot capacitance and the total capacitance), and $`n_\lambda `$ is the canonical occupation of a single-particle level $`\lambda `$. Both the canonical free energy and occupation are calculated exactly using particle-number projection . In the statistical theory, the eigenvalues $`E_\lambda `$ and wavefunctions $`\psi _\lambda `$ fluctuate according to the corresponding Gaussian random matrix ensemble. The fluctuations of the partial widths $`\mathrm{\Gamma }_\lambda `$ are calculated by relating the widths to the eigenfunctions across the dot-lead interfaces.
Eq. (1) was used in Ref. to calculate the conductance peak distributions by full RMT simulations, and an approximate analytic expression was derived in the limit where spectral fluctuations are ignored. The finite temperature formulation can also be used to calculate the temperature-dependence of the peak spacing distributions. Unlike the peak height statistics, the peak spacing statistics are sensitive to fluctuations of both the spectrum and the wavefunctions, and full RMT simulations are required. The location of the $`๐ฉ`$-th peak is determined by finding the value of $`\stackrel{~}{E}_F`$ for which the conductance (1) is maximal. Statistics of peak spacings are collected from different successive peaks as well as from different realizations of the dotโs Hamiltonian. The peak spacings exhibit less fluctuations at higher temperatures as is demonstrated in the top panel of Fig. 4, where the spacings $`\mathrm{\Delta }_2`$ for a typical peak series calculated in one random matrix realization are shown at $`T/\mathrm{\Delta }=0.5`$ and $`T/\mathrm{\Delta }=2`$. While we do not expect to reproduce the observed functional form (i.e. Gaussian-like) of the peak spacing distribution using a fixed spectrum, it is still meaningful to study the temperature-dependence of the standard deviation of the spacings $`\sigma (\stackrel{~}{\mathrm{\Delta }}_2)`$ (where $`\stackrel{~}{\mathrm{\Delta }}_2=(\mathrm{\Delta }_2\mathrm{\Delta }_2)/\mathrm{\Delta }`$). The results for the GUE statistics are shown in the bottom panel of Fig.4 (solid line). The width shows a slight increase until about $`T/\mathrm{\Delta }0.5`$ and then decreases rapidly with increasing $`T/\mathrm{\Delta }`$. We compare the calculations with recent experimental results in the presence of a magnetic field (circles). The calculations somewhat underestimate the experimental width but describe well the overall observed temperature dependence. Shown in the inset is the calculated ratio $`\sigma _{\mathrm{GOE}}(\stackrel{~}{\mathrm{\Delta }}_2)/\sigma _{\mathrm{GUE}}(\stackrel{~}{\mathrm{\Delta }}_2)`$ which increases as a function of $`T/\mathrm{\Delta }`$. The experimental ratio of $`1.21.3`$ measured at $`T100`$ mK is consistent with our theoretical results.
So far we have taken into account only an average value for the Coulomb interaction ($`๐ฉ^2E_C/2`$). Interaction effects can be described within a single-particle framework in the Hartree-Fock (HF) approximation . The peak spacing at $`T\mathrm{\Delta }`$ can be expressed as a second order difference of the ground state energy of the dot as a function of particle number . According to Koopmansโ theorem, the change in the HF ground state energy when an electron is added is given by the HF single-particle energy of the added electron
$$_{HF}^{(๐ฉ+1)}_{HF}^{(๐ฉ)}E_{๐ฉ+1}^{(๐ฉ+1)},$$
(2)
where $`_{HF}^{(i)}`$ is the HF ground-state energy of the dot with $`i`$ electrons, and $`E_j^{(i)}`$ is the energy of $`j`$-th single-particle state for a dot with $`i`$ electrons. The theorem is valid in the limit when the single-particle eigenfunctions are independent of the number of electrons, and is expected to hold for large dots and for interactions that are not too strong. Its validity in numerical models of quantum dots was recently studied . The spacing $`\mathrm{\Delta }_2(๐ฉ+1)`$ between the $`๐ฉ`$-th and $`๐ฉ+1`$ peak is then given by
$$\mathrm{\Delta }_2(๐ฉ+1)=E_{๐ฉ+1}^{(๐ฉ+1)}E_๐ฉ^{(๐ฉ)}=\mathrm{\Delta }E^{(๐ฉ+1)}+\mathrm{\Delta }E_๐ฉ.$$
(3)
$`\mathrm{\Delta }E^{(๐ฉ+1)}E_{๐ฉ+1}^{(๐ฉ+1)}E_๐ฉ^{(๐ฉ+1)}`$ is the level spacing for a fixed number of electrons ($`๐ฉ+1`$), and $`\mathrm{\Delta }E_๐ฉE_๐ฉ^{(๐ฉ+1)}E_๐ฉ^{(๐ฉ)}`$ is the energy variation of the $`๐ฉ`$-th level when the $`๐ฉ+1`$ electron is added to the dot.
The HF Hamiltonian of the dot depends on the number of electrons, and in the following we denote by $`H(x_๐ฉ)`$ the Hamiltonian for $`๐ฉ`$ electrons (in this notation $`E_\lambda ^{(๐ฉ)}E_\lambda (x_๐ฉ)`$). For a dot whose single-electron dynamics is chaotic we shall assume that $`H(x_๐ฉ)`$ is a discrete Gaussian process, i.e., a discrete sequence of correlated Gaussian ensembles of a given symmetry class . Such a process can be embedded in a continuous process $`H(x)`$ where the Hamiltonian depends on a continuous parameter $`x`$ . A parametric dependence that originates in the dotโs deformation as a function of $`V_g`$ was used to explain the Gaussian-like shape of the peak spacing distribution . Here the parametric dependence is assumed to be mainly due to interaction effects, as recent experimental results indicate . We further assume that $`\mathrm{\Delta }x=x_{๐ฉ+1}x_๐ฉ`$ is approximately independent of $`๐ฉ`$. According to the theory of Gaussian processes, the parametric statistics are universal upon scaling of the parameter by the rms level velocity . We denote by $`\mathrm{\Delta }\overline{x}`$ the variation of the scaled parameter between successive number of electrons. The rms level velocity depends on the symmetry class, and in the following the parameter is always scaled by the GUE rms level velocity.
A simple Gaussian process is given by
$$H(x)=\mathrm{cos}xH_1+\mathrm{sin}xH_2,$$
(4)
where $`H_1`$ and $`H_2`$ are uncorrelated Gaussian random matrices. For each realization of the Gaussian process we calculate the single-particle spectrum $`E_\lambda (x)`$ as a function of the parameter. At $`T\mathrm{\Delta }`$, Eq. (3) does not hold. Instead we determine the spacing from the location of successive peaks: the $`๐ฉ`$-th peak is determined using the levels $`E_\lambda (x_๐ฉ)`$ and wavefunctions $`\psi _\lambda (x_๐ฉ)`$ as explained before, while the $`๐ฉ+1`$ peak is determined similarly but using a different spectrum $`E_\lambda (x_{๐ฉ+1})`$ and eigenfunctions $`\psi _\lambda (x_{๐ฉ+1})`$. Consequently, the spacing $`\mathrm{\Delta }_2`$ depends now on both $`T/\mathrm{\Delta }`$ and $`\mathrm{\Delta }\overline{x}`$.
Fig. 4 shows the standard deviation of the GUE peak spacing distribution $`\sigma (\mathrm{\Delta }_2)/\mathrm{\Delta }`$ as a function of $`T/\mathrm{\Delta }`$ (on a log-log scale) for several values of the โscramblingโ parameter $`\mathrm{\Delta }\overline{x}`$. As for the $`\mathrm{\Delta }\overline{x}=0`$ case, we observe a decrease above $`T/\mathrm{\Delta }0.5`$, except that the decrease is more moderate. It would be interesting to see whether this signature of spectral scrambling can be observed experimentally.
Another quantity that is sensitive to spectral scrambling is the peak-to-peak correlator, which is characterized by its full width at half maximum (FWHM), i.e., the number of correlated peaks $`n_c`$. For a constant single-particle spectrum, $`n_c`$ is found to increase linearly with temperature since the number of levels contributing to a given peak is $`T/\mathrm{\Delta }`$. However, for a changing spectrum the number of correlated peaks is expected to saturate (as a function of temperature) at a value $`m`$ that measures the number of electrons needed to scramble the spectrum completely. This effect was observed in the experiment . It can be calculated quantitatively by using the Gaussian process (4) . For each value of $`\mathrm{\Delta }\overline{x}`$ and $`T/\mathrm{\Delta }`$, the peak-to-peak correlations are determined universally. The top left panel of Fig. 4 shows the calculated peak-to-peak correlator $`c(n)`$ at several temperatures for a constant spectrum (i.e., $`\mathrm{\Delta }\overline{x}=0`$), where the correlator width is seen to increase with temperature. The right inset shows the correlator $`c(n)`$ for the same temperatures but for a changing spectrum $`\mathrm{\Delta }\overline{x}=0.5`$, where its width is seen to saturate at higher temperatures. The bottom panel of Fig. 4 shows the number of correlated peaks $`n_c`$ versus $`T/\mathrm{\Delta }`$ for several values of $`\mathrm{\Delta }\overline{x}`$. $`n_c`$ saturates at a smaller value of $`m`$ as $`\mathrm{\Delta }\overline{x}`$ gets larger, i.e., when the single-particle spectrum scrambles faster. To further illustrate this effect, we show in Fig. 4 the peak height fluctuation $`g\overline{g}`$ for a series of peaks using one realization of the GP (4) at $`T/\mathrm{\Delta }=0.5`$ and $`2`$. For a fixed spectrum ($`\mathrm{\Delta }\overline{x}=0`$) we observe a significant increase of the correlations at the higher temperature (left panels), while for a changing spectrum ($`\mathrm{\Delta }\overline{x}=0.5`$) the correlations do not change much with temperature (right panels).
Experimentally it was found that, in a smaller dot, $`n_c`$ saturates at a smaller value $`m`$ . This suggests faster scrambling (i.e., larger $`\mathrm{\Delta }\overline{x}`$) in the smaller dot. We can derive an expression for the scrambling parameter $`\mathrm{\Delta }\overline{x}`$ in terms of the dotโs properties by relating the parametric approach to the microscopic HF-RPA approach of . In the latter the fluctuations in the variation $`\mathrm{\Delta }E_๐ฉ`$ of the $`๐ฉ`$-th level upon the addition of an electron (see Eq. (3)) are dominated by charge that is pushed to the surface, and are estimated to be $`\sigma ^2(\mathrm{\Delta }E_๐ฉ)\beta ^1\mathrm{\Delta }^2/g`$, where $`g`$ is the dimensionless Thouless conductance. In the parametric approach we find, using perturbation theory , $`\sigma ^2(\mathrm{\Delta }E_๐ฉ)=2\beta ^1\mathrm{\Delta }^2(\mathrm{\Delta }\overline{x})^2`$. Comparing the two expression for $`\sigma ^2(\mathrm{\Delta }E_๐ฉ)`$ we obtain
$$\mathrm{\Delta }\overline{x}g^{1/2}๐ฉ^{1/4}.$$
(5)
The symmetry class parameter $`\beta `$ drops out in this relation. Indeed we expect $`\mathrm{\Delta }\overline{x}`$ to depend only on the dotโs properties irrespective of the presence or absence of a magnetic field. Relation (5) is valid in the regime where $`\sigma ^2(\mathrm{\Delta }E_๐ฉ)`$ is linear in $`(\mathrm{\Delta }\overline{x})^2`$, i.e., for $`\mathrm{\Delta }\overline{x}0.3`$ (see Fig. 2 of Ref. ). The second estimate in (5) is for a ballistic dot where $`g๐ฉ^{1/2}`$. Relation (5) confirms that $`\mathrm{\Delta }\overline{x}`$ is larger for the dot with a smaller number of electrons. Notice the qualitative similarity between the theoretical results of Fig. 4 and the experimental results in Fig. 2 of Ref. .
In conclusion, we used the statistical theory of Coulomb blockade quantum dots to calculate the temperature-dependence of the peak spacing fluctuations. Statistical scrambling of the spectrum upon adding an electron to the dot affects the temperature-dependence of both the peak spacing fluctuations and the peak-to-peak correlator.
This work was supported in part by the Department of Energy grant No. DE-FG-0291-ER-40608. |
no-problem/9912/quant-ph9912072.html | ar5iv | text | # Nonclassical correlations of photon number and field components in the vacuum state
## I Introduction
One of the main differences between quantum mechanics and classical physics is the impossibility of assigning well defined values to all physical variables describing a system. As a consequence, all quantum measurements necessarily introduce noise into the system. A measurement which only introduces noise in those variables that do not commute with the measured variable is referred to as a quantum nondemolition (QND) measurement . In most of the theoretical and experimental investigations , the focus has been on the overall measurement resolution and on the reduction of fluctuations in the QND variable as observed in the correlation between the QND measurement results and a subsequent destructive measurement of the QND variable. However, at finite resolution, quantum nondemolition measurements do not completely destroy the original coherence between eigenstates of the QND variable . By correlating the QND measurement result with subsequent destructive measurements of a noncommuting variable, it is therefore possible to determine details of the measurement induced decoherence .
In particular, QND measurements of a quadrature component of the light field introduce not only noise in the conjugated quadrature component, but also in the photon number of a state. By measuring a quadrature component of the vacuum field, โquantum jumpsโ from zero photons to one or more photons are induced in the observed field. It is shown in the following that, even at low measurement resolutions, the โquantum jumpโ events are strongly correlated with extremely high measurement results for the quadrature component. This correlation corresponds to a nonclassical relationship between the continuous field components and the discrete photon number, which is directly related to fundamental properties of the operator formalism. Thus, this experimentally observable correlation of photon number and fields reveals important details of the physical meaning of quantization.
In section II, QND measurements of a quadrature component $`\widehat{x}`$ of the light field are discussed and a general measurement operator $`\widehat{P}_{\delta x}(x_m)`$ describing a minimum noise measurement at a resolution of $`\delta x`$ is derived. In section III, the measurement operator is applied to the vacuum field and the measurement statistics are determined. In section IV, the results are compared with fundamental properties of the operator formalism. In section V, an experimental realization of photon-field coincidence measurements is proposed and possible difficulties are discussed. In section VI, the results are interpreted in the context of quantum state tomography and implications for the interpretation of entanglement are pointed out. In section VII, the results are summarized and conclusions are presented.
## II QND measurement of a quadrature component
Optical QND measurements of the quadrature component $`\widehat{x}_S`$ of a signal mode $`\widehat{a}_S=\widehat{x}_S+i\widehat{y}_S`$ are realized by coupling the signal to a a meter mode $`\widehat{a}_M=\widehat{x}_M+i\widehat{y}_M`$ in such a way that the quadrature component $`\widehat{x}_M`$ of the meter mode is shifted by an amount proportional to the measured signal variable $`\widehat{x}_S`$. This measurement interaction can be described by a unitary transformation operator,
$$\widehat{U}_{SM}=\mathrm{exp}\left(i\mathrm{\hspace{0.33em}2}f\widehat{x}_S\widehat{y}_M\right),$$
(1)
which transforms the quadrature components of meter and signal to
$`\widehat{U}_{SM}^1\widehat{x}_S\widehat{U}_{SM}`$ $`=`$ $`\widehat{x}_S`$ (2)
$`\widehat{U}_{SM}^1\widehat{y}_S\widehat{U}_{SM}`$ $`=`$ $`\widehat{y}_Sf\widehat{y}_M`$ (3)
$`\widehat{U}_{SM}^1\widehat{x}_M\widehat{U}_{SM}`$ $`=`$ $`\widehat{x}_M+f\widehat{x}_S`$ (4)
$`\widehat{U}_{SM}^1\widehat{y}_M\widehat{U}_{SM}`$ $`=`$ $`\widehat{y}_M.`$ (5)
In general, the unitary measurement interaction operator $`\widehat{U}_{SM}`$ creates entanglement between the signal and the meter by correlating the values of the quadrature components. Such an entanglement can be realized experimentally by squeezing the two mode light field of signal and meter using optical parametric amplifiers (OPAs) . The measurement setup is shown schematically in figure 1. Note that the backaction changing $`\widehat{x}_S`$ is avoided by adjusting the interference between the two amplified beams. Therefore, the reflectivity of the beam splitters depends on the amplification. A continuous adjustment of the coupling factor $`f`$ would require adjustments of both the pump beam intensities of the OPAs and the reflectivities of the beam splitter as given in figure 1.
If the input state of the meter is the vacuum field state $`|\text{vac.}\rangle `$, and the signal field state is given by $`|\mathrm{\Phi }_S\rangle `$, then the entangled state created by the measurement interaction is given by
$`\widehat{U}_{SM}|\mathrm{\Phi }_S;\text{vac.}\rangle `$ $`=`$ $`{\displaystyle \int dx_S\,dx_M\,\langle x_S|\mathrm{\Phi }_S\rangle \,\langle x_M-fx_S|\text{vac.}\rangle \,|x_S;x_M\rangle }`$ (6)
$`=`$ $`{\displaystyle \int dx_S\,dx_M\left(\frac{2}{\pi }\right)^{\frac{1}{4}}\mathrm{exp}\left(-(x_M-fx_S)^2\right)\langle x_S|\mathrm{\Phi }_S\rangle \,|x_S;x_M\rangle }.`$ (7)
Reading out the meter variable $`x_M`$ removes the entanglement by destroying the coherence between states with different $`x_M`$. It is then possible to define a measurement operator $`\widehat{P}_f(x_M)`$ associated with a readout of $`x_M`$, which acts only on the initial signal state $`|\mathrm{\Phi }_S\rangle `$. This operator is given by
$`\langle x_S|\widehat{P}_f(x_M)|\mathrm{\Phi }_S\rangle `$ $`=`$ $`\langle x_S;x_M|\widehat{U}_{SM}|\mathrm{\Phi }_S;\text{vac.}\rangle `$ (8)
$`=`$ $`\left({\displaystyle \frac{2}{\pi }}\right)^{\frac{1}{4}}\mathrm{exp}\left(-(x_M-fx_S)^2\right)\langle x_S|\mathrm{\Phi }_S\rangle .`$ (9)
The measurement operator $`\widehat{P}_f(x_M)`$ multiplies the probability amplitudes of the $`\widehat{x}_S`$ eigenstates by a Gaussian statistical weight factor determined by the difference between the eigenvalue $`x_S`$ and the measurement result $`x_M/f`$. By defining
$`x_m`$ $`=`$ $`{\displaystyle \frac{1}{f}}x_M`$ (10)
$`\delta x`$ $`=`$ $`{\displaystyle \frac{1}{2f}},`$ (11)
the measurement readout can be scaled so that the average results correspond to the expectation value of $`\widehat{x}_S`$. The normalized measurement operator then reads
$$\widehat{P}_{\delta x}(x_m)=\left(2\pi \delta x^2\right)^{-1/4}\mathrm{exp}\left(-\frac{(x_m-\widehat{x}_S)^2}{4\delta x^2}\right).$$
(12)
This operator describes an ideal quantum nondemolition measurement of finite resolution $`\delta x`$. The probability distribution of the measurement results $`x_m`$ is given by
$`P(x_m)`$ $`=`$ $`\langle \mathrm{\Phi }_S|\widehat{P}_{\delta x}^2(x_m)|\mathrm{\Phi }_S\rangle `$ (13)
$`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi \delta x^2}}}{\displaystyle \int dx_S\,\mathrm{exp}\left(-\frac{(x_S-x_m)^2}{2\delta x^2}\right)|\langle x_S|\mathrm{\Phi }_S\rangle |^2}.`$ (14)
Thus the probability distribution of measurement results is equal to the convolution of $`|\langle x_S|\mathrm{\Phi }_S\rangle |^2`$ with a Gaussian of variance $`\delta x^2`$. The corresponding averages of $`x_m`$ and $`x_m^2`$ are given by
$`{\displaystyle \int dx_m\,x_m\,P(x_m)}`$ $`=`$ $`\langle \mathrm{\Phi }_S|\widehat{x}_S|\mathrm{\Phi }_S\rangle `$ (15)
$`{\displaystyle \int dx_m\,x_m^2\,P(x_m)}`$ $`=`$ $`\langle \mathrm{\Phi }_S|\widehat{x}_S^2|\mathrm{\Phi }_S\rangle +\delta x^2.`$ (16)
The measurement readout $`x_m`$ therefore represents the actual value of $`\widehat{x}_S`$ within an error margin of $`\pm \delta x`$. The signal state after the measurement is given by
$$|\varphi _S(x_m)\rangle =\frac{1}{\sqrt{P(x_m)}}\widehat{P}_{\delta x}(x_m)|\mathrm{\Phi }_S\rangle .$$
(17)
Since the quantum coherence between the eigenstates of $`\widehat{x}_S`$ is preserved, the system state is still a pure state after the measurement. The system properties which do not commute with $`\widehat{x}_S`$ are changed by the modified statistical weight of each eigenstate component. Thus the physical effect of noise in the measurement interaction is correlated with the measurement information obtained.
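The action of $`\widehat{P}_{\delta x}(x_m)`$ is easy to verify numerically. The following minimal sketch (Python/NumPy; grid sizes and variable names are our own illustrative choices, not taken from any published code) applies the measurement operator to a discretized vacuum wavefunction and checks that the statistics of $`x_m`$ reproduce the convolution of equation (14) and the moments of equations (15) and (16):

```python
import numpy as np

# Discretized signal quadrature x_S; the vacuum wavefunction <x_S|vac.> is a
# Gaussian with variance 1/4 in the convention a = x + iy, [a, a^dagger] = 1.
x = np.linspace(-6.0, 6.0, 2001)
hx = x[1] - x[0]
psi = (2.0 / np.pi) ** 0.25 * np.exp(-x ** 2)

delta_x = 0.5                          # measurement resolution (illustrative)
xm = np.linspace(-5.0, 5.0, 1201)
hm = xm[1] - xm[0]

# Eq. (14): P(x_m) is the convolution of |<x_S|Phi_S>|^2 with a Gaussian
# of variance delta_x^2.
kernel = np.exp(-(xm[:, None] - x[None, :]) ** 2 / (2 * delta_x ** 2))
kernel /= np.sqrt(2 * np.pi * delta_x ** 2)
P = kernel @ (np.abs(psi) ** 2) * hx

mean_xm = np.sum(xm * P) * hm
var_xm = np.sum(xm ** 2 * P) * hm - mean_xm ** 2
print(mean_xm)                          # ~0 = <x_S> for the vacuum (eq. 15)
print(var_xm, delta_x ** 2 + 0.25)      # ~<x_S^2> + delta_x^2 (eq. 16)
```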
## III Measurement of the vacuum field
If the signal is in the vacuum state $`|\text{vac.}\rangle `$, then the measurement probability is a Gaussian centered around $`x_m=0`$ with a variance of $`\delta x^2+1/4`$,
$$P(x_m)=\frac{1}{\sqrt{2\pi (\delta x^2+1/4)}}\mathrm{exp}\left(-\frac{x_m^2}{2(\delta x^2+1/4)}\right).$$
(18)
The quantum state after the measurement is a squeezed state given by
$$|\varphi _S(x_m)\rangle =\int dx_S\left(\frac{2\pi \delta x^2}{1+4\delta x^2}\right)^{-\frac{1}{4}}\mathrm{exp}\left(-\frac{1+4\delta x^2}{4\delta x^2}\left(x_S-\frac{x_m}{1+4\delta x^2}\right)^2\right)|x_S\rangle .$$
(19)
The quadrature component averages and variances of this state are
$`\langle \widehat{x}_S\rangle _{x_m}`$ $`=`$ $`{\displaystyle \frac{x_m}{1+4\delta x^2}}`$ (20)
$`\langle \widehat{y}_S\rangle _{x_m}`$ $`=`$ $`0`$ (21)
$`\langle \widehat{x}_S^2\rangle _{x_m}-\langle \widehat{x}_S\rangle _{x_m}^2`$ $`=`$ $`{\displaystyle \frac{\delta x^2}{1+4\delta x^2}}`$ (22)
$`\langle \widehat{y}_S^2\rangle _{x_m}-\langle \widehat{y}_S\rangle _{x_m}^2`$ $`=`$ $`{\displaystyle \frac{1+4\delta x^2}{16\delta x^2}}.`$ (23)
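These moments follow directly from the Gaussian wavefunction of equation (19) and can be checked numerically. A short sketch (our own illustration; it assumes the position representation $`\widehat{y}_S=-\frac{i}{2}\partial /\partial x_S`$ implied by $`[\widehat{x}_S,\widehat{y}_S]=i/2`$):

```python
import numpy as np

# Post-measurement state of eq. (19) for delta_x = 0.5 and x_m = 0.5,
# the values used in figure 2 (parameter names are illustrative).
x = np.linspace(-10.0, 10.0, 8001)
h = x[1] - x[0]
dx, xm = 0.5, 0.5

alpha = (1 + 4 * dx ** 2) / (4 * dx ** 2)
mu = xm / (1 + 4 * dx ** 2)
phi = (2 * alpha / np.pi) ** 0.25 * np.exp(-alpha * (x - mu) ** 2)

mean_x = np.sum(x * phi ** 2) * h
var_x = np.sum((x - mean_x) ** 2 * phi ** 2) * h
# For a real wavefunction, <y^2> = (1/4) * integral of |phi'(x)|^2 dx.
var_y = 0.25 * np.sum(np.gradient(phi, h) ** 2) * h

print(mean_x, xm / (1 + 4 * dx ** 2))              # eq. (20)
print(var_x, dx ** 2 / (1 + 4 * dx ** 2))          # eq. (22)
print(var_y, (1 + 4 * dx ** 2) / (16 * dx ** 2))   # eq. (23)
print(mean_x ** 2 + var_x + var_y - 0.5)           # <n>_{x_m}, cf. eq. (25)
```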
Examples of the phase space contours before and after the measurement are shown in figure 2 for a measurement resolution of $`\delta x=0.5`$ and a measurement result of $`x_m=0.5`$. Note that the final state is shifted by only half the measurement result.
The photon number expectation value after the measurement is given by the expectation values of $`\widehat{x}_S^2`$ and $`\widehat{y}_S^2`$. It reads
$`\langle \widehat{n}_S\rangle _{x_m}`$ $`=`$ $`\langle \widehat{x}_S^2\rangle _{x_m}+\langle \widehat{y}_S^2\rangle _{x_m}-{\displaystyle \frac{1}{2}}`$ (24)
$`=`$ $`{\displaystyle \frac{1}{16\delta x^2(1+4\delta x^2)}}+{\displaystyle \frac{x_m^2}{(1+4\delta x^2)^2}}.`$ (25)
The dependence of the photon number expectation value $`\langle \widehat{n}_S\rangle _{x_m}`$ after the measurement on the squared measurement result $`x_m^2`$ describes a correlation between field component and photon number defined by
$`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ $`=`$ $`{\displaystyle \left(\int dx_m\,x_m^2\,\langle \widehat{n}_S\rangle _{x_m}P(x_m)\right)-\left(\int dx_m\,x_m^2\,P(x_m)\right)\left(\int dx_m\,\langle \widehat{n}_S\rangle _{x_m}P(x_m)\right).}`$ (26)
According to equations (18) and (24), this correlation is equal to
$$C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})=\frac{1}{8}$$
(28)
for measurements of the vacuum state. This result is independent of the measurement resolution. In particular, it even applies to the low resolution limit of $`\delta x\rightarrow \infty `$, which should leave the original vacuum state nearly unchanged. It is therefore reasonable to conclude that this correlation is a fundamental property of the vacuum state, even though it involves nonzero photon numbers.
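This resolution independence can be confirmed by direct integration of equations (18) and (25); the sketch below (our own check, using SciPy) evaluates $`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ for several values of $`\delta x`$:

```python
import numpy as np
from scipy.integrate import quad

def correlation(dx):
    var = dx ** 2 + 0.25                      # variance of P(x_m), eq. (18)
    P = lambda u: np.exp(-u ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    n = lambda u: 1 / (16 * dx ** 2 * (1 + 4 * dx ** 2)) \
        + u ** 2 / (1 + 4 * dx ** 2) ** 2     # <n_S>_{x_m}, eq. (25)
    m_xn = quad(lambda u: u ** 2 * n(u) * P(u), -np.inf, np.inf)[0]
    m_x = quad(lambda u: u ** 2 * P(u), -np.inf, np.inf)[0]
    m_n = quad(lambda u: n(u) * P(u), -np.inf, np.inf)[0]
    return m_xn - m_x * m_n                   # eq. (26)

for dx in (0.1, 0.5, 1.0, 5.0, 50.0):
    print(dx, correlation(dx))                # ~0.125 in every case
```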
## IV Correlations of photon number and fields in the operator formalism
Since the measurement readout $`x_m`$ represents information about the operator variable $`\widehat{x}_S`$ of the system, it is possible to express the correlation $`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ in terms of operator expectation values of $`\widehat{x}_S`$ and $`\widehat{n}_S`$. Equation (16) shows how the average over $`x_m^2`$ can be replaced by the operator expectation value $`\langle \widehat{x}_S^2\rangle `$. Likewise, the average over the product of $`x_m^2`$ and $`\langle \widehat{n}_S\rangle _{x_m}`$ can be transformed into an operator expression. The transformation reads
$`{\displaystyle \int dx_m\,x_m^2\,\langle \widehat{n}_S\rangle _{x_m}P(x_m)}=`$ (29)
$`=`$ $`{\displaystyle \int dx_S\,dx_S^{}\left(\frac{(x_S+x_S^{})^2}{4}+\delta x^2\right)\langle \text{vac.}|x_S\rangle \langle x_S|\widehat{n}_S|x_S^{}\rangle \langle x_S^{}|\text{vac.}\rangle \mathrm{exp}\left(-\frac{(x_S-x_S^{})^2}{8\delta x^2}\right)}`$ (30)
$`=`$ $`{\displaystyle \int dx_m\left(\left\langle \frac{1}{4}\left(\widehat{x}_S^2\widehat{n}_S+2\widehat{x}_S\widehat{n}_S\widehat{x}_S+\widehat{n}_S\widehat{x}_S^2\right)\right\rangle _{x_m}+\delta x^2\langle \widehat{n}_S\rangle _{x_m}\right)P(x_m)}.`$ (31)
The average expectation value of photon number after the measurement is given by
$$\langle \widehat{n}_S\rangle _{\text{av.}}=\int dx_m\,\langle \widehat{n}_S\rangle _{x_m}P(x_m).$$
(32)
Using the index av. to denote averages over expectation values after the measurement, the correlation $`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ may be expressed in terms of the average final state expectation values as
$$C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})=\left\langle \frac{1}{4}\left(\widehat{x}_S^2\widehat{n}_S+2\widehat{x}_S\widehat{n}_S\widehat{x}_S+\widehat{n}_S\widehat{x}_S^2\right)\right\rangle _{\text{av.}}-\langle \widehat{n}_S\rangle _{\text{av.}}\langle \widehat{x}_S^2\rangle _{\text{av.}}.$$
(33)
The correlation observed in the measurement is therefore given by a particular ordered product of operators. The most significant feature of this operator product is the $`\widehat{x}_S\widehat{n}_S\widehat{x}_S`$-term, in which the photon number operator $`\widehat{n}_S`$ is sandwiched between the field operators $`\widehat{x}_S`$. The expectation value of $`\widehat{x}_S\widehat{n}_S\widehat{x}_S`$ in an eigenstate of $`\widehat{n}_S`$ does not factorize into the eigenvalue of $`\widehat{n}_S`$ and the expectation value of $`\widehat{x}_S^2`$, because the field operators $`\widehat{x}_S`$ change the original state into a state with different photon number statistics. According to the commutation relations,
$$\widehat{x}_S\widehat{n}_S\widehat{x}_S=\frac{1}{2}(\widehat{x}_S^2\widehat{n}_S+\widehat{n}_S\widehat{x}_S^2)+\frac{1}{4}.$$
(34)
Therefore, the expectation value of $`\widehat{x}_S\widehat{n}_S\widehat{x}_S`$ in a photon number state is exactly $`1/4`$ higher than the product of the eigenvalue of $`\widehat{n}_S`$ and the expectation value of $`\widehat{x}_S^2`$. The correlation $`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ may then be expressed in terms of the final state expectation values as
$$C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})=\left\langle \frac{1}{2}\left(\widehat{x}_S^2\widehat{n}_S+\widehat{n}_S\widehat{x}_S^2\right)\right\rangle _{\text{av.}}-\langle \widehat{n}_S\rangle _{\text{av.}}\langle \widehat{x}_S^2\rangle _{\text{av.}}+\frac{1}{8}.$$
(35)
Since the additional correlation of $`1/8`$ does not depend on the measurement resolution $`\delta x`$, it should not be interpreted as a result of the measurement dynamics. Instead, the derivation above reveals that it originates from the operator ordering in the quantum mechanical expression for the correlation. Since it is the noncommutativity of operator variables which distinguishes quantum physics from classical physics, the contribution of $`1/8`$ is a nonclassical contribution to the correlation of photon number and fields. Specifically, it should be noted that the classical correlation of a well defined variable with any other physical property is necessarily zero. Only the quantum mechanical properties of noncommutative variables allow nonzero correlations of photon number and fields even if the field mode is in a photon number eigenstate. The operator transformation thus reveals that the correlation $`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ of $`1/8`$ found in measurements of the vacuum state is a directly observable consequence of the nonclassical operator order dependence of correlations between noncommuting variables.
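Both the operator identity (34) and the nonvanishing vacuum expectation value $`\langle \text{vac.}|\widehat{x}_S\widehat{n}_S\widehat{x}_S|\text{vac.}\rangle =1/4`$ can be verified in a truncated Fock basis. A minimal sketch (our own check, assuming the conventions $`\widehat{x}=(\widehat{a}+\widehat{a}^{})/2`$ and $`\widehat{n}=\widehat{a}^{}\widehat{a}`$):

```python
import numpy as np

N = 40                                       # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.T) / 2                            # x = (a + a^dagger)/2
n = a.T @ a                                  # photon number operator

lhs = x @ n @ x
rhs = (x @ x @ n + n @ x @ x) / 2 + np.eye(N) / 4   # eq. (34)
# agreement away from the truncation edge:
print(np.max(np.abs((lhs - rhs)[:N - 2, :N - 2])))  # ~1e-15

vac = np.zeros(N)
vac[0] = 1.0
print(vac @ lhs @ vac)    # 0.25, even though <n> = 0 in the vacuum
```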
## V Experimental realization: photon-field coincidence measurements
The experimental setup required to measure the correlation between a QND measurement of the quadrature component $`\widehat{x}_S`$ and the photon number after the measurement is shown in figure 1. It is essentially identical to the setups used in previous experiments. However, instead of measuring the x quadrature in the output fields, it is necessary to perform a photon number measurement on the signal branch. The output of this measurement must then be correlated with the output from the homodyne detection of the meter branch. The homodyne detection of the meter simply converts a high intensity light field into a current $`I_M(t)`$, while the signal readout produces discrete photon detection pulses. These pulses can also be described by a detection current $`I_S(t)`$, which should be related to the actual photon detection events by a response function $`R_S(\tau )`$, such that
$$I_S(t)=\sum _iR_S(t-t_i),$$
(36)
where $`t_i`$ is the time of photon detection event $`i`$. According to the theoretical prediction discussed above, each photon number detection event should be accompanied by an increase of noise in the homodyne detection current of the meter. However, the temporal overlap of the signal current $`I_S(t)`$ and the increased noise in the meter current $`I_M(t)`$ is an important factor in the evaluation of the correlation. Due to the frequency filtering employed, the meter mode corresponding to a signal detection event is given by a filter function with a width approximately equal to the inverse frequency resolution of the filter. For a typical filter with a Lorentzian linewidth of $`2\gamma `$, the mode of interest would read
$$\widehat{a}_i=\sqrt{\gamma }\int dt\,\mathrm{exp}\left(-\gamma |t-t_i|\right)\widehat{a}(t).$$
(37)
The actual meter readout should therefore be obtained by integrating the current over a time of about $`2/\gamma `$. For practical reasons, it seems most realistic to use a direct convolution of the meter current $`I_M`$ and the signal current $`I_S`$, adjusting the response function $`R_S(\tau )`$ to produce an electrical pulse of duration $`2/\gamma `$. A measure of the correlation $`C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})`$ can then be obtained from the current correlation
$$\xi C(x_m^2;\langle \widehat{n}_S\rangle _{x_m})=\overline{(I_SI_M)^2}-\overline{I_S^2}\,\overline{I_M^2},$$
(38)
where the factor $`\xi `$ denotes the efficiency of the measurement, as determined by the match between the response function $`R_S(\tau )`$ and the filter function given by equation (37). Moreover, the efficiency of the experimental setup may be reduced further by the limited quantum efficiency of the detector.
Fortunately, the efficiency requirements for the experiment are not very restrictive, provided that the measurement resolution is so low that only a few photons are created. In that case, the total noise average in the meter current $`I_M`$ is roughly equal to the noise average in the absence of a photon detection event, which is very close to the shot noise limit of the homodyne detection. However, the fluctuations of the time averaged currents within a time interval of about $`1/\gamma `$ around a photon detection event in the signal branch correspond to the fluctuations of the measurement values $`x_m`$ for a quantum jump event from zero photons to one photon. In particular, the measurement result $`x_m(i)`$ associated with a photon detection event at time $`t_i`$ is approximately given by
$$x_m(i)=C\int dt\,R(t-t_i)\,I_M(t),$$
(39)
where $`C`$ is a scaling constant which maps the current fluctuations of a vacuum input field onto an $`x_m`$ variance of $`\delta x^2`$. In the case of a photon detection event, however, the probability distribution over the measurement results $`x_m(i)`$ is given by the difference between the total probability distribution $`P(x_m)`$ and the part $`P_0(x_m)`$ of the probability distribution associated with no photons in the signal,
$`P_{QJ}(x_m)`$ $`=`$ $`P(x_m)-P_0(x_m)`$ (40)
$`=`$ $`\langle \text{vac.}|\widehat{P}_{\delta x}^2|\text{vac.}\rangle -\langle \text{vac.}|\widehat{P}_{\delta x}|\text{vac.}\rangle ^2`$ (41)
$`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi (\delta x^2+1/4)}}}\mathrm{exp}\left(-{\displaystyle \frac{x_m^2}{2(\delta x^2+1/4)}}\right)-\sqrt{{\displaystyle \frac{32\delta x^2}{\pi (1+8\delta x^2)^2}}}\mathrm{exp}\left(-{\displaystyle \frac{4}{1+8\delta x^2}}x_m^2\right).`$ (42)
Figure 3 shows the results for a measurement resolution of $`\delta x=1`$, which is close to the experimentally realized resolutions. There is only a slight difference between $`P(x_m)`$ and $`P_0(x_m)`$, even though the total probability of a quantum jump to one or more photons obtained by integrating $`P_{QJ}(x_m)`$ is about 5.72%. The peaks of the probability distribution are close to $`\pm 2`$, eight times higher than the fluctuation of $`\widehat{x}_S`$ in the vacuum. The measurement fluctuations corresponding to a photon detection event are given by
$$\frac{\int dx_m\,x_m^2\,P_{QJ}(x_m)}{\int dx_m\,P_{QJ}(x_m)}=\frac{1}{4}+\delta x^2\left(2+\sqrt{1+\frac{1}{8\delta x^2}}\right)\approx 3\delta x^2.$$
(43)
For $`\delta x\gg 1`$, this result is three times higher than the overall average. For $`\delta x=1`$, the ratio between the fluctuation intensity of a detection event and the average fluctuation intensity of $`1/4+\delta x^2`$ is still equal to 2.65. In other words, the fluctuations of the measurement result $`x_m`$ nearly triple in the case of a quantum jump event. The corresponding increase in the fluctuations of the homodyne detection current $`I_M`$ should be detectable even at low efficiencies $`\xi `$. Moreover, it does not matter how many photon events go undetected, since the ratio has been determined relative to the overall average of the meter fluctuations. It is thus possible to obtain experimental evidence of the fundamental correlation of field component and photon number even with a rather low overall efficiency of the detector setup.
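The numbers quoted above follow directly from equations (40)–(43); a short numerical check (our own, for $`\delta x=1`$):

```python
import numpy as np
from scipy.integrate import quad

dx = 1.0
var = dx ** 2 + 0.25
P = lambda u: np.exp(-u ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
P0 = lambda u: np.sqrt(32 * dx ** 2 / (np.pi * (1 + 8 * dx ** 2) ** 2)) \
    * np.exp(-4 * u ** 2 / (1 + 8 * dx ** 2))
PQJ = lambda u: P(u) - P0(u)                   # eq. (42)

p_jump = quad(PQJ, -np.inf, np.inf)[0]
x2_jump = quad(lambda u: u ** 2 * PQJ(u), -np.inf, np.inf)[0] / p_jump

print(p_jump)          # ~0.0572: probability of a jump to one or more photons
print(x2_jump)         # ~3.31, as given by eq. (43)
print(x2_jump / var)   # ~2.65: enhancement over the average fluctuations
```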
## VI Interpretation of the quantum jump statistics
What physical mechanism causes the quantum jump from the zero photon vacuum to one or more photons? The relationship between the photon number operator and the quadrature components of the field is given by
$$\widehat{n}_S+\frac{1}{2}=\widehat{x}_S^2+\widehat{y}_S^2.$$
(44)
According to equation (3) describing the measurement interaction, the change in photon number $`\widehat{n}_S`$ should therefore be caused by the shift of $`\widehat{y}_S`$ induced by $`\widehat{y}_M`$,
$$\widehat{U}_{SM}^{-1}\widehat{n}_S\widehat{U}_{SM}=\widehat{n}_S-2f\widehat{y}_S\widehat{y}_M+f^2\widehat{y}_M^2.$$
(45)
Thus the change in photon number does not depend explicitly on either the measured quadrature $`\widehat{x}_S`$ or the meter variable $`\widehat{x}_M`$. Nevertheless, the meter readout shows a strong correlation with the quantum jump events. In particular, the probability distribution of meter readout results $`x_m`$ for a quantum jump to one or more photons shown in figure 3 has peaks at values far outside the range given by the variance of the vacuum fluctuations of $`\widehat{x}_S`$.
Moreover, the correlation between readout and photon number after the measurement does not disappear in the limit of low resolution ($`\delta x\rightarrow \infty `$). Rather, it appears to be a fundamental property of the vacuum state even before the measurement. This is confirmed by the operator formalism, which identifies the source of the correlation as the expectation value $`\langle \widehat{x}_S\widehat{n}_S\widehat{x}_S\rangle `$. This expectation value is equal to $`1/4`$ in the vacuum, even though the photon number is zero. Since the operator formalism does not allow an identification of the operator with the eigenvalue unless it acts directly on the eigenstate, it is possible to find nonzero correlations even if the system is in an eigenstate of one of the correlated variables. In particular, the action of the operator $`\widehat{x}_S`$ on the vacuum state is given by
$$\widehat{x}_S|\text{vac.}\rangle =\frac{1}{2}|n_S=1\rangle ,$$
(46)
so the operator $`\widehat{x}_S`$, which should only determine the statistical properties of the state with regard to the quadrature component $`x_S`$, changes the vacuum state into the one-photon state. The application of operators thus causes fluctuations in a variable even when the eigenvalue of that variable is well defined.
The nature of this fluctuation might be clarified by a comparison of the nonclassical correlation obtained for fields and photon number in the vacuum with the results of quantum tomography by homodyne detection. In such measurements, the photon number is never obtained. Rather, the complete Wigner distribution $`W(x_S,y_S)`$ can be reconstructed from the results. It is therefore possible to deduce correlations between the field components and the field intensity defined by $`I=x_S^2+y_S^2`$, which is the classical equivalent of equation (44). For the vacuum, the variance of $`x_S^2`$ obtained from the Wigner function reads
$$\int dx_S\,dy_S\,x_S^4\,W_0(x_S,y_S)-\left(\int dx_S\,dy_S\,x_S^2\,W_0(x_S,y_S)\right)^2=\frac{1}{8}.$$
(47)
The correlation of $`I`$ and $`x_S^2`$ is given by
$`C(x_S^2;I)=`$ (49)
$`{\displaystyle \left(\int dx_S\,dy_S\,x_S^2\,I\,W_0(x_S,y_S)\right)-\left(\int dx_S\,dy_S\,x_S^2\,W_0(x_S,y_S)\right)\left(\int dx_S\,dy_S\,I\,W_0(x_S,y_S)\right)}`$
$`=`$ $`C(x_m^2;\langle n_S\rangle _{x_m})={\displaystyle \frac{1}{8}}.`$ (50)
Thus, the correlation between $`I=x_S^2+y_S^2`$ and $`x_S^2`$ described by the Wigner distribution is also equal to $`1/8`$. In fact, the "intensity fluctuations" of the Wigner function can be traced to the same operator properties that give rise to the correlations between the field measurement result and the induced photon number. For arbitrary signal fields, the correlation between the squared measurement result and the photon number after the measurement can therefore be derived by integrating over the Wigner function of the signal field after the measurement interaction according to equation (49).
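The Wigner-function moments of equations (47)–(50) can be reproduced by direct integration; the sketch below (our own check) assumes the explicit vacuum Wigner function $`W_0(x_S,y_S)=(2/\pi )\mathrm{exp}(-2(x_S^2+y_S^2))`$ appropriate for the quadrature conventions used here:

```python
import numpy as np
from scipy.integrate import dblquad

W0 = lambda xs, ys: (2 / np.pi) * np.exp(-2 * (xs ** 2 + ys ** 2))

def moment(f):
    # dblquad integrates func(y, x); the tails beyond |x|, |y| = 6 are negligible
    return dblquad(lambda ys, xs: f(xs, ys) * W0(xs, ys),
                   -6, 6, lambda xs: -6, lambda xs: 6)[0]

x2 = moment(lambda xs, ys: xs ** 2)                     # 1/4
I = moment(lambda xs, ys: xs ** 2 + ys ** 2)            # 1/2
x4 = moment(lambda xs, ys: xs ** 4)
x2I = moment(lambda xs, ys: xs ** 2 * (xs ** 2 + ys ** 2))

print(x4 - x2 ** 2)    # ~0.125, eq. (47)
print(x2I - x2 * I)    # ~0.125, eqs. (49)-(50)
```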
Of course, the "intensity fluctuations" of the Wigner function cannot be observed directly, since any phase insensitive determination of photon number will reveal the well defined result of zero photons in the vacuum. Nevertheless, even a low resolution measurement of the quadrature component $`\widehat{x}_S`$ which leaves the vacuum state nearly unchanged reveals a correlation of $`\widehat{x}_S^2`$ and $`n_S`$ which corresponds to the assumption that the measured quadrature $`\widehat{x}_S`$ contributes to a fluctuating vacuum energy. The quantum jump itself appears to draw its energy not from the external influence of the measurement interaction, but from the fluctuating energy contribution $`\widehat{x}_S^2`$. These energy fluctuations could be interpreted as virtual or hidden fluctuations existing only potentially until the energy uncertainty of the measurement interaction removes the constraints imposed by quantization and energy conservation. In particular, energy conservation does require that the energy for the quantum jump is provided by the optical parametric amplification process. Certainly the average energy is supplied by the pump beam. However, the energy content of the pump beam and the meter beam cannot be defined due to the uncertainty principle. The pump must be coherent, and the measurement of the meter field component $`\widehat{x}_M`$ prevents all energy measurements in that field. If it is accepted that quantum mechanical reality is somehow conditioned by the circumstances of the measurement, it can be argued that the reality of quantized photon number only exists if the energy exchange of the system with the environment is controlled on the level of single quanta. Otherwise, it is entirely possible that the vacuum energy might not be zero as suggested by the photon number eigenvalue, but might fluctuate according to the statistics suggested by the Wigner function.
Even though it may appear to be highly unorthodox at first, this "relaxation" of quantization rules actually corresponds to the noncommutativity of the operators, and may help explain the seemingly nonlocal properties of entanglement associated with the famous EPR paradox. The definition of elements of reality given by EPR reads "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity." This definition of elements of reality assumes that the eigenvalues of quantum states are real even if they are not confirmed in future measurements. In particular, the photon number of the vacuum would be considered as a real number, not an operator, so the operator correlation $`\widehat{x}_S\widehat{n}_S\widehat{x}_S`$ should not have any physical meaning. However, the nonzero correlation of fields and photon number in the vacuum observed in the QND measurement discussed above suggests that even the possibility of predicting the value of a physical quantity with certainty only defines an element of reality if this value is directly observed in a measurement. Based on this conclusion, there is no need to assume any "spooky action at a distance", or physical nonlocality, in order to explain Bell's inequalities. Instead, it is sufficient to point out that knowledge of the wavefunction does not provide knowledge of the type of measurement that will be performed. In the case of spin-1/2 systems, the quantized values of spin components are not a property inherent in the spin system, but a property of the measurement actually performed. To assume that spins are quantized even without a measurement does not correspond to the implications of the operator formalism, since it is not correct to replace operators with their eigenvalues.
In the same manner, the correlation discussed in this paper would be paradoxical if one regarded the photon number eigenvalue of zero in the vacuum state as an element of reality independent of the measurement actually performed. One would then be forced to construct mysterious forces changing the photon number in response to the measurement result. However, the operator formalism suggests no such hidden forces. Instead, the reality of photon number quantization depends on the operator ordering and thus proves to be rather fragile.
## VII Summary and conclusions
The change in photon number induced by a quantum nondemolition measurement of a quadrature component of the vacuum is strongly correlated with the measurement result. An experimental determination of this correlation is possible using optical parametric amplification in a setup similar to previously realized QND measurements of quadrature components. The observed correlation corresponds to a fundamental property of the operator formalism which allows nonvanishing correlations between noncommuting variables even if the system is in an eigenstate of one of the variables.
The quantum jump probability reflects the properties of intensity fluctuations corresponding to the vacuum fluctuations of the field components. The total correlation of fields and photon number therefore reproduces the result that would be expected if there were no quantization. It seems that quantum jumps are a mechanism by which the correspondence between quantum mechanics and classical physics is ensured. The quantum jump correlation observable in the experimental situation discussed above thus provides a link between the discrete nature of quantized information and the continuous nature of classical signals. Finite resolution QND measurements could therefore provide a more detailed understanding of the nonclassical properties of quantum information in the light field.
## Acknowledgements
One of us (HFH) would like to acknowledge support from the Japanese Society for the Promotion of Science, JSPS.
1 INTRODUCTION
The properties of the host galaxies of active galactic nuclei (AGN) and QSOs play a fundamental role in our understanding of the AGN phenomenon. The size, luminosity and structure of the host galaxy can provide valuable clues to the origin and fuelling of AGN (e.g. Smith & Heckman 1990). Ground-based optical imaging studies of low redshift AGN over the past 20 years (see e.g. Adams 1977, Simkin, Su and Schwarz 1980, Smith et al. 1986, MacKenty 1990, Zitelli et al. 1993, Kotilainen & Ward 1994) have been limited by the spatial resolution attainable from the ground; 1 arcsec $`\simeq 2.8h_{50}^{-1}`$ kpc at $`z=0.1`$, and no strong consensus has been reached over the general properties of AGN host galaxies from such studies (see e.g. Véron-Cetty and Woltjer 1990).
More recently, near infra-red imaging studies of AGN have yielded a clearer picture (McLeod & Rieke 1994, Dunlop et al. 1993, Taylor et al. 1996). Infra-red studies, of course, benefit not only from the improved seeing in the $`H`$ and $`K`$ bands, but also from the increased dominance of the red host galaxy against the blue AGN. Such studies reveal that powerful AGN inhabit luminous ($`L>L^{}`$) and massive ($`r_{1/2}>10`$kpc) galaxies. Furthermore, these studies find that radio-loud AGN are found exclusively in early-type galaxies. Taylor et al. (1996) also found early-type galaxies acting as hosts for almost half of the radio-quiet AGN in their sample, challenging the existing orthodoxy that radio-quiet AGN are predominantly found in spiral galaxies.
With the excellent imaging performance provided by the COSTAR-corrected optics, a number of QSO host galaxy studies have recently been carried out with the HST (Bahcall et al. 1997, Boyce et al. 1997, McLure et al. 1999). These studies each contain typically 15–20 QSOs and confirm many of the earlier results obtained in the infra-red. The predominantly bright ($`M_B<-23`$) QSOs imaged by the HST appear to lie in bright ($`L>L^{}`$) galaxies with large radii ($`r_{1/2}>10`$kpc). McLure et al. (1999) also confirm their previous finding that a significant fraction (90 per cent) of the radio-quiet QSOs imaged have elliptical hosts.
An extremely comprehensive HST imaging study of 256 AGN and starbursts has also been carried out by Malkan, Gorjian & Tam (1998). This study has focussed on much lower redshift ($`z<0.035`$) and consequently lower luminosity AGN. This survey also confirms the tendency for a significant fraction of broad-line radio-quiet AGN to reside in earlier type galaxies. In contrast to studies of bright AGN, few of the AGN host galaxies in this study show direct evidence for interactions or recent merger activity.
Despite careful selection of the object sample, all these studies have relied heavily on existing heterogeneous compilations of QSO catalogues (see e.g. Véron and Véron-Cetty 1997) on which to base their initial target list. In particular they have focussed on luminous optically-selected or radio-selected QSOs, where strong selection effects may favour particular, and possibly non-representative, types of QSOs. For example, the Palomar-Green (PG) survey (Green, Schmidt & Liebert 1986) is the source of many of the QSOs used in the studies above, yet it is strongly biassed toward star-like images in the original photographically-identified sample.
Imaging surveys of radio-selected AGN also avoid the problem associated with optical selection biases, but such objects comprise only $`\sim 5`$ per cent of all AGN (Peacock, Miller & Mead 1986) and so inferences drawn from such samples are limited to a small fraction of the AGN population.
With limitations for both optically-selected and radio-selected AGN samples, the increasing availability of complete, X-ray-selected samples of AGN offers an alternative route to the study of AGN host galaxies. Unlike radio samples, X-ray AGN do form a representative sample of all AGN; there being few, if any, X-ray-quiet AGN (Avni & Tananbaum 1986). In addition, X-ray flux limited samples with complete optical identification suffer from none of the inherent biases towards dominant nuclei or peculiar morphological types present in existing optically-selected samples of low redshift AGN.
X-ray-selected samples of AGN have been studied in the past; Kotilainen & Ward (1994) carried out ground-based optical and near-infra-red imaging of 31 AGN in the 2–10 keV sample of Piccinotti et al. (1982), and Malkan, Margon & Chanan (1984) obtained optical images for 24 AGN selected from the 0.3–3.5 keV Einstein Medium Sensitivity Survey (EMSS, Stocke et al. 1991). The AGN studied by Kotilainen & Ward (1994) were heavily weighted towards extremely low redshifts ($`z<0.015`$) and thus were of low luminosity ($`M_B>-21`$). Conversely, the Malkan et al. survey comprised a wide range of much higher redshift objects ($`0.1<z<1.8`$), although the ground-based imaging gave inconclusive results for the nine AGN with $`z>0.4`$ and limited results on the properties of the host galaxies associated with the lower redshift AGN.
Nevertheless, the EMSS is an extremely powerful sample of AGN to use. With near-complete optical identification (94 per cent), it does not suffer from any strong optical biases. Over 95 per cent of the sample is radio-quiet, including all of the AGN with $`z<0.2`$. For $`z<0.15`$ the spatial resolution of the HST is ideally suited to the study of the innermost regions ($`<400h_{50}^{-1}`$ pc) of the host galaxy.
We therefore initiated an imaging campaign with HST to obtain snapshot F814W observations of approximately 100 AGN in the EMSS with $`0.03<z<0.15`$. The magnitude range spanned by these AGN is $`-24<M_{B\left(\mathrm{AB}\right)}<-18`$, straddling the predicted "break" luminosity ($`M_{B\left(\mathrm{AB}\right)}\sim -22.3`$) in the AGN luminosity function (LF) at these redshifts (Boyle et al. 1988).
An important aspect to this programme is that we also have ground-based imaging in the $`B`$ and $`R`$ passbands from the 1-m Jacobus Kapteyn Telescope (JKT) and 40-inch telescope operated by the Mount Stromlo and Siding Spring Observatories (MSSSO) to complement the HST observations. Although the ground-based images were only taken in moderate seeing conditions (1–3 arcsec) they are complementary to the HST data, permitting us to model the host galaxy accurately well beyond the central regions, to surface brightness levels ($`B_\mu =26`$mag arcsec<sup>-2</sup>) unattainable with the snapshot HST observations.
In this paper we report on the results obtained from the 76 AGN imaged in this programme. We describe the HST and ground-based observations in section 2. In section 3 we discuss the fitting procedure used, including the technique of 2-dimensional profile fitting used to extract information on the AGN host galaxy. We present our results in section 4, comparing the properties of AGN host galaxies derived from this study with those obtained from previous observations. We summarise our conclusions in section 5.
2 DATA
2.1 The AGN sample
The AGN sample used in this imaging study was selected from the EMSS (Stocke et al. 1991). Over 830 X-ray sources were identified in the EMSS, of which 420 were classified as AGN, i.e., as having broad emission lines. The EMSS was selected in the "soft" X-ray band 0.3–3.5 keV, with a mean flux limit of $`S(0.3\text{–}3.5\mathrm{keV})\sim 10^{-13}`$ erg s$`^{-1}`$cm<sup>-2</sup>.
We selected 80 low redshift ($`z<0.15`$) AGN for our imaging study. Our imaging campaign began with observations made at the 1-m JKT and so our sample was initially defined to be those low redshift EMSS AGN that were observable from La Palma. However, the subsequent success of our HST snapshot proposal led us to expand the sample to include a further 13 EMSS QSOs at southern declinations. Follow-up ground-based observations for these QSOs was carried out on the MSSSO 40-inch telescope. The ground-based studies and the HST imaging campaigns were largely carried out in parallel over the period 1993–1998. The unpredictability of both the weather in the ground-based observations and the sequence of images obtained in snapshot mode meant that it was impossible to maintain an exact correspondence between the AGN imaged in the ground-based and HST programs.
We obtained a total of 76 snapshot images with the HST. These AGN form the basis of the sample analysed in this paper. We have some form of ground-based $`B`$ or $`R`$ imaging data for 69 of these AGN; of these 11 have only B-band imaging and 2 have R-band imaging only. Positions, redshifts and observational details for all AGN are listed in Table 1. Positions and redshifts were taken from the revised EMSS catalogue published by Maccacaro et al. (1995).
Fig. 1 illustrates the region of the AGN absolute magnitude-redshift plane sampled by this study. In this diagram we have plotted the catalogued redshift against total (nuclear + host) $`M_{B\left(\mathrm{AB}\right)}`$ magnitudes for each AGN in the sample. The magnitudes were derived from the HST and ground-based images using the fitting procedures described below. The AGN span the range $`-23.6<M_{B\left(\mathrm{AB}\right)}<-18.5`$, with a median luminosity $`M_{B\left(\mathrm{AB}\right)}\simeq -21.5`$.
All magnitudes given in the present paper are in the AB system. For the ground-based observations, we adopted the following transformations from the Landolt system: $`B_{\mathrm{AB}}=B-0.17`$ and $`R_{\mathrm{AB}}=R+0.05`$. Throughout this paper we use $`H_0=50\mathrm{h}_{50}`$km s<sup>-1</sup>Mpc<sup>-1</sup>, $`\mathrm{\Omega }_\mathrm{M}=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$.
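For reference, the sketch below (illustrative Python with hypothetical helper names; K-corrections, which are small at these redshifts, are neglected) encodes the adopted cosmology and the Landolt-to-AB transformation used in computing absolute magnitudes such as those plotted in Fig. 1:

```python
import numpy as np

C_KMS, H0 = 2.998e5, 50.0        # c in km/s; H0 = 50 km/s/Mpc as adopted here

def d_lum_mpc(z):
    # Luminosity distance for Omega_M = 1, Lambda = 0 (Einstein-de Sitter):
    # d_L = (2c/H0) * [(1 + z) - sqrt(1 + z)]
    return 2.0 * C_KMS / H0 * ((1.0 + z) - np.sqrt(1.0 + z))

def absolute_mag(m_app, z):
    # distance modulus, ignoring the (small) K-correction at z < 0.15
    return m_app - 5.0 * np.log10(d_lum_mpc(z) * 1e6 / 10.0)

B_AB = 16.0 - 0.17               # a Landolt B = 16.0 AGN converted to AB
print(absolute_mag(B_AB, 0.1))   # ~ -23.1 for such an object at z = 0.1
```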
Eight of the EMSS AGN in our data set were classified by Stocke et al. (1991) as uncertain or ambiguous AGN, usually because the identification spectra lack sufficient signal-to-noise to determine the presence of broad Balmer emission lines. More detailed spectroscopy of "ambiguous" EMSS AGN by Boyle et al. (1995) has shown that these sources are a mix of AGN (Seyfert 1.5 – 2) and star-forming galaxies. The only ambiguous AGN in this data set that was also studied by Boyle et al. (1995) is MS1334.6+0351 which was classified by Boyle et al. (1995) as a Seyfert 1.5 on the basis of its broad H$`\alpha `$ emission line. For this analysis, MS1334.6+0351 was therefore classified as a bona fide AGN, whereas the remainder of these objects were treated as uncertain AGN.
2.2 HST observations
HST WFPC2 observations were obtained for all 76 AGN listed in Table 1 as part of a Cycle 6 snapshot program. The observations used in this analysis were obtained over the period May 1996 to January 1999; the date of each snapshot observation is given in Table 1. The observations were carried out in the F814W ($`I`$) passband, chosen to assist in the detection of the redder host galaxy components over the bluer nucleus. Each snapshot observation lasted for 600 sec, comprising three separate 200-sec exposures. In each case, the AGN was imaged at the centre of the Planetary Camera (PC), with a pixel scale of 0.0455 arcsec pixel<sup>-1</sup> thus maximising the resolution attainable. For a few of the brightest objects in the sample, a short (10 sec) exposure was also taken to obtain an unsaturated image of the nuclear component. Fig. 2 shows the central 30 arcsec $`\times `$ 30 arcsec region of the reduced HST image for each AGN in the sample.
The data were processed using standard STSDAS pipelines including flat-fielding and bias-subtraction. The three images of each object were obtained using an integer-shift dithering pattern. The images were combined using these offsets and taking the median of the images which resulted in the removal of warm pixels and cosmic rays. Photometric zeropoints were obtained using the header information and converting to the AB system $`m_{\mathrm{AB}}=-2.5\mathrm{log}f_\nu -48.6`$. Saturation was dealt with by replacing saturated pixels with re-scaled unsaturated pixels from the 10-sec integrations where available. Otherwise saturated pixels were defined as unusable and were ignored in the fitting process.
2.3 Ground-based imaging
Ground-based observations in the $`B`$ and $`R`$ passbands were carried out on the 1-m JKT and the 40-inch MSSSO telescope. Harris $`B`$ and $`R`$ filter sets were used on both telescopes. The JKT observations were made over three observing seasons: 1993 January 18-24, 1994 January 4-10 and 1995 January 4-10. The observations in 1993 and 1994 were made with a $`700\times 500`$-pixel GEC chip ($`0.35`$arcsec pixel<sup>-1</sup>) and the 1995 observations were made with a $`1024^2`$-pixel Tektronix chip ($`0.31`$arcsec pixel<sup>-1</sup>). An equivalent of nine nights' data were obtained over the three runs, with the 1994 run providing the best conditions. $`B`$-band observations were carried out exclusively in 1993 and 1994, with $`R`$-band observations made in 1995.
Additional imaging for the southern QSOs in our target list was obtained with the MSSSO 40-inch telescope equipped with a $`1024^2`$ Tektronix CCD ($`0.25`$arcsec pixel<sup>-1</sup>) on the nights of 1997 July 28-31. Only 1.5 nights' data in poor seeing ($`>2`$arcsec) were obtained.
Table 1 gives the total integration time and median seeing of the images for each AGN, together with the telescope used to obtained the ground-based images. Observations of each AGN were split into a number of short exposures, typically $`1200`$sec for $`B`$-band observations and $`900`$sec for $`R`$-band observations.
As far as possible during the observing runs, we attempted to match the prevailing seeing conditions with the redshift of the AGN currently being observed, thus preserving, as far as possible, a constant physical size for the resolution. Observations of some AGN were repeated until a lower FWHM ($`<2`$arcsec) was achieved. Since the ground-based images were primarily used to provide information on the larger angular scales (e.g. bulge and particularly disk) where the HST images provide less information, good seeing was not considered as important as image depth. Some images with very poor seeing still provided useful constraints at large angular scales on the fitting process, improving the overall 3-component fit.
We reduced the CCD frames using the IRAF package at the Cambridge STARLINK node and at the DAO. Standard techniques were used to bias-correct and flat-field the data. We created flat-fields for each night by averaging sky-limited data frames from different AGN fields, after bright stars had been masked out and other deviant points rejected using a 3$`\sigma `$ clipping algorithm.
Based on the 3–4 Landolt (1992) standard star sequences observed each night, we were able to obtain zeropoints consistent to $`\pm 2`$ per cent on each night on which observations were carried out.
2.4 HST and ground-based imaging
A feature of this analysis is the complementary information provided by the ground- and space-based images. The ground-based observations provide low-resolution data that is suitable for defining the extended disk component (even with poor seeing) while the HST observations provide the information on small spatial scales required to deconvolve the strongly peaked ($`r^{1/4}`$) galaxy bulge and point source contributions. The relative levels of signal-to-noise are such that the HST data are only moderately effective at characterising the host galaxy properties (because of their 600-sec integration times) on large scales. The low-resolution ground-based imaging provides effective constraints at large radii but has little power to discriminate between bulge and point-source components.
The choice of filters reinforces the role of each dataset. We chose the F814W imaging from HST to emphasize the redder bulge component relative to the bluer nuclear source. By the same token, the bluer $`B`$ and $`R`$ filters in the ground-based observations provide important colour information on the outer regions of the galaxy.
3 PROFILE FITTING
3.1 Method
To derive the observed parameters for the different AGN components, we performed a simultaneous three-component parametric model fit to the $`B`$, $`R`$ and $`I`$ images for each AGN in the sample. The components fitted were a point source, an exponential disk, and a de Vaucouleurs $`r^{1/4}`$ bulge. In this procedure the specified model was transformed into the observational space of each dataset using the pixel scale, detector orientation, filter, and point spread function (PSF) appropriate for each image.
For the ground-based images, the PSF was derived from several (typically five) bright stars in the same image that contained the AGN. The PSFs were defined and managed using the DAOPHOT package (Stetson 1987) within IRAF. The core of the ground-based PSF was fitted with a Gaussian function and the residuals were retained as a look-up table.
For the HST images, the PSF was fitted using a Lorentzian function plus a look-up table of residuals. Sampling errors are severe with HST and simulations showed that these can be substantially reduced in our fitting procedure by using DAOPHOT to construct the PSF for each observation centered at the same position (with respect to the pixel grid) as the observed object before the fitting procedure begins.
Very few of the HST observations had suitable PSF stars on the PC chip. Therefore, a single PSF constructed from several bright stars in a star-cluster observation was used to fit all of the galaxies. We were able to check the adopted PSF against seven stellar PSFs observed in this sample, where a PSF star was present near the AGN. After processing the stellar PSFs in the same manner as done for the fitting procedure, the FWHM for the PSFs showed a full range of 0.11 PC pixels ($`0.005`$arcsec). As a result the normalisation of the PSF is not exact, leading to photometric errors at the 2–3 per cent level when measured with a 0.05 arcsec aperture.
Details of the fitting procedure are described by Schade et al. (1996). The bulge component is characterised by:
$`I_B(r_B)=I_B(0)\mathrm{exp}\left[-7.67\left({\displaystyle \frac{r_B}{r_e}}\right)^{0.25}\right]`$
and the disk component by:
$`I_D(r_D)=I_D(0)\mathrm{exp}\left(-{\displaystyle \frac{r_D}{h}}\right)`$
where $`I(0)`$ is the central surface brightness, $`r_e`$ is the bulge effective (or half-light) radius, and $`h`$ is the disk scale length. The point source is simply a scaled version of the PSF and is assumed to be coincident with the galaxy center (we found no case where this assumption failed).
Given the position of the galaxy center $`(x_c,y_c)`$, then at a position $`(x,y)`$, $`dx=x-x_c`$ and $`dy=y-y_c`$, with $`dx_B=dx\mathrm{cos}(\theta _B)+dy\mathrm{sin}(\theta _B)`$, $`dy_B=\left(-dx\mathrm{sin}(\theta _B)+dy\mathrm{cos}(\theta _B)\right)/ar_B`$ and $`r_B^2=dx_B^2+dy_B^2`$, where $`\theta _B`$ is the position angle of the major axis of the bulge component and $`ar_B`$ is the axial ratio (minor/major) of the bulge. A similar equation holds for the disk component. The position angles of the two components are allowed to vary independently.
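A minimal sketch of this model (our own illustrative code, not the fitting software itself) evaluates the bulge and disk components on their independent elliptical coordinate grids:

```python
import numpy as np

def elliptical_radius(x, y, xc, yc, theta, ar):
    # rotate into the component frame and stretch the minor axis by 1/ar
    dx, dy = x - xc, y - yc
    du = dx * np.cos(theta) + dy * np.sin(theta)
    dv = (-dx * np.sin(theta) + dy * np.cos(theta)) / ar
    return np.hypot(du, dv)

def bulge(r, I0, r_e):
    # de Vaucouleurs r^{1/4} law
    return I0 * np.exp(-7.67 * (r / r_e) ** 0.25)

def disk(r, I0, h):
    # exponential disk
    return I0 * np.exp(-r / h)

y, x = np.mgrid[0:256, 0:256]
r_b = elliptical_radius(x, y, 128.0, 128.0, np.deg2rad(30.0), 0.8)
r_d = elliptical_radius(x, y, 128.0, 128.0, np.deg2rad(45.0), 0.5)
model = bulge(r_b, 100.0, 12.0) + disk(r_d, 20.0, 30.0)
# In the real fit a scaled PSF is added at the common centre for the point
# source, and the full model is convolved with the PSF before comparison
# with the data.
```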
Since the colors of the bulge, disk, and point-source components are expected to be different, the normalisations of the components in each passband were also allowed to vary independently in the fit. However, the structural parameters (e.g. orientation, axial ratio and scale length) were held fixed for each component across the different passbands.
This gives a maximum of 17 free parameters for each fit: the ($`x,y`$) position of the image centre, the relative normalisation of the point source, bulge and disk component in each passband, plus the axial ratio, orientation and scale length of both the bulge and disk components.
For each $`BRI`$ image set, the relative rotation of the detectors used was determined to better than one degree prior to the fitting procedure by comparison of images in each passband. The ground-based frames had rotations near 90 or 180 degrees from each other (i.e. one of the detector axes was always aligned within a few degrees of north) whereas the HST rotation varied continuously and was determined from the position angle of the V3 axis given in the WFPC2 image header.
The fitting was done by minimising $`\chi ^2`$ using a modified Levenberg-Marquardt algorithm. The fitting was typically done over a radius of six arcseconds on both the ground-based and HST images with some variation for individual objects where necessary. The point-source probability was derived using an F-test comparing the value of $`\chi ^2`$ for the best-fit model and the model that fit best without a point source.
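The logic of the minimisation and of the point-source test can be sketched as follows (a hedged Python sketch; `model_image` and the parameter bookkeeping are placeholders for the actual three-component model described above, not the authors' code):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import f as f_dist

def fit(model_image, p0, data, sigma):
    # Levenberg-Marquardt minimisation of chi^2 over the fitting region
    res = least_squares(lambda p: ((data - model_image(p)) / sigma).ravel(),
                        p0, method='lm')
    return res.x, np.sum(res.fun ** 2)

def p_no_point_source(chi2_nops, k_nops, chi2_full, k_full, n_pix):
    # F-test comparing the best fit without a point source (chi2_nops with
    # k_nops parameters) to the best fit including one (chi2_full, k_full)
    F = ((chi2_nops - chi2_full) / (k_full - k_nops)) \
        / (chi2_full / (n_pix - k_full))
    # a large return value means the chi^2 improvement is consistent with
    # noise, i.e. a point source is not required (cf. P_PS in Table 2)
    return f_dist.sf(F, k_full - k_nops, n_pix - k_full)
```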
We used a relatively large radius in the fitting process primarily to ensure the galaxy model went to zero at large radii. As a result, a significant number of sky pixels that are effectively perfectly fit by the model are included in the calculation of the reduced $`\chi ^2`$. For some fits, this may bias our estimates of the reduced $`\chi ^2`$ towards lower values.
The fitting procedure is difficult and complex because the general models have three concentric components (bulge, disk, point source) which may, for some parameter values, be similar in shape and size. Thus there is a high degree of correlation between these parameters. In other words, there may be long, flat-bottomed valleys in the $`\chi ^2`$ surface where various combinations of bulge, disk and point source are equally good fits. The correlations will be greatly reduced when the galaxy components are much larger that the point-spread function and/or when the galaxy components have axial ratios much different from unity. In order to ensure that we fit models that are minimal in the sense that they contain the smallest number of components consistent with a good fit to the data, we performed fits of pure disk, pure bulge, bulge-plus-disk, disk-plus-point, bulge-plus-point, bulge-plus-disk-plus-point, and pure point-source models. The models and residuals were examined and the minimal model that was a good fit was accepted.
3.2 Errors
The errors on the 17 parameters in each individual fit can be estimated from the correlation matrix, but such errors are unreliable since there are strong correlations between the errors in different parameters (e.g. between the amplitudes of the point source and bulge components). An estimate of the errors can be made using simulations. The fitted galaxy parameters can be adopted as a starting point and the measured parameters can be varied to produce a range of input models. The results given here provide an indication of the reliability of the fitting results but are not a complete analysis of the problem. The present work is focussed on estimating the errors on the point source versus host galaxy luminosities.
Sets of images were produced with identical object parameters (point source magnitude, galaxy magnitude and morphology) as the selected galaxies in the sample. The objects were simulated in the same (two or three) bands as the observations and convolved with the appropriate PSF. We derived the number of counts from the magnitude and integration times for the real observations. We also set the sky levels and noise in the simulated frames to values typical of those measured in blank regions of the HST and ground-based images. We did, however, make the simplifying assumption that the simulated data frames had been perfectly flat-fielded. Poisson errors were assumed throughout.
After the actual measured parameters were simulated, some of the parameters were varied to produce a range of simulated object properties. In total 1400 galaxies with 27 different combinations of parameters and PSFs were simulated and fit. The simulations were limited to bulge-plus-point source models because many of the objects are in that class and also because this is a challenging case in terms of disentangling the two most compact components. The only shortcut that was adopted was the use of fitting regions of about three arcseconds as opposed to the six arcsecond regions used for the real data. This was done to save computing time but simulations with varying size of the fitting region show that this may contribute to systematic errors in the galaxy properties in some cases. A more complete analysis of the errors would require a larger fitting radius.
A simulation requires an input point-spread function for each observation. The PSFs for ground-based observations were always derived from multiple stars on the same frame as the observation itself and thus are accurate and reliable. On the other hand, the HST observations rarely had suitable PSF stars on the image itself and so a single PSF derived from several bright stars on the PC chip was used for all of the fits. This PSF was used to construct all of the simulations but the fitting of the simulations used this same PSF and also two other PSFs that were constructed from the few AGN snapshot frames where stars were available. Thus we can estimate the contribution to the errors that is due to an imperfect knowledge of the PSF.
Fig. 3 shows the results of the simulations and the associated errors in the $`M_{B\left(\mathrm{AB}\right)}`$(nucleus)–$`M_{B\left(\mathrm{AB}\right)}`$(host) plane. The errors in the photometry produced by the fitting process are dominated by systematic errors in the shape and normalization of the PSF, rather than by statistical errors or sky subtraction. This is because the signal-to-noise ratio of the observations is high. Typical errors near the centroid of the distribution of actual objects in this plane are 5-10 per cent in the magnitudes of the nucleus and host galaxy. As expected, errors in the galaxy magnitude are large where the object is dominated by the nucleus and vice versa. Near \[$`M_{B\left(\mathrm{AB}\right)}`$(nucleus); $`M_{B\left(\mathrm{AB}\right)}`$(host)\] = \[$`-22,-18`$\] the corresponding errors are \[2 per cent, 10 per cent\] whereas near \[$`-15,-22`$\] the errors are \[50 per cent, 2 per cent\]. Even when nuclear light dominates it is still possible to detect faint ($`M_{B\left(\mathrm{AB}\right)}\sim -18`$) host galaxies. Conversely, faint nuclei can be detected in bright ($`M_{B\left(\mathrm{AB}\right)}\sim -22`$) host galaxies.
Fig. 4 shows a comparison of input and recovered values of the ratio of $`I`$-band nuclear-to-host galaxy luminosity ($`L_N/L_G`$). The simulations indicate that low $`L_N/L_G`$ values are recovered to within a few per cent by the fitting method. On the other hand, objects which are dominated by a strong point source ($`L_N/L_G>1`$) may be subject to systematic errors in the sense that the contribution of the nuclear component may be underestimated. As demonstrated below, our HST sample contains relatively few objects with $`L_N/L_G>1`$ so that this effect does not significantly affect our results.
Histograms of the recovered values for $`L_N/L_G`$ for two different input ratios (0.38 and 1.318) are shown in Fig. 5. Separate histograms are shown for each PSF used in the fitting process. This demonstrates that the errors due to PSF uncertainty (measured by the shifts between the individual histograms) are significantly larger than the statistical errors of the fitting process (measured by the intrinsic width of individual histograms). For high input values of $`L_N/L_G`$, the systematic errors are approximately 25 per cent; for lower values of $`L_N/L_G`$ the systematic errors reduce to 2–3 per cent.
These simulations indicate that the errors derived from the correlation matrix are too small by a factor that varies with the input parameters but is typically a few or more. Those errors are not reliable for multiple component fits (although they are normally good for single component fits). The actual errors for data with a very high signal-to-noise ratio (such as the present case) are dominated by systematic errors due to uncertainty in the point-spread function used in the fitting process. Variations in PSF shape and errors in normalization both contribute to this problem. The actual errors for a particular galaxy depend on the relative contributions of the galaxy and nuclear components (see Fig. 3).
When combining data of varying image quality, there is also the concern that the inclusion of lower (ground-based) resolution data may degrade the fit at the smallest scales, in particular compromising measurement of the nuclear (point source) and/or bulge component. For the 69 AGN in the sample with both HST and ground-based imaging, we therefore compared the nuclear and bulge $`I`$-band magnitudes derived from fits to the full (HST + ground-based) data set and to the HST data alone. The comparison between the nuclear/total galaxy $`I`$ band flux ratio ($`N/T`$) obtained from the fits to these two different data-sets is shown in Fig. 6a). A similar comparison for the $`I`$ band bulge/total galaxy flux ratio ($`B/T`$) is shown in Fig. 6b).
For the vast majority of the sample, the derived $`I`$-band magnitudes for both nuclear and bulge components are largely unaffected by inclusion of the ground-based data in the fit. A least squares fit to the relation in Fig. 6a) gives a slope of 1.000 with an rms deviation of 0.067. There are only three cases (MS0721.2+6904, MS1217.0+0700 and MS1306.1-0115) where $`N/T`$ varies between the fits by greater than this value. Removal of these three points reduces the rms to 0.015, smaller than the systematic errors inherent in the fitting process due to the PSF.
The relation between the different $`B/T`$ estimates shows a larger scatter, $`\sigma (B/T)=0.092`$, dominated by six AGN whose $`B/T`$ values differ by more than 0.1 between the fits. Removal of these objects from the comparison reduces the observed scatter to $`\sigma (B/T)=0.02`$, again below the level of the systematic errors introduced by the PSF fitting.
However, even the presence of small numbers of objects in our sample with potentially large uncertainties in their $`B/T`$ values ($`\mathrm{\Delta }(B/T)>0.1`$) has little effect on the results presented below. In this paper, we use the $`B/T`$ ratio largely to conduct a quantitative (albeit crude) morphological classification of the host galaxy. Independent visual classification of the host galaxies confirms that the $`B/T`$ values derived from the joint HST/ground-based data-set yield accurate morphological types. If we were to use the $`B/T`$ values derived from the HST data alone to carry out the morphological classification this would only change the type assigned to four AGN host galaxies: a net change over the entire sample of one fewer elliptical, three more Sab and two fewer Sbc galaxies.
We conclude that the overall effect of simultaneously fitting to the full imaging data-set provides useful additional constraints on the overall parameters of the fits, without systematically biasing the estimates of nuclear or bulge properties. In a small fraction of AGN (5–10 per cent of the total sample) there are differences between the derived nuclear/bulge luminosities with/without the inclusion of ground-based data. However, these are at a level which does not significantly affect any of our conclusions drawn below. Indeed, there is no reason to believe that fits to the HST data alone necessarily produce more accurate estimates of the bulge and/or nuclear properties. By neglecting the ground-based data we may be poorly fitting the low surface brightness disk, biassing the derived properties for the bulge and/or point source component.
4 RESULTS
4.1 Observed Properties of the Sample
The results of the fitting procedure are listed in Table 2. This table lists 11 of the 17 free parameters in the fit, including the fitted $`B`$, $`R`$ and $`I`$ AB magnitudes for the point source, bulge and disk components for each AGN. The ratio of bulge-to-total light in the $`I`$ passband ($`B/T`$) is also given, together with the bulge ($`R_e`$) and disk ($`h`$) radii. We also give the reduced $`\chi ^2`$ for the fit and the probability ($`P_{\mathrm{PS}}`$) that a point source is not required by the fit.
Ten objects imaged in this survey show little evidence for a point source component: $`P_{\mathrm{PS}}=1`$. Of these, only two (MS0039.0-0145 and MS1114.4+1801) are "ambiguous" AGN as identified by Stocke et al. (1991). This leaves eight objects, or approximately ten per cent of the sample, which have been classified as broad emission-line AGN but have no detectable nuclear component. It is possible that these objects were incorrectly classified as broad emission-line AGN in the EMSS, despite the care taken to flag all potentially ambiguous cases. Although no cases were found of an AGN without a detectable point source in any of the HST imaging surveys of bright ($`M_B<-23`$) AGN (Bahcall et al. 1997, Boyce et al. 1997, McLure et al. 1999), in an HST imaging study of 91 Seyfert 1 galaxies, Malkan et al. (1998) find an even greater percentage ($`\sim 35`$ per cent) of broad emission-line AGN that exhibit no evidence of any point source component. Malkan et al. (1998) ascribe this to dust absorption of the central source. It could be argued that the amount of dust required to obscure the central regions under these circumstances would also extinguish the broad line region, the basis on which these objects were classified as AGN. Of course, the obscuration may be patchy, and the nucleus may have become obscured since its spectroscopic classification as a broad-lined AGN. Equally, these objects may not even harbour a compact point source, with the broad lines instead created by intense star formation in the central regions of the galaxy (see e.g. Terlevich et al. 1992). Whatever the origin, the results from the current HST surveys appear to indicate a trend for an increasing fraction of AGN with no point source component with decreasing AGN luminosity.
At the opposite extreme, we do not find any cases where there is no evidence for a host galaxy. In the case of MS1020.2+6850, HST imaging shows evidence for only a weak disk, but both the $`B`$ and $`R`$-band images show a luminosity profile that is significantly more extended than the PSF.
Good fits ($`\chi ^2\lesssim 2`$) were obtained for the vast majority of the AGN in this analysis. Although the large fitting radius may bias estimates of $`\chi ^2`$ towards low values (see above), visual examination of all residual (data $`-`$ model) images confirmed that no significant systematic effects remained after the model fitting process. The largest reduced $`\chi ^2`$ residual ($`\chi ^2=4.2`$) was found for the fit to MS0754.6+3928. Visual inspection of this object clearly reveals a strong point source component and a low surface brightness disk.
Fig. 7 shows the observed $`(B-I)_{\mathrm{AB}}`$ colour histograms for the point source, bulge and disk components. The mean fitted $`(B-I)_{\mathrm{AB}}`$ colour for the point source component, $`(B-I)_{\mathrm{AB}}=0.2`$, is significantly bluer than that derived for the disk or bulge components, $`(B-I)_{\mathrm{AB}}=1.2`$. These colours are consistent with previous observations of QSOs and galaxies, and thus provide a useful consistency check on the fitting procedure, since no a priori assumptions were fed into the fit relating to the colours of the components.
Although the $`(B-I)_{\mathrm{AB}}`$ colour distribution for the galaxy components is reasonably tight ($`\sigma =0.3`$ mag), there is a long tail to both the blue and the red in the $`(B-I)_{\mathrm{AB}}`$ colour distribution for point sources. This is an artifact caused by the fitting procedure: where the point source is weak and/or the ground-based $`B`$ data are poor, the $`B`$ fit is poorly constrained, resulting in large errors. For this reason, we chose not to use the $`B`$-band magnitudes obtained from the fit to compute derived rest-frame absolute $`M_{B\left(\mathrm{AB}\right)}`$ magnitudes for the point sources. Instead we used the mean point source $`(B-I)_{\mathrm{AB}}`$ colour to transform the $`I`$-band HST fits to the $`B`$ passband. For the galaxy components we used the fitted $`B`$\- or $`R`$-band magnitudes, unless there were no ground-based data in the relevant band, in which case the median colour for the component was used.
The absolute $`M_{B\left(\mathrm{AB}\right)}`$ magnitudes for all three components derived in this manner are given in Table 3, along with the physical sizes for the disk and bulge components. For completeness, the monochromatic X-ray, radio and optical fluxes (at 2 keV, 5 GHz and 2500 Å respectively) of the point source component are also given. To derive the radio and X-ray luminosities we have assumed spectral indices of $`\alpha _\mathrm{R}=0.5`$ and $`\alpha _\mathrm{X}=1`$ in the radio and X-ray regimes. We have also assumed that all the radio and X-ray flux comes from the central component.
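The luminosity derivation implied here is standard; a minimal sketch under the power-law assumption $`f_\nu \propto \nu ^{-\alpha }`$ (for which the K-correction is $`(1+z)^{\alpha -1}`$); the numerical inputs in the comment are placeholders, not measurements from this sample:

```python
import numpy as np

MPC_IN_CM = 3.086e24  # cm per Mpc

def monochromatic_luminosity(f_nu, z, d_l_mpc, alpha):
    """Rest-frame monochromatic luminosity (erg s^-1 Hz^-1) from an
    observed flux density f_nu (erg s^-1 cm^-2 Hz^-1), assuming a power
    law f_nu ~ nu^-alpha, so the K-correction is (1+z)**(alpha - 1)."""
    d_l = d_l_mpc * MPC_IN_CM
    return 4.0 * np.pi * d_l**2 * f_nu * (1.0 + z)**(alpha - 1.0)

# e.g. alpha = 0.5 at 5 GHz and alpha = 1 at 2 keV (the values adopted
# above); note that for alpha = 1 the K-correction drops out entirely.
# L_5GHz = monochromatic_luminosity(1e-29, 0.1, 460.0, 0.5)
```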
4.2 Host Galaxy Properties
4.2.1 Luminosity
The histogram of host galaxy luminosities (corrected to the rest-frame $`B(\mathrm{AB})`$ passband) is plotted in Fig. 8. The luminosity range of the host galaxies is $`-23.1<M_{B\left(\mathrm{AB}\right)}<-18.3`$ with a median value of $`M_{B\left(\mathrm{AB}\right)}=-21.1`$. The mean value for the $`I`$-band nuclear-to-host luminosity ratio (also plotted in Fig. 8) is $`L_N/L_G=0.2`$, lower than that observed in previous samples of bright AGN. Over 75 per cent of our sample exhibit $`L_N/L_G<0.5`$. In contrast, McLure et al. (1999) obtain a median $`R`$-band value $`L_N/L_G=1.5`$ from their sample of nine radio-quiet AGN. At the low $`L_N/L_G`$ measured in this sample, any systematic effects introduced by the fitting procedure are, at most, at the 2-3 per cent level (see section 3.2). We conclude, therefore, that the low values for $`L_N/L_G`$ found here are unlikely to be an artifact of the fitting procedure.
To establish whether the properties of the AGN host galaxies are representative of the field galaxy population, we tested the luminosity distribution of the host galaxies in this sample against a control sample of galaxies from the Autofib redshift survey (Ellis et al. 1996). For each AGN host galaxy, ten galaxies with the same apparent magnitude ($`\pm 0.05`$ mag) were chosen at random and with replacement from the Autofib sample. A small random offset ($`-0.01<\delta z<0.01`$) was applied to each redshift in the Autofib sample to minimise the effects of clustering in this sample. If the luminosity distributions of the randomly-drawn Autofib sample and the AGN host sample were identical then we would expect the redshift distributions of the two samples to match one another. A Kolmogorov-Smirnov (K-S) test shows that the distributions are different at greater than the 99.9 per cent significance level (see Fig. 9a). The sense of the difference is that the AGN hosts are displaced toward higher redshifts, implying the hosts are more luminous than typical field galaxies as represented by the Autofib sample. By applying increasingly large magnitude offsets to the galaxies drawn at random from the Autofib sample, we were able to establish that the AGN hosts are brighter than the Autofib galaxies by $`0.75\pm 0.25`$ mag (at the 95 per cent confidence level).
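A minimal sketch of this resampling test follows; the array names are placeholders, and the real test of course requires the Autofib catalogue itself:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def control_redshifts(m_host, m_ctrl, z_ctrl, n_per=10, dm=0.05, dz=0.01):
    """Draw n_per control galaxies per AGN host, matched in apparent
    magnitude to within +/- dm and chosen with replacement, jittering
    each control redshift by +/- dz to suppress clustering; the result
    can then be compared to the host redshifts with a two-sample KS
    test (assumes every host has at least one magnitude match)."""
    z_out = []
    for m in m_host:
        idx = np.flatnonzero(np.abs(m_ctrl - m) < dm)
        picks = rng.choice(idx, size=n_per, replace=True)
        z_out.append(z_ctrl[picks] + rng.uniform(-dz, dz, size=n_per))
    return np.concatenate(z_out)

# stat, p = ks_2samp(z_host, control_redshifts(m_host, m_ctrl, z_ctrl))
```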
4.2.2 Morphology
The morphological types can be characterised according to the output parameters of the fitting procedure. In this scheme the fractional bulge luminosity $`B/T`$ is the primary classification parameter. We approximately follow Simien & de Vaucouleurs (1986) and define the E/S0 class by $`0.5\le B/T<1.0`$, Sab by $`0.3\le B/T<0.5`$ and Sbc by $`0.1\le B/T<0.3`$, and we define our own "Late" class as $`B/T<0.1`$. Fig. 10a) plots the histogram of the rest-frame $`B`$-band values of $`B/T`$ computed using the median $`(B-I)_{\mathrm{AB}}`$ galaxy colours.
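In code, this classification is a simple threshold scheme; the sketch below just reproduces the boundaries adopted above:

```python
def morphological_class(b_over_t):
    """Crude morphological type from the fractional bulge luminosity,
    using the B/T boundaries adopted above (approximately following
    Simien & de Vaucouleurs 1986)."""
    if b_over_t >= 0.5:
        return "E/S0"
    if b_over_t >= 0.3:
        return "Sab"
    if b_over_t >= 0.1:
        return "Sbc"
    return "Late"
```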
An alternative method is to classify the images visually, roughly according to the Hubble classification system, using the Hubble Atlas (Sandage 1961) as a reference. One difficulty with this approach is that some of the objects in this sample are dominated by a nuclear component, so that estimating the contribution of the bulge is problematic: the bulge and nuclear light are easily confused. This problem was dealt with by subtracting the point source component from the best-fit model and then re-evaluating the classifications. The effect of this re-evaluation was negligible. However, nine galaxies were so compact and/or dominated by the nuclear contribution that it was not possible to classify them with any degree of confidence. All of these objects were classified as "Late" for the purposes of the comparisons below. A comparison of the distributions of the profile-fitting and visual classifications (Figs 10a and b) shows no significant difference.
To test whether these distributions are characteristic of the field galaxy population at these magnitudes, we compared our visual classifications against those from 10 random samples generated from the Autofib survey using the method described above. The resulting histogram of morphological types for the Autofib sample is shown in Fig. 10c). We adopted our visual classifications for the purpose of this comparison since these are likely to be derived in a similar way to those of the Autofib survey. Using only the 4 rough classes defined above, the comparison between the Autofib and AGN host galaxy samples yielded $`\chi ^2=69`$ for 4 degrees of freedom. Clearly the AGN host galaxies are drawn from a different parent population than the general field population. AGN host galaxies in this sample tend to be of earlier type than the field. Remarkably, 55 per cent of the AGN host galaxies are of E/S0 type.
This percentage is similar to the fraction of early-type hosts identified amongst bright radio-quiet QSOs ($`M_B<-23`$) by both Bahcall et al. (1997) and McLure et al. (1999). Bahcall et al. (1997) identified 7 out of 14 of their radio-quiet AGN to have bulge luminosity profiles, while McLure et al. (1999) found elliptical galaxy fits were favoured over disk galaxy fits in seven out of the nine radio-quiet AGN they studied. Because the disk host galaxies were found preferentially around the low luminosity AGN in their sample, McLure et al. (1999) postulated that early-type hosts may be more prevalent amongst bright AGN. Within the statistical errors, our analysis would suggest that this is not the case; the frequency of early-type hosts is almost as high amongst our fainter sample as in the McLure et al. (1999) sample. This observation is also internally consistent within our own sample. Using the Spearman rank test, we find no significant correlation between $`B/T`$ and point source luminosity (see Fig. 11). A least squares fit to the data points in Fig. 11 formally gives a slope of $`0.0`$. At the very brightest nuclear magnitudes, our statistics are too poor to determine whether AGN inhabit exclusively bulge-dominated systems as suggested by McLure et al. We have only two AGN with $`M_{B\left(\mathrm{AB}\right)}(\mathrm{nucleus})<-23`$ in our sample, both of which have $`B/T>0.5`$.
In contrast, although Malkan et al. (1998) report that Seyfert 1s have earlier-type host galaxies than Seyfert 2s, the overall fraction of Seyfert 1 galaxies in E/S0 hosts in their HST imaging survey is much lower ($`\sim 20`$ per cent) than observed in this analysis. Thus the high incidence of early-type hosts for radio-quiet AGN may break down at the very lowest AGN luminosities ($`M_B>-20`$).
The observation that the AGN host galaxies are biased towards earlier types is also consistent with our observation that the absolute magnitudes of the host galaxies are brighter than the field population. Folkes et al. (1999) have recently derived the field galaxy luminosity function for different spectral types in the 2dF galaxy redshift survey. Based on almost 6000 galaxies, they obtain $`M_{B\left(\mathrm{AB}\right)}^{*}=-21.2`$ for early-type galaxies, $`0.7`$ mag brighter than the $`M_{B\left(\mathrm{AB}\right)}^{*}`$ for late-type galaxies. This is close to the median luminosity of the AGN host galaxies in this sample. Furthermore, the difference between the $`M_{B\left(\mathrm{AB}\right)}^{*}`$ derived for early and late-type galaxies in the 2dF survey is close to the observed luminosity difference between the AGN host galaxies and the random field sample.
We performed a variant of the earlier test with the Autofib sample to see whether the luminosity difference is consistent with the host galaxies being biased toward earlier spectral types. This time we selected galaxies at random from the Autofib sample with identical apparent magnitudes ($`\pm 0.05`$ mag) and spectral types. Since we were much more restricted in our choice of galaxy from the Autofib sample, we were only able to run this test with the same number of objects in the randomly-selected Autofib sample as in the EMSS sample (typically only 1-3 Autofib galaxies had the same apparent magnitude and morphology as a given EMSS host). We computed the KS probability for the two resultant redshift distributions being drawn from the same sample (see Fig. 9b). In this case the KS probability was $`P_{\mathrm{KS}}=0.75`$, i.e. there is no evidence that the AGN host galaxies in this sample have a different luminosity distribution when compared to the same morphological type distribution in the field. The difference in luminosity between the AGN host galaxies and the random field galaxy population is therefore a natural consequence of the bias towards earlier-type galaxies in this population.
4.2.3 Sizes
In Fig. 12 we have plotted the fitted disk and bulge scale lengths against galaxy luminosity. We have also plotted in this diagram the observed size/luminosity relations for ellipticals from Schade, Barrientos, & Lopez-Cruz (1997) and for spirals (Freeman 1970). The AGN host galaxies follow these relations surprisingly well, the large scatter being caused, in part, by the errors on the parameters in the fitting process.
We can straightforwardly compare the sizes of these host galaxies with those identified with HST by other authors. McLure et al. (1999), Boyce et al. (1997) and Bahcall et al. (1997) all give absolute magnitudes and effective radii or scale lengths for their favoured fit (bulge or disk) to the AGN host galaxies. In the comparison, we have only considered properties of the radio-quiet AGN observed by these authors. To minimise possible discrepancies arising from different fitting procedures, we used the results of the 2D-fitting process employed by all authors. Bahcall et al. (1997) and Boyce et al. (1997) both give host galaxy magnitudes in the $`V`$ passband. To convert these into the $`B(\mathrm{AB})`$ band we used the following relations:
$`B(\mathrm{AB})=V+0.78`$ (bulge)

$`B(\mathrm{AB})=V+0.42`$ (disk)
For McLure et al. (1999), we adopted the following transformations between their $`R`$ passband and the $`B(\mathrm{AB})`$ band.
$`B(\mathrm{AB})=R+1.41`$ (bulge)

$`B(\mathrm{AB})=R+0.99`$ (disk)
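These transformations amount to a small lookup table; a minimal sketch encoding the offsets quoted above:

```python
# Colour offsets quoted above, keyed by (source passband, component).
B_AB_OFFSET = {
    ("V", "bulge"): 0.78, ("V", "disk"): 0.42,
    ("R", "bulge"): 1.41, ("R", "disk"): 0.99,
}

def to_b_ab(mag, band, component):
    """Transform a published V- or R-band host galaxy magnitude to the
    B(AB) system using the adopted component-dependent offsets."""
    return mag + B_AB_OFFSET[(band, component)]
```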
Note that the absolute magnitudes derived by these authors correspond to the total galaxy luminosity fit by a single component, i.e. either bulge or disk, but not both as in this analysis. Thus the bulge or disk luminosities quoted by these authors will be systematically higher than the corresponding luminosities derived in this analysis, where a bulge plus disk model is fit simultaneously. In general, however, one or other of the components is likely to be dominant (particularly true for bulges) and so the offset will be small.
We have plotted the size-absolute magnitude distribution for these host galaxies alongside those for the EMSS sample in Fig. 12. Although these galaxies are clearly larger and more luminous on average than the EMSS sample, they follow the same relation as the EMSS AGN and exhibit a large overlap in their properties. From this diagram we conclude that AGN host galaxies exhibit a continuous range of properties, broadly correlated with their nuclear luminosity. To investigate this further, we now consider the detailed correlation between host galaxy and nuclear luminosity.
4.3 Host and Nuclear Properties
We have plotted in Fig. 13a) the rest-frame $`M_{B(\mathrm{AB})}`$ host galaxy absolute magnitude as a function of the point source $`M_{B(\mathrm{AB})}`$. Comparison with Fig. 3 gives an indication of the errors associated with the determination of the host galaxy and nuclear absolute magnitudes at various points in this diagram. There is a weak but significant correlation between the magnitude of the galaxy and that of the point source AGN component. A Spearman rank test yields a positive correlation at greater than the $`3\sigma `$ level. The least squares fit (slope $`=0.21`$) to the points with point source detections (filled circles) is also shown.
In Fig. 13b) we have included the data points from the radio-quiet AGN observed by McLure et al. (1999). In this case the $`R`$ magnitude for the nuclear component was transformed to the $`B`$ band by:
$`B(\mathrm{AB})=R+0.55`$
Although they lie at the high luminosity end of the distribution, the data points of McLure et al. (1999) are consistent with the trend seen in the EMSS sample.
We have also added the results of the Bahcall et al. (1997) and Boyce et al. (1997) analyses in Fig. 13c). These results are treated separately because the nuclear components in these studies were strongly saturated in the HST images, leading to some uncertainties in the photometry of the point source component. In this case the following relation was used to convert nuclear $`V`$-band magnitudes into the $`B(\mathrm{AB})`$ band.
$`B(\mathrm{AB})=V+0.19`$
Again, these points lie at the high luminosity end of the distribution, although in this case all the points appear systematically shifted toward lower host galaxy luminosities than the trend apparent in this analysis or in the observations of McLure et al. (1999). This could be caused by a strongly saturated nuclear image making it difficult to detect all the galaxy light, or simply the fact that any weak correlation between nuclear and host galaxy luminosity breaks down at the highest luminosities.
However, similar weak correlations have also been found by a number of other authors (e.g. Bahcall et al. 1997, McLeod et al. 1999). This has tentatively been ascribed to an underlying correlation between the bulge mass ($`M_{\mathrm{bulge}}`$) and black hole mass ($`M_{\mathrm{BH}}`$), where $`M_{\mathrm{BH}}=0.006M_{\mathrm{bulge}}`$ based on the observations of Magorrian et al. (1998). Translating this into a correlation between bulge and nuclear absolute magnitudes, the approximate relation can be obtained (see McLeod et al. 1999):
$`M_{B(AB)_{\mathrm{AGN}}}=M_{B(AB)_{\mathrm{Bulge}}}-6.0-2.5\left[\mathrm{log}\epsilon +\mathrm{log}\left(\frac{\mathrm{\Upsilon }_{B(AB)}}{10M_{\mathrm{}}/L_{\mathrm{}}}\right)+\mathrm{log}\left(\frac{f}{0.006}\right)-\mathrm{log}\left(\frac{\mathrm{BC}}{10}\right)\right]`$
where $`\epsilon `$ is the ratio of the nuclear luminosity to the Eddington luminosity, $`\mathrm{\Upsilon }_{B\left(AB\right)}`$ is the mass-to-light ratio in the $`B\left(AB\right)`$ band, BC is the bolometric correction from the $`B\left(AB\right)`$-band luminosity to the total luminosity of the AGN, and $`f`$ is the fraction of the spheroid mass in the black hole. The normalisation constants are the typical observed values for each of these parameters.
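As a worked illustration (not part of the original analysis), the relation can be evaluated directly; the function below simply encodes the equation above with its default normalisations:

```python
import math

def m_agn_predicted(m_bulge, eps, upsilon=10.0, f=0.006, bc=10.0):
    """Nuclear B(AB) magnitude implied by the relation above for a
    bulge magnitude m_bulge, Eddington ratio eps, mass-to-light ratio
    upsilon (solar units), black-hole-to-spheroid mass fraction f and
    bolometric correction bc."""
    return m_bulge - 6.0 - 2.5 * (math.log10(eps)
                                  + math.log10(upsilon / 10.0)
                                  + math.log10(f / 0.006)
                                  - math.log10(bc / 10.0))

# With the default normalisations, a bulge of M = -20 radiating at
# L = 0.1 L_Edd gives m_agn_predicted(-20.0, 0.1) = -23.5.
```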
Substituting these default values for $`\mathrm{\Upsilon }_{B(AB)}`$, BC and $`f`$, the correlation expected for a constant Eddington ratio (in this case $`L=0.1L_{\mathrm{Edd}}`$) is shown in Fig. 13d), where we have now plotted bulge luminosity against nuclear luminosity for the EMSS sample. Again we find a correlation which is significant at the 99 per cent confidence level (based on the Spearman rank test), with a least squares slope of 0.36. Based on the black hole model described above, the lower bound of the correlation is consistent with an inferred Eddington ratio of $`\sim 5`$ per cent, with most AGN radiating significantly below this limit.
The correlation observed between the host galaxy and nuclear components is, of course, very much flatter than that given by a single Eddington ratio. Such a flat correlation could be explained by appealing to the fact that lower luminosity AGN preferentially radiate at lower Eddington ratios. This would naturally explain the inference that brighter AGN appear to radiate at Eddington ratios up to 20 per cent (McLure et al. 1999, McLeod et al. 1999); see also Figs 13b) and c).
Perhaps the greatest concern over any correlation is that it may simply be an artifact of the detection limits of our analysis procedure. For example, bright galaxies (in particular bright bulges) might mask the existence of a weak point source. Equally bright point source components might hide faint galaxies (in particular small bulges). Thus the areas of Fig. 13 which might be selected against are precisely those areas in which no data points are seen.
The simulations (section 3.2) that were done to estimate the errors are indicative rather than comprehensive. Nevertheless, they show that relatively faint galaxies can be detected in the presence of a strong point source and that relatively weak point sources can be detected in the presence of bright galaxies (see Fig. 3). These results suggest that the correlation between host galaxy and nuclear luminosity is unlikely to be due to selection effects in the fitting process.
4.4 Interactions
In marked contrast to previous studies of bright AGN (Bahcall et al. 1997, Boyce et al. 1997), few, if any, of the AGN in this study show evidence for interaction or a strong excess of close companions. The latter result is hardly surprising, since Smith et al. (1995) have already demonstrated that the excess number of galaxies around $`z<0.3`$ AGN in the EMSS is consistent with the clustering strength of field galaxies. Similar results are also reported for Seyfert 1 galaxies (Dultzin-Hacyan et al. 1999).
The frequency of mergers in this sample is harder to put on a quantitative basis. Nevertheless the fit residuals (see Table 2) show little evidence for significant post-merger/interaction activity (e.g. disrupted morphologies, tidal tails etc.). Inner bars and weak spiral structure are the most common residual features seen. As noted by McLure et al. (1999), a definitive measure of the extent to which AGN activity is accompanied by evidence for interactions awaits a detailed study of the level of activity in otherwise "normal" galaxies. A low incidence of tidal tails/multiple nuclei ($`<10`$ per cent) was also noted by Malkan et al. (1998) in their imaging study of lower luminosity Seyfert galaxies.
It is certainly true that the limited depth of our 600-sec HST exposures could lead us to miss low-level residuals implying post-merger activity. Nevertheless, the level of strong interactions seen in this low-luminosity AGN sample ($`<5`$ per cent) is much less than has been seen in similar studies of brighter AGN. One possible interpretation is that interactions do not play as strong a role in fuelling lower luminosity AGN ($`M_B>-23`$). Another explanation might be that lower luminosity AGN represent a more advanced stage of the AGN evolutionary process, i.e. the AGN declines in luminosity with time from the merger event which initially fuelled it.
Unfortunately, in the absence of a similarly detailed morphological study of "normal" galaxies, it is impossible to determine whether AGN do, in fact, show any strong evidence for enhanced merger/interaction activity compared to the field galaxy population. From this study, the indication is that there is little, if any, evidence for such activity.
5 CONCLUSIONS
We have carried out a systematic ground- and HST-based imaging study of a large sample of nearby AGN. The X-ray selection of the initial sample minimises any optical morphological bias. Although on average ten times fainter than many previous samples of nearby AGN imaged with HST, the objects studied here comprise the bulk of local AGN, with space densities up to 100 times higher than their more luminous counterparts. As such they are responsible for the vast majority of the AGN luminosity density in the local Universe.
We find that the properties of the host galaxies of these AGN are much more "normal" than those of more luminous AGN/QSOs. The host galaxies follow the observed size-luminosity relations for bulges and disks, with sizes typically $`\sim 10h_{50}^{-1}`$ kpc. The host galaxies span a wide range in luminosity, with a median luminosity of $`M_{B\left(\mathrm{AB}\right)}=-21.5`$. All but one of the host galaxies are detected with $`M_{B\left(\mathrm{AB}\right)}<-18`$.
Compared to a random sample of field galaxies at these redshifts, the host galaxies are biased towards early morphological types (E, S0). This is consistent with the observation that the host galaxies are also $`0.75\pm 0.25`$ mag brighter than field galaxies at $`z<0.15`$. The median luminosity of the sample is also consistent with the most recent estimates of $`L^{*}`$ for early spectral types.
There is a weak correlation between the host galaxy and nuclear luminosity, the origin of which may be due to the underlying energy generation mechanism. Assuming the standard black hole model for energy generation in AGN and the derived relation between spheroid and black hole mass, the AGN in this study typically radiate at or below a few per cent of their Eddington luminosity.
There is no evidence for any enhanced merger activity/interactions in this sample of objects. The host galaxies of these AGN thus appear to represent a rather typical subset of "normal" galaxies in the local Universe, albeit biased towards bulge-dominated objects.
When combined with HST imaging studies of brighter AGN, it is clear that the properties of AGN host galaxies form a continuous distribution over all sizes and luminosities. The host galaxies of AGN are not unusual with respect to the overall galaxy population. Galaxies with luminosities $`L^{*}`$ and fainter are capable of harbouring an AGN. Indeed, the correlation which leads the brighter AGN to be found in the larger, more luminous galaxies also implies that the fainter AGN that comprise the bulk of the population in the local Universe will be found in normal galaxies. The underlying parameter driving this correlation may be bulge mass and/or energy generation efficiency.
The HST continues to provide a wealth of information on AGN host galaxies at low redshifts. However, the vast majority of low redshift AGN imaged to date are only of moderate luminosity ($`-24<M_B<-18`$). Even the most luminous of low redshift AGN ($`M_B\sim -25`$) are still significantly fainter than the typical "break"-luminosity QSOs ($`M_B=-26`$) at $`z\sim 2`$, where QSO activity reaches its peak. One of the next major observational steps will therefore be the extension of similarly comprehensive AGN imaging studies to high redshift. It is only by considering unbiased samples over as wide a luminosity range as possible that we can hope to disentangle the relationship between the large-scale (the host galaxy and its environment) and the small-scale (the nucleus and energy generation mechanism) phenomena in AGN. It is to be hoped that the combination of large aperture and outstanding image quality provided by new ground-based telescopes such as Gemini will yield major advances in this field in the near future.
ACKNOWLEDGEMENTS
The observations were obtained with the Jacobus Kapteyn Telescope at the Observatorio del Roque de los Muchachos, operated by the Royal Greenwich Observatory, the 40-inch telescope at Siding Spring, operated by the Research School of Astronomy and Astrophysics, Australian National University, and with the Hubble Space Telescope, operated by STScI. BJB acknowledges the hospitality of the Dominion Astrophysical Observatory. We are indebted to Matthew Colless for supplying the Autofib survey galaxy catalogue in digital format. We thank Nicholas Ross and Danielle Frenette for their work on the morphological classifications.
REFERENCES
Adams T.F., 1977, ApJS, 33, 19
Avni Y., Tananbaum H., 1986, ApJ, 305, 83
Bahcall J.N., Kirhakos S., Saxe D.H., Schneider D.P., 1997, ApJ, 479, 658
Boyce P.J. et al., 1997, MNRAS, 298, 121
Boyle B.J., McMahon R.G., Wilkes B.J., Elvis M. 1995, MNRAS, 276, 315
Boyle B.J., Shanks T., Peterson B.A., 1988, MNRAS, 235, 935
Dultzin-Hacyan D., Krongold Y., Fuentes-Guridi I., Marziani P., 1999, ApJ, 513, 111
Dunlop J.S., Taylor G.L., Hughes D.H., Robson E.I., 1993, MNRAS 264, 455
Ellis R.S., Colless M., Broadhurst T., Heyl J., Glazebrook K., 1996, MNRAS, 280, 235
Folkes S. et al., 1999, MNRAS, 308, 459
Freeman K. 1970, ApJ, 160, 811
Kotilainen J., Ward M.J., 1994, MNRAS, 266, 953
Green R., Schmidt M., Liebert J., 1986, ApJS, 61, 305
Landolt A.U., 1992, AJ, 104, 340
Maccacaro T., Wolter A., McLean B., Gioia I., Stocke J.T., Della Ceca R., Burg R., Faccini R., 1994, Ap. Lett. & Comm., 29, 267
Magorrian J. et al., 1998, AJ, 115, 2285
MacKenty J.W., 1990, ApJS, 72, 231
Malkan M.A., Margon B., Chanan G.A., 1984, ApJ, 280, 66
Malkan M.A., Gorjian V., Tam R., 1998, ApJS, 117, 25
McLeod K.K., Rieke G.H., 1994, ApJ, 431, 137
McLeod K.K., Rieke G.H., Storrie-Lombardi L.J., 1999, ApJ, 511, 67
McLure R.J., Dunlop J.S., Kukula M.J., Baum S.A., OโDea C.P., Hughes D.H., 1999, MNRAS, 308, 377
Peacock J.A., Miller L., Mead A.R.G., 1986, MNRAS, 218, 265
Piccinotti et al. 1982, ApJ, 253, 485
Sandage A., 1961, The Hubble Atlas of Galaxies, (Carnegie Institution: Washington)
Schade D.J., Barrientos L., Lopez-Cruz O. 1997, ApJ, 477, 17
Schade D.J., Lilly S.J., Le Fรจvre O., Hammer F., Crampton D. 1996, ApJ, 464, 79
Simien F., de Vaucouleurs G., 1986, ApJ, 302, 564
Simkin S.M., Su H.J., Schwarz M.P., 1980, ApJ, 237, 404
Smith E.P., Heckman T.M., Bothun G.D., Romanishin W., Balick B., 1986, ApJ, 306, 64
Smith E.P., Heckman T.M., 1990, ApJ, 348, 38
Smith R.J., Boyle B.J., Maddox S.J. 1995, MNRAS, 277, 270
Stetson, P., 1987, PASP, 99, 191
Stocke J.T. et al., 1991, ApJS, 76, 813
Taylor G.L., Dunlop J.S., Hughes D.H., Robson E.I., 1996, MNRAS, 283, 930
Terlevich R., Tenorio-Tagle G., Franco J., Melnick J., 1992, MNRAS, 255, 713
Véron-Cetty M.P., Woltjer L., 1990, A&A, 236, 69
Véron-Cetty M.P., Véron P., 1997, A Catalogue of Active Galactic Nuclei, 7th Edition
Zitelli V., Granato G.L., Mandolesi N., Wade R., Danese L., 1993, ApJS, 84, 185
This paper has been produced using the Blackwell Scientific Publications macros.
# Unmasking the tail of the cosmic ray spectrum
## Abstract
A re-examination of the cosmic ray energy spectrum above $`10^{20}`$ eV is presented. The overall data-base provides evidence, albeit still statistically limited, that non-nucleon primaries could be present at the end of the spectrum. In particular, the possible appearance of superheavy nuclei (seldom discussed in the literature) is analysed in detail.
The origin and nature of cosmic radiation have been a constant source of mystery and discovery since 1949. Most notably, Greisen, Zatsepin and Kuz'min (GZK) pointed out that extremely high energy cosmic rays (usually assumed to be nucleons or nuclei) undergo reactions with the pervasive microwave background radiation (MBR), yielding a steep drop in their energy attenuation length. Specifically, the energy of any proton above 50 EeV is degraded by resonant scattering via $`\gamma +p\rightarrow \mathrm{\Delta }\rightarrow p/n+\pi `$, and heavy nuclei with energies above a few tens of EeV get attenuated mainly by photodisintegration off the MBR and intergalactic infrared background photons (IR). Over the last few years, several giant air showers have been detected which confirm the arrival of particles with energies $`\gtrsim 100`$ EeV, that is, above the GZK cutoff (see for a recent survey). Many models have been proposed as source candidates of such high energy events; however, it is not known for certain at the present time where the rays originate.
In revealing their origin, the observed anisotropy of these cosmic rays is one of the most useful features. Very recently, the Fly's Eye and Akeno Giant Air Shower Array (AGASA) experiments reported a small but statistically significant anisotropy $`\mathcal{O}(4\%)`$ in the cosmic ray flux towards the galactic plane at energies around 1 EeV. With increasing energy the picture looks rather different: although at $`E>40`$ EeV an enhancement of the flux from the Supergalactic plane was reported, the arrival directions above 100 EeV are best described as isotropic, without any imprint of correlation with the galactic plane or Supergalactic plane. There are two extreme explanations for this puzzle: i) the bunch of nearby sources follows an isotropic distribution (which could hardly be the case); ii) one source (or a few) dominates at the highest energies whilst the background fields of the intergalactic medium strongly modify the particle propagation. For the latter explanation, it was suggested that a Galactic wind akin to the solar wind could bend all the orbits of the highest energy cosmic rays towards the Virgo cluster (VC). Actually, if one assumes that these particles are protons, then, except for the two highest energy events (the one recorded at AGASA and the super-GZK event reported by the Fly's Eye group), all trajectories can be traced to within less than about 20 degrees from Virgo. (The highest energy Yakutsk event was excluded from this sample because of the great uncertainty in its energy determination: while first estimates suggested a primary energy around 120 EeV, a re-estimation of the number of charged particles at 600 m from the shower core yields a possible primary energy of 300 EeV.)
At the highest energies, observed extensive air showers seem to be consistent with nucleon primaries, but, due to the poor statistics and large fluctuations from shower to shower, an accurate determination of the particle species is not possible at the moment. Furthermore, extensive air shower simulations depend to some extent on the hadronic interaction event generator, which complicates the interpretation of the data even more. Interestingly enough, however, the muon component of the highest energy AGASA event agrees with the expectation extrapolated from lower energies. Indeed, a population of piled-up protons is expected at 50 EeV, and the picture seems quite consistent. On the other hand, the Fly's Eye event occurs high in the atmosphere, and, although a primary proton cannot be excluded, a heavy nucleus more closely fits its shower development.
It is widely believed that the cosmic ray spectrum beyond the "crossover energy" (the energy at which the local spectrum becomes comparable to or less than the cosmological component) could be associated with the presence of a particularly bright extragalactic, though relatively nearby, source superimposed on a cosmological diffuse background. In Fig. 1 we show the evolved energy spectrum of nucleons assuming a cosmologically homogeneous population of sources (usually referred to as the universal hypothesis, UH), together with a compilation of recent air shower data. In addition, we show the modified spectrum for the case of an extended source described by a Gaussian distribution of width 2 Mpc at a distance of 18.3 Mpc (see for details). Assuming that there is no other significant energy loss mechanism beyond interactions with the MBR for cosmic rays traversing parts of the cluster, this could be taken as a very crude model of Virgo. It is important to stress that for a galactic magnetic field $`B_{\mathrm{gal}}=7\mu `$G (as in ) which extends $`R_{\mathrm{halo}}\sim 1.5`$ Mpc into the galactic halo, the mean flight time of the protons during their trip through the Milky Way is $`5.05\times 10^6`$ yr. This means that the bending does not add substantially to the travel time, and the continuous energy loss within the straight line approximation is expected to be reasonable for the problem at hand. From Fig. 1 we see that the spectrum of the VC successfully reproduces the AGASA data above 100 EeV. However, it apparently cannot account for the super-GZK Fly's Eye event. The interpretation that we give for this result is that, without specific knowledge of the chemical composition, the best guess is that at the end of the spectrum two different types of characters are playing. (We remark that the AGASA data could also be reproduced if sources of ultra high energy protons trace the inhomogeneous distribution of luminous matter in the local present-epoch universe.)
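The flight-time statement above is essentially a unit conversion; a minimal sketch (the 1.08 kpc coefficient is the standard Larmor-radius conversion, and the numbers in the closing comment are the values quoted above):

```python
def larmor_radius_kpc(e_eev, z, b_micro_gauss):
    """Larmor radius (kpc) of a relativistic nucleus of energy e_eev
    (in units of 10^18 eV) and charge z in a field of b_micro_gauss
    microgauss, from r_L = E/(ZeB)."""
    return 1.08 * e_eev / (z * b_micro_gauss)

def crossing_time_yr(r_mpc):
    """Light-crossing time (yr) of a region r_mpc across."""
    return r_mpc * 3.262e6

# crossing_time_yr(1.5) -> ~4.9e6 yr, close to the quoted flight time of
# 5.05e6 yr, so the magnetic bending lengthens the path only slightly.
```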
At this stage, it is interesting to note that the measured density profile of the highest energy Yakutsk event (excluded in the analysis of ) shows a huge number of muons. Remarkably, its arrival direction coincides, within the angular resolution, with that of the 300 EeV Fly's Eye event, possibly indicating a common origin. If this is the case, the almost completely muonic nature of this event, recently associated with a dust grain impact, could be, perhaps, the signature of a super-heavy nucleus.
It has been generally thought that $`{}^{56}`$Fe is a significant end product of stellar evolution and that higher mass nuclei are rare in the cosmic radiation. Strictly speaking, the atomic abundances of middle-weight ($`60\le A<100`$) and heavy-weight ($`A>100`$) elements are approximately 3 and 5 orders of magnitude lower, respectively, than that of the iron group. The synthesis of the stable super-heavy nuclides is classically ascribed to three different stellar mechanisms referred to as the s-, r-, and p-processes. The s-process results from the production of neutrons and their capture by pre-existing seed nuclei on time scales longer than most $`\beta `$-decay lifetimes. There is observational evidence that such a process is presently at work in a variety of chemically peculiar Red Giants and in special objects like FG Sagittae or SN1987A. The abundance of well developed nuclides peaks at mass numbers $`A=138`$ and $`A=208`$. The neutron-rich (or r-) nuclides are synthesized when seed nuclei are subjected to a very intense neutron flux, so that $`\beta `$-decays near the line of stability are far too slow to compete with the neutron capture. It has long been thought that appropriate r-process conditions could be found in the hot ($`T\sim 10^{10}`$ K) and dense ($`\rho \sim 10^{10}-10^{11}`$ g cm$`{}^{-3}`$), neutron-rich (neutronized) material located behind the outgoing shock in a type II supernova event. Its abundance distribution peaks at $`A=130`$ and $`A=195`$. The neutron-deficient (or p-) nuclides are 100-1000 times less abundant than the corresponding more neutron-rich isobars, while their distribution roughly parallels the s- and r-nuclide abundance curve. It is quite clear that these nuclides cannot be made by neutron capture processes. It is generally believed that they are produced from existing seed nuclei of the s- or r-type by addition of protons (radiative proton captures), or by removal of neutrons (neutron photodisintegration). The explosion of the H-rich envelopes of type II supernovae has long been held responsible for the synthesis of these nuclides.
In light of the above, starbursts appear (hopefully) as the natural sources able to produce relativistic super-heavy nuclei. These astrophysical environments are supposed to comprise a considerable population of O and Red Giant stars, and we believe the supernova rate is as high as 0.2-0.3 yr$`{}^{-1}`$. Of special interest here, the arrival directions of the Fly's Eye and Yakutsk super-GZK events ($`b=9.6^{\circ }`$, $`l=163^{\circ }`$ and $`b=3^{\circ }`$, $`l=162^{\circ }`$) seem to point towards the nearby metal-rich galaxy M82 ($`b=41^{\circ }`$, $`l=141^{\circ }`$), which has been described as the archetypal starburst galaxy and as a prototype of superwind galaxies. The joint action of the galactic wind and the galactic magnetic field during particle propagation could certainly account for the required 37$`^{\circ }`$ deflection. In addition, it was recently suggested that within this type of galaxy, iron nuclei can be accelerated to extremely high energies if a two-step process is invoked. In a first stage, ions are diffusively accelerated up to a few PeV at single supernova shock waves in the nuclear region of the galaxy. Since the cosmic ray outflow is convection dominated, the typical residence time of the nuclei in the starburst is $`t\sim 1\times 10^{11}`$ s. Thus, the total path traveled is substantially shorter than the mean free path (which scales as $`A^{2/3}`$) of a super-heavy nucleus (for details see ). Those which are able to escape from the central region without suffering catastrophic interactions could eventually be re-accelerated to superhigh energies at the terminal shocks of the galactic superwinds generated by the starburst. The efficiency of the mechanism improves as the charge number $`Z`$ of the particle is increased. For this second step in the acceleration process, the photon field energy density drops to values of the order of the cosmic background radiation (we are now far from the starburst region). The dominant mechanism for energy losses in the bath of the universal cosmic radiation is the photodisintegration process. Notice that the energy loss rate due to photopair production can be estimated to be $`Z^2/A`$ times higher than that of a proton with the same Lorentz factor, and thus can be safely neglected. The disintegration rate $`R`$ (in the system of reference where the MBR is at $`2.73`$ K) of an extremely high energy nucleus with Lorentz factor $`\mathrm{\Gamma }`$, propagating through an isotropic soft photon background, reads (primed quantities refer to the rest frame of the nucleus):
$$R=\frac{1}{2\mathrm{\Gamma }^2}\int _0^{\mathrm{\infty }}d\epsilon \frac{n(\epsilon )}{\epsilon ^2}\int _0^{2\mathrm{\Gamma }\epsilon }d\epsilon ^{\prime }\epsilon ^{\prime }\sigma (\epsilon ^{\prime }),$$
(1)
where $`\sigma `$ stands for the total photon absorption cross section. The density of the soft photon background $`n(\epsilon )`$ can be modeled as the sum of: i) the MBR component, which follows a Planckian distribution of temperature $`2.73`$ K; ii) the IR background photons as estimated in ; iii) a black body spectrum with $`T=5000`$ K and a dilution factor of $`1.2\times 10^{-15}`$ to account for the optical (O) photons. The total photon absorption cross section is characterized by a broad maximum, designated as the giant resonance, located at an energy of 12-20 MeV depending on the nucleus under consideration. For the medium and heavy nuclei, $`A\gtrsim 50`$, the cross section can be well represented by a single, or in the case of the deformed nuclei, by the superposition of two Lorentzian curves of the form
$$\sigma (\epsilon ^{\prime })=\sigma _0\frac{\epsilon ^{\prime 2}\mathrm{\Gamma }_0^2}{(\epsilon _0^2-\epsilon ^{\prime 2})^2+\epsilon ^{\prime 2}\mathrm{\Gamma }_0^2}.$$
(2)
In order to make some estimates, we hereafter base our calculations on a gold nucleus (the resonance parameters are listed in Table I). In Fig. 2 we show the $`{}^{197}`$Au photodisintegration rate due to interactions with the starlight and relic photons. At the highest energies, the energy losses are dominated by collisions with the tail of the $`2.73`$ K Planckian spectrum. It is straightforward to show that a super-heavy nucleus of a few hundred EeV emitted by M82 can traverse the primeval radiation almost unscathed to produce an extensive air shower after interaction with the Earth's atmosphere.
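For concreteness, Eqs. (1) and (2) can be evaluated numerically. The sketch below keeps only the MBR term of the photon background, and the $`{}^{197}`$Au resonance parameters are illustrative stand-ins for the Table I entries rather than the values actually used:

```python
import numpy as np
from scipy.integrate import quad

C = 2.998e10           # speed of light, cm/s
KT = 2.35e-10          # MeV, kT of the 2.73 K background
HBARC = 197.327e-13    # MeV cm

# Assumed single-Lorentzian giant-resonance parameters for 197Au:
# peak cross section sigma_0 (cm^2), centroid and width (MeV).
SIG0, E0, G0 = 560e-27, 13.7, 4.6

def sigma(ep):
    """Photoabsorption cross section of Eq. (2)."""
    return SIG0 * ep**2 * G0**2 / ((E0**2 - ep**2)**2 + ep**2 * G0**2)

def n_mbr(eps):
    """Planck photon number density (cm^-3 MeV^-1) of the MBR alone;
    the IR and optical components discussed above are omitted."""
    return eps**2 / (np.pi**2 * HBARC**3 * np.expm1(eps / KT))

def rate(gamma):
    """Disintegration rate (s^-1) from Eq. (1); the factor c converts
    the natural-units expression into inverse seconds."""
    def outer(eps):
        inner, _ = quad(lambda ep: ep * sigma(ep), 0.0, 2.0 * gamma * eps)
        return n_mbr(eps) / eps**2 * inner
    val, _ = quad(outer, KT / 20.0, 50.0 * KT, limit=200)
    return C * val / (2.0 * gamma**2)
```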
Additional support for the superheavy nucleus hypothesis comes from the CASA-MIA experiment (see in particular Fig. 9). The cosmic ray data collected between $`10^{14}`$ and $`10^{16}`$ eV tend to favor a supernova shock wave acceleration scenario. The average mass increases with energy, becoming heavier above $`10^{15}`$ eV. At the maximum energy the results are consistent at the 1$`\sigma `$ level with nuclei heavier than iron. However, "lore" has settled down some comparisons of the admittedly limited ultra high energy cosmic ray sample against hadronic-interaction event generators which predict the arrival of particle species heavier than iron. We would like to stress that since simulations are used to interpret the data, and then the data are used to modify the simulation, one has to be very careful, and, as we have shown, it is by no means clear that superheavy nuclei could not be present at the end of the spectrum.
The energy spectrum of nearby (around 3 Mpc) nuclear sources was discussed elsewhere. The analysis showed that particles tend to pile up between 240 and 270 EeV. This bump-like feature is followed by a simultaneous drop in the cosmic ray flux in the preceding bins of energy, changing the relative detection probabilities. As a consequence, particles in the pile-up are 50% more probable than those at lower energies.
In summary, the recently reported AGASA data can be successfully reproduced by a power law spectrum of nucleons hailing from the VC superimposed on a cosmological diffuse background. The Fly's Eye observations may also fit this scenario, albeit with large errors. One might also consider less likely astrophysical sources. In particular, our analysis seems to indicate that the next-door galaxy M82 could be responsible for some events at the end of the CR spectrum. This has also been suggested elsewhere. At least some of these super-GZK events could be due to heavy, and even superheavy, nuclei. Clearly, more data is needed before this hypothesis can be verified. In this regard, the coming avalanche of high quality cosmic ray observations at the Southern Auger Observatory will provide new insights into the ideas discussed in this letter.
Note added: After we finished this work, it was argued that the Galactic wind model assumed in Ref. is alone responsible for the focusing of positive particles towards the North galactic pole. Therefore the apparent clustering of the back-traced CR cannot be interpreted as evidence for a point source, this point source being identified as M87. It should be pointed out that the main input parameters for the determination of the CR spectrum in Fig. 1 are the spectral index of the source and the propagation distance of the nucleons in the extragalactic medium. The Galactic wind model is just used to collect all the traces into one single direction in the sky. Therefore the discussion presented in this letter strongly supports older suspicions regarding M87, like the model proposed in Ref.
In closing, we wish to thank Gustavo Medina Tanco for a fruitful discussion. The research of LAA was supported by CONICET. MTD was supported by CONICET and Fundación Antorchas. TPM-SR-JDS were supported by the National Science Foundation.
# Where the Baryons Are
## 1 Big Dave and the Big Bang
David Schramm loved baryons. He was fond of a few special baryons (those pesky ultra-high-energy cosmic rays, a few isotopes here and there in meteorite inclusions, the crystalline baryons above Aspen), but his greatest love was for the light elements, unmatched for their sheer quantity and energy. Dave's whole personality expressed abundance, and his appetite was a match for an entire Hubble volume of hydrogen and helium, flavored with a little deuterium and helium-3, and a small but soothing trace of lithium-7. I think the deuterium was always his favorite kind; he was always looking for it and often phoned me up to ask if I'd found any more (or any less) recently.
## 2 The Baryon Budget
Tracking down baryons is of course a serious scientific issue. According to Standard Big Bang Nucleosynthesis, the composition of primordial matter depends on only one thing, the total amount of baryonic matter; so critical tests of cosmological theory revolve around nuclear abundances in primordial matter and the total density of baryons.
Table 1 shows a simple breakdown of the estimated density of baryons at the present epoch in all the forms where we have reasonably good direct or indirect estimates of their mean density. The typical errors in these estimates are still about a factor of two but the prospects are good for making these smaller. I summarize here the main issues; for more detail, discussion of errors, dependence on the Hubble constant, and detailed references to document the arguments below, the reader is referred to Fukugita, Hogan and Peebles (1998).
Table 1. Summary of baryon components today for $`h_{70}=1`$
| Baryonic Component | $`\mathrm{\Omega }_i\times 10^3`$ | Source of Estimate |
| --- | --- | --- |
| Spheroid Stars | 2.6 | Luminosity Density, $`M/L`$ |
| Disk Stars | 0.86 | Luminosity Density, $`M/L`$ |
| Neutral Atomic Gas | 0.33 | HI 21 cm surveys, Lyman-$`\alpha `$ absorption |
| Molecular Gas | 0.30 | CO surveys of galaxies |
| Ionized gas in clusters | 2.6 | X-ray emission |
| Ionized gas around groups | 14 | Soft X-rays, extrapolation from clusters |
| Total at $`z=0`$ | 21$`\times 2^{\pm 1}`$ | |
Some baryons are easy to spot, the most obvious being the shining stars. To estimate the density of matter in stars, we need to know the luminosity density and the mass-to-light ratio, both taking into account a particular waveband. The first quantity is directly measured to an accuracy of better than about 20% in blue light, but the mass-to-light ratio $`M/L`$ is not directly measured. For a single star of known mass, $`M/L`$ is known theoretically, but a stellar population contains some mixture of masses as well as a mass of dead remnants such as degenerate dwarfs, neutron stars and black holes which have accumulated over time. The traditional (over)simplified approach is to split stellar populations into two kinds, "disk" and "spheroid", treating each one as if it were homogeneous, and splitting the light of galaxies up as belonging to one type or another. Disk populations are those associated with some current star formation, spheroid populations have had little star formation for about a Hubble time. Thus, the $`M/L`$ of a disk population can be estimated from our own region of the Milky Way, where we can actually count the faint stars that dominate the mass individually (and verify that the integral at low mass seems to turn over and converge); we can also estimate the total disk mass dynamically from vertical Oort oscillations in the disk. Spheroid populations can similarly be studied dynamically in the central parts of elliptical galaxies where the dark matter is negligible. The $`M/L`$ in blue light estimated in this way is about 6 and 1.5 respectively in solar units for the spheroid and disk populations, with errors of about 30%. Reassuringly, these numbers agree with those estimated from a priori models of the populations; the spheroid populations are heavier because their bright blue stars have burned out, and because they have more dark remnants.
While the total blue light coming from the two types of populations is comparable (slightly greater from disk stars, by about 30%), the amount of mass is about three times bigger in spheroid populations. These numbers are similar to those derived for many years by similar techniques. Further progress will come from a more sophisticated and detailed modeling of the stellar populations.
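The arithmetic behind the stellar entries of Table 1 is simple; in the sketch below the blue luminosity density and its disk/spheroid split are illustrative round numbers chosen to reproduce the quoted results, not the values actually adopted:

```python
J_B = 1.4e8         # blue luminosity density, L_sun Mpc^-3 (assumed)
RHO_CRIT = 1.36e11  # critical density, M_sun Mpc^-3, for h_70 = 1

def omega_stars(j_b, m_over_l):
    """Density parameter of a stellar population with the given blue
    luminosity density and mass-to-light ratio (solar units)."""
    return j_b * m_over_l / RHO_CRIT

# With ~57 per cent of the blue light in disks and ~43 per cent in
# spheroids: omega_stars(0.57 * J_B, 1.5) ~ 0.9e-3 (disk stars) and
# omega_stars(0.43 * J_B, 6.0) ~ 2.7e-3 (spheroid stars), close to
# the Table 1 entries.
```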
The second category is atomic gas. Here we start with a coarse census by quasar Lyman-$`\alpha `$ line absorption, a fairly unbiased sampling of all the atoms by random background light sources. Their statistics reveal that almost all the atoms are in high-column-density clouds, whose total atomic mass can be measured by HI 21 cm hyperfine emission. Unbiased surveys show that such high-column clouds are almost always associated with galaxies (although the stellar content is sometimes very low surface brightness). The error in this component is therefore rather small (less than 30%) because we have a good direct census of almost all the atoms. Note that it is only one-tenth the mass in stellar populations, although in the disk galaxies where it resides the atomic gas comprises on average about a third of the stellar mass.
The third category is molecular gas. This component is denser and cooler than the atomic one, and very closely correlated spatially with regions of star formation. The estimate of molecular gas mass is very uncertain; it is based on a rather small and relatively uncontrolled sample of extragalactic detections, where the ratio of CO to HI mass is measured. The extrapolation to total molecular density (dominated by $`\mathrm{H}_2`$) is uncertain, as is the extrapolation to the general galaxy population. However the main qualitative result is almost certainly correct, that this component is comparable overall to the atomic phase; the two taken together are almost the same as the mass in stars in the galaxies where they reside.
This coincidence probably reflects a real physical connection. In most galaxies these three components (stars, atomic and molecular gas) probably form a coupled and self-regulating system of star formation from gas, with gas and energy being returned from the stellar population. The spheroid population results when the gas is used up or blown away, which can be seen happening in starbursts today and which happened some time ago for most of the baryonic mass in the central parts of galaxies.
The bulk of the baryons however seem never to have made it into the galaxies. We see the best evidence of this in galaxy clusters, in which the bulk of the baryons is seen as diffuse ionized gas. In these settings the gas is hot and dense enough to detect in X-ray emission as well as Comptonization of the microwave background radiation (the "Sunyaev-Zeldovich effect"; see Carlstrom (1999) for a summary of the recent progress). Estimated either way the ratio of ionized gas mass to total mass (about 0.08) or of ionized gas mass to spheroid star mass (about 6:1) appears fairly uniform from cluster to cluster. There is so much gas in the clusters that even though they represent a small fraction of all the galaxies (about 1/6 in the Fukugita et al. definition), their gas contributes as many baryons as all the stars in all the galaxies.
Many aspects of galaxy and structure formation are poorly understood, but one feature seems to be robust: the total mixture of stuff in galaxy clusters is a fair sample of the universe as a whole. This statement needs some qualification (there is some ejection, there is some segregation, etc.) but by and large we can use the situation in clusters as a guide to that in the universe as a whole. This argument can be employed in several ways; we can assume that nucleosynthesis is correct and estimate the density of matter (White et al. 1993), we can assume a constant mass-to-spheroid-star-mass ratio to estimate the total density of matter (a technique with a long history), or we can assume a constant baryon-to-star or baryon-to-mass ratio to estimate the global density of baryons. If we employ the latter extrapolations, we get a much bigger number for the baryon density than we have found within galaxies so far.
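A minimal sketch of the fair-sample extrapolation described here; the matter density $`\mathrm{\Omega }_m\sim 0.3`$ and the small stellar share in the comment are illustrative assumptions, not values taken from this paper:

```python
def omega_b_fair_sample(f_gas, f_star, omega_m):
    """Global baryon density parameter if the cluster baryon fraction
    (hot gas plus stars) is a fair sample of the universe as a whole."""
    return (f_gas + f_star) * omega_m

# e.g. the gas-to-total ratio of ~0.08 quoted above, a small stellar
# contribution, and an assumed omega_m ~ 0.3:
# omega_b_fair_sample(0.08, 0.01, 0.3) -> ~0.027
```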
The best guess for where these baryons reside is in ionized gas very similar to the clusters, but gathered instead around the more typical dark matter condensations of the universe, around groups of galaxies. The gas is cooler and less dense than that of clusters, reflecting the smaller temperature and density typical of dark matter halos around typical groups of galaxies. Instead of temperatures in excess of $`10^7`$K, the typical temperature is a few million degrees.
The bulk of the cosmic baryons thus seem to be parked in a form where we cannot study them easily. Gas at this temperature cools very inefficiently and radiates in soft X-rays which are notoriously difficult to detect. Some groups do indeed emit detectable thermal X-rays which could be the denser portion of this gas emerging above the backgrounds. There is a detected soft X-ray background which doubtless includes a contribution from the integrated light of all the groups of the universe, but this does not allow us to estimate the overall gas density unless we can estimate its overdensity, for example by detecting the extent of the emission around typical groups. It is also possible this gas will be detected in a few absorption lines of heavy elements (such as hydrogenic transitions of oxygen), in quasar spectra taken with large X-ray telescopes (e.g. Hellsten, Gnedin and Miralda-Escudé 1999). The density information will be combined in this case with abundance information.
The abundances of cluster gas and high-redshift absorbers add additional clues about the history of star formation, supernova enrichment, gas ejection from galaxies, and the universality of cluster baryons (Renzini 1999, Pettini 1999). The best guess is that the pervasive ionized gas is, like the gas in clusters, fairly enriched in metals, to perhaps one-third of the solar value. Thus the intergalactic gas contains not only most of the baryons in the universe but also most of the heavy elements.
## 3 Evolution of the Baryon Distribution
This state for the baryons is a natural outcome in current models of galaxy and structure formation. Once gas gets hot enough it has few coolants: everything is ionized except the rare heavy elements. In the cosmic setting, the heating and cooling of most of the gas become dominated by dynamical effects. Gravitation causes motion, collisions lead to shock heating, compression by infall leads to adiabatic heating, and cosmic expansion leads to adiabatic cooling. With no radiative cooling the gas achieves a steady-state distribution of temperatures determined by the cosmic web of dark matter which defines the gravitational potential. This evolution is seen in simulations (Cen and Ostriker 1999, Wadsley et al. 1999).
One important issue is not yet resolved by the simulations, and that is the overall efficiency of galaxy and star formation. The gas at early times is not so hot, and indeed passes through temperatures of $`10^4`$ to $`10^5`$K where the cooling is very efficient. What prevents all of the gas from collapsing at this time into protogalactic lumps? Part of the story must be the feedback from star formation, like that we see at work today in galaxies: the formation of a few stars heats the remaining gas so that it does not fall in. However, it is not clear that this is even the dominant effect; another important ingredient is kinematical "heating": the continuous mixing, stirring and tidal disruption that occurs in a hierarchy.
In some ways we have better information about the baryons at high redshift than we do at zero redshift, since we can directly observe the dominant phases of gas. Because the gas is cooler, hydrogen and helium are not entirely ionized and we can detect the small fraction left in the form of HI or HeII by Lyman-$`\alpha `$ absorption. The bulk of the baryons are in the diffuse, ionized protogalactic web of gravitationally-collapsing gas which creates the "Lyman-$`\alpha `$ forest" in quasar spectra. The helium ions provide information supplementary to the HI since they are more abundant than HI, they are detectable in gas at lower density and higher temperature (in the "voids"), and their ionization state is constrained in a large region where the light of the target quasar dominates the radiation field; this information is now becoming obtainable with HST/STIS (Anderson et al. 1999, Heap et al. 1999). Together the HI and HeII can be used to paint a complete and compelling picture of the gas distribution at the time when most galaxy formation is happening. Hydrodynamical simulations reproduce the main features of the observed absorption and allow estimates of the mean density (Weinberg et al. 1997, Rauch et al. 1997, Zhang et al. 1997). A new technique based on HeII void absorption (Wadsley et al. 1999) gives an independent estimate, with the ionizing radiation calibrated directly from the quasar light. From these estimates we infer that the bulk of the baryons at $`z=3`$ are, as predicted, in the (mostly ionized) gas producing the absorption, and that the total baryon density is about what we infer today. As sampling and modeling improve, the systematic sources of error will come under better control and these measurements will provide our best direct estimates of the total baryon density. We already seem to have discovered that most of the baryons are now, and have always been, in diffuse gas.
## 4 Primordial and Unseen Baryons
The classical theory of Big Bang Nucleosynthesis cleanly predicts the composition of baryonic matter emerging from the early universe: four light element abundances (deuterium, helium, helium-3, and lithium-7) as a function of only one parameter, the ratio $`\eta \equiv 10^{-10}\eta _{10}`$ of baryons to photons. Once the background temperature is measured, specifying $`\eta `$ is equivalent to specifying the mean density of baryons, $`\mathrm{\Omega }_bh_{70}^2=7.45\times 10^{-3}\eta _{10}`$. (The theoretical predictions with errors are now available as a java calculator for those who wish to make their own comparisons with observations: see Mendoza and Hogan 1999). The story on abundances (reviewed for example by Steigman 1999) is constantly shifting. Recent discussions of helium (Izotov et al. 1999) have raised previous limits slightly, and the primordial abundance of lithium (Ryan et al. 1999) may be lower than previously thought, bringing these elements into good concordance with each other. Although the central values of low estimates of the primordial deuterium (Burles and Tytler 1998ab, Kirkman et al. 1999) still prefer values of $`\eta `$ higher than the other elements prefer, the full errors in these estimates allow a concordance. In spite of persistent debates about the correct values and errors to use, there always remains a comfortable spot giving reasonable concordance with current datasets (a spot which Dave Schramm always managed to find). This "sweet spot" has been remarkably stable for years, $`\eta _{10}\approx 4\pm 1`$, $`\mathrm{\Omega }_bh_{70}^2\approx 0.03\pm 0.01`$, squarely within the range from our tally of baryons, $`0.01\lesssim \mathrm{\Omega }_b\lesssim 0.04`$.
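As a quick arithmetic check of this conversion (a minimal sketch in plain Python; the numerical factor is the one quoted above, and the function name is only illustrative), the sweet spot maps onto the baryon tally range as follows:

```python
# Map the baryon-to-photon ratio eta_10 onto a baryon density via
# Omega_b h_70^2 = 7.45e-3 * eta_10, as quoted in the text.
def omega_b(eta_10, factor=7.45e-3):
    return factor * eta_10

for eta_10 in (3.0, 4.0, 5.0):  # the "sweet spot" eta_10 = 4 +/- 1
    print(f"eta_10 = {eta_10:.0f}  ->  Omega_b h_70^2 = {omega_b(eta_10):.3f}")
# prints 0.022, 0.030, 0.037 -- inside the tally range 0.01-0.04
```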
This is a very tidy result, suggesting that we may be close to a complete accounting of baryons. It may be further verified soon, with measurements of the microwave background anisotropy. If this is right, there are no other major repositories of baryons, and the dark matter really must be in some nonbaryonic form.
The basic ideas of Standard Big Bang Nucleosynthesis have held up for half a century. The modern theoretical structure has been mature for three decades, although even today its predictions are subject to refinements. The most amazing thing about it is that nature's real universe is so simple: according to the steadily mounting evidence, this maximally simple and symmetric model seems empirically to be an accurate description of what really happened in the early universe, starting about a second after the Big Bang, everywhere in the $`10^3`$ cubic Gigaparsecs encompassed within our past light cone. To some of us this simplicity was always a hope (but remains a surprise); to David Schramm, it was almost an article of faith, of which he was the most ardent evangelist.
This work was supported at the University of Washington by NASA and NSF.
## 5 References
Anderson, S.F., Hogan, C. J., Williams, B. F. and Carswell, R. F. 1999, AJ 117, 56
Burles, S. and Tytler, D. 1998a, ApJ 507, 732
Burles, S. and Tytler, D. 1998b, ApJ 499, 699
Carlstrom, J. 1999, these proceedings
Cen, R. and Ostriker, J. P. 1999, ApJ 514, 1
Fukugita, M., Hogan, C. J., and Peebles, P. J. E. 1998, ApJ 503, 518
Heap, S. R. et al. 1999, astro-ph/9812429
Hellsten, U., Gnedin, N. Y., and Miralda-Escudé, J. 1999, ApJ in press (astro-ph/9804038)
Izotov, Y. I., Chaffee, F. H., Foltz, C. B., Green, R. F., Guseva, N. G., and Thuan, T. X. 1999, ApJ in press, astro-ph/9907228
Kirkman, D. et al. 1999, ApJ in press, astro-ph/9907128
Mendoza, L. and Hogan, C. J. 1999, astro-ph/9904334
Pettini, M. 1999, astro-ph/9902173, in "Chemical Evolution from Zero to High Redshift", ed. by J. Walsh and M. Rosa (Berlin: Springer)
Rauch, M. et al. 1997, ApJ 489, 7
Renzini, A. 1999, astro-ph/9902361, in "Chemical Evolution from Zero to High Redshift", ed. by J. Walsh and M. Rosa (Berlin: Springer)
Ryan, S. G. et al. 1999, astro-ph/9905211
Steigman, G. 1999, these proceedings
Wadsley, J., Hogan, C. J., and Anderson, S. 1999, to appear in the proceedings of Clustering at High Redshift, IGRAP International Conference, Marseilles, astro-ph/9911394
Wadsley, J. 1999, in preparation
Weinberg, D. et al. 1997, ApJ 490, 564
White, S. D. M., Navarro, J. F., Evrard, A. E., and Frenk, C. S. 1993, Nature 366, 429
Zhang, Y. et al. 1997, ApJ 485, 496 |
## 1 Introduction
There has been considerable interest in diffusive growth processes, including growth phenomena for a droplet on a substrate. Growth phenomena in which diffusion and coalescence play the major roles are common in many areas of science and technology. In this process, each droplet diffuses and grows individually and coalesces with contacting droplets. The kinetics of these phenomena have been studied experimentally and theoretically \[3-29\]. Some models have been developed to explain the kinetics of these processes. One such model consists of a single, motionless three-dimensional droplet formed by diffusion and adsorption of non-coalescing monomers on a 2D substrate. In the model it is assumed that the diffusing monomers coalesce only with the large immobile growing droplet and not with each other. In earlier work a static approximation was used to solve the diffusion equation, and an approximate description of the long time behaviour was obtained. The static approach predicted an asymptotic power law growth rate for the radius of the droplet. Because of the growth of the droplet, the present problem involves a moving boundary. Moving boundary problems in the context of the diffusion equation are referred to as Stefan problems \[30-32\]. The only exact solutions for these problems have been found using a similarity variable method; see, for instance, \[30-34\] and references therein. Using this method for a droplet of dimensionality $`d`$ growing on a substrate of the same dimensionality, an exact scaling solution can be found. Such a solution has been derived in one dimension and can be generalised to higher dimensions. However, the problem of a 3D droplet growing on a 2D substrate may be treated by approximate methods. A simple treatment based on a quasistatic approximation has been presented previously, in which a similarity variable approach was used to solve the Stefan problem with a moving boundary. The results predicted that the radius of the droplet increases as $`[t/\mathrm{ln}(t)]^{1/3}`$ asymptotically. The asymptotic growth law predicted by the static approach differs from the quasistatic answer by a slowly varying logarithmic factor. In all of these models an adsorption boundary condition at the aggregate perimeter of the droplet was considered.
A generalisation of the Smoluchowski model for diffusional growth of colloids has also been presented. Smoluchowski considered the process of diffusional capture of particles, assuming the growing aggregate is modeled as a sphere. He then solved the diffusion equation with an absorbing boundary condition at the aggregate surface of the sphere. In the generalisation, two other approaches were considered: a phenomenological model for the boundary condition and a radiation boundary condition. Both approaches allowed for incorporation of particle detachment in the Smoluchowski model. Explicit expressions for the concentration and intake rate of particles were given in the long time limit.
In this paper we consider a single, motionless three-dimensional droplet growing by adsorption of diffusing monomers on a two-dimensional substrate. The diffusing non-coalescing monomers are adsorbed at the aggregate perimeter of the droplet with different boundary conditions. Models with different boundary conditions for the concentration of monomers are considered and solved in a quasistatic approximation. For each model, the diffusion equation is solved exactly, subject to a fixed boundary. Using the mass conservation law at the aggregate perimeter of the growing droplet, we then obtain an expression for the growth rate of the moving boundary. Explicit asymptotic solutions in both the short and long time limits are given for the concentration, the total flux of monomers at the perimeter of the growing droplet, and the growth rate of the droplet radius. This paper is organised as follows. In section 2, a model with an adsorption boundary condition is examined. In sections 3 and 4 we consider the two approaches which were introduced to allow for particle detachment: a phenomenological model and a model with a radiation boundary condition, respectively. Another boundary condition, which assumes a constant flux of monomers at the aggregate perimeter of the droplet, is introduced in section 5. Finally, in section 6 we compare the results of the different approaches and summarise our conclusions.
## 2 Growth Equations with Adsorption Boundary Condition
Consider an immobile three-dimensional droplet which is initially surrounded by monodisperse droplets. The droplet lies on a two-dimensional plane substrate on which the monomers diffuse. Monomers have the volume $`V`$ and diffuse with the diffusion constant $`D`$. Then, the concentration of monomers at point $`r`$ and at time $`t`$, $`c(r,t)`$, is described by the diffusion equation
$$\frac{\partial c(r,t)}{\partial t}=D\frac{1}{r}\frac{\partial }{\partial r}\left(r\frac{\partial c(r,t)}{\partial r}\right)$$
(1)
for $`rR`$, where $`R(t)`$ is the radius of the immobile growing droplet. The initial conditions are given by
$$c(r,t=0)=c_0,$$
(2)
which is the initial, uniform, monomer concentration and
$$R(t=0)=0,$$
(3)
which shows that the droplet is not present at the beginning of the process. We consider an adsorption boundary condition at the perimeter of the droplet
$$c(r=R,t>0)=0$$
(4)
and assume that at infinity the concentration of the monomers is finite and equal to $`c_0`$. Concentration gradients in the neighborhood of the droplet create a flux of monomers on the two-dimensional substrate. This flux feeds the growth of the droplet. Therefore, the rate of increase of the droplet volume is related to the total flux of monomers at the perimeter of the droplet by mass conservation,
$$\mathrm{\Phi }(t)=\lambda R^2\frac{dR}{dt},$$
(5)
where the total flux
$$\mathrm{\Phi }(t)=V\left[2\pi RD\frac{\partial c}{\partial r}\Big|_R\right]$$
(6)
corresponds to the monomers incorporated at the perimeter of the droplet. In (5) $`\lambda `$ is a dimensionless factor related to the contact angle of the droplet.
In order to solve (1) with (2-4), we introduce the Laplace transform of the concentration,
$$\overline{c}(r,s)=\int _0^{\infty }dt\,e^{-st}c(r,t),$$
(7)
which satisfies the equation
$$D\frac{1}{r}\frac{\partial }{\partial r}\left(r\frac{\partial \overline{c}}{\partial r}\right)=s\overline{c}-c_0.$$
(8)
Here we have already used the initial condition (2). The general solution of this equation is given by
$$\overline{c}(r,s)=\frac{c_0}{s}+A(s)K_0(qr)+B(s)I_0(qr),$$
(9)
where $`q=\sqrt{s/D}`$, and $`K_0`$ and $`I_0`$ are modified Bessel functions of order zero. To have a finite solution as $`r\to \infty `$, we set $`B(s)=0`$. The boundary condition (4) in the Laplace transform version becomes
$$\overline{c}(r=R,s)=0.$$
(10)
Using (10), the transformed concentration and its gradient normal to the droplet perimeter become
$$\overline{c}(r,s)=\frac{c_0}{s}\left[1-\frac{K_0(qr)}{K_0(qR)}\right],$$
(11)
$$\frac{\partial \overline{c}(r,s)}{\partial r}=\frac{c_0}{(Ds)^{1/2}}\frac{K_1(qr)}{K_0(qR)}.$$
(12)
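As a consistency check before inverting the transform, the following sketch (Python with the mpmath library; all parameter values are arbitrary test choices) verifies numerically that (11) satisfies the transformed equation (8) and the boundary condition (10), and that its radial derivative reproduces (12):

```python
import mpmath as mp

# Arbitrary test values for D, c0, R and the transform variable s.
D, c0, R, s = mp.mpf(2), mp.mpf(1), mp.mpf('0.5'), mp.mpf(3)
q = mp.sqrt(s / D)

def cbar(r):  # equation (11)
    return (c0 / s) * (1 - mp.besselk(0, q * r) / mp.besselk(0, q * R))

r0 = mp.mpf('1.3')
# Radial Laplacian (1/r) d/dr (r d cbar/dr) = cbar'' + cbar'/r
lap = mp.diff(cbar, r0, 2) + mp.diff(cbar, r0) / r0
print(D * lap - s * cbar(r0) + c0)   # residual of (8): ~ 0
print(cbar(R))                       # boundary condition (10): 0
grad = (c0 / mp.sqrt(D * s)) * mp.besselk(1, q * r0) / mp.besselk(0, q * R)
print(mp.diff(cbar, r0) - grad)      # agreement with (12): ~ 0
```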
To find the time-dependent concentration and its radial gradient, we use the inversion theorem for (11,12). Both (11,12) have a branch point at $`s=0`$, so in the inversion formula we use a contour which does not contain any zeros of $`s`$ or $`K_0(qR)`$. Consequently, the time-dependent concentration and, from (6), the total flux at the droplet perimeter are given by
$$c(r,t)=\frac{2c_0}{\pi }\int _0^{\infty }e^{-Du^2t}\left[\frac{J_0(Ru)N_0(ru)-J_0(ru)N_0(Ru)}{J_0^2(Ru)+N_0^2(Ru)}\right]\frac{du}{u},$$
(13)
$$\mathrm{\Phi }(t)=\frac{8c_0DV}{\pi }\int _0^{\infty }e^{-Du^2t}\frac{1}{\left[J_0^2(Ru)+N_0^2(Ru)\right]}\frac{du}{u},$$
(14)
where $`J_0`$ and $`N_0`$ are Bessel functions of order zero. Using (5,14) a differential equation for the growth rate of the droplet radius can be obtained
$$\lambda R^2\frac{dR}{dt}=\frac{8c_0DV}{\pi }\int _0^{\infty }e^{-Du^2t}\frac{1}{\left[J_0^2(Ru)+N_0^2(Ru)\right]}\frac{du}{u},$$
(15)
which gives a general solution for $`R`$ as a function of time. We are interested in the short and long time solutions for the concentration, the total flux of monomers at the perimeter of the droplet and the growth rate of the droplet radius.
For small values of the time, it can be shown that the behaviours of $`c(r,t)`$ and $`\partial c(r,t)/\partial r`$ may be determined from the behaviours of $`\overline{c}(r,s)`$ and $`\partial \overline{c}(r,s)/\partial r`$, respectively, for large values of the transform parameter $`s`$. We therefore expand the Bessel functions occurring in (11,12), supposing $`s`$ to be large. The final result for the concentration of monomers, keeping the leading time-dependent term, is
$$c(r,t)\simeq c_0\left[1-\left(\frac{R}{r}\right)^{1/2}Erfc\left(\frac{r-R}{2\sqrt{Dt}}\right)\right].$$
(16)
The total flux at the droplet perimeter and the growth rate of the droplet radius in this limit, using (6) and (5) respectively, are given by
$$\mathrm{\Phi }(t)\simeq 2c_0VR\sqrt{\pi D}t^{-1/2},$$
(17)
$$R(t)\simeq \left(\frac{8c_0V\sqrt{\pi D}}{\lambda }\right)^{1/2}t^{1/4}.$$
(18)
We see that in the short time limit, $`R`$ grows as a power of the time with an exponent of $`1/4`$.
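The step from (17) to (18) is a separable integration of (5); the following symbolic check (Python with sympy, using the symbols of the text) confirms that (18) balances the two sides of (5) with the flux (17):

```python
import sympy as sp

t, c0, V, D, lam = sp.symbols('t c_0 V D lambda', positive=True)
R = sp.sqrt(8 * c0 * V * sp.sqrt(sp.pi * D) / lam) * t**sp.Rational(1, 4)  # eq. (18)
lhs = lam * R**2 * sp.diff(R, t)                        # left side of (5)
rhs = 2 * c0 * V * R * sp.sqrt(sp.pi * D) / sp.sqrt(t)  # flux (17)
print(sp.simplify(lhs - rhs))  # -> 0
```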
For large values of the time, the behaviours of $`c(r,t)`$ and $`\partial c(r,t)/\partial r`$ may be determined from the behaviours of $`\overline{c}(r,s)`$ and $`\partial \overline{c}(r,s)/\partial r`$, respectively, for small values of the transform parameter $`s`$. We then expand the Bessel functions occurring in (11,12), supposing $`s`$ to be small. Keeping the leading time-dependent term, the concentration of monomers yields
$$c(r,t)\simeq 2c_0\frac{\mathrm{ln}\left({\displaystyle \frac{r}{R}}\right)}{\mathrm{ln}\left({\displaystyle \frac{4Dt}{\sigma ^2R^2}}\right)},$$
(19)
where $`\sigma =e^\gamma =1.78107\mathrm{}`$, where $`\gamma =0.57722\mathrm{}`$ is Eulerโs constant. The total flux at the droplet perimeter and the growth rate of the droplet radius also in this limit using (6) and (5), respectively are given by
$$\mathrm{\Phi }(t)\simeq 4\pi c_0DV\left[\mathrm{ln}\left(\frac{4Dt}{\sigma ^2R^2}\right)\right]^{-1},$$
(20)
$$R(t)\simeq A\left[\frac{\tau }{\mathrm{ln}(\tau )}\right]^{1/3},$$
(21)
where $`A=\left(9\pi V\sigma ^2/\lambda \right)^{1/3}`$ and $`\tau =4c_0Dt/\sigma ^2`$ is the dimensionless time. Up to a constant, these are the same results which were obtained by Krapivsky based on a quasistatic approximation using a similarity variable approach.
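One can also check this asymptotic law numerically; the sketch below (plain Python; the parameter values are arbitrary test choices) compares the left-hand side of (5), with $`dR/dt`$ obtained from (21) by finite differences, to the flux (20). The ratio slowly creeps toward one as $`t`$ grows, reflecting the logarithmic corrections:

```python
import math

c0, D, V, lam = 1.0, 1.0, 1.0, 2.0            # arbitrary test values
sigma = math.exp(0.5772156649)                 # e^gamma
A = (9 * math.pi * V * sigma**2 / lam) ** (1 / 3)

def R(t):                                      # eq. (21)
    tau = 4 * c0 * D * t / sigma**2
    return A * (tau / math.log(tau)) ** (1 / 3)

for t in (1e4, 1e8, 1e12):
    h = 1e-6 * t
    dRdt = (R(t + h) - R(t - h)) / (2 * h)     # finite-difference dR/dt
    lhs = lam * R(t)**2 * dRdt                 # left side of (5)
    rhs = 4 * math.pi * c0 * D * V / math.log(4 * D * t / (sigma**2 * R(t)**2))  # flux (20)
    print(f"t = {t:.0e}:  lhs/rhs = {lhs / rhs:.2f}")
# ratios ~ 0.60, 0.86, 0.93: approaching 1 logarithmically slowly
```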
## 3 Phenomenological Rate Equation Model
One can consider various modifications of the initial and boundary conditions (2) and (4). Here we improve the model and incorporate effects other than the irreversible adsorption at $`r=R`$ expressed by (4). In this section we consider a phenomenological modification of the boundary condition (4) to allow for detachment, in which the relation
$$\frac{\partial c(r,t)}{\partial t}=-mc(r,t)+k$$
(22)
at $`r=R`$ replaces (4). Here it is assumed that the diffusing monomers that reach the perimeter of the droplet are incorporated in the aggregate structure at the rate $`mc`$, proportional to their concentration at $`R`$. The second term in (22) corresponds to detachment and is assumed to depend only on internal processes, so there is no dependence on the external diffuser concentration.
To solve (1) with (2,3) and (22), we go through steps similar to those in section 2 and quote only the final expressions. In the Laplace transform version, the boundary condition becomes
$$(s+m)\overline{c}(r,s)=\frac{k}{s}+c_0$$
(23)
at $`r=R`$. The concentration and the radial gradient of the concentration in this version become
$$\overline{c}(r,s)=\frac{c_0}{s}-\frac{mc_0-k}{s(s+m)}\frac{K_0(qr)}{K_0(qR)},$$
(24)
$$\frac{\partial \overline{c}(r,s)}{\partial r}=\frac{mc_0-k}{(Ds)^{1/2}}\frac{1}{(s+m)}\frac{K_1(qr)}{K_0(qR)}.$$
(25)
Now we look for the solutions in the short and long time limits.
For small values of the time, we use the asymptotic expansions of the Bessel functions in (24,25) for large values of $`s`$ and ignore $`m`$ in comparison to $`s`$ in the term $`(s+m)`$. Then, the concentration, the total flux at the droplet perimeter and the growth rate of the droplet radius in this limit, keeping only the leading time-dependent terms, are given by
$$c(r,t)\simeq c_0+4mt\left(c_0-\frac{k}{m}\right)\left(\frac{R}{r}\right)^{1/2}Erfc\left(\frac{r-R}{2\sqrt{Dt}}\right),$$
(26)
$$\mathrm{\Phi }(t)\simeq 4mVR\sqrt{\pi D}\left(c_0-\frac{k}{m}\right)t^{1/2},$$
(27)
$$R(t)\simeq \left[\frac{16mV}{3\lambda }\sqrt{\pi D}\left(c_0-\frac{k}{m}\right)\right]^{1/2}t^{3/4}.$$
(28)
We see that in the phenomenological model, $`R`$ grows as a power of the time with an exponent of $`3/4`$ in the short time limit. In the expressions (26-28), in comparison with (16-18) in the previous section, there is a factor $`(c_0-k/m)`$ which shows a reduction of the rate due to detachment, proportional to the ratio $`k/m`$.
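As with (18), a symbolic check (sympy; the signs of the detachment factor are as reconstructed above) confirms that (28) balances (5) against the flux (27):

```python
import sympy as sp

t, m, V, D, lam, c0, k = sp.symbols('t m V D lambda c_0 k', positive=True)
R = sp.sqrt(16 * m * V * sp.sqrt(sp.pi * D) * (c0 - k / m) / (3 * lam)) \
    * t**sp.Rational(3, 4)                                            # eq. (28)
lhs = lam * R**2 * sp.diff(R, t)                                      # left side of (5)
rhs = 4 * m * V * R * sp.sqrt(sp.pi * D) * (c0 - k / m) * sp.sqrt(t)  # flux (27)
print(sp.simplify(lhs - rhs))  # -> 0
```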
For large values of the time, we use the expansions of the Bessel functions in (24,25) supposing $`s`$ to be small and ignore $`s`$ in comparison with $`m`$ in the term $`(s+m)`$. Then, the concentration, total flux at the droplet perimeter and growth rate of the droplet radius, keeping only the leading time dependent terms, yield
$$c(r,t)\simeq \frac{k}{m}+2\left(c_0-\frac{k}{m}\right)\frac{\mathrm{ln}\left({\displaystyle \frac{r}{R}}\right)}{\mathrm{ln}\left({\displaystyle \frac{4Dt}{\sigma ^2R^2}}\right)},$$
(29)
$$\mathrm{\Phi }(t)\simeq 4\pi DV\left(c_0-\frac{k}{m}\right)\left[\mathrm{ln}\left(\frac{4Dt}{\sigma ^2R^2}\right)\right]^{-1},$$
(30)
$$R(t)\simeq A\left[\frac{\tau }{\mathrm{ln}(\tau )}\right]^{1/3},$$
(31)
where $`A=(9\pi V\sigma ^2/\lambda )^{1/3}`$ and $`\tau =4Dt(c_0-k/m)/\sigma ^2`$. These asymptotic expressions are quite similar to the long time forms (19-21) in section 2. The only change is the reduction of the rate due to the detachment, proportional to the ratio $`k/m`$.
For fast enough detachment, the total fluxes of the monomers at the boundary in both the short and long time limits (27,30) can actually become negative. In this case, the flux does not feed the growth of the droplet and the droplet volume does not increase anymore. Therefore, the mass conservation (5) does not hold and the growth laws (28,31) are no longer valid. For the case in which $`c_0=k/m`$, the system reaches a stationary state, and therefore the total rate and the total flux of the monomers at the droplet perimeter become zero for all times. Consequently, there is no growth for the droplet and the concentration of the monomers is equal to the initial concentration, $`c_0`$, for all times. These results can be obtained from both the short and long time expressions (26-28) and (29-31), respectively.
## 4 Radiation Boundary Condition
In this section we consider another modification of the boundary condition (4) and replace it with a radiation boundary condition
$$\alpha \frac{\partial c(r,t)}{\partial r}+\beta =c(r,t)$$
(32)
at $`r=R`$. Here it is assumed that the concentration is proportional to its derivative, with an additional constant $`\beta `$. Again we go through steps similar to those in section 2 and quote only the final expressions. In the Laplace transform version, the boundary condition becomes
$$\alpha \frac{\partial \overline{c}(r,s)}{\partial r}+\frac{\beta }{s}=\overline{c}(r,s)$$
(33)
at $`r=R`$. The concentration and its radial gradient in this version become
$$\overline{c}(r,s)=\frac{c_0}{s}-\frac{(c_0-\beta )}{s}\frac{K_0(qr)}{K_0(qR)+\alpha qK_1(qR)},$$
(34)
$$\frac{\partial \overline{c}(r,s)}{\partial r}=\frac{(c_0-\beta )}{(Ds)^{1/2}}\frac{K_1(qr)}{K_0(qR)+\alpha qK_1(qR)}.$$
(35)
We concentrate our attention on the solutions in the short and long time limits.
For small values of the time, we use the asymptotic expansions of the Bessel functions in (34,35) to get the leading time-dependent terms for the concentration, the total flux and the droplet growth rate. Here $`iErfc`$ denotes the first repeated integral of the complementary error function, $`iErfc(x)=\int _x^{\infty }Erfc(u)du`$.
$$c(r,t)\simeq c_0-\frac{2}{\alpha }(c_0-\beta )\left(\frac{DRt}{r}\right)^{1/2}iErfc\left(\frac{r-R}{2\sqrt{Dt}}\right),$$
(36)
$$\mathrm{\Phi }(t)\simeq \frac{2\pi RDV}{\alpha }(c_0-\beta ),$$
(37)
$$R(t)\simeq \left[\frac{4\pi DV}{\alpha \lambda }(c_0-\beta )\right]^{1/2}t^{1/2}.$$
(38)
The factor $`(c_0-\beta )`$ in these expressions shows a reduction of the rate due to the detachment, proportional to $`\beta `$. We see that in the short time limit, the total flux at the droplet perimeter is time-independent and $`R`$ grows as a power law with an exponent equal to $`1/2`$.
For large values of the time, we expand the Bessel functions in (34,35) supposing $`s`$ to be small. Consequently, the asymptotic expressions for the concentration, total flux and the droplet growth rate, yield
$$c(r,t)\simeq \beta +2(c_0-\beta )\frac{\mathrm{ln}\left({\displaystyle \frac{r}{R}}\right)}{\mathrm{ln}\left({\displaystyle \frac{4Dt}{\sigma ^2R^2}}\right)},$$
(39)
$$\mathrm{\Phi }(t)\simeq 4\pi DV(c_0-\beta )\left[\mathrm{ln}\left(\frac{4Dt}{\sigma ^2R^2}\right)\right]^{-1},$$
(40)
$$R(t)\simeq A\left[\frac{\tau }{\mathrm{ln}(\tau )}\right]^{1/3},$$
(41)
where $`A=(9\pi V\sigma ^2/\lambda )^{1/3}`$ and $`\tau =4Dt(c_0-\beta )/\sigma ^2`$. These long time expressions have the same forms as (29-31) provided we identify
$$\beta =\frac{k}{m}.$$
(42)
For fast enough detachment, analogous to section 3, the total fluxes of the monomers at the boundary (37,40) can become negative. In this case, the growth laws (38,41) no longer hold. For the case in which $`c_0=\beta `$, again analogous to section 3, the system reaches a stationary state and therefore the total flux and the droplet growth rate become zero. The concentration also does not change with time and is equal to the initial one. These can be seen from both the short and long time results (36-38) and (39-41), respectively.
## 5 Constant Flux Boundary Condition
In this section we impose a condition on the flux of the monomers assuming that the total flux of monomers at the droplet perimeter is constant. Therefore, we replace (4) with
$$\mathrm{\Phi }(t)=Q$$
(43)
at $`r=R`$, where $`\mathrm{\Phi }(t)`$ is given by (6) and $`Q`$ is a constant. Analogously to the previous sections, in the Laplace transform version the boundary condition becomes
$$2\pi RDV\frac{\partial \overline{c}(r,s)}{\partial r}=\frac{Q}{s}$$
(44)
at $`r=R`$. The concentration and its radial gradient in this version are
$$\overline{c}(r,s)=\frac{c_0}{s}-\frac{Q}{2\pi RVD^{1/2}}\frac{K_0(qr)}{s^{3/2}K_1(qR)}$$
(45)
and
$$\frac{\partial \overline{c}(r,s)}{\partial r}=\frac{Q}{2\pi RDV}\frac{K_1(qr)}{sK_1(qR)}.$$
(46)
Appropriate expansions of the Bessel functions in (45) give us the limiting forms of the concentration. For small values of the time this yields
$$c(r,t)\simeq c_0-\frac{Q}{\pi V}\left(\frac{t}{DRr}\right)^{1/2}iErfc\left(\frac{r-R}{2\sqrt{Dt}}\right)$$
(47)
and for large values of the time it gives
$$c(r,t)\simeq c_0-\frac{Q}{4\pi DV}\mathrm{ln}\left(\frac{4Dt}{\sigma r^2}\right).$$
(48)
The trivial solution for the droplet growth rate using (5,43) is
$$R(t)=\left(\frac{3Q}{\lambda }\right)^{1/3}t^{1/3},$$
(49)
for all times.
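A one-line symbolic check (sympy) confirms that (49) satisfies (5) with $`\mathrm{\Phi }=Q`$:

```python
import sympy as sp

t, Q, lam = sp.symbols('t Q lambda', positive=True)
R = (3 * Q * t / lam) ** sp.Rational(1, 3)           # eq. (49)
print(sp.simplify(lam * R**2 * sp.diff(R, t) - Q))   # residual of (5) with Phi = Q: -> 0
```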
## 6 Conclusions
We studied the growth of a single, motionless, three-dimensional droplet that accommodates monomers at its perimeter on a 2D substrate. The noncoalescing monomers diffuse and are adsorbed at the aggregate perimeter of the droplet, with different boundary conditions. Models with adsorption and radiation boundary conditions, and a phenomenological model for the boundary condition, were considered and solved in a quasistatic approximation. In the model with an adsorption boundary condition, the droplet forms an absorber and the concentration of the monomers at its perimeter is zero. In the phenomenological model, we assumed that the diffusing monomers that reach the perimeter of the droplet are incorporated in the aggregate structure at a rate proportional to their concentration at the boundary, and we added another term which corresponds to detachment. In the model with a radiation boundary condition we assumed that the concentration is proportional to its derivative, with an extra detachment term. For each model, we solved exactly the diffusion equation for the concentration of the monomers, subject to a fixed boundary. Then, using a mass conservation law at the perimeter of the droplet, we found an expression for the growth rate of the moving boundary. The models were subjected to an initial, uniform concentration of monomers. Asymptotic results for the concentration, the total flux of monomers at the boundary and the growth rate of the droplet radius were obtained in both the short and long time limits. The results revealed that in both the phenomenological and radiation models, in comparison with the adsorption model, there is a reduction of the rate due to the detachment. The rate can become negative if the detachment is fast enough; in this case, the total flux of the monomers at the perimeter of the droplet becomes negative. The flux then does not feed the growth of the droplet volume, and the droplet growth laws obtained in the models are no longer valid. For a value of the detachment for which the total rate, and therefore the total flux, becomes zero, the system reaches a stationary state and there is no growth for the droplet anymore. The same reduction of the rate was obtained in earlier work, where incorporation of particle detachment in the Smoluchowski model of colloidal growth was considered.
The results in the short time limit predicted that the radius of the droplet grows as a power of the time, with different exponents for different boundary conditions. The exponents of the power laws were $`1/4`$, $`1/2`$ and $`3/4`$, respectively, for the models with adsorption, radiation and phenomenological boundary conditions. We see that the growth is slowest for the adsorption boundary condition and fastest for the phenomenological model. This is because, as was said before, in the phenomenological model the diffusing monomers at the perimeter of the droplet are incorporated in the aggregate structure of the droplet. The total flux of the monomers at the droplet perimeter is also a power law, with exponents of $`-1/2`$ and $`1/2`$ for the adsorption and phenomenological models, respectively, and is a constant for the radiation model. Again the flux is maximal for the phenomenological model and minimal for the adsorption model.
In the long time limit, the growth law for the radius of the droplet was the same for all boundary conditions. The concentration and total flux also had the same time dependence in all models. The only change, as we said before, was the reduction of the rate due to the detachment in both the phenomenological and radiation models in comparison with the adsorption model. Asymptotic results for large values of the time exhibited that the radius of the droplet increases as $`[t/\mathrm{ln}(t)]^{1/3}`$ in all models. This was obtained by Krapivsky, who used a similarity variable approach to treat the growth of a droplet with an adsorption boundary condition based on a quasistatic approximation.
We saw that the time dependence of the results was the same for all the models in the long time limit and was different for different models in the short time limit. This suggests that initially the flux of the monomers at the boundary, and therefore the droplet growth rate, are affected by the condition at the boundary. In the long time limit, however, the system reaches a stable state and the initial effects can be ignored; therefore all the models give the same results. This suggests that a rate of the form $`[t/\mathrm{ln}(t)]^{1/3}`$ is a universal asymptotic growth law for the radius of the droplet, independent of the boundary conditions.
In both the phenomenological and radiation models, similar to earlier results, the value of the concentration at $`r=R`$ for large times (see (29) and (39)) is exactly equal to $`\beta =k/m`$, independent of $`R`$. This suggests that as far as large $`R`$ and large time behaviours are concerned, we can use the boundary condition
$$c=\beta =\frac{k}{m}$$
(50)
at $`r=R`$, instead of the phenomenological and radiation boundary conditions. Indeed, the value of the concentration at $`r=R`$ is the only parameter needed to calculate the modifications of the asymptotic behaviours due to the detachment. With this boundary condition imposed for all times, the asymptotic results (29-31) and (39-41) become exact. Therefore, a constant concentration of the monomers at $`r=R`$ for all times gives an exact growth rate of the form $`[t/\mathrm{ln}(t)]^{1/3}`$.
We also examined another model, with a constant flux of monomers at $`r=R`$. The results showed that the radius of the droplet grows as $`t^{1/3}`$ for all times. Thus, the growth laws predicted by a constant concentration and by a constant flux at the boundary differ from each other by a slowly varying logarithmic factor.
# Multivariate Regression Depth
## 1 Introduction
Linear regression asks for an affine subspace (a flat) that fits a set of data points. The most familiar case assumes $`d-\mathrm{1}`$ independent or explanatory variables and one dependent or response variable, and fits a hyperplane to explain the dependent variable as a linear function of the independent variables. Quite often, however, there may be more than one dependent variable, and the multivariate regression problem requires fitting a lower-dimensional flat to the data points, perhaps even a succession of flats of increasing dimensions. Multivariate least-squares regression is easily solved by treating each dependent variable separately, but this is not correct for other common forms of regression such as least absolute deviation or least median of squares.
Rousseeuw and Hubert introduced regression depth as a robust criterion for linear regression. The regression depth of a hyperplane $`H`$ fitting a set of $`n`$ points is the minimum number of points whose removal makes $`H`$ into a nonfit. A nonfit is a hyperplane that can be rotated to vertical (that is, parallel to the dependent variable's axis) without passing through any points. The intuition behind this definition is that a vertical hyperplane posits no relationship between the dependent and independent variables, and hence many points should have to be invalidated in order to make a good regression hyperplane combinatorially equivalent to a vertical hyperplane. Since this definition does not make use of the size of the residuals, but only uses their signs, it is robust in the face of skewed or heteroskedastic (data-dependent) error models. Regression depth also has a number of other nice properties including invariance under affine transformations, and a connection to the classical notions of center points and data depth.
This paper generalizes regression depth to the case of more than one dependent variable, that is, to fitting a $`k`$-flat to points in $`R^d`$. This generalization is not obvious: for example, consider fitting a line to points in $`R^\mathrm{3}`$. Generic lines can be rotated wherever one likes without passing through data points, so how are we to distinguish one line from another?
We start by reviewing previous work (Section 2) and stating our basic definitions (Section 3). We then provide a lemma that may be of independent interest, on finding a family of large subsets of any point set such that the family has no hyperplane transversal (Section 4). We prove the existence of deep $`k`$-flats for any $`k`$ (Section 5), and give tight bounds on depths of lines in $`R^\mathrm{3}`$ (Section 6). We conclude by discussing related generalizations of Tverberg's theorem (Section 7), describing possible connections between $`k`$-flats and $`(d-k-\mathrm{1})`$-flats (Section 8), and outlining the algorithmic implications of our existence proof (Section 9). Along with the results proven in each section, we list open problems for further research.
## 2 Previous Work
Regression depth was introduced by Rousseeuw and Hubert as a combinatorial measure of the quality of fit of a regression hyperplane. An older notion, variously called data depth, location depth, halfspace depth, or Tukey depth, similarly measures the quality of fit of a single-point estimator. It has long been known that there exists a point of location depth at least $`n/(d+\mathrm{1})`$ (a center point). Rousseeuw and Hubert provided a construction called the catline for computing a regression line for a planar point set with depth at least $`n/\mathrm{3}`$, and conjectured that in higher dimensions as well there should always exist a regression hyperplane of depth $`n/(d+\mathrm{1})`$. Steiger and Wenger proved that a deep regression hyperplane always exists, but with a much smaller fraction than $`\mathrm{1}/(d+\mathrm{1})`$. Amenta, Bern, Eppstein, and Teng solved the conjecture using an argument based on Brouwer's fixed-point theorem and a close connection between regression depth and center points.
On the algorithmic front, Rousseeuw and Struyf gave algorithms for testing the regression depth of a hyperplane. Their time bounds are exponential in the dimension, unsurprising since the problem is NP-complete for unbounded dimension. For the planar case, Van Kreveld, Mitchell, Rousseeuw, Sharir, Snoeyink, and Speckmann gave an $`O(nlog^\mathrm{2}n)`$ algorithm for computing a deepest line. Langerman and Steiger later improved this to an optimal $`O(nlogn)`$ time bound.
## 3 Definitions
Although regression is naturally an affine rather than projective concept, our constructions and definitions live most gracefully in projective space. We view $`d`$-dimensional real projective space as a renaming of objects in $`(d+\mathrm{1})`$-dimensional affine space. (Affine space is the familiar Euclidean space, only we have not specified a distance metric.) A $`k`$-flat, for $`\mathrm{1}\le k\le d`$, through the origin of $`(d+\mathrm{1})`$-dimensional affine space is a projective $`(k-\mathrm{1})`$-flat. In particular a line through the origin is a projective point and a plane through the origin is a projective line. A projective line segment is the portion of a projective line between two projective points, that is, a pair of opposite planar wedges with vertex at the origin.
We can embed affine $`d`$-space into projective space as a hyperplane that misses the origin. There is a unique line through any point of this hyperplane and the origin, and hence each point of affine space corresponds to a unique projective point. There is, however, one projective hyperplane, and many projective $`k`$-flats for $`k<d-\mathrm{1}`$, without corresponding affine flats; these are the projective $`k`$-flats parallel to the affine space. We say that these flats are at infinity.
Each projective point $`p`$ has a dual projective hyperplane $`D(p)`$, namely the hyperplane orthogonal to $`p`$ at the origin in $`(d+\mathrm{1})`$-dimensional affine space. Similarly a projective $`k`$-flat dualizes to its orthogonal $`(d-k)`$-flat. Notice that in projective space, unlike in affine space, there are no exceptional cases: each $`k`$-flat is the dual of a $`(d-k)`$-flat.
Now let $`X`$ be a set of points in $`d`$-dimensional projective space. (From now on we shall just say "point", "line", etc. rather than "projective point", "projective line", when there is no risk of confusion.) We now propose a key definition: a distance between flats with respect to the points in $`X`$. The definition is more intuitive in the dual formulation than in the primal, but we give both below for completeness. Let $`D(F)`$ denote the flat that is dual to flat $`F`$ and let $`D(X)`$ denote the set of hyperplanes that are dual to points of $`X`$. A double wedge is the (closed) region between two projective hyperplanes.
###### Definition 1
The crossing distance between two flats $`F`$ and $`G`$ with respect to $`X`$ is the minimum number of hyperplanes of $`D(X)`$ intersected by a (closed) projective line segment with one endpoint on $`D(F)`$ and the other on $`D(G)`$. In the primal formulation, the crossing distance between $`F`$ and $`G`$ is the minimum number of points of $`X`$ in a double wedge that contains $`F`$ in one bounding hyperplane and $`G`$ in the other.
We now turn our attention to linear regression and, for ease of understanding, we return temporarily to $`d`$-dimensional affine space. Assume that we designate $`k`$ dimensions as independent variables and $`d-k`$ as dependent variables. Let $`I`$ denote the linear subspace spanned by the independent dimensions. We call a $`k`$-flat vertical if its projection onto $`I`$ is not full-dimensional, that is, if its projection is not all of $`I`$. For example, let $`k=\mathrm{1}`$ and $`d=\mathrm{3}`$ and think of the $`x`$-axis as representing the independent variable; then any line contained in a vertical plane (that is, parallel to the $`yz`$-plane) is vertical.
In projective space, a $`k`$-flat is vertical if and only if it contains a point in a particular $`(d-k-\mathrm{1})`$-flat at infinity, which we call the $`(d-k-\mathrm{1})`$-flat at vertical infinity and denote by $`V_{d-k-\mathrm{1}}`$.
###### Definition 2
The regression depth of a $`k`$-flat $`F`$ is its crossing distance from $`V_{d-k-\mathrm{1}}`$. Equivalently, the regression depth of $`F`$ is the minimum number of points whose removal makes $`F`$ into a nonfit, where a nonfit is a $`k`$-flat with crossing distance zero from $`V_{d-k-\mathrm{1}}`$.
Any $`k`$-flat at infinity meets $`V_{d-k-\mathrm{1}}`$ and therefore has depth zero. Therefore, any method for selecting a $`k`$-flat of nonzero regression depth will automatically choose a $`k`$-flat coming from the original affine space, rather than one that exists only in the projective space used for our definitions.
Note that, unlike the case for ordinary least squares, there does not seem to be any way of solving $`k`$-flat regression separately for each dependent variable. Even for the problem of finding a line in $`R^\mathrm{3}`$, combining the solution to two planar regression lines may result in a nonfit.
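Definition 2 can be made concrete in the classical planar setting ($`k=\mathrm{1}`$, $`d=\mathrm{2}`$, one independent variable $`x`$), where a nonfit is a line that can be rotated to vertical without touching any point. The following brute-force sketch (plain Python, quadratic time; the function name and sample points are illustrative, and it assumes no point lies exactly on the line) computes the depth of a candidate line by trying every combinatorially distinct vertical line as the second boundary of the double wedge, in both rotation directions:

```python
def regression_depth_2d(points, a, b):
    """Depth of the line y = a*x + b for planar points (k = 1, d = 2).

    A double wedge is bounded by the line and a vertical line x = v;
    the depth is the fewest points over all such wedges (both rotation
    directions).  Assumes no data point lies exactly on the line.
    """
    xs = sorted(p[0] for p in points)
    # candidate vertical lines: outside the data and between consecutive x's
    candidates = [xs[0] - 1.0, xs[-1] + 1.0]
    candidates += [(u + w) / 2 for u, w in zip(xs, xs[1:])]
    best = len(points)
    for v in candidates:
        above_left = sum(1 for x, y in points if y > a * x + b and x < v)
        below_left = sum(1 for x, y in points if y < a * x + b and x < v)
        above_right = sum(1 for x, y in points if y > a * x + b and x > v)
        below_right = sum(1 for x, y in points if y < a * x + b and x > v)
        best = min(best, above_left + below_right, below_left + above_right)
    return best

pts = [(0, 0.1), (1, 0.9), (2, 2.2), (3, 2.8), (4, 4.3)]
print(regression_depth_2d(pts, 1.0, 0.0))  # depth of y = x here: 2
```

For these five points the line $`y=x`$ has depth 2, consistent with the $`n/\mathrm{3}`$ guarantee of the catline mentioned in Section 2.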
## 4 Nontransversal Families
In order to prove that deep $`k`$-flats exist, we need some combinatorial lemmas on large subsets of points without a hyperplane transversal.
###### Definition 3
Let $`S`$ be a set of points. Then we say that a hyperplane $`H`$ cuts $`S`$ if both of the two open halfspaces bounded by $`H`$ contain at least one point of $`S`$. We say that a family of sets is transversal if there is a hyperplane that cuts all sets in the family.
###### Lemma 1
Let $`d`$ be a constant, and assume we are given a set of $`n`$ points in $`R^d`$ and a parameter $`p`$. Then we can partition the points into $`p`$ subsets, with at most $`\mathrm{2}n/p`$ points in each subset, such that any hyperplane cuts $`o(p)`$ of the subsets.
###### Lemma 2
Let $`p\ge q>d`$ be constants. Then there is a constant $`C(p,q,d)`$ with the following property: If $`\mathcal{F}`$ is any family of point sets in $`R^d`$, such that any $`p`$-tuple of sets in $`\mathcal{F}`$ contains a transversal subfamily of $`q`$ sets, then $`\mathcal{F}`$ can be partitioned into $`C(p,q,d)`$ transversal subfamilies.
###### Theorem 1
Let $`d`$ be a constant. Then there is a constant $`P(d)`$ with the following property: For any set $`S`$ of $`n`$ points in $`R^d`$, we can find a non-transversal family of $`d+\mathrm{1}`$ subsets of $`S`$, such that each subset in the family contains at least $`n/P(d)`$ points of $`S`$.
Proof: Choose $`p`$ to be a multiple of three, sufficiently large that the $`o(p)`$ bound of Lemma 1 is strictly smaller than $`p/(\mathrm{3}C(d+\mathrm{1},d+\mathrm{1},d))`$, and let $`P(d)=\mathrm{2}p`$. By Lemma 1, partition $`S`$ into $`p`$ subsets of at most $`\mathrm{2}n/p`$ points, such that any hyperplane cuts few subsets.
Let $`\mathcal{F}`$ be the family consisting of the largest $`p/\mathrm{3}`$ subsets in the partition. If the smallest member of $`\mathcal{F}`$ contains $`m`$ points, then the total size of all the members of the partition would have to be at most $`(p/\mathrm{3})\cdot \mathrm{2}n/p+(\mathrm{2}p/\mathrm{3})m=\mathrm{2}n/\mathrm{3}+\mathrm{2}pm/\mathrm{3}`$, but this total size is just $`n`$, so $`m\ge n/(\mathrm{2}p)`$ and each member of $`\mathcal{F}`$ contains at least $`n/P(d)`$ points.
If each $`(d+\mathrm{1})`$-tuple of sets in $`\mathcal{F}`$ were transversal, we could apply Lemma 2 and partition $`\mathcal{F}`$ into $`C(d+\mathrm{1},d+\mathrm{1},d)`$ transversal subfamilies, one of which would have to contain at least $`|\mathcal{F}|/C(d+\mathrm{1},d+\mathrm{1},d)=p/(\mathrm{3}C(d+\mathrm{1},d+\mathrm{1},d))`$ subsets. But this violates the $`o(p)`$ bound of Lemma 1, so $`\mathcal{F}`$ must contain a non-transversal $`(d+\mathrm{1})`$-tuple. This tuple fulfills the conditions of the statement of the theorem.
Clearly, $`P(\mathrm{1})=\mathrm{2}`$ since the median partitions any set of points on a line into two nontransversal subsets. Figure 1 depicts two different constructions showing that $`P(\mathrm{2})\le \mathrm{6}`$. Although the bound of six is tight for these two constructions (as can be seen by the example of points equally spaced on a circle) we do not know whether there might be a different construction that achieves a better bound; the best lower bound we have found is the following:
###### Theorem 2
$`P(\mathrm{2})\ge {\displaystyle \frac{\pi }{\mathrm{2}\mathrm{sin}^{-\mathrm{1}}\frac{\mathrm{1}}{\mathrm{3}}}}\approx \mathrm{4.622}`$.
Proof: We form a distribution on the plane by centrally projecting the uniform distribution on a sphere. We show that any nontransversal triple for this distribution must have a set with measure at most $`\mathrm{1}/\mathrm{4.622}`$ times the total measure. The same bound then holds in the limit for discrete point sets formed by taking $`\epsilon `$-approximations of this distribution.
Let $`S_i`$, $`i\in \{\mathrm{1},\mathrm{2},\mathrm{3}\}`$, denote the three nontransversal subsets of the plane maximizing the minimum measure of any $`S_i`$. Without loss of generality, each $`S_i`$ is convex. Consider the three lines tangent to two of the $`S_i`$, and separating them from the third set (such a line must exist since the sets are nontransversal). These lines form an arrangement with seven (possibly degenerate) faces: a triangle adjacent on its edges to three three-sided infinite cells, and on its vertices to three two-sided infinite cells. The sets $`S_i`$ coincide with the three-sided infinite cells: any set properly contained in such a cell could be extended to the whole cell without violating the nontransversality condition, and if they instead coincided with the two-sided cells we could shrink the arrangement's central triangle while increasing the sizes of all three $`S_i`$. The two arrangements in Figure 1 can both be viewed as such three-line arrangements, degenerate in different ways.
Each line in the plane lifts by central projection to a great circle on the sphere. Consider the great circles formed by lifting four lines: the three lines considered above and the line at infinity. Any arrangement of four circles on the sphere cuts the sphere in the pattern of a (possibly degenerate) cuboctahedron (Figure 2(a)). The three-sided infinite cells in the plane lift to quadrilateral faces of this cuboctahedron. Note that the area of a spherical quadrilateral is the sum of its internal angles, minus $`\mathrm{2}\pi `$.
Form the dual of the arrangement by treating each great circle as an "equator" and placing a pair of points at the corresponding two poles. The geodesics between these points have the pattern of a cuboid (Figure 2(b)) such that the length of each geodesic is an angle complementary to one of the cuboctahedron quadrilaterals' internal angles. Thus, the cuboid minimizing the maximum quadrilateral perimeter corresponds to the cuboctahedron maximizing the minimum quadrilateral area. But any spherical cuboid has at least one face covering at least one-sixth of the sphere, and the minimum perimeter for such a quadrilateral is achieved when the quadrilateral is a square. Therefore, the regular cube minimizes the maximum perimeter and the regular cuboctahedron maximizes the minimum area. The ratio of the area of a full hemisphere to a regular cuboctahedron's square face area is the value given, $`\frac{\pi }{\mathrm{2}\mathrm{sin}^{-\mathrm{1}}(\mathrm{1}/\mathrm{3})}\approx \mathrm{4.622}`$.
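The numerical value of the bound is easy to evaluate directly (plain Python):

```python
import math
print(math.pi / (2 * math.asin(1 / 3)))  # 4.6220..., the constant of Theorem 2
```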
We also do not know tight bounds on $`P(d)`$ for $`d\ge \mathrm{3}`$. The proof of Theorem 1 (using the best known bounds in Lemma 1) leads to upper bounds of the form $`O(C(d+\mathrm{1},d+\mathrm{1},d)^d)`$. We may be able to improve this somewhat, to $`O(C(d+\mathrm{1},d,d-\mathrm{1})^{d-\mathrm{1}})`$, by a more complicated construction: project the points arbitrarily onto a $`(d-\mathrm{1})`$-dimensional subspace, find a partition in the subspace, and use Lemma 2 to find a family of $`d+\mathrm{1}`$ subsets such that no subfamily of $`d`$ subsets has a transversal. As in the catline construction, group these subsets into $`d`$ pairs, and form a ham sandwich cut in $`R^d`$ of these pairs, in such a way that this cut partitions each subset of the family in the same proportion $`a:b`$, and such that the half-subsets of size $`a`$ are above or below the ham sandwich cut accordingly as the members of the family of $`d+\mathrm{1}`$ subsets are on one or the other side of a Radon partition of those subsets in $`R^{d-\mathrm{1}}`$. Without loss of generality, $`a>b`$; then choose the $`d+\mathrm{1}`$ subsets required by Theorem 1 to be the ones of size $`a`$.
###### Open Problem 1
Prove tighter bounds on $`P(d)`$ for $`d\ge \mathrm{2}`$.
## 5 Deep $`k`$-Flats
It is previously known that deep $`k`$-flats exist for $`k=\mathrm{0}`$ and $`k=d-\mathrm{1}`$. In this section we show that such flats exist for all other values of $`k`$.
We first need one more result, a common generalization of centerpoints and the ham sandwich theorem:
###### Lemma 3 (The Center Transversal Theorem)
Let $`k+\mathrm{1}`$ point sets be given in $`R^d`$, each containing at least $`m`$ points, where $`\mathrm{0}\le k<d`$. Then there exists a $`k`$-flat $`F`$ such that any closed halfspace containing $`F`$ contains at least $`m/(d-k+\mathrm{1})`$ points from each set.
The weaker bound $`m/(d+\mathrm{1})`$ can be proven simply by choosing a flat through the centerpoints of the subsets.
###### Theorem 3
Let $`d`$ and $`\mathrm{0}\le k<d`$ be constants. Then there is a constant $`R(d,k)`$ such that for any set of $`n`$ points with $`k`$ independent and $`d-k`$ dependent degrees of freedom, there exists a $`k`$-flat of regression depth at least $`n/R(d,k)`$.
Proof: Project the point set onto the subspace spanned by the $`k`$ independent directions, in such a way that the inverse image of each point in the projection is a $`(d-k)`$-flat containing $`V_{d-k-\mathrm{1}}`$. By Theorem 1, we can find a family of $`k+\mathrm{1}`$ subsets of the data points, each with $`n/P(k)`$ points, such that the $`k`$-dimensional projection of this family has no transversal. We then let $`F`$ be the $`k`$-flat determined by applying Lemma 3 to this family of subsets.
Then consider any double wedge bounded by a hyperplane containing $`F`$ and a hyperplane containing $`V_{d-k-\mathrm{1}}`$. The vertical boundary of this double wedge projects to a hyperplane in $`R^k`$, so it must miss one of the $`k+\mathrm{1}`$ subsets in the family. Within this missed subset the double wedge appears to be simply a halfspace through $`F`$. By Lemma 3, the double wedge must therefore contain at least $`n/((d-k+\mathrm{1})P(k))`$ points. Thus if we let $`R(d,k)=(d-k+\mathrm{1})P(k)`$, the theorem is satisfied.
For $`k=\mathrm{0}`$ or $`k=d-\mathrm{1}`$ we know that $`R(d,k)=d+\mathrm{1}`$. However, exact values are not known for intermediate values of $`k`$.
###### Open Problem 2
Prove tighter bounds on $`R(d,k)`$ for $`\mathrm{1}\le k\le d-\mathrm{2}`$.
The following conjecture would follow from the assumption that $`R(d,k)`$ is a linear function of $`d`$ for fixed $`k`$ (as the $`O(d)`$ bound of Theorem 3 makes plausible), since $`R(k,k)=\mathrm{1}`$ and $`R(k+\mathrm{1},k)=k+\mathrm{2}`$. It also matches the known results $`R(d,\mathrm{0})=R(d,d-\mathrm{1})=d+\mathrm{1}`$ and the bounds $`R(d,\mathrm{1})\le \mathrm{2}d-\mathrm{1}`$ and $`R(\mathrm{3},\mathrm{1})=\mathrm{5}`$ of the following section.
###### Conjecture 1
$`R(d,k)=(k+\mathrm{1})(d-k)+\mathrm{1}`$.
## 6 Tighter Bounds for Lines
The proof of Theorem 3 shows that $`R(d,\mathrm{1})\le \mathrm{2}d`$; this can be slightly improved using a technique of overlapping sets borrowed from the catline construction.
###### Theorem 4
$`R(d,\mathrm{1})\le \mathrm{2}d-\mathrm{1}`$.
Proof: The proof of Theorem 3 can be viewed as projecting the points onto a horizontal line, dividing the line into two rays at the median of the points, and applying the center transversal theorem to the two sets of $`n/\mathrm{2}`$ points contained in each ray. Instead, we project the points onto the horizontal line as before, but partition this line into three pieces: two rays containing $`(d-\mathrm{1})n/(\mathrm{2}d-\mathrm{1})`$ points each, and a line segment in the middle containing the remaining $`n/(\mathrm{2}d-\mathrm{1})`$ points. We then apply the center transversal theorem to two sets $`S_\mathrm{1}`$ and $`S_\mathrm{2}`$ of $`dn/(\mathrm{2}d-\mathrm{1})`$ points each, formed by the points having a projection in the union of the middle segment and one of the two rays. This theorem finds a line such that no halfspace containing it has fewer than $`n/(\mathrm{2}d-\mathrm{1})`$ points in either of the sets $`S_i`$. We claim that this line has regression depth at least $`n/(\mathrm{2}d-\mathrm{1})`$.
To prove this, consider any double wedge in which one hyperplane boundary contains the regression line, and the other hyperplane boundary is vertical. The vertical hyperplane then intersects the horizontal projection line in a single point. If this intersection point is in one of the two rays, then the vertical hyperplane misses the set $`S_i`$ formed by the other ray and the middle segment. In this case, the double wedge contains the same subset of $`S_i`$ as a halfspace bounded by the double wedge's other bounding plane, and so contains at least $`n/(\mathrm{2}d-\mathrm{1})`$ points of $`S_i`$.
In the remaining case, the vertical boundary of the double wedge intersects the horizontal projection line in its middle segment. Within each of the two sets to which we applied the center transversal theorem, the double wedge differs from a halfspace (bounded by the same nonvertical plane) only within the middle set. But any hyperplane bounds two different halfspaces, and the halfspace approximating the double wedge in $`S_\mathrm{1}`$ is opposite the halfspace approximating the double wedge in $`S_\mathrm{2}`$. Therefore, if we let $`X_i`$ denote the set of points in the halfspace but not in the double wedge, then $`X_\mathrm{1}`$ and $`X_\mathrm{2}`$ are disjoint subsets of the middle $`n/(\mathrm{2}d-\mathrm{1})`$ points. The number of points in the double wedge within one set $`S_i`$ must be at least $`n/(\mathrm{2}d-\mathrm{1})-|X_i|`$, so the total number of points in the double wedge is at least $`\mathrm{2}n/(\mathrm{2}d-\mathrm{1})-|X_\mathrm{1}\cup X_\mathrm{2}|\ge \mathrm{2}n/(\mathrm{2}d-\mathrm{1})-n/(\mathrm{2}d-\mathrm{1})=n/(\mathrm{2}d-\mathrm{1})`$.
Thus in all cases a double wedge bounded by a hyperplane through the regression line and by a vertical hyperplane contains at least $`n/(\mathrm{2}d-\mathrm{1})`$ points, showing that the line has depth at least $`n/(\mathrm{2}d-\mathrm{1})`$.
As evidence that this $`\mathrm{2}d-\mathrm{1}`$ bound may be tight, we present a matching lower bound for $`d=\mathrm{3}`$.
###### Theorem 5
$`R(\mathrm{3},\mathrm{1})=\mathrm{5}`$.
Proof: We have already proven that $`R(\mathrm{3},\mathrm{1})\le \mathrm{5}`$, so we need only show that $`R(\mathrm{3},\mathrm{1})\ge \mathrm{5}`$. We work in the dual space, and construct an arrangement of $`n`$ planes in $`R^\mathrm{3}`$, for $`n`$ any multiple of 5, such that any line has depth at most $`n/\mathrm{5}`$.
Our arrangement consists of five groups of nearly parallel and closely spaced planes, which we label $`A_\mathrm{1}`$, $`A_\mathrm{2}`$, $`B_\mathrm{1}`$, $`B_\mathrm{2}`$, and $`C`$. Rather than describe the whole arrangement, we describe the line arrangements in the planar cross-sections at $`x=\mathrm{1}`$ and $`x=-\mathrm{1}`$. Recall that the depth of a line in the three-dimensional arrangement is the minimum number of planes crossed by any vertical ray starting on the line. Limiting attention to rays contained in the two cross-sections (and hence to the planar depth of the two points where the given line intersects these cross-sections) gives an upper bound on the depth of the line, and so a lower bound on $`R(\mathrm{3},\mathrm{1})`$.
In the first cross-section, we place the groups of lines as shown in Figure 3(a), with the region where $`A_\mathrm{1}`$ and $`A_\mathrm{2}`$ cross contained inside the triangle formed by the other three groups. Moreover, $`A_\mathrm{1}`$ and $`A_\mathrm{2}`$ do not cross side $`B_\mathrm{2}`$ of the triangle, instead crossing group $`B_\mathrm{2}`$ at points outside the triangle. The points where members of $`A_\mathrm{1}`$ intersect each other are positioned on the segment from crossing $`CA_\mathrm{1}`$ to crossing $`A_\mathrm{1}A_\mathrm{2}`$. Similarly, the crossings within $`A_\mathrm{2}`$ are situated on the segment from crossing $`A_\mathrm{1}A_\mathrm{2}`$ to crossing $`A_\mathrm{2}B_\mathrm{1}`$. The crossings within $`B_\mathrm{1}`$, $`B_\mathrm{2}`$, and $`C`$ are situated along the corresponding sides of the triangle formed by these three groups.
In the cross-section formed as described above, points from most cells in the arrangement can reach infinity while crossing only one group, and so have depth at most $`n/5`$. It is only within the segments from $`CA_1`$ to $`A_1A_2`$ and from $`A_1A_2`$ to $`A_2B_1`$ that a point can have higher depth. The arrangement is qualitatively similar for nearby cross-sections $`x=1\pm \epsilon `$. Therefore, any deep line in $`R^3`$ must be either nearly parallel to $`A_1`$ and not near any $`B_i`$, or nearly parallel to $`A_2`$ and not near $`B_2`$.
In the second cross-section (Figure 3(b)), the groups $`A_i`$ and $`B_i`$ reverse roles: the point where $`B_\mathrm{1}`$ and $`B_\mathrm{2}`$ cross is contained in the triangle determined by the other three groups, and the other details of the arrangement are situated in a corresponding manner. Therefore, any deep line would have to be either nearly parallel to $`B_\mathrm{1}`$ and not near any $`A_i`$, or nearly parallel to $`B_\mathrm{2}`$ and not near $`A_\mathrm{2}`$.
There is no difficulty forming the two cross-sections described above from a single plane arrangement, since (as shown in Figures 3(a) and (b)) the slopes of the lines within each group can remain the same in each cross-section. But the requirements imposed on a deep line by these two cross-sections are contradictory; therefore no line can have depth greater than $`n/5`$ in this arrangement.
We believe that a similar proof can be used to prove a more general $`2d-1`$ lower bound on $`R(d,1)`$ in any dimension, matching the upper bound in Theorem 4: form an arrangement with hyperplane groups $`A_i`$, $`B_i`$, and $`C`$, so that in one cross-section the $`A_i`$ meet in a vertex contained in a simplex formed by the other groups, and in the other cross-section the groups $`A_i`$ and $`B_i`$ exchange roles. However we have not worked out the details of where to place the intersections within groups, how to choose hyperplane angles such that the inner groups miss a face of the outer simplex in both cross-sections, or which cells of the resulting arrangements can have high depth.
## 7 Generalizations of Tverberg’s Theorem
A Tverberg partition of a set of point sites is a partition of the sites into subsets, the convex hulls of which all have a common intersection. The Tverberg depth of a point $`t`$ is the maximum cardinality of any Tverberg partition for which the common intersection contains $`t`$. Note that the Tverberg depth is a lower bound on the location depth. Tverberg’s theorem is that there always exists a point with Tverberg depth $`n/(d+1)`$ (a Tverberg point); this result generalizes both the existence of center points (since any Tverberg point must be a center point) and Radon’s theorem that any $`d+2`$ points have a Tverberg partition into two subsets.
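As a concrete illustration of the base case (a small numerical sketch added here, not part of the original argument), Radon’s theorem can be made effective in a few lines: any $`d+2`$ points admit an affine dependence, and splitting its coefficients by sign yields the two subsets and a point in both hulls.

```python
import numpy as np

def radon_partition(points):
    """Split d+2 points in R^d into two subsets whose convex hulls meet.

    Returns (part1, part2, radon_point); assumes len(points) == d + 2.
    """
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    assert n == d + 2
    # Find a nonzero lam with sum_i lam_i * x_i = 0 and sum_i lam_i = 0:
    # lam spans the null space of the (d+1) x n matrix below.
    A = np.vstack([pts.T, np.ones(n)])
    lam = np.linalg.svd(A)[2][-1]   # right vector of the smallest singular value
    pos = lam > 0
    radon_point = (lam[pos] @ pts[pos]) / lam[pos].sum()
    return pts[pos], pts[~pos], radon_point

# Example in the plane: (1,1) is the Radon point of these four sites.
print(radon_partition([[0, 0], [2, 0], [0, 2], [1, 1]])[2])
```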
Another way of expressing Tverberg’s theorem is that for any point set we can find both a partition into $`n/(d+1)`$ subsets, and a point $`t`$, such that $`t`$ has nonzero depth in each subset of the partition. Stated this way, there is a natural generalization to higher dimensional flats:
###### Theorem 6
Let $`d`$ and $`0\le k<d`$ be constants. Then there is a constant $`T(d,k)`$ such that for any set of $`n`$ points with $`k`$ independent and $`d-k`$ dependent degrees of freedom, there exists a $`k`$-flat $`F`$ and a partition of the points into $`n/T(d,k)`$ subsets, such that $`F`$ has nonzero regression depth in each subset.
Proof: As in the proof of Theorem 3, we project the point set onto the subspace spanned by the $`k`$ independent directions, in such a way that the inverse image of each point in the projection is a $`(d-k)`$-flat containing $`V_{d-k-1}`$. By Theorem 1, we can find a family of $`k+1`$ subsets $`S_i`$, each with $`n/P(k)`$ points, such that the $`k`$-dimensional projection of this family has no transversal. We then find a Tverberg point $`t_i`$ and a Tverberg partition of each set $`S_i`$ into subsets $`T_{i,j}`$, for $`1\le j\le n/(P(k)(d+1))`$. We let $`F`$ be the $`k`$-flat spanning these $`k+1`$ Tverberg points. We form each set $`T_j`$ in our Tverberg partition as the union $`\cup _iT_{i,j}`$. Some points of $`S`$ may not belong to any set $`T_{i,j}`$, in which case they can be assigned arbitrarily to some set $`T_j`$ in the partition.
Then consider any double wedge bounded by a hyperplane containing $`F`$ and a hyperplane containing $`V_{d-k-1}`$. The vertical boundary of this double wedge projects to a hyperplane in $`R^k`$, so it must miss one of the $`k+1`$ subsets $`S_i`$. Within $`S_i`$ the double wedge appears to be simply a halfspace through $`t_i`$. It therefore contains at least one point of each set $`T_{i,j}`$ and a fortiori at least one point of each set $`T_j`$. Thus if we let $`T(d,k)=(d+1)P(k)`$ the theorem is satisfied.
We know that $`T(d,0)=d+1`$ by Tverberg’s theorem, and the catline construction shows that $`T(2,1)=3`$. However even in the case $`k=d-1`$ we do not know a tight bound; Rousseeuw and Hubert conjectured that $`T(d,d-1)=d+1`$ but the best known bounds from our previous paper are $`T(d,d-1)\le d(d+1)`$ and $`T(3,2)\le 6`$.
###### Open Problem 3
Prove tighter bounds on $`T(d,k)`$ for $`1\le k\le d-1`$.
## 8 Connection Between $`k`$-Flats and $`(d-k-1)`$-Flats
There is a natural relation between finding a deep $`k`$-flat and finding a deep $`(d-k-1)`$-flat: in both cases one wants to find a $`k`$-flat and a $`(d-k-1)`$-flat that are far apart from each other, and the problems only differ in which of the two flats is fixed at vertical infinity, and which is to be found.
In our previous paper we exploited this connection in the following way, to show that $`R(d,d-1)=d+1`$. A centerpoint (corresponding to the value of $`R(d,0)`$) is just a point far from a given “hyperplane at infinity”; in projective $`d`$-space, this hyperplane can be chosen arbitrarily, resulting in different centerpoint locations. We first found an appropriate way to replace the input point set by a smooth measure, and modify the definition of a centerpoint, in such a way that we could show that the modified centerpoint location varied continuously and equivariantly, as a function of the position of the hyperplane at infinity. In oriented projective space, the set of hyperplane locations and the set of centerpoint locations are both topological $`d`$-spheres, so we could have applied the Borsuk-Ulam theorem (in the form that every continuous equivariant function from the sphere to itself is surjective) to find a hyperplane in the inverse image of the point at vertical infinity; this hyperplane is the desired deep regression plane. Our actual proof used the Brouwer fixed point theorem in a similar way, avoiding the need to use the equivariance property.
Conjecture 1 implies in more generality that $`R(d,k)=R(d,d-k-1)`$, and one would naturally hope for a similar proof of this equality. There are two obstacles to such a hope: First, we do not know how to modify the definition of a deep $`k`$-flat in such a way as to choose a unique flat which varies continuously as a function of the location of the $`(d-k-1)`$-flat at infinity. A similar lack of a continuous version of Tverberg’s theorem blocked our attempts to prove that $`T(d,d-1)=d+1`$. However, some of our constructions (for instance the bound $`2(d+1)`$ on $`R(d,1)`$ formed by vertically bisecting the points and choosing a line through the centerpoints of each half) can be made continuous using ideas from our previous paper. Second, and more importantly, the space $`F_k^d`$ of oriented $`k`$-flats does not form a topological sphere, and there can be continuous equivariant non-surjective functions from this space to itself. Nevertheless there might be a way of using generalizations of the Borsuk-Ulam theorem or a modification of our Brouwer fixed point argument to show that the deep $`k`$-flat function must be surjective, perhaps using the additional property that a deep $`k`$-flat cannot be incident to $`V_{d-k-1}`$.
###### Open Problem 4
Can there exist a continuous non-surjective $`Z_2`$-equivariant map $`c`$ from $`F_{d-k-1}^d`$ to $`F_k^d`$ such that any $`(d-k-1)`$-flat $`V`$ and its image $`c(V)`$ are never incident?
###### Open Problem 5
Does $`R(d,k)=R(d,d-k-1)`$ for $`1\le k\le d-2`$?
###### Open Problem 6
Does $`T(d,k)=T(d,d-k-1)`$ for $`0\le k\le d-1`$?
## 9 Algorithmic Implications
We now show how to use our proof that deep flats exist as part of an algorithm for finding an approximate deepest flat. We begin with an inefficient exact algorithm.
###### Theorem 7
Let $`d`$ and $`k`$ be constants. Then we can find the deepest $`k`$-flat for a collection of $`n`$ points in $`R^d`$, in time $`n^{O(\mathrm{1})}`$.
Proof: Let $`A`$ be the arrangement of hyperplanes dual to the $`n`$ given points. The distance from points in $`R^d`$ to $`V_{d-k-1}`$ is constant within each cell of $`A`$, and all such distances can be found in time $`O(n^d)`$ by applying a breadth first search procedure to the arrangement. The depth of a $`k`$-flat $`F`$ is just the minimum depth of any cell of $`A`$ pierced by $`F`$. Any two flats that pierce the same set of cells of $`A`$ have the same depth.
The space of $`k`$-flats forms a $`(k+1)(d-k)`$-dimensional algebraic set $`F_k^d`$, in which the flats touching any $`(d-k-1)`$-dimensional cell of $`A`$ form a subset of codimension one. The arrangement of these $`O(n^d)`$ subsets partitions $`F_k^d`$ into $`O(n^{d(k+1)(d-k)+\epsilon })`$ cells, corresponding to collections of flats that all pierce the same set of cells. We can construct this arrangement, and walk from cell to cell maintaining a priority queue of the depths of the cells in $`A`$ pierced by the flats in the current cell, in time $`O(n^{d(k+1)(d-k)+\epsilon })`$.
We now use standard geometric sampling techniques to combine this exact algorithm with our lower bound on depth, resulting in an asymptotically efficient approximation algorithm.
###### Theorem 8
Let $`d`$, $`k`$, and $`\delta >0`$ be constants. Then we can find a $`k`$-flat with depth within a $`(1-\delta )`$ factor of the maximum, for a collection of $`n`$ points in $`R^d`$, in time $`O(n)`$.
Proof: We first construct an $`\epsilon `$-approximation $`S`$ of the points, for the range space consisting of double wedges with one vertical boundary, where $`\epsilon =\delta /(2R(d,k))`$. Then if a flat $`F`$ has depth $`D`$ with respect to $`S`$, $`Dn/|S|`$ is within an additive $`\epsilon n`$ term of the true depth of $`F`$ with respect to the original point set. $`S`$ can be found with $`|S|=O(\epsilon ^{-2}\mathrm{log}\epsilon ^{-1})`$, in time $`O(n)`$, using standard geometric sampling algorithms. We then let $`F`$ be the deepest flat for $`S`$.
Suppose the optimal flat $`F^{*}`$ for the original point set has depth $`cn`$. Then the depth of $`F^{*}`$ in $`S`$, and therefore also the depth of $`F`$ in $`S`$, must be at least $`(c-\epsilon )|S|`$. Therefore, the depth of $`F`$ in the original point set must be at least $`(c-2\epsilon )n`$. Since $`c\ge 1/R(d,k)`$, $`(c-2\epsilon )n\ge (1-\delta )cn`$.
Although our approximation algorithm takes only linear time, it is likely not practical due to its high constant factors. However, perhaps similar ideas can form the basis of a more practical random sampling based algorithm.
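To make the random-sampling idea concrete, here is a small planar ($`d=2`$, $`k=1`$) sketch, added here as an illustration with hypothetical function names. The depth of a candidate line is evaluated with the sorted-residuals characterization of regression depth (equivalent to the double-wedge definition in the plane; ties between residual signs are ignored for brevity), and candidate lines are drawn through pairs of sample points.

```python
import numpy as np

def regression_depth(a, b, x, y):
    """Depth of the line y = a*x + b against the points (x, y): the fewest
    points swept when tilting the line to vertical about any pivot."""
    order = np.argsort(x)
    r = np.sign(y[order] - (a * x[order] + b))
    pos_left = np.cumsum(r > 0)          # residuals > 0 up to each split
    neg_left = np.cumsum(r < 0)
    pos_tot, neg_tot = pos_left[-1], neg_left[-1]
    best = min(pos_tot, neg_tot)         # pivot outside the data range
    for i in range(len(r)):
        best = min(best,
                   pos_left[i] + neg_tot - neg_left[i],
                   neg_left[i] + pos_tot - pos_left[i])
    return best

def approx_deepest_line(x, y, sample_size=40, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x), size=min(sample_size, len(x)), replace=False)
    best_depth, best_fit = -1, None
    for s, i in enumerate(idx):
        for j in idx[s + 1:]:
            if x[i] == x[j]:
                continue
            a = (y[j] - y[i]) / (x[j] - x[i])
            b = y[i] - a * x[i]
            depth = regression_depth(a, b, x, y)
            if depth > best_depth:
                best_depth, best_fit = depth, (a, b)
    return best_fit, best_depth

# Toy data: a linear trend plus noise. The deepest line always has depth at
# least n/3 (the catline bound); this heuristic typically comes close to it.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = 2 * x + 1 + rng.normal(0, 1, 300)
print(approx_deepest_line(x, y))
```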
###### Open Problem 7
Improve the time bounds on finding an exact deepest $`k`$-flat. Is it any easier to find a $`k`$-flat with depth at least $`n/R(d,k)`$, that may not necessarily approximate the deepest flat?
### Acknowledgements
Work of Eppstein was done in part while visiting Xerox PARC. The authors would like to thank Peter Rousseeuw for helpful comments on a draft of this paper. |
no-problem/9912/quant-ph9912042.html | ar5iv | text | # Wave packet scattering from an attractive well
## 1 Introduction
One dimensional wave packet scattering off an attractive potential well was investigated in a previous work. In spite of being a thoroughly studied example of quantum scattering for plane wave stationary states, the effect found for packets was unknown at that time.
Packets that are initially narrower than the well width recede from the well in the form of a multiple peaked wave train. Packets that are wider than the well do not show this behavior: a smooth wave hump proceeds both forwards and backwards. Moreover, for narrow packets, the reflected waves dominate and scatter back from the interaction region with an average speed that is independent of the initial average speed of the packet, whereas the transmitted waves proceed in accordance with expectation.
Although wave packets seem to occupy a place of honor in the educational literature of quantum mechanics, they are virtually absent from the research literature. Only fairly recently, and mainly due to the arrival of Bose-trap methods, has there been a resurgence of interest in the subject. Conventional scattering processes are dealt with by using plane waves for the incoming flux of particles. The justification for the approach stems from the fact that accelerators generate beams of particles that are almost monochromatic in energy (momentum) and extremely spread in spatial extent. For such beams, the packets look more like a plane wave and may be treated theoretically using stationary states.
The size of packets in actual scattering reactions is in any case enormous as compared to the size of scatterers; therefore, the dependence on packet details is irrelevant.
Some exceptions apply, however, for atomic scattering processes, such as those investigated in chemical physics. The semiclassical approximation is used in the study of those processes to describe the actual motion of individual atoms.
Present day capabilities of accelerators preclude the production of particle beams of spatial extent smaller than the size of the scattering agents. Even optical pulses in the femtosecond range are still wider than the size of the atoms from which they scatter. It is nevertheless not totally unrealistic to expect that the situation may change in the near future. The import of the present paper reinforces the need to produce narrow packets and design suitable experiments.
The technique of cold Bose traps may serve as such a setup, because of the relative ease in handling beams of atoms at low energy and their subsequent scattering inside cavities taking the role of potentials. In such an experiment with a narrow bunch of atoms, the scattered atoms will proceed in a manner resembling the coherent light emerging from a laser.
The ALAS phenomenon in nuclear physics may also be related to the present findings, as described in section 4.
Polychotomous (multiple peak) waves are observed when a superintense laser field is focused on an atom. Ionization is hindered and the wave function is localized, in spite of the presence of the strong radiation field.
In section 2 we will summarize and expand the results of the previous work on the one-dimensional case. Section 3 will describe two-dimensional scattering. Some experiments are proposed in section 4.
## 2 One dimensional packet scattering off an attractive well
In a previous work, it was found that a multiple peak coherent wave train is reflected from an attractive well when the incoming packet is narrower than the well. These waves spend a large amount of time spreading out of the scattering region. The average speed of the reflected wave was found to be independent of the average energy of the packet.
The scattering was investigated in the framework of nonrelativistic potential scattering. Narrow wave packets have high frequency modes. One could suspect the approach of not being consistent, due to the emergence of relativistic corrections for these modes. However, we took precautions by choosing a very large mass as compared to the inverse of the packet spread. We took $`m=20`$, while the width of the packet was at least $`\mathrm{\Delta }=0.5`$. For such a large mass, the speed of propagation of the frequency modes at the edge of the spectrum of the packet is still small in value, less than $`v=1`$ in our units.
Moreover, as will be depicted below, we are looking at an effect that unfolds immediately after the packet starts to swell, and not at very long times for which one could doubt the validity of a nonrelativistic approach. The multiple peak behavior appears at times smaller than the spreading time of the packet. The relativistic corrections are then expected to be of lesser concern.
A correct treatment of high frequencies or high momenta demands a relativistic wave equation. Other wave equations, such as the Klein-Gordon equation or the Dirac equation, must be considered in order to assess the correctness of the above assumptions. In spite of the limitations of the Galilean invariant Schrödinger equation, it has proven quite successful in atomic, molecular and nuclear processes, even for time dependent reactions like the one dealt with presently. This is the reason we opted for the nonrelativistic potential scattering approach.
A narrow packet scatters backwards as a polychotomous wave train that is generated by the interference between the incoming wave and the reflected wave. For a narrow packet, the interference pattern is not blotted out as time passes. A very broad packet resembles more a plane wave; its spread in momenta is much smaller than that of a narrow packet. When the well reflects waves in the backward direction, they interfere destructively with the incoming broad packet, erasing the polychotomous behavior. A thin packet, having short wavelength components of the order of the well width (slit), produces a cleaner diffraction pattern. Constructive interference with the incoming packet allows the pattern to survive.
Quantum mechanics textbooks show pictures of the development of wave packet scattering from wells and barriers. Large oscillations of the wave function are seen when a packet is traversing a well. These oscillations are propagated only in the backward direction.
Only packets wide compared to the width of the well are shown in the literature. The effect of the width of the packet is not investigated. What was found in ref. and extended here is that the oscillatory behavior persists for narrow packets.
The question now arises as to the lifetime of the multiple peak structure. Does it eventually die out and the peaks merge? The answer to this question lies in the behavior of the wave function inside the well. We will show below that the scattering proceeds through a metastable, quasi-bound, state inside the well. This state does not decay exponentially, but polynomially in time. Differing from transient behavior such as that of diffraction in time, for which oscillations are set up by a shutter that is suddenly opened, the peak structure persists for very long times, thousands of times longer than the transit time of the packet through the interaction region. Instead of diffraction in time, we are witnessing a diffraction in space and time. From the numerical calculations, it appears that, as far as we can observe, the long term behavior is still multiple peaked. (This aspect will be addressed in a future work.)
We now proceed to review the results of the one dimensional case and add some further results. The next section will be devoted to the two-dimensional case.
In ref. we used a minimal uncertainty wave packet traveling from the left with an average speed $`v`$, initial location $`x_0`$, mass $`m`$, wave number $`q=mv`$ and width $`\mathrm{\Delta }`$,
$`\psi =Cexp\left(iq(x-x_0)-{\displaystyle \frac{(x-x_0)^2}{4\mathrm{\Delta }^2}}\right)`$ (1)
The attractive well was located around the origin, with depth $`V_0`$ and width parameter $`w`$. We used a Gaussian potential, but the results are not specific to this type of interaction.
$`V(x)=-V_0exp\left(-{\displaystyle \frac{x^2}{w^2}}\right)`$ (2)
The packet above contains only positive energy components and is therefore orthogonal to any bound state inside the well. Any such superposition will be hindered by factors of the form $`e^{-\kappa |x_0|}`$, where $`\kappa =\sqrt{2m|E_B|}`$, with $`|E_B|`$ the binding energy of the bound state. For initial locations $`x_0`$ far away from the well, this superposition vanishes. However, the initial packet is not orthogonal to metastable states or quasi-bound states at positive energies.
We solved the Schrödinger equation for the scattering event in coordinate space, taking care of unitarity. We used the method of Goldberg et al., which proved to be extremely robust and conserves the normalization of the wave function with an error of less than 0.01%, even after hundreds of thousands of time step iterations. We also verified that the solutions actually solve the equation with extreme accuracy by explicit substitution.
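For readers wishing to reproduce this behavior, the following is a minimal sketch of such a unitarity-preserving propagation (a Crank-Nicolson/Cayley scheme in the spirit of the Goldberg et al. method; the grid, time step, and starting position $`x_0=-10`$ are illustrative choices, not values quoted in the paper):

```python
import numpy as np
from scipy.linalg import solve_banded

# Physical parameters as quoted for Fig. 1 (hbar = 1); grid/step are assumed.
m, q, Delta, V0, w = 20.0, 1.0, 0.5, 1.0, 1.0
x = np.linspace(-80.0, 120.0, 2001)
dx, dt, x0 = x[1] - x[0], 0.1, -10.0

psi = np.exp(1j * q * (x - x0) - (x - x0) ** 2 / (4 * Delta ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)      # normalize the packet
V = -V0 * np.exp(-(x / w) ** 2)                    # attractive Gaussian well

# H = -(1/2m) d^2/dx^2 + V, discretized with a three-point Laplacian.
diag = 1.0 / (m * dx ** 2) + V
off = -1.0 / (2.0 * m * dx ** 2)

# Unitary Cayley form: (1 + i*dt*H/2) psi_new = (1 - i*dt*H/2) psi_old.
n = len(x)
ab = np.zeros((3, n), dtype=complex)               # banded LHS matrix
ab[0, 1:] = 0.5j * dt * off
ab[1, :] = 1.0 + 0.5j * dt * diag
ab[2, :-1] = 0.5j * dt * off

for step in range(2000):                           # evolve to t = 200
    rhs = (1.0 - 0.5j * dt * diag) * psi
    rhs[1:] += -0.5j * dt * off * psi[:-1]
    rhs[:-1] += -0.5j * dt * off * psi[1:]
    psi = solve_banded((1, 1), ab, rhs)

print("norm:", np.sum(np.abs(psi) ** 2) * dx)      # stays ~1 (unitary scheme)
```

The Cayley form makes the discrete evolution operator exactly unitary, which is why the norm can be conserved to such high accuracy over very long runs.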
Figure 1 shows a series of pictures of the evolution of a narrow wave packet. The impinging packet has a width of $`\mathrm{\Delta }=0.5`$, a momentum $`q=1`$, and the well width is $`w=1`$. We used a large mass $`m=20`$ in order to prevent the packet from spreading too fast and to be on the safe side regarding relativistic effects.
The figure shows how the multiple peak structure is produced early on. After $`t=200`$, the reflected wave train surpasses the incoming wave and proceeds to propagate independently of it. The effect persists for extremely long times. Figure 2 depicts the scattered waves after $`t=5000`$, a time long enough for the waves to scatter to a large distance (recall that the well width is $`w=1`$).
A polychotomous (multiple peak) wave recedes from the well. For low velocities, corresponding to average packet energies less than half the well depth, several peaks in the reflected wave show up. Simple inspection reveals that the distance between the peaks is constant. The wave train propagates with an amplitude of the form
$`C(x)e^{-\lambda |x|}sin^2(kx)`$ (3)
The exponential drop is characteristic of a virtual state solution inside the well. The parameters $`\lambda `$ and $`k`$ are independent of the initial velocity, but depend on time. The wave spreads and its amplitude diminishes, as expected. We corroborated that the polychotomous behavior continues for as long a time as we could check numerically.
Figure 3 shows the sequence of events for a wide packet.
The multiple peaks disappear completely for packets wider than the well. The long time behavior of the same case is depicted in figure 4.
An approximate expression for the average speed of the multiple peak reflected packet was found to be $`v=k(t_{formation})/m`$, where $`k`$ represents the wavenumber outside the well at the time the wave starts emerging from it after a long period of multiple reflections. This speed was found to be independent of the initial packet speed. The memory of the initial packet is deleted.
We investigated other types of potentials, such as a Lorentzian, a square well, etc., and found the same phenomena described here. Moreover, the effect is independent of the shape of the packet as long as it is narrower than the well width. We used square packets, Lorentzian packets, exponential packets, etc., with analogous results.
In order to find analytical support, we resorted to a square packet
$`\psi (x)=e^{iq(x-x_0)}\mathrm{\Theta }(d-|x-x_0|)`$ (4)
where $`d`$ is half the width of the packet, $`x_0`$ the initial position and $`q`$ the wave number. It impinges on a square well located at the origin, whose width is $`2a`$ and depth $`V_0`$
$`V(x)=-V_0\mathrm{\Theta }(a-|x|)`$ (5)
This case is solvable using the techniques of ref. The method is appropriate only for packets with sharp edges that terminate at a certain point. It consists of integrating the Fourier amplitude of the wave using a contour in the complex momentum plane that avoids the poles of the scattering matrix corresponding to the bound states. For each momentum, one uses the appropriate stationary scattering state for the square well. The integral reads
$`\psi (x,t)={\displaystyle \int _C}\varphi (x,p)a(p,q)dp`$ (6)
where $`C`$ is a contour that goes from $`-\mathrm{\infty }`$ to $`+\mathrm{\infty }`$ and circumvents the poles that lie on the imaginary axis at $`p<i\sqrt{2mV_0}`$ by closing it above them. $`\varphi (x,p)`$ is the stationary solution to the square well scattering problem for each $`p`$ and $`a(p,q)`$ is the Fourier transform amplitude for the initial wave function with average momentum $`q`$. The results are depicted in figure 5. The initial wave packet had average momentum $`q=1`$, width $`\mathrm{\Delta }=0.5`$, and the square well parameters were $`V_0=1,a=1`$. The reflected wave shows exactly the same polychotomous behavior as the numerical simulations. In particular, numerical calculations with a square packet and a square well match almost exactly the analytical results.
The polychotomous effect is general; even the packet amplitude becomes unimportant. The very existence of the effect does not depend on the initial position of the packet, as mentioned in passing in ref. Figure 6 shows one such case for the same parameters as those of figure 1, but an initial location of $`x_0=-50`$. The number of peaks has increased and the distance between them has shrunk. There appears a smooth background under the multiple peaks. The well reacts to the presence of the packet from far away. So, even if the packet is narrower than the well only far away from it, the polychotomous structure persists, despite the normal spreading that must occur until the center of the packet reaches the well, which, in the depicted case, would amount to many times the original width.
It was claimed above that the multiple peak effect was due to the interference between incoming and reflected waves. A sign of its persistence may be found in the behavior of the wave inside the well. The amplitude of the wave function at the origin as a function of time provides us with a suitable index to characterize it. The long time behavior of the decay of this amplitude is polynomial. Trial and error led us to a fit of the form $`|\psi (0)|=\frac{C}{t^{1.55}}`$, with $`C`$ a constant. Figure 7 shows this time dependence for the case of a narrow packet.
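Such an exponent can be extracted from the late-time amplitude with an elementary log-log fit; the following sketch (added here for illustration, with synthetic data standing in for the simulation output) recovers a prescribed power law of the quoted form:

```python
import numpy as np

def fit_power_law(t, amp):
    """Fit |psi(0,t)| ~ C / t**p by least squares in log-log coordinates."""
    slope, intercept = np.polyfit(np.log(t), np.log(amp), 1)
    return -slope, np.exp(intercept)    # (p, C)

# Synthetic stand-in for the measured amplitude at the well center:
t = np.linspace(100.0, 5000.0, 200)
amp = 2.0 / t ** 1.55
p, C = fit_power_law(t, amp)
print(f"p = {p:.3f}, C = {C:.3f}")      # -> p = 1.550, C = 2.000
```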
In the conventional manner of defining the lifetime for exponential decay, such as is done for virtual states, the metastable state inside the well would have an infinite lifetime. Although this is not a proof at all, the numerical results strongly suggest that, if not infinite, the lifetime of the metastable state is extremely large. Instead of being a mere transient effect that disappears after a time comparable to the transit time of the packet through the well, as other transients do (like the diffraction in time process), the diffraction pattern appears to persist. The metastable state inside the well is, in a way, a decaying trapped state.
## 3 Two dimensional wave packet scattering from an attractive well
The results of the previous section call for a more realistic calculation. As a step towards a full three dimensional calculation, we proceed here to describe the results of the two-dimensional case.
Consider two-dimensional scattering off a potential well as described by the time dependent Schrödinger equation
$$-\frac{1}{2m}\left(\frac{\partial ^2}{\partial r^2}+\frac{1}{r}\frac{\partial }{\partial r}+\frac{1}{r^2}\frac{\partial ^2}{\partial \varphi ^2}\right)\mathrm{\Phi }+V(r)\mathrm{\Phi }=i\frac{\partial \mathrm{\Phi }}{\partial t}$$
(7)
where $`\varphi `$ is the polar angle and $`r`$ the radial coordinate. Expanding in partial waves,
$$\mathrm{\Phi }(t,r,\varphi )=\underset{l=0}{\overset{l_{max}}{\sum }}e^{il\varphi }\varphi _l(r,t)$$
(8)
we obtain decoupled partial wave equations (the potential is assumed independent of $`\varphi `$).
$$-\frac{1}{2m}\left(\frac{\partial ^2}{\partial r^2}+\frac{1}{r}\frac{\partial }{\partial r}-\frac{l^2}{r^2}\right)\varphi _l+V(r)\varphi _l=i\frac{\partial \varphi _l}{\partial t}$$
(9)
A further simplification is achieved by the substitution $`\mathrm{\Phi }_l=\frac{\stackrel{~}{\mathrm{\Psi }}_l}{\sqrt{r}}`$. The potential acquires an extra term and the first derivative cancels out. Henceforth we work with the wave function $`\mathrm{\Psi }=\mathrm{\Phi }\sqrt{r}`$. This substitution also allows for a simple numerical treatment. For each partial wave we apply the method used in the one dimensional case.
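Spelling this step out (a short derivation added here for clarity; the algebra follows directly from Eq. (9)), the substitution removes the first derivative at the cost of shifting the centrifugal term, so each partial wave obeys a strictly one-dimensional equation with an effective potential:

$$-\frac{1}{2m}\frac{\partial ^2\mathrm{\Psi }_l}{\partial r^2}+\left[V(r)+\frac{l^2-\frac{1}{4}}{2mr^2}\right]\mathrm{\Psi }_l=i\frac{\partial \mathrm{\Psi }_l}{\partial t}$$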
We start the scattering event of a minimal uncertainty wave packet
$`\mathrm{\Psi }_0=C\sqrt{r}exp\left(iq(x-x_0)-{\displaystyle \frac{(x-x_0)^2+(y-y_0)^2}{4\mathrm{\Delta }^2}}\right)`$ (10)
at a distance large enough to be outside the range of the potential,
$`V(r)=-Aexp\left(-{\displaystyle \frac{r^2}{w^2}}\right)`$ (11)
for which we again use a Gaussian.
We present our results for different impact parameters $`y_0`$, for a packet traveling initially along the negative $`x`$ axis towards the well, with average speed $`v=\frac{q}{m}`$ as a function of angle and distance from the location of the potential.
Figures 8-10 show the scattered waves at angles of 180<sup>o</sup>, 90<sup>o</sup> and 0<sup>o</sup> respectively for initial momenta $`q=1,2,3`$ in inverse distance units. The initial center of the packet is at $`x_0=-10,y_0=0`$. The parameters of the well are $`w=2`$, $`V_0=1`$, and the width of the initial packet is $`\mathrm{\Delta }=0.5`$. Throughout the calculation we limited the number of partial waves to $`l_{max}=50`$. The accuracy in the expansion obtained with this limit was found to be better than 1%. For large impact parameters we increased the number of partial waves up to $`l_{max}=70`$. The wave functions are normalized to $`2\pi `$.
The figures show clearly that the same phenomenon found in the one dimensional case emerges in two dimensions. Even at small angles the effect persists, although the multiple peak structure is cleaner in the backward direction. Figure 11 shows the comparison between backward and forward scattering for impact parameter $`y_0=0`$. Large angle scattering shows up as an extremely important element.
Figures 12-13 show the behavior of the scattered wave at large and small angles for increasing impact parameter.
At backward angles, the impact parameter influences the shape of the pattern very little. The memory of the initial information concerning impact parameter (and even momentum, for moderate momenta as compared to the inverse of the well width) is erased.
We can visualize the existence of a quasi-bound state inside the well by selecting the region around the origin and plotting real and imaginary parts of the wave function. Figures 14 and 15 show these waves for masses $`m=20,5`$ for backward angles. Here we depicted $`\mathrm{\Phi }`$ instead of $`\mathrm{\Psi }`$.
As for the one dimensional case, we see a sinusoidal behavior. For $`m=20`$ it is seen that the well width accommodates approximately two wavelengths, while for $`m=5`$ one wavelength fits in. (Recall that the well does not have a sharp edge.) From the figures we can read off the value of the wavenumber inside the well, namely $`k^{\prime }=\sqrt{k^2+2m|V_0|}`$, to be $`k^{\prime }=\frac{2\pi n}{2w}`$. For $`m=20`$ we find $`n=2`$, and for $`m=5`$ we find $`n=1`$. Inserting the values of the mass and the depth of the potential $`V_0`$, we obtain $`k\approx 0`$.
The two-dimensional case resembles remarkably the one dimensional scattering for packets that are initially narrower than the well. This is also true for the scattering of wide packets. Figures 16-18 show that all the polychotomous wave trains disappear when the initial packet is wider than the well. The multiple peaks are now absent. Forward and backward scattering now look quite similar.
The effect depends on the existence of a quasi-bound state inside the well. Shallow potentials cannot sustain the metastable states. Therefore, the polychotomous behavior should gradually disappear when the depth of the well is diminished. Figure 19 shows one such case for a well amplitude of $`A=0.03`$.
## 4 Suggested experiments
We have found that the polychotomous coherent effect of ref. persists in two dimensions, and presumably this will be true in a full three dimensional calculation.
Experimental work may take advantage of these findings and design setups to research the phenomena described here. We mentioned in section 1 the possibility of using cavity experiments with atomic beams. The assessment of the feasibility of such experiments is, however, beyond the expertise of the author, although it appears a tangible option.
Experiments in optics, sound propagation, or microwaves might also be appropriate in order to find the polychotomous behavior.
Another alternative that seems viable consists of an experiment related to those known under the title of ALAS.
ALAS stands for anomalous large angle scattering. It occurs for $`\alpha `$ scattering on certain closed shell nuclei for incident energies below 100 MeV. The backward scattering is so pronounced that it can exceed the Rutherford cross section by several orders of magnitude. Although many explanations based on optical models have been provided over the years for this process, it remains rather obscure. A possible interpretation based on the present results would be that the $`\alpha `$ particle is a system of finite extent with dimensions smaller than the nuclear well. Considered as a wave packet, it could form a metastable state inside the well in a similar manner to the packets dealt with presently. The large backward scattering is then a reflection of the behavior found here for a finite size packet. A clear imprint of the effect would, however, require the detection of the $`\alpha `$ particles as a function of time in order to observe the oscillatory amplitudes that dominate at large scattering angles. Data acquisition in nuclear (and other) experiments generally averages over time variations, except for coincidence experiments. The multiple peak behavior demands a continuous time dependent recording of the alpha particles, triggered by the bunches emitted from the accelerating machine. If experimental support is indeed gathered, then the effect can be turned around to become a research tool, due to its dependence on the geometrical and dynamical parameters of both projectiles and target. A firm theoretical connection to the ALAS effect requires, eventually, much more laborious theoretical and numerical work than the one carried out here. Efforts in that direction are currently underway.
Acknowledgements
This work was supported in part by the Department of Energy under grant DE-FG03-93ER40773 and by the National Science Foundation under grant PHY-9413872, while the author was on sabbatical at the Cyclotron Institute of Texas A&M University. It is a pleasure to thank Prof. Youssuf El Masri of the UCL, University of Louvain-la-Neuve, Belgium, for the information concerning the ALAS effect.
no-problem/9912/cond-mat9912320.html | ar5iv | text | # One-dimensional metallic behavior of the stripe phase in La2-xSrxCuO4
## Abstract
Using an exact diagonalization method within the dynamical mean-field theory we study stripe phases in the two-dimensional Hubbard model. We find a crossover at doping $`\delta \simeq 0.05`$ from diagonal stripes to vertical site-centered stripes with populated domain walls, stable in a broad range of doping, $`0.05<\delta <0.17`$. The calculated chemical potential shift $`\mathrm{\Delta }\mu \propto -\delta ^2`$ and the doping dependence of the magnetic incommensurability are in quantitative agreement with the experimental results for doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. The electronic structure shows one-dimensional metallic behavior along the domain walls, and explains the suppression of spectral weight along the Brillouin zone diagonal.
It is commonly believed that the understanding of normal state properties of high-temperature superconducting cuprates (HTSC) will provide important clues for the understanding of superconductivity itself. The undoped compounds, La<sub>2</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6</sub>, are insulators and exhibit long-range antiferromagnetic (AF) order, which is rapidly destroyed and replaced by short-range AF correlations as holes are doped into the CuO<sub>2</sub> planes. Thus, one of the most important features is the nature of the interplay between the AF spin fluctuations and superconductivity. Incommensurate charge and spin order, discovered first in La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub>, suggests that the strong competition between hole propagation and AF order in the CuO<sub>2</sub> planes leads to segregation of holes in regions without AF order. These regions form one-dimensional (1D) substructures, so-called stripes, which act as domain walls. The essentially identical momentum dependence of the magnetic scattering in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> provides evidence for the stripe phases in this class of materials.
If indeed realized in a broad range of doping, the stripe phase should have measurable consequences. Studies of La<sub>2-x-y</sub>Nd<sub>y</sub>Sr<sub>x</sub>CuO<sub>4</sub> showed a chemical potential shift in underdoped and overdoped cuprates responsible for the breakdown of the Fermi liquid picture, a pseudogap which opens at the Fermi level, and a real gap for charge excitations in the electronic structure around momentum $`(\pi /2,\pi /2)`$. It may be expected that such puzzling features follow from strong Coulomb interactions at Cu ions, and it would be interesting to investigate whether they are fingerprints of a stripe phase and could be reproduced by considering a generic model for the HTSC, the two-dimensional (2D) Hubbard model.
Stripe phases were first found in the Hartree-Fock (HF) approximation, with empty (filled by holes) domain walls in an insulating ground state. In contrast, the calculations which include electron correlations indicate that the ground state of an AF system with strong short-range Coulomb repulsion is a stripe phase with populated domain walls at low doping. Hence a partially filled band might be expected, and charge transport along the walls becomes possible. These results clearly emphasize the need for a reliable and controlled approximation scheme in order to study the physics of stripe phases.
In this Letter we present an exact solution of the dynamical mean-field theory (DMFT) equations for the stripe phase of the 2D Hubbard model. The DMFT approach allows one to treat the hole correlations in a non-perturbative way using a local selfenergy. Recently we have shown that within DMFT one obtains the correct dispersion and spectral weights of quasiparticle (QP) states in the Hubbard model at half-filling ($`n=1`$).
Here we investigate long-range stripe order in the 2D Hubbard model at zero temperature. The square lattice is thereby covered by $`N`$ supercells containing $`L`$ sites each, and the ground state energy of the Hamiltonian,
$$H=-\underset{mi,nj,\sigma }{\sum }t_{mi,nj}a_{mi\sigma }^{\dagger }a_{nj\sigma }+U\underset{mi}{\sum }n_{mi\uparrow }n_{mi\downarrow },$$
(1)
has been determined using this constraint. Positions $`\mathbf{R}_{mi}\equiv \mathbf{R}_m+\mathbf{r}_i`$ of nonequivalent sites $`i=1,\dots ,L`$ within the unit cell $`m`$ are labelled by a pair of indices $`\{mi\}`$. We focus on the generic behavior of the stripe phase and thus restrict the hopping term to nearest-neighbors $`\{mi\}`$ and $`\{nj\}`$ only, $`t_{mi,nj}=t`$, and take a uniform on-site Coulomb interaction $`U`$. The one-particle Green’s function in the stripe phase is given by an $`(L\times L)`$ matrix, $`G_{ij\sigma }(\mathbf{k},i\omega _\nu )`$, on the imaginary energy axis $`\omega _\nu =(2\nu +1)\pi T`$ with fictitious temperature $`T`$. It contains a site- and spin-dependent local selfenergy,
$$G_{ij\sigma }^{-1}(\mathbf{k},i\omega _\nu )=(i\omega _\nu +\mu )\delta _{ij}-h_{ij}(\mathbf{k})-\mathrm{\Sigma }_{ii\sigma }(i\omega _\nu )\delta _{ij},$$
(2)
where $`\mu `$ is the chemical potential, and $`h_{ij}(\mathbf{k})`$ is an $`(L\times L)`$ matrix which describes the kinetic energy, $`h_{ij}(\mathbf{k})=\sum _n\mathrm{exp}(i\mathbf{k}(\mathbf{R}_{0i}-\mathbf{R}_{nj}))t_{0i,nj}`$. The local Green’s functions for each nonequivalent site $`i`$ are calculated from the diagonal elements of the Green’s function matrix (2), $`G_{ii\sigma }(i\omega _\nu )=N^{-1}\sum _\mathbf{k}G_{ii\sigma }(\mathbf{k},i\omega _\nu )`$. Self-consistency of site $`i`$ with its effective medium requires,
$$\mathcal{G}_{ii\sigma }^0(i\omega _\nu )^{-1}=G_{ii\sigma }^{-1}(i\omega _\nu )+\mathrm{\Sigma }_{ii\sigma }(i\omega _\nu ),$$
(3)
similar to the situation in thin films.
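As an aside for readers implementing Eq. (2), the kinetic matrix $`h_{ij}(\mathbf{k})`$ is straightforward to assemble. The sketch below (an illustration with hypothetical function names, written in a gauge where the Bloch phase is attached to the supercell-wrapping bond, which is unitarily equivalent to the site-resolved phases of Eq. (2)) builds it for an $`L`$-site supercell oriented along $`x`$ and checks that $`L=1`$ recovers the familiar square-lattice band $`-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)`$.

```python
import numpy as np

def h_k(kx, ky, L, t=1.0):
    """Kinetic (L x L) matrix for an L-site supercell along x, period 1 along y."""
    h = np.zeros((L, L), dtype=complex)
    for i in range(L):
        h[i, i] += -2.0 * t * np.cos(ky)      # nearest-neighbor hops along y
    for i in range(L - 1):
        h[i, i + 1] += -t                     # hops along x inside the cell
        h[i + 1, i] += -t
    h[L - 1, 0] += -t * np.exp(1j * kx * L)   # wrap-around hop to the next cell
    h[0, L - 1] += -t * np.exp(-1j * kx * L)
    return h

# Check: L = 1 gives the usual tight-binding dispersion.
kx, ky = 0.3, 1.1
eps = np.linalg.eigvalsh(h_k(kx, ky, 1))[0]
print(np.isclose(eps, -2 * (np.cos(kx) + np.cos(ky))))   # True
```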
For the solution of the effective impurity model with hybridization parameters, $`V_{i\sigma }(k)`$, as well as diagonal energies, $`\epsilon _{i\sigma }(k)`$, for each non-equivalent site $`i`$ in the stripe supercell we employed the exact diagonalization method of Caffarel and Krauth. By fitting $`\mathcal{G}_{ii\sigma }^0(i\omega _\nu )`$ on the imaginary energy axis,
$$\mathcal{G}_{ii\sigma }^0(i\omega _\nu )^{-1}=i\omega _\nu +\mu -\underset{k=2}{\overset{n_s}{\sum }}\frac{V_{i\sigma }^2(k)}{i\omega _\nu -\epsilon _{i\sigma }(k)},$$
(4)
the parameters of an effective DMFT-impurity cluster with $`n_s`$ sites are obtained. After solution of the effective cluster problem with the Lanczos algorithm ($`n_s\le 8`$), the local Green’s function $`G_{ii\sigma }(i\omega _\nu )`$ and local electron densities $`n_{i\sigma }`$ were determined. The self-consistency is implemented by extracting the new selfenergy for the next DMFT-iteration from Eq. (3). Finally, the Green functions (2) serve to determine the spectral function,
$`A(\mathbf{k},\omega )=-{\displaystyle \frac{1}{\pi }}{\displaystyle \frac{1}{LN}}\mathrm{Im}{\displaystyle \underset{mi,nj,\sigma }{\sum }}e^{i\mathbf{k}(\mathbf{R}_{mi}-\mathbf{R}_{nj})}G_{mi,nj,\sigma }(\omega ).`$
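To make the fitting step in Eq. (4) concrete, the following sketch (added here for illustration; the target function and all parameter values are synthetic stand-ins, not data from this work) determines bath parameters $`V(k)`$ and $`\epsilon (k)`$ by weighted least squares on the Matsubara axis, as in the Caffarel-Krauth construction:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit n_bath = n_s - 1 bath levels to a target Weiss field G0^{-1}(i w_nu).
T, n_freq, n_bath, mu = 0.02, 64, 5, 0.0
w = (2 * np.arange(n_freq) + 1) * np.pi * T        # Matsubara frequencies

def weiss_inv(V, eps):
    hyb = np.sum(V[:, None] ** 2 / (1j * w[None, :] - eps[:, None]), axis=0)
    return 1j * w + mu - hyb                        # rhs of Eq. (4)

# Synthetic target: a denser "true" bath plays the role of the lattice input.
target = weiss_inv(np.full(12, 0.2), np.linspace(-1.5, 1.5, 12))

def residual(p):
    diff = (weiss_inv(p[:n_bath], p[n_bath:]) - target) / w  # weight low w
    return np.concatenate([diff.real, diff.imag])

p0 = np.concatenate([0.3 * np.ones(n_bath), np.linspace(-1.0, 1.0, n_bath)])
fit = least_squares(residual, p0)
V_fit, eps_fit = fit.x[:n_bath], fit.x[n_bath:]
print("V   :", np.round(V_fit, 3))
print("eps :", np.round(eps_fit, 3))
print("max misfit:", np.abs(weiss_inv(V_fit, eps_fit) - target).max())
```

In the actual DMFT loop the target would come from Eq. (3) at each iteration, one such fit being performed per nonequivalent site and spin.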
Below we summarize the results obtained for $`U=12t`$, a value representative for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, and for a broad range of hole doping ($`\delta =1-n`$), $`0.03<\delta <0.2`$, where we found that the ground state contains populated domain walls. First, for $`0.03<\delta \le 0.05`$, diagonal stripe supercells made out of pieces of site-centered vertical domain walls are stabilized by a (weak) charge density wave (CDW) superimposed with a spin density wave (SDW) along the wall. The SDW domain wall unit cells consist of four sites, $`|0|\uparrow |0|\downarrow |`$. For $`\delta =0.05`$ we found somewhat lower electron densities at nonmagnetic ($`|0\rangle `$) sites ($`n_0\simeq 0.848`$) than the densities at magnetic ($`|\sigma \rangle `$) sites ($`n_m\simeq 0.860`$), with a magnetic moment of $`m\simeq 0.334`$. The SDW unit cell is stabilized by only $`0.584`$ holes, which indicates that such states are precursors of the undoped AF Mott insulator.
In agreement with neutron scattering experiments, we found a particular stability of the vertical stripes with populated domain walls. As the most robust structure of the doped systems with $`0.05<\delta <0.17`$ we identified the site-centered stripe phase (Fig. 1). This phase is stabilized by electron correlations and by the kinetic energy gains on the populated domain walls, which destabilize an SDW along the walls in this doping regime. At higher doping $`\delta >0.17`$ the bond-centered stripe phase of White and Scalapino is energetically favored, kinks and antikinks along the domain walls occur, and the stripe structure gradually melts.
The size of AF domains in the site-centered stripe phase shrinks with increasing doping for $`\delta \le 1/8`$ (Fig. 1). For example, the charge unit cell contains eight (four) sites at doping $`\delta =1/16`$ ($`\delta =1/8`$), while the electron density on the sites of the walls is almost constant, being $`n_i\simeq 0.850`$ and $`n_i\simeq 0.830`$, respectively. In agreement with the results of the numerical density matrix renormalization group, the self-consistent DMFT densities in the stripe unit cell are characterized by much smoother variations than in the corresponding HF states. Beyond $`\delta =1/8`$ we find a lock-in effect of the same structure with a charge (magnetic) unit cell consisting of four (eight) sites, and the doped hole density, $`n_h(l_x)=1-\left(n_{(l_x,0),\uparrow }+n_{(l_x,0),\downarrow }\right)`$, increasing faster within the AF domains than on the wall sites (Fig. 1). The magnetic domain structure is best described by the modulated magnetization density, $`S_\pi (l_x)=L_y^{-1}\sum _{l_y}(-1)^{l_x+l_y}\frac{1}{2}\left[n_{(l_x,l_y),\uparrow }-n_{(l_x,l_y),\downarrow }\right]`$, projected on the direction perpendicular to the wall.
The stability of the above stripe phases is investigated by the ground state energy normalized per density of doped holes, $`E_h=[E_0(\delta )-E_0(0)]/\delta `$, where $`E_0(\delta )`$ is the ground state energy at doping $`\delta `$. The energy $`E_h/t`$ is a monotonically increasing function of doping (Fig. 2(a)), showing that the different stripe phases are stable against macroscopic phase separation. From our results we conclude that short-range Coulomb repulsion suffices to obtain populated domain walls over a wide range of doping. The diagonal stripes, stable at low doping $`\delta <0.06`$, are followed by vertical site-centered stripes which compete with bond-centered stripes, both having a considerably larger energy gain than homogeneous phases, such as spin spirals. We have verified that the kinetic energy is gained mainly on the sites which belong to the domain walls. Such energy gains are larger at the nonmagnetic domain walls (Fig. 2(a)) than on the magnetic sites of bond-centered domain walls. At the same time, the small energy difference between these two quite different states indicates a strong tendency towards transverse stripe fluctuations, which might enhance superconducting correlations in the ground state.
We have found that the chemical potential shifts downwards with hole doping, $`\mathrm{\Delta }\mu \propto -\delta ^2`$ (Fig. 2(b)), in agreement with the experimental results of Ino et al., and with the Monte-Carlo simulation of a 2D Hubbard model. Therefore, the charge susceptibility is enhanced towards $`\delta \to 0`$, reproducing a universal property of the Mott-Hubbard metal-insulator transition.
As both the bond-centered and site-centered stripe phases have the same size of the magnetic unit cell, they give the same pattern in neutron scattering and are thus indistinguishable experimentally. The neutron scattering structure factor $`S(\mathbf{Q})`$ in the stripe phase has the maxima shifted away from the $`M=(\pi ,\pi )`$ point to $`\mathbf{Q}=[(1\pm 2\eta _{\mathrm{vert}})\pi ,\pi ]`$ points for the structures of Fig. 1 \[and to $`\mathbf{Q}=[\pi ,(1\pm 2\eta _{\mathrm{vert}})\pi ]`$ for equivalent horizontal stripes\]. The present calculations give a linear dependence $`\eta _{\mathrm{vert}}=\delta `$ for $`\delta \le 1/8`$ and $`\eta _{\mathrm{vert}}=1/8`$ for $`\delta >1/8`$ (Fig. 2(c)). Such a behavior was observed by Yamada et al., and indicates a unique stability of populated domain walls in the stripe phase. The correlations included within the DMFT and its capability to describe the Mott-Hubbard metal-insulator transition play thereby a crucial role, as other filling and periodicity of the stripe phase are found in HF calculations. Also the points found at low doping (Fig. 2(c)) corresponding to the diagonal stripe structures $`\mathbf{Q}=[(1\pm 2\eta _{\mathrm{diag}})\pi ,(1\pm 2\eta _{\mathrm{diag}})\pi ]`$ agree perfectly well with the recent neutron experiments of Wakimoto et al. We find $`\eta _{\mathrm{diag}}\simeq \delta /\sqrt{2}`$, where the factor $`1/\sqrt{2}`$ is due to the rhombic lattice constant in diagonal stripe structures as suggested by experiment. This results in the relation $`\eta _{\mathrm{diag}}=\eta _{\mathrm{vert}}/\sqrt{2}`$, and the linear dependence $`\eta _{\mathrm{vert}}\simeq \delta `$ holds also for $`\delta <0.06`$ (Fig. 2(c)). Furthermore, our calculations predict vertical SDW domain wall unit cells in the diagonal stripe phase. Thus additional elastic magnetic superlattice peaks should be visible in neutron scattering experiments around $`\mathbf{Q}=[(1+2\eta _{\mathrm{diag}})\pi /2,(1+2\eta _{\mathrm{diag}})\pi ]`$ with a weight smaller by a factor $`3.7`$, if such phases do exist in heavily underdoped La<sub>2-x-y</sub>Nd<sub>y</sub>Sr<sub>x</sub>CuO<sub>4</sub>.
Let us focus on the spectral function $`A(\mathbf{k},\omega )`$ of the stripe phases shown in Fig. 1. The photoemission ($`\omega <\mu `$) spectra consist of the lower Hubbard band at an energy $`\omega -\mu \simeq -4.8t`$ and low-energy states, well separated from the Hubbard band, extending over an energy range of $`2t`$. The QP states known from the dispersion of a single hole in the $`t`$-$`J`$ model survive also in the stripe phase up to $`\delta =0.15`$ and are characterized by a considerable spectral weight and a bandwidth $`2J`$ (here $`J/t=4t/U=1/3`$) (Figs. 3 and 4). Due to the stripe structure we find that the directions $`\mathrm{\Gamma }X`$ \[$`X=(\pi ,0)`$\] and $`\mathrm{\Gamma }Y`$ \[$`Y=(0,\pi )`$\] are nonequivalent.
At $`\delta =1/12`$ we observe a pseudogap at the $`X`$ point (Fig. 3). The QP weight there is composed of QP states originating from the dressing of a moving hole by quantum fluctuations in an AF background, and of localized states from the 1D electronic structure of the site-centered stripe phase. This superposition of QP weight explains the flat band around the $`X`$ point and the Fermi level crossing at $`(\pi ,\pi /4)`$ observed in recent angle-resolved photoemission (ARPES) experiments. On the contrary, the QPs originating from the 1D features of the stripe phase are not seen in ARPES around the $`Y`$ point, as the structure factor vanishes, and one only resolves the spin-polaron QP with dispersion $`2J`$.
As the most spectacular result, a gap for charge excitations opens in the underdoped regime at the Fermi energy around $`S=(\pi /2,\pi /2)`$ (Fig. 3). Little or no spectral weight is found at momenta $`\mathbf{k}=(\pi /4,\pi /4)`$ and $`\mathbf{k}=(0,\pi /4)`$ where, notably, the 1D electronic structure should show Fermi level crossings. This behavior agrees quantitatively with the ARPES measurements on La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and La<sub>1.28</sub>Nd<sub>0.6</sub>Sr<sub>0.12</sub>CuO<sub>4</sub>.
In order to understand the gap structure we calculated the electronic structure of $`H=-t\sum _{\langle ij\rangle ,\sigma }a_{i\sigma }^{\dagger }a_{j\sigma }+\sum _iU_iS_i^z`$, where $`U_i`$ are site-dependent spin potentials in the stripe supercell. Local energy contributions $`V_i\equiv U_i|S_i^z|`$ are treated as parameters, where $`V_0=0`$ on the domain wall. We find a condition for vanishing photoemission weight at momentum $`\mathbf{k}=(\pi /4,\pi /4)`$, $`V_{0+2l_x}\simeq 2V_{0+l_x}`$, which is satisfied by the present self-consistent DMFT spin densities, with $`V_{0+2l_x}\simeq 2.07t`$ and $`V_{0+l_x}\simeq 0.99t`$ (Fig. 1). The strong renormalization of $`V_i`$ in DMFT is due to charge fluctuations and demonstrates that local correlations play a crucial role in understanding the ARPES spectra of HTSC.
The ARPES spectral weight at $`\delta =1/12`$ around $`\omega -\mu \simeq -2t`$ entirely originates from bands of the stripe supercell (Fig. 3). As expected, spectral weight is transferred with increasing doping from that energy region to the inverse photoemission region, $`\omega >\mu `$ (Fig. 4). Finally, we observed that the gaps at the $`Y`$ and $`S`$ point are gradually filled by spectral weight as the stripe order melts.
Summarizing, we have shown that vertical stripe phases with populated domain walls are robust structures in a broad range of $`\delta `$. Their spectral properties show an interesting superposition of the QPs known from doped 2D antiferromagnets with a 1D metallic behavior. Such experimental features as: (i) the chemical potential shift $`\mathrm{\Delta }\mu \propto -\delta ^2`$, (ii) the incommensurability of spin fluctuations, and (iii) the gradual disappearance of the photoemission flat band and the pseudogap (gap) at the $`X`$ ($`S`$) point, find a natural explanation and accompany a gradual crossover from the stripe phases into a (strongly correlated) Fermi liquid with increasing doping.
We thank O. K. Andersen, B. Keimer, T. M. Rice, and J. Zaanen for stimulating discussions. A.M.O. acknowledges the support by the Committee of Scientific Research (KBN) of Poland, Project No. 2 P03B 175 14. |
no-problem/9912/gr-qc9912031.html | ar5iv | text | # Gravitational waves from inspiral into massive black holes
## Abstract
Space-based gravitational-wave interferometers such as LISA will be sensitive to the inspiral of stellar mass compact objects into black holes with masses in the range of roughly $`10^5`$ solar masses to a few times $`10^7`$ solar masses. During the last year of inspiral, the compact body spends several hundred thousand orbits spiraling from several Schwarzschild radii to the last stable orbit. The gravitational waves emitted from these orbits probe the strong-field region of the black hole spacetime and can make possible high precision tests and measurements of the black hole’s properties. Measuring such waves will require a good theoretical understanding of the waves’ properties, which in turn requires a good understanding of strong-field radiation reaction and of properties of the black hole’s astrophysical environment which could complicate waveform generation. In these proceedings, I review estimates of the rate at which such inspirals occur in the universe, and discuss what is being done and what must be done further in order to calculate the inspiral waveform.
One of the most exciting sources that should be measured by the space-based gravitational-wave interferometer LISA (the Laser Interferometer Space Antenna sah:lisa ) is the inspiral of a “small” ($`1-10M_{\odot }`$) compact body into a massive ($`10^5-10^7M_{\odot }`$) black hole. Such massive black holes reside at the cores of galaxies; the smaller compact bodies will become bound to the hole and spiral into it after undergoing interactions with stars and other objects in the environment of the galactic center. The measurement of gravitational waves from such inspirals will make possible very high precision tests of general relativity, and probe the nature of the galactic core’s environment.
To set the stage for understanding why these inspirals are such interesting objects, consider the following estimates: the orbital energy of a small body in an equatorial, prograde orbit of a Kerr black hole is
$$E^{\mathrm{orb}}=\mu \frac{1-2v^2+qv^3}{\sqrt{1-3v^2+2qv^3}},$$
(1)
where $`v\equiv \sqrt{M/r}`$ and $`q\equiv a/M`$. (I use units in which $`G=c=1`$ throughout.) The orbital frequency of the small body is
$$\mathrm{\Omega }=\frac{M^{1/2}}{r^{3/2}+aM^{1/2}}.$$
(2)
Radiation reaction carries orbital energy away from the system, causing the orbit to shrink. Eventually it shrinks enough that the body reaches the innermost stable circular orbit (ISCO). Orbits inside this radius are dynamically unstable; further radiative evolution tends to push the body into the black hole.
Post-Newtonian theory allows us to estimate the rate at which the system loses energy as a power series in the quantity $`u\equiv (M\mathrm{\Omega })^{1/3}`$ (which is roughly the orbital speed) and the black hole’s spin $`a`$. Reference sah:minoetal97 gives the energy loss in such a post-Newtonian expansion:
$$\frac{dE}{dt}=\frac{32}{5}\left(\frac{\mu }{M}\right)^2u^{10}\left[f_{\mathrm{Schw}.}(u)+f_{\mathrm{spin}}(a,u)\right].$$
(3)
The prefactor $`32/5(\mu /M)^2u^{10}`$ is the result one gets applying the quadrupole formula to a binary described with Newtonian gravity; the function $`f_{\mathrm{Schw}.}(u)`$ is a (rather high-order) correction appropriate for zero-spin black holes, and $`f_{\mathrm{spin}}(a,u)`$ is a correction incorporating information about the hole’s spin. (Note that this formula is only appropriate for $`\mu \ll M`$: it does not incorporate any finite mass ratio corrections.)
Equations (1)-(3) can be used to estimate the time it takes for a small body to spiral from radius $`r_1`$ to $`r_2`$, and the number of gravitational-wave cycles it emits in that time:
$`T`$ $`=`$ $`{\displaystyle \int _{t_1}^{t_2}}dt={\displaystyle \int _{r_1}^{r_2}}{\displaystyle \frac{dt}{dr}}dr={\displaystyle \int _{r_1}^{r_2}}{\displaystyle \frac{dE/dr}{dE/dt}}dr,`$ (4)
$`N_{\mathrm{cyc}}`$ $`=`$ $`{\displaystyle \int _{t_1}^{t_2}}f_{\mathrm{gw}}dt={\displaystyle \int _{r_1}^{r_2}}{\displaystyle \frac{\mathrm{\Omega }}{\pi }}{\displaystyle \frac{dt}{dr}}dr={\displaystyle \int _{r_1}^{r_2}}{\displaystyle \frac{\mathrm{\Omega }}{\pi }}{\displaystyle \frac{dE/dr}{dE/dt}}dr.`$ (5)
(On the last line, I have assumed that the bulk of the radiation comes out in the quadrupole $`m=2`$ mode, so that $`f_{\mathrm{gw}}=2\mathrm{\Omega }/2\pi `$.) Consider now the inspiral of a $`5M_{\odot }`$ body into a rapidly spinning ($`a\simeq M`$) $`10^6M_{\odot }`$ black hole. The small body spirals from a radius of $`8M`$ (in Boyer-Lindquist coordinates) to the ISCO in one year, emitting around $`5\times 10^5`$ gravitational-wave cycles as it does so. The gravitational waves that it emits lie in the frequency band $`3\times 10^{-3}\mathrm{Hz}\lesssim f\lesssim 3\times 10^{-2}\mathrm{Hz}`$, the band to which LISA is most sensitive. Indeed, careful analyses of the detectability of the signal by LISA sah:finnthorne indicate that such an inspiral should be detected out to a distance of roughly 1 Gigaparsec with an amplitude signal-to-noise ratio of around $`10`$ to $`100`$ (depending on factors such as the mass of the small body, the mass of the black hole, and the black hole’s spin). The fact that such a large number of cycles are emitted indicates that details of the waveform can in principle be measured to very high precision.
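These numbers are easy to reproduce at leading order. The sketch below (an illustration added here, not a calculation from the original text) integrates Eqs. (4) and (5) using Eqs. (1) and (2) and only the quadrupole prefactor of Eq. (3), i.e. it sets $`f_{\mathrm{Schw}.}=1`$ and $`f_{\mathrm{spin}}=0`$, so it reproduces the quoted figures only in order of magnitude:

```python
import numpy as np

def inspiral_estimates(mu_over_M=5e-6, q=0.998, r1=8.0, n=200000):
    """Leading-order inspiral time (in units of M) and GW cycles from r1 to
    the prograde ISCO, for spin q = a/M and mass ratio mu_over_M; G = c = 1."""
    # Prograde ISCO radius (Bardeen, Press & Teukolsky 1972):
    z1 = 1 + (1 - q**2) ** (1/3) * ((1 + q) ** (1/3) + (1 - q) ** (1/3))
    z2 = np.sqrt(3 * q**2 + z1**2)
    r_isco = 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))
    r = np.linspace(r1, r_isco, n)
    v = np.sqrt(1.0 / r)
    E = (1 - 2 * v**2 + q * v**3) / np.sqrt(1 - 3 * v**2 + 2 * q * v**3)  # Eq. (1), per mu
    Omega = 1.0 / (r**1.5 + q)                                            # Eq. (2)
    dEdt = -(32.0 / 5.0) * mu_over_M * Omega ** (10.0 / 3.0)              # flux per mu
    dEdr = np.gradient(E, r)
    dt_dr = dEdr / dEdt
    T = np.sum(0.5 * (dt_dr[1:] + dt_dr[:-1]) * np.diff(r))               # Eq. (4)
    fi = (Omega / np.pi) * dt_dr
    N = np.sum(0.5 * (fi[1:] + fi[:-1]) * np.diff(r))                     # Eq. (5)
    return T, N, r_isco

T, N, r_isco = inspiral_estimates()
M_in_sec = 4.93e-6 * 1e6      # one solar mass is ~4.93 microseconds; M = 1e6 Msun
print(f"r_isco = {r_isco:.2f} M, T = {T * M_in_sec / 3.15e7:.1f} yr, N = {N:.1e}")
```

The higher-order factors in Eq. (3) are what bring this rough estimate in line with the one-year, $`5\times 10^5`$-cycle figures quoted above.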
The high precision tests that LISA will be able to make should allow us to directly map the characteristics of the massive body's spacetime metric and confirm that it is in fact the Kerr metric. Most likely, this will be done by measuring the multipole moments of the massive body. Fintan Ryan sah:ryan has shown that the gravitational waves which are emitted as a small body spirals into a massive compact object contain a "map" of the massive body's spacetime. By measuring the gravitational waves and decoding the map, one learns the mass and current multipole moments which characterize the massive body. All multipole moments of Kerr black holes are parameterized by the holes' mass $`M`$ and spin $`a`$:
$$M_l+iS_l=M(ia)^l.$$
(6)
For Kerr black holes, knowledge of the moments $`M_0\equiv M`$ and $`S_1\equiv aM`$ determines all higher moments. This is one way of stating the "no-hair" theorem: The macroscopic properties of a black hole are entirely determined by its mass and spin. (I neglect the astrophysically uninteresting possibility of charged holes.) By measuring gravitational waves from extreme mass ratio inspiral and thereby mapping the massive body's spacetime, LISA will test the no-hair theorem for black holes, determining whether the massive body has multipole moments characteristic of the Kerr metric, or whether the body is something more exotic, such as a boson star.
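Equation (6) is trivial to tabulate; a minimal illustration, with all quantities in units of $`M`$:

```python
def kerr_moments(M, a, lmax=4):
    """Mass (real part) and current (imaginary part) moments from Eq. (6)."""
    return [M * (1j * a) ** l for l in range(lmax + 1)]

# l = 0 returns M_0 = M; l = 1 returns i*a*M, i.e. S_1 = a M;
# l = 2 returns -M*a**2, the Kerr quadrupole moment; and so on.
print(kerr_moments(1.0, 0.9))
```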
The waves emitted by extreme mass ratio inspiral are thus likely to be directly measurable by LISA, and are likely to be extremely interesting. One might next wonder whether they occur often enough to be interesting. This question has been examined in some detail by Martin Rees and Steinn Sigurdsson sah:rs ; sah:sig . They consider the scattering of stellar mass black holes in the central density cusp of galaxies into tightly bound orbits of the galaxies' central black hole. Occasionally, such a scattering event will put the stellar mass hole into an orbit which is so tightly bound that its future dynamics are driven by gravitational-wave emission, and it becomes an interesting source for LISA. They find that the rate of such events is likely to lie in the range
$$\frac{\text{1 event}}{\text{year }\text{Gpc}^3}\lesssim \text{rate}\lesssim \frac{\text{1 event}}{\text{month }\text{Gpc}^3}.$$
(7)
Obviously, there are large uncertainties in this calculation. The low mass end of the massive black hole population (which is most relevant to LISA observations) is not as well constrained as the population of very massive black holes ($`M\gtrsim 10^8M_{\odot }`$), and there are uncertainties in the rate at which stellar mass black holes are "fed" into the central hole to produce extreme mass ratio binaries. However, the lower end of the rate estimate (7) is based on very conservative estimates. We may rather robustly estimate that the rate measured by LISA will be several events per year out to a Gigaparsec sah:steinn\_pc .
The waves that LISA will measure from these sources will come from orbits that are rather eccentric and inclined with respect to the black holeโs equatorial plane sah:hilsbender . To best interpret the measured waves (and, indeed, in order to improve the odds of seeing the waves at all in the detectorโs noisy data stream), it will be necessary to have some theoretical modeling of the orbit and the waves that it emits as gravitational radiation reduces the orbitโs energy and angular momentum. We expect that the radius, inclination angle, and eccentricity of the orbit will change as radiative backreaction drives the systemโs evolution. Some means of understanding these changes, in detail, is needed in order to model the wavesโ evolution accurately.
Before discussing work in radiation reaction, it is necessary to perform a sanity check. Relativity theorists often work in a very idealized universe: their extreme mass ratio binary is likely to consist of a big black hole, a small body, and gravitational waves. In the real astrophysical world, there will be complications to this pretty (but highly idealized) picture. One should worry whether the complications render the relativity theorist's modeling invalid.
Perhaps the most important such complication arises from interaction between the small inspiraling body and material accreting onto the massive black hole. Recently, Narayan has analyzed this interaction and concluded that, in almost all cases, accretion-induced drag is unlikely to significantly influence extreme mass ratio inspiral sah:narayan . This conclusion is based on the fact that in the majority of cases, the rate at which the central black hole accretes gas from its environment is rather low (several orders of magnitude less than the Eddington rate sah:adaf ). For these "normal" galaxies, much evidence sah:narayan ; sah:adaf suggests that the gas accretes via an advection dominated accretion flow (ADAF). Narayan's calculation sah:narayan shows that the timescale for ADAF drag to change the orbit's characteristics (e.g., the orbital angular momentum) is many (9 to 16) orders of magnitude longer than the timescale for radiation reaction to change them. Thus, the relativity theorist's idealized view of an extreme mass ratio binary is probably quite accurate: radiation reaction is likely the most important factor driving the evolution of extreme mass ratio binaries.
When the mass ratio is extreme, one can analyze the spacetime of the binary using a perturbative expansion: the spacetime metric can be written as a "background" from the central object (which I will assume from now on is a Kerr black hole), plus a perturbation due to the inspiraling body:
$$g_{\alpha \beta }=g_{\alpha \beta }^{\mathrm{Kerr}}(M,a)+h_{\alpha \beta }(\mu ).$$
(8)
The evolution of the perturbation $`h_{\alpha \beta }(\mu )`$ should then describe the dynamical evolution of the system. To linear order in the mass ratio $`\mu /M`$ (which should be adequate for extreme mass ratios), this evolution can be described using perturbation techniques, such as the Teukolsky equation sah:teuk72 . (The Teukolsky equation actually describes the evolution of a curvature quantity related to the perturbation.)
When the mass ratio is extreme, the effects of radiation reaction are gentle enough that the system's evolution is adiabatic: the radiation reaction timescale is much longer than the orbital timescale. At any given moment, the trajectory of the small body is very nearly a geodesic, parameterized by the three constants of Kerr orbital motion: the energy $`E`$, the ($`z`$-component of) angular momentum $`L_z`$, and the "Carter constant" $`Q`$. Gravitational-wave emission causes these three constants to change on the radiation reaction timescale. In this adiabatic limit, the evolution of the system can be understood in terms of the evolution of the quantities $`(E,L_z,Q)`$.
It is well known that gravitational waves carry energy and angular momentum. One might think that they carry "Carter constant" as well, and that therefore one might be able to deduce the effects of radiation reaction by measuring the flux of radiation at infinity and down the event horizon. By measuring the amount of $`E`$, $`L_z`$, and $`Q`$ carried in the flux one should be able to deduce how much $`E`$, $`L_z`$, and $`Q`$ are lost from the orbit. This would then allow one to figure out how orbits of Kerr black holes radiatively evolve.
This approach does not work. One can deduce the change in the orbitโs $`E`$ and $`L_z`$ by examining radiation fluxes, but one cannot so deduce the change in $`Q`$:
$`\delta E_{\mathrm{orbit}}`$ $`=`$ $`\delta E_{\mathrm{radiated}},`$
$`\delta L_{z,\mathrm{orbit}}`$ $`=`$ $`\delta L_{z,\mathrm{radiated}},`$
$`\delta Q_{\mathrm{orbit}}`$ $`\ne `$ $`\delta Q_{\mathrm{radiated}}.`$ (9)
The change $`\delta Q`$ turns out to depend explicitly on the local radiation reaction force, $`f^\mu =dp^\mu /d\tau `$, which the small body experiences due to radiative backreaction (see sah:paperI ). The properties of this force (and programs to calculate it) are described elsewhere in this volume sah:mino\_burko . Here, it is sufficient to note that an understanding of the radiation reaction force for gravitational radiation reaction lies some time in the future, so that we cannot yet evolve generic Kerr black hole orbits.
There are special cases where evolution of the Carter constant is not such a nasty impediment. One case is the evolution of equatorial orbits. Equatorial orbits have $`Q=0`$, and one can easily show that an orbit which starts off equatorial remains equatorial. In this case, one need only evolve the energy and angular momentum; the local radiation reaction force is not needed. Another case is the evolution of circular, non-equatorial orbits. (For non-zero spin, "circular" means "constant Boyer-Lindquist coordinate radius".) Such orbits have recently been shown to remain circular under adiabatic radiation reaction sah:circulartheorems . Thus, in an adiabatic evolution, a system which is initially circular and inclined will remain circular and inclined: the system evolves through a sequence of orbits changing only its radius and inclination angle. By imposing "circular goes to circular", one can write down a simple rule relating the change of the Carter constant to the flux of energy and angular momentum: $`\dot{Q}=\dot{Q}(\dot{E},\dot{L}_z)`$.
Recently, I have examined the evolution of these circular orbits under adiabatic radiation reaction, using a flux-measuring formalism based on the Teukolsky equation. The formalism and results are presented at length in sah:paperI . Some highlights of the results are presented in Figure 1.
Consider the left panel of Figure 1. The horizontal axis is orbital radius $`r`$; the vertical axis is inclination angle $`\iota `$. This figure shows the direction, in $`(r,\iota )`$ phase space, in which radiation reaction tends to push the orbit. This particular analysis is for a black hole with $`a=0.8M`$. Notice that the orbits are in the strong field of the hole; the dotted line indicates the maximum inclination angle which the orbit can have and remain stable. Orbits tilted beyond this line are dynamically unstable to small perturbations and plunge into the hole. The tail of each arrow represents a particular orbit. The direction of the arrow gives the direction in which that orbit tends to evolve due to gravitational-wave emission; the arrow's length indicates the relative rate of evolution. In all cases, the direction of the arrow is such that the inclination angle increases: radiation reaction tends to make tilted orbits more inclined. This is exactly what one would have predicted by extrapolating from post-Newtonian theory sah:ryan\_pn . The rate at which this inclination angle increases is rather slow — the arrows in Figure 1 are nearly flat. Indeed, the value of $`d\iota /dt`$ in this strong-field regime is roughly 3 times smaller than what post-Newtonian theory predicts. (Recent analyses are showing that in the extreme strong field of rapidly rotating holes, the inclination angle changes rather more dramatically, and in the opposite direction: radiation reaction tends to decrease the inclination angle. This work is in progress sah:paperII .) An interesting feature of Figure 1 is the very long arrow at $`r=7M`$, $`\iota \approx 120^{\circ }`$. This orbit lies extremely close to the marginally stable orbit: it is at $`\iota =119.194^{\circ }`$; the marginally stable orbit is at $`\iota =119.670^{\circ }`$. This orbit is barely dynamically stable, so a small push has drastic effects.
The right panel of Figure 1 shows a portion of the gravitational waveform emitted during inspiral. The central black hole in this case has $`a=0.95M`$; it lies at luminosity distance $`D`$ from the detector. The waveform here is shown as the small body passes through $`r=7M`$, $`\iota =62.43^{\circ }`$. Note the low-frequency modulation of both polarizations. This is due to the frame-dragging induced precession of the orbital plane — Lense-Thirring precession. Note also the many sharp, short-timescale features present in the two polarizations. When the spin is high, many harmonics of the small body's fundamental orbital frequencies contribute to the gravitational waveform. This leads to a rather complicated structure; the energy spectrum corresponding to these waveforms extends to rather high frequencies. Accurate measurement of such complicated waveforms will be quite a challenge. However, the payoff is likely to be immense.
I thank Kip Thorne and Steinn Sigurdsson for help and advice in writing this talk; I also thank Ramesh Narayan for allowing me to use a preliminary draft of Ref. sah:narayan . I am indebted to many people who helped me construct my radiation reaction code, including (but not limited to) Sam Finn, Daniel Kennefick, Yuri Levin, Amos Ori, Sterl Phinney, and "The Capra Gang": Lior Burko, Patrick Brady, Éanna Flanagan, Eric Poisson, and Alan Wiseman. This research was supported by NSF Grants AST-9731698 and AST-9618537, and NASA Grants NAG5-6840 and NAG5-7034.
# On the star formation history of IZw 18

(This research has made use of NASA's Astrophysics Data System Abstract Service.)
## 1 Introduction
A challenge of modern astrophysics is the understanding of galaxy formation and evolution. In this exploration, low-mass dwarf and irregular galaxies have progressively come to occupy a special place. Indeed, in hierarchical clustering theories these galaxies are the building blocks from which larger systems are assembled by merging (Kauffmann et al. 1993; Pascarelle et al. 1996; Lowenthal et al. 1997). Moreover, as primeval galaxies may undergo rapid and strong star formation events (Partridge & Peebles 1967), nearby dwarf starburst galaxies, or Blue Compact Dwarf Galaxies (BCDG), of low metallicity can also be considered as their local counterparts. Therefore the study of low-redshift starbursts is of major interest for our understanding of galaxy formation and evolution.
As BCDG presently undergo strong star formation (which cannot be maintained for a long time) but generally present a low metallicity indicating a low level of evolution, Searle & Sargent (1972) proposed that these systems are young in the sense that they are forming stars for the first time. An alternative is that they have formed stars during strong starburst events separated by long quiescent periods. However, most dwarf starburst galaxies show an old underlying population, indicating that they have also formed stars in the past (Thuan 1983; Doublier 1998). Thus they are not "young".
Among starbursts, IZw 18, as the lowest metallicity galaxy known locally, could be considered the best candidate for a truly "young" galaxy. However, recent studies have shown that even this object is not forming stars for the first time. Color-magnitude diagrams have revealed the presence of stars older than 1 Gyr (Aloisi et al. 1999). Legrand et al. (1999) have shown that the extreme homogeneity of abundances throughout the galaxy (see also Van Zee et al. 1998) cannot be explained by the metals ejected from the massive stars formed in the current burst (see also Tenorio-Tagle 1996), thus indicating previous star formation. We therefore need to constrain this previous star formation and specify its nature.
It is generally accepted that the enrichment of the ISM arises during burst phases. In the case of IZw 18, Kunth et al. (1995) have shown that one single burst with intensity comparable to the present one is sufficient to reproduce the observed abundances. However, metals ejected by massive stars could escape the galaxy if its total mass is lower than $`10^8\mathrm{M}_{\odot }`$ (Mac Low & Ferrara 1999). The metallicity is then no longer a measure of the number of bursts. On the other hand, if the total mass of the galaxy amounts to $`10^9\mathrm{M}_{\odot }`$, the metals are likely to be retained (Silich & Tenorio-Tagle 1998). As the total mass of IZw 18 is likely to lie between these values (Viallefond et al. 1987; Van Zee et al. 1998), the escape of a fraction or of the totality of the newly synthesized metals during a burst cannot be excluded. In such a case several bursts are needed to account for the observed metallicity, their number depending on the fraction of metals leaving the galaxy. Nevertheless, even if metals escape, stars are likely to remain bound, and for an increasingly larger number of bursts the old underlying stellar population will appear progressively redder. Thus the number of previous bursts is limited, considering the extremely blue colors of BCDG.
Between starburst events, BCDG are likely to appear as Low Surface Brightness Galaxies (LSBG). However, studies of the latter (Van Zee et al. 1997c) showed that despite their low gas density, they do not have a zero star formation rate (SFR). LSBG indeed present a low and possibly continuous SFR. This led Legrand et al. (1999) to propose a continuous low SFR over a Hubble time as responsible for the observed metallicity level in the most metal-poor objects like IZw 18.
Several studies of the past star formation history of BCDG, and specifically of IZw 18, have been carried out. Most have dealt with their chemical evolution (Chiosi & Matteucci 1982; Carigi et al. 1995; Kunth et al. 1995) or with their spectrophotometric properties (Mas-Hesse & Kunth 1991; Leitherer & Heckman 1995; Stasinska & Leitherer 1996; Cervino & Mas-Hesse 1994; Mas-Hesse & Kunth 1999), but rarely with both. Moreover, only the influence of bursts has been studied up to now, and the low continuous SFR of the inter-burst phases has been ignored.
We used a spectrophotometric model coupled with chemical evolution in order to constrain both the abundances and the colors of the galaxies. We also investigated the effect of a continuous and low star formation regime. A preliminary study (Legrand & Kunth 1998) showed that this scenario is plausible. Here we present detailed calculations, results and their implications. The model and the observational data used are described in section 2. The different models, including the investigation of mass loss effect and the continuous star formation rate model are presented in section 3. Consequences and generalization of the continuous SFR hypothesis are discussed in section 4.
## 2 Modelling the star formation history in IZw 18
In order to investigate the star formation history of IZw 18, we used the spectrophotometric model coupled with the chemical evolution program โSTARDUSTโ described by Devriendt et al. (1999). The advantage of this model is that both the metallicity and the spectral properties of a galaxy are monitored through time.
### 2.1 The model
The main features of the model are the following:
* A galaxy normalized to 1 $`\mathrm{M}_{\odot }`$ of baryonic matter is considered.
* The SFR and the IMF are fixed and used to evaluate at each time the number of stars of all masses formed.
* The stellar lifetimes are taken into account, i.e., no instantaneous recycling approximation is used. The metals ejected (C, O, Fe, and the total metallicity) are calculated at each time step, as well as the number of stars of each mass. The chemical and spectroscopic evolution is followed in time (a schematic sketch of this bookkeeping is given after this list).
* A fraction of the metals produced by the massive stars ($`M\ge 9M_{\odot }`$) can be expelled from the galaxy and does not contribute to the enrichment.
* The fraction of the produced metals remaining in the galaxy is assumed to be immediately and uniformly mixed with the interstellar medium. We must keep in mind that there may be a time delay between their production and their visibility.
* The newly formed stars have the metallicity of the gas at the time of their birth.
* The spectrum as a function of time is computed by summing the number of stars multiplied by their individual spectra. The nebular emission is not included in the model.
* The model uses the evolutionary tracks from the Geneva group (Schaller et al. 1992; Charbonnel et al. 1996). The yields are from Maeder (1992) for the massive stars ($`M\ge 9M_{\odot }`$) and from Renzini & Voli (1981) for the lower mass stars. All the metals produced are ejected (Case A of Maeder 1992).
* The stellar output spectrum is computed using the stellar libraries from Kurucz (1992), supplemented by Bessel et al. (1989, 1991) for M giants and Brett (1995) for M dwarfs.
* We used a typical IMF described as a power law in the mass range 0.1-120 $`\mathrm{M}_{}`$.
$$\varphi (m)=a\,m^{-x}$$
(1)
A constant index x of 1.35 was used (Salpeter 1955). There are now some indications that the IMF may flatten at low masses, maybe below $`0.3\mathrm{M}_{\odot }`$ (Elmegreen 1999; Scalo 1998). As the stars in this range (0.1-0.3 $`\mathrm{M}_{\odot }`$) do not contribute significantly to the enrichment of the ISM, nor to the colors, this will only act on the normalisation of the SFR, in the sense that forming fewer low mass stars will decrease the total SFR required to reproduce the observed abundances. However, as the SFR quoted by Van Zee et al. (1997b, c) and reproduced in table 1 are computed using a Salpeter IMF down to 0.1 $`\mathrm{M}_{\odot }`$, we used this value in the model in order to compare our results with these previous studies. Finally, their upper mass limit is 100 $`\mathrm{M}_{\odot }`$ (against 120 $`\mathrm{M}_{\odot }`$ in our models). However, the upper mass limit of the IMF does not strongly affect the derivation of the SFR but can modify the abundances. Thus using a Salpeter IMF ranging from 0.1 $`\mathrm{M}_{\odot }`$ to 120 $`\mathrm{M}_{\odot }`$ appears as a good compromise to study abundances and compare SFR with previous studies.
* Two regimes of star formation have been investigated:
+ A continuous star formation during which the SFR is low and directly proportional to the total mass of available gas.
+ A burst of star formation during which all the stars are formed in a rather short time.
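As a purely schematic illustration of the kind of bookkeeping such a model performs, the following one-zone sketch tracks gas, metals, and delayed ejecta. The returned gas fraction, net oxygen yield, and single representative stellar lifetime are placeholder numbers, not the actual ingredients of STARDUST, and the spectral synthesis step is omitted entirely.

```python
# Schematic one-zone sketch of the chemical bookkeeping described above.
R_FRAC = 0.3   # gas fraction returned by each stellar generation (placeholder)
Y_O = 0.01     # net oxygen yield per unit mass of stars formed (placeholder)
F_ESC = 0.0    # fraction of the massive-star metals expelled from the galaxy
TAU = 1.0e7    # single representative stellar lifetime [yr] (placeholder)

def evolve(m_gas, sfr_per_gas, t_end=14.0e9, dt=1.0e6):
    """Evolve gas mass and gas metallicity with SFR proportional to the gas."""
    m_z = 0.0      # metal mass currently in the gas
    pending = []   # (release time, gas returned, metals returned)
    t = 0.0
    while t < t_end:
        dm = sfr_per_gas * m_gas * dt            # mass locked into new stars
        z = m_z / m_gas if m_gas > 0.0 else 0.0  # stars inherit gas metallicity
        m_gas -= dm
        m_z -= z * dm
        pending.append((t + TAU, R_FRAC * dm,
                        (1.0 - F_ESC) * Y_O * dm + R_FRAC * dm * z))
        for p in [q for q in pending if q[0] <= t]:  # ejecta from dead stars
            pending.remove(p)
            m_gas += p[1]
            m_z += p[2]
        t += dt
    return m_gas, m_z / m_gas

# evolve(1.0, 1e-12) mimics a continuous rate of 1e-4 Msun/yr for a 1e8 Msun
# gas reservoir (see Sect. 3.3), giving a metallicity of order 1e-4 after 14 Gyr.
```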
### 2.2 Comparison of the model with IZw 18
As the model is normalized to one solar mass of gas, we had to multiply the parameters by the mass of IZw 18 in order to compare our results with the observations. However, this normalization appears only in the value of the SFR and in the absolute magnitude predictions; the colors reported are independent of the choice of the mass of the galaxy.
The initial mass of gas in IZw 18 must lie between $`6.9\times 10^7`$ and $`8\times 10^8\mathrm{M}_{\odot }`$, which are respectively the mass of HI and the dynamical mass measured by Lequeux & Viallefond (1980). However, if only the main component is considered, the mass must be taken between $`2.6\times 10^7\mathrm{M}_{\odot }`$ (HI) and $`2.6\times 10^8\mathrm{M}_{\odot }`$ (dynamical mass), as measured by Van Zee et al. (1998). The initial mass of gas in IZw 18, in the absence of infall, was higher than the mass of HI because of the presence of stars and perhaps molecular $`\mathrm{H}_2`$ (Lequeux & Viallefond 1980). As dark matter can represent a non-negligible fraction of the total dynamical mass, the mass of (baryonic) gas can be lower than the dynamical mass. We adopted a value of $`10^8\mathrm{M}_{\odot }`$ for the initial mass of gas in IZw 18.
The model produces both the abundances and the spectra. We used the spectrum to derive the expected colors in (U-B), (B-V), and (V-K). For comparison with the observed abundances, we adopted the values reported by Garnett et al. (1997) for C and Skillman & Kennicutt (1993) for O. Most of the published colors for IZw 18 are relatively old (Huchra 1977; Thuan 1983) and have not been corrected for the nebular contribution. Salzer (1998) has recently measured the colors of IZw 18 and corrected them for the nebular contribution. As our model does not include the nebular contribution, we adopted Salzer's values for comparison, i.e., $`(\mathrm{U}-\mathrm{B})=-0.88\pm 0.06`$ and $`(\mathrm{B}-\mathrm{V})=0.03\pm 0.04`$; Thuan (1983) has estimated that the flux measured in the IR was mainly of stellar origin. We thus adopted his value of $`(\mathrm{V}-\mathrm{K})=0.57\pm 0.23`$.
Finally, the model used has been compared by Devriendt et al. (1999) with two similar models, i.e., PEGASE (Fioc & Rocca-Volmerange 1997) and GISSEL (Bruzual A. & Charlot 1993), and no differences larger than 0.1 magnitude were found; this was considered the intrinsic uncertainty of the modeling process.
## 3 Results of the modelisation
### 3.1 Enrichment by one previous burst
It is generally admitted that the starburst events are the main contributors to the enrichment of the ISM. We thus used the model to evaluate the characteristics of a single burst required to reproduce the observed oxygen abundance in IZw 18. Taking the uncertainties into account, we found, like previous studies (for example Kunth et al. 1995), that the present day abundances can be reproduced by a single burst, previous to the current one, with a SFR of $`0.065\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ during 20 Myr. Moreover, the contribution of this old underlying population is too faint to modify significantly the colors of the galaxy, which are currently dominated by the newly formed massive stars. This model can reproduce all the observations.
However, some simple arguments can rule out this model. Indeed, we assumed that between bursts the SFR is equal to zero, which is certainly wrong. We will demonstrate below that even a low but continuous SFR between bursts, as observed in LSBG, is likely to produce significant enrichment. Moreover, the kinetic energy liberated in such a burst is high (about $`10^{40}\mathrm{erg}\mathrm{s}^{-1}`$ using the models of Cervino 1998). Mac Low & Ferrara (1999) have suggested that for galaxies with masses comparable to that of IZw 18, such an energy is likely to eject out of the galaxy all the metals formed by massive stars. If true, it means that bursts are unlikely to enrich the ISM by much! We thus have investigated the effect of the loss of newly synthesized elements by galactic winds. As a less extreme hypothesis, we assumed that only a fraction of the SN ejecta (and not the totality) leaves the galaxy.
### 3.2 The effect of metal loss
The possibility that the energy released by the SN could eject their products out of the galaxy was proposed earlier by Russell et al. (1988). However, intermediate mass stars, evolving more slowly, will eject their metals after the SN explosion of the most massive stars. Since the kinetic energy released, mainly in stellar winds, is lower than for the massive stars, their metal products should be retained. This will result in a low effective enrichment in oxygen (main product of massive stars), but in a relatively normal enrichment in carbon (mostly produced in intermediate mass stars). This hypothesis of โdifferential galactic windโ seems to be necessary to reproduce the abundance measurements in some but not all the galaxies (Marconi et al. 1994; Tosi 1998).
If a fraction of metals escapes from the galaxy during a burst, the number of bursts necessary to reach the observed abundance in IZw 18 will be larger. We thus ran a model in which 80% of the metals produced by stars more massive than 9 $`M_{\odot }`$ left the galaxy and did not contribute to its chemical enrichment. Assuming the same parameters for the bursts as previously, five bursts were necessary to reach the oxygen abundance level seen in IZw 18. We thus assumed recurrent bursts occurring every 3 Gyr. The results of this model are shown in figures 1 and 2.
Figure 1 shows that if the oxygen abundance is reproduced after 5 bursts, the differential winds hypothesis results in an overproduction of carbon. Moreover, if the carbon abundance in IZw 18 is lower than the measurements of Garnett et al. (1997), as suggested by Izotov (1999), the discrepancy is even larger and completely rules out this model. However, the uncertainties on the yields remain large (see for example Prantzos 1998, 1999). For example, the detection of WR stars in IZw 18 (Legrand et al. 1997) can imply that the mass loss rates of massive stars are twice the standard ones, and the metals produced by the intermediate mass stars may also leave the galaxy. We thus think that this argument alone is not strong enough to definitively invalidate this scenario.
On the other hand, figure 2 shows that after four bursts, the expected colors (essentially V-K) no longer correspond to those observed. This is due to the fact that the old population remaining from the previous star formation events contributes more and more and reddens the colors. This constraint is strong, since an old population cannot simply be ignored. Of course, the constraint from the observed colors is not really the number of previous bursts (because it depends on their strength) but the ratio of young stars (formed in the current burst) to old stars (remaining from all the previous star formation events). We thus ran another model with recurrent bursts of intensity comparable to the current one (Mas-Hesse & Kunth 1999) every 1.5 Gyr. This model shows, as in figure 2, that the expected colors become incompatible with the observations after 6 of these bursts. It thus appears that the total mass of stars formed previously to the current burst cannot be larger than 6 times the total mass of stars involved in the current burst. Only 2 or 3 of these bursts produce enough metals to account for the observed abundances (if all the metals remain in the galaxy). Thus, if the present day metallicity results from previous star formation events with mass loss, this constrains the fraction of metals lost by the galaxy to be lower than 50-70%. For the same reason, this rules out models inferring a large number of bursts in which most of the metals leave the galaxy or produce a hot metal-rich halo as suggested by Pantelaki & Clayton (1987).
### 3.3 A continuous low star formation rate
As discussed by Legrand et al. (1999), the metals observed in IZw 18, and also in other starburst galaxies, result from a previous star formation episode. Assuming that the present burst in IZw 18 is the first one, we evaluated the continuous star formation rate required to reproduce the observed oxygen abundance after 14 Gyr. In this scenario, a mild star formation process would have started a long time ago, but the galaxy would presently undergo its first strong starburst event. We found that a SFR of only $`10^{-4}g\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ (i.e., $`10^{-3}g\mathrm{M}_{\odot }\mathrm{Gyr}^{-1}`$ per unit mass of gas in the galaxy), where $`g`$ is the fraction of gas (in mass) available, can reproduce the observed oxygen abundance in IZw 18. Moreover, this model reproduces perfectly the carbon abundance measured by Garnett et al. (1997). The kinetic energy injection rate, evaluated using the models of Cervino (1998), is for this scenario $`9\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}`$, i.e., probably insufficient to eject the metals out of the galaxy (Mac Low & Ferrara 1999; Silich & Tenorio-Tagle 1998).
Moreover, the kinetic energy is not deposited in one single region as in a burst; since the continuous star formation is supposed to occur sporadically in location, the injected energy is diluted over the whole galaxy, reducing the efficiency of ejection of the metals (see also Strickland & Stevens 1999). For these reasons, we assumed that all the metals produced by the continuous star formation are retained in the galaxy. Of course, this continuous star formation regime represents an extreme case. We cannot rule out the existence of intermediate models in which the star formation history would be a succession of very small bursts with no or very low metal loss. However, these models would appear, on average, as a rather continuous star formation rate. The motivation for preferring a continuous star formation rate is principally the absence of observational evidence for gas-rich galaxies with a SFR equal to zero, even among LSBG.
Finally, in order to compare the colors predicted by the model with the observations, we added the present burst at 14 Gyr. The characteristics of the current burst, i.e., a SFR of $`0.023\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ during 20 Myr, were taken from Mas-Hesse & Kunth (1999). The evolution with time of the gas fraction and of the oxygen and carbon abundances is presented in Fig. 3, whereas the evolution of the colors is shown in Fig. 4.
Note that the observations of Thuan (1983) were done using an 8'' circular aperture, which is smaller than IZw 18. Thus the total asymptotic magnitudes may be smaller than the ones measured by Thuan (1983). Doublier (1998) has shown that this difference can be as large as 3 magnitudes, due to the presence of an old underlying stellar population. However, as IZw 18 is a very unevolved object, the old underlying population must be very faint. We have evaluated the difference between the observations of Thuan (1983) and the total magnitude expected from our model in the case of an old underlying population, due to continuous star formation, extending uniformly over the whole galaxy. We assumed that the old underlying population extends over 60'' $`\times `$ 45'', as discussed later. If $`m`$ is the total magnitude emitted in a band, it can be written as
$$\mathrm{m}=-2.5\mathrm{Log}(\mathrm{E}_\mathrm{b}+\mathrm{E}_{\mathrm{ci8}}+\mathrm{E}_{\mathrm{co8}})$$
(2)
with $`\mathrm{E}_\mathrm{b}`$ the flux emitted by the burst (localized in the central region included in the measurement of Thuan 1983), $`\mathrm{E}_{\mathrm{ci8}}`$ and $`\mathrm{E}_{\mathrm{co8}}`$ the fluxes emitted by the old underlying stellar population inside and outside the 8โ aperture, respectively. The magnitude measured by Thuan (1983) are then:
$$\mathrm{m}_8=-2.5\mathrm{Log}(\mathrm{E}_\mathrm{b}+\mathrm{E}_{\mathrm{ci8}})$$
(3)
$`\mathrm{E}_{\mathrm{co8}}`$ can be evaluated using the flux of the old underlying stellar population predicted by the model. Under these assumptions, the magnitudes measured by Thuan (1983) should be decreased by 0.20 in J and 0.24 in K. Thus this does not change the main results, and the model predicts colors consistent with the observations.
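A one-line check of this aperture correction, with placeholder fluxes chosen only to illustrate the size and sign of the effect, follows from Eqs. (2) and (3):

```python
import math

def aperture_correction(e_burst, e_in8, e_out8):
    """m - m_8 from Eqs. (2) and (3); fluxes in arbitrary common units."""
    m_tot = -2.5 * math.log10(e_burst + e_in8 + e_out8)
    m_8 = -2.5 * math.log10(e_burst + e_in8)
    return m_tot - m_8   # negative: the total magnitude is brighter

# e.g. if the light outside the aperture is ~20% of that inside:
print(aperture_correction(1.0, 0.2, 0.24))   # about -0.2 mag
```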
The fraction of gas consumed remains very low; thus $`g`$ is always close to 1. This means that the SFR is rather constant. The model is thus fully compatible with all the observations within the error bars.
### 3.4 First consequences
If we suppose that the continuous star formation occurs sporadically over the whole galaxy, as in LSBG (Van Zee et al. 1997c), the observed homogeneity of abundances (within the NW region and also between the NW and SE regions of IZw 18) is a natural consequence of this scenario. The uniformly distributed star formation and the long time evolution (14 Gyr) ensure the dispersal and homogenization of the metals over the whole galaxy.
We evaluated the number of stars with a mass greater than 8 $`\mathrm{M}_{\odot }`$ formed over 14 Gyr to be around 12000. This corresponds to 120 massive stars (typically an open cluster) formed every 140 Myr. Taking their lifetimes into account, we expect to see around 13 stars with mass greater than 8 $`\mathrm{M}_{\odot }`$ at a given epoch. We also evaluated the expected SN rate to be $`7.5\times 10^{-7}\mathrm{yr}^{-1}`$, or, relative to the mass of gas, around $`10^{-14}\mathrm{yr}^{-1}\mathrm{M}_{\odot }^{-1}`$. This can be compared to the SN rate in our Galaxy, which amounts to $`10^{-13}\mathrm{yr}^{-1}\mathrm{M}_{\odot }^{-1}`$ (Tammann et al. 1994).
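These numbers are easy to verify; the mean massive-star lifetime of about 15 Myr assumed below is an illustrative value, not one quoted in the text:

```python
n_massive = 12000   # stars with M > 8 Msun formed over 14 Gyr
t_total = 14.0e9    # yr
t_life = 1.5e7      # assumed mean massive-star lifetime [yr]

print(n_massive * t_life / t_total)   # ~13 such stars visible at any epoch
print(n_massive / t_total)            # ~8.6e-7 SNe/yr, close to the quoted 7.5e-7
```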
### 3.5 Threshold and efficiency of star formation
There is a relationship between the star formation rate and the gas surface density (Schmidt 1959). A simple power law describes this link at โhighโ density, but this relationship breaks down under a critical threshold (Kennicutt 1989, 1998). The existence of a gas density critical threshold under which star formation would be inhibited had been proposed by Quirk (1972). This threshold would be associated with large scale gravitational instabilities for formation of massive clouds. In a thin isothermal disk, the critical gas surface density threshold (Toomre 1964; Cowie 1981) is (Kennicutt 1989):
$$\mathrm{\Sigma }_c=\alpha \frac{\kappa c}{3.36G}$$
(4)
with
$$\kappa =1.41\frac{V}{R}(1+\frac{R}{V}\frac{dV}{dR})^{1/2}$$
(5)
where $`c`$ is the dispersion velocity in the gas, $`\kappa `$ is the epicyclic frequency (derived from the rotation curve), and $`V`$ the rotation velocity at the distance $`R`$ from the center of the galaxy. However, Van Zee et al. (1997c) have shown that despite a gas density lower than the threshold, LSBG are undergoing star formation. We suggest that in low density objects, the density can fluctuate locally and get above the threshold in some places. This could induce localized and faint star formation (Van Zee et al. 1997c; Skillman 1999). For IZw 18, the observations of Van Zee et al. (1998) and Petrosian et al. (1997) reveal a solid body rotation (in the central part) with parameter $`\frac{dV}{dR}=70\mathrm{km}\mathrm{s}^{-1}\mathrm{kpc}^{-1}`$. Using a dispersion velocity in the gas of 12 $`\mathrm{km}\mathrm{s}^{-1}`$ (Van Zee et al. 1998), the critical threshold is of the order of $`1.5\times 10^{22}`$ atoms $`\mathrm{cm}^{-2}`$. On the other hand, Van Zee et al. (1998) have shown that the abundances in the HI halo are comparable to those in the HII region. This suggests that star formation may also have occurred quite far away from the central regions. This is reinforced by the observation of star formation in regions with density lower than the threshold (Van Zee et al. 1997c). As most of the HI gas responsible for the absorption measured by Kunth et al. (1994) is concentrated at densities higher than $`10^{20}`$ atoms $`\mathrm{cm}^{-2}`$, we assumed that this level represents the density limit for the continuous star formation. This appears as a lower limit because metals produced by star formation in more inner regions can also have been dispersed and mixed at larger distances. This implies that local fluctuations in the density should be up to a factor of 150 and that the continuous SFR could occur over a surface of $`60\times 45^{\prime \prime }`$ (Van Zee et al. 1998).
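As a numerical check of Eqs. (4) and (5) for the quoted numbers (solid-body rotation, so that $`V/R=dV/dR`$, and $`c=12\mathrm{km}\mathrm{s}^{-1}`$), the sketch below takes $`\alpha `$ to be unity; this is an assumption, since the text does not quote its value:

```python
import math

G = 4.30e-6                    # G in kpc (km/s)^2 / Msun
ATOMS_PER_MSUN_PC2 = 1.25e20   # H atoms cm^-2 per Msun pc^-2

def sigma_crit(c_kms, dv_dr, alpha=1.0):
    """Critical column density from Eqs. (4)-(5) for solid-body rotation."""
    kappa = 1.41 * dv_dr * math.sqrt(2.0)       # km/s/kpc, since (R/V) dV/dR = 1
    sigma = alpha * kappa * c_kms / (3.36 * G)  # Msun / kpc^2
    return sigma / 1.0e6 * ATOMS_PER_MSUN_PC2   # atoms / cm^2

print(sigma_crit(12.0, 70.0))   # ~1.5e22, as quoted in the text
```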
The continuous star formation rate evaluated is very low. From the work of Wyse & Silk (1989), we can compute the star formation efficiency of such a process. According to these authors, the star formation rate at a distance $`r`$ of the center and a time $`t`$ is:
$$\psi (r,t)=\epsilon \,\mathrm{\Omega }(r)\,\mu _{HI}(r,t)$$
(6)
where $`\epsilon `$ is the star formation efficiency, $`\mathrm{\Omega }(r)`$ the local angular frequency, and $`\mu _{HI}(r,t)`$ the HI surface density. Using the rotation curves computed by Van Zee et al. (1998) and Petrosian et al. (1997), we found that $`\mathrm{\Omega }=(8.76\times 10^7\mathrm{yr})^{-1}`$. The HI mass measured by Van Zee et al. (1998) over a surface of 2.3$`\times `$3 kpc gives a mean surface density of $`3\mathrm{M}_{\odot }\mathrm{pc}^{-2}`$. The star formation rate is then $`\psi =0.18\epsilon \mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, which, compared with the results of our model ($`10^{-4}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$), leads to a very low star formation efficiency of $`\epsilon \simeq 6\times 10^{-4}`$.
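The arithmetic behind these numbers can be reproduced as follows; the elliptical star forming area of 2.3 kpc $`\times `$ 3 kpc assumed below is a guess that recovers the quoted coefficient of 0.18:

```python
import math

omega = 1.0 / 8.76e7                 # local angular frequency [yr^-1]
mu_hi = 3.0                          # mean HI surface density [Msun pc^-2]
area = math.pi * 1150.0 * 1500.0     # ellipse with axes 2.3 x 3 kpc [pc^2]

sfr_per_eps = omega * mu_hi * area   # total SFR per unit efficiency
print(sfr_per_eps)                   # ~0.18 Msun/yr
print(1.0e-4 / sfr_per_eps)          # epsilon ~ 6e-4
```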
## 4 Generalization of the continuous star formation hypothesis
### 4.1 Underlying population and faint objects population
If the starburst in IZw 18 is the first one in its history, we can expect that objects which have not yet undergone a burst (but only a continuous star formation rate) do exist. They should then look like IZw 18 just before the present burst. Our modeling predicts that after 14 Gyr of continuous star formation, the magnitude of an IZw 18-like object is of the order of 20 in V and 17.5 in K. These magnitudes represent the brightness of the old underlying stellar population in IZw 18. Assuming that the continuous star formation occurred in regions where the HI column density was greater than $`10^{20}\mathrm{cm}^{-2}`$, i.e., $`60\times 45^{\prime \prime }`$ (Van Zee et al. 1998), the expected surface brightness would be of the order of 28 $`\mathrm{mag}\mathrm{arcsec}^{-2}`$ in V and 26 $`\mathrm{mag}\mathrm{arcsec}^{-2}`$ in K. These values are an upper limit (in $`\mathrm{mag}\mathrm{arcsec}^{-2}`$); if a fraction of the metals is ejected out of the galaxy, the SFR needed to produce the observed abundances will be higher, and the total luminosity and surface brightness will be increased. Moreover, as discussed in section 3.5, the density limit adopted for the continuous SFR is a lower limit, and the region where the continuous SFR can occur may be smaller, resulting in a higher surface brightness. However, the extreme faintness of the old underlying population probably explains why no strong evidence for its existence was found in IZw 18 (Thuan 1983; Hunter & Thronson 1995) until recently, when, reanalyzing HST archive images, Aloisi et al. (1999) found stars older than 1 Gyr. In any case, these very low surface brightness levels will be reachable with 8m class telescopes like the VLT and Gemini for rather close-by objects. A search for a faint old underlying population, in the external parts of dwarf galaxies like IZw 18, resulting from a continuous star formation process, is planned. If successful, this will support the existence of a class of very low surface brightness galaxies which never underwent a burst, but evolved through a continuous weak star formation rate.
### 4.2 LSBG as quiescent counterparts of starbursts
We also studied the state of dwarf galaxies between bursts. IZw 18 appears as an extreme object which could be presenting, for the first time, a strong star formation event. The chemical abundance levels in most starburst galaxies suggest that they have undergone at least two or three previous bursts. Using the model described above, we investigated the following simple star formation history for a dwarf galaxy with mass, size, and distance comparable to those of IZw 18:
* A continuous star formation rate over 14 Gyr.
* Two bursts with a SFR of $`0.065M_{\odot }yr^{-1}`$ and a duration of 20 Myr, at 8 and 11 Gyr respectively.
The absolute magnitudes predicted are around -10 and the surface brightness about 25 $`\mathrm{mag}\mathrm{arcsec}^{-2}`$ in B. These values correspond to objects at the extreme end of the luminosity function of the galaxies as observed by Lo et al. (1993).
We have also compared the continuous SFR required for IZw 18 ($`10^{-4}M_{\odot }yr^{-1}`$) with those measured by Van Zee et al. (1997a, b, c) in low surface brightness galaxies. As these objects have different sizes and masses, we normalized the SFR to the total HI mass. For IZw 18, the HI mass lies between $`2.6\times 10^7\mathrm{M}_{\odot }`$ (Lequeux & Viallefond 1980; Van Zee et al. 1998, for the main body) and $`6.9\times 10^7\mathrm{M}_{\odot }`$ (Van Zee et al. 1998, for the whole galaxy, including the diffuse low column density component). The comparison is shown in table 1. It appears that the continuous SFR predicted by our scenario is comparable, relative to the HI mass, to the lowest SFR observed in quiescent and low surface brightness galaxies (e.g., in UGC8024 or UGC9218).
We then conclude that LSBG and quiescent dwarfs are likely to be the quiescent counterparts of starburst galaxies.
### 4.3 Generalization to all galaxies
If a continuous star formation rate exists in IZw 18, it must exist in other dwarf galaxies, and maybe in all galaxies. There are some hints supporting such a hypothesis.
For example, the extreme outer parts of spirals, where no strong star formation event occurred and where the metals formed in the more active inner zones have not diffused, must have low abundances and low surface brightness. For example, let us assume that the bulk of "strong" star formation occurs in a spiral galaxy at distances less than the optical radius ($`\sim `$10 kpc). Roy & Kunth (1995) have shown that metals can be dispersed on scales up to 10 kpc in about 1 Gyr, so extending their results we can expect that if recent (less than 1 Gyr ago) star formation occurred in external regions located at 10 kpc from the center (one optical radius), the newly formed metals could affect abundances at distances up to 20 kpc from the center in a few Gyr. If this star formation is relatively recent, we expect that the most external regions of the disk (more than 2-3 optical radii) will not be affected by "strong" star formation events, and their metallicity will be solely the result of the "underlying" low continuous star formation rate. Extrapolations of metallicity gradients in spiral galaxies lead to abundances comparable to that of IZw 18 at radial distances of about three optical radii (Ferguson et al. 1998; Henry & Worthey 1999). This corresponds to the size of the halos or disks susceptible to give rise to metallic absorption in quasar spectra (Bergeron & Boisse 1991).
We showed that a continuous low star formation rate results in a steady increase of the metallicity of the interstellar gas. We have compared the evolution of the iron abundance predicted by our model with the measurements in DLA systems in Fig. 5. The abundances predicted by the model mimic the lower envelope of these measurements. If we assume that these absorption systems are associated with galaxy halos (Lanzetta et al. 1995; Tripp et al. 1997), this indicates that such a process can account for a minimal enrichment of the ISM. One measurement appears lower than the model prediction. However, Fe atoms are likely to condense into grains (Lu et al. 1998; Pettini et al. 1997), so the iron abundance measurements are only lower limits to the real iron abundances. Moreover, Bergeron & Boisse (1991) have shown that absorption should occur at distances of up to 4 Holmberg radii. These regions are likely to present very low densities, and may be too sub-critical to allow star formation, even in the low SFR regime described here. Moreover, the partial ionization of these regions by the diffuse ionizing background (Van Gorkom 1991; Corbelli et al. 1989; Maloney 1990; Corbelli & Salpeter 1993a, b) will also contribute to preventing star formation. Their metallicity could thus be due only to metals which have diffused from the inner regions. If true, we can expect, in these regions, abundances lower than what is predicted from the continuous SFR.
## 5 Conclusion
We have investigated different star formation histories for IZw 18 using a spectrophotometric model coupled with a chemical evolution model of galaxies. We have shown that if the observed metallicity results only from burst events with galactic winds, no more than 50-70% of the newly synthesized metals may have been ejected out of the galaxy. This is because a larger metal loss rate would require forming more stars to reach the measured abundances, hence resulting in redder colors than observed, due to an overproduction of old underlying low mass stars. Following the suggestion of Legrand et al. (1999), we investigated the hypothesis of a low, but continuous, SFR which would account alone for the observed metallicity in IZw 18. We have shown that the metals in IZw 18 are likely to result from a mild continuous star formation rate which took place independently of bursts. This star formation would be due to local fluctuations in the density which sporadically exceed the threshold for star formation. Using a spectrophotometric model and a chemical evolution model of galaxies, we demonstrated that a continuous star formation rate as low as $`10^{-4}M_{\odot }/yr`$ occurring for 14 Gyr can reproduce all the main parameters of IZw 18. The generalization of this model to all galaxies accounts for many observed facts, such as the presence of star formation in quiescent dwarfs and LSBG, the increase with time of the metallicity of the most underabundant DLA systems, the extrapolation of metallicity gradients to the outer parts of spiral galaxies, the lack of galaxies with a metallicity lower than that of IZw 18, the apparent absence of HI clouds without optical counterparts, and the homogeneity of abundances in dwarf galaxies. Moreover, we predict for IZw 18 and other extremely unevolved galaxies the presence of an old underlying stellar population (resulting from this continuous star formation process) at a surface brightness level of at least 28 $`\mathrm{mag}\mathrm{arcsec}^{-2}`$ in V and 26 $`\mathrm{mag}\mathrm{arcsec}^{-2}`$ in K. Finally, we have shown that the parameters of the low continuous star formation rate are comparable to what is observed in LSBG and quiescent dwarfs, suggesting that these objects could be the quiescent counterparts of starburst galaxies.
###### Acknowledgements.
This work is part of FL's PhD thesis. I am indebted to Daniel Kunth for his advice, suggestions, and support during all this work. I thank J. Devriendt, B. Guiderdoni, and R. Sadat for having kindly provided their model and spent time explaining its subtleties. I am also grateful to J. Salzer for providing unpublished photometry of IZw 18. I also thank J.R. Roy, G. Tenorio-Tagle, M. Fioc, P. Petitjean, J. Silk, R. & E. Terlevich, F. Combes, G. Ostlin, J. Lequeux, M. Cerviño, J. Walsh, and M. Mas-Hesse for helpful suggestions and discussions. I also thank the anonymous referee for his remarks and suggestions, which helped to improve the manuscript.
# Environment of the Gamma-Ray Burst GRB971214: A Giant H II Region Surrounded by a Galactic Supershell
## 1. Introduction
Gamma-ray bursts (hereafter GRBs), located at cosmological distances, form a group of the most luminous objects in the Universe. A number of GRBs have been observed with their host galaxies, of which intensive spectroscopic and imaging observations have been performed using the Hubble Space Telescope and the Keck telescopes. GRB971214 is one such object, with a very high redshift $`z>3`$, and it is worth particular attention for the following reasons.
Firstly, even after the optical transient had faded away, we can marginally detect a bright spot in the HST image, and the continuum and emission line fluxes from this spot overwhelm those from the remaining part of the host galaxy. Secondly, the ultraviolet spectrum of GRB971214 illustrated in Fig. 1 shows a flat UV continuum, which is often found in star forming galaxies. In addition, the Ly$`\alpha `$ emission line has a black absorption trough in the red part of the emission peak. These facts imply that the UV spectrum is formed in a star forming region which is surrounded by a thick and expanding medium of neutral hydrogen. We note that the location of the GRB afterglow coincides with the star forming region, or the bright spot, which leads to the proposal that the GRB occurred in a star forming region; in this Letter we deduce the physical environment of the GRB from its spectrum.
The P-Cygni type Ly$`\alpha `$ emission in the spectra of primeval galaxies has often been attributed to absorption by a galaxy not associated with the primeval galaxy but intervening accidentally along the line of sight. It has been regarded as a damped Ly$`\alpha `$ absorption that occurs in the vicinity of the source galaxy. In order to check this possibility, we calculate the probability of observing an intervening galaxy in front of the GRB host galaxy, which is just the optical depth for seeing a galaxy between the GRB host galaxy at $`z=3.425`$ and the place that corresponds to $`v_{exp}`$ in Hubble's expansion law. The optical depth is simply expressed by
$$\tau =n_g(1+z)^3\sigma L,$$
(1)
where the comoving volume number density of normal galaxies at $`z=0`$ is $`n_g\simeq 0.02h^3\mathrm{Mpc}^{-3}`$ (Im 1995), and the path length $`L`$ is estimated to be $`L=v_{exp}/H`$, with $`H`$ being the Hubble constant at the redshift, given by
$$H=H_0[\mathrm{\Omega }_M(1+z)^3+\mathrm{\Omega }_\mathrm{\Lambda }]^{1/2},$$
(2)
where the cosmological density parameters are $`\mathrm{\Omega }_M=8\pi G\rho _0/3H_0^2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=\mathrm{\Lambda }/3H_0^2`$, and the Hubble parameter $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. Here, the cross section is given by $`\sigma =\pi (r_{10}\times 10h^{-1}\mathrm{kpc})^2`$, where $`r_{10}`$ is the typical galaxy size in units of $`10h^{-1}\mathrm{kpc}`$. A direct substitution yields the optical depth $`\tau =0.0023r_{10}^2`$ for $`\mathrm{\Omega }_M=1/3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=2/3`$, and $`\tau =0.0025r_{10}^2`$ for $`\mathrm{\Omega }_M=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. This indicates that, if the average size of galaxies at $`z\simeq 3`$ is not large, it is highly improbable that the damped absorption in the spectrum of the host galaxy of GRB971214 is formed by an accidentally intervening galaxy.
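The substitution can be reproduced with the short script below (with $`r_{10}=1`$); it yields $`\tau `$ of order $`10^{-3}`$, the same order as the values quoted above, with the exact coefficients depending on rounding of the adopted numbers.

```python
import math

H0, z, v_exp = 65.0, 3.425, 1500.0   # km/s/Mpc, redshift, km/s
n_g, h, r10 = 0.02, 0.65, 1.0        # n_g in h^3 Mpc^-3; radius in 10/h kpc

def tau(omega_m, omega_l):
    H = H0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_l)   # Eq. (2)
    L = v_exp / H                                          # path length [Mpc]
    sigma = math.pi * (r10 * 0.01 / h) ** 2                # cross section [Mpc^2]
    return n_g * h**3 * (1 + z) ** 3 * sigma * L           # Eq. (1)

print(tau(1 / 3, 2 / 3), tau(0.3, 0.0))   # both of order 1e-3
```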
This leaves us to consider an alternative hypothesis, according to which the P-Cygni type profile of Ly$`\alpha `$ is formed by the expanding supershell that surrounds the star forming region in GRB971214 and is the remnant of GRB activity preceding GRB971214. In order to check this possibility, we now calculate the number of GRB events expected in a galaxy at $`z\simeq 3.4`$. Assuming the supernova rate is proportional to the star forming rate, Sadat et al. (1998) calculated the supernova rate at $`z\simeq 3.4`$, $`\mathrm{\Gamma }_{SN}\simeq 0.011\times 10^6h_{65}^3\mathrm{SNe}\mathrm{Myr}^{-1}\mathrm{Mpc}^{-3}`$, where $`H_0=65h_{65}\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. Accepting the concept that the GRB rate traces the massive star formation rate, Woods & Loeb (1998) showed $`\mathrm{\Gamma }_{GRB}\simeq 10^{-6}\mathrm{\Gamma }_{SN}`$, where $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, and $`H_0=65h_{65}\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. Therefore, $`\mathrm{\Gamma }_{GRB}(z\simeq 3.4)\simeq 0.011h_{65}^3\mathrm{Myr}^{-1}\mathrm{Mpc}^{-3}`$. Adopting the number density of galaxies at $`2.0<z<3.5`$, $`\mathrm{\Phi }^{}=1.76\times 10^{-3}h_{65}^3\mathrm{Mpc}^{-3}`$ (Pozzetti et al. 1998), we can obtain the GRB rate per galaxy at $`z\simeq 3.4`$, $`\mathrm{\Gamma }_{GRB}\simeq 6\mathrm{Myr}^{-1}`$. Therefore, the number of GRB events per galaxy during $`10^{4-5}\mathrm{yr}`$ is $`N_{GRB}=0.06-0.6`$. Moreover, the beaming factor, if it exists, can increase the event rate by another factor of ten, and the number of GRB events per galaxy during $`10^{4-5}\mathrm{yr}`$ is then $`N_{GRB}=0.6-6`$, which makes our supershell hypothesis a more probable alternative.
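The chain of rates above reduces to a few multiplications:

```python
gamma_sn = 0.011e6           # SNe / Myr / Mpc^3 at z ~ 3.4 (Sadat et al. 1998)
gamma_grb = 1e-6 * gamma_sn  # GRBs / Myr / Mpc^3 (Woods & Loeb 1998)
phi_star = 1.76e-3           # galaxies / Mpc^3 (Pozzetti et al. 1998)

rate = gamma_grb / phi_star          # ~6 GRBs per Myr per galaxy
print(rate * 1e-2, rate * 1e-1)      # N_GRB over 1e4 and 1e5 yr: 0.06 - 0.6
```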
It is noticeable that similar P-Cygni features are observed in primeval galaxies and in nearby star forming galaxies. Lee and Ahn (1998) proposed that the features might be caused not by an overlapping intergalactic medium but by the expanding medium enveloping the star forming region in these galaxies. This concept may also be applied to such a remote star-forming galaxy as the host galaxy of GRB971214. However, we cannot exclude the possibility that multiple supernova explosions may also result in an expanding shell.
In this Letter, we adopt the hypothesis of a GRB-driven supershell and perform a profile fitting analysis to derive the physical parameters characterizing the expanding supershell of neutral hydrogen surrounding the star forming region of the GRB host galaxy.
## 2. Images and Spectrum of GRB971214
GRB971214 was detected at 9 UT, December 14, 1997 (Heise et al. 1997), and its optical counterpart was detected twelve hours after the burst (Halpern et al. 1998). With a total fluence of $`1.09\times 10^{-5}\mathrm{ergs}\mathrm{cm}^{-2}`$ (Kippen et al. 1997) and the measured redshift of $`z=3.418`$ (Kulkarni et al. 1998), its energy release is estimated to be $`3\times 10^{53}`$ ergs in $`\gamma `$-rays alone, under the assumption of isotropic emission, $`\mathrm{\Omega }_0=0.3`$, and $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$.
In Fig. 1 is shown its ultraviolet spectrum, redshifted to the optical band, obtained by Kulkarni et al. (1998) using the Keck telescope. It is characterized by a flat UV continuum that is typically found in the spectra of star forming regions. It is also seen that the Ly$`\alpha `$ emission has a P-Cygni type profile, which is frequently observed in astronomical objects near and far (Lee & Ahn 1998). Lee and Ahn (1998) proposed that the P-Cygni type Ly$`\alpha `$ line is formed when the Ly$`\alpha `$ photons emitted in the central super star cluster are radiatively transferred in an HI supershell that is optically thick and expanding.
In this work, the photon source is assumed to be the H II region, which may contain $`10^4`$ O stars. According to Marlowe et al. (1995), nearby starbursting dwarfs are inferred to contain a similar number of OB stars when considering their H$`\alpha `$ luminosity, so this is neither an entirely new nor an extreme assumption for galaxies at higher redshifts. Furthermore, it appears less plausible that the Ly$`\alpha `$ emission arises from a medium ionized by shocks produced by supernovae or hypernovae, because the number density of the inner region is not sufficiently high to give a recombination time scale shorter than $`10^5`$ years.
## 3. Is the Photon Source Surrounded by the GRB Remnant?
### 3.1. Interpretation of the P-Cygni Absorption
In this work we will consider the shell hypothesis, and derive the physical properties of the expanding neutral medium from the observed Ly$`\alpha `$ absorption.
We assume a Gaussian profile for the unobscured Ly$`\alpha `$ emission and convolve it with a Voigt function whose center is displaced by the expansion velocity, which is determined by the fitting procedure. In principle, the effect of frequency redistribution by back-scatterings should be considered. However, we neglect this effect in this paper, because the S/N ratio and the resolution of the spectrum are not sufficiently good. For the continuum level, the blue part of Ly$`\alpha `$, which is more prone to extinction, is extrapolated from the red portion of the spectrum given by Kulkarni et al. (1998). They quote $`F_\nu =174(\nu /\nu _R)^\alpha `$ nJy with $`\alpha =-0.7\pm 0.2`$, where $`F_\nu `$ is the spectral density at frequency $`\nu `$ and $`\nu _R=4.7\times 10^{14}`$ Hz is the central frequency of the R band.
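A minimal sketch of this profile model is given below (our own illustrative code, not the authors'); the damping parameter, the Doppler width `b`, and the continuum level are arbitrary placeholders, and the Voigt function is evaluated through the Faddeeva function.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function; Voigt H(a,u) = Re w(u + ia)

lam0 = 1215.67 * (1.0 + 3.425)   # observed Ly-alpha line center [Angstrom]
A_DAMP = 1e-3                    # Voigt damping parameter (placeholder)

def voigt(u, a=A_DAMP):
    return wofz(u + 1j * a).real

def pcygni_model(lam, f_peak, sigma, tau0, v_exp, b=50.0, cont=0.1):
    """Gaussian emission on a flat continuum, absorbed by a blueshifted shell."""
    emission = cont + f_peak * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
    v = 2.998e5 * (lam - lam0) / lam0            # velocity offset [km/s]
    tau = tau0 * voigt((v + v_exp) / b) / voigt(0.0)
    return emission * np.exp(-tau)               # absorber centered at -v_exp

lam = np.linspace(lam0 - 80, lam0 + 80, 800)
flux = pcygni_model(lam, f_peak=0.675, sigma=5.0, tau0=6e6, v_exp=1500.0)
```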
We show the result in Fig. 1, where the dotted line represents the best fit profile, the solid line the observed profile, and the horizontal solid line the continuum level. The best fit expansion velocity of the supershell relative to the H II region is $`v_{exp}=1500\,\mathrm{km}\,\mathrm{s}^{-1}`$, and the best fit line center optical depth is $`\tau _0=6\times 10^6`$, which corresponds to $`N_{HI}=10^{20}\,\mathrm{cm}^{-2}`$. The best fit Ly$`\alpha `$ profile has a width of $`\sigma =5\,\mathrm{\AA }`$ and a line center flux $`f(\lambda =5280.8\,\mathrm{\AA })=0.675\,\mu `$Jy, which gives an unobscured flux of $`9.1\times 10^{-18}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$ and a systemic redshift of $`z=3.425`$.
This redshift is slightly larger than that proposed by Kulkarni et al. (1998), who may have overestimated the absorption in the blue part of the Ly$`\alpha `$ line. However, the absorption trough is sufficiently remote from the line center in velocity space that it erodes only the extreme blue part of the Ly$`\alpha `$ emission. Hence, we prefer the redshift $`z=3.425`$ for GRB971214 to the redshift $`z=3.418`$, and consequently other physical parameters need to be revised.
Assuming a standard Friedmann cosmology with $`H_0=65\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`\mathrm{\Omega }_0=0.3`$, the luminosity distance is $`d_L=9.7\times 10^{28}\,\mathrm{cm}`$. Correcting for the Galactic extinction, the unobscured Ly$`\alpha `$ flux becomes $`F_{Ly\alpha }=(1.5\pm 0.7)\times 10^{-17}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$, where the observational error given by Kulkarni et al. (1998) is adopted. Therefore, for the assumed cosmology, the Ly$`\alpha `$ line luminosity is $`L_{Ly\alpha }=(1.8\pm 0.8)\times 10^{42}\,\mathrm{erg}\,\mathrm{s}^{-1}`$. If there is no internal extinction in the interior of the Ly$`\alpha `$ source, this corresponds to $`n_e=(1.4\pm 0.4)(\frac{L}{L_{Ly\alpha }})^{0.5}(\frac{R}{1\,\mathrm{kpc}})^{-1.5}\,\mathrm{cm}^{-3}`$ or $`n_e=(40\pm 10)(\frac{L}{L_{Ly\alpha }})^{0.5}(\frac{R}{100\,\mathrm{pc}})^{-1.5}\,\mathrm{cm}^{-3}`$, an ionization level that can be maintained by $`10^4`$ O5 stars as the ionizing source.
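The distance and luminosity quoted here follow from a one-line integral; a self-contained numerical sketch (our own check, assuming a flat universe with $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, as used earlier) is:

```python
import numpy as np

c = 2.998e10                            # cm/s
H0 = 65.0 * 1.0e5 / 3.086e24            # 65 km/s/Mpc in s^-1
Om, Ol, z = 0.3, 0.7, 3.425

zz = np.linspace(0.0, z, 100001)
Ez = np.sqrt(Om * (1.0 + zz) ** 3 + Ol)
d_L = (1.0 + z) * (c / H0) * np.trapz(1.0 / Ez, zz)
print(f"d_L = {d_L:.1e} cm")            # ~9.9e28 cm, cf. 9.7e28 in the text

F_lya = 1.5e-17                         # erg cm^-2 s^-1, extinction-corrected
L_lya = 4.0 * np.pi * d_L ** 2 * F_lya
print(f"L_Lya = {L_lya:.1e} erg/s")     # ~1.8e42 erg/s
```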
From the Ly$`\alpha `$ luminosity, we estimate the star formation rate (Thompson, Djorgovski, & Trauger 1995) to be $`R_{SF}=(7\pm 3)\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$, with both the internal and the Galactic extinction corrected. This is consistent with the lower limit on the star formation rate given by Kulkarni et al. (1998), $`R_{SF}=5.2\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$, which was obtained from the rest-frame continuum luminosity at $`1500\,\mathrm{\AA }`$.
Using the revised redshift of the GRB host galaxy, we re-identify the other absorption lines in the observed spectrum. In Fig. 2, we show the spectrum of the GRB host in the UV regime. The revised wavelengths are in good agreement with the absorption features.
### 3.2. Physical Configuration of the Supershell
We consider the dynamical evolutionary model of the supernova remnant to derive the physical quantities of the shell. According to Woltjer (1972), the supernova remnant has four evolutionary phases, that is, the free expansion phase, the Sedov-Taylor or adiabatic phase, the snowplow or radiative phase, and finally the merging or dissipation phase (see also Reynolds 1988).
According to Woltjer (1972), the radiative phase begins roughly when the expansion velocity of the shell becomes
$$v=300(\frac{n_1}{1\,\mathrm{cm}^{-3}})^{2/17}(\frac{E}{10^{53}\,\mathrm{ergs}})^{1/17}\,\mathrm{km}\,\mathrm{s}^{-1},$$
(3)
where $`n_1`$ is the number density of the ambient medium and $`E`$ is the initial explosion energy. Since the expansion velocity of the supershell, $`v=v_{exp}=1500\,\mathrm{km}\,\mathrm{s}^{-1}`$, exceeds this radiative-phase velocity by a large margin, we propose that the supershell is in the adiabatic phase, which is described by the Sedov solution.
According to the Sedov solution in a uniform medium of number density $`n_1`$ in which we have the relation $`n_1=3N/R`$,
$$R=0.92(\frac{E}{m_HN})^{1/4}t^{1/2}\mathrm{cm},$$
(4)
$$v=0.37(\frac{E}{m_HN})^{1/4}t^{-1/2}\,\mathrm{cm}\,\mathrm{s}^{-1},$$
(5)
$$n_1=3.2(\frac{m_HN^5}{E})^{1/4}t^{-1/2}\,\mathrm{cm}^{-3},$$
(6)
where $`v`$ is the expansion velocity of the supershell, $`R`$ the size of the supershell, $`E`$ the initial explosion energy, $`N`$ the column density of the supershell, $`m_H`$ the hydrogen mass, and $`t`$ the age of the shell.
Using the values $`v=v_{exp}=1500\,\mathrm{km}\,\mathrm{s}^{-1}`$ and $`N=10^{20}\,\mathrm{cm}^{-2}`$, we get
$$t=4.7\times 10^3(\frac{E_{53}}{N_{20}})^{1/2}\mathrm{yrs},$$
(7)
$$R=18(\frac{E_{53}}{N_{20}})^{1/2}\mathrm{pc},$$
(8)
$$n_1=5.4(\frac{N_{20}^3}{E_{53}})^{1/2}\,\mathrm{cm}^{-3},$$
(9)
where $`N_{20}=N/10^{20}\,\mathrm{cm}^{-2}`$ and $`E_{53}=E/10^{53}\,\mathrm{ergs}`$.
Fig. 3 shows the size and age of the supershell, as well as the volume number density of the ambient medium, with $`E_{53}`$ as a free parameter. For a range of input energy, $`0.01\le E_{53}\le 100`$, we get the possible ranges of the other parameters: $`0.53\le n_1\le 53`$, $`2\,\mathrm{pc}\le R\le 180\,\mathrm{pc}`$, and $`5\times 10^2\,\mathrm{yrs}\le t\le 5\times 10^4\,\mathrm{yrs}`$.
From these values we estimate the total kinetic energy of the expanding supershell, $`E_k=2\pi R^2N_{HI}m_Hv_{exp}^2=7.3\times 10^{52}E_{53}\,\mathrm{ergs}`$. This kinetic energy is comparable to those of galactic supershells, including those in our Galaxy, NGC 4631, and M101 (Heiles 1979, Rand & van der Hulst 1993, Wang 1999). Furthermore, the P-Cygni Ly$`\alpha `$ lines of primeval galaxies show a similar energy scale. Thus, we propose that the supershell is the remnant of a hypernova or a GRB that exploded earlier than GRB971214.
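As a consistency check of the numbers in equations (7)-(9) and the kinetic energy above, the Sedov relations can be evaluated directly (a sketch with our own unit constants):

```python
import numpy as np

m_H, pc, yr = 1.67e-24, 3.086e18, 3.156e7      # cgs constants
v, N, E = 1.5e8, 1e20, 1e53                    # v_exp [cm/s], N_HI, E (E_53 = 1)

t = (0.37 * (E / (m_H * N)) ** 0.25 / v) ** 2  # invert eq. (5) for the age
R = 0.92 * (E / (m_H * N)) ** 0.25 * np.sqrt(t)
n1 = 3.0 * N / R
E_k = 2.0 * np.pi * R ** 2 * N * m_H * v ** 2

print(f"t = {t/yr:.1e} yr, R = {R/pc:.0f} pc, "
      f"n_1 = {n1:.1f} cm^-3, E_k = {E_k:.1e} erg")
# -> t ~ 4.7e3 yr, R ~ 18 pc, n_1 ~ 5.4 cm^-3, E_k ~ 7.3e52 erg
```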
## 4. Summary and Implications
We have studied the formation of the P-Cygni type Ly$`\alpha `$ line in the spectrum of GRB971214, and found that there are at least three components in the system: the parsec-scale remnant of GRB971214 itself, a giant H II region, and a supershell surrounding it.
The giant H II region from which the Ly$`\alpha `$ emission originates is photoionized by a super stellar cluster whose total ionizing photon output corresponds to that of about $`10^4`$ O5 stars. The existence of the P-Cygni type absorption plausibly implies that a supershell surrounds the H II region. By a profile fitting procedure, we found that the shell is expanding with a velocity of $`v_{exp}=1500\,\mathrm{km}\,\mathrm{s}^{-1}`$ and has a neutral column density $`N_{HI}=10^{20}\,\mathrm{cm}^{-2}`$. We also revised the redshift of the Ly$`\alpha `$ emission source to $`z=3.425`$ and its unobscured Ly$`\alpha `$ luminosity to $`L_{Ly\alpha }=(1.8\pm 0.8)\times 10^{42}\,\mathrm{erg}\,\mathrm{s}^{-1}`$, which gives a more reasonable star formation rate of $`R_{SF}=(7\pm 3)\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$.
We also applied the theory of the hydrodynamical evolution of supernova remnants to the supershell surrounding GRB971214. Assuming a reasonable scale for the initial explosion energy, we propose that the supershell is in the adiabatic phase, with radius $`R=18E_{53}^{1/2}\,\mathrm{pc}`$, age $`t=4.7\times 10^3E_{53}^{1/2}\,\mathrm{yrs}`$, and ambient number density $`n_1=5.4E_{53}^{-1/2}\,\mathrm{cm}^{-3}`$, where $`E_{53}=E/10^{53}\,\mathrm{ergs}`$. We estimate the kinetic energy of the supershell to be $`E_k=7.3\times 10^{52}E_{53}\,\mathrm{ergs}`$.
Many astronomical objects show similar characteristics. Using X-ray data, Wang (1999) has discovered candidate GRB remnants in M101, two of which exhibit physical characteristics similar to those of the GRB remnant in GRB971214. With the advent of 10m class telescopes, a large number of primeval galaxies have been observed using several methods, including the Lyman break technique (Steidel 1996). About $`50`$ percent of the Ly$`\alpha `$ emission lines in their spectra show P-Cygni type profiles. It has been suggested that these profiles are formed in expanding media surrounding the star forming regions, and a detailed case study was performed for DLA 2233+131 (Lee & Ahn 1998).
One of the most debated recent suggestions is the hypernova conjecture, according to which gamma-ray bursts occur in star forming regions. Our results strongly favor this model, and more concrete evidence is expected as the sample of GRB spectra showing Ly$`\alpha `$ emission becomes statistically significant.
The author thanks George Djorgovski and Shri Kulkarni for kindly providing the optical spectrum of GRB971214. He also thanks Bon-Chul Koo, Kee-Tae Kim, Hee-Won Lee, Hwang-Kyung Sung, In-Su Yi, and Hyung-Mok Lee for their invaluable discussions. The author also thanks the anonymous referee for his/her fruitful comments and suggestions. |
no-problem/9912/astro-ph9912188.html | ar5iv | text | # Redshifts and Neutral Hydrogen Observations of Compact Symmetric Objects in the COINS Sample
## 1 Introduction
Unified schemes for active galactic nuclei (AGN) seek to explain several classes of objects with a smaller number of source types viewed at different orientations (see the review by Antonucci (1993)). A critical ingredient of all unified schemes is an obscuring disk or torus which hides the nucleus when the source is viewed edge-on. An accretion disk is also required to feed the AGN and probably plays a role in collimating the bipolar jets. Although the composition of the obscuring material in the torus may be varied, it seems likely that at some radii and scale heights, there will be significant amounts of neutral atomic hydrogen gas (Neufeld & Maloney, 1995). This should be detectable in Hi absorption towards the core and inner jets of radio-loud AGN. Sources with symmetric parsec-scale jets are ideal for testing the model. Broad ($`>`$100 km s<sup>-1</sup>) lines are expected this close to the bottom of the potential well of the galaxy. Such broad absorption lines have thus far been detected towards a few moderate power radio galaxies such as Hydra A (Taylor, 1996), 3C 84 (Crane, van der Hulst & Haschick, 1982) and PKS 2322-123 (Taylor et al., 1999), and in the compact symmetric objects (CSOs) 0108+388 (Carilli et al., 1998), 1146+586 (van Gorkom et al., 1989; Peck & Taylor, 1998), PKS 1413+135 (Carilli, Perlman & Stocke, 1992) and 1946+708 (Peck, Taylor & Conway, 1999).
In the inner regions of the disk a large fraction of the gas must be ionized by the central engine. This will result in free-free absorption of the radio continuum at frequencies below $`\sim `$5 GHz as seen in 3C 84 (Walker et al., 1998), Hydra A (Taylor, 1996), 1946+708 (Peck, Taylor & Conway, 1999), and PKS 2322-123 (Taylor et al., 1999). Another result of the dense ionized gas, if it is magnetized, could be extremely high Faraday rotation measures (RMs). Owing to the lack of any polarized flux in these systems, it has not been possible to directly measure the RMs in any source with Hi absorption, but the presence of very high RMs could explain why no polarized flux is detected from these sources (Taylor et al., 1999; Peck & Taylor, 1999).
We are engaged in an ongoing project to use VLBI observations of Hi absorption to study the densities, kinematics, and scale heights of the neutral gas in the obscuring torus in a moderate size sample of compact objects. This sample is known as the CSOs Observed in the Northern Sky (COINS) sample, although to date only 27 have been securely classified as CSOs. Since CSOs are rare ($`\sim `$2% of compact objects; Peck & Taylor 1999), it is necessary to start with large VLBI surveys and go to moderately low flux density levels ($`\sim `$100 mJy at 5 GHz) in order to obtain the 52 CSO candidates that make up the COINS sample. Follow-up multifrequency and polarimetric VLBI observations must then be made in order to identify the center of activity in each source and thus distinguish the true CSOs from core jet sources with "compact double" morphologies. Once the CSOs are identified, their redshifts must be determined before they can be searched for Hi in absorption. Low spatial resolution observations can then be carried out at the frequency of the redshifted neutral hydrogen line, and detections can be followed up with extremely high spatial and spectral resolution VLBI studies. Multifrequency continuum observations can also be used to image the free-free absorption in suitable candidates, providing an additional means of determining the geometry of the circumnuclear material.
We present the current status of the COINS survey project in $`\mathrm{\S }`$2 of this paper. $`\mathrm{\S }`$3 outlines our recent observations which provide three new redshifts and one new Hi detection. These new results, and their importance to future work, are discussed in $`\mathrm{\S }\mathrm{\S }`$ 4 and 5.
## 2 The COINS Sample
The sources in the COINS sample have been identified based on images in the Pearson-Readhead (PR; Pearson & Readhead 1988), Caltech-Jodrell Bank (CJ; Polatidis et al. 1995; Taylor et al. 1994) and VLBA Calibrator (VCS; Peck & Beasley 1997) Surveys. The majority of the CSO candidates were chosen from the VCS based on criteria outlined in Peck & Taylor (1999), while the sources chosen from the PR and CJ surveys conformed to very similar criteria described in Readhead et al. (1996) and in Taylor, Readhead & Pearson (1996).
The sources in the COINS sample are described in Table 1. Column (1) lists the J2000 convention source name of the CSO candidate. Column (2) provides an alternate name, with those prefaced by PR or CJ indicating selection from that survey. Columns (3) and (4) show the optical identification and magnitude of the source. Column (5) lists the redshift of the source, references for which are provided in the table caption. The last column in Table 1 lists the status of the Hi absorption detections toward the source, providing the optical depth or upper limit of any Hi absorption observation published to date.
## 3 Observations and Data Reduction
The optical data were taken in two observing runs on the 200 inch telescope at Palomar Observatory, 1998 March 30 and 1999 April 10. Both runs used the Double Spectrograph with a 5200 Å dichroic beamsplitter and a 2″ slit. All target sources were observed for 1500 seconds each. The total effective wavelength coverage was approximately 3500–9200 Å with a resolution of 4.9 Å. The spectra were extracted using standard IRAF techniques. Wavelength calibration was performed using exposures of arc lamps taken at intervals throughout the observations. Observations of the standard star Feige 34 were used to remove the response function of the chip. The conditions were not photometric on either run, so the flux calibration in the spectra shown should not be regarded as absolute.
On 1998 December 6, the source J1110+4817 was observed for 6 hours on the Westerbork Synthesis Radio Telescope (WSRT) using the UHF-high receivers in dual linear polarization. With the new DZB correlator we obtained 256 spectral channels over a 10 MHz bandwidth (formal resolution 47 kHz), centered at the frequency of the Hi line predicted by the optical redshift. There were 10 operational telescopes. Calibration of the bandpass shape and the flux density scale was based on a brief scan of 3C286 (1328+307). The initial phase calibration was refined by a few self-calibration and modelfitting loops, in which the 20 brightest continuum sources in the field were found. The spectrum of J1110+4817 shown in Fig. 2a was then produced by vector averaging (over time and baselines) all of the cross-correlation spectra; J1110+4817 is unresolved with the WSRT and was at the phase center of the data. The spectrum of J1816+3457 shown in Fig. 2b was obtained in an analogous procedure from a 12-hour observation on 1999 June 17.
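For reference, the velocity scale implied by this spectral setup is easy to work out (our own arithmetic, using the standard Hi rest frequency):

```python
c = 2.998e5                       # km/s
nu_HI = 1420.405752               # MHz, Hi rest frequency
nu_obs = nu_HI / (1.0 + 0.2448)   # ~1141 MHz for J1816+3457

dnu = 10.0 / 256                  # MHz per channel (~39 kHz)
print(f"channel spacing: {dnu*1e3:.1f} kHz = {c*dnu/nu_obs:.1f} km/s")
print(f"formal resolution (47 kHz): {c*0.047/nu_obs:.1f} km/s")
```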
## 4 Results
### 4.1 Redshifts
The spectra of the three CSO candidates for which redshifts were obtained are shown in Figure 1. Gaussian fits to the spectral lines used to determine the redshifts for the three new CSO candidates are summarized in Table 2.
J1111+1955 is a galaxy with a redshift of 0.299$`\pm `$0.001. Some indication of stellar absorption features is seen (Ca H & K, Balmer break), but these fall at the location of the dichroic and so the identifications are tentative.
J1311+1417 is a quasar at a redshift of 1.995$`\pm `$0.003.
J1816+3457 is a radio galaxy at a redshift of 0.2448$`\pm `$0.0003. Here again, stellar features are visible, but have not been fitted.
### 4.2 Neutral Hydrogen
Figure 2 shows the spectra of the redshifted 21cm hydrogen toward two of the CSO candidates.
J1110+4817 has been identified as a quasar by Hook et al. (1996). The upper limit on Hi absorption at the redshift of the source is $`\tau <0.009`$.
J1816+3457 exhibits Hi absorption with an optical depth of $`\tau \approx 0.035`$. This line is centered at 1.1418 GHz (c$`z`$=73151 km s<sup>-1</sup>). Applying the relativistic correction required at this redshift, the radial velocity of the Hi absorption line is $`v`$=64434$`\pm `$10 km s<sup>-1</sup>, which is 184 km s<sup>-1</sup> blueward of the optical emission line radial velocity of $`v`$=64618$`\pm `$70 km s<sup>-1</sup>. This difference of a couple of hundred km s<sup>-1</sup> should not be overinterpreted, given that the optical redshift is determined from emission lines, which can be influenced by outflow or infall. A VLBI study of the Hi absorption in this source is currently underway.
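The conversion from the observed line frequency to these velocities is summarized in the following sketch (our own check of the quoted numbers):

```python
c = 2.99792458e5                 # km/s
nu0 = 1420.405752                # MHz, Hi rest frequency
nu = 1141.8                      # MHz, observed line center (1.1418 GHz)

z = nu0 / nu - 1.0               # ~0.2440
v_rel = c * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)  # relativistic velocity
print(f"z = {z:.4f}, cz = {c*z:.0f} km/s, v = {v_rel:.0f} km/s")
# -> cz ~ 73153 km/s and v ~ 64436 km/s, cf. 73151 and 64434 in the text
```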
## 5 Summary
The compact size, orientation, and high rate of Hi detection in compact symmetric objects make these sources highly valuable in the study of accretion, evolution and the unified scheme of AGN. Unfortunately, the scarcity of these sources thus far has resulted in too few extensive studies on which to base any general conclusions. The COINS survey is an attempt to ameliorate this situation by identifying a larger sample of CSOs which can be comprehensively studied using VLBI techniques. The first phase of this project, the identification at radio wavelengths of CSO candidates from large, high resolution surveys, has been completed. Multi-frequency VLBI follow-up observations to eliminate core-jet sources masquerading as CSOs in the finding survey have been carried out by Peck & Taylor (1999).
A requisite next step in the process is to obtain the redshift of the COINS sources by optical spectroscopy. Including the three redshifts presented herein, our redshift completeness for the sample is 32 of 52 (61%). Of these 32, low spatial resolution (kpc-scale) observations to look for the presence of Hi absorption are available for 6 sources (4 referred to in the introduction, 2 new ones presented here). J1110+4817 is the only one of the six in which there is no absorption line exceeding 1% peak depth. Clearly, Hi studies are very profitable for CSOs. The detection rate in this class of sources is far higher than that found in the nearby radio galaxies of "normal" size surveyed for Hi absorption by van Gorkom et al. (1989), which yielded 4 detections for 29 galaxies.
Continuing work on this project involves using optical spectroscopy to obtain redshifts for the remaining CSOs, many of which are extremely faint at optical wavelengths. Once this has been accomplished, Hi absorption studies can be undertaken, providing a necessary complement to free-free absorption and jet expansion studies.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under a cooperative agreement by Associated Universities, Inc. AP is grateful for support from NRAO through the pre-doctoral fellowship program. AP also acknowledges the New Mexico Space Grant Consortium for partial publication costs. IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. |
no-problem/9912/patt-sol9912001.html | ar5iv | text | # Vector solitons in (2+1) dimensions
## Abstract
We address the problem of existence and stability of vector spatial solitons formed by two incoherently interacting optical beams in bulk Kerr and saturable media. We identify families of (2+1)-dimensional two-mode self-trapped beams, with and without a topological charge, and describe their properties analytically and numerically.
Recent experimental observations of multidimensional spatial optical solitons in different types of nonlinear materials call for a systematic analysis of the self-trapping of light in higher dimensions. When two (or more) fields interact nonlinearly, they can form multi-component trapped states, known as vector solitons. Vector solitons, first studied theoretically in (1+1)-D models , were observed in birefringent fibers and planar waveguides . A fabricated waveguiding structure localizes such solitons in one of the two directions transverse to the direction of propagation, hence these solitons are effectively one-dimensional. It is only recently that the theory of, and experiments on, incoherent interaction and truly two-dimensional self-trapping of beams in a bulk (saturable) medium have merged , indicating progress towards the observation and study of different types of (2+1)-D vector solitons and their interactions.
The practical possibility of such an observation greatly depends upon the soliton stability in media with realistic, Kerr or saturable, nonlinearity. It is known that scalar (one-component), fundamental (2+1)-D solitons are stable in saturable media , but they exhibit critical collapse in Kerr-type media . However, as in the case of (1+1)-D vector solitons , both the existence and the stability of multi-dimensional vector solitons are nontrivial issues, which have not been systematically addressed so far.
In this Letter, we study (2+1)-D vector solitons in Kerr and saturable media. We analyze two classes of such solitons. First, we consider solitons formed by the coupling of two fundamental modes; such solitons are always bell-shaped. Secondly, we analyze the coupling between the fundamental mode of one field and the first-order mode (i.e. that carrying a topological charge) of the other field. In the latter case the vector solitons may possess a ring structure and are expected to be analogous to the two-hump (1+1)-D vector solitons recently proved to be stable in a saturable medium .
We consider two incoherently interacting beams propagating along the direction $`z`$ in a bulk, weakly nonlinear optical medium. For a Kerr medium, the problem is described by the normalized, coupled equations for the slowly varying beam envelopes, $`E_1`$ and $`E_2`$,
$$i\frac{\partial E_{1,2}}{\partial z}+\mathrm{\Delta }_{\perp }E_{1,2}+(|E_{1,2}|^2+\sigma |E_{2,1}|^2)E_{1,2}=0,$$
(1)
where $`\mathrm{\Delta }_{\perp }`$ is the transverse Laplacian, and $`\sigma `$ measures the relative strength of cross- and self-phase modulation effects. Depending on the polarization of the beams, the nature of the nonlinearity, and the anisotropy of the material, $`\sigma `$ varies over a wide range. For a Kerr-type material with nonresonant electronic nonlinearity, $`\sigma \approx 2/3`$, whereas for a nonlinearity due to molecular orientation, $`\sigma \approx 7`$ .
We look for solutions of Eqs. (1) in the form
$$E_1=\sqrt{\beta _1}ue^{i\beta _1z}e^{im_1\phi },E_2=\sqrt{\beta _1}ve^{i\beta _2z}e^{im_2\phi },$$
(2)
where $`\beta _1`$ and $`\beta _2`$ are two independent propagation constants, and $`m_{1,2}=0,\pm 1`$ are topological charges. Measuring the radial coordinate in the units of $`\sqrt{\beta _1}`$, and introducing the ratio of the propagation constants, $`\lambda =\beta _2/\beta _1`$, from Eqs. (2) we derive a system of stationary equations for the radially symmetric, normalized envelopes $`u`$ and $`v`$:
$$\begin{array}{c}\mathrm{\Delta }_\mathrm{r}u-u+(u^2+\sigma v^2)u=0,\hfill \\ \mathrm{\Delta }_\mathrm{r}v-\frac{m_2^2}{r^2}v-\lambda v+(v^2+\sigma u^2)v=0,\hfill \end{array}$$
(3)
where $`\mathrm{\Delta }_\mathrm{r}=(1/r)(d/dr)(rd/dr)`$, and we assume $`m_1=0`$. Following the notations introduced in , we describe all vector solitons (2) by their "state vectors" $`|m_1,m_2\rangle `$.
First, we consider solutions $`|0,0\rangle `$. The families of these radially symmetric, two-component vector solitons are characterized by a single parameter $`\lambda `$, and at any fixed value of $`\sigma `$, their existence domain is confined between two cut-off values, $`\lambda _1`$ and $`\lambda _2`$. When $`\lambda <\lambda _1`$ or $`\lambda >\lambda _2`$, self-trapping of coupled fields does not occur, and there exist only scalar solitons for either the $`u`$ or $`v`$ components. However, for $`\lambda _1<\lambda <\lambda _2`$, a two-mode self-trapped state emerges. Near the cutoff points ($`\lambda \to \lambda _1`$ or $`\lambda \to \lambda _2`$) this state can be presented as a waveguide created by one field component with a small-amplitude guided mode of the other field component. Examples of $`|0,0\rangle `$ solitons are presented in Figs. 1(a, b) for $`\sigma =2`$. On the parameter plane ($`\sigma ,\lambda `$), the existence domain is skirted by two curves, $`\sigma _1(\lambda )`$ and $`\sigma _2(\lambda )`$, which are defined by the corresponding cut-off values, $`\lambda _1`$ and $`\lambda _2`$, of the soliton-induced waveguides (see Fig. 1).
For $`\sigma =1`$, the $`|0,0\rangle `$ vector solitons exist only at $`\lambda =1`$, and their properties resemble those of the (1+1)-D Manakov vector solitons. They can be constructed by the transformation $`u=U\mathrm{cos}\theta `$ and $`v=U\mathrm{sin}\theta `$, where $`\theta `$ is arbitrary and $`U`$ satisfies the scalar equation $`d^2U/dr^2+(1/r)(dU/dr)-U+U^3=0`$.
To describe the existence domain of the multi-dimensional vector solitons analytically, we employ the variational technique . We look for stationary two-component solutions of Eqs. (3) in the form $`u(r)=A\mathrm{exp}(-r^2/a^2)`$, $`v(r)=B\mathrm{exp}(-r^2/b^2),`$ where the parameters $`A`$, $`B`$, $`a`$, and $`b`$ are defined by variation of the effective Lagrangian of the model (1). Details of this analysis will be published elsewhere. Here, we mention that the coupled algebraic equations of the variational analysis allow us to find the borders of the existence domain for the $`|0,0\rangle `$ vector solitons: $`\sigma _1(\lambda )=(1+\sqrt{\lambda })^2/4`$ and $`\sigma _2(\lambda )=(1+\sqrt{\lambda })^2/(4\lambda ),`$ shown in Fig. 1 by dashed curves. One can see that the variational approach provides an excellent alternative to the numerics in identifying the existence domains of the vector solitons.
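The variational borders are elementary to evaluate; a short sketch (our own, from the Gaussian-ansatz formulas above) is:

```python
import numpy as np

lam = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
sigma1 = (1.0 + np.sqrt(lam)) ** 2 / 4.0
sigma2 = (1.0 + np.sqrt(lam)) ** 2 / (4.0 * lam)
for L, s1, s2 in zip(lam, sigma1, sigma2):
    print(f"lambda = {L:4.2f}: sigma_1 = {s1:.3f}, sigma_2 = {s2:.3f}")
# Both borders meet at sigma = 1 for lambda = 1 (the Manakov point).
```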
An important physical characteristic of vector solitons of this type is the total power, defined as $`P=P_u+P_v=2\pi \int _0^{\infty }(u^2+v^2)r\,dr`$, where the partial powers $`P_u`$ and $`P_v`$ are the integrals of motion for the model (1). Figures 2 (a,b) show the total power of the (2+1)-D vector solitons vs $`\lambda `$ for $`\sigma =2/3`$ and $`\sigma =2`$, respectively.
As follows from these results, the cases $`\sigma >1`$ and $`\sigma <1`$ are qualitatively different. In the former case, the total power of the vector soliton is lower than the power of a scalar soliton, $`P_0\approx 11.7`$ at $`\sigma =\lambda =1`$ \[see Fig. 2(b)\]. This is an important and unexpected physical result indicating, in contrast to the commonly held belief, that the excitation of vector solitary waves would require lower input power in comparison to scalar solitons. For $`\sigma <1`$, the situation is opposite, and the vector solitons exist at higher power than scalar solitons \[see Fig. 2(a)\]. A lower total power of the vector soliton, as compared to the corresponding power of a scalar soliton, allows us to explain an effective suppression of the blow-up instability numerically observed at $`\sigma =2`$, for the special case of $`\lambda =1`$, when an analytical form of the vector soliton can be found by a Hartree-type ansatz .
Similar to the well studied case of the (1+1)-D vector solitons (see, e.g., Ref. ), the soliton-induced waveguide can guide higher-order modes. In higher dimensions, such higher-order modes carry a topological charge, i.e. $`m_2\ne 0`$ in Eq. (2). In the simplest case, we consider $`m_2=\pm 1`$ and analyze the system of stationary equations for the radially symmetric wave envelopes, Eqs. (3). Examples of two-mode $`|0,1\rangle `$ solitary waves are shown in Figs. 3 (a,b).
Near the cut-off of the first-order mode, the second component ($`v`$) appears as a guided mode (dash-dotted) of the effective waveguide created by the scalar soliton in the $`u`$-component (dashed). The total intensity, shown in Figs. 3 (a) by a solid curve, has a maximum at the beam center. Akin to the humps in the total intensity profile of (1+1)-D solitons , the ring shape of (2+1)-D vector solitons develops far from the cut-off for the first-order mode, when the guided mode deforms the soliton waveguide and creates a coupled state that has the maximum shifted from the beam center \[see Fig. 3(b)\].
Next, we conduct the numerical stability analysis, and an accurate check of the Vakhitov-Kolokolov stability criterion for vector solitons . Our analysis reveals that neither the fundamental nor the first-order mode of the soliton-induced waveguide can arrest the collapse of the scalar bright (2+1)-D solitons in a Kerr medium, and the corresponding vector solitons $`|0,0\rangle `$ and $`|0,1\rangle `$ are linearly unstable.
Given the established stability of (1+1)-D vector solitons in a saturable medium, a tempting task is to look for the $`|0,1\rangle `$ ring-like solitons in such a medium. For the (completely solvable) model of the so-called threshold-type nonlinearity, $`|0,1\rangle `$ vector solitons have been recently analyzed in . Here we consider the model corresponding, in the isotropic approximation, to the physically realised solitons in photorefractive materials. The normalized dynamical equations for the envelopes of two incoherently interacting beams can, in this case, be approximately written in the form: $`i\frac{\partial E_{1,2}}{\partial z}+\mathrm{\Delta }_{\perp }E_{1,2}+E_{1,2}\left(1+|E_{1,2}|^2+|E_{2,1}|^2\right)^{-1}=0.`$ Seeking stationary solutions in the general form (2), and introducing the relative propagation constant $`\lambda =(1-\beta _2)/(1-\beta _1)`$, we arrive, after corresponding renormalizations , at the following system of equations \[cf. Eqs. (3)\]:
$`\mathrm{\Delta }_\mathrm{r}u-u+uf(I)=0,`$ (4)
$`\mathrm{\Delta }_\mathrm{r}v-{\displaystyle \frac{m_2^2}{r^2}}v-\lambda v+vf(I)=0,`$ (5)
where $`f(I)=I(1+sI)^{-1}`$, $`I=u^2+v^2`$, and $`s=1-\beta _1`$ plays the role of a saturation parameter. For $`s=0`$, this system describes the Kerr nonlinearity (with $`\sigma =1`$). Since the contributions from the self- and cross-phase modulation are, in this case, equal, the lowest-order bell-shaped $`|0,0\rangle `$ solutions only exist at $`\lambda =1`$. In the remaining region of the parameter plane $`(s,\lambda )`$, the solutions $`|0,1\rangle `$, similar to those described above for the Kerr nonlinearity, are found \[see Fig. 4(a)\]. Again, the ring-shaped structure of these solutions develops only far from the cut-off for the vortex-type guided mode. Close to the cutoff, all vector solitons are bell-shaped.
Although our numerical simulations confirmed that the saturation does have a strong stabilizing effect on the $`|0,1\rangle `$ vector solitons \[cf. cases $`s=0`$ and $`s=0.6`$ in Fig. 4(b)\], vector solitons of this type appear to be linearly unstable. The instability, although largely suppressed by saturation, triggers the decay of the solitons into a dipole structure (as shown in Fig. 5) for even a small contribution of the charged mode. However, the quasi-stable dynamics exhibited by these vector solitons over long propagation distances may have serious implications for attempts to observe them experimentally. Indeed, since, for current experiments on spatial solitons in photorefractive media , the propagation length $`z=100`$ corresponds to a crystal length of up to $`40`$ mm, this slowly developing dynamical instability may be hard to definitively detect in experiments.
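A minimal split-step Fourier sketch of the kind of beam-propagation simulation described here (our own illustrative code, not the authors'; the grid, step size, and input-beam parameters are arbitrary choices) for the saturable dynamical equations written earlier in this section is:

```python
import numpy as np

N, Lbox, dz = 256, 40.0, 0.01
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
KX, KY = np.meshgrid(k, k)
lin_step = np.exp(-1j * (KX ** 2 + KY ** 2) * dz)  # solves i E_z + Lap E = 0

R2 = X ** 2 + Y ** 2
E1 = 2.0 * np.exp(-R2 / 8.0)                       # fundamental component
E2 = 0.5 * np.sqrt(R2) * np.exp(-R2 / 8.0) * np.exp(1j * np.arctan2(Y, X))

for _ in range(1000):                              # propagate to z = 10
    I = np.abs(E1) ** 2 + np.abs(E2) ** 2
    nl = np.exp(1j * dz / (1.0 + I))               # saturable nonlinear phase
    E1, E2 = E1 * nl, E2 * nl
    E1 = np.fft.ifft2(lin_step * np.fft.fft2(E1))
    E2 = np.fft.ifft2(lin_step * np.fft.fft2(E2))
```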
In conclusion, we have analyzed multi-dimensional vector solitons composed of two incoherently coupled beams in both Kerr and saturable media and defined, analytically and numerically, existence domains for new classes of spatial solitary waves. A stabilizing effect of nonlinearity saturation on (2+1)-D vector solitons with topological charge has been demonstrated.
Yu. S. K. thanks M. Segev for useful discussions and a copy of Ref. prior to its publication. |
no-problem/9912/astro-ph9912322.html | ar5iv | text | # Red Hole Gamma-Ray Bursts: A New Gravitational Collapse Paradigm Explains the Peak Energy Distribution And Solves the GRB Energy Crisis
## KEY GRB MODEL BUILDING CHALLENGES
Gamma-ray bursts vary rapidly and therefore they must be compact. Because these compact gamma-ray bursts release enormous energy, they must form an intense fireball that is optically thick, pair-producing, and thermalized. But the spectrum is not thermal, and there is no sign of pair-production attenuation at the high end of the observed spectrum jsgband . This seeming self-contradiction (the opacity problem) can be solved by having the fireball power a relativistic shell or jet that collides with something (perhaps itself) to produce the observed gamma rays jsgrees . This fireball/shock model is currently the leading candidate to explain GRBs jsgpiran . It has already overcome several severe model-building challenges. But like almost all other published models, it fails to explain the observed spectroscopy of GRBs, particularly the narrowness of the observed peak energy distribution jsgpreece ; jsgbrainerd . Furthermore, this model does not explain the high ratio of the energy of the GRB burst itself (caused by internal shocks) to the energy in the afterglow (caused by external shocks in the fireball/shock model) jsgpa . Nevertheless the predictions of this model for the afterglows themselves are consistent with current observations jsgpiran .
Finally, there is the problem of the overall energetics of the GRB. The two leading candidates to produce the initial fireball or fireballs, the so-called central engine, are merging neutron stars and core-collapse supernovae jsgeichler ; jsgwoosley . Both these sources have over 10<sup>54</sup> ergs of total energy available. This is more than enough energy for even the most energetic GRB, but it is not at all clear how to prevent most of it from falling into the newly created black hole which forms in the standard general relativity versions of these models.
There seems to be an inherent conflict between solving the opacity problem and solving the peak energy distribution problem. The only successful technique available to solve the transparency problem is to invoke highly relativistic bulk motion. In the relativistic frame, the gamma rays are below pair-production threshold and so do not suffer pair-production attenuation. This definitively solves the opacity problem. But unless the Lorentz gamma factor of the bulk motion can be fine-tuned to a very narrow range for all GRBs, the resulting blueshift will not only relocate the peak of the photon energy distribution; it will also substantially widen it, inconsistent with the observed narrow E-peak distribution. Thus one needs to find a way to fine-tune the Lorentz gamma factor or find some other way around this conflict. In the fireball/shock model the gamma factor depends sensitively on the baryon loading, and hence will vary widely. Furthermore, the internal shocks model is dependent on shocks with varying Lorentz gamma factors colliding with each other. So fine-tuning is not a reasonable option for this model.
A generic solution to this problem is provided if the relativistic bulk motion results not from an initial explosion, but rather from the gravitational acceleration of matter falling into a deep potential well. An arbitrarily high Lorentz gamma factor can be attained, but the accompanying blueshift will be exactly cancelled when the matter and radiation are redshifted as they emerge from the potential well. (By that time, the matter and radiation will have separated, so the opacity problem has already been solved).
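A toy calculation (entirely our own numbers) makes the contrast quantitative: for matter approaching the observer, the observed peak energy scales roughly as E<sub>obs</sub> ~ 2 Gamma E', so any spread in Gamma maps directly into a spread in E<sub>obs</sub> unless an equal gravitational redshift cancels the boost on the way out.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 10.0 ** rng.normal(2.0, 0.3, 100000)  # bulk Lorentz factors ~100
E_rest = 1.0                                  # comoving peak energy (arbitrary)

E_obs = 2.0 * gamma * E_rest                  # blueshifted peak energies
print(f"boosted spread: {np.std(np.log10(E_obs)):.2f} dex")      # ~0.30 dex

E_out = E_obs / (2.0 * gamma)                 # redshift exactly cancels boost
print(f"after cancellation: {np.std(np.log10(E_out)):.2f} dex")  # 0.00 dex
```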
A black hole can provide the necessary deep potential well. But once matter or radiation is deep in the potential well of a black hole, it is almost impossible for it to escape. Therefore, we will consider an alternative gravitational collapse paradigm in which it is possible to escape from deep within the potential well of a gravitationally collapsed object.
## WHY CONSIDER ALTERNATE GRAVITY MODELS?
The problems with constructing a GRB model might be sufficient motivation to consider alternate theories of gravity. However, a stronger motivation comes from the theory of gravitation. Recent theoretical developments in string theory, quantum gravity and critical collapse strongly suggest the possibilities of both gravitational collapse without singularities (and without loss of information) and also gravitational collapse without event horizons jsgsv ; jsgcm ; jsgms ; jsgst ; jsgchop ; jsgchr . If these possibilities are correct, we are forced to consider the phenomenological consequences (such as different models for GRBs and core-collapse supernovae) of alternate paradigms for gravitational collapse in which black holes do not form jsggrab .
## RED HOLES: A NEW PARADIGM
Many authors have considered the alternative in which a collapsed object with a hard core, similar to a smaller, harder, denser neutron star, forms in place of a black hole jsgrob .
This type of spacetime configuration was previously considered by Harrison, Thorne, Wakano and Wheeler (HTWW) in 1965, but only as a way station in the final collapse to a black hole (not yet then called by that name) jsghtww . In their version, part of the configuration is inside the event horizon, the collapse continues, and a singularity soon forms.
In the new alternate paradigm, which we call a red hole, no event horizon forms and no singularity forms. The gravitational collapse does not continue forever, but eventually stops. (Why? Perhaps due to quantum effects or string-theory dualities, but we cannot discuss this adequately here.) As the collapse proceeds, the collapsing matter becomes denser and denser until it reaches a critical point, after which the distortion of spacetime is so great that the density decreases. This happens because the spacetime is stretching faster than the collapsing material can fall inward. (This decreasing density effect was already noticed by HTWW in their analysis of gravitational collapse in the context of standard general relativity jsghtww . In general relativity, this expansion of spacetime is mostly hidden behind the event horizon and does not prevent the formation of a singularity in a finite time. This is not the case in several observationally viable alternate theories of gravity jsgrosen ; jsgyil ; jsgitin .) This is why we are confident that the center of a red hole resembles a low-density vacuum more than it resembles a high-density neutron star. The decrease in density due to this enormous stretching may also be a factor in halting the gravitational collapse of the red hole before the stretching becomes infinite.
As a result, even though the stretching of spacetime is enormous, it never becomes fast enough to exceed the speed of light and cause an event horizon to form. It stops before it reaches an infinite size or any other form of singularity. (Infinite density and infinite curvature also do not occur.) Nevertheless, it is very hard to escape from a red hole. First, there are trapped orbits inside the red hole for photons as well as massive particles, which allows permanent or nearly permanent trapping of mass and energy. Second, the Shapiro delay in crossing a red hole is very substantial (in some cases, enormous) jsgshap . Hence particles that are only crossing the red hole or passing through are in effect temporarily trapped.
In fact most of the matter falling into a red hole will be trapped. However, radiation, and highly relativistic matter that falls directly into the center of the red hole and does not rescatter while inside the red hole, can travel straight through and emerge on the other side. This possibility is essential for our proposed new GRB models.
## RED-HOLE MODELS FOR GRBs
In order to describe our new red-hole models for GRBs, which are based on modifying the existing fireball/shock model, we begin by summarizing that model. In the fireball/shock model, some form of gravitational collapse deposits a large amount of energy in a very small region (which is called the fireball, and also the central engine). The fireball has so much energy in such a small space that a relativistic expansion must occur. Part or all of this explosive expansion travels through a region with a very small critical number of baryons, which absorb essentially all of the energy and form a relativistic blast wave (either spherical or jetted). Multiple such relativistic shells (travelling in the same direction) are created by the central engine, perhaps by repeated explosions (possibly due to repeated accretion events). The faster relativistic shells overtake the slower relativistic shells and collide with them. The internal shocks convert the energy of the baryons to gamma rays (by synchrotron emission or inverse Compton scattering, or perhaps both). The shells eventually collide with external matter and generate the main afterglow. (Perhaps an early prompt afterglow is the result of a reverse shock) jsgpiran .
Basically there are three important sites in this model. First, there is the central engine, or fireball site. (In the standard black-hole interpretation, this is probably near a newly forming black hole, perhaps at the pole of a Kerr black hole) jsgkerr . Second, there is the location of the internal shocks, where the main gamma-ray burst is generated. According to Piran, this is typically 10<sup>12</sup>-10<sup>14</sup> centimeters, or 30-3000 light seconds downstream from the central engine jsgpiran . Third, there is the location of the external shock, where the relativistic matter collides with material that was not part of the original explosion, and the long-lasting (days to months), but weak (less total energy than the gamma rays) afterglow is generated. In the standard model, this is far from the central engine.
In our alternate red hole models we will relocate these three sites in or near a red hole instead of near the outside of a black hole.
In the first and most conservative red-hole model, we merely replace the black hole of the standard model with a red hole. The red hole can help the central engine by generating more energy than the corresponding black hole or by focusing the outgoing jet more narrowly, but the rest of the model is essentially the same as the standard one and there is no significant impact on the spectral issues. In other words, this first red-hole model can help solve the energy crisis, but does not help explain the broad spectrum, with its unusual slopes and narrowly distributed peak energy.
In the second, and more interesting, red-hole model, the central engine is located at the infalling bottleneck of the red hole, and the internal shocks that generate the primary gamma-ray burst are located at the outgoing bottleneck of the red hole (which is essentially the same place, but at a later time), and the external shocks and the afterglow still occur far away at the point where the ejecta encounter the interstellar material or some other external matter.
In this model, the great internal expansion of the red hole, along with the great acceleration of the gravitational infall, help to generate the relativistic jet that will later create the GRB. Then the focusing effects of the emerging bottleneck of the red hole help to create the internal shocks necessary for the final transformation of the energy into gamma rays, and to very substantially increase the efficiency of this process. Furthermore, since the blueshift of the infall should be exactly cancelled by the redshift of the outclimb, the gamma rays seen by the observer will have no net red or blue shift (on average). Therefore the observed peak energy will be the same as the initial peak energy. Even if the internal transit involves enormous and substantially varying Lorentz gamma factors, they will not be observed as a net blueshift. So this model helps solve the narrow peak energy distribution problem, as well as the energy crisis. It can also help solve the spectral wideness and slope problems because of the tolerance for differing Lorentz gamma factors during the transit through the red hole. |
no-problem/9912/astro-ph9912113.html | ar5iv | text | # Constraints on the Steady-State R-mode Amplitude in Neutron Star Transients
## 1. Introduction
With the launch of *RXTE*, precision timing of accreting neutron stars has opened new threads of inquiry into the behavior and lives of these objects. The neutron stars in low-mass X-ray binaries (LMXBs) have long been thought to be the progenitors of millisecond pulsars (see Bhattacharya 1995 for a review), and a long-standing observational goal has been the detection of a spin period of a neutron star in an LMXB. Recent observations (see van der Klis 1999 for a review) have finally provided conclusive evidence of millisecond spin periods of neutron stars in about one-third of known Galactic LMXBs. Altogether, there are seven neutron stars in LMXBs with spin periods firmly established by either pulsations in the persistent emission (in the millisecond X-ray pulsar SAX J1808.4-3658; Wijnands & van der Klis 1998) or oscillations during type I X-ray bursts (so-called burst QPOs, first discovered in 4U 1728-34; Strohmayer et al. 1996). There are an additional thirteen sources with twin kHz QPOs for which the neutron star's spin may be approximately equal to the frequency difference (van der Klis 1999). A striking feature of all these neutron stars is that their spin frequencies lie within a narrow range, $`260\mathrm{Hz}<\nu _{\mathrm{spin}}<589\mathrm{Hz}`$. The frequency range might be even narrower if the burst QPOs seen in KS 1731-260, MXB 1743-29, and Aql X-1 are at the first harmonic of the spin frequency, as is the case with the $`581\mathrm{Hz}`$ burst oscillations in 4U 1636-536 (Miller 1999). If this is the case, then the range of observed frequencies is $`260\mathrm{Hz}<\nu _{\mathrm{spin}}<401\mathrm{Hz}`$. The neutron stars in LMXBs accrete at diverse rates, from $`10^{-11}\,M_{\odot }\,\mathrm{yr}^{-1}`$ to the Eddington limit, $`10^{-8}\,M_{\odot }\,\mathrm{yr}^{-1}`$. Since disk accretion exerts a substantial torque on the neutron star and these systems are very old (van Paradijs & White 1995), it is remarkable that these neutron stars' spins are so tightly correlated, and that none of the neutron stars are rotating anywhere near the breakup frequency of roughly $`1\mathrm{kHz}`$.
Observations therefore suggest that neutron stars in LMXBs are somehow stuck within a narrow band of spin frequencies well below breakup. Two explanations for this convergence of spin frequencies have been proffered. White & Zhang (1997) argued that the magnetospheric spin equilibrium model (see Ghosh & Lamb 1979 and references therein), which is applicable to the accreting X-ray pulsars, is also at work in LMXBs. In this scenario, the neutron star's magnetic field ($`B\sim 10^9\,\mathrm{G}`$) dominates accretion near the stellar surface, and the Keplerian period at the magnetospheric radius roughly equals the spin period, so that the accretion stream exerts no net torque on the star. Because the sources' luminosities (and presumably accretion rates) vary by several orders of magnitude, White & Zhang (1997) noted that this explanation requires either that the accretion rate be tightly correlated with the neutron star's magnetic field, $`B\propto \dot{M}^{1/2}`$, or that the torque be roughly independent of accretion rate when the magnetospheric radius approaches the radius of the neutron star. Moreover, the persistent pulses typical of magnetic accretors must also be hidden most of the time.
The other class of theories, first considered by Papaloizou & Pringle (1978) and Wagoner (1984), invokes the emission of gravitational radiation to balance the torque supplied by accretion. Bildsten (1998) proposed that equilibrium between the accretion torque and gravitational radiation can explain the narrow range of observed spin frequencies. The source for the gravitational radiation could be a mass quadrupole formed by misaligned electron capture layers in the neutron star's crust (Bildsten 1998). Alternatively, as proposed independently by Bildsten (1998) and Andersson, Kokkotas, & Stergioulas (1999), current quadrupole radiation from an unstable r-mode oscillation (Andersson 1998; Friedman & Morsink 1998) in the liquid core of the neutron star could also limit the spin, as might occur in hot, newly born neutron stars (Lindblom, Owen, & Morsink 1998; Owen et al. 1998; Andersson, Kokkotas, & Schutz 1999). Because the accretion rate of LMXBs does vary by several orders of magnitude, the small range of $`\nu _{\mathrm{spin}}`$ among these objects also requires a correlation between the quadrupole moment and accretion rate. This correlation is much less restrictive, however, than for magnetic equilibrium theories because of the steep dependence of the gravitational wave torque on the spin frequency.
These theories have renewed interest in accreting neutron stars as gravitational wave sources. If gravitational radiation does in fact halt the spin-up of accreting neutron stars, then, regardless of the mechanism producing the gravitational radiation, the brightest LMXBs (such as Sco X-1, with dimensionless strain $`h_c\approx 2\times 10^{-26}`$; Bildsten 1998) are also promising sources for ground-based gravitational wave interferometers, such as LIGO, VIRGO, GEO, and TAMA (Bildsten 1998; Brady & Creighton 1999). It is not certain, however, that accreting neutron stars in LMXBs do emit gravitational radiation. The *only* evidence to date is their narrow range of spin frequencies. It is therefore important to look for astronomical observations, doable today, that can either corroborate or rule out the various mechanisms for gravitational radiation from LMXBs.
In this paper we present a new observational test for r-mode driven gravitational radiation from neutron stars in one set of LMXBs, the soft X-ray transients. These are LMXBs in which accretion outbursts, lasting for days to months, are followed by periods of quiescence, lasting on the order of years to decades. Typical time-averaged (over the recurrence interval, rather than just over the outburst) accretion rates $`\dot{M}`$ for these sources are $`\sim 10^{-10}\,M_{\odot }\,\mathrm{yr}^{-1}`$, smaller than those in the brighter persistently accreting LMXBs. We show that the quiescent X-ray luminosities of these neutron star transients (in particular Aql X-1, which exhibits burst QPOs with a frequency $`549\mathrm{Hz}`$; Zhang et al. 1998) can be used to determine whether r-modes with amplitudes sufficient to balance the accretion torque are present in their cores.
Recent theoretical (Brown, Bildsten, & Rutledge 1998) and observational (Rutledge et al. 1999b) works suggest that at least some fraction of the quiescent luminosity of a neutron star transient is thermal emission from the neutron star's surface. Motivated by the possibility of indirectly measuring the core temperature of an accreting neutron star, we consider the amount of heat that must be lost, on average, by the neutron star to maintain a thermal steady state. If the spins of neutron star transients are set by the equilibrium between the *time-averaged* accretion torque and gravitational wave emission by *steady-state* (i.e., constant amplitude) r-mode pulsations in their cores, the required amplitude of the pulsations can be computed (§ 2). The steady-state assumption implies a certain magnitude of viscous dissipation, i.e., heat deposited directly into the core of the neutron star. If the core is superfluid, Urca neutrino emission is suppressed and this heat escapes as thermal radiation from the surface of the star. We show (§ 3) that in this case the X-ray luminosity in quiescence, $`L_q`$, would be about 5–10 times greater than that observed. If the nucleons in the core are normal, then, as shown by Levin (1999), r-mode pulsations are thermally unstable (at least for saturation amplitudes of order unity). In this case it is unlikely that r-modes are currently excited in any of the known Galactic LMXBs. If for some reason a thermal steady state could be achieved in a normal fluid core, however, then Urca neutrino emission would carry away most of the r-mode heating, and the resulting lower quiescent thermal luminosities would be consistent, within uncertainties, with observations. Our test does not depend on how the r-mode is damped, but only on the assumptions that the dissipated energy is deposited into the thermal bath of the star and that the star has reached a rotational and thermal steady state. We are only inquiring into total energetics, i.e., whether the viscous heating present matches that required by the spin equilibrium with the accretion torque.
## 2. R-mode Viscous Heating of Accreting Neutron Stars
Recently Andersson (1998) and Friedman & Morsink (1998) showed that gravitational radiation excites the r-modes (large scale toroidal fluid oscillations similar to geophysical Rossby waves) of rotating, *inviscid* stars. Lindblom et al. (1998) compared the gravitational wave growth timescale $`\tau _{\mathrm{gr}}`$ for the r-modes with the viscous damping timescale $`\tau _v`$ set by shear and bulk viscosities for normal fluids (i.e., no superfluidity); at rotation rates $`\mathrm{\Omega }\lesssim 0.065\,\mathrm{\Omega }_K`$, where $`\mathrm{\Omega }_K=(GM/R^3)^{1/2}`$ is the Keplerian angular velocity at the surface of the star, the damping is sufficient to preclude unstable growth. The modes are excited, however, over a wide range of spin frequencies and temperatures that includes typical values for the neutron star transients.
Gravitational waves radiate away angular momentum at a rate
$$\frac{dJ}{dt}|_{\mathrm{gr}}=\frac{2J_c}{\tau _{\mathrm{gr}}},$$
(1)
where
$$J_c=\frac{3}{2}\alpha ^2\mathrm{\Omega }\stackrel{~}{J}MR^2$$
(2)
is the canonical angular momentum of the $`(l=2,m=2)`$ r-mode (Friedman & Schutz 1978; Owen et al. 1998), $`\alpha `$ is the dimensionless amplitude of the mode, and $`\stackrel{~}{J}`$ is a dimensionless constant that accounts for the distribution of mass in the star (Owen et al. 1998). The gravitational wave growth time $`\tau _{\mathrm{gr}}`$ is negative, which implies instability. In a rotational steady state, this angular momentum loss is balanced by the accretion torque, $`N_{\mathrm{accr}}`$. For a fiducial torque, we assume that each accreted particle transfers its Keplerian angular momentum to the neutron star, with a net accretion torque $`N_{\mathrm{accr}}=\dot{M}(GMR)^{1/2}`$. Using $`\tau _{\mathrm{gr}}`$ as evaluated by Lindblom et al. (1998), we find the steady-state r-mode amplitude (Bildsten 1998; Levin 1999),
$$\alpha _{\mathrm{steady}}=7.9\times 10^{-7}\left(\frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}\right)^{1/2}\left(\frac{300\mathrm{Hz}}{\nu _{\mathrm{spin}}}\right)^{7/2},$$
(3)
such that the fiducial accretion torque $`N_{\mathrm{accr}}`$ is balanced by r-mode angular momentum loss $`dJ/dt|_{\mathrm{gr}}`$.
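The scaling in equation (3) is simple to evaluate; below is a minimal Python sketch, with Aql X-1-like inputs (the time-averaged rate and spin frequency used later, in § 3.2) chosen purely for illustration:

```python
def alpha_steady(mdot_msun_yr, nu_spin_hz):
    """Steady-state r-mode amplitude from the torque balance, eq. (3)."""
    return 7.9e-7 * (mdot_msun_yr / 1e-11) ** 0.5 * (300.0 / nu_spin_hz) ** 3.5

# Aql X-1-like parameters (time-averaged rate from Sec. 3.2):
print(alpha_steady(2.4e-11, 275.0))  # ~1.7e-6: a very small dimensionless amplitude
```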
The gravitational radiation reaction adds energy to the unstable r-mode at a rate
$$\frac{dE_c}{dt}|_{\mathrm{gr}}=\frac{2E_c}{\tau _{\mathrm{gr}}},$$
(4)
where
$$E_c=\frac{1}{2}\alpha ^2\mathrm{\Omega }^2\stackrel{~}{J}MR^2$$
(5)
is the canonical energy of the $`(l=2,m=2)`$ r-mode (Friedman & Schutz 1978; Owen et al. 1998). In a steady state all of this energy must be dissipated by viscous processes at a rate $`W_d=dE_c/dt`$. In terms of the accretion luminosity, $`L_A=GM\dot{M}/R=N_{\mathrm{accr}}\mathrm{\Omega }_K`$, the dissipation rate is
$$\frac{W_d}{L_A}=\frac{1}{\mathrm{\Omega }_K}\frac{dE_c/dt|_{\mathrm{gr}}}{dJ_c/dt|_{\mathrm{gr}}}=\frac{1}{3}\frac{\mathrm{\Omega }}{\mathrm{\Omega }_K}.$$
(6)
The viscosity in the neutron star originates from several possible sources. For normal $`npe`$ matter, calculations of the viscous transport coefficients exist only at near-nuclear densities (Flowers & Itoh 1979). The components of such a core are strongly degenerate, and phase-space restrictions impart a characteristic $`T^{-2}`$ dependence to the shear viscosity (Cutler & Lindblom 1987). Compressing a fluid element of neutron star matter causes it to emit neutrinos as the $`npe`$ mixture reestablishes $`\beta `$-equilibrium, so the bulk viscosity has an Urca-like $`T^6`$ dependence (Sawyer 1989). Another possibility for the viscosity is that it is caused by mutual friction in the neutron-proton superfluid (Mendell 1991). In this case the viscous damping is independent of temperature.
While the total amount of viscous dissipation $`W_d`$ depends only on the assumption of a steady-state r-mode amplitude, the amount of heat actually deposited into the star depends on the nature of the damping. If the dominant viscous mechanism is bulk viscosity (i.e., for core temperatures $`T\gtrsim 10^9\mathrm{K}`$), then the dissipated energy is released in the form of neutrinos, which promptly leave the star. The core temperatures of LMXBs are most likely less than $`10^9\mathrm{K}`$, however, in which case the dissipation mechanism is either shear viscosity or mutual friction. For both of these mechanisms, the heat $`W_d`$ is deposited directly into the core of the star; we shall assume this to be the case in the rest of this paper.
Levin (1999) first noted that, if the nucleons in the core are normal, the r-modes damped by shear viscosity are likely to be thermally unstable, at least for saturation amplitudes of order unity. The heating from the shearing motions decreases the viscosity, and so the r-mode amplitude increases, which heats the star even more. The result is a thermal and dynamical runaway. As envisaged by Levin (1999), the neutron star enters a limit cycle of slow spin-up to some critical frequency, at which the r-mode becomes unstable, followed by a rapid spin-down until the mode is once again damped. As the neutron star cools, accretion again exerts a positive torque on the star, and the cycle repeats. Because the r-modes are present at a nonzero amplitude for only $`\sim 10^{-7}`$ of the entire cycle's duration, it is unlikely that any of the known LMXBs harbor active r-modes *and* have normal fluid cores.
For a superfluid core, where the damping is due to mutual friction (and hence independent of temperature), the neutron star can reach a state of three-fold equilibrium (Bildsten 1998; Levin 1999): the temperature is set by the balance of viscous heating and radiative or neutrino cooling, the r-mode's amplitude is set by the balance of gravitational radiation back-reaction and viscous damping, and the spin is set by the balance of accretion torque and angular momentum loss to gravitational radiation. It is this scenario that we shall examine for existing evidence of r-mode spin regulation.
While a neutron star accretes, its luminosity is dominated by the release of the infalling matter's gravitational potential energy, $`L_A\approx 190\mathrm{MeV}(\dot{M}/m_b)`$, where $`m_b`$ is the average nucleon mass. Nuclear burning (either steady or via type I X-ray bursts) of the accreted hydrogen and helium generates an additional $`\approx 5\mathrm{MeV}`$ per accreted nucleon. Most of this heat is promptly radiated away, however, and no more than a few percent diffuses inward to heat the interior (Fujimoto et al. 1984, 1987). Nuclear reactions in the deep crust (at $`\rho \approx 5\times 10^{11}\mathrm{g}\mathrm{cm}^{-3}`$) release about $`1\mathrm{MeV}`$ per accreted nucleon (Sato 1979; Blaes et al. 1990; Haensel & Zdunik 1990) and heat the crust directly (Brown & Bildsten 1998; Brown 2000).
In addition to the crustal reactions, the viscous dissipation of r-modes constitutes another heat source in the neutron starโs core. For a fiducial neutron star with $`M=1.4M_{}`$ and $`R=10\mathrm{km}`$, equation (6) implies that $`W_d/L_A=0.046(\nu _{\mathrm{spin}}/300\mathrm{Hz})`$, or
$$W_d\approx 8.9\mathrm{MeV}\left(\frac{\dot{M}}{m_b}\right)\left(\frac{\nu _{\mathrm{spin}}}{300\mathrm{Hz}}\right).$$
(7)
This heating is very substantial, as it is much greater than the amount of nuclear heating from the crustal reactions. The prospects for detecting the effect of core r-mode heating in *steadily* accreting neutron stars are dim, unfortunately, as it is dwarfed by the accretion luminosity (which is a factor of 20 brighter). The thermal emission from the neutron star is directly observable, however, if accretion periodically halts, as in the neutron star transients (the cooling timescale of the heated core is $`\sim 10^4\mathrm{yr}`$). While continued accretion at low levels between outbursts may contribute some of the quiescent luminosity (see Brown et al. 1998 for a discussion), the thermal emission from the hot crust of the neutron star is impossible to hide, and so observations of $`L_q`$ set an upper limit on the core temperature. The neutron stars in soft X-ray transients therefore offer the best prospects to look for evidence of viscous heating. In the next section we predict the quiescent luminosity $`L_q`$ that arises because of the r-mode heating and compare it to the observed luminosities of several neutron star transients.
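For concreteness, the fiducial numbers above ($`W_d/L_A=0.046`$ and $`W_d\approx 8.9`$ MeV per accreted nucleon at 300 Hz) follow from equation (6) alone. A short Python sketch, assuming the fiducial $`M=1.4M_{\odot }`$, $`R=10`$ km star used in the text:

```python
import math

G, MSUN, MEV, M_B = 6.674e-8, 1.989e33, 1.602e-6, 1.674e-24  # cgs units
M, R = 1.4 * MSUN, 1.0e6                                     # fiducial neutron star

omega_K = math.sqrt(G * M / R**3)      # Keplerian angular velocity at the surface

def wd_over_la(nu_spin_hz):
    """W_d / L_A = (1/3) * Omega / Omega_K, eq. (6)."""
    return 2.0 * math.pi * nu_spin_hz / (3.0 * omega_K)

e_acc_mev = (G * M / R) * M_B / MEV    # accretion energy per nucleon, ~194 MeV
print(wd_over_la(300.0))               # ~0.046
print(wd_over_la(300.0) * e_acc_mev)   # ~8.9 MeV per accreted nucleon, eq. (7)
```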
## 3. The Quiescent Luminosities of Neutron Star Transients
The neutron star accretes fitfully, so the spin period and the r-mode amplitude oscillate about the equilibrium defined by the time-averaged accretion rate, $`\langle \dot{M}\rangle \equiv t_r^{-1}\int \dot{M}dt`$, where $`t_r`$ is the recurrence interval. Moreover, the timescale for viscous dissipation to heat the core is
$$t_H\equiv \frac{c_pT}{W_d}\frac{M}{m_b}\approx 6\times 10^4\mathrm{yr}\left(\frac{10^{-11}M_{\odot }\mathrm{yr}^{-1}}{\langle \dot{M}\rangle }\right),$$
(8)
where $`c_p`$ is the specific heat per baryon and $`W_d`$ is the viscous heating averaged over an outburst/quiescent cycle. Because $`t_H`$ is much longer than the outburst recurrence time (typically of order years to decades), the core should remain fixed at the temperature set by the balance (over many outburst/quiescent cycles) between heating and cooling processes. We may therefore compute the viscous dissipation using $`\langle \dot{M}\rangle `$. Some simple estimates of the equilibrium core temperatures and the resulting quiescent luminosities, for when both radiative and neutrino cooling are important, are presented first (§ 3.1). This is followed, in § 3.2, by detailed numerical calculations of the neutron star's thermal structure and a comparison (§ 3.3) to observations of several neutron star transients.
### 3.1. Simple Estimates
In a thermal steady state, the neutron star interior is cooled both by neutrinos emitted from the core and crust and by photons emitted from the surface. To begin, we estimate the luminosity and the equilibrium core temperature set by balancing the heat deposited during an outburst/recurrence cycle, $`W_d`$, with each cooling mechanism individually. First, if neutrino emission from the core is negligible (e.g., if the core is superfluid and the Urca processes are exponentially suppressed), then all of the heat generated by viscous dissipation, $`W_d`$, is conducted to the surface of the neutron star and escapes as thermal radiation during quiescence. For the interior to be in a thermal steady state, the quiescent luminosity must then be
$$L_q\approx W_d=5.4\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}\left(\frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}\right)\left(\frac{\nu _{\mathrm{spin}}}{300\mathrm{Hz}}\right).$$
(9)
This estimate depends only on the assumption that neutrino emission is suppressed, and is independent of the crust microphysics.
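Equation (9) is just the heating rate of equation (7) rewritten per unit accreted mass; a minimal sketch recovering its normalization:

```python
MSUN, YR, MEV, M_B = 1.989e33, 3.156e7, 1.602e-6, 1.674e-24  # cgs

def L_q_superfluid(mdot_msun_yr, nu_spin_hz):
    """Quiescent luminosity when all r-mode heat escapes radiatively, eq. (9)."""
    mdot = mdot_msun_yr * MSUN / YR                  # accretion rate in g/s
    return 8.9 * MEV * (mdot / M_B) * (nu_spin_hz / 300.0)

print(L_q_superfluid(1e-11, 300.0))  # ~5.4e33 erg/s, the normalization of eq. (9)
```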
As a check, we estimate the temperature of the neutron star core. In quiescence the atmosphere and crust come to resemble a cooling neutron star (Bildsten & Brown 1997; Brown et al. 1998). For the temperature increase through the atmosphere and upper crust, we use the fit of Gudmundsson, Pethick, & Epstein (1983),
$$L_\gamma \approx 8.2\times 10^{32}\mathrm{erg}\mathrm{s}^{-1}\left(\frac{T_b}{10^8\mathrm{K}}\right)^{2.2},$$
(10)
where $`T_b`$ is the temperature at a fiducial boundary $`\rho _b=10^{10}\mathrm{g}\mathrm{cm}^{-3}`$. Equating $`L_\gamma `$ with $`L_q`$ from equation (9) gives an estimate of the temperature in the upper crust,
$$T_b\approx 2.4\times 10^8\mathrm{K}\left(\frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}\right)^{0.45}\left(\frac{\nu _{\mathrm{spin}}}{300\mathrm{Hz}}\right)^{0.45}.$$
(11)
To relate $`T_b`$ to the core temperature $`T_c`$, we use approximate analytic expressions for the crust temperature (Brown 2000) to obtain
$$\left(\frac{T_c}{10^8\mathrm{K}}\right)^2\approx \left(\frac{T_b}{10^8\mathrm{K}}\right)^2+4.9\left(\frac{L_q}{10^{34}\mathrm{erg}\mathrm{s}^{-1}}\right),$$
(12)
where we have neglected the luminosity due to crustal nuclear reactions. Substituting from equation (11) for $`T_b`$, we obtain the core temperature in the absence of neutrino emission,
$$T_c\approx 2.9\times 10^8\mathrm{K}\left(\frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}\right)^{0.45}\left(\frac{\nu _{\mathrm{spin}}}{300\mathrm{Hz}}\right)^{0.45},$$
(13)
where the scalings for $`\dot{M}`$ and $`\nu _{\mathrm{spin}}`$ are obtained by dropping the second term on the right in equation (12). This estimate agrees quite well with the detailed calculations described in § 3.2.
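The estimates in equations (10)–(13) chain together simply; the following sketch inverts the Gudmundsson et al. fit and applies equation (12), with crustal nuclear heating neglected as in the text:

```python
def T_b(L_q):
    """Upper-crust temperature from eq. (10): L = 8.2e32 (T_b / 1e8 K)^2.2 erg/s."""
    return 1e8 * (L_q / 8.2e32) ** (1.0 / 2.2)

def T_c(L_q):
    """Core temperature from eq. (12), with the crustal nuclear term neglected."""
    tb8 = T_b(L_q) / 1e8
    return 1e8 * (tb8**2 + 4.9 * L_q / 1e34) ** 0.5

Lq = 5.4e33              # eq. (9) at Mdot = 1e-11 Msun/yr and 300 Hz
print(T_b(Lq), T_c(Lq))  # ~2.4e8 K and ~2.9e8 K, matching eqs. (11) and (13)
```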
The core neutrino emissivity is, for modified Urca processes (Shapiro & Teukolsky 1983), $`L_\nu ^{\mathrm{Urca}}\approx 7.4\times 10^{31}\mathrm{erg}\mathrm{s}^{-1}(T_c/10^8\mathrm{K})^8`$, multiplied by a superfluid reduction factor that goes roughly as $`\mathrm{exp}(-\mathrm{\Delta }/kT_c)`$, where $`\mathrm{\Delta }`$ is the superfluid gap energy (Yakovlev & Levenfish 1995). For $`\mathrm{\Delta }>kT_c`$ the net Urca neutrino luminosity is much less than $`L_q`$, so that equation (9) is self-consistent. Neutrino emission from crust neutrino bremsstrahlung (Kaminker et al. 1999) at the temperature $`T_b`$ (eq. \[11\]) is also not significant, although at higher $`\dot{M}`$ it is competitive with radiative cooling. Hence for accretion rates typical of neutron star transients, the majority of the deposited heat is conducted to the surface, and equation (9) provides a robust estimate of the radiative luminosity of the star.
Alternatively, if core neutrino emission is not suppressed (i.e., the nucleons are not superfluid), then modified Urca processes are the dominant coolant and $`L_\nu ^{\mathrm{Urca}}\approx W_d`$. In this case the core temperature is
$$T_c\approx 1.7\times 10^8\mathrm{K}\left(\frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}\right)^{1/8}\left(\frac{\nu _{\mathrm{spin}}}{300\mathrm{Hz}}\right)^{1/8},$$
(14)
and is smaller than if the core were superfluid. A colder core implies a dimmer thermal luminosity from the surface. In order to estimate $`L_q`$, we write $`W_d=L_q+L_\nu ^{\mathrm{Urca}}(T_c)`$, where $`L_q=L_\gamma (T_b)`$, and $`T_b`$ is related to $`T_c`$ by equation (12). Under the assumption that $`L_q\ll L_\nu ^{\mathrm{Urca}}`$, the solution of the resulting transcendental equation is
$$L_q\approx 1.8\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}\left(\frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}\right)^{0.3}\left(\frac{\nu _{\mathrm{spin}}}{300\mathrm{Hz}}\right)^{0.3},$$
(15)
where we obtain the scalings for $`\dot{M}`$ and $`\nu _{\mathrm{spin}}`$ by dropping the second term on the right-hand side of equation (12). In this case $`L_q`$ is less than $`W_d`$ and $`L_\nu ^{\mathrm{Urca}}`$, so our assumption that core neutrino emission is the dominant coolant is self-consistent.
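Rather than dropping terms, the steady-state balance $`W_d=L_\gamma (T_b)+L_\nu ^{\mathrm{Urca}}(T_c)`$ can be solved numerically. A minimal bisection sketch using the same fitting formulae (the root differs slightly from eq. \[15\] because nothing is dropped):

```python
def total_cooling(tb8):
    """Photon plus modified-Urca cooling for an upper-crust temperature tb8 (1e8 K)."""
    L_gamma = 8.2e32 * tb8**2.2                    # eq. (10)
    tc8 = (tb8**2 + 4.9 * L_gamma / 1e34) ** 0.5   # eq. (12) with L_q = L_gamma
    return L_gamma + 7.4e31 * tc8**8               # add modified Urca (Sec. 3.1)

W_d = 5.4e33                 # heating at Mdot = 1e-11 Msun/yr and 300 Hz, eq. (9)
lo, hi = 0.1, 5.0
for _ in range(60):          # bisection on tb8
    mid = 0.5 * (lo + hi)
    if total_cooling(mid) < W_d:
        lo = mid
    else:
        hi = mid
print(8.2e32 * lo**2.2)      # L_q ~ 1.6e33 erg/s, comparable to eq. (15)
```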
These estimates neglect cooling from other neutrino-producing mechanisms, such as neutrino bremsstrahlung in the crust. Moreover, the core neutrino emissivity depends on the local proper temperature, which increases towards the center of the star because of the gravitational redshift. We now describe our detailed calculations, which take these effects into account.
### 3.2. Numerical Calculations
To calculate the expected quiescent luminosities of accreting neutron stars, we compute hydrostatic neutron star models by integrating the post-Newtonian stellar structure equations (Thorne 1977) for the radius $`r`$, gravitational mass $`m`$, potential, and pressure with the equation of state AV18+$`\delta `$v+UIX\* (Akmal, Pandharipande, & Ravenhall 1998), as described in Brown (2000). With the hydrostatic structure specified, the luminosity $`L`$ and temperature $`T`$ are found by solving the entropy and flux equations (Thorne 1977),
$`e^{-2\mathrm{\Phi }/c^2}{\displaystyle \frac{\partial }{\partial r}}\left(Le^{2\mathrm{\Phi }/c^2}\right)-4\pi r^2n\left(\epsilon _r-\epsilon _\nu \right)\left(1-{\displaystyle \frac{2Gm}{rc^2}}\right)^{-1/2}`$ $`=`$ $`0`$ (16)
$`e^{-\mathrm{\Phi }/c^2}K{\displaystyle \frac{\partial }{\partial r}}\left(Te^{\mathrm{\Phi }/c^2}\right)+{\displaystyle \frac{L}{4\pi r^2}}\left(1-{\displaystyle \frac{2Gm}{rc^2}}\right)^{-1/2}`$ $`=`$ $`0.`$ (17)
Here $`\epsilon _r`$ and $`\epsilon _\nu `$ are the nuclear heating and neutrino emissivity per baryon, $`n`$ is the baryon density, and $`K`$ is the thermal conductivity. The potential $`\mathrm{\Phi }`$ appears in the time-time component of the metric as $`e^{\mathrm{\Phi }/c^2}`$ (it governs the redshift of photons and neutrinos; Misner, Thorne, & Wheeler 1973). We neglect in equation (16) terms arising from compressional heating, as they are of order $`T\mathrm{\Delta }s(\dot{M}/M)`$ (Fujimoto & Sugimoto 1982), $`s`$ being the specific entropy, and are negligible throughout the degenerate crust and core (Brown & Bildsten 1998). We do not include heating from nuclear reactions in the deep crust. This has the effect of slightly underestimating (by $`\sim 10\%`$) the quiescent luminosity of the neutron star transient. Equations (16) and (17) are integrated outwards to a density $`\rho _b=10^{10}\mathrm{g}\mathrm{cm}^{-3}`$. There we impose a boundary condition relating $`L`$ and $`T`$ with the fitting formula of Potekhin, Chabrier, & Yakovlev (1997) for a partially accreted crust. By incorporating a parameter describing the depth of a light element (H and He) layer, this formula differs from that of Gudmundsson et al. (1983), which we used for our simple estimates (§ 3.1). We set the depth of this light element layer to where the density is $`10^5\mathrm{g}\mathrm{cm}^{-3}`$, which is roughly where the accreted material burns to heavier elements (Hanawa & Fujimoto 1986).
The high thermal conductivity of the neutron star's core ensures that it is very nearly isothermal, regardless of the detailed dependence of the heating rate $`\epsilon _r`$ on the radius. For the core temperatures typical of LMXBs, bulk viscosity is unimportant, so we assume that the heating is from ordinary shear viscosity. The rate per unit volume is just $`2\eta \delta \sigma ^{ab}\delta \sigma _{ab}^{*}`$, where $`\eta `$ is the shear viscosity and $`\delta \sigma _{ab}`$ is the kinematic shear. If we neglect the dependence of shear viscosity on density (since the density is approximately constant in the neutron star's core), this rate is just proportional to $`r^2`$ for an $`(l=2,m=2)`$ r-mode (Lindblom et al. 1998). Hence we take $`\epsilon _r\propto r^2`$, and normalize it so that the heating rate, when integrated over the core, satisfies equation (6).
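The normalization just described is a one-line volume integral; a sketch, with an assumed 10 km core radius:

```python
import numpy as np

def heating_profile(r, R_core, W_d):
    """Heating rate per unit volume, proportional to r^2 and normalized so that
    its integral over the core volume equals W_d (cf. eq. 6)."""
    # integral of r^2 * 4*pi*r^2 dr from 0 to R_core is 4*pi*R_core^5 / 5
    return W_d / (4.0 * np.pi * R_core**5 / 5.0) * r**2

r = np.linspace(0.0, 1.0e6, 2000)           # assumed 10 km core, in cm
q = heating_profile(r, 1.0e6, 5.4e33)
print(np.trapz(q * 4.0 * np.pi * r**2, r))  # recovers ~5.4e33 erg/s
```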
The microphysics used to integrate equations (16) and (17) is fully described in Brown (2000), so here we just highlight two modeling uncertainties. First, standard calculations presume that the neutron star's crust is a pure lattice, and hence the conductivity is dominated by electron-phonon scattering. Over the lifetime of an LMXB, however, the neutron star can easily accrete enough matter to replace its entire crust (requiring about $`0.01M_{\odot }`$). The accreted crust is formed from the products of hydrogen and helium burning and is likely to be very impure (Schatz et al. 1999). A lower conductivity from impurities lowers the surface temperature, and hence the quiescent luminosity, for a given core temperature. We model the low thermal conductivity of a very impure crust by using electron-ion scattering (Haensel, Kaminker, & Yakovlev 1996) throughout the crust.
The second modeling uncertainty is the superfluid transition temperatures, for which estimates vary widely (see Tsuruta 1998, and references therein). When the core temperature is much less than the superfluid transition temperature, emissivity from Cooper pairing is unimportant (Yakovlev, Kaminker, & Levenfish 1999) and the superfluidity suppresses the neutrino emission by roughly the Boltzmann factor $`\mathrm{exp}(-\mathrm{\Delta }/k_BT)`$, for a superfluid gap energy $`\mathrm{\Delta }`$. We perform our calculations for two models, one with superfluidity parameterized as in Brown (2000), with a typical gap energy $`\mathrm{\Delta }\approx 0.5\mathrm{MeV}`$, and another model with a normal core, $`\mathrm{\Delta }=0`$.
Figure 1 shows the thermal structure of such a neutron star with a time-averaged accretion rate $`\dot{M}=2.4\times 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$ (the rate inferred for Aql X-1) for a spin frequency of $`275\mathrm{Hz}`$ (*solid lines*) and $`549\mathrm{Hz}`$ (*dotted lines*). If the core is superfluid (the upper pair of curves), then the neutrino luminosity from crust bremsstrahlung (region leftward of the vertical dot-dashed line) is roughly comparable to the photon luminosity. In contrast, if the core were normal (so that the modified Urca processes were unsuppressed) but the viscosity remained independent of temperature (so that a thermal steady state could be reached), then only about 10% of the heat generated in the core would be conducted to the surface. The rest of the heat is balanced by modified Urca neutrino emission. The core temperature and fraction of viscous heat conducted to the surface compare well with the estimates in § 3.1.
Fig. 1.— The thermal structure of a neutron star accreting at a time-averaged rate of $`2.4\times 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$ (e.g., Aql X-1), for two different spin frequencies: 275$`\mathrm{Hz}`$ (*solid lines*) and 549$`\mathrm{Hz}`$ (*dotted lines*). The upper pair of curves are for a superfluid core (region rightward of the vertical dot-dashed line); the lower pair, for a normal core.
### 3.3. Comparison to Observed Transients
A superfluid core is cooled mainly by conduction of heat to the surface, at least until the interior temperature is high enough to activate crust neutrino bremsstrahlung. Figure 2 shows the expected quiescent luminosity $`L_q`$ as a function of $`\dot{M}`$ for this case, with a range (*shaded region*) of rotation frequencies $`200\mathrm{Hz}<f<600\mathrm{Hz}`$. The inferred $`\dot{M}`$ and $`L_q`$ for several neutron star transients are also plotted (*squares*) for comparison. With the exception of EXO 0748-676<sup>1</sup><sup>1</sup>1EXO 0748-676 is likely to accrete during quiescence, as suggested by observations (with *ASCA*) of variability on timescales $`\sim 1000\mathrm{s}`$ (Corbet et al. 1994; Thomas et al. 1997)., the neutron star transients with measured quiescent luminosities are too dim, by a factor of 5–10, to be consistent with viscous heating of the magnitude assumed here. We must conclude, then, that *either the accretion torque is much less than $`\dot{M}(GMR)^{1/2}`$, or that a steady-state r-mode does not set their spin.*
The quiescent luminosities for Aql X-1, Cen X-4, and 4U 1608–522 use the bolometric corrections appropriate for a H atmosphere spectrum (Rutledge et al. 1999a); $`L_q`$ for the Rapid Burster is from Asai et al. (1996a). We infer the time-averaged accretion rate from $`\dot{M}\approx (t_o/t_r)(L_oR/GM)`$, where $`t_o`$ and $`L_o`$ are the outburst duration and luminosity and the distances are taken from Chen, Shrader, & Livio (1997).<sup>2</sup><sup>2</sup>2Recent observations (Callanan, Filippenko, & Garcia 1999) resolved the optical counterpart of Aql X-1 into two objects. We use the distance estimate ($`2.5\mathrm{kpc}`$) of Chevalier et al. (1999), which accounts for the interloper star. Outburst fluences for Aql X-1 and the Rapid Burster are accurately known (*RXTE*/All-Sky Monitor public data); for the remaining sources $`\dot{M}`$ is estimated from peak luminosities and outburst rise and decay timescales (Chen et al. 1997).
Fig. 2.— Quiescent luminosities as a function of time-averaged accretion rate $`\dot{M}`$. The heating from viscous dissipation of the r-mode is from eq. (6), and the neutrino emission from the core is suppressed by nucleon superfluidity. The shaded region corresponds to rotation frequencies between $`200\mathrm{Hz}`$ (*lower curve*) and $`600\mathrm{Hz}`$ (*upper curve*). Neutrino cooling from crust bremsstrahlung is important rightward (i.e., at higher $`\dot{M}`$) of the knee in the shaded region. Also shown are the inferred quiescent luminosities and time-averaged accretion rates for several neutron star transients.
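The accretion-rate estimator above is straightforward to apply; the sketch below encodes it with purely illustrative outburst numbers (not the values used for any of the plotted sources):

```python
G, MSUN, YR, DAY = 6.674e-8, 1.989e33, 3.156e7, 86400.0  # cgs
M, R = 1.4 * MSUN, 1.0e6                                 # fiducial neutron star

def mdot_avg(L_o, t_o_days, t_r_years):
    """Time-averaged accretion rate, <Mdot> ~ (t_o / t_r) * L_o * R / (G M)."""
    mdot = (t_o_days * DAY) / (t_r_years * YR) * L_o * R / (G * M)  # g/s
    return mdot * YR / MSUN                                          # Msun/yr

# e.g., a 30-day outburst at 1e37 erg/s recurring yearly (illustrative only):
print(mdot_avg(1.0e37, 30.0, 1.0))  # ~7e-11 Msun/yr
```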
Our estimates for $`\dot{M}`$ depend on the inferred source distance. When most of the r-mode heating $`W_d`$ is conducted to the surface, however, as in the superfluid core case for $`\dot{M}\lesssim 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$, the predicted quiescent luminosity is $`L_q\approx W_d\propto \dot{M}`$ (see eqs. \[7\] and \[9\]), and hence depends on the source distance in the same way as does $`\dot{M}`$. Therefore, our comparison of $`L_q`$ predicted from r-mode heating and the quiescent luminosity actually observed is *independent* of distance. In this regime, the relation between $`L_q`$ and $`\dot{M}`$ *is also independent of the microphysics in the crust.*
As shown by Levin (1999), the temperature dependence of viscosity in a normal fluid likely prevents a steady-state r-mode. For comparison, however, we plot in Figure 3 the case where modified Urca neutrino emission from the core is allowed (as it would be in a normal fluid) but the r-mode amplitude is steady, i.e., we assume that a thermogravitational runaway has somehow been avoided. As a result, neutrino emission efficiently cools the core, and so the radiative luminosity $`L_q`$ is less for a given $`\dot{M}`$. For this case, with the exception of Cen X-4, the neutron star transients have quiescent luminosities roughly consistent with that predicted. Because there is a characteristic core temperature, namely, that at which neutrino cooling equals radiative cooling, the relation between $`\dot{M}`$ and $`L_q`$ is no longer independent of distance, unlike the case shown in Figure 2. The knee in the shaded region is where the neutrino and photon luminosities are comparable. Rightward of this knee ($`\dot{M}\gtrsim 10^{-12}M_{\odot }\mathrm{yr}^{-1}`$) neutrino cooling prevents the core temperature, and hence the photon luminosity, from rising rapidly with increasing $`\dot{M}`$. Should the crust have a higher conductivity (e.g., if it were more pure) than we have assumed here, then the shaded region rightward of the knee would move upwards, i.e., the predicted $`L_q`$ would be even higher. To illustrate this we computed $`L_q(\dot{M})`$ using the $`L_\gamma (T_b)`$ relation for a crust composed of light elements (and having a higher conductivity) for densities less than $`\rho _b=10^{10}\mathrm{g}\mathrm{cm}^{-3}`$ (*dotted lines*).
Fig. 3.— The same as Fig. 2, but for a normal core. We also show (*thin dotted lines*) $`L_q(\dot{M})`$ for a neutron star with a crust of light elements at $`\rho <10^{10}\mathrm{g}\mathrm{cm}^{-3}`$.
It should be noted that the actual thermal radiation from a neutron star's surface is in general *less* than the observed quiescent luminosity, since other emission mechanisms are possible, such as accretion via a low-efficiency advective flow (Narayan, McClintock, & Yi 1996) or magnetospheric emission (Campana et al. 1998a). Evidence for other, non-thermal emission processes is provided by the hard power-law tails observed from Cen X-4 (*ASCA*; Asai et al. 1996b) and Aql X-1 (*BeppoSAX*; Campana et al. 1998b). In addition, variability on timescales of a few days has been observed from Cen X-4 (van Paradijs et al. 1987; Campana et al. 1997). As a result, a plot showing thermal emission (as opposed to observed $`L_q`$) would have the data points shifted downward in Figures 2 and 3. In other words, the quiescent luminosity inferred from observations is likely to overestimate the actual thermal emission from the neutron star. This strengthens our conclusion regarding the incompatibility of steady-state r-mode heating with the observations.
There are stronger neutrino emission mechanisms possible than modified Urca and crust bremsstrahlung. Recently, there has been renewed interest in the direct Urca process (Lattimer et al. 1991), which is allowed if the proton fraction exceeds 0.148 or if hyperons are present (Prakash et al. 1992). Other exotic mechanisms may be possible, including pion condensates (Umeda et al. 1994), kaon condensates (Brown et al. 1988), or quark matter (Iwamoto 1982). The exotic mechanisms have the same temperature dependence as the direct Urca ($`T^6`$) but are weaker. Should any of these enhanced processes occur, the core will be much colder, and the heat radiated from the surface much weaker, than in the calculations here. For example, balancing the viscous heating with neutrino emission from a pion condensate, $`L_\nu ^\pi \approx 2.0\times 10^{39}(T/10^8\mathrm{K})^6\mathrm{erg}\mathrm{s}^{-1}`$ (Shapiro & Teukolsky 1983), implies that $`T_c\approx 1.2\times 10^7(\dot{M}/10^{-11}M_{\odot }\mathrm{yr}^{-1})^{1/6}\mathrm{K}`$, and, from equations (10) and (12), that $`L_q\approx 6.0\times 10^{30}(\dot{M}/10^{-11}M_{\odot }\mathrm{yr}^{-1})^{0.45}\mathrm{erg}\mathrm{s}^{-1}`$. This is much dimmer than that observed. Of course, it is possible that superfluidity reduces $`L_\nu `$ such that the core temperature is just enough to explain the observed quiescent emission. It is difficult, however, to arrange *all* of the sources to obey such a relation.
## 4. Conclusions
Using the assumption that the accretion torque is balanced by angular momentum loss from gravitational radiation by an r-mode pulsation of constant amplitude, we find that the expected quiescent luminosities of the neutron star X-ray transients, for rotation rates of 200–600$`\mathrm{Hz}`$, are characteristically brighter than those observed. Reconciling the observations with the presence of r-mode heating requires that neutrino emission from the core be unsuppressed, as for a normal core. In this case, however, the r-mode is thermally unstable and cannot remain at a constant amplitude, unless some mechanism prevents a runaway. It therefore seems unlikely that the spin frequency of Aql X-1 is a signature of a steady-state core r-mode pulsation. We note, however, that the same conclusion cannot be drawn for the bright, persistent LMXBs (such as Sco X-1, which could be detected by gravitational wave experiments soon to be operational). Uncertainties in the nuclear burning and the accretion luminosity mean that the surface thermal luminosity cannot be constrained to within $`\approx 5\%`$, the precision necessary to differentiate the r-mode heating from the accretion luminosity.
In addition to Aql X-1, there is one other neutron star transient which is known to be spinning rapidly, and that is the $`401\mathrm{Hz}`$ accreting pulsar (Wijnands & van der Klis 1998) in the transient SAX J1808.4–3658 (in't Zand et al. 1998). This source has not yet been detected in quiescence. Given a recurrence interval of $`1.5\mathrm{yr}`$, an outburst duration of $`20`$ days, and an outburst accretion rate of $`3\times 10^{-10}M_{\odot }\mathrm{yr}^{-1}`$ (in't Zand et al. 1998), we expect a quiescent luminosity $`\approx 5.8\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$ if the core is superfluid and an active r-mode pulsation balances the accretion torque in this system. This $`L_q`$ corresponds to an unabsorbed flux ($`4\mathrm{kpc}`$ distance) of $`\approx 3\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, which is about ten times the flux expected if the only heat source were crust nuclear reactions (Brown et al. 1998). Future *ASCA*, *Chandra*, and *XMM* observations will assist in constraining the viscous damping present. The luminosity from the viscous damping is much larger than the expected magnetospheric emission (Becker & Trümper 1997), and so interpretation of the spectrum should be unambiguous in the absence of accretion onto the neutron star's surface.
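The quoted flux is a direct conversion of the predicted $`L_q`$ at the assumed 4 kpc distance:

```python
import math

KPC = 3.086e21   # cm
L_q = 5.8e33     # predicted quiescent luminosity for SAX J1808.4-3658, erg/s
d = 4.0 * KPC    # assumed distance

print(L_q / (4.0 * math.pi * d**2))  # ~3e-12 erg/cm^2/s, as quoted above
```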
At $`\dot{M}\lesssim 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$, all of the viscous heating in the core is radiated from the neutron star's surface during quiescence. As noted in section 3.3, the relation $`L_q(\dot{M})`$ then depends only on the accretion torque, and not on the source distance and crust microphysics. *Chandra* and *XMM* are ideally suited for a study of a population of low-luminosity neutron stars, which offer excellent prospects for a clean determination of the amount of viscous heating present.
At higher $`\dot{M}`$, for which neutrino cooling from the crust contributes to balancing the viscous heating, the quiescent luminosity depends on the crust microphysics. In our calculations we assumed that the neutron star's crust is very impure and hence used a thermal conductivity dominated by electron-ion collisions. If the conductivity of the crust is higher than we have assumed (e.g., if the crust is a pure lattice), then the predicted quiescent luminosity $`L_q`$ would be even higher than that plotted in Figures 2 and 3. In addition, we underestimated the predicted $`L_q`$ by neglecting the effect of direct heating of the neutron star crust by nuclear reactions occurring near neutron drip (Brown et al. 1998). Moreover, taking into account the possibility that non-thermal emission contributes to the observed quiescent luminosity further widens the gap between the observed $`L_q`$ and that inferred from the r-mode spin regulation hypothesis. All of these effects further strengthen our conclusions.
If the r-mode is not in steady state, then there remain several possibilities: either the superfluid viscosity is so strong that it suppresses the r-mode instability entirely, or the mode saturation amplitude is so small that it is unimportant at all the spin frequencies observed, or else the neutron star is in a limit cycle (Levin 1999) of spin-up to some critical frequency, followed by rapid spin-down and heating. A detailed study of the spin evolution is necessary to determine if the spin periods of the neutron stars are consistent with such a scenario. In particular, it remains an open question as to whether one should expect to observe a population of slowly spinning neutron stars with low-mass companions, such as Her X-1, 4U 1626–67, and GX 1+4.
The study of r-modes in neutron stars is rapidly evolving in response to the interest aroused in the general relativity community. While this paper was being refereed, several theoretical developments occurred that are relevant for this study. First, Lindblom & Mendell (1999) showed that unless the superfluid entrainment parameter assumes a very special value, superfluid mutual friction is not competitive with gravitational radiation for the r-mode amplitude evolution. There is therefore a conflict between theory and experiment: while theoretical calculations show that r-modes in superfluid neutron stars should be excited, the observations discussed in this paper are direct evidence against the r-modes having a sufficient steady amplitude to limit the spin of the neutron star, and the clustering of LMXB spin frequencies argues against a recurrent instability. This contradiction is likely resolved by consideration of the presence of a solid crust (Bildsten & Ushomirsky 1999), which dramatically enhances the dissipation rate and damps the r-modes for typical core temperatures and spin frequencies of LMXBs. The findings presented in this paper lend observational support to that conclusion.
We thank Tom Prince for stimulating our interest in looking for signatures of gravitational wave emission from LMXBs in ways that do not require a gravitational wave detector. We also thank Lars Bildsten, Curt Cutler, Lee Lindblom, Ben Owen, and Yuri Levin for numerous discussions. This research was supported by NASA via grant NAGW-4517. E.F.B. is supported by a NASA GSRP Graduate Fellowship under grant NGT5-50052. G.U. acknowledges fellowship support from the Fannie and John Hertz Foundation.
# The Isolated Photon Cross Section in $`p\overline{p}`$ Collisions at $`\sqrt{s}=1.8\mathrm{TeV}`$<sup>*</sup><sup>*</sup>*submitted to Physical Review Letters
## Abstract
We report a new measurement of the cross section for the production of isolated photons, with transverse energies ($`E_T^\gamma `$) above 10 GeV and pseudorapidities $`|\eta |<2.5`$, in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$TeV. The results are based on a data sample of 107.6 pb<sup>-1</sup> recorded during 1992–1995 with the DØ detector at the Fermilab Tevatron collider. The background, predominantly from jets which fragment to neutral mesons, was estimated using the longitudinal shower shape of photon candidates in the calorimeter. The measured cross section is in good agreement with the next-to-leading order (NLO) QCD calculation for $`E_T^\gamma >36\mathrm{GeV}`$.
preprint: Fermilab-Pub-99/354-E
Direct (or prompt) photons, by which we mean those produced in a hard parton-parton interaction, provide a probe of the hard scattering process which minimizes confusion from parton fragmentation or from experimental issues related to jet identification and energy measurement. In high energy $`p\overline{p}`$ collisions the dominant mode for production of photons with moderate transverse energy $`E_T^\gamma `$ is through the strong Compton process $`qg\to q\gamma `$. The direct photon cross section is thus sensitive to the gluon distribution in the proton. Direct-photon measurements allow tests of NLO and resummed QCD calculations, phenomenological models of gluon radiation, and studies of photon isolation and the fragmentation process.
Data from previous collider measurements have indicated an excess of photons at low $`E_T^\gamma (<25\mathrm{GeV})`$ compared with predictions of NLO QCD. This excess may originate in additional gluon radiation beyond that included in the QCD calculation, or reflect inadequacies in the parton distributions and fragmentation contributions.
In this Letter, we present a new measurement of the cross section for production of isolated photons with $`E_T^\gamma \ge 10`$ GeV and pseudorapidity $`|\eta |<2.5`$ in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$TeV, which supersedes our previous publication. (Pseudorapidity is defined as $`\eta =-\mathrm{ln}\mathrm{tan}\frac{\theta }{2}`$ where $`\theta `$ is the polar angle with respect to the proton beam.) The higher statistical precision afforded by the increased luminosity ($`12.9\pm 0.7`$ pb<sup>-1</sup> recorded during 1992–1993 and $`94.7\pm 5.1`$ pb<sup>-1</sup> recorded during 1994–1995) motivated a refined estimation of the backgrounds. In particular, fully-simulated jet events were used in place of single neutral mesons to model background.
Photon candidates were identified in the DØ detector as isolated clusters of energy depositions in the uranium and liquid-argon sampling calorimeter. The calorimeter covered $`|\eta |<4`$ and had electromagnetic (EM) energy resolution $`\sigma _E/E\approx 15\%/\sqrt{E(\mathrm{GeV})}\oplus 0.3\%`$. The EM section of the calorimeter was segmented longitudinally into four layers (EM1–EM4) of 2, 2, 7, and 10 radiation lengths respectively, and transversely into cells in pseudorapidity and azimuthal angle $`\mathrm{\Delta }\eta \times \mathrm{\Delta }\varphi =0.1\times 0.1`$ ($`0.05\times 0.05`$ at shower maximum in EM3). Drift chambers in front of the calorimeter were used to distinguish photons from electrons, or from photon conversions, by ionization measurement.
A three-level trigger was employed during data taking. The first level used scintillation counters near the beam pipe to detect an inelastic interaction; the second level required that the EM energy in calorimeter towers of size $`\mathrm{\Delta }\eta \times \mathrm{\Delta }\varphi =0.2\times 0.2`$ be above a programmable threshold. The third level was a software trigger in which clusters of calorimeter cells were required to pass minimal criteria on shower shape.
Offline, candidate clusters were accepted within the regions $`|\eta |<0.9`$ (central) and $`1.6<|\eta |<2.5`$ (forward) to avoid inter-calorimeter boundaries; in the central region, clusters were required to be more than 1.6 cm from azimuthal boundaries of modules. The event vertex was required to be within 50 cm of the nominal center of the detector along the beam. Each candidate was required to have a shape consistent with that of a single EM shower, to deposit more than 96% of the energy detected in the calorimeter in the EM section, and to be isolated as defined by the following requirements on the transverse energy observed in the annular region between $`\mathcal{R}=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}=0.2`$ and $`\mathcal{R}=0.4`$ around the cluster: $`E_T^{0.4}-E_T^{0.2}<2`$GeV. The combined efficiency of these selections was estimated as a function of $`E_T^\gamma `$ using a detailed Monte Carlo simulation of the detector and verified with electrons from $`Z\to ee`$ events, and found to be $`0.65\pm 0.01(0.83\pm 0.01)`$ at $`E_T^\gamma =40\mathrm{GeV}`$ for central (forward) photons. An uncertainty of 2.5% was added in quadrature to this to allow for a possible dependence on instantaneous luminosity. Photon candidates were rejected if there were tracks within a road $`\mathrm{\Delta }\theta \times \mathrm{\Delta }\varphi \approx 0.2\times 0.2`$ radians between the calorimeter cluster and the primary vertex. The mean efficiency of this requirement was measured to be $`0.83\pm 0.01(0.54\pm 0.03)`$ in the central (forward) region. The inefficiency stemmed mainly from photon conversions and overlaps of photons with charged tracks (either from the underlying event or from other $`p\overline{p}`$ interactions).
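The isolation requirement above is purely geometric and easy to state in code. A schematic Python sketch (the array-based representation of calorimeter cells is assumed here for illustration, not a description of the actual DØ reconstruction software):

```python
import numpy as np

def is_isolated(et_cells, deta, dphi, threshold=2.0):
    """Isolation cut: the transverse energy in the annulus 0.2 < R < 0.4 around
    the candidate, i.e. E_T(0.4) - E_T(0.2), must be below `threshold` (GeV).
    Inputs are per-cell E_T values and their (eta, phi) offsets from the cluster."""
    r = np.sqrt(deta**2 + dphi**2)
    return et_cells[(r > 0.2) & (r < 0.4)].sum() < threshold

# e.g.: is_isolated(np.array([1.5, 0.8]), np.array([0.05, 0.3]), np.array([0.0, 0.1]))
```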
Background to the direct-photon signal comes primarily from two-photon decays of $`\pi ^0`$ and $`\eta `$ mesons produced in jets. While the bulk of this background is rejected by the selection criteria (especially the isolation requirement), substantial contamination remains, predominantly from fluctuations in jet fragmentation, which can produce neutral mesons that carry most of the jet energy. For a $`\pi ^0`$ meson with $`E_T^\gamma >10`$GeV, the showers from its two-photon decay coalesce and mimic a single photon in the calorimeter.
The fraction of the remaining candidates that are genuine direct photons (the purity $`\mathcal{P}`$) was determined using the energy $`E_1`$ deposited in the first layer (EM1) of the calorimeter. The decays of neutral mesons primarily responsible for background produce two nearby photons, and the probability that at least one of them undergoes a conversion to an $`e^+e^{-}`$ pair either in the cryostat of the calorimeter or the first absorber plate is roughly twice that for a single photon. Such showers due to meson decays therefore start earlier in the calorimeter than showers due to single photons, and yield larger $`E_1`$ depositions for any initial energy. A typical distribution in our discriminant, $`\mathrm{log}_{10}\left[1+\mathrm{log}_{10}\{1+E_1(\mathrm{GeV})\}\right]`$, is shown in Fig. 1. This variable emphasized differences between direct photons and background, and was insensitive to noise and event pileup. A small correction, based on electrons from $`W`$ decays, was made to bring the $`E_1`$ distribution for the 1992–1993 data into agreement with the 1994–1995 data. The distribution in the discriminant was then fitted to the sum of a photon signal and jet background, both of which were obtained from Monte Carlo simulation. Two components of the jet background were included separately: those with and those without charged tracks inside the inner isolation cone ($`\mathcal{R}=0.2`$ from the photon candidate). This was done to minimize constraints in the fit from the (relatively poorly determined) tracking efficiency and from the model used for jet fragmentation.
Direct photon and QCD jet events were generated using pythia and then passed through the geant detector-simulation package, and overlaid with data acquired using a random trigger to model noise, pileup, underlying event, and multiple $`p\overline{p}`$ interactions. The simulated $`E_1`$ was corrected for imperfect modeling of the material in the detector. We assumed that the Monte Carlo energy could be parametrized as $`E_1^{\mathrm{MC}}=\alpha +\beta E_1`$, with the parameters $`\alpha `$ and $`\beta `$ determined from data: $`\beta `$ from the $`W\to e\nu `$ sample and $`\alpha `$ from the photon data. The fits to extract the purity $`\mathcal{P}`$ were performed for different values of $`\alpha `$, and the total $`\chi ^2`$ was minimized for all $`E_T^\gamma `$.
To reduce computation time, the jet background events were preselected just after their generation to have highly electromagnetic jets. The background subtraction technique used in this analysis employs fully-simulated jet events, whereas the previous analysis modeled the background with isolated neutral mesons. With our increased statistics, it was found that individual isolated mesons could not adequately model the background. Indeed, our simulation shows that less than half of the background can be attributed to the presence of single neutral mesons within the inner isolation cones (of $`\mathcal{R}=0.2`$). The new approach provided a much better description of the shower shape and isolation energy, and resulted in an increased estimate of the signal purity.
Fitting was done separately for samples at central and forward regions, for each $`E_T^\gamma `$ bin, using the package hmcmll, with the constraint that the fractions of signal and background were between 0.0 and 1.0. The resulting purity $`\mathcal{P}`$ and its uncertainty is shown in Fig. 2 as a function of $`E_T^\gamma `$. As well as the fitting error, a systematic error was assigned to the use of pythia to model jets. This uncertainty was estimated by varying the multiplicity of neutral mesons in the core of the jet by $`\pm 10`$.
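The purity fit amounts to finding the signal fraction whose template mixture best matches the observed discriminant distribution. A minimal stand-in for the hmcmll likelihood fit (a simple binned least-squares scan, which ignores the finite Monte Carlo template statistics that the full fit propagates):

```python
import numpy as np

def fit_purity(data, sig, bkg):
    """Scan the signal fraction P in [0, 1] for the mixture N*(P*sig + (1-P)*bkg)
    that best matches the observed histogram `data` (all inputs are bin counts)."""
    sig, bkg = sig / sig.sum(), bkg / bkg.sum()   # normalize templates to unit area
    n = data.sum()
    grid = np.linspace(0.0, 1.0, 1001)
    chi2 = [np.sum((data - n * (p * sig + (1.0 - p) * bkg))**2
                   / np.maximum(data, 1.0)) for p in grid]
    return grid[int(np.argmin(chi2))]
```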
The differential cross section $`d^2\sigma /dE_T^\gamma d\eta `$, determined after correction for purity and efficiency (but not corrected for energy resolution), is shown as a function of $`E_T^\gamma `$ in Fig. 3 and in Table I. The purity corrections were applied point by point, using the same binning for the cross section as for the determination of purity. The correlated errors consist of the quadrature sum of the uncertainties on luminosity, vertex requirements, and energy scale in the Monte Carlo (which are energy independent) and the model for fragmentation (large uncertainty at low $`E_T^\gamma `$ because of the low purity in this region). The uncorrelated errors include the statistical uncertainty, the fitting error, and the statistical uncertainties on the determination of acceptance, trigger efficiency, and the efficiency of the selection criteria.
These new measurements are 20–30% higher than our previously published results. The change is well understood, and is due to the improvements in the Monte Carlo model used to estimate the purity, and in calculations of the acceptance and luminosity.
We compare the measured cross section with NLO QCD calculations using the program of Baer, Ohnemus, and Owens. This calculation includes $`\gamma +\mathrm{jet}`$, $`\gamma +\mathrm{two}`$ jets, and two jets with bremsstrahlung in the final state. In the latter case, a jet collinear with the photon was created with the remaining fraction of the energy of the relevant final-state parton, so that the isolation cut could be modeled. For all sources of signal, the final-state parton energies were smeared using the measured EM and jet resolutions. The isolation criterion was imposed by rejecting events with a jet of $`E_T>2\mathrm{GeV}`$ within $`\mathcal{R}=0.4`$ of the photon. (Smearing photon and jet energies changed the QCD prediction by less than 4%.) CTEQ4M parton distributions were used in the NLO calculations, with renormalization and factorization scales $`\mu _R=\mu _F=E_T^{\mathrm{max}}`$, where $`E_T^{\mathrm{max}}`$ is the larger of the transverse energies of the photon or the leading jet. If, instead, the scales $`\mu _R=\mu _F=2E_T^{\mathrm{max}}`$ or $`E_T^{\mathrm{max}}/2`$ were employed, the predicted cross sections changed by $`<6`$%.
Figure 4 shows the difference between experimental and theoretical differential cross sections ($`d^2\sigma /dE_T^\gamma d\eta `$), divided by the theoretical values. In both central and forward regions, the NLO QCD predictions agree with the data for transverse energies $`E_T^\gamma >36\mathrm{GeV}`$. At lower transverse energies, particularly for $`|\eta |<0.9`$, our measured cross section exceeds the expectation from NLO QCD, a trend consistent with previous observations at collider and fixed target energies. Using contributions from both correlated and uncorrelated errors, the $`\chi ^2`$ value for the data compared with NLO QCD is 8.9 in the central region and 1.9 in the forward region, for $`E_T^\gamma \lesssim 36\mathrm{GeV}`$ in each case (the first 4 data points).
These data complement and extend previous measurements, and provide additional input for extraction of parton distributions through global fits to all data. The difference between the data and NLO QCD for $`E_T^\gamma <36`$ GeV suggests that a more complete theoretical understanding of processes that contribute to the low-$`E_T^\gamma `$ behavior of the photon cross section is needed.
We thank J. F. Owens for his assistance with the theoretical calculations. We thank the Fermilab and collaborating institution staffs for contributions to this work, and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à l'Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), CONICET and UBACyT (Argentina), A.P. Sloan Foundation, and the Humboldt Foundation.
# Complex extended line emission in the cD galaxy in Abell 2390
## 1 Introduction
The central galaxies of rich clusters often differ remarkably from other cluster ellipticals in their morphological and spectroscopic properties; in particular, a large fraction ($`\sim 40`$%) show strong nebular emission lines and an excess ultraviolet/blue continuum (e.g., Johnstone, Fabian & Nulsen 1987; Heckman et al. 1989; McNamara & O'Connell 1993; Crawford et al. 1999). These features are more common in clusters selected from X-ray samples, and especially in those which are known to have strong cooling flows. The optical line emission is generally very concentrated in a central region of only 5–10 kpc (e.g. Heckman et al. 1989; Crawford et al. 1999), sometimes with filaments extending beyond the stellar continuum, $`\sim 20`$ kpc into the intracluster medium (e.g. Cowie et al. 1983; Romanishin & Hintzen 1988; Crawford & Fabian 1992). The cooling flow gas itself extends to much larger scales ($`\gtrsim `$100 kpc).
The origin of the nebular emission in cooling flow clusters is uncertain, and is likely not the same in all clusters. Generally, the line luminosity is too strong to arise directly from the cooling gas. Heckman et al. (1989) showed that the spatial variation of emission line ratios generally corresponds better with shock models, rather than photoionization models. Other authors (e.g. Filippenko & Terlevich 1992, Allen 1995) claim the line ratios can be produced by very massive O-stars, which may form in the collision of cold gas clouds within the cooling flow. The star formation rates determined from this nebular emission, assuming the gas is photoionised by massive stars, are generally 10–100 $`M_{\odot }`$ yr<sup>-1</sup> (e.g. Johnstone et al. 1987; McNamara & O'Connell 1993; Allen 1995; Crawford et al. 1999), which are at least a factor of 10 less than the mass inflows implied by the cooling flows.
The luminous emission line nebulae in these unusual galaxies are often asymmetrically distributed about the galactic nucleus (Crawford & Fabian 1992). McNamara & O'Connell (1993) observe strong colour structure in the blue spectra of two such galaxies which have smooth I-band isophotes. In some cases, this structure is seen to correlate with features at radio wavelengths (e.g. Heckman et al. 1989; Edge et al. 1999) which suggest ionisation of the gas is at least partly due to a central radio source. The velocity structure of the nebulae also tends to be complex. Heckman et al. (1989) found that the velocities of H$`\alpha `$ nebulae in nine central galaxies are very disordered (uncorrelated with position), with velocities of $`\sim 200`$ km/s. They claim this is consistent with a scenario in which the gas is clumpy and infalling (in a cooling flow), such that the "random" fluctuations in velocity are due to clumps observed in both the foreground and background of the centre. Occasionally much higher velocities are observed; Johnstone & Fabian (1995) observe Ly$`\alpha `$ emission and absorption velocities at 1000 km/s and greater, which are likely also due to nebulae infalling along the line of sight (Haschick, Crane & van der Hulst 1982).
Recent observations have detected large amounts of dust in some central galaxies, often amounting to $`E_{B-V}\sim 0.5`$ (e.g. Hu 1992; Allen et al. 1995; McNamara et al. 1996; Pinkney et al. 1996; Crawford et al. 1999). The presence of such large amounts of dust may itself require a high star formation rate to replenish the supply destroyed by sputtering (e.g. Draine & Salpeter 1979). Neglecting this dust when analysing emission line fluxes can lead to an underestimation of the implied star formation rates by an order of magnitude.
The cluster Abell 2390 is host to a particularly strong cooling flow, with an inflow rate of $`\sim 800M_{\odot }`$ yr<sup>-1</sup> derived from ROSAT observations (Pierre et al. 1996). The galaxy populations in this cluster have been studied in some detail by the Canadian Network for Observational Cosmology (CNOC1) consortium, using images and spectra from the Canada France Hawaii Telescope (Yee et al. 1996, Abraham et al. 1996); evidence for infall and strong population gradients throughout the cluster is seen. The central bright cD galaxy of the cluster has considerable size and structure and was studied using the CNOC1 database by Davidge and Grinder (1995). It has a lumpy morphology at resolutions of a few arcseconds or better, extremely blue ($`g-r`$) and ($`U-B`$) colour (Smail et al. 1998) and a predominantly young stellar population, with strong \[O II\] emission (rest frame equivalent width of 110 $`\pm `$ 2 Å). Davidge and Grinder note that, in this respect, this galaxy is similar to the cD galaxies of CNOC1 clusters 0839+29 and 1455+22, and different from other cDs in that survey. This galaxy has also been studied in detail at submillimetre, radio, infrared and optical wavelengths by Edge et al. (1999); in particular, these authors note the presence of a strong blue lane (or "cone") in the archival WFPC2 observations which is oriented orthogonally to the 4.89 GHz radio map, suggesting that the gas in this cone is ionised by a strong nuclear source. Edge et al. and Lemonon et al. (1998) claim that there is some evidence for dust in this galaxy, though this is strongly dependent on the uncertain nature of the ionisation source.
We have obtained HST STIS observations of the cD galaxy in A2390 with a 2″ slit, which provides spatial and velocity information about the Ly$`\alpha `$ and \[OII\] emission lines, and the UV continuum. We compare these data with CFHT H$`\alpha `$ observations obtained as part of a larger study (Balogh & Morris 1999). With this data, we map the structure of nebular emission within $`\sim 10`$ kpc of the galaxy centre. We present our observations in §2, and our results on the spatial and velocity structure of the emission line gas in §3. In §4 we discuss some of the implications of these observations. Our results are summarized in §5.
## 2 Observations
### 2.1 STIS Observations
HST STIS observations were obtained with the 2″ wide slit aligned at $`25^{\circ }`$ to the direction of the blue lane seen in the WFPC2 images (§2.3). Standard reduction procedures (flats and wavelength calibration) were followed. The slit direction was chosen to include another central cluster galaxy, and two of the gravitational arcs near the cluster centre, for additional information. Spectra were taken at both optical and UV wavelengths, as summarized in Table 1. The blue G430L spectrum has resolved \[O II\] and H$`\gamma `$ emission, while the G750L exposure is too short to be useful, though it does show H$`\alpha `$ emission and continuum. L$`\alpha `$ emission is clearly resolved in the FUV MAMA spectra. The very wide slit ensured that all bright central features in the galaxy were included, and the resulting spectra are essentially slitless, so that spectral feature positions depend on their physical location as well as their radial velocities. The STIS observation first centred on a nearby bright star, and then moved to the galaxy by accurate blind offset. Since the galaxy is extended, its coordinates may not be exactly those of the nucleus, and there is some uncertainty (perhaps $`\sim `$0.1″) in the exact location of the 2″ wide slit.
### 2.2 H$`\alpha `$ Imaging
Images of the cD galaxy in H$`\alpha `$+N\[II\]<sup>1</sup><sup>1</sup>1 Hereafter, we refer to this as H$`\alpha `$ alone; corrections for N\[II\] are made when necessary. light were taken with OSIS on the CFHT, with a specially designed interference filter (Balogh 1999, Balogh & Morris 1999). A continuum image, formed by combining narrow band images redward and blueward of H$`\alpha `$, was subtracted from the on-line image, after matching the PSF (0.8″), to produce the final image. Full details of the reduction procedure are given in Balogh (1999). The H$`\alpha `$ image was aligned with the WFPC2 images by matching the H$`\alpha `$ continuum image with the WFPC2 F814W image, and this should be good to about 0.1″. H$`\alpha `$ fluxes could not be accurately determined, since the nights were not photometric, but equivalent widths are reliably measured. The H$`\alpha `$ filter used was quite wide, 324 Å FWHM, so H$`\alpha `$ emitted at $`|v|\lesssim 6000`$ km s<sup>-1</sup> relative to the cD galaxy will be detected.
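The quoted velocity coverage follows from the filter width: half the FWHM, expressed as a fraction of the observed H$`\alpha `$ wavelength at $`z=0.23`$, times the speed of light. A quick check:

```python
C_KM_S = 2.998e5               # speed of light, km/s
fwhm = 324.0                   # filter FWHM, Angstroms
lam_obs = 6563.0 * 1.23        # observed H-alpha wavelength at z = 0.23, Angstroms

print(0.5 * fwhm / lam_obs * C_KM_S)  # ~6000 km/s half-width, as quoted above
```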
### 2.3 Comparison with Archival WFPC2 Images
We have obtained images of the cD galaxy in Abell 2390 from the HST archive; it has been observed with the WFPC2 with the F555W and F814W filters. Figure 5 shows all of the images with the same spatial scale, and also the width of the slit, the length of which is vertical in the orientation of the diagram. The first panel shows the WFPC2 F555W image, and the third panel presents the F814W/F555W ratio, thus showing the colour distribution (light shades correspond to blue colours). The horizontal, solid line shown in the F555W image represents the size and approximate location of the 2″ slit used to obtain the STIS spectra. At the cluster redshift of $`z=0.23`$, 1″ corresponds to 3.4 kpc<sup>2</sup><sup>2</sup>2We assume a cosmology with H=75 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_M`$=0.3 and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. We consider the redder knot that lies in the centre of the general galaxy light to be the nucleus; this feature is surrounded by red material. There is also a red, resolved knot, 23″ E of S from the galaxy that is a separate small galaxy of red colour (number 956 in Yee et al. 1996). It is a normal-looking elliptical with an old stellar population, and the spectra and H$`\alpha `$ images do not reveal any spatially varying properties or line emission. It is too far away from the cD galaxy to appear in the UV spectral image, which covers only 25″. There is a faint signal at the position of the bright arc about 8″ S of the cD, which is flat and without strong emission in the observed blue wavelength range. This is consistent with a very blue stellar population, at redshift larger than 0.23. There is no significant signal at the position of a larger red arc about 14″ from the cD galaxy. We do not discuss these other objects further.
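The 3.4 kpc arcsec<sup>-1</sup> scale can be checked by direct numerical integration of the comoving distance for the footnote's cosmology; a minimal sketch assuming a flat universe:

```python
import math

H0, Om, Ol, z = 75.0, 0.3, 0.7, 0.23   # cosmology adopted in the footnote
c = 2.998e5                            # speed of light, km/s

# comoving distance by midpoint integration of c dz / H(z)
n, Dc = 10000, 0.0
for i in range(n):
    zi = (i + 0.5) * z / n
    Dc += (z / n) * c / (H0 * math.sqrt(Om * (1.0 + zi)**3 + Ol))  # Mpc

Da = Dc / (1.0 + z)                    # angular diameter distance, Mpc
arcsec = math.pi / (180.0 * 3600.0)    # one arcsecond in radians
print(Da * arcsec * 1000.0)            # ~3.4 kpc per arcsec, as quoted
```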
## 3 Results
### 3.1 Spatial Structure
The most prominent blue features seen in the WFPC2 images of the cD galaxy in Figure 5 are the resolved knots below the nucleus, just above the nucleus, and the linear feature above the nucleus. These all lie roughly along a line. Edge et al (1999) interpret the blue lane as a double "conical" feature, similar to that seen in Seyfert galaxies, and suggest that this material may be ionised by the strong nuclear source.
The UV spectrum shown in the right hand panel of Figure 5 (and, spatially compressed, in Figure 5) shows a remarkable L$`\alpha `$ complex. The direct and dispersed images are lined up exactly in the spatial direction along the slit. The 2″ slit covers the brightest, central region ($`\sim `$7 kpc) of the galaxy and its various bright knots; the continuum profile of the halo itself has an effective radius of about 10″. There is L$`\alpha `$ emission that extends over the whole length of the inner galaxy, and is considerably more extended below (NW) the nucleus, beyond the bright blue knots. The brightest L$`\alpha `$ knots are the nucleus itself and the knots above and below it in our image.
There is UV continuum from three of the knots, including one fainter blue knot below the nucleus. However, continuum light from the nucleus itself is not detected. The CCD blue spectral image shows that there is continuum at the nucleus and the bright knots above it in our diagram, but not from the knot below which is bright in the UV. The \[O II\] line emission is brightest at the nucleus but is seen clearly at the two L$`\alpha `$ knots as well. The blue lane does not correspond with either line or continuum flux in our data, so presumably arises from a combination of lines and continuum in the WFPC2 passbands.
There is a prominent, short wavelength edge to the L$`\alpha `$ emission. This is shown more clearly in Figure 5, which is compressed along the spatial direction to make this edge and its curvature more apparent. Where the edge crosses the lower continuum region, there is some evidence for absorption over an interval shortward of the edge; this can be seen in the spectral profile shown in Figure 5. However, the shape of the continuum is not well defined, which makes this determination uncertain.
As noted above, in contrast to the L$`\alpha `$ image, the nucleus is the brightest feature in the dispersed \[O II\] image. Furthermore, no evidence for the sharp L$`\alpha `$ edge is seen in the \[O II\] profile, as shown in Figure 5. We note that the \[O II\] profile we show is predominantly that of the nuclear region: the contribution from the L$`\alpha `$ bright knot is small and does not affect the shortward side of the profile we show. The presence of \[O II\] emission at velocities more negative than those at L$`\alpha `$ suggests that the L$`\alpha `$ edge may be due to absorption, since the \[O II\] doublet (velocity separation $`\sim `$220 km s<sup>-1</sup>) is from forbidden transitions. However, the \[OII\] emission signal is too weak to enable us to draw strong conclusions. The velocity structure of the L$`\alpha `$ edge and of the various emission features is discussed in more detail in §3.2.
The image of the cD galaxy in H$`\alpha `$ light is shown in the left hand panel of Figure 5. The nuclear regions of the galaxy, and at least two regions extending several arcseconds below (NW) the nucleus, are clearly detected. The photometric zero point of the H$`\alpha `$ image is uncertain by at least $`0.3`$ mag, as observing conditions were poor; the rest frame equivalent width of the line (which is independent of this zero point) is 150$`\pm 7`$ Å within a 5″ diameter aperture.
The nuclear region is extended towards the knot above (in Figure 5) the nucleus, but does not show a peak at the same position. The first H$`\alpha `$ emission region below the nucleus corresponds with the faint inner knots in the WFPC2 image, and extends towards the brighter blue knot below. The absence of detected H$`\alpha `$ emission to the SE of the nucleus, along the clearly visible blue lane, is notable. However, the H$`\alpha `$ data are not very sensitive; the limiting H$`\alpha `$ magnitude is $`m_{AB}=21.7`$ (for a 2$`\sigma `$ point source detection), which corresponds to a luminosity limit of 1.6$`\times 10^{41}`$ ergs/s. Weaker emission may be present along the blue lane; clearly, however, the strongest emission originates from the NW of the nucleus.
The L$`\alpha `$ bright knot emission distribution appears to correspond more closely with the features in the HST continuum images than with the H$`\alpha `$ image. However, this comparison is complicated by the very different spatial resolutions, and the faintest emission boundaries do resemble those of H$`\alpha `$ if we assume that the slit edge cuts off the leftmost part of the L$`\alpha `$ emission. The H$`\alpha `$ emission peak farther to the NW extends towards the two fainter blue knots farthest from the nucleus, but does not correspond in detail with their positions. Plots of the row-averaged light from the H$`\alpha `$, F555W, and L$`\alpha `$ images (not shown) confirm that they differ significantly. In particular, the L$`\alpha `$ emission is strong at the blue knots with UV continuum, while H$`\alpha `$ is not; there are other places where the reverse is true. Similar effects are seen in nearby starburst galaxies, which Conselice et al. (1999) claim are consistent with a picket fence dust distribution. However, it is still unclear whether, in this case, the observed structure is mostly due to differences in extinction, ionisation, or sensitivity.
We note finally that we have a VLA C-configuration 1.4 GHz map of the galaxy that shows a weak extension in the general direction of the NW knots. Edge et al (1999) claim that the A and B configurations reveal a smaller (0.3″) extension at 5 GHz, possibly normal to the blue lane. Deeper, high resolution radio maps at several frequencies would be of interest in understanding whether there are jets along the blue lanes or not.
### 3.2 Velocity Structure
Because of the wide slit, there is ambiguity between spatial and velocity structure in the STIS spectra which plagues our analysis of the nebular velocities. The geocoronal emission produces an emission line that is 50 Å wide, from uniformly distributed emission at zero radial velocity. A similarly distributed L$`\alpha `$ emission wider than the 2″ wide slit around the galaxy would produce the same uniformly illuminated spectral feature from 1465-1526 Å. Line emission from spatially resolved knots within the slit will be shifted by the combination of radial velocity and position within the slit, broadened by the intrinsic motions and spectroscopic resolution.
To measure the velocities of resolved knots, which one can hope to locate spatially from continuum images, we matched the dispersed and undispersed image features in the dispersion direction and noted the relative offsets for as many recognisable components as possible. This removes the spatial shifts from the dispersed image before measuring velocity shifts, a technique that has been used and described e.g. in Hutchings et al (1998) for NGC 4151 slitless data. The quantitative measures were obtained by superposing contour plots of knots in the dispersed and undispersed images. This does not provide an absolute scale, so the zero point was set by identifying the nucleus as described above and referring all other velocity shifts to it. The nuclear velocity is assumed to have the ground-based redshift 0.2301 (Abraham et al 1996) derived from a narrow slit spectrum of the central galaxy. (Yee et al 1996 published a redshift of 0.23024. For the purpose of this paper, the difference, which amounts to some 40 km s<sup>-1</sup>, is not significant, and is within the quoted errors of Yee et al.)
This process assumes that L$`\alpha `$ emission, aside from the brightest knots, arises at the peak of the F555W image flux - in particular, along the blue lanes on either side of the nucleus. We regard this as reasonable given the correspondence in overall flux distributions along the galaxy. However, this can only be confirmed by narrower slit observations. If, for example, the L$`\alpha `$ emission arises along the faint extensions in the H$`\alpha `$ image to the upper left and lower right, rather than the continuum flux along the blue lanes, then the emission velocities will be quite different. Figure 5 shows the emission knot velocities based on the blue continuum image and also the H$`\alpha `$ image. The values derived from the blue continuum image form a rough S-shape, symmetrical about a point 0$`\stackrel{}{\mathrm{.}}`$2 below the nucleus, about mid-way between the outer UV-continuum knots. The central region has a "rotation" curve of amplitude about 400 km s<sup>-1</sup>. Further out, this curve reverses, reaching a full amplitude of more than 7000 km s<sup>-1</sup> across the outer regions. Weaker evidence for such high velocities is also present in the \[OII\] spectra. The velocities we derive if the L$`\alpha `$ arises in the H$`\alpha `$ regions (dots in Figure 5) are also high. Note that the dots in Figure 5 at positions $`<1`$″ indicate double-valued velocities, since the H$`\alpha `$ emission splits and forms two "tails" in this region. This significantly reduces the velocities in the SW, where identification of L$`\alpha `$ with the blue lane is most in doubt. However, the measured velocities corresponding to the knots are still quite high, and probably arise in in- or out-flowing material.
The curved L$`\alpha `$ edge is a particularly striking feature. It corresponds closely to the position of the edge of the slit, and this could give rise to this feature if there is extended L$`\alpha `$ emission across much of the galaxy, with weak velocity structure to produce the curvature. It is difficult to know the slit position exactly as the geocoronal line is very strong and has broad wings. In addition, there is a zero point shift for the whole velocity scale, derived from a different narrow slit, and there is some uncertainty in the precise redshift from an extended source like this galaxy. However, if the L$`\alpha `$ edge is caused by the slit position, the curvature and other morphology seen in the dispersed image likely reflects primarily the spatial structure of the ionised gas; the overall shape of the faintest emission is similar for undispersed H$`\alpha `$ and dispersed L$`\alpha `$, suggesting that this may be the case.
However, there are some reasons to believe that the sharp edge corresponds instead to a real absorption feature:
1. The line emission is not evenly distributed, but generally peaks near the shortward edge and falls off slowly towards longer wavelengths. The emission image has no redward edge although we might expect emission to cover that part of the galaxy, especially at the top of our diagrams, where the galaxy tilts to that side. If this is spatial structure, we have the unlikely situation where emission arises preferentially toward the slit.
2. The shortward edge is not straight, but has smooth curvature over a large distance. Since it appears to curve in a "C" shape, rather than the "S" shape characteristic of rotation curves, it is difficult to reconcile this with velocity structure.
3. There are dips in the continuum sources shortward of the emission edge, particularly the lowest one in the diagram, indicative of absorption. However, the signal is weak and it is difficult to interpolate the continuum in the plots, as shown in Figure 5; therefore we can only claim that there is a suggestion of real absorption, with low significance.
4. The \[O II\] image does not show a shortward edge, as expected if the L$`\alpha `$ edge is due to absorption, since \[O II\] is a forbidden line. While the signal is low, the emission does have wings that lie in the velocity region "shortward" of the edge in L$`\alpha `$. While quite suggestive, we note that the absence of an edge in \[OII\] could also be explained if the \[OII\] emitting gas is distributed differently (in velocity or position) from that of L$`\alpha `$.
Given this tentative evidence, we suggest as an alternative that the edge is a true absorption edge, due to an expanding envelope that lies across the whole line of sight. A similar phenomenon is seen in L$`\alpha `$ observations of $`\eta `$ Carinae: a curved shifted absorption edge over the length of the extended emission region (Gull, private communication). While we do not suggest strong similarities, it may be that the cD galaxy is surrounded by an outward-moving wind with a terminal velocity that is well-defined and about the same over all directions covered by the line of sight. The curvature could arise by projection effects if the expansion were at a constant velocity and in a spherical envelope.
To establish the shortward edge velocity under this assumption, we use the wavelength scale from the standard wavelength calibration for the STIS spectrum, with a zero point offset for the wide slit obtained by setting the centre of the strong and sharply-edged geocoronal L$`\alpha `$ emission at 1215.7 Å. The absorption edge wavelength was measured from plots of sections in the spectral image, as the 10% point up the short wavelength side of the emission (see examples in Figure 5). The absorption edge velocities were mapped as far as emission is detected in both directions away from the nucleus.
These edge measurements are shown in Figure 5, referred to as velocities with respect to the overall galaxy redshifted wavelength. This implies an expansion velocity of about 5000 km s<sup>-1</sup>. If we fit a simple expanding spherical absorber model to the curve, moving outwards at this velocity, its radius is about 12 kpc. This is not very sensitive to the zero point of the expansion: at 6000 km s<sup>-1</sup>, the implied radius is 13 kpc. However, if the emission arises spatially along the edge of the slit, we may have velocities as low as 0 km s<sup>-1</sup>, with respect to the galaxy mean redshift. The absorption trough seen against the continuum regions appears to extend to some 10000 km s<sup>-1</sup>, but its outer limit cannot be determined very well since the trough becomes weaker as it merges with the background or the weak UV continuum where present (see Figure 5).
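The projection geometry behind this fit can be made explicit with a short numerical sketch (this is an illustration, not the fitting code actually used, and the offsets and velocities below are hypothetical stand-ins for the measured edge):

```python
import numpy as np

# Minimal sketch: for a spherical shell of radius R expanding at constant
# speed v_exp, near-side material on a sightline at projected offset b
# approaches the observer at v_los(b) = v_exp*sqrt(1 - (b/R)^2), which
# traces the smooth "C"-shaped edge described in the text.

def edge_velocity(b_kpc, v_exp=5000.0, R_kpc=12.0):
    """Line-of-sight velocity (km/s) of the absorption edge at offset b."""
    x = np.clip(np.asarray(b_kpc) / R_kpc, -1.0, 1.0)
    return v_exp * np.sqrt(1.0 - x**2)

# hypothetical measured edge velocities (km/s) at projected offsets (kpc);
# a grid search over R then gives the best-fitting shell radius
b_obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
v_obs = np.array([5000.0, 4920.0, 4700.0, 4330.0, 3720.0])

radii = np.linspace(8.0, 20.0, 241)
chi2 = [np.sum((v_obs - edge_velocity(b_obs, R_kpc=R))**2) for R in radii]
print("best-fit R = %.1f kpc" % radii[np.argmin(chi2)])
```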
## 4 Discussion
There is probably a mixture of distributed emission in this galaxy, including diffuse emission spatially slanted relative to the slit, and resolved emission from the bright knots. We require narrower slit observations to untangle the situation; also, it would be of great interest to obtain undispersed images in H$`\alpha `$ and \[O II\] or \[O III\] at HST resolution to map the gas in the required detail.
We note that the shortward edge in L$`\alpha `$ is not centred on what we regard as the nucleus, but some 0$`\stackrel{}{\mathrm{.}}`$6 below it in our diagrams. The edge wavelength (outflow velocity?) is most extreme next to the brightest UV source, so that possibly this represents the central force driving any outflow. This knot accounts for 20% of the L$`\alpha `$ flux and has magnitude close to 22.3 in both filters (V and I), corresponding to absolute magnitude -18 (similar to the luminosity of the SMC). If this emission is due to photoionisation by massive stars, it is of interest to see if the implied star formation rate can provide enough energy to drive the purported outflow. From the H$`\alpha `$ photometry, we use the relation of Kennicutt (1998) to estimate star formation rates (assuming an extinction of 1 mag at H$`\alpha `$ and an \[NII\]/H$`\alpha `$ ratio of 0.5; Kennicutt 1992). This results in a star formation rate for the nucleus of 28$`\pm `$5 $`M_{\mathrm{\odot }}`$ yr<sup>-1</sup>, where the uncertainty reflects the estimated zero point uncertainty. Star formation rates of the same order of magnitude are measured in the two other resolved knots. We calculate the total energy released by supernovae, assuming a frequency of 0.005 per solar mass formed, a total energy per event of $`10^{51}`$ ergs, and that 10% of this energy is transferred to the gas. Thus, a star formation rate of 30 $`M_{\mathrm{\odot }}`$ yr<sup>-1</sup> corresponds to an energy release of 1.5$`\times 10^{49}`$ ergs yr<sup>-1</sup>; this is a copious amount of energy if the burst lasts several hundred Myr. However, this energy must go not only into the kinetic energy of the gas, but also into overcoming the cD gravitational potential and the pressure of the intracluster medium (ICM). It turns out that the latter is the most significant. Assuming an isothermal gas density profile, the gas pressure is given by:
$$P=\frac{1}{2\pi G}\left(\frac{kT}{\mu m_H}\right)^2\frac{\mathrm{\Omega }_b}{\mathrm{\Omega }_0}r^{-2},$$
(1)
where kT=9.5 keV is the gas temperature (David et al. 1993), $`\mu =0.59`$ is the mean molecular weight of the gas, $`\mathrm{\Omega }_b=0.0125h^{-2}`$ is the baryon density parameter predicted by nucleosynthesis (Copi, Schramm & Turner 1995), the total matter density $`\mathrm{\Omega }_0=0.3`$, and $`r`$ is the distance from the centre of the cluster. The total resistant force on an expanding sphere of material ($`4\pi r^2P`$) is therefore independent of distance. The work needed to push a shell of material out to $`r=12`$ kpc against this pressure is $`1.0h^{-2}\times 10^{61}`$ ergs; this is much more energy than is available from star formation alone (even if we consider additional energy from stellar winds). If the expanding shell is a real structure, there must be another source of energy driving it, or the ICM pressure must be much lower than we have estimated (as will be the case if, for example, the gas density is much lower than that of an isothermal sphere).
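This energy budget can be verified with a few lines of arithmetic; a minimal sketch, assuming the values stated above (and taking h = 1 for the $`h`$-dependent quantities), is:

```python
import numpy as np

# Order-of-magnitude check of the energy budget in the text (h = 1).
G, mH, kpc = 6.674e-8, 1.673e-24, 3.086e21        # cgs units
kT = 9.5e3 * 1.602e-12                            # 9.5 keV in erg
mu, Ob, Om = 0.59, 0.0125, 0.3
R = 12.0 * kpc

# supernova heating: 0.005 SN per solar mass, 1e51 erg each, 10% coupling
sfr = 30.0                                        # Msun/yr
E_sn_rate = sfr * 0.005 * 1e51 * 0.1
print("SN energy input : %.1e erg/yr" % E_sn_rate)     # ~1.5e49

# work against the ICM: with P ~ r^-2 the resisting force 4*pi*r^2*P is
# constant, so the work to push a shell to radius R is W = 4*pi*R^3*P(R)
P = (kT/(mu*mH))**2 * (Ob/Om) / (2.0*np.pi*G) / R**2
W = 4.0*np.pi * R**3 * P
print("work to R=12 kpc: %.1e erg" % W)                # ~1e61
```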
The origin of the blue lane to the SE of the nucleus is uncertain. If the L$`\alpha `$ detected to the upper left in Figure 5 is associated with this lane (as we assume for the solid line in Figure 5), this gas is approaching at very high velocity. However, H$`\alpha `$ is only strongly detected to the NW, where the gas velocity is clearly receding. It would be unusual to detect only the receding side of a double jet system in H$`\alpha `$. It seems likely, then, that the blue knots and nebular emission arising from the NW are due to infalling nebulae, and that the L$`\alpha `$ to the SW is not associated with the blue lane to the SE. It is more likely that the L$`\alpha `$ traces the faint H$`\alpha `$ SW extension, and that the linear, blue feature is not associated with any of the emission features. However, narrower slit observations are required to establish this with certainty. One possible explanation is that the blue lane corresponds to a conical hole blown through the intergalactic medium by an active nucleus; if this hole is devoid of dust, and/or lined with hot stars, it may appear blue, but without nebular emission lines.
## 5 Summary
Our data show that there is extended line emission in L$`\alpha `$, H$`\alpha `$, and \[OII\] in the cD galaxy at the centre of Abell 2390. This emission is seen distributed non-uniformly across and around the galaxy, in both diffuse and knotty components. It is difficult to separate the spatial and velocity shifts in our dispersed images, due to the wide slit; however, there is strong evidence for high velocities due to infalling nebulae. There is a sharp edge in the L$`\alpha `$ spectrum which, if due to absorption, may indicate outflow from the whole galaxy with very high velocity. It is unlikely that the star formation rates derived from the H$`\alpha `$ flux produce enough energy (through supernovae) to overcome the ICM pressure and produce such an outflow.
If the detected line emission arises from massive star formation, this implies that there is ongoing evolution of the cD galaxy. Young populations are found in only a few CNOC1 clusters by Davidge and Grinder (1995), and are rare in low redshift rich clusters (e.g. Oegerle and Hoessel 1991). These data provide additional evidence that clusters with strong cooling flows detected in X-rays have strong emission lines, with asymmetric distributions and spatially varying line ratios (e.g. Crawford et al. 1999). We suggest that the difference between the H$`\alpha `$ and L$`\alpha `$ images is the result of inhomogeneous dust extinction.
More detailed spectral mapping in other lines with high spatial resolution would be valuable in determining the precise velocity structure of this system. It would also be valuable to extend the sample of such observations to include cD galaxies with a wide range of line intensity, to quantify the connection between star-formation, inflows/outflows, and cooling flows. In particular, it would be of interest to study the other two CNOC1 cD galaxies with similarities to that of A2390, 0839+29 and 1455+22 (Davidge & Grinder 1995), in similar detail.
We thank Tim Davidge for helpful discussions on the results in this paper, and Alastair Edge for a careful reading of the manuscript and many useful suggestions. When this work was begun, MLB was supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) research grant to C. J. Pritchet and an NSERC postgraduate scholarship. MLB is currently supported by a PPARC rolling grant for extragalactic astronomy and cosmology at Durham.
References
Abraham R.G., et al , 1996, ApJ, 471, 694
Allen S. W. 1995, MNRAS, 276, 947
Allen S. W., Fabian A. C., Edge A. C., Bohringer H., White D. A. 1995, MNRAS, 275, 741
Balogh, M. L. 1999, Ph.D. thesis, University of Victoria
Balogh, M. L., Morris, S. L. 1999, in preparation
Conselice, C. J., Gallagher, J. S., Calzetti, D., Homeier, N., Kinney A. 1999, ApJ, accepted (astro-ph/9910382)
Copi C. J., Schramm D. N., Turner M. S. 1995, Science, 267, 192
Cowie L. L., Hu E. M., Jenkins E. B., York, D. G. 1983, ApJ, 272, 29
Crawford C. S., Allen, S. W., Ebeling, H., Edge, A. C., Fabian, A. C. 1999, MNRAS, 306, 857
Crawford C. S., Fabian A. C. 1992, MNRAS, 259, 265
David L. P., Slyz A., Jones C., Forman W., Vrtilek S. D., Arnaud K. A. 1993, ApJ, 412, 479
Davidge T.J. and Grinder M., 1995, AJ, 109, 1433
Draine B. T. & Salpeter E. E. 1979, ApJ, 231, 77
Edge A. C., Ivison, R. J., Smail, I., Blain, A. W., Kneib, J.-P. 1999, MNRAS 306, 599
Filippenko A. V., Terlevich R., 1992, ApJ, 397, L79
Haschick A. D., Crane P. C., van der Hulst, 1982, ApJ, 262, 81
Heckman T. M., Baum S. A., van Breugel W. J. M., McCarthy P., 1989, ApJ, 338, 48
Hu E. M. 1992, ApJ, 391, 608
Hutchings J.B. et al 1998, ApJ, 492, L115
Johnstone R. M., Fabian A. C., Nulsen P. E. J. 1987, MNRAS, 224, 75
Kennicutt, R. C. 1992, ApJ, 388, 310
Kennicutt, R. C. 1998, ARA&A 36, 189
Lemonon, L., et al 1998, A&A, 334, L21
McNamara B. R., O'Connell R. W., 1993, AJ, 105, 417
McNamara B. R., Wise M., Sarazin C. L., Jannuzi B. T., Elston R., 1996, ApJ, 466, 66
Oegerle W.R., and Hoessel J.G., 1991, ApJ, 375, 15
Pierre M., Le Borgne, J.-F., Soucail, G., Kneib, J.-P. 1996, A&A, 311, 413
Pinkney J., et al. 1996, ApJ, 468, L13
Romanishin W., Hintzen P., 1988, ApJ, 227, 131
Smail I., Edge A. C., Ellis R. S., Blandford R., 1998, MNRAS, 293, 124
Yee H.K.C., et al 1996, ApJS, 102, 289
# Analyzing data from DASI
## 1 Introduction
The study of the Cosmic Microwave Background (CMB) anisotropy holds the promise of answering many of our fundamental questions about the universe and the origin of the large-scale structure (see e.g. Bennett, Turner & White 1997). The advent of low-noise, broadband, millimeter-wave amplifiers (Pospieszalski 1993) has made interferometry a particularly attractive technique for detecting and imaging low contrast emission, such as anisotropy in the CMB. An interferometer directly measures the Fourier transform of the intensity distribution on the sky. By inverting the interferometer output, images of the sky are obtained which include angular scales determined by the size and spacing of the individual array elements.
In an earlier paper (White et al. 1998, hereafter WCDH) we outlined a formalism for interpreting CMB anisotropies as measured by interferometers. In this paper we extend this analysis to consider an efficient method of analyzing the data that would be obtained from a series of uncorrelated pointings of an interferometer. In particular we examine what the upcoming Degree Angular Scale Interferometer<sup>1</sup><sup>1</sup>1More information on DASI can be found at http://astro.uchicago.edu/dasi. (DASI, Halverson et al. 1998) experiment may teach us about cosmology.
DASI is an interferometer designed to measure anisotropies in the CMB over a large range of scales with high sensitivity. The array consists of 13 closely packed elements, each of 20 cm diameter, in a configuration which fills roughly half of the aperture area with a 3-fold symmetry. Each element of the array is a wide-angle corrugated horn with a collimating lens. DASI uses cooled HEMT amplifiers running between 26-36 GHz with a noise temperature of $`<15`$ K. The signal is filtered into ten 1 GHz channels. Details of the layout of the DASI horns are given below.
The outline of this paper is as follows. In §2 we adapt the formalism of WCDH to deal with real and imaginary parts of the visibilities independently, and show explicitly how the formalism automatically imposes the constraint that the sky temperature is real. With the issues defined, we describe the configuration of DASI in §3. In §4, 5 we analyze mock data appropriate to single fields from DASI, showing how to construct an optimal basis for likelihood analysis. We specifically address the question of optimal sampling on the sky, which was omitted from our previous work. This leads naturally to a discussion of Wiener filtered (Bunn, Hoffman & Silk 1996) map making and we reformulate the strategy for imaging the sky in this basis, pointing out the complementarity between NASA's MAP satellite<sup>2</sup><sup>2</sup>2http://map.gsfc.nasa.gov/ and DASI at 30 GHz. We discuss estimates of the angular power spectrum which are easy to implement for uncorrelated pointings of DASI in §6 and show that the finite field of view does not hamper our ability to reconstruct the angular power spectrum. Finally in §7 we discuss multi-frequency observations and present our power spectrum results in the context of "radical compression" (Bond, Jaffe & Knox 1998).
## 2 Formalism
The reader is referred to our earlier paper (WCDH) for a detailed discussion of how to formulate the data-analysis problem with an interferometer, plus references to earlier work. We briefly review the major elements here.
Under the assumption of a narrow frequency band and a distant source, the datum measured by an interferometer is proportional to the Fourier Transform of the observed intensity on the sky, i.e. the sky intensity multiplied by the instrument beam. We label the "primary" beam of the telescope by $`A(\mathbf{x})`$, with $`\mathbf{x}`$ a 2D vector lying in the plane<sup>3</sup><sup>3</sup>3Since the field of view of the currently operational or planned instruments is small ($`\sim 5^{\circ }`$) the sky can be approximated as flat with excellent accuracy. of the sky. Every pair of telescopes in the interferometer measures a visibility at a given point in the Fourier Plane, called the $`uv`$ plane,
$$\mathcal{V}(\mathbf{u})\propto \int d\mathbf{x}\,A(\mathbf{x})\mathrm{\Delta }T(\mathbf{x})e^{2\pi i\mathbf{u}\cdot \mathbf{x}}$$
(1)
where $`\mathrm{\Delta }T`$ is the temperature (fluctuation) on the sky and $`\mathbf{u}`$ is the variable conjugate to $`\mathbf{x}`$, with dimensions of inverse angle measured in wavelengths. The (omitted) proportionality constant, $`\partial B_\nu /\partial T`$ where $`B_\nu `$ is the Planck function, converts from temperature to intensity. The spacing of the horns and the position of the beam on the sky determine which value of $`\mathbf{u}`$ will be measured by a pair of antennae in any one integration. The size of the primary beam determines the amount of sky that is viewed, and hence the size of the "map", while the maximum spacing determines the resolution.
The 2-point function of the observed visibilities is the convolution of the sky power spectrum, $`S(\mathbf{u},\mathbf{v})`$, with the Fourier Transforms of the primary beams. If our theory is rotationally invariant the power spectrum is diagonal, $`S(\mathbf{u},\mathbf{v})=S(u)\delta (\mathbf{u}-\mathbf{v})`$, and on small scales (WCDH)
$$u^2S(u)\approx \frac{\ell (\ell +1)}{(2\pi )^2}C_\ell |_{\ell =2\pi u}\quad \mathrm{for}\quad u\gtrsim 10.$$
(2)
where the dimensionless $`C_\ell `$ are the usual multipole moments of the CMB anisotropy spectrum and $`\ell \sim \theta ^{-1}`$ is the multipole index. This approximation works at the few percent level for a standard Cold Dark Matter model when $`u\gtrsim 10`$ or $`\ell \gtrsim 60`$.
The Fourier Transform of the primary beam<sup>4</sup><sup>4</sup>4Throughout we will use a tilde to represent the Fourier Transform of a quantity. is the auto-correlation of the Fourier Transform of the point response, $`g`$, of the receiver to an electric field, $`\stackrel{~}{A}(u)=(\stackrel{~}{g}\star \stackrel{~}{g})(u)`$ and
$$A(\mathbf{x})=\int d\mathbf{u}\,\stackrel{~}{A}(\mathbf{u})e^{2\pi i\mathbf{u}\cdot \mathbf{x}},$$
(3)
Due to the finite aperture $`\stackrel{~}{A}`$ has compact support. In order to obtain a simple estimate of our window function it is a reasonable first approximation to take $`\stackrel{~}{A}`$ equal to the auto-correlation of a pill-box of radius $`D/2`$ where $`D`$ is the diameter of the dish in units of the observing wavelength. Specifically
$$\stackrel{~}{A}(\mathbf{u})=\frac{2\stackrel{~}{A}_0}{\pi }\left[\mathrm{arccos}\frac{u}{D}-\frac{u\sqrt{D^2-u^2}}{D^2}\right]$$
(4)
if $`u\le D`$ and zero otherwise. If we require $`A(0)=1`$ then this must integrate to unit area, so $`\stackrel{~}{A}_0^{-1}=\pi (D/2)^2`$, or the area of the dish. We show $`\stackrel{~}{A}(u)`$ in Fig. 2a. For now we shall treat a single frequency. Obviously for a fixed physical dish the $`\stackrel{~}{A}`$ will be slightly different for different wavelengths. We return to this complication in §7.
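A short numerical sketch of Eq. (4) makes the normalisation concrete; it also reproduces the value of $`\int d^2u\,\stackrel{~}{A}^2`$ quoted in §6 (the dish diameter below is an arbitrary example):

```python
import numpy as np

# Sketch of the aperture window of Eq. (4): the autocorrelation of a
# pill-box of diameter D (in wavelengths), with Atilde_0^-1 = pi*(D/2)^2
# so that A(x=0) = 1.

def A_tilde(u, D):
    """Fourier-plane beam Atilde(u) for a uniformly illuminated dish."""
    u = np.abs(np.asarray(u, dtype=float))
    A0 = 1.0 / (np.pi * (D/2.0)**2)
    out = np.zeros_like(u)
    m = u <= D
    x = u[m] / D
    out[m] = (2.0*A0/np.pi) * (np.arccos(x) - x*np.sqrt(1.0 - x*x))
    return out

D = 20.0                       # e.g. a 20 cm dish observed at 1 cm
u = np.linspace(0.0, D, 4001)
norm = np.trapz(2.0*np.pi*u * A_tilde(u, D)**2, u)
print("int d^2u Atilde^2 = %.3f D^-2" % (norm * D**2))   # ~0.585
```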
In WCDH we presented the formalism in terms of complex visibility data. However it is easier practically to implement the analysis in terms of the real and imaginary parts of the visibility. We write these as $`\mathcal{V}_j\equiv V_j^R+iV_j^I`$. The cosmological contribution to the real and imaginary components is uncorrelated, $`\langle V_i^RV_j^I\rangle =0`$. Assemble $`V^R`$ and $`V^I`$ into a vector consisting of first the real and then the imaginary parts; the signal correlation matrix of this vector takes block-diagonal form. Further the cosmological contribution obeys $`\langle V_i^RV_j^R\rangle =\pm \langle V_i^IV_j^I\rangle `$. It is straightforward to show that the cosmological contribution is proportional to
$$\frac{1}{2}\int d\mathbf{v}\,S(v)\stackrel{~}{A}(\mathbf{u}_i-\mathbf{v})\left[\stackrel{~}{A}(\mathbf{u}_j-\mathbf{v})\pm \stackrel{~}{A}(\mathbf{u}_j+\mathbf{v})\right]$$
(5)
where the $`\pm `$ refer to the real and imaginary parts respectively. (We have dropped the normalization factor which converts temperature to flux.) Note that if $`\mathbf{u}_i=-\mathbf{u}_j`$ the visibilities are completely (anti-)correlated, as would be expected given that $`\mathcal{V}(\mathbf{u})`$ is the Fourier Transform of a real field: $`\mathcal{V}^{*}(\mathbf{u})=\mathcal{V}(-\mathbf{u})`$. The factor of $`\frac{1}{2}`$ out front reflects the fact that the full variance $`C_{ij}^V\equiv \langle \mathcal{V}_i^{*}\mathcal{V}_j\rangle `$ is the sum of the real and imaginary components.
In the case where all correlated signal is celestial, the correlation function of the noise in each visibility is diagonal with
$$C_{ij}^N=\left(\frac{2k_BT_{\mathrm{sys}}}{\eta _AA_D}\right)^2\frac{1}{\mathrm{\Delta }_\nu t_an_b}\delta _{ij}.$$
(6)
If the noise in the real and imaginary components is uncorrelated, then each makes up half of this variance. Here $`k_B`$ is Boltzmann's constant, $`T_{\mathrm{sys}}`$ is the system noise temperature, $`\eta _A`$ is the aperture efficiency, $`A_D`$ is the physical area of a dish (not to be confused with $`A(\mathbf{x})`$), $`n_b`$ is the number of baselines<sup>5</sup><sup>5</sup>5The number of baselines formed by $`n_r`$ receivers is $`n_b=n_r(n_r-1)/2`$. corresponding to a given separation of antennae, $`\mathrm{\Delta }_\nu `$ is the bandwidth and $`t_a`$ is the observing time. Typical values for DASI are $`T_{\mathrm{sys}}=20`$ K, $`\eta _A\approx 0.8`$, dishes of diameter 20 cm, $`n_b=3`$ and $`\mathrm{\Delta }_\nu =10`$ GHz (in $`10\times 1`$ GHz channels).
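For orientation, Eq. (6) is easily evaluated for the numbers just quoted; the sketch below gives only the scaling of the per-visibility noise, since the conversion to temperature units depends on conventions not reproduced here:

```python
import numpy as np

# Scaling of the visibility noise of Eq. (6) for the quoted DASI numbers.
kB   = 1.381e-23                    # J/K
Tsys = 20.0                         # K
etaA = 0.8
A_D  = np.pi * (0.20/2.0)**2        # 20 cm dish, m^2
n_b  = 3                            # baselines per antenna separation
dnu  = 10.0e9                       # 10 x 1 GHz channels, Hz

def sigma_vis(t_obs_sec):
    """rms noise per visibility (W m^-2 Hz^-1) after t_obs seconds."""
    return 2.0*kB*Tsys/(etaA*A_D) / np.sqrt(dnu * t_obs_sec * n_b)

print("1 hour: %.2e   1 day: %.2e" % (sigma_vis(3600.0),
                                      sigma_vis(86400.0)))
```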
We show in Fig. 1 the positions at which DASI will measure visibilities. With $`13`$ horns there are $`78`$ baselines. Because of the 3-fold symmetry of the instrument each pointing samples $`V`$ at $`26`$ different $`|\mathbf{u}|`$. For each "stare" DASI is rotated about an axis perpendicular to the baseplate to fill half of the $`uv`$ plane as shown in Fig. 1. The other half of the $`uv`$ plane is constrained by the symmetry $`\mathcal{V}^{*}(\mathbf{u})=\mathcal{V}(-\mathbf{u})`$.
## 3 The DASI configuration
The layout of the horns for DASI is shown in Fig. 1a. This configuration has a 3-fold symmetry about the central horn. The positions of 4 of the non-central horns are arbitrary, up to a global rotation, with the remaining configuration being determined by symmetry. The configuration shown represents an optimal configuration, within the physical constraints, for the purposes of measuring CMB anisotropy.
As described in the last section, the distance between each pair of horns represents a baseline at which a visibility can be measured. Each visibility probes a range of angular scales centered at $`\ell =2\pi u`$ where $`u`$ is the baseline in units of the wavelength. The sensitivity as a function of $`\ell `$ is given by the window function (see Eq. 8). Since the width of the window function is determined by the size of the apertures, the optimal coverage for the purpose of CMB anisotropy is that configuration which spans the largest range of baselines with the most overlap between neighbouring window functions. In one dimension such optimal configurations are known as Golomb<sup>6</sup><sup>6</sup>6e.g. http://members.aol.com/golomb20/ rulers (Dewdney 1985), and a well known procedure exists for finding them.
Finding an optimal solution in two dimensions, within the physical constraints, cannot be done analytically. We have optimized the configuration numerically. We have searched the 7 dimensional parameter space of horn positions (an $`x`$ and $`y`$ position for each of 4 horns, minus one overall rotation) for the configuration which minimized the maximum separation between baseline distances, while at the same time covering the largest range of angular scales. Trial starting positions were determined by a simple Monte Carlo search of the parameter space. From each of these positions a multi-dimensional minimization was started. Two additional constraints were imposed upon the allowed solutions: no two horns could come closer than 25cm, the physical size of the horn plus surrounding "lip", and all horns etc had to lie completely within the 1.6m diameter base plate (leading to a maximal radius of $`63.5`$cm). Our "optimal" solution is given in Table 1. The baselines run from 25cm to 120cm with the largest gap between baseline distances being 6.4cm.
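A schematic version of this search is sketched below; the random search and scoring function are simplified stand-ins for the actual minimization, but they encode the same constraints (25 cm clearance, 63.5 cm maximal radius, 3-fold symmetry):

```python
import numpy as np

# Schematic configuration search (not the production optimizer): 4 free
# horns plus a central one, replicated with 3-fold symmetry; configurations
# violating the clearance or base-plate limits are rejected, otherwise the
# score is the largest gap between sorted baseline lengths.
rng = np.random.default_rng(1)

def horns(free_xy):
    pts = [np.zeros(2)]
    for ang in (0.0, 2*np.pi/3, 4*np.pi/3):
        c, s = np.cos(ang), np.sin(ang)
        pts.extend(np.array([[c, -s], [s, c]]) @ p for p in free_xy)
    return np.array(pts)                       # 13 horn positions, cm

def score(free_xy, r_max=63.5, d_min=25.0):
    p = horns(free_xy)
    if np.hypot(p[:, 0], p[:, 1]).max() > r_max:
        return np.inf
    d = np.hypot(*(p[:, None, :] - p[None, :, :]).reshape(-1, 2).T)
    d = np.sort(d[d > 1e-9])
    if d[0] < d_min:
        return np.inf
    return np.diff(np.unique(np.round(d, 2))).max()

best = min((rng.uniform(-63.5, 63.5, (4, 2)) for _ in range(50000)),
           key=score)
print("largest baseline gap: %.1f cm" % score(best))
```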
## 4 S/N Eigenmodes or Optimal Subspace Filtering
A quick glance at Fig. 1b shows that the signal in most of the visibilities will be highly correlated, i.e. the apertures have a large overlap. This is shown quantitatively in Fig. 3a where we plot one row of the signal correlation matrix. Given a "trial" theory we can perform a change of basis to remove these correlations (see §6.2 of WCDH, or Tegmark et al. 1998 for a review of this method). The input theory can be considered as a prior in the context of Bayesian analysis, or could be iterated to match the data if so desired. For our purposes all that will matter is that the signal variance is independent of $`\widehat{u}`$ and decreases with $`|\mathbf{u}|`$. For concreteness we will take $`u^2S(u)=`$constant, normalized to the COBE 4-year data (Bennett et al. 1996). For this choice the cosmological signal is approximately equal to the noise in each of the highest $`|\mathbf{u}|`$ bins in Fig. 1b.
Let us consider only the real parts of the visibilities for now; the imaginary parts are dealt with in an analogous manner. Take these to lie in the upper-left block of the block-diagonal correlation matrix. Denote the noise in the real part of each visibility $`\sigma _j`$. The eigenvalues of the matrix $`C_{ij}^{RR}/\sigma _i\sigma _j`$ measure the signal-to-noise in independent linear combinations of the visibilities. The independent linear combinations can be written $`\nu _a=\sum _i(V_i/\sigma _i)\mathrm{\Psi }_{ia}`$ where $`\mathrm{\Psi }_{ia}`$ is the $`a`$th eigenvector of $`C_{ij}^{RR}/\sigma _i\sigma _j`$. Then $`\langle \nu _a\nu _b\rangle =(\lambda _a+1)\delta _{ab}`$ where the $`\lambda _a`$ are the signal-to-noise eigenvalues and the 1 represents the noise contribution (which is the unit matrix in this basis).
The number of modes $`\nu _a`$ with $`\lambda _a>1`$ is a quantitative indication of how much signal is being measured by the experiment. We show the eigenspectrum of the real parts of the visibilities for 1 day of observation in Fig. 3b. There is in addition an eigenvalue of the same size for each imaginary component. Due to the overlapping windows in the $`uv`$ plane, $`\sim 25\%`$ of the modes contain good cosmological information. The rest are redundant, containing mostly noise. Note however that in 1 day of observation DASI measures nearly 200 high signal-to-noise eigenvectors! We have tested that reducing the number of orientations of the instrument, i.e. the oversampling in $`\theta _u`$, does not significantly reduce the number of high signal-to-noise eigenvectors until the apertures in the outermost circle are just touching, at which point there are 160 modes with $`S/N\gtrsim 1`$.
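The construction of the $`\nu _a`$ is a standard whitening-plus-diagonalisation step; a minimal sketch, with a toy covariance standing in for the true $`C^{RR}`$ of Eq. (5), is:

```python
import numpy as np

# Minimal sketch of the S/N eigenmode construction (toy covariance).
def sn_eigenmodes(C_RR, sigma, V_R):
    W = C_RR / np.outer(sigma, sigma)          # noise-whitened signal
    lam, Psi = np.linalg.eigh(W)               # S/N eigenvalues lambda_a
    idx = np.argsort(lam)[::-1]
    lam, Psi = lam[idx], Psi[:, idx]
    nu = Psi.T @ (V_R / sigma)                 # nu_a = sum_i (V_i/sigma_i) Psi_ia
    return lam, Psi, nu

# toy model: 200 visibilities with exponentially correlated signal
n = 200
u = np.linspace(25.0, 120.0, n)
C_RR = np.exp(-np.abs(u[:, None] - u[None, :]) / 10.0)
sigma = np.full(n, 0.5)
rng = np.random.default_rng(2)
V_R = np.linalg.cholesky(C_RR + 1e-8*np.eye(n)) @ rng.standard_normal(n) \
      + sigma * rng.standard_normal(n)
lam, Psi, nu = sn_eigenmodes(C_RR, sigma, V_R)
print("modes with S/N eigenvalue > 1:", int(np.sum(lam > 1.0)))
```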
The $`\lambda _a`$ are the variance of the $`\nu _a`$ and can be predicted given a theoretical power spectrum
$$\lambda _a=\frac{1}{2}\int v\,dv\,S(v)W_a(v)$$
(7)
with the window function
$$W_a(|\mathbf{v}|)=\sum _{ij}\frac{\mathrm{\Psi }_{ia}}{\sigma _i}\frac{\mathrm{\Psi }_{ja}}{\sigma _j}\int d\theta _v\,\stackrel{~}{A}(\mathbf{u}_i-\mathbf{v})\left[\stackrel{~}{A}(\mathbf{u}_j-\mathbf{v})\pm \stackrel{~}{A}(\mathbf{u}_j+\mathbf{v})\right]$$
(8)
and as such could form a basis for "radical compression". We shall return to this issue in §7.
## 5 Imaging the Sky
DASI provides coverage over a significant part of the $`uv`$ plane, and as such is able in principle to perform high resolution imaging of the sky. The high signal to noise of the DASI instrument and the large dynamic range in angular scale however make imaging a computationally challenging task. In this section we present two approaches which we have tried with mixed success on simulated data.
We discussed the Wiener filtering formalism for producing an image from the visibility data in WCDH. Here we note that in the $`\nu _a`$ basis it is straightforward to construct the Wiener filtered sky map (Bunn, Hoffman & Silk 1996; WCDH and references therein). Recall that for Wiener filtering the sky temperature is approximated by
$$T^{WF}(\mathbf{x}_\alpha )=C_{\alpha \beta }^TW_{\beta j}\left[C^V+C^N\right]_{jk}^{-1}V_k$$
(9)
where $`C^T`$ is the real-space temperature correlation function,
$$W_{\beta j}=A\left(\mathbf{x}_\beta \right)\left\{\genfrac{}{}{0pt}{}{\mathrm{cos}}{\mathrm{sin}}\right\}\left[2\pi \mathbf{u}_j\cdot \mathbf{x}_\beta \right]$$
(10)
we choose $`\mathrm{cos}`$ for the real components and $`\mathrm{sin}`$ for the imaginary components, and $`C^V`$ and $`C^N`$ are the visibility signal and noise correlation matrices. We have written the expression in this mixed way to avoid needing to invert a matrix of size $`N_{\mathrm{pix}}\times N_{\mathrm{vis}}`$ where $`N_{\mathrm{pix}}`$ is the number of pixels in the map being reconstructed, but conceptually the $`C^TW`$ term can be replaced with $`W^{-1}C^V`$. Then in the $`\nu _a`$ basis one simply replaces $`\nu _a`$ with $`\lambda _a(\lambda _a+1)^{-1}\nu _a`$ which down-weights modes with low signal-to-noise. Specifically
$$T^{WF}(\mathbf{x}_\beta )=\sum _aM_{\beta a}^{-1}\frac{\lambda _a}{\lambda _a+1}\nu _a$$
(11)
where
$$M_{\beta a}=\sum _j\frac{\mathrm{\Psi }_{ja}}{\sigma _j}A\left(\mathbf{x}_\beta \right)\left\{\genfrac{}{}{0pt}{}{\mathrm{cos}}{\mathrm{sin}}\right\}\left[2\pi \mathbf{u}_j\cdot \mathbf{x}_\beta \right]$$
(12)
where we choose $`\mathrm{cos}`$ for the real components and $`\mathrm{sin}`$ for the imaginary components. The ratio $`\lambda _a(\lambda _a+1)^{-1}`$ is plotted in Fig. 3b, where we can see that $`\sim 25\%`$ of the modes will contribute significantly to the final map. The expected variance in the final map is also easily computed:
$$\langle T_\rho ^{WF}T_\sigma ^{WF}\rangle =\sum _aM_{\rho a}^{-1}\frac{\lambda _a^2}{\lambda _a+1}M_{a\sigma }^{-1}$$
(13)
which approaches $`\langle T^2\rangle `$ as $`\lambda _a\to \mathrm{\infty }`$. Note that the Wiener filter is not power preserving, so the maps should not be used for power spectrum estimation.
By reducing the total number of modes that need to be kept in the summation the $`\nu _a`$ basis can speed up calculation of the Wiener filtered map. This is an advantage, but even with this speed up we found that producing a high resolution map of even a single field on the sky is very computationally expensive due to the large number of matrix multiplications involved in computing $`T^{WF}`$. An alternative to Wiener filtering, which is very similar in the high signal-to-noise regime in which we are working, is the minimum variance estimator for $`T(๐ฑ)`$. The number of operations required to produce the minimum variance and Wiener filtered maps are comparable, and both tend to be very slow. We are currently investigating faster approximate methods of image making.
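For concreteness, a schematic transcription of Eqs. (11)-(12) is given below; the pseudo-inverse stands in for whatever regularised inversion of $`M`$ is adopted, and all inputs come from the eigenmode machinery of §4:

```python
import numpy as np

# Schematic Wiener map of Eqs. (11)-(12).  Here M[a, b] is the response of
# eigenmode a to unit temperature in sky pixel b; a pseudo-inverse stands
# in for the (regularised) inversion.  Inputs: lam, Psi, nu and sigma from
# the S/N decomposition, u_pos the visibility positions (n_vis, 2), x_pix
# the pixel grid (n_pix, 2), A(x) the primary beam, is_real a boolean mask
# selecting the cos (real) vs sin (imaginary) visibility components.
def wiener_map(lam, Psi, nu, sigma, u_pos, x_pix, A, is_real):
    n_mod, n_pix = len(lam), len(x_pix)
    M = np.zeros((n_mod, n_pix))
    for b, x in enumerate(x_pix):
        phase = 2.0*np.pi * (u_pos @ x)                 # 2 pi u_j . x_b
        basis = np.where(is_real, np.cos(phase), np.sin(phase))
        M[:, b] = Psi.T @ (A(x) * basis / sigma)
    filt = lam / (lam + 1.0)                            # Wiener weights
    return np.linalg.pinv(M) @ (filt * nu)              # T^WF on the grid
```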
The formalism above can in principle be used for making maps at each frequency, or one can make a map which combines the frequencies in such a way as to isolate the CMB (or foreground) signal. While the formalism looks more complex, it is in fact easy to implement computationally. The techniques are well known (see §7): we imagine that at each point in the $`uv`$ plane our visibilities form a vector $`\stackrel{}{V}`$ whose components are the different frequencies. We can expand this vector in terms of the physical components we wish to consider, e.g.,
$$\stackrel{}{V}=\theta _{\mathrm{CMB}}\stackrel{}{V}^{\mathrm{CMB}}+\theta _{ff}\stackrel{}{V}^{ff}$$
(14)
In the absence of noise it is easy to solve for $`\theta _{\mathrm{CMB}}`$ as
$$\theta _{\mathrm{CMB}}=\frac{\stackrel{}{V}_{\perp }\cdot \stackrel{}{V}}{\stackrel{}{V}_{\perp }\cdot \stackrel{}{V}_{\mathrm{CMB}}}\quad \mathrm{with}\quad \stackrel{}{V}_{\perp }\cdot \stackrel{}{V}_{ff}=0$$
(15)
If we write $`\theta _{\mathrm{CMB}}=\sum _ac_aV_a`$ then we should replace $`\stackrel{~}{A}`$ by $`\sum _ac_a\stackrel{~}{A}_a`$ throughout. In the presence of noise we may define $`c_a`$ by a minimum variance estimator as above:
$$c_a=\sum _A\left(\sum _{bc}V_b^{\mathrm{CMB}}N_{bc}^{-1}V_c^A\right)^{-1}\sum _dV_d^AN_{da}^{-1}$$
(16)
where $`N_{ab}`$ is the channel noise matrix and capital Roman letters indicate the component being considered (see §7).
We end our discussion of imaging with an observation about the complementarity between the MAP satellite and DASI. In making maps of the sky with DASI, the largest source of error is the missing long wavelength modes which are filtered out by the DASI primary beam. The long wavelength power can be included in the map by simultaneously fitting another data set (e.g. WCDH). In this particular case the data from the MAP satellite at the same frequency is the obvious choice since MAP will have full sky coverage. We show the window functions for MAP at the relevant frequencies, along with the envelope for DASI, in Fig. 4.
## 6 Power Spectrum Estimation
While the $`\nu _a`$ are the natural basis from the point of view of signal-to-noise and Wiener filtering, and can dramatically improve likelihood function evaluation, they are not necessarily the quantities of greatest physical interest for power spectrum estimation. In WCDH we discussed estimating a series of bandpowers using the quadratic estimator of Bond, Jaffe & Knox (1998) and Tegmark (1997). Here we present a simpler approach more appropriate to unmosaiced fields. In this presentation we shall assume we are dealing with single frequencies; the case of multiple frequencies is dealt with in §7.
In CMB anisotropy observations with single dish experiments much has been written about optimal ways to estimate the power spectrum. The principal reason is that care must be taken in weighting the data to ensure that no sharp cut-offs in real space are introduced. These lead to ringing in Fourier space and delocalize the window function (e.g. discussion in Tegmark 1996). For the interferometer no such problem arises: each visibility samples a compact region in $`\mathbf{u}`$. For each $`V_i`$, the square is a noise biased estimate of the power spectrum convolved with a window function. If we define $`s_i\equiv 2\left(V_i^2-N_{ii}\right)`$ with $`N_{ii}`$ the noise variance in (the component of the) visibility $`i`$, then $`s_i`$ is an unbiased estimator for $`C_{ii}^V`$:
$$\langle s_i\rangle =\int u\,du\,S(u)W_{ii}(u)$$
(17)
where $`W_{ii}(u)`$ is the window function in analogy to Eq. (8). We have included the factor of 2 in the definition of $`s_i`$ to account for the fact that each component $`V_i`$ of the visibility contributes half of the total variance of $`|\mathcal{V}_i|^2`$. Take a weighted average of the $`s_i`$: $`\mathcal{S}_A\equiv \sum _iE_{Ai}s_i`$. The simplest weighting is to sum all of the $`s_i`$ with $`|\mathbf{u}_i|\approx u`$ and we shall use that below. Under the assumption that the visibilities are Gaussian, the error matrix for the estimates $`\mathcal{S}_A`$ is
$$\langle \delta \mathcal{S}_A\delta \mathcal{S}_B\rangle =2\sum _{ij}E_{Ai}\left(C_{ij}^V+C_{ij}^N\right)^2E_{jB}.$$
(18)
Note that for $`N`$ uncorrelated visibilities (real and imaginary parts) contributing to $`\mathcal{S}_A`$ this gives $`\delta \mathcal{S}_A/\mathcal{S}_A=N^{-1/2}(1+\mathrm{noise}/\mathrm{signal})`$ as expected.
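In code the estimator and its covariance amount to a few lines; the sketch below uses simple top-hat annular weights $`E_{Ai}`$, with the covariance evaluated for a fiducial $`C^V+C^N`$ as in Table 2:

```python
import numpy as np

# Sketch of the band-power estimator: s_i are the noise-debiased squared
# visibility components, S_A their top-hat averages in annuli of |u|, and
# the covariance follows Eq. (18) for a fiducial C^V + C^N.
def band_powers(V, N_diag, u_abs, edges):
    s = 2.0 * (V**2 - N_diag)                      # unbiased per component
    E = np.zeros((len(edges) - 1, len(V)))
    for A in range(len(edges) - 1):
        m = (u_abs >= edges[A]) & (u_abs < edges[A + 1])
        if m.any():
            E[A, m] = 1.0 / m.sum()                # simple top-hat weights
    return E @ s, E

def band_power_cov(E, C_V, C_N):
    return 2.0 * E @ (C_V + C_N)**2 @ E.T          # Eq. (18), (.)^2 element-wise
```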
For DASI in the configuration shown in Fig. 1b one can construct 26 different estimates $`\mathcal{S}_A`$, which will however be quite correlated. We show the $`13\times 13`$ correlation matrix for every second estimate in Table 2 along with the expected error on each determination (the diagonal elements). As independent pointings are included in the analysis the error bar on each $`\mathcal{S}_A`$ decreases as $`N^{-1/2}`$, but the correlations remain the same. Without mosaicing (WCDH) the resolution in $`\ell `$ is restricted to $`2\pi D`$, thus the individual determinations are required to be highly correlated. The error on the highest $`\ell `$ bins is still dominated by the small number of independent samples, in this case 16. Increasing the oversampling in the angular direction (and the observing time) can reduce this error to about 23%, hardly worth the extra time compared to including a different pointing.
The $`\mathcal{S}_A`$ are estimates of the power spectrum convolved with the window function $`W_A(u)`$. Given a theory it is straightforward to compare to the data once the window functions are known, and we discuss this in §7. However one could ask whether (or how much) cosmological information has been lost due to the convolution or if the information is still present in the correlated visibilities. One way of answering this question is to attempt to perform an approximate (theory independent) deconvolution.
The simplest ("direct") method is to assume that $`S(u)=`$constant through the window, so that $`S(u)\approx \mathcal{S}_A/\int u\,du\,W_A(u)`$ with $`W_A(u)=\sum _iE_{Ai}W_{ii}(u)`$. (Alternatively one could assume that $`u^2S(u)=`$constant, which leads to a similar expression.) For the window function of Eq. (4) we have $`\int d^2u\,\stackrel{~}{A}^2(u)\approx 0.585D^{-2}`$. As we show in Fig. 5, this technique works surprisingly well for DASI, even though the FWHM of each window is $`\mathrm{\Delta }\ell \approx 100`$, over which scales we can expect power spectra to change significantly. For a CDM power spectrum for example, the approximation above induces a systematic 10% error (at worst) due to the window function "washing out" the peak structure.
To avoid this problem, we can attempt to perform the deconvolution by an iterative procedure. Recall that we are trying to constrain the power spectrum, while our measurements are the spectrum convolved with a positive semi-definite kernel, the window function. In this situation Lucy's method (Lucy 1974) can be used. We have implemented Lucy's algorithm, following Baugh & Efstathiou (1993), including a coupling between different bins and a regularization of the iteration. As they found, the final result is not sensitive to the mechanism chosen.
To briefly re-cap the method: we think about the deconvolution problem, following Lucy (1974), first in terms of probability distributions. If we denote by $`p(x)`$ the probability of measuring a quantity $`x`$ and $`p(y|x)`$ the conditional probability of measuring $`y`$ given that $`x`$ is true then
$$p(y)=\int p(y|x)p(x)\,dx$$
(19)
which is a convolution integral. We wish to estimate $`p(x)`$ given observations $`p^{\mathrm{obs}}(y)`$. We start the $`r`$th iteration with an estimate $`p^r(x)`$ of $`p(x)`$ and predict $`p^r(y)`$ using Eq. (19) assuming $`p(y|x)`$ is known. Writing the inverse of Eq. (19), for $`p(x)`$, using the observed $`p^{\mathrm{obs}}(y)`$ and rewriting $`p(x|y)p(y)=p(y|x)p(x)`$ leads us to the iterative expression for $`p^{r+1}(x)`$:
$$p^{r+1}(x)=p^r(x)\frac{\int \left(p^{\mathrm{obs}}(y)/p^r(y)\right)p(y|x)\,dy}{\int p(y|x)\,dy}$$
(20)
where the denominator is unity. The iterative method we use takes this expression over with the replacements $`p(y)\to \mathcal{S}_A`$, $`p(x)\to f(u)\equiv u^2S(u)`$ and $`p(y|x)\to u^{-1}W_A(u)`$. Approximating the integrals as sums equally spaced in $`u`$ we have the iterated pair of equations:
$`\mathcal{S}_A^r`$ $`=`$ $`{\displaystyle \sum _a}f^r(u_a)u_a^{-1}W_A(u_a)\mathrm{\Delta }u`$ (21)
$`f^{r+1}(u_a)`$ $`=`$ $`f^r(u_a){\displaystyle \frac{\sum _A\left(\mathcal{S}_A^{\mathrm{obs}}/\mathcal{S}_A^r\right)u_a^{-1}W_A(u_a)}{\sum _Au_a^{-1}W_A(u_a)}}`$ (22)
To make the iteration converge more stably we in fact replace only a fraction $`\epsilon `$ of $`f^r`$ with $`f^{r+1}`$ on each step, and average the $`f^r(u_a)`$ using a 2nd order Savitzky-Golay filter of length $`(2,2)`$. The final result is insensitive to the details of this procedure, the number of bins chosen for $`u_a`$, etc.
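A minimal transcription of the iteration, with the partial update $`\epsilon `$ but omitting the smoothing step, is:

```python
import numpy as np

# Minimal Lucy iteration of Eqs. (21)-(22): W[A, a] = W_A(u_a) on a grid of
# spacing du, S_obs the measured band powers, f = u^2 S(u) the unknown.
# eps is the partial-update fraction; smoothing of f is omitted here.
def lucy_deconvolve(S_obs, W, u, du, n_iter=100, eps=0.3):
    f = np.ones_like(u)                            # flat first guess
    K = W / u[None, :]                             # kernel u_a^-1 W_A(u_a)
    for _ in range(n_iter):
        S_pred = (K * f[None, :]).sum(axis=1) * du          # Eq. (21)
        ratio = S_obs / np.where(S_pred > 0.0, S_pred, 1.0)
        f_new = f * (ratio @ K) / K.sum(axis=0)             # Eq. (22)
        f = (1.0 - eps) * f + eps * f_new
    return f
```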
We show in Fig. 5 results of a deconvolution assuming that 16 independent patches of sky were observed, each for 1 day. We can see that the deconvolution works well, with good resolution in $`\ell `$ and no bias over the scales where DASI has sensitivity. Thus we expect that no important cosmological information has been lost by the convolution procedure. The points are highly correlated, but we do not show the correlations on the Figure (all correlations were included in the analysis). The error bars are so large because they are computed from the variance allowing all other bins to vary freely.
Though the example here was for a single frequency channel, the generalization to multiple frequencies is straightforward and can in principle allow even finer sub-band resolution.
Finally we remark that obviously a statistical comparison of a given model to the data should be done with the $`\mathcal{S}_A`$. The correlations of the $`\mathcal{S}_A`$ can be computed for any given theory (e.g. Table 2). This allows a full likelihood analysis to be done, and is the route which should be taken when comparing DASI data to a specific theory. This is what we shall discuss now.
## 7 Radical Compression
Constraining theories using interferometer data is much easier than in the case of single dish data, so many of the powerful techniques developed for the latter case are not needed. The primary reason for this is that interferometers work directly in the $`\ell `$-space of theories. It is straightforward to perform "radical compression" (Bond, Jaffe & Knox 1998) of an interferometer data set and quote a set of bandpowers along with their full noise covariance matrix and window functions. We shall develop this idea briefly in this section.
We now reintroduce the multi-frequency nature of the data set that has been suppressed during most of this paper. For DASI we can work with 10$`\times `$1 GHz channels which we shall label with a Greek subscript. Following Dodelson (1997; see also White 1998) we imagine that our visibility signal $`V_a`$ at each point in the $`u`$-$`v`$ plane is a sum of contributions with different frequency dependences: $`V_a=\sum _A\theta ^AV_a^A`$. Let $`A=0`$ be the CMB contribution, whose frequency dependence will be $`V^0=(1,1,\mathrm{\dots })`$ for observations at DASI frequencies. At each frequency, $`a`$, and visibility position, $`i`$, the signal is the convolution of the sky with an aperture $`\stackrel{~}{A}_i^a`$ which will be of the form of Eq. (4) with the central $`u`$ and $`D`$ varying by the inverse of the observing wavelength. If each visibility has noise $`N_{ab}^{(i)}`$ we can estimate the CMB component $`\theta ^0`$ by minimizing
$$\chi ^2=\sum _{ab}\sum _{AB}\left(V_a^{\mathrm{obs}}-\theta ^AV_a^A\right)N_{ab}^{-1}\left(V_b^{\mathrm{obs}}-\theta ^BV_b^B\right)$$
(23)
where we have suppressed the visibility index $`i`$. Solving $`d\chi ^2/d\theta ^A=0`$ amounts to taking a linear combination of the frequency channels $`\theta _i^0=\sum _ac_aV_a^{\mathrm{obs}}`$ with
$$c_a=\sum _A\left(\sum _{bc}V_b^0N_{bc}^{-1}V_c^A\right)^{-1}\sum _dV_d^AN_{ad}^{-1}$$
(24)
While formidable this expression reduces to the well known least-squares weighting in the limit $`N_{ab}=\sigma _a^2\delta _{ab}`$:
$$c_a=\sum _{Ab}\left(\frac{V_b^0V_b^A}{\sigma _b^2}\right)^{-1}\frac{V_a^A}{\sigma _a^2}$$
(25)
Now we simply replace $`V_i`$ with $`\theta _i^0`$ and $`\stackrel{~}{A}_i`$ with $`\sum _ac_a\stackrel{~}{A}_{ia}`$ to generalize Eq. (17). The generalized $`\mathcal{S}_A`$ can be easily calculated from the data, and the expectation values and distribution can be calculated for any theory once the window function and noise properties are given. Supplying this set would be an ideal way to release the DASI data for the purposes of model fitting.
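As a worked example of the weighting, the sketch below solves the least-squares problem of Eq. (23) directly, which for exact templates is equivalent to Eq. (24); the foreground spectral index and noise level are illustrative values, not measurements:

```python
import numpy as np

# Minimal sketch of the multi-frequency projection: solve the generalised
# least-squares problem of Eq. (23) for the component amplitudes theta^A
# and read off the weights c_a giving the CMB amplitude theta^0 as a
# linear combination of channels.  X has one column per component (column
# 0 is the CMB template V^0 = (1, 1, ...)); N is the channel noise matrix.
def cmb_channel_weights(X, N):
    Ninv = np.linalg.inv(N)
    coeff = np.linalg.solve(X.T @ Ninv @ X, X.T @ Ninv)   # GLS solution
    return coeff[0]            # weights c_a for the CMB component

# two-component toy: CMB (flat) plus a power-law foreground, 10 channels
nu = np.linspace(26.0, 36.0, 10)          # GHz
X = np.column_stack([np.ones(10), (nu/31.0)**-2.1])
N = np.diag(np.full(10, 0.1**2))
c = cmb_channel_weights(X, N)
print("CMB response    = %.3f (should be 1)" % (c @ X[:, 0]))
print("foreground leak = %.1e (should be ~0)" % (c @ X[:, 1]))
```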
## 8 Conclusions
We have shown how one can implement the formalism of WCDH in the case of a single stare of the DASI instrument. We have discussed making maps, filtering the data and reconstructing the anisotropy power spectrum. Our results suggest that with $`\sim 1`$ month of data, DASI could provide significant constraints on theories of structure formation.
We would like to acknowledge useful conversations with Alex Szalay and thank Tim Pearson for comments on a draft of this work.
# Pulse Shepherding and Multi-Channel Soliton Transmission in Bit-Parallel-Wavelength Optical Fiber Links
## Abstract
We study bit-parallel-wavelength (BPW) pulse transmission in multi-channel single-mode optical fiber links for high-performance computer networks. We develop a theory of the pulse shepherding effect earlier discovered in numerical simulations, and also describe families of the BPW solitons and bifurcation cascades in a system of N coupled nonlinear Schrödinger equations.
A growing demand for high-speed computer communications requires effective and inexpensive parallel computer interconnects that eliminate bottlenecks caused by parallel-to-serial conversion. Recently, bit-parallel-wavelength (BPW) single-fiber optical links were proposed as possible high-speed computer interconnects. Functionally, the BPW link is just a single-medium parallel fiber ribbon cable. In a conventional ribbon cable, $`N`$ parallel bits coming from a computer bus are transmitted by $`N`$ pulses travelling in $`N`$ separate fibers. In a BPW scheme, all $`N`$ bits are wavelength multiplexed and transmitted by time-aligned pulses, ideally solitons, through a single optical fiber.
For any bit-parallel transmission the crucial problem is maintaining the alignment of pulses corresponding to parallel bits of the same word. Unlike the fiber ribbon cable, a single-fiber BPW link presents a unique possibility of dynamical control of the pulse alignment by employing the so-called pulse shepherding effect, when a strong "shepherd" pulse enables manipulation and control of co-propagating weaker pulses. Experimentally, the reduction of the misalignment due to the group-velocity mismatch of two pulses in the presence of the shepherding pulse has been observed in a dispersion-shifted Corning fiber.
In this Letter, we develop a rigorous theory of the shepherding effect, and show that it is caused by the nonlinear cross-phase modulation (XPM) of pulses transmitted through the same fiber. The co-propagating pulses can be treated as fundamental modes of different colour trapped and guided by an effective waveguide induced by the strong shepherd pulse. The resulting multi-component soliton pulse propagates in the fiber preserving the time alignment of its constituents and thus enables multi-channel bit-parallel transmission. For the first time to our knowledge, we analyze multi-component solitons and describe a mechanism of soliton multiplication via bifurcation cascades in a model of $`N`$ nonintegrable coupled nonlinear Schrödinger (NLS) equations.
To describe the simultaneous transmission of $`N`$ pulses of different wavelengths in a single-mode optical fiber, we follow the standard derivation and obtain a system of $`N`$ coupled NLS equations in the coordinate system moving with the group velocity $`v_{g0}`$ of the central pulse $`(0\le j\le N-1)`$:
$$i\left(\frac{\partial }{\partial z}+\delta _j\frac{\partial }{\partial t}\right)A_j=-\frac{\alpha _j}{2}\frac{\partial ^2A_j}{\partial t^2}-\gamma _j\left(|A_j|^2+S_{mj}\right)A_j,$$
(1)
where $`S_{mj}=2\sum _{m\ne j}^{N-1}(\gamma _m/\gamma _j)|A_m|^2`$. For a pulse $`j`$, $`A_j(z,t)`$ is the slowly varying envelope measured in the units of $`\sqrt{P_0}`$, where $`P_0`$ is the incident power carried by the central pulse, $`\alpha _j=\beta _{2j}/|\beta _{20}|`$ is the normalized group velocity dispersion, $`\delta _j=(v_{g0}-v_{gj})/v_{g0}v_{gj}`$ is the relative group velocity mismatch, and $`\gamma _j=\omega _j/\omega _0`$ characterises the nonlinearity strength $`(\alpha _0=\gamma _0=1)`$. The time, $`t=(T-Z/v_{g0})/T_0`$, and propagation distance, $`z=Z/L_0`$, are measured in units of the characteristic pulse width, $`T_0\simeq 10`$ ps, and dispersion length $`L_0=T_0^2/|\beta _{20}|\simeq 50`$ km. For the operating wavelengths spaced 4-5 nm apart (to avoid the four-wave-mixing effect), within the band 1530-1560 nm, the coefficients $`\alpha _j`$ and $`\gamma _j`$ are different but close to $`1`$. For the realistic group-velocity difference of less than $`5`$ ps/km, the mismatch parameter $`\delta _j\lesssim 1`$.
Below we show that the system (1) admits stationary solutions in the form of multi-component BPW solitons which represent a shepherd pulse time-aligned with a number of lower-amplitude pulses. We then discuss the effect of the group-velocity mismatch on the time alignment of the constituents of the multi-component pulse.
To find the stationary solutions of Eqs. (1), we use the transformation: $`A_j(t,z)=u_j(t)\mathrm{exp}(-i\delta _j\alpha _j^{-1}t+i\lambda _jz)`$, with the amplitudes $`u_j`$ obeying the following equations:
$`\frac{1}{2}\frac{d^2u_0}{dt^2}+\left(u_0^2+2\sum _{n=1}^{N-1}\gamma _nu_n^2\right)u_0=\frac{1}{2}u_0,`$ (2)
$`\frac{\alpha _n}{2}\frac{d^2u_n}{dt^2}+\gamma _n\left(u_n^2+2\sum _{m\ne n}^{N-1}\frac{\gamma _m}{\gamma _n}u_m^2\right)u_n=\lambda _nu_n,`$ (3)
where the amplitudes, time, and $`\lambda _n`$ are measured in units of $`\sqrt{2\lambda _0}`$, $`(2\lambda _0)^{-1/2}`$, and $`2\lambda _0`$, respectively.
System (2) has exact analytical solutions for $`N`$ coupled components. Indeed, looking for solutions in the form $`u_0(t)=U_0\mathrm{sech}t`$, $`u_n(t)=U_n\mathrm{sech}t`$, we obtain the constraints $`\lambda _n=\alpha _n/2`$, and a system of $`N`$ coupled algebraic equations,
$`U_0^2+2\sum _{n=1}^{N-1}\gamma _nU_n^2=1,\qquad \gamma _nU_n^2+2\sum _{m\ne n}^{N-1}\gamma _mU_m^2=\alpha _n.`$
As long as all modal parameters, $`\alpha _n,\gamma _n`$, are close to $`1`$, this solution describes a composite pulse with $`N`$ nearly equal constituents. In the degenerate case, $`\alpha _n=\gamma _n=1`$, the amplitudes are: $`U_0=U_n=[1+2(N-1)]^{-1/2}`$.
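Since the system is linear in the quantities $`y_j=\gamma _jU_j^2`$, the amplitudes follow from a single linear solve. Below is a minimal numerical sketch; the modal parameters are assumed illustrative values close to 1 (not taken from this Letter), and the cross-phase sums are assumed to run over all other components, including the shepherd.

```python
import numpy as np

# Assumed illustrative modal parameters close to 1 (not values from this Letter);
# the examples in the text set gamma_j = alpha_j.
alpha = np.array([1.0, 0.98, 0.99, 1.0])
gamma = alpha.copy()
N = len(alpha)

# With y_j = gamma_j * U_j^2 the algebraic system reads (2J - I) y = alpha,
# where J is the N x N matrix of ones and alpha_0 = 1 for the shepherd pulse.
A = 2.0 * np.ones((N, N)) - np.eye(N)
y = np.linalg.solve(A, alpha)

if np.all(y > 0):                       # a physical solution requires y_j > 0
    U = np.sqrt(y / gamma)
    print("component amplitudes U_j:", U)
else:
    print("no N-component sech solution for these parameters")
```

For equal parameters the solve returns $`y_j=1/(2N-1)`$, reproducing the degenerate amplitudes quoted above.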
Approximate analytical solutions of different types can be obtained in the linear limit, when the central frequency pulse, $`u_0`$, is much larger than other pulses, and the XPM interaction between the latter can be neglected. Linearization of Eqs. (2) for $`|u_n|\ll |u_0|`$ yields an exactly solvable NLS equation, for $`u_0`$, and $`N-1`$ decoupled linear equations, for $`u_n`$. Each of the latter possesses a localized solution provided $`\lambda _n=\mathrm{\Lambda }_n\equiv (\alpha _n/8)\left[1-\sqrt{1+16\gamma _n/\alpha _n}\right]^2`$. Near this point, the central soliton pulse (shepherd pulse) can be thought of as inducing an effective waveguide that supports a fundamental mode $`u_n`$, with the corresponding cutoff, $`\mathrm{\Lambda }_n`$. Since $`\alpha _n`$ and $`\gamma _n`$ are close to $`1`$, the soliton-induced waveguide always supports no more than two modes of the same wavelength, with largely separated eigenvalues. As a result, the effective waveguide induced by the shepherd pulse stays predominantly single-moded for all operating wavelengths.
Let us describe the mechanism of pulse shepherding in more detail. First, we consider the simplest case $`N=2`$. If the two pulses, $`(0)`$ and $`(1)`$, do not interact, then the uncoupled Eqs. (2) possess only single-component soliton solutions $`u_0(t)=\mathrm{sech}t`$ and $`u_1=(2\lambda _1/\gamma _1)^{1/2}\mathrm{sech}\sqrt{2\lambda _1/\alpha _1}t`$, with the corresponding normalized powers $`P_0\equiv \int u_0^2\,dt=2`$ and $`P_1=2(2\lambda _1\alpha _1)^{1/2}/\gamma _1`$. These solutions can be characterised by the curves on the diagram $`P(\lambda )`$ \[see curves $`(0)`$ and $`(1)`$ in Fig. 1\]. When the two copropagating pulses interact, a new branch of the two-mode solitons $`(0+1)`$ appears (branch A-B in Fig. 1). It is characterised by the total power $`P(\lambda _1)=P_0+P_1`$, and it joins the two branches $`P_0(\lambda _1)`$ and $`P_1(\lambda _1)`$ at the bifurcation points $`O_1`$ and $`O_2`$, respectively. Near the point $`O_1`$, the solution consists of a large pulse of the central wavelength that guides a small component $`u_1`$ (see Fig. 1, point $`A`$). The point $`O_1`$ therefore coincides with the cutoff $`\lambda _1=\mathrm{\Lambda }_1`$ for the fundamental mode $`u_1`$ guided by the shepherd pulse $`u_0`$. Shapes and amplitudes of the soliton components evolve with changing $`\lambda _1`$ (see the point $`B`$ in Fig. 1). The component $`u_0`$ disappears at the bifurcation point $`O_2`$.
Next, we consider a shepherd pulse guiding three pulses, $`n=1,2,3`$, with the modal parameters: $`\alpha _{0,3}=\gamma _{0,3}=1`$, $`\alpha _1=\gamma _1=0.65`$, and $`\alpha _2=\gamma _2=0.81`$. Solitary waves of this four-mode BPW system can be found as localized solutions of Eqs. (2), numerically. Figure 2 presents the lowest-order families of such localized solutions on the line $`\lambda _1=\lambda _2=\lambda _3\equiv \lambda `$ in the parameter space $`\{\lambda _1,\lambda _2,\lambda _3\}`$. If the pulses $`(1)`$, $`(2)`$, and $`(3)`$, were interacting only with the central pulse but not with each other, then the bifurcation pattern for each of these pulses would be similar to that shown in Fig. 1. Thin dotted, dashed, and dash-dotted curves in Fig. 2 correspond to the solitons of three independent pulses $`(1)`$, $`(2)`$, and $`(3)`$, shown with branches of corresponding two-mode solitons of the BPW system with pairwise interactions, $`(0+1)`$, $`(0+2)`$, and $`(0+3)`$, respectively (cf. Fig. 1). In fact, all four pulses interact with each other, and therefore each new constituent added to a multi-component pulse "feels" the effective potential formed not only by the shepherd pulse but also by all the weaker pulses that are already trapped. In addition, mutual trapping of the pulses $`(1)`$, $`(2)`$ and $`(3)`$ without the shepherd pulse is possible. As a result, new families of the two-mode $`(1+2)`$ and, branching off from it, the three-mode $`(0+1+2)`$ solutions appear (marked curves in Fig. 2). The three-mode solutions bifurcate at the point $`O_3`$ and give birth to the four-mode $`(0+1+2+3)`$ solitons (branch $`O_3O_4`$). An example of such four-wave composite solitons is shown in Fig. 2 (inset). This solution corresponds to the typical shepherding regime of the BPW transmission for $`N=4`$, when the central pulse $`u_0`$ traps and guides three smaller fundamental-mode pulses on different wavelengths.
On the bifurcation diagram (Fig. 2), starting from the central pulse branch, the solution family undergoes a cascade of bifurcations: $`O_1\to O_2\to O_3\to O_4`$. On each segment of the corresponding solution branches, different multi-component pulses are found: $`(0)\to (0+1)\to (0+1+2)\to (0+1+2+3)\to (1+2+3)`$. The values of the modal parameters in Fig. 2 are chosen to provide a clear bifurcation picture, although they correspond to the wavelength spacing that is much larger than the one used in the experiments, for which $`\gamma _n/\gamma _{n+1}\approx 0.997`$. If the modal parameters are tuned closer to each other, the first two links of the bifurcation cascade tend to disappear. Ultimately, for equal parameters, the bifurcation points $`O_2`$ and $`O_3`$ merge at the point $`O_1`$, and the four-mode soliton family (thick line in Fig. 2) branches off directly from the central pulse branch $`(0)`$. Then, near the point $`O_1`$, the four-mode pulse can be described by the linear theory. The qualitative picture of the bifurcation cascade for $`N=4`$ is preserved for other values of $`N`$.
The BPW solitons supported by shepherd pulses are linearly stable. However, the effect of the relative walk-off due to the group velocity mismatch of the soliton constituents endangers the integrity of a composite soliton and thus the pulse alignment in the BPW links. It is known that, in the case of two solitons of comparable amplitude, nonlinearity can provide an effective trapping mechanism to keep the pulses together. In the shepherding regime, a multi-component pulse creates an effective attractive potential, and the $`j`$-th pulse is trapped if its group velocity is less than the escape velocity of this potential. The threshold value of the walk-off parameter can be estimated as: $`\delta _j^2\approx (4\alpha _j/P_j)\sum _{m\ne j}\gamma _mu_m^2(0)`$, where $`u_m^2(0)`$ is the peak intensity of the component $`m`$. For instance, for the component $`j=1`$ of the four-component BPW soliton presented in Fig. 2 (point A), the estimated threshold $`\delta _1\approx 1.7`$ agrees with the numerically calculated value $`\delta _1\approx 2.2`$.
In reality, all the components of a BPW soliton would have nonzero walk-off. The corresponding numerical simulations are presented in Fig. 3 for $`N=4`$. Initially, we launch four pulses as an exact four-mode BPW soliton $`A`$ (see Fig. 2) of Eqs. (2). When this soliton evolves along the fiber length, $`z`$, in the presence of small to moderate relative walk-off ($`\delta _j\ne 0`$ for $`j\ne 0`$), its components remain strongly localized and mutually trapped \[Figs. 3(a,b)\], whereas it loses more energy into radiation for much larger values of the relative walk-off \[Figs. 3(c,d)\]. The former situation is more likely to be realized experimentally as the relative group-velocity mismatch for pulses of different wavelength is different. In this case, the conclusive estimate for the threshold values of $`\delta _j`$ can only be given if the shepherd pulse is much stronger than the guided pulses, which are approximately treated as non-interacting fundamental modes of the effective waveguide induced by the shepherd pulse.
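A propagation experiment of this kind can be sketched with a standard split-step Fourier integrator for Eqs. (1). The grid, step sizes, walk-off values, and launch amplitudes below are illustrative assumptions, and the sign conventions follow the form of Eqs. (1) as written above.

```python
import numpy as np

def propagate(A, alpha, gamma, delta, dt, dz, nz):
    """Strang split-step integration of the N coupled NLS equations (1).

    A : complex array of shape (N, nt) holding the envelopes A_j(t).
    """
    nt = A.shape[1]
    w = 2.0 * np.pi * np.fft.fftfreq(nt, dt)
    # half-step linear phase: walk-off (delta_j) plus dispersion (alpha_j)
    half = np.exp(-1j * (delta[:, None] * w + 0.5 * alpha[:, None] * w**2) * dz / 2)
    for _ in range(nz):
        A = np.fft.ifft(np.fft.fft(A, axis=1) * half, axis=1)
        I = np.abs(A) ** 2
        # XPM sum over all other channels: 2 * sum_{m != j} gamma_m |A_m|^2
        xpm = 2.0 * ((gamma[:, None] * I).sum(axis=0) - gamma[:, None] * I)
        A = A * np.exp(1j * dz * (gamma[:, None] * I + xpm))   # nonlinear step
        A = np.fft.ifft(np.fft.fft(A, axis=1) * half, axis=1)
    return A

# launch a composite sech pulse with amplitudes from the algebraic system above
t = (np.arange(2048) - 1024) * 0.05
alpha = np.array([1.0, 0.98, 0.99, 1.0]); gamma = alpha.copy()
U = np.array([0.366, 0.397, 0.382, 0.366])          # from the previous sketch
A0 = U[:, None] / np.cosh(t)[None, :] + 0j
out = propagate(A0, alpha, gamma, np.array([0.0, 0.3, -0.3, 0.15]),
                dt=0.05, dz=2e-3, nz=2500)          # propagate to z = 5
```

With $`\delta _j=0`$ the launched state is stationary, so any spreading seen for nonzero $`\delta _j`$ isolates the walk-off effect.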
In conclusion, we have developed a theory of the shepherding effect in BPW fiber links and established that the pulse shepherding can enable the time alignment of the copropagating pulses despite the relative group velocity mismatch.
The authors acknowledge fruitful discussions with A. Hasegawa, C. Yeh and L. Bergman and a partial support of the Performance and Planning Fund.
# Hydrogen atom in a spherical well: linear approximation
## I Introduction
The problem of a hydrogen atom confined in a sphere has quite a long history in quantum physics. It was first investigated more than sixty years ago by Michels, de Boer, and Bijl in their study of the effects of pressure on the polarizability of atoms and molecules. This problem was then taken up by Sommerfeld and Welker who studied the problem in detail and calculated the critical radius for which the binding energy becomes zero. Over the years there has been a steady flow of papers on this and other closely related problems. The model has often been used as a test problem for various perturbation methods. Using their boundary perturbation method, Hull and Julius obtained a formula which expresses the change of energy for the eigenstates in the confined system in terms of the corresponding wave functions in free space. This method has been improved and generalized by many authors. Some variational methods have also been used to study this problem. Fröman, Yngve, and Fröman have developed the phase-integral method as a general method to attack the problem of confined quantum systems and their 1987 paper provides 80 references on this problem.
In recent years there has been some renewed interest in this problem. This is partly driven by the technological advances, such as in the field of semiconductor quantum dots, that have enabled the construction of interesting nanostructures which contain a small and controllable number (1-1000) of electrons. The computation of the electronic structure of such systems necessarily has to take into account the presence of the finite confining boundaries and their influence on the system.
In this paper we shall study the boundary corrections for a hydrogen atom in a spherical well using an approximation method which is linear in energy. This is a well-known method in solid-state physics and has been widely used in electronic structure calculations, under the name of Linear Muffin-Tin Orbital (LMTO) method, since its initial introduction by O. K. Andersen in 1975. The method is best applied to the calculations of the wave functions of a hamiltonian with energies which are in close vicinity of the energy of a known wave function. To the best of our knowledge, this simple method has never been applied previously to this problem of hydrogen atom in an impenetrable sphere. The present paper seeks to serve two purposes. First, it presents a new approach, which has some pedagogical simplicity, to the confined hydrogen atom problem. Second, it offers an analytically tractable problem from which one can hopefully gain some insights into the workings and the accuracy of the LMTO method.
## II Linear Method
In this paper we will examine the boundary corrections for a hydrogen atom situated at the center of a spherical cavity of radius $`S`$ as shown in Fig.1. We will assume the wall of the cavity to be impenetrable and consider the following spherically-symmetric potential:
$$V(r)=\{\begin{array}{cc}-e^2/r,\hfill & r<S,\hfill \\ \mathrm{\infty },\hfill & r>S.\hfill \end{array}$$
(1)
The radius of the cavity will be assumed to be much larger than the Bohr radius: $`S\gg a_0`$. In the remainder of the paper we shall use the atomic units:
$$\hbar =\frac{e^2}{2}=2m=1.$$
(2)
The unit of length is the Bohr radius $`a_0=\hbar ^2/me^2`$ and the unit of energy is the Rydberg: $`\mathrm{Ry}=e^2/2a_0=13.6`$ eV. The Schrödinger equation takes the following form:
$$H\mathrm{\Psi }(\mathbf{r})=\left(-\nabla ^2-\frac{2}{r}\right)\mathrm{\Psi }(\mathbf{r})=E\mathrm{\Psi }(\mathbf{r}).$$
(3)
The wave function $`\mathrm{\Psi }(\mathbf{r})`$ satisfies the Schrödinger equation for the hydrogen atom for $`r<S`$, in particular it should still be regular at the origin. The only difference from the free-space case is that now we have to impose a different boundary condition: the wave function should vanish at $`r=S`$ instead of at $`r=\mathrm{\infty }`$.
For $`S\gg a_0`$, the changes in the ground-state wave function and energy due to the presence of the wall are expected to be "small" because the wave function is concentrated at the center of the cavity, far away from the confining wall.
In free space, i.e. in the absence of the confining cavity, the hydrogen atom has the familiar Rydberg spectrum:
$$\epsilon _n=-\frac{1}{n^2},\qquad n=1,2,\mathrm{\dots }.$$
(4)
In the presence of the cavity, we write
$$E_n=\epsilon _n+\mathrm{\Delta }\epsilon _n.$$
(5)
We use small letters ($`\epsilon ,\psi `$, etc.) to denote quantities for the free-space problem and capital letters ($`E,\mathrm{\Psi }`$, etc.) for the corresponding quantities in the cavity problem. The dimensionless parameter $`(\mathrm{\Delta }\epsilon _n/\epsilon _n)`$ is expected to be small for $`n^2a_0\ll S`$. In the linear method, the (unnormalized) wave function at energy $`E_n`$ is approximated by
$$\mathrm{\Psi }(E_n,\mathbf{r})=\psi (\epsilon _n,\mathbf{r})+\mathrm{\Delta }\epsilon _n\dot{\psi }(\epsilon _n,\mathbf{r}).$$
(6)
Here $`\dot{\psi }(\epsilon _n,\mathbf{r})`$ is the derivative with respect to energy of $`\psi (\epsilon ,\mathbf{r})`$ evaluated at $`\epsilon =\epsilon _n`$:
$$\dot{\psi }(\epsilon _n,\mathbf{r})=\left[\partial \psi (\epsilon ,\mathbf{r})/\partial \epsilon \right]_{\epsilon =\epsilon _n}.$$
(7)
The eigenfunctions in the cavity problem are then obtained by imposing the boundary condition at $`r=S`$:
$$\mathrm{\Psi }(E_n,S,\widehat{\mathbf{r}})=0,$$
(8)
which gives an expression for the energy correction:
$$\mathrm{\Delta }\epsilon _n=-\frac{\psi (\epsilon _n,S,\widehat{\mathbf{r}})}{\dot{\psi }(\epsilon _n,S,\widehat{\mathbf{r}})}.$$
(9)
Here $`\widehat{\mathbf{r}}=(\theta ,\varphi )`$ is a unit vector in the direction of $`\mathbf{r}`$.
To apply the linear approximation method we need the general solution to the Schrödinger equation at an arbitrary energy $`E`$. Since we are dealing with a spherically-symmetric system, we can separate the variables:
$$\mathrm{\Psi }(\mathbf{r})=R(r)Y_{lm}(\widehat{\mathbf{r}}).$$
(10)
The resulting radial differential equation is
$$\frac{d^2R}{dr^2}+\frac{2}{r}\frac{dR}{dr}+\left[E+\frac{2}{r}-\frac{l(l+1)}{r^2}\right]R=0.$$
(11)
Transforming the variables by defining
$$\omega =\sqrt{-E},\qquad \rho =2\omega r,$$
(12)
and using the following trial functional form
$$R(\rho )=\rho ^le^{-\rho /2}u(\rho ),$$
(13)
then gives us the following differential equation
$$\rho u^{\prime \prime }+\left[2(l+1)-\rho \right]u^{\prime }-\left[l+1-\frac{1}{\omega }\right]u=0,$$
(14)
which is the equation for the confluent hypergeometric function. The general solution of this equation, which is regular at the origin, is
$$u(\rho )=A\,{}_{1}F_{1}(l+1-\frac{1}{\omega };2l+2;\rho ),$$
(15)
where $`A`$ is a normalization constant. The radial part of the general solution to the Schrödinger equation Eq.(3) with energy $`E=-\omega ^2`$ therefore is
$$R_l(\omega ,r)=A(2\omega r)^le^{-\omega r}{}_{1}F_{1}(l+1-\frac{1}{\omega };2l+2;2\omega r).$$
(16)
The free-space solution is obtained by requiring that $`R(r)\to 0`$ as $`r\to \mathrm{\infty }`$. From the properties of the hypergeometric functions, this can only happen if $`(l+1-1/\omega )`$ is a negative integer or zero. This implies that
$$\frac{1}{\omega }=n,\qquad l=0,1,\mathrm{\dots },(n-1),$$
(17)
with $`n`$ a positive integer. This directly leads to the Rydberg spectrum in Eq.(4).
The function $`R_l(\omega ,r)`$ is plotted in Fig.2 for $`l=0`$ and $`\omega `$ = 1, 0.98, and 0.50. The $`\omega =1`$ curve is the ground-state wave function of the hydrogen atom in free space and is nodeless. Here a node of $`R_l(\omega ,r)`$ is defined to be a value of the argument $`r`$ which gives zero value for the function $`R_l(\omega ,r)`$. As $`\omega `$ is reduced below 1, the wave function acquires a single node which moves from $`r=\mathrm{\infty }`$ to $`r=2a_0`$ at $`\omega =0.50`$, where it becomes the $`(n,l)=(2,0)`$ eigenstate of the hydrogen atom in free space. One therefore can obtain the ground-state wave function and energy of the hydrogen atom in a cavity of radius $`S`$ by numerically searching for the energy which gives a wave function with a single node at $`r=S`$. This provides a useful comparison for our approximation.
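This node search is straightforward to reproduce. The sketch below (for $`l=0`$ and an assumed sample radius $`S=4a_0`$) brackets the root of $`R_0(\omega ,S)=0`$ and compares the exact confined energy with the linear estimate of Eq. (18), using a finite-difference derivative with respect to $`\omega `$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import hyp1f1

def R0(omega, r):
    # l = 0 radial wave function, Eq. (16)/(20), with A = 1
    return np.exp(-omega * r) * hyp1f1(1.0 - 1.0 / omega, 2.0, 2.0 * omega * r)

S = 4.0                                      # cavity radius in units of a_0 (assumed)
omega = brentq(lambda w: R0(w, S), 0.55, 0.999)
print("exact:      E =", -omega**2, "Ry")    # about -0.967 Ry

h = 1.0e-5                                   # numerical dR/domega at omega_1 = 1
Rdot = (R0(1.0 + h, S) - R0(1.0 - h, S)) / (2.0 * h)
print("linear:     dE =", 2.0 * R0(1.0, S) / Rdot, "Ry")                  # Eq. (18)
print("asymptotic: dE =", 8.0 * S * (S - 1.0) * np.exp(-2.0 * S), "Ry")   # Eq. (31)
```

The linear and asymptotic estimates (the latter is derived below) agree with the exact energy shift to within a few percent at this radius.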
Since the spherical harmonics are independent of the energy we can recast Eq.(9) into
$$\mathrm{\Delta }\epsilon _{nl}=2\omega _n\frac{R_l(\omega _n,S)}{\dot{R}_l(\omega _n,S)}.$$
(18)
where $`\omega _n=\sqrt{-\epsilon _n}`$ and
$$\dot{R}_l(\omega _n,S)=\left[\partial R_l(\omega ,S)/\partial \omega \right]_{\omega =\omega _n}.$$
(19)
Substituting the radial function $`R_l(\omega ,r)`$ in Eq.(16) into Eq.(18) then gives us an explicit formal expression for $`\mathrm{\Delta }\epsilon _n`$ which should be valid for $`S\gg n^2a_0`$. Note that the presence of the finite boundary lifts the azimuthal degeneracy of the states with different orbital quantum number $`l`$ (and the same radial quantum number $`n`$). As in the case of the screened Coulomb potential, this occurs because one no longer deals with the pure Coulomb potential.
To gain an insight into Eqs.(18)-(19), we shall consider the ground state ($`n=1`$), which is a special case of the zero angular momentum ($`l=0`$) states. We have
$$R_0(\omega ,r)=Ae^{-\omega r}{}_{1}F_{1}(1-\frac{1}{\omega };2;2\omega r).$$
(20)
For the ground state ($`n=1`$), this is
$$R_0(1,r)=Ae^{-r}{}_{1}F_{1}(0;2;2r)=Ae^{-r}.$$
(21)
We are interested in obtaining a simple analytical expression of the correction to the ground-state energy for $`S\gg a_0`$, therefore we need to calculate the limiting form of $`\dot{R}_0(\omega ,r)`$ for $`r\gg a_0`$. The asymptotic expansion of the hypergeometric function $`{}_{1}F_{1}(a,b,z)`$ for large $`z`$ is
$$\frac{{}_{1}F_{1}(a,b,z)}{\mathrm{\Gamma }(b)}=\frac{e^{i\pi a}}{z^a}\frac{I_1(a,b,z)}{\mathrm{\Gamma }(b-a)}+e^zz^{a-b}\frac{I_2(a,b,z)}{\mathrm{\Gamma }(a)},$$
(22)
with
$$I_1(a,b,z)=\sum _{n=0}^{R-1}\frac{(a)_n(1+a-b)_n}{n!}\frac{e^{i\pi n}}{z^n}+\mathcal{O}(|z|^{-R}),$$
(23)
$$I_2(a,b,z)=\sum _{n=0}^{R-1}\frac{(b-a)_n(1-a)_n}{n!}\frac{1}{z^n}+\mathcal{O}(|z|^{-R}).$$
(24)
The Pochhammer symbol $`(a)_n`$ is defined by
$$(a)_n=a(a+1)\mathrm{\cdots }(a+n-1)=\frac{\mathrm{\Gamma }(a+n)}{\mathrm{\Gamma }(a)}.$$
(25)
We need to calculate the derivative of this function at $`a=(1-1/\omega )`$ with $`\omega =1`$. In this case the dominant term comes from the derivative of $`\mathrm{\Gamma }(a)`$ in the second term in Eq.(22). The first term can be neglected because it does not have the exponential term $`e^z`$ which dominates the derivative at large distances. Keeping only the largest term, we get
$$\frac{\partial }{\partial a}{}_{1}F_{1}(a,b,z)\simeq -e^zz^{a-b}\mathrm{\Gamma }(b)I_2(a,b,z)\frac{\psi (a)}{\mathrm{\Gamma }(a)}.$$
(26)
Here $`\psi (a)`$ is the digamma function: $`\psi (a)=\mathrm{\Gamma }^{\prime }(a)/\mathrm{\Gamma }(a)`$. Its ratio with $`\mathrm{\Gamma }(a)`$ as $`a\to 0`$ is
$$\underset{a\to 0}{\mathrm{lim}}\frac{\psi (a)}{\mathrm{\Gamma }(a)}=\underset{a\to 0}{\mathrm{lim}}\frac{-\gamma -1/a}{-\gamma +1/a}=-1,$$
(27)
where $`\gamma `$ is the Euler constant. This then gives
$$\left[\frac{\partial }{\partial a}{}_{1}F_{1}(a,b,z)\right]_{a\to 0}\simeq e^zz^{a-b}\mathrm{\Gamma }(b)I_2(a,b,z).$$
(28)
Using this expression, and keeping only the first two terms in $`I_2(a,b,z)`$, we can obtain the limiting form of $`\dot{R}_0(\omega ,r)`$ at large $`r`$ and $`\omega \approx 1`$:
$$\dot{R}_0(\omega ,r)\simeq \frac{Ae^{-\omega r}}{\omega ^2}\left\{\frac{e^{2\omega r}}{(2\omega r)^{1+1/\omega }}\left[1+\frac{\mathrm{\Gamma }(2+1/\omega )}{2\omega r\mathrm{\Gamma }(1/\omega )}\right]\right\}.$$
(29)
Exactly at $`\omega =1`$, this expression becomes
$$\dot{R}_0(1,r)\simeq \frac{Ae^r}{4r^2}\left[1+\frac{1}{r}\right].$$
(30)
Finally, using this equation and Eq.(21) in Eq.(18), we get the boundary correction to the ground-state energy:
$$\mathrm{\Delta }\epsilon _0(S)\simeq 8S(S-1)e^{-2S},\qquad S\gg a_0.$$
(31)
## III Discussion
Fig.3 displays the asymptotic dependence of the energy correction on the radius of cavity, Eq.(31), together with the exact curve and the curve obtained from the linear approximation method, Eq.(18), using the exact wave function Eq.(21). It is seen that the asymptotic formula, Eq.(31), is fairly accurate for radii greater than about four Bohr radii. Note that the exact energy at $`S=2a_0`$ is equal to $`-\frac{1}{4}`$ Ry, which is the energy of the first excited state $`(n,l)=(2,0)`$ of the hydrogen atom in free space. This is because the corresponding wave function has a node at $`r=2a_0`$ as can be seen in Fig.2.
The asymptotic formula Eq.(31), which is the limit curve in Fig.3, is a "double-approximation" to the exact curve. It is an asymptotic form of the linear curve, Eq.(18), valid for large values of $`S/a_0`$. The linear curve itself is an approximation, linear in energy, to the exact curve. For small values of $`S/a_0`$, and within the linear approximation method, one has to use Eq.(18) which in general, unfortunately, does not correspond to a simple analytic expression. This does not pose a problem in actual electronic-structure applications because there the wave function and its energy derivative are computed numerically. In this paper, for pedagogical purposes, we have calculated the asymptotic formula, Eq.(31), which does correspond to a simple analytic expression.
Knowing the dependence of the ground-state energy on the cavity radius, Eq.(31), allows us to calculate the pressure needed to "compress" a hydrogen atom in its ground state to a certain size. This is given by
$$p(S)=-\frac{d\,\mathrm{\Delta }\epsilon _0}{dV}\simeq \frac{4e^{-2S}}{\pi }\left(1-\frac{2}{S}\right).$$
(32)
At $`S=4a_0`$ this has a value of $`2.13\times 10^{-4}`$ Ry/$`a_0^3`$, or about 3.1 GPa (using 1 Ry/$`a_0^3`$ = $`1.47\times 10^4`$ GPa). At this radius, the change of the ground-state energy is 0.032 Ry which is only three percent of the binding energy of a free hydrogen atom.
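As a quick numerical check of these figures, Eqs. (31) and (32) can be evaluated directly; the sketch below uses the unit conversion quoted above.

```python
import numpy as np

S = 4.0                                               # cavity radius in Bohr radii
dE = 8.0 * S * (S - 1.0) * np.exp(-2.0 * S)           # Eq. (31): ~0.032 Ry
p = (4.0 / np.pi) * np.exp(-2.0 * S) * (1.0 - 2.0 / S)   # Eq. (32), in Ry / a0^3
print(dE, p, p * 1.47e4)    # ~0.032 Ry, ~2.13e-4 Ry/a0^3, ~3.1 GPa
```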
The information on the effects of the boundary on the wave function of the atom can also be used to study the influence of the boundary on other properties of the atom, e.g., the spin-orbit coupling energy. It is also interesting to calculate the changes in the wave function and energy of the atom when it is displaced from the center of the cavity, and the force that will push it back towards the center. The linear method also seems to be well-suited for the analysis of the "soft-cavity" case where we have a finite potential outside the cavity, instead of the infinite potential considered in this paper. These topics will be examined in future works.
In conclusion, we have used a linear approximation method to calculate the asymptotic dependence of the ground-state energy of a hydrogen atom confined to a spherical cavity on the radius of the cavity. The boundary correction to the energies of the excited states can be obtained using the same method.
Acknowledgements. D. D. is grateful to Prof. David L. Price (U. Memphis) for introducing him to Andersen's linear approximation method and for many useful discussions. Thanks are also due to Dr. H. E. Montgomery, Jr. for many useful references. This work has been supported by AF-OSR Grant F49620-99-1-0274.
# Pendulum Mode Thermal Noise in Advanced Interferometers: A comparison of Fused Silica Fibers and Ribbons in the Presence of Surface Loss
## I Introduction
One of the most important limitations to the sensitivity of long baseline gravitational wave detectors such as LIGO, VIRGO, GEO 600, and TAMA is thermal noise associated with the test masses and their suspensions. Designs for advanced detectors propose either fused silica or sapphire test-masses. For fused silica test-masses internal mode thermal noise is expected to be an important source of noise from approximately 20 Hz to a few hundred Hertz, whereas pendulum mode thermal noise is more important below this range.
Pendulum mode thermal noise is due primarily to dissipation in the suspending filaments. It is imperative, therefore, to minimize the intrinsic losses in the filament. Many current detector designs use metal wires to suspend the test-mass, but metals, with $`\varphi \simeq 10^{-5}`$, result in unacceptably high levels of pendulum mode thermal noise. Fused silica has lower loss ($`\varphi \simeq 3\times 10^{-8}`$) and monolithic fused silica suspensions have been shown to have much higher $`Q`$ than metal wire suspensions. Such a monolithic suspension system is being developed and adopted for use in the GEO 600 detector, while variations of this design are being considered for LIGO II. In particular, fibers with circular cross sections may be replaced with fused silica ribbons, which allow the suspension filaments to be very thin and compliant in the direction of motion.
Experiments indicate that in thin fused silica filaments much of the dissipation takes place in a layer near the surface. The level at which surface loss affects the total pendulum mode dissipation depends on the filament thickness and geometry, and influences the choice of suspension design parameters. To investigate this, we calculate the pendulum mode thermal noise, including surface dependent loss, as a function of the design parameters for fibers and for ribbons.
## II Dissipation Dilution
In the absence of external sources of dissipation, the dissipation in the fundamental mode of a pendulum suspended by a filament is given by
$$\mathrm{\Phi }=\frac{1}{2}\sqrt{\frac{YI}{MgL^2}}\varphi ,$$
(1)
where $`\varphi `$ is the loss angle of the unloaded suspension filament, $`Y`$ is the Young's modulus of the filament material, $`I`$ is the cross-sectional moment of inertia, $`M`$ is the supported mass, $`g`$ is the acceleration due to gravity, and $`L`$ is the filament length. In general, $`\varphi `$ and hence $`\mathrm{\Phi }`$ will be a function of frequency. The coefficient of $`\varphi `$ is called the dissipation dilution factor and is the ratio of elastic energy (subject to dissipation) to the total energy stored in the pendulum mode, which is predominantly gravitational energy (non-dissipative). The right hand side of this equation should be multiplied by two if the mass is constrained from rotating in the plane of oscillation, since bending then occurs both in the region where the filament leaves the support and in the region where the filament leaves the mass. If the test mass is suspended by $`N`$ filaments Eq. 1 should be multiplied by $`\sqrt{N}`$. For fibers of diameter $`d_f`$, and ribbons of thickness $`d_r`$ and width $`w`$, we have
$$I=\{\begin{array}{cc}(\pi /64)d_f^4\hfill & \text{fibers,}\hfill \\ (1/12)wd_r^3\hfill & \text{ribbons,}\hfill \end{array}$$
(2)
where subscripts $`f`$ refer to fibers and subscripts $`r`$ to ribbons. Rewriting Eq. 1 using these expressions for the cross-sectional moment of inertia, allowing multiple filaments and assuming that the suspension constrains the filaments to bend at both ends, we have
$$\mathrm{\Phi }=\{\begin{array}{cc}\sqrt{\frac{YN_f\pi (d_f/2)^4}{4MgL^2}}\varphi _f\hfill & \text{fibers,}\hfill \\ \sqrt{\frac{YN_rwd_r^3}{12MgL^2}}\varphi _r\hfill & \text{ribbons,}\hfill \end{array}$$
(3)
where $`M`$ is the suspended mass, $`N_f`$ is the number of suspension fibers, $`\varphi _f`$ is the loss angle of the unloaded fibers, $`N_r`$ is the number of suspension ribbons, and $`\varphi _r`$ is the loss angle of the unloaded ribbons. The limit to how much dissipation dilution we can obtain, and hence the lower limit to the pendulum mode thermal noise, is set by the values obtainable for the parameters in these equations. They are limited by a number of material and technological concerns, especially by the value achievable for the loss angle $`\varphi _f`$ or $`\varphi _r`$. This loss angle may depend on a number of factors including the bulk material loss angle, surface loss, and the filament geometry.
If $`\varphi _f`$ and $`\varphi _r`$ were independent of filament thickness and roughly equal, Eq. 3 would indicate that by using very thin but wide ribbons one could obtain lower dissipation $`\mathrm{\Phi }`$, and hence less pendulum mode thermal noise, than by using fibers of similar load bearing capacity. However, since surface loss becomes increasingly important as the filament thickness is reduced, the enhanced dissipation dilution obtainable using thin ribbons is moderated by an increase in $`\varphi _r`$.
## III Thermal Noise in the Presence of Surface Loss
The loss angle for a sample, including surface loss, may be expressed as
$$\varphi =\varphi _{\mathrm{bulk}}(1+\mu \frac{d_s}{V/S}),$$
(4)
where $`\varphi _{\mathrm{bulk}}`$ is the loss angle of the bulk material, $`\mu `$ is a geometrical factor and $`d_s`$ is the dissipation depth which parametrizes the filament size at which surface loss becomes important. The geometrical factor $`\mu `$ describes the emphasis placed on the condition of the surface due to the sample geometry and mode of oscillation while the dissipation depth $`d_s`$ describes the amount of surface damage and the depth to which it penetrates. Equation 4 serves to define $`d_s`$, whose value for a given sample may be determined by experiment. The geometrical factor is given by
$$\mu =\frac{V}{S}\frac{\int _{\mathcal{S}}\epsilon ^2(\vec{r})\,d^2r}{\int _{\mathcal{V}}\epsilon ^2(\vec{r})\,d^3r},$$
(5)
where $`\vec{r}`$ denotes a point in the sample, $`\epsilon (\vec{r})`$ the strain amplitude, $`V`$ is the volume of the sample, $`S`$ the surface area of the sample, $`\mathcal{V}`$ is the set of points comprising the volume, and $`\mathcal{S}`$ is the set of points comprising the outer surface. For transverse oscillations of fibers and ribbons we have
$$\mu =\{\begin{array}{cc}2\hfill & \text{fibers,}\hfill \\ (3+a)/(1+a)\hfill & \text{ribbons}\hfill \end{array}$$
(6)
where $`a`$ is the aspect ratio of the combined ribbons, $`a\equiv d_r/W`$ with $`W\equiv N_rw`$ being the total combined width of the ribbons.
Experiments suggest that $`\varphi _{\mathrm{bulk}}`$ is approximately constant over the frequency range of interest for LIGO. For simplicity, we will assume $`\varphi _{\mathrm{bulk}}`$ to be constant. Substituting Eqs. 4 and 6 into Eq. 3 we obtain
$$\mathrm{\Phi }=\{\begin{array}{cc}\sqrt{Y/16\sigma L^2}(d_f+8d_s)\varphi _{\mathrm{bulk}}\hfill & \text{fibers,}\hfill \\ \sqrt{Y/12\sigma L^2}(d_r+(6+2a)d_s)\varphi _{\mathrm{bulk}}\hfill & \text{ribbons,}\hfill \end{array}$$
(7)
where $`\sigma `$ is the filament stress. In both cases, the first term is the traditional expression for dissipation dilution, while the term involving $`d_s`$ represents a reduction of the dilution due to the increasing importance of surface loss as the filament thickness is decreased. For very thin filaments the term involving $`d_s`$ dominates and the loss angle becomes independent of the filament thickness.
From the fluctuation-dissipation theorem, we find the power spectrum of the pendulum mode thermal fluctuations above the pendulum mode resonance, at angular frequency $`\omega `$:
$$x^2(\omega )=\frac{4k_BTg}{ML^{\prime }\omega ^5}\mathrm{\Phi }(\omega ),$$
(8)
where $`\omega \gtrsim \sqrt{g/L^{\prime }}`$, $`T`$ is the temperature of the suspending filaments and $`L^{\prime }`$ is the radius of the arc traced out by the center of mass during pendulum mode oscillation. For convenience, we will take $`L^{\prime }\approx L`$. Inserting $`\mathrm{\Phi }`$ from Eq. 7, and including the contribution from thermoelastic damping we have the expression for the pendulum mode thermal noise:
$$\begin{array}{c}x^2(\omega )=\{\begin{array}{cc}\frac{4k_BTg}{ML^2\omega ^5}\sqrt{\frac{Y}{16\sigma }}\left[d_f\left(\varphi _{\mathrm{bulk}}+\varphi _{\mathrm{th}}\right)+8d_s\varphi _{\mathrm{bulk}}\right]\hfill & \text{fibers,}\hfill \\ \frac{4k_BTg}{ML^2\omega ^5}\sqrt{\frac{Y}{12\sigma }}\left[d_r\left(\varphi _{\mathrm{bulk}}+\varphi _{\mathrm{th}}\right)+(6+2a)d_s\varphi _{\mathrm{bulk}}\right]\hfill & \text{ribbons.}\hfill \end{array}\hfill \end{array}$$
(9)
The thermoelastic damping term $`\varphi _{\mathrm{th}}`$ is given by
$$\varphi _{\mathrm{th}}=\frac{Y\alpha ^2T}{C}\frac{\omega \tau _d}{1+\omega ^2\tau _d^2},\tau _d=\{\begin{array}{cc}d_f^2/13.55D\hfill & \text{fibers,}\hfill \\ d_r^2/\pi ^2D\hfill & \text{ribbons,}\hfill \end{array}$$
(10)
where $`\alpha `$ is the thermal expansion coefficient of the filament material, $`C`$ is the heat capacity per unit volume, and $`D`$ is the thermal diffusion coefficient.
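The thickness at which $`\varphi _{\mathrm{th}}`$ peaks follows from the condition $`\omega \tau _d=1`$. A short sketch, taking a commonly quoted fused silica diffusivity as an assumed value, locates this peak for ribbons at 10 Hz; the result is consistent with the peak near $`400\mu `$m discussed below.

```python
import numpy as np

D = 8.2e-7                    # m^2/s, assumed thermal diffusivity of fused silica
omega = 2.0 * np.pi * 10.0    # angular frequency at 10 Hz
# phi_th is maximal where omega * tau_d = 1, with tau_d = d_r^2 / (pi^2 D)
d_peak = np.pi * np.sqrt(D / omega)
print(d_peak)                 # ~3.6e-4 m, i.e. a few hundred microns
```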
## IV Advanced Interferometers
Using Eq. 9, we can now make estimates for the level of pendulum mode thermal noise achievable in advanced interferometers and investigate the dependence on filament thickness and geometry. Using the results of previous experiments and design studies, most of the parameters can be well bounded with some reasonable assumptions. From these parameters, we can obtain upper and lower bounds on the pendulum mode thermal noise at a given frequency as a function of ribbon thickness. This analysis assumes that losses extrinsic to the filaments (e.g. recoil of the suspending structure or lossy filament-to-test-mass bonds) have been made negligible.
The achievable fiber diameter depends on the achievable stress $`\sigma `$ and on the mass $`M`$. The fiber diameter in Eq. 9 can be replaced by
$$d_f=\sqrt{4Mg/\pi N_f\sigma }.$$
(11)
The remaining parameters $`M`$,$`L`$,$`N_f`$, $`\sigma `$, and $`d_s`$ are independent and the achievable pendulum mode thermal noise depends on the bounds established for these parameters.
It is clear from Eq. 9 that an efficient way of minimizing the thermal noise is to make the length of the suspension as large as possible. However, the value of $`L`$ is bounded above by the minimum allowable spacing $`f_{\mathrm{min}}`$ of the violin mode frequencies. The frequency spacing must be kept above about 300 Hz to allow reasonably large intervals of the spectrum to be free of violin modes. The frequency spacing limited $`L`$ is
$$L=\frac{1}{2f_{\mathrm{min}}}\sqrt{\frac{\sigma }{\rho }}$$
(12)
where $`f_{\mathrm{min}}`$ is the minimum allowable spacing of the violin modes. The range of possible lengths is determined by the range of possible stress to which the filaments will be subject. This in turn depends on the breaking strengths achievable for fused silica filaments. Many measurements have been reported on the breaking strength of fibers manufactured from naturally occurring, and synthetic, vitreous silica. Little is known about the strength of ribbons, though one is tempted to assume their strengths are similar. Values reported for the breaking strength of fibers in tension at room temperature vary greatly depending on the condition of the fibers, but strengths on the order of several gigapascals at room temperature, in fibers with diameters as large as 1 mm, have been reported. By assuming that the filaments are only loaded to a fraction of their breaking strength we assign the range of possible stress to which the filaments will be subject as $`0.1\,\mathrm{GPa}\le \sigma \le 1.0\,\mathrm{GPa}`$. Substituting these values into Eq. 12 we obtain the range of possible lengths $`0.36\,\mathrm{m}\le L\le 1.1\,\mathrm{m}`$. In principle, the physical design of the suspension also places an upper limit on the length, but ultimately this limit is likely to be less stringent than that due to the frequency spacing.
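These length bounds follow directly from Eq. 12; a one-line check, taking the density of fused silica as an assumed 2200 kg/m$`^3`$:

```python
import numpy as np

rho, f_min = 2200.0, 300.0            # kg/m^3 (assumed), Hz
for sigma in (0.1e9, 1.0e9):          # stress range in Pa
    print(np.sqrt(sigma / rho) / (2.0 * f_min))   # 0.36 m and 1.1 m
```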
For the number of filaments we will choose $`N_f=N_r=4`$. This reflects the most likely choice for the suspensions in advanced detectors which require "marionette" control of the test masses. Analytically, the number of ribbons does not enter into the calculation as, for a given stress $`\sigma `$ and ribbon thickness $`d_r`$, only the total combined width of the ribbons $`W`$ is fixed.
In order to avoid excessive radiation pressure noise in LIGO II, the suspended test masses must have a mass $`M`$ of 30 kg. However, if this is not feasible, the LIGO I mass of 10 kg can be used as a fall-back. We take 10 kg $`<M<`$ 100 kg to allow for possible advanced designs that utilize even larger masses.
For the bulk material dissipation in fused silica, Gretarsson and Harry have measured $`\varphi _{\mathrm{bulk}}=3.3\pm 0.3\times 10^{-8}`$. Others have measured similar values for the bulk material dissipation in samples of different geometry. We shall adopt the relatively reliable upper limit $`\varphi _{\mathrm{bulk}}=3.6\times 10^{-8}`$ and, with less certainty, set the lower limit as $`\varphi _{\mathrm{bulk}}=2.5\times 10^{-8}`$.
Finally, for untreated fused silica fibers drawn in a natural gas flame, $`d_s=180\pm 20\mu `$m has been measured. It should be noted that the factors resulting in a given quality of fiber surface are not well quantified. Fibers pulled from silica rods with different initial surface conditions, or fibers drawn using a different production method, may have a different dissipation depth or surface loss than that found in the measurements above. The geometry of a filament could also have some effect on the quality of the surface layer, e.g. through different cooling stresses during fabrication, and our assumption that fused silica ribbons have the same surface properties as fused silica fibers has not been tested. However, given these assumptions we set an upper bound of $`d_s=200\mu `$m, which should be reliable. To estimate a lower bound for $`d_s`$ we will use a $`Q`$-measurement of a ribbon of thickness $`50\mu `$m, made of natural fused quartz. One mode of this ribbon showed a $`Q`$ much higher than others. After subtracting the loss due to thermoelastic damping (Eq. 10) and assuming the remaining loss is mainly surface loss, the equivalent $`d_s`$ for a similarly limited fused silica ribbon can be estimated at $`30\mu `$m. We, therefore, set the range of possible dissipation depths at 30 $`\mu `$m$`<d_s<200\mu `$m.
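Equation 9 is straightforward to evaluate once material constants are fixed. The sketch below combines Eqs. 9, 10 and 12 for a ribbon suspension at 10 Hz; the fused silica constants and the LIGO II-like mass, stress and geometry are assumed illustrative values, not parameters fixed by this paper.

```python
import numpy as np

kB, T, g = 1.381e-23, 300.0, 9.81
# assumed fused silica constants: Young's modulus, thermal expansion coefficient,
# heat capacity per unit volume, thermal diffusivity
Y, alph, C, D = 7.2e10, 5.5e-7, 1.64e6, 8.2e-7
phi_bulk, d_s = 3.3e-8, 1.8e-4                 # bulk loss and dissipation depth

M, sigma = 30.0, 1.3e8                         # suspended mass (kg), stress (Pa)
L = np.sqrt(sigma / 2200.0) / (2.0 * 300.0)    # Eq. 12 with f_min = 300 Hz
d_r, W = 1.0e-4, 4 * 5.5e-3                    # ribbon thickness, total width (m)
a = d_r / W

omega = 2.0 * np.pi * 10.0                     # evaluate at 10 Hz
tau = d_r**2 / (np.pi**2 * D)                  # Eq. 10, ribbon case
phi_th = (Y * alph**2 * T / C) * omega * tau / (1.0 + (omega * tau)**2)

x2 = (4.0 * kB * T * g / (M * L**2 * omega**5)) * np.sqrt(Y / (12.0 * sigma)) * (
    d_r * (phi_bulk + phi_th) + (6.0 + 2.0 * a) * d_s * phi_bulk)   # Eq. 9
print(np.sqrt(x2))                             # ~1.3e-19 m / sqrt(Hz)
```

With these assumed inputs the 10 Hz amplitude comes out near $`10^{-19}`$ m$`/\sqrt{\mathrm{Hz}}`$, in line with the requirement discussed below.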
Table I summarizes the best and worst case estimates for the parameters, and also a "best guess" for the most probable values. Figure 1 shows the levels of pendulum mode thermal noise at 10 Hz as a function of filament thickness for each of the three sets of estimates of the parameters. The graphs for ribbon filaments all show a maximum around $`400\mu `$m. This is the thermoelastic damping peak, which for all but the most optimistic case must be avoided if the desired levels of pendulum mode thermal noise are to be achieved. Figure 1 also shows the value of using ribbons rather than fibers as suspension filaments. Since the diameter of fibers cannot be independently reduced, the pendulum mode thermal noise is dominated by thermoelastic damping. The use of ribbons allows us to evade this problem by residing in the surface loss limited regime. To evade the thermoelastic regime and to obtain better dissipation dilution one might be tempted to use the thinnest ribbons possible. However, at small thicknesses, below the thermoelastic peak, the graphs begin to level off. This is due to surface loss which sets a minimum to achievable pendulum mode thermal noise of
$$x^2(\omega )_{\mathrm{min}}=\frac{24k_BTg}{ML^2\omega ^5}\sqrt{\frac{Y}{12\sigma }}d_s\varphi _{\mathrm{bulk}},$$
(13)
It is clear from the plots that, for all but the most optimistic values of $`d_s`$, reducing the ribbon thickness below about $`50\mu `$m (corresponding to individual ribbon widths $`w`$ of 5 mm, 3 mm, and 5 mm for the three cases) does not result in significant reductions of pendulum mode thermal noise. To satisfy LIGO II requirements, the pendulum mode thermal noise at 10 Hz should be less than about $`2\times 10^{-19}\mathrm{m}/\sqrt{\mathrm{Hz}}`$. Even in the presence of surface loss, only the worst case scenario does not achieve this level. In the most probable case, pendulum mode thermal noise will be lower than other noise sources at 10 Hz (radiation pressure noise, fused silica internal mode thermal noise), provided the ribbon thickness is kept below the thermoelastic regime. In the most optimistic case, other noise sources dominate the total noise at 10 Hz regardless of ribbon thickness. The most probable estimate for fiber suspensions gives pendulum mode thermal noise that is just acceptable for LIGO II. If there are unforeseen problems with ribbon suspensions, fiber suspensions may still prove an acceptable alternative. We reiterate that in the comparison of fibers and ribbons, we have assumed that the breaking strength of fibers is not significantly greater than that of ribbons of equal cross-sectional area; we have also assigned them identical surface properties. Further research is required to test these assumptions. Research is continuing on ribbon suspensions within the LIGO research community, and additional emphasis on surface properties and breaking strength is warranted.
## V Comparison of low frequency noise sources in LIGO II
While studying the pendulum mode thermal noise at 10 Hz is a good way to gain insight into the effect of the different physical parameters on the level of this source of noise, the comparison with other sources of noise must be done over the entire range of relevant frequencies.
For clarity, we now specialize to a single set of values for the physical parameters of the suspension filaments: those proposed for LIGO II. These parameters are shown in the last column of Table I. With a four ribbon suspension, it follows from the proposed stress that each ribbon should have a cross-sectional area of $`5.5\times 10^{-7}\,\mathrm{m}^2`$, giving a width of 5.5 mm for the $`100\mu `$m ribbon thickness proposed. In general, the values for all these parameters fall between the worst case and most probable case scenarios. As such, the LIGO II proposal is fairly conservative, and better noise performance may be achieved.
The LIGO II proposal does not, however, specify a value for $`d_s`$. In keeping with the conservative spirit of the other parameters, we choose $`d_s=180\mu `$m. This falls between the worst case and most probable case and corresponds to the measured value in fibers without strict handling requirements.
Figure 2 shows the pendulum mode thermal noise of a fiber suspension in relation to estimates for the other sources of noise in the interferometer. Figure 3 shows the same comparison for the pendulum mode thermal noise of a ribbon suspension. Note that the noise due to radiation pressure in the LIGO II design is greater than the pendulum mode thermal noise for the ribbon suspension. For greater low frequency sensitivity, radiation pressure can always be reduced by lowering the amount of laser power at the beam splitter. This could reduce the noise to the pendulum mode thermal noise level in the low frequency band but will increase the amount of laser shot noise at higher frequencies. From figures 2 and 3 it is clear that while a ribbon suspension leads to lower pendulum mode thermal noise than a fiber suspension, the pendulum mode thermal noise for the fiber suspension is still comparable to radiation pressure noise in the relevant frequency band. If there are unforeseen problems with ribbons (buckling, lower strength, etc.), fibers do provide an acceptable, if less attractive, alternative.
## VI Acknowledgments
We would like to thank our colleagues at the University of Glasgow, at Stanford University, and throughout the gravitational wave community for their interest in this work. Additional thanks to Ken Strain for his help with LIGO II parameters beyond the white paper as well as Gabriela Gonzalez, Gary Sanders, David Tanner, and Rai Weiss for their comments. This work was supported by Syracuse University, U.S. National Science Foundation Grants No. PHY-9602157 and No. PHY-9630172, the University of Glasgow, and PPARC. |
# On the Transition from AGB Stars to Planetaries: The Spherical Case
## 1. Introduction
Although we are here only interested in an explanation of non-spherical structures observed so often in planetary nebulae (PN), a detailed study of spherical systems appears to be important for at least two reasons. The first is a more physical one and refers to our still rather poor knowledge of PN formation and evolution. The use of spherical models allows a detailed study of basic physical processes without having to worry about influences caused by non-spherical structures. The other reason is a technical one: the presently available computing power is too limited to follow the evolution of non-spherical model planetaries over their whole life with sophisticated physics and good spatial resolution.
It is expected that basic physical processes work similarly in systems with a complex geometry. They set the stage for the other phenomena responsible for the development of non-spherical structures and should always be considered.
## 2. The Basic Physical System
The evolution of an AGB star is driven by mass loss until the mantle is lost and the remnant begins to contract rapidly towards higher temperatures. Eventually the burning shells extinguish and the white dwarf cooling path is reached. The remnant's luminosity and evolutionary speed depend very sensitively on its mass, and possible ranges of luminosity and speed are shown in Fig. 1. All these remnants stem from different progenitors whose evolutionary histories have been consistently followed from the main sequence through all the later phases including mass loss and thermal pulses (see Blöcker 1995 for the details). Similar computations have been performed by Vassiliadis & Wood (1994).
The evolution of the AGB remnant (the central star) in temperature and luminosity drives in turn the development of a PN out of a cool wind envelope by two processes, viz. by the concomitant changes of the stellar radiation field and wind power. The relative importance of both processes with respect to the evolution of a planetary varies with the central-star's age (or effective temperature). The number of hydrogen-ionizing photons emitted per second increases rapidly with the remnant's effective temperature, but later the luminosity decrease starts to dominate. For a typical central-star mass of 0.6 M$`_{\odot }`$ the maximum flux of ionizing photons occurs between 60 000 and 70 000 K. The very rapid luminosity drop after the central star has reached its maximum effective temperature (cf. Fig. 1) may cause substantial recombination. It is important to follow this late evolutionary phase with a fully time-dependent code that treats all the relevant physical processes, i.e. ionization, recombination, heating and cooling (Marten 1995).
The property of mass-loss during the post-AGB evolution is more difficult to evaluate. For an AGB star we have winds driven by radiation pressure on small grains with momentum transfer to the gas. The outflow rates depend on the star's luminosity and effective temperature (cf. Sedlmayr & Dominik 1995; Arndt, Fleischer, & Sedlmayr 1998). Typical rates are between about $`10^{-7}`$ and $`10^{-4}`$ M$`_{\odot }`$/yr, with outflow velocities from 5 to 25 km/s, i.e. $`<V_{\mathrm{esc}}`$, the surface escape velocity. During the post-AGB contraction, mass-loss rates are lower by orders of magnitude, but the wind velocities are substantially higher. The driving of the outflow occurs via radiation pressure on lines (cf. Pauldrach et al. 1988), $`\dot{M}\simeq 1.3\times 10^{-15}(L/L_{\odot })^{1.86}`$ M$`_{\odot }`$/yr, and typical values are $`10^{-8}`$ M$`_{\odot }`$/yr for the rate, but now with $`V\simeq 1\,000\mathrm{\dots }10\,000`$ km/s $`\simeq (2\mathrm{\dots }4)V_{\mathrm{esc}}`$. The wind power, $`P=\dot{M}V^2/2`$, reaches its maximum close to the turn-around point at maximum effective temperature and declines then rapidly with the luminosity.
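To illustrate the relative scales involved, the kinetic wind powers $`P=\dot{M}V^2/2`$ of the slow AGB wind and of the fast central-star wind can be compared with a few lines of arithmetic; the round numbers below are illustrative assumptions, with the central-star rate taken from the line-driven scaling quoted above.

```python
MSUN, YR = 1.989e30, 3.156e7               # kg, s

def mdot_cs(L):
    """Central-star rate from the line-driven wind scaling, in M_sun/yr."""
    return 1.3e-15 * L**1.86

# (mass-loss rate [M_sun/yr], wind speed [m/s]); sample values only
winds = {"AGB wind":          (1.0e-5, 10.0e3),
         "central-star wind": (mdot_cs(6.0e3), 3.0e6)}
for name, (mdot, v) in winds.items():
    P = 0.5 * mdot * MSUN / YR * v**2      # kinetic wind power in W
    print(f"{name}: Mdot = {mdot:.1e} M_sun/yr, P = {P:.1e} W")
```

Despite a mass-loss rate that is roughly three orders of magnitude lower, the fast wind carries about a hundred times more kinetic power, which is why the shocked-wind bubble eventually dominates the shaping of the inner nebula.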
It should, however, be noted that the stellar wind does not interact directly with the nebular/AGB material. Instead, the wind's kinetic energy thermalizes through a shock and adds to the energy and matter content of hot, shocked wind material emitted at earlier times. The thermal pressure of this "bubble" of hot but very tenuous gas drives the inner edge of the planetary. Though it is actually the time integral over the wind power that determines the energy content of the bubble, the maximum bubble pressure coincides roughly with the maximum wind power of the central star.
## 3. Formation and Evolution of PN
Given typical mass-loss rates between $`10^{-5}`$ and $`10^{-4}`$ M$`_{\odot }`$/yr during the final AGB evolution and the still rather low wind velocities during the remnant's transition through the cool part of the Hertzsprung Russell diagram, the dynamical effects of wind interaction are expected to be modest. As soon as the remnant becomes sufficiently hot, ionization sets in and gives birth to a HII region deeply embedded in the neutral/molecular circumstellar AGB material. Thermal pressure of the ionized matter drives a shock wave into the ambient slow material, and the front itself defines the outer edge, $`R_{\mathrm{pn}}`$, of the new PN even if the ionization has already broken through into the surrounding region. The front's speed, $`\dot{R}_{\mathrm{pn}}`$, is mainly determined by the balance of the shell's thermal pressure with the ram pressure exerted by the ambient matter. At a given time, speed and position of the outer rim of a planetary depend thus on the mass-loss history over the last 10 000 to 20 000 years of AGB evolution. The mass embraced by $`R_{\mathrm{pn}}`$ is steadily growing with time at the expense of the still undisturbed (although possibly ionized) AGB wind material. We emphasize here that $`\dot{R}_{\mathrm{pn}}`$ is not a matter velocity and cannot be observed spectroscopically!
As outlined in Sect. 2. above, the wind interaction through the hot bubble becomes more and more important with time and compresses and accelerates the inner parts of the shell into a high-density shell, the so-called "rim" (cf. Balick 1987). Since the bubble's pressure is controlled by the central-star's wind properties, the evolution of the central star controls the shaping of the inner parts of a planetary.
Hydrodynamical simulations that took ionization and wind interaction properly into account have shown that both effects lead unavoidably to typical double-shell structures consisting of an inner high-density "rim" surrounded by a low-density "shell" with no resemblance to the initial density and velocity distributions (cf. Marten & Schönberner 1991; Mellema 1994). Thus planetaries do not contain direct information on preceding mass-loss phases during the end of the AGB evolution!
## 4. Two-Component Radiation Hydrodynamics Simulations of the Final AGB Phase
Attempts to model the evolution of planetary nebulae face the problem of selecting the proper initial configuration, i.e. density distribution and velocity field. Since practically nothing is known, rather simple conditions are usually assumed, viz. mass outflow with constant speed and rate. Our present knowledge of the late stages of stellar evolution allows, however, to draw more detailed conclusions: (i) The theory of radiation-driven winds on the AGB suggests that both the outflow rate and -speed depend on the stellar luminosity and effective temperature, and on the chemical composition as well (Arndt et al. 1997). (ii) Stellar evolution theory predicts large luminosity variations (up to a factor three) during thermal pulses, expected to lead to drastic variations of outflow rates and speeds.
One can expect that hydrodynamical simulations of AGB wind envelopes along the upper AGB give very useful information about initial conditions to be expected for planetaries. A first step into this direction has been reported by Schönberner et al. (1997). The stellar outflow is assumed to be spherically symmetric, and the equations of hydrodynamics are solved for the gas and the dust component, coupled by momentum exchange due to dust-gas collisions. We used a modified version of the code developed by Yorke & Krügel (1977), making use of the following simplifications: (i) Radiation transfer is considered only for the dust component, i.e. exchange of photons between dust grains and the gas is neglected. (ii) The dust temperature is computed from radiative equilibrium, and the gas (neutral hydrogen) is assumed to have the same (local) temperature. (iii) The dust consists of single-sized grains, either based on oxygen or carbon chemistry, adopting a fixed dust-to-gas ratio at the dust condensation point.
We introduced time-dependent values of stellar mass, luminosity, effective temperature and variable mass loss (as shown in Fig. 2) with a constant flow velocity equaling the local sound speed, $`3`$ km/s, as a boundary condition. The radiation pressure on the grains and the momentum transfer to the gas leads to an acceleration of the material to typical final outflow velocities around 10 to 15 km/s, in agreement with observations. A more detailed description of this fully implicit radiation hydrodynamics code has been given by Steffen et al. (1997) and Steffen, Szczerba, & Schönberner (1998).
### 4.1. Evolution through the Upper AGB and Beyond
We extended our AGB hydrodynamical simulations somewhat into the post-AGB regime, using the mass-loss prescription shown in the upper panel of Fig. 2. Mass-loss rate and effective temperature (or radius) of the star are coupled according to the prescription of Blöcker (1995), and the most prominent feature is a rapid decrease of the rate by orders of magnitude within about 100 years around effective temperatures of 6 000 K. The consequence is a rapid detachment and thinning of the dust shell since the density of any newly formed dust is strongly reduced and gives no detectable signature. This is illustrated by the sequence of spectral energy distributions in the lower panel of Fig. 2 which covers a time interval of less than 500 years. For this simulation we adopted an oxygen-based grain type ("Astronomical Silicates"), and the gradual disappearance of the strong silicate absorption feature with increasing shell detachment is clearly seen. At the same time, the previously totally obscured AGB remnant becomes visible. Our modelled spectral energy distributions resemble very much those of known proto-planetary nebulae (Hrivnak, Kwok, & Volk 1989), indicating that the mass-loss variations at the end of the AGB evolution as chosen by Blöcker (1995) are close to reality!
Due to the variations of the mass-loss rate as shown in the upper panel of Fig. 2, the density structure is clearly different from the usual assumption of a $`\rho \propto r^{-2}`$ law (middle panel): The density dip near $`r=10^{18}`$ cm is caused by the last thermal pulse about 30 000 years ago (cf. upper panel), while the rapid density increase towards the inner parts of the shell ($`\rho \propto r^{-3}`$) is due to the recent increase of the mass-loss rate. Further inwards the density increase flattens somewhat ($`\rho \propto r^{-1}`$). The outflow velocity is rather constant at about 11 km/s, except for a slight decrease during the last thermal pulse.
### 4.2. Evolution across the Hertzsprung-Russell Diagram
Little is really known about the development of wind strength and speed during the early post-AGB evolution. In the model shown in Fig. 2 the mass loss is set to the Reimers prescription (Reimers 1977), which is then kept until the remnant becomes hot enough for the theory of radiation-driven winds to be applicable (Pauldrach et al. 1988). A more detailed description of how mass-loss rate and wind speed may vary in the course of the post-AGB evolution is given in Marten & Schönberner (1991).
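For orientation, the Reimers prescription referred to here is commonly quoted in the form (this explicit formula is our addition for the reader's convenience, with $`\eta `$ an efficiency parameter of order unity):

$$\dot{M}=4\times 10^{-13}\,\eta \,\frac{(L/L_{\odot })(R/R_{\odot })}{M/M_{\odot }}\;M_{\odot }\,\mathrm{yr}^{-1}.$$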
In order to investigate the transformation of a cool AGB wind envelope into a planetary nebula, we used the model structure shown in Fig. 2 at time $`t_2`$ as input for another radiation hydrodynamics code. This one-component explicit code is based on a second-order Godunov-type advection scheme and considers time-dependent ionization, recombination, heating and cooling of six elements (H, He, C, N, O, Ne) with all of their ionization stages. More details can be found in Perinotto et al. (1998).
A visualization of how our model planetary develops in size, brightness and structure is presented in Fig. 4, showing H$`\alpha `$ surface-brightness distributions for selected models taken from our hydrodynamical simulation along the post-AGB evolutionary path displayed in Fig. 3.
At age = 1 837 yrs (upper left in Fig. 4) ionization has already created a small but bright shell limited by a density wave which keeps the photons trapped. The peak flow velocity in this wave is 23 km/s, whereas the flow at the inner edge of the shell is nearly stalling at only about 4 km/s. About 1 500 years later (upper right) the ionization has broken through the shock, and the shell expands and dilutes because the wave front (the shock) is further accelerated due to the density slope being steeper than $`\rho \propto r^{-2}`$. At about this time, wind interaction becomes noticeable through the formation of a compressed, bright “rim” at the inner edge of the shell.
At age = 6 365 yrs (lower left), the brightness contrast between inner rim and shell has significantly increased by the combined action of the shell's expansion into the AGB wind and compression from inside by the “hot bubble”. The whole structure corresponds to a typical “attached-halo multiple-shell PN” (Stanghellini & Pasquali 1995). The maximum flow velocity, immediately behind the shock front, is now 32 km/s, that of the rim matter about 24 km/s. This model also agrees qualitatively with the results of a structural and kinematical study of Gęsicki, Acker, & Szczerba (1996) who found, e.g., for the double-shell planetary IC 3568 shell velocities up to 40 km/s, but only about 10 km/s for the inner dense parts (the rim).
When the central star's luminosity has dropped rapidly to only a few 100 L$`_{\odot }`$, recombination within the shell reduces its brightness to typical halo values (age = 8716 yrs, lower right), which then ends the double-shell phase that lasted from about age = 2800 till age = 7600 yrs, i.e. for a quite substantial fraction of a typical PN lifetime. Though the shell's brightness is now comparable to that of a halo, it is not a halo: the matter within the recombined shell continues to expand and compresses the AGB gas into a dense but thin shell, leading to substantial limb brightening. An example for such a structure is NGC 2438, which consists of a bright ring-like shell surrounded by a limb-brightened “halo”. The analysis of Corradi et al. (in preparation) shows that this “halo” is actually the recombined former shell set up by ionization at the very beginning of the planetary's life.
## References
Arndt, T. U., Fleischer, A. J., & Sedlmayr, E. 1997, A&A, 327, 614
Balick, B. 1987, AJ, 94, 671
Blöcker, T. 1995, A&A, 297, 727
Gęsicki, K., Acker, A., & Szczerba, R. 1996, A&A, 309, 907
Hrivnak, B. J., Kwok, S., & Volk, K. 1989, ApJ, 346, 265
Marten, H. 1995, in Annals of the Israel Physical Society Vol. 11, Asymmetrical Planetary Nebulae, ed. A. Harpaz & N. Soker, 273
Marten, H., & Schรถnberner, D. 1991, A&A, 248, 590
Mellema, G. 1994, A&A, 290, 915
Pauldrach, A., Puls, J., Kudritzki, R.-P., Méndez, R. H., & Heap, S. H. 1988, A&A, 207, 123
Perinotto, M., Kifonidis, K., Schรถnberner, D., & Marten, H. 1998, A&A, 332, 1044
Reimers, D. 1977, in Problems in Stellar Atmospheres and Envelopes, ed. B. Baschek, W. H. Kegel, & G. Traving (Berlin: Springer), 229
Schönberner, D., Steffen, M., Stahlberg, J., Kifonidis, K., & Blöcker, T. 1997, in Advances in Stellar Evolution, ed. R. T. Rood & A. Renzini (Cambridge: University Press), 146
Sedlmayr, E., & Dominik, C. 1995, Ap&SS, 73, 211
Stanghellini, L., & Pasquali, A. 1995, ApJ, 452, 286
Steffen, M., Szczerba, R., & Schönberner, D. 1998, A&A, 337, 149
Steffen, M., Szczerba, R., Men'shchikov, A., & Schönberner, D. 1997, A&AS, 126, 39
Vassiliadis, E., & Wood, P. R. 1994, ApJS, 92, 125
Yorke, H. W., & Krügel, E. 1977, A&A, 54, 183
# Superconductivity in striped Hubbard Clusters
## 1 Introduction
Shortly after the discovery of the high-$`T_c`$ superconductors BED86 , the Hubbard model was introduced as a generic description of the CuO-planes on a microscopic level AND87 . According to the Van Hove scenario we use an extension, the tt′-Hubbard model, to shift the Van Hove singularity in the density of states close to the Fermi energy NEW92 . The experimental result of striped domains CHE89 ; TSU98 in the superconducting CuO-planes inside the high-$`T_c`$ materials has inspired this study of striped Hubbard clusters.
In order to understand superconductivity in the high-$`T_c`$ cuprates on a macroscopic level, the high-$`T_c`$ glass model was introduced in 1987 MOR87 ; MOR89/2 .
It was demonstrated that the high-$`T_c`$ glass model, including the tt′-Hubbard model as a microscopic description of the striped superconducting domains, is able to explain several properties of the high-$`T_c`$ cuprates MOR98 , e.g., the d-wave symmetry of the superconducting phase WOL93 ; TSU94 or the pseudogap above $`T_c`$ in the density of states DIN96 ; LOE96 . Furthermore this combined high-$`T_c`$ glass and tt′-Hubbard model picture gives an intuitive description of the experimental puzzle that different samples of the same material and same doping exhibit a nearly constant superconducting transition temperature $`T_c`$, yet the critical current densities vary from sample to sample MOR99 .
Hubbard clusters were already investigated with numerical algorithms for a number of different geometries and dimensions, e.g. the one-dimensional chains and ladders DAG92 ; FAB92 ; BUL96 ; NOA96 ; DAU99 , two-dimensional (2D) squares SAN89 ; MON94 ; MOR94 ; ZHA97/2 ; FET97/3 , and layered square systems BUL92 ; MOR92 ; MOR92/3 . The stripe instability was found theoretically within Hartree-Fock calculations applied to an extended Hubbard model ZAA89 , and was confirmed by a number of subsequent investigations ZAA99 . But up to now it is not known whether the pure Hubbard model exhibits striping, e.g., in the form of a phase separation. In the closely related 2D $`tJ`$ model the occurrence of stripes is discussed controversially HEL95 ; HEL98 ; WHI99 ; WHI99/2 . Here we study striped clusters of the Hubbard model directly and do not examine the occurrence of stripes per se, but only the existence of superconductivity in striped Hubbard clusters.
In a single striped domain we consider the tt′-Hubbard model, which is described in real space by HUB63 ; GUT63 :
$$\mathcal{H}=\mathcal{H}_{kin}+\mathcal{H}_{pot}$$
(1)
with the kinetic
$$\mathcal{H}_{kin}=-\sum_{i,j,\sigma }t_{i,j}\left(c_{i,\sigma }^{\dagger }c_{j,\sigma }+c_{j,\sigma }^{\dagger }c_{i,\sigma }\right)$$
(2)
and the potential part
$$\mathcal{H}_{pot}=U\sum_{i}n_{i,\uparrow }n_{i,\downarrow }$$
(3)
of the Hamiltonian. We denote the creation operator for an electron with spin $`\sigma `$ at site $`i`$ with $`c_{i,\sigma }^{\dagger }`$, the corresponding annihilation operator with $`c_{i,\sigma }`$, and the number operator at site $`i`$ with $`n_{i,\sigma }\equiv c_{i,\sigma }^{\dagger }c_{i,\sigma }`$. The hopping $`t_{i,j}`$ is only nonzero for nearest neighbors $`i,j`$ ($`t_{i,j}=t`$) and next nearest neighbors ($`t_{i,j}=t^{\prime }`$). Finally $`U`$ is the interaction. Usually we choose $`t^{\prime }<0`$ to shift the Van Hove singularity in the density of states close to the Fermi energy for less than half filled systems ($`n<1`$, where $`n\equiv (n_{e,\uparrow }+n_{e,\downarrow })/2`$ and $`n_{e,\sigma }`$ is the number of electrons with spin $`\sigma `$). Throughout this paper we set $`n_{e,\uparrow }=n_{e,\downarrow }\equiv n_e`$ and the energy unit as $`t=1`$. Additionally we apply periodic boundary conditions both in x- and y-direction.
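A minimal numpy sketch (ours, not from the paper) illustrates the Van Hove shift quoted above. It assumes the standard square-lattice tt′ dispersion that follows from Eq. (2) with the common sign convention, $`\epsilon (k)=2t(\mathrm{cos}k_x+\mathrm{cos}k_y)4t^{\prime }\mathrm{cos}k_x\mathrm{cos}k_y`$ with overall minus signs on both terms; the value of $`t^{\prime }`$ is illustrative.

```python
# Density of states of the non-interacting tt'-band (illustrative sketch).
# eps(k) = -2*t*(cos kx + cos ky) - 4*tp*cos kx * cos ky, with t = 1.
import numpy as np

t, tp = 1.0, -0.2                       # illustrative t' < 0
k = 2.0 * np.pi * np.arange(400) / 400  # periodic k-grid
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky)) - 4.0 * tp * np.cos(kx) * np.cos(ky)

# histogram of band energies = density of states; its peak (the Van Hove
# singularity from the saddle points at (pi,0), (0,pi)) sits at 4*t' < 0,
# i.e. below the band center, matching the less-than-half-filled scenario
dos, edges = np.histogram(eps.ravel(), bins=201, density=True)
imax = np.argmax(dos)
print("Van Hove peak near E =", 0.5 * (edges[imax] + edges[imax + 1]))
```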
## 2 Hubbard model and superconducting correlations
Following SCA89 ; FRI91 we use the (vertex) correlation function (instead of the largest eigenvalue of the reduced two-particle density matrix) as an indicator of superconductivity, making use of the standard concept of off-diagonal long range order (ODLRO) YAN62 .
We concentrate here on the d$`_{x^2-y^2}`$-wave symmetry (abbreviated as d-wave). The full two-particle correlation function is defined for the d-wave symmetry by
$$C_d(r)=\frac{1}{L}\sum_{i,\delta ,\delta ^{\prime }}g_\delta g_{\delta ^{\prime }}\left\langle c_{i,\uparrow }^{\dagger }c_{i+\delta ,\downarrow }^{\dagger }c_{i+r+\delta ^{\prime },\downarrow }c_{i+r,\uparrow }\right\rangle .$$
(4)
The vertex correlation function $`C_d^V(r)`$ is the two-particle correlation function $`C_d(r)`$ without the contributions of the single-particle correlations of the same symmetry FRI90 . For the d-wave the result is
$$C_d^V(r)=C_d(r)-\frac{1}{L}\sum_{i,\delta ,\delta ^{\prime }}g_\delta g_{\delta ^{\prime }}\,C_{\uparrow }(i,r)\,C_{\downarrow }(i+\delta ,i+r+\delta ^{\prime }).$$
(5)
In equation (5) $`C_\sigma (i,r)\equiv \langle c_{i,\sigma }^{\dagger }c_{i+r,\sigma }\rangle `$ is the single-particle correlation function for spin $`\sigma `$. The phase factors are $`g_\delta `$, $`g_{\delta ^{\prime }}=\pm 1`$ to model the d-wave symmetry, the number of lattice points is $`L`$ and the sum over $`\delta `$ (resp. $`\delta ^{\prime }`$) is over all nearest neighbors.
We averaged the vertex correlation function $`C_d^V(r)`$ only in the long-range regime of $`r`$, i.e. for distances $`|r|>|r_c|`$:
$$\overline{C}_d^{V,P}\equiv \frac{1}{L_c}\sum_{r,|r|>|r_c|}C_d^V(r)$$
(6)
with the number $`L_c`$ of lattice points with $`|r|>|r_c|`$. The qualitative behavior of our results (concerning the vertex correlation functions) is not influenced by our choice of $`|r_c|`$ as long as we suppress the short range correlations (i.e. $`r_c\geq 1.9`$).
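The following is a hedged sketch (ours) of the averaging in Eq. (6) for data on an $`L_x\times L_y`$ periodic cluster; the array `CdV` is a hypothetical placeholder for $`C_d^V(r)`$ indexed by the separation, and nothing here comes from actual PQMC output.

```python
import numpy as np

def long_range_average(CdV, Lx, Ly, r_c=1.9):
    """Average CdV[rx, ry] over all separations with |r| > r_c."""
    total, count = 0.0, 0
    for rx in range(Lx):
        for ry in range(Ly):
            # minimum-image distances on the periodic lattice
            dx, dy = min(rx, Lx - rx), min(ry, Ly - ry)
            if np.hypot(dx, dy) > r_c:
                total += CdV[rx, ry]
                count += 1
    return total / count   # this is the quantity C-bar_d^{V,P}

CdV = 1e-3 * np.random.rand(12, 4)   # placeholder data for a 12 x 4 cluster
print(long_range_average(CdV, 12, 4))
```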
Evidence for ODLRO in the d$`_{x^2-y^2}`$ channel was already found for the case of the square 2D tt′-Hubbard model MOR94 ; HUS94 ; FET97/3 . We report here the existence of ODLRO in the striped Hubbard model and investigate the influence of shape and interaction strength on the superconducting signal.
Figure 1 shows the d-wave correlation functions (eq. (4) and (5)) as a function of the distance $`|r|`$ between the pair creation and pair annihilation operators, for both the vertex and the full correlation function in a striped system. Similar to results for square systems, we obtain huge correlations in the short range part and finite, positive values of $`C_d^V(r)`$ for large distances in the system (inset of figure 1). Here it becomes obvious that the vertex correlation function is non-negative in the d-wave case.
For a comparison we also plot in figure 1 the correlation function for the extended s-wave (xs-) symmetry for the nearest neighbors. This symmetry obeys the same formulas as equations (4) and (5), except that the phase factors $`g_\delta `$ and $`g_{\delta ^{\prime }}`$ are both set equal to 1. In contrast to the d-wave case, the xs-wave symmetry does not exhibit this long range behavior (inset of figure 1). This is also in agreement with simulations for the square Hubbard model MOR94 ; HUS96 .
## 3 The PQMC-Method
We calculate the ground state properties of the Hubbard model using the projector quantum Monte Carlo (PQMC) method BLA81 ; KOO82 . In this algorithm the ground state is projected with
$$|\mathrm{\Psi }_0\rangle =\frac{1}{\mathcal{N}}e^{-\theta \mathcal{H}}|\mathrm{\Psi }_T\rangle $$
(7)
from a test state $`|\mathrm{\Psi }_T\rangle `$ with the projection parameter $`\theta `$ and the normalization factor $`\mathcal{N}`$ LIN92 . In order to perform this projection it is necessary to transform the many-particle problem into a single-particle problem. This is done in two steps: first the exponential of the Hamiltonian $`\mathcal{H}`$ is decomposed into two separate parts, $`\mathcal{H}_{kin}`$ and $`\mathcal{H}_{pot}`$, using a Trotter-Suzuki transformation SUZ76 ; LIN92 , and second the interaction term is treated with a discrete Hubbard-Stratonovich (HS) transformation, which leads to an effective single-particle problem with additional fluctuating HS fields HIR83 .
We use the second-order Trotter-Suzuki transformation, which reads as
$$e^{-\theta (\mathcal{H}_{kin}+\mathcal{H}_{pot})}=\left(e^{-\frac{\tau }{2}\mathcal{H}_{kin}}e^{-\tau \mathcal{H}_{pot}}e^{-\frac{\tau }{2}\mathcal{H}_{kin}}\right)^m+\mathcal{O}(\tau ^2),$$
(8)
where $`m`$ is the number of Trotter slices and $`\tau \equiv \frac{\theta }{m}`$. Here a systematic error of order $`\mathcal{O}(\tau ^2)`$ enters the calculations for finite $`m`$.
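The $`\mathcal{O}(\tau ^2)`$ scaling can be checked numerically. The sketch below (ours) uses small random Hermitian matrices standing in for $`\mathcal{H}_{kin}`$ and $`\mathcal{H}_{pot}`$; with the symmetric splitting of Eq. (8) the global error should shrink by roughly a factor of four each time $`m`$ is doubled.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 6
A = rng.normal(size=(d, d)); A = A + A.T   # stand-in for H_kin
B = np.diag(rng.normal(size=d))             # stand-in for H_pot (diagonal)
theta = 1.0
exact = expm(-theta * (A + B))

for m in (4, 8, 16, 32):
    tau = theta / m
    step = expm(-0.5 * tau * A) @ expm(-tau * B) @ expm(-0.5 * tau * A)
    err = np.linalg.norm(np.linalg.matrix_power(step, m) - exact)
    print(m, err)   # error decreases ~4x per doubling of m  ->  O(tau^2)
```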
The two parameters $`m`$ and $`\theta `$ influence the correct projection of the ground state $`|\mathrm{\Psi }_0\rangle `$ from the test wave function $`|\mathrm{\Psi }_T\rangle `$ in the PQMC algorithm FET98/3 ; FET98/4 .
In figure 2 we investigate the dependence of the ground state energy per site $`E_0/L`$ and of the vertex correlation function $`\overline{C}_d^{V,P}`$ on the Trotter parameter $`m`$. Both $`E_0/L`$ and $`\overline{C}_d^{V,P}`$ level off for large $`m`$, indicating the convergence of the PQMC method. The results resemble similar PQMC simulations for the square 2D tt′-Hubbard model BOR92 ; ZHA97 ; FET98 ; FET98/4 and the APEX-oxygen model FRI90 . As in these cases, the vertex correlation function is here more sensitive to $`m`$ than the ground state energy per site $`E_0/L`$ (figure 2).
Figure 3 shows the influence of the projection parameter $`\theta `$ on the same observables. The results are again in good agreement with similar simulations for the square Hubbard model FET97/3 ; FET98 ; FET98/4 .
Due to the more rapid convergence of the ground state energy compared to the vertex correlation function the relative changes of $`E_0/L`$ of figures 2 (a) and 3 (a) are significantly different compared to their (b) counterparts showing the vertex correlation function. This is also expressed by the very different scales of the corresponding y-axes.
In addition to a $`12\times 4`$ system, figure 2 (b) shows the results for a twice as large $`24\times 4`$ system. Convergence occurs here at higher values of $`\theta `$, namely $`\theta >16`$. Due to the sign problem (inset of figure 3 (a)) we were not able to perform simulations for $`\theta >16`$. Similar effects occur also for PQMC simulations of the square Hubbard model FET98 .
Quantum Monte Carlo simulations are often plagued with the sign problem ASS90 ; LOH90 . The average sign $`\langle \mathrm{sign}\rangle `$ enters the calculation of the expectation value $`\langle A\rangle `$ of an observable $`A`$ by
$$\langle A\rangle =\frac{\sum_{\sigma ,\sigma ^{\prime }}w(\sigma ,\sigma ^{\prime })A(\sigma ,\sigma ^{\prime })}{\sum_{\sigma ,\sigma ^{\prime }}w(\sigma ,\sigma ^{\prime })}=\frac{A^+-A^{-}}{\langle \mathrm{sign}\rangle }.$$
(9)
Here $`\sigma `$ and $`\sigma ^{\prime }`$ are configurations of the HS field, $`w(\sigma ,\sigma ^{\prime })`$ is their weight, and $`A(\sigma ,\sigma ^{\prime })`$ is the expectation value of $`A`$ for $`\sigma `$ and $`\sigma ^{\prime }`$ HIR83 . Now $`w(\sigma ,\sigma ^{\prime })`$ can have both positive and negative values; thus, when used in a Monte Carlo algorithm for a transition probability, one uses the right hand side of equation (9). $`A^+`$ and $`A^{-}`$ denote the separate averages over HS configurations $`\sigma `$ and $`\sigma ^{\prime }`$ with positive resp. negative weights $`w(\sigma ,\sigma ^{\prime })`$. Generally speaking, QMC simulations are only meaningful for $`\langle \mathrm{sign}\rangle `$ close to 1.
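A minimal sketch (ours) of the reweighting in Eq. (9), with synthetic signed weights and observable values standing in for real HS-field data:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(loc=0.3, scale=1.0, size=100_000)   # signed weights w(sigma, sigma')
A = 2.0 + 0.1 * rng.normal(size=100_000)            # observable samples A(sigma, sigma')

avg_sign = np.sum(w) / np.sum(np.abs(w))            # <sign>
estimate = np.sum(w * A) / np.sum(w)                # = (A+ - A-) / <sign>
print(avg_sign, estimate)   # as <sign> -> 0 the variance of the estimator blows up
```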
The average sign $`\langle \mathrm{sign}\rangle `$ is known to decrease for increasing system size $`L`$, interaction strength $`U`$ and projection parameter $`\theta `$ (see the insets of figures 2 and 3, and ASS90 ; LOH90 ). But a small average sign leads, among other things, to large statistical errors in the Monte Carlo process and renders the simulation results meaningless.
From the above analysis of the dependence of the ground state energy and the correlation functions on $`m`$ and $`\theta `$ we conclude that the PQMC simulations are converged for $`U=2`$ when $`\theta \geq 8`$ in smaller systems and $`\theta \geq 16`$ in larger systems. A ratio $`\tau =\frac{\theta }{m}=\frac{1}{8}`$ of the projection parameter and the Trotter parameter was found to be sufficient for a correct decomposition. Due to the sign problem there is only a small range of the parameters where $`\theta `$ can be chosen sufficiently large, so that the investigation of the vertex correlation function and its long range behavior is possible. This is similar to the case of the square Hubbard model FET97/3 .
For smaller systems we also performed some simulations of the Hubbard model using the stochastic diagonalization RAE92 ; FET97/2 (figure 6). They also compare favorably with their PQMC counterparts, which is an additional indication that the PQMC performs correctly.
## 4 Superconductivity in stripes
We now investigate the dependence of the superconducting properties of the striped Hubbard model on system size, shape and interaction strength.
The geometry has a quite significant effect on the magnitude of the correlation functions. For increasing width $`L_y`$ of the stripes (figure 4), the average long range part of the vertex correlation function $`\overline{C}_d^{V,P}`$ decreases significantly. The ratio between $`\overline{C}_d^{V,P}`$ for a rectangular $`12\times 4`$ and a square $`12\times 12`$ system is almost 3. In figure 5 we show $`\overline{C}_d^{V,P}`$ for both the square systems and the rectangular $`12\times L_y`$ systems from figure 4, as a function of the system size $`L=L_x\times L_y`$. Here the rectangular shaped systems always show a higher superconducting signal than square systems of the same size $`L`$.
This is rather surprising when one takes into account that in striped systems, on average, the distances $`|r|`$ between pair creation and pair annihilation operators are larger than in square systems with the same number of sites $`L`$. In our view there are two effects which may increase the superconducting correlations. The first is the anisotropy of $`L_x`$ and $`L_y`$, which leads to more finite size shells. Finite size shells refer to the energy levels of the free Hubbard clusters ($`U=0`$). It is known that other ways of introducing additional shells, e.g., anisotropic hopping $`t_x\neq t_y`$ KUR96 ; KUR97 , or additional hopping to next nearest neighbors HUS96 , increase the superconducting correlations in the repulsive Hubbard model.
In our view it is a second effect, the squeezing of the system in one dimension, that gives rise to these increased superconducting correlations.
In contrast to the width $`L_y`$, the height of the plateau is relatively insensitive to the length $`L_x`$ of the stripes (figure 6).
Another way to strengthen the superconducting correlations is to increase the (repulsive) Hubbard interaction $`U`$, as shown in figure 7. Here we present both common methods for analyzing superconductivity: the full correlation function IMA91 ; SCA89 ; WHI89 and the vertex function SCA89 ; FRI91 ; FET97/2 . The dotted lines in figure 7(a) and (b) indicate the values of the full (vertex) correlation function in the case of no interaction $`U=0`$. Figure 7(a) shows that the full correlation function increases for higher interactions. The vertex correlation function is zero for the case of no interaction ($`U=0`$) and also increases monotonically for increased interaction strength. Due to the sign problem we were not able to perform simulations for an interaction strength $`U>2.5`$ for the system size and filling shown in figure 7.
Thus, within the range of parameters accessible by the PQMC method, the superconducting correlations increase for increasing repulsive interaction strength $`U`$. Both the full and the vertex correlation function show this behavior. We want to note that the vertex correlation function is much more sensitive to variations of $`U`$, due to the subtraction of the background of the single-particle correlation functions (figure 7). Thus the vertex correlation function is the more appropriate observable to analyze superconductivity in small Hubbard clusters. These results are similar to observations made for the BCS-reduced Hubbard model FET97/2 .
## 5 Effective interaction in striped Hubbard clusters
Due to the failure of the usual finite size scaling in the square 2D Hubbard model we introduced HUS96 ; FET97/3 ; FET98 an effective model, the BCS-reduced Hubbard model, to compare the superconducting correlations for different system sizes $`L`$. This failure is mainly caused by the underlying shell structure of the free ($`U=0`$) system FET98 ; FET98/3 ; HUS96 .
The BCS-reduced Hubbard model exhibits the same corrections to scaling as the Hubbard model, and has a well-chosen interaction term that produces superconductivity with d-wave symmetry. We calculate for this model the same correlation functions as for the Hubbard model. The effective interaction strength $`J_{eff}`$ is then chosen to give the same values for the correlation functions as the Hubbard model (for details see FET98 ). From this we get a direct estimate of the superconducting interaction strength.
The calculation of an effective interaction for the three band Hubbard model was used to identify the pairing mechanism for d-wave superconductivity in this model. The evidence of d-wave pairing in this case is based on symmetry arguments and exact diagonalization results of small clusters CIN97 ; CIN98 ; CIN98/2 ; CIN99 .
In momentum space the BCS-reduced Hubbard model is described by the Hamiltonian:
$$\mathcal{H}^{BCS}=\mathcal{H}_{kin}^{BCS}+\mathcal{H}_{int}^{BCS}.$$
(10)
The kinetic part is again equation (2), only transformed to momentum space, i.e. $`\mathcal{H}_{kin}^{BCS}=\mathcal{H}_{kin}`$, and the interaction is defined by (for d-wave interaction)
$$\mathcal{H}_{int}^{BCS}=-\frac{J}{L}\sum_{\substack{k,p \\ k\neq p}}f_kf_p\,c_{k,\uparrow }^{\dagger }c_{-k,\downarrow }^{\dagger }c_{-p,\downarrow }c_{p,\uparrow }.$$
(11)
In equation (11) we use the form factors
$$f_k\equiv \mathrm{cos}(k_x)-\mathrm{cos}(k_y)$$
(12)
to model the d-wave symmetry in 2D ($`k\equiv (k_x,k_y)`$).
We calculate the ground state of this BCS-reduced Hubbard model with the exact and the stochastic diagonalization RAE92 ; RAE92/2 ; FET97/2 .
In figures 8 and 9 we show the effective interaction $`J_{eff}`$ corresponding to the correlation functions and systems shown in figures 4 and 6. Within the error bounds of the simulations we conclude that $`J_{eff}`$ is nearly constant for the various geometries of the system. It is not possible to calculate stochastic errors of the physical observables within the SD. But for smaller system sizes our comparison of SD with exact diagonalization results indicates that for the weak interactions $`J`$ used here the errors in the SD are negligible HUS96 ; FET97/2 . The error bars shown in figures 8 and 9 are therefore calculated using only the statistical errors of the PQMC results and fitting these values to the SD results.
In addition to the above, one has to take into account that, even though we tried to perform the calculations at a constant filling $`n\approx 0.8`$, the constraint of closed shells for PQMC simulations leads to different fillings $`n`$ for each of these system sizes. Furthermore, in the case of figure 9 (and 6 respectively) one has to take into account that all simulations are performed at a constant $`\theta =8`$, whereas figure 3 indicates that for large system sizes $`L`$ a higher value of $`\theta `$ would lead to slightly higher values of the vertex correlation function in the PQMC runs and thus to a slightly lower effective interaction $`J_{eff}`$.
From figures 8 and 9 we conclude that within the accuracy of the applied methods, the effective interaction strength $`J_{eff}`$ is equal for both square and striped systems. Furthermore $`J_{eff}`$ is insensitive to the length of the striped systems.
## 6 Summary and Conclusions
Here we performed ground state simulations of the 2D tt′-Hubbard model and the BCS-reduced Hubbard model for striped clusters using PQMC and SD techniques. Together with the exact diagonalization these are the most reliable computational tools for this type of calculation.
We concentrated our investigations on the behavior of rectangular striped systems. In agreement with previous calculations for the square Hubbard model we find that these finite systems show evidence for superconductivity in the $`d_{x^2-y^2}`$ channel for repulsive interactions $`U`$. Compared to the square case these correlations are significantly enhanced, and the superconducting signal is nearly insensitive to the length of these stripes.
Using SD-techniques we were capable of estimating the effective superconducting interaction strength $`J_{eff}`$ of a BCS-reduced Hubbard model with the same symmetry of the superconducting correlation functions. Within the accuracy of our methods both square and striped Hubbard model show approximately the same superconducting interaction strength $`J_{eff}`$.
In conclusion, the striped Hubbard model is a promising candidate for the microscopic description of the superconducting striped domains in the high-$`T_c`$ cuprates. Within the larger framework of the high-$`T_c`$ glass model, a combined model is able to explain many puzzling properties of the high-$`T_c`$ materials.
## 7 Acknowledgment
We want to thank P.C. Pattnaik, D.M. Newns, C.C. Tsuei, T. Doderer, H. Keller, T. Schneider, J.G. Bednorz, and K.A. Müller for very helpful discussions. The LRZ Munich grants us a generous amount of CPU time on their IBM SP2 parallel computer, which is highly appreciated. Finally we acknowledge the financial support of the UniOpt GmbH, Regensburg.
# DISTANCE DEPENDENCE IN THE SOLAR NEIGHBORHOOD AGE-METALLICITY RELATION
## 1 Introduction
The age-metallicity relation (AMR) for stars, coupled with the stellar metallicity distribution and the star formation history, is a fundamental constraint on models for the chemical evolution of the solar neighborhood, providing the time history of the enrichment of the interstellar medium. Defining this relation, however, has not been a trivial task, as it requires obtaining the ages, distances, metallicities, and kinematics for a large sample of stars. The age-metallicity relation established by Twarog (1980) was a key constraint on chemical evolution models for the solar neighborhood. However, this study lacked kinematic information for the stars, and thus information on the amount of contamination by stars not born in the solar neighborhood (such as thick disk stars, whose connection to galactic evolution is uncertain at present).
More recently, Edvardsson et al. (1993; hereafter Edv93) published abundances for numerous heavy elements in field F and G dwarf stars having kinematic information. These data have provided a wealth of information on abundances and abundance ratios as a function of time and kinematics in the galactic disk. The sample of stars chosen had a variety of photometric information and space velocities, allowing them to be placed on the HR diagram and in the appropriate kinematic population. One surprising result from this study was that the AMR derived by Edv93 showed much greater scatter (inferred to be 0.6-1.0 dex by some papers) than could be attributed to observational uncertainties. This result suggested that chemical evolution in the solar neighborhood has been highly inhomogeneous over time.
A number of theoretical explanations for the scatter in the local AMR have been proposed, including: radial diffusion of stellar orbits (François & Matteucci 1993; Wielen, Fuchs, & Dettbarn 1996); episodic infall of metal-poor gas (Edv93; Pilyugin & Edmunds 1996); and sequential or stochastic enrichment by stellar populations (van den Hoek & de Jong 1997; Copi 1997). Which mechanism might be most important is unknown, but there is no lack of explanations. On the other hand, the inference of large scatter is inconsistent with abundance measurements in nearby spiral and irregular galaxies (e.g., Kennicutt & Garnett 1996 and Kobulnicky & Skillman 1996), and in the local ISM (Meyer, Jura, & Cardelli 1998), which show that dispersions in ISM abundances are rather small on kiloparsec scales or less. It is difficult to understand how a largely homogeneous ISM could give rise to a large dispersion in stellar metallicities. The apparent dispersion in the Edv93 data is inconsistent with the smaller dispersion derived by Twarog (1980) as well.
Therefore, it seems appropriate to re-examine the AMR. Edv93 warned that their star sample was not an unbiased sample (see also Nissen 1995) and thus should be used cautiously in interpreting the AMR. We will demonstrate below that there is a systematic dependence in the amount of scatter in the AMR on the properties of the stars, in particular on stellar distance. We will conclude that the intrinsic scatter in the AMR is smaller than has sometimes been inferred.
## 2 The Stellar Sample
The Edv93 metallicity sample consisted of 189 somewhat evolved F and G stars within 80 pc of the sun. The sample was selected to have roughly equal numbers of stars in nine metallicity bins from \[M/H\] = +0.2 to $``$0.9, where \[M/H\] is the logarithmic metallicity relative to solar. In order to obtain such sampling, Edv93 had to observe fainter and more distant stars to obtain a sufficient number of stars in the low-metallicity bins.
Of this sample, eleven stars are spectroscopic binaries, which can have larger uncertainties in distances and ages; another eleven stars had large uncertainties in their proper motions. We have excluded these from our analysis. Seven other stars had no age estimates and were also excluded. We have retained the stars labeled “hook” stars by Edv93, although their ages may be systematically underestimated by up to 0.15 dex.
Not all of the stars in the Edv93 sample had trigonometric parallaxes in 1993, and so they relied on distances based on their Strömgren photometry. Since then, parallaxes from the Hipparcos catalog (ESA 1997) have become available for all of these stars. Ng & Bertelli (1998) have published revised ages for the stars based on the Hipparcos parallaxes and the more recently computed stellar evolution tracks of Bertelli et al. (1994). Comparison of the revised stellar ages with the Edv93 ages showed little systematic difference (Ng & Bertelli 1998), indicating that age uncertainties are not the main source of the scatter in age vs. metallicity. However, there is a slight reduction in the scatter when the Hipparcos distances and Ng & Bertelli (1998) ages are used; we will therefore base the following discussion on the revised ages from Tables 5 and 6 of Ng & Bertelli (1998) and Hipparcos distances.
## 3 Distance Dependence of the Scatter in Metallicity
Figure 1 shows the AMR plot for \[Fe/H\] based on our selected subset of the Edv93 stars. Assuming the quoted observational uncertainties of $`\pm `$0.1 dex in \[Fe/H\] and age, a Monte Carlo analysis indicates that an additional Gaussian dispersion in \[Fe/H\] of $`\pm `$0.15-0.2 dex beyond the observational scatter is required to account for the observed scatter.
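A hedged sketch (ours) of the kind of Monte Carlo estimate quoted above: draw \[Fe/H\] values with an intrinsic Gaussian dispersion, add the quoted $`\pm `$0.1 dex measurement error, and compare the resulting total scatter with the observed one (the numbers below are illustrative placeholders, not the actual sample values).

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_err = 0.10                                 # quoted observational error (dex)
n = 200_000
for sigma_int in (0.10, 0.15, 0.20):
    feh = rng.normal(0.0, sigma_int, n) + rng.normal(0.0, sigma_err, n)
    # total scatter ~ sqrt(sigma_int**2 + sigma_err**2); sigma_int = 0.15-0.2
    # is needed to reach an observed dispersion of ~0.2 dex or more
    print(sigma_int, round(feh.std(), 3))
```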
We were originally concerned that uncertainties in the distances to the stars could introduce errors into their ages, and thus could cause artificially large scatter in the AMR. Therefore, we examined the scatter as a function of distance to the stars. We made a simple division of the sample into two groups: a nearby group with distances less than 30 parsecs from the sun (89 stars), and a more distant set containing the remaining 71 stars with distances greater than 30 parsecs (which extends out to 80 parsecs). Figure 2 shows the age-metallicity diagrams for the two groups of stars. The comparison is remarkable: the nearest stars show an obvious reduction in scatter in \[Fe/H\] at a given age compared to the full sample of Figure 1, especially for the intermediate ages (log $`\tau `$ between 0.5 and 0.9 in Gyrs). In fact, Figure 2 reveals a striking asymmetry in the metallicity distribution for the nearby and more distant stars. While for the nearby stars \[Fe/H\] rises steeply with decreasing age and then levels off for the younger stars, for the more distant stars a more gradual increase in \[Fe/H\] appears to be the case. To show that this difference is not a simple statistical fluctuation, we examine the stars in the range 0.5 $`<`$ log $`\tau `$ $`<`$ 0.9, where the more distant sample shows a dearth of metal-rich stars and an enhancement in the number of metal-poor stars compared to the nearby sample. We show the distributions of \[Fe/H\] for the near and far stars within this age range in Figure 3. A Mann-Whitney test rejects the hypothesis that these two samples come from populations with the same mean \[Fe/H\] at the 99.99% confidence level, while a Kolmogorov-Smirnov test indicates that the probability that these two samples are drawn from the same population is only $`8.3\times 10^{-4}`$. Thus, it appears that the difference between these two groups of stars is highly significant. (We see the same patterns in plots for other elements as well.) The difference suggests that the more distant sample may include stars whose chemical properties do not reflect the evolution of the disk in the solar neighborhood. This is plausible since the more distant stars sample a volume that is 18 times larger than the stars within 30 pc.
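For readers who wish to reproduce this kind of comparison, a sketch (ours) of the two statistical tests using scipy; the arrays are hypothetical placeholders for the \[Fe/H\] values of the near and far subsamples, which in practice come from the Edv93 tables.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ks_2samp

rng = np.random.default_rng(3)
feh_near = rng.normal(-0.05, 0.15, size=40)   # placeholder: d < 30 pc stars
feh_far  = rng.normal(-0.35, 0.20, size=30)   # placeholder: d > 30 pc stars

print(mannwhitneyu(feh_near, feh_far))        # tests equality of the central values
print(ks_2samp(feh_near, feh_far))            # tests equality of the distributions
```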
We explore the properties of the two star samples further. Figure 4 plots \[Fe/H\] vs. the mean stellar orbital radius $`R_{mean}`$ (from Edv93), Figure 5 shows \[Fe/H\] vs. the maximum height $`Z_{max}`$ of each star above the Galactic plane (one star, HD148816, lies outside Fig. 5 with \[Fe/H\] = $`-`$0.74 and $`Z_{max}`$ = 5.44 kpc), and Figure 6 plots \[Fe/H\] vs. orbital eccentricity. (We have not attempted to re-derive orbital parameters for the stars based on the new parallax results. Although there may be significant changes for a few stars, for the vast majority of these stars distances have changed only slightly and so the orbits will also change negligibly.) Two things can be discerned from these plots. First, there is a tight cloud of disk stars with mean orbital radii between 7.0 and 8.5 kpc, orbit eccentricities $`<`$ 0.15, and \[Fe/H\] $`>`$ $`-`$0.5. Second, most of the metal-poor stars in the sample have eccentric orbits that range far from the solar radius and away from the galactic plane. Edv93 also noted the increase in the vertical velocity dispersion for old, metal-poor stars in their sample (see their Fig. 16). It can be inferred from this that the sample is contaminated by thick disk stars and other stars from outside the solar circle.
The differences between the upper and lower panels of Figure 2 largely reflect the sample selection criteria used by Edv93. Metal-poor stars are more rare than metal-rich ones in the solar neighborhood. Thus, in order to have roughly equal stars in each metallicity bin, the metal-poor stars will be fainter and more distant, on average. Second, Edv93 selected stars which were at least 0.4 magnitudes above the main sequence but within the temperature range 5600-7000 K. Finding young stars that meet these criteria is more difficult than finding older stars. Therefore, the young stars will also tend to be more distant. The third difference is the fact that the stars in the outer shell with 0.5 $`<`$ log $`\tau `$ $`<`$ 0.9 are systematically more metal-poor than stars of the same ages in the inner shell. We suspect that this is also likely a result of the selection criteria, but this is more difficult to understand without modeling the selection biases. A comparison of the kinematics of the two sets could prove interesting, but is beyond the scope of this paper. Figure 2 makes it clear that this is not a randomly selected sample of stars. If the inner and outer circles were fair samples of the solar neighborhood metallicity distribution, Figs. 2(a) and 2(b) should show similar age-metallicity relationships.
Finally, we compare the \[Fe/H\] distribution for the stars in the Edv93 sample with the volume-limited sample of G and K dwarfs from Favata, Micela, & Sciortino (1997). Favata et al. derived \[Fe/H\] for a random selection of 92 stars from the Gliese catalog of nearby stars. There are eight stars in common between Edv93 and Favata et al.; the mean difference in \[Fe/H\] is 0.03$`\pm `$0.03 dex, suggesting no significant systematic difference in the two metallicity scales. However, Favata et al. noted a peculiarity in their \[Fe/H\] distribution, in that the cooler K dwarfs showed a higher mean and smaller dispersion in \[Fe/H\] than the G dwarfs. For fair comparison, therefore, we restrict our discussion to the 39 Favata et al. stars with $`T_{eff}`$ $`>`$ 5600 K, the lower $`T_{eff}`$ limit of the Edv93 sample, to be compared to the Edv93 stars within 30 pc of the Sun.
We plot the \[Fe/H\] distributions of these two samples as histograms in Figure 7. The Edv93 stars are clearly skewed toward lower metallicities compared to the Favata et al. sample, with the Edv93 sample missing the most metal-rich stars found by Favata et al., while showing an excess of stars in the range $`-`$0.1 $`>`$ \[Fe/H\] $`>`$ $`-`$0.4. Again, this could be a result of the metallicity selection in the Edv93 sample. It is probably not possible to say at present if the volume-limited sample has a smaller dispersion than the Edv93 sample, given the small numbers of stars involved. Unfortunately, the Favata et al. stars lack kinematic data, so a more detailed comparison is not possible at present.
## 4 Discussion
It is apparent from this exercise that determining the shape and dispersion in the age-metallicity relation is not particularly straightforward, even for a high-quality data set such as the Edv93 sample. One must be careful to account for kinematically distinct populations, sample selection, and abundance peculiarities to derive a representative solar neighborhood sample. Although diffusion of stars from outside the solar circle does contribute somewhat to the scatter in abundances, it does not account for most of it, contrary to the suggestion of Wielen, Fuchs, & Dettbarn (1996). That this is the case can be inferred by comparing the Edv93 AMR with that from Twarog (1980). The measured dispersion in \[Fe/H\] at every age bin in the Twarog sample is smaller than that measured in the Edv93 sample; the dispersion in \[Fe/H\] in the Twarog sample ranges from 0.06 to at most 0.18 dex, compared with the 0.24 dex determined by Edv93. If stellar diffusion is responsible for most of the scatter in the Edv93 AMR, then the dispersion in the Twarog AMR should be at least as large as that of Edv93, not smaller. This reinforces the argument that the large scatter in the Edv93 AMR is most likely due to selection effects.
Twarog, Ashman, & Anthony-Twarog (1998) measured a dispersion in \[Fe/H\] of only 0.1 dex for galactic open clusters, which are presumably less subject to diffusion effects than individual field stars. In comparison, the average dispersion in the Twarog (1980) AMR is 0.15 dex for field stars with ages in the range 3-10 Gyr. If this represents the true dispersion in metallicity for the field stars, then the difference between the field star and cluster results (corresponding to a scatter of about 0.1 dex) could be attributable to stellar diffusion, although the effect of sampling from different parts of the galactic metallicity gradient needs to be accounted for as well. A comparison of the cluster data with a complete sample of field stars having kinematic data could provide a more stringent test of stellar diffusion.
Contamination by thick disk stars complicates the determination of the metallicities of the oldest thin disk stars. On the other hand, the kinematic data suggest that the thick disk may be an ancient population (see Fig. 31 of Edv93 and corresponding discussion in Freeman 1991). With larger complete samples of stars with metallicity and kinematic measurements it may be possible to subtract the thick disk contribution statistically. This would be a very important measurement because the initial metallicity of the thin disk is poorly known (as is its age as well). Determination of this quantity would provide an important constraint on chemical evolution models.
Abundance measurements for the interstellar medium in galaxies typically imply small dispersions in metallicity. Kennicutt & Garnett (1996) found a dispersion of only 0.1-0.2 dex about the radial gradient in O/H from 41 H II regions in the spiral galaxy M101, consistent with observational uncertainties and implying that the intrinsic abundance dispersion is negligible. Kobulnicky & Skillman (1996, 1997) found a dispersion in O/H of only $`\pm `$0.05 dex in the dwarf irregular galaxy NGC 1569, and $`\pm `$0.10 dex in NGC 4214. Closer to home, Meyer, Jura, & Cardelli (1998) have found a very small dispersion, only $`\pm `$0.05 dex in O/H, in local (within 500 pc) diffuse interstellar gas. The combined data from ISM and star cluster observations imply that the ISM is relatively well-mixed on size scales $`<`$ 0.5 kpc and $`>`$ 1 kpc, or that mixing occurs on sufficiently large spatial and time scales that supernova ejecta are considerably diluted by ambient gas.
Roy & Kunth (1995) and Elmegreen (1998) discuss mixing of SN ejecta on small ($`<`$ 1 kpc) scales. The implication from these studies is that it is difficult to maintain a metallicity dispersion greater than 0.15 dex because of the efficiency of mixing processes. Elmegreen (1998), considering the enrichment of clouds by supernovae, predicts inhomogeneities of only about 0.05 dex within molecular clouds. This level of inhomogeneity is indeed consistent with the abundance dispersion measured in interstellar gas, but not with the apparent dispersion in stellar metallicities. On the other hand, data on the dispersion in abundances on intermediate size scales are lacking. Further studies of interstellar abundances over size scales of $``$ 1 kpc, along with further studies of stellar abundances with age, are needed to improve our understanding of the distribution and mixing of heavy elements in our galaxy and others.
To summarize, we conclude that, while the Edv93 stellar abundance data provide invaluable information on the evolution of element abundance ratios over time, it is not possible to infer either the dispersion in metallicity or even the shape of the age-metallicity relation from these data because of various selection biases, as pointed out by the authors themselves. As a result of these biases, we infer that the intrinsic scatter in the Edv93 AMR must be much smaller than that measured. We therefore argue that the Twarog (1980) study remains at present the preferred determination of the solar neighborhood AMR, until a new study based on a complete sample of stars becomes available.
We thank B. Edvardsson, B. Gustafsson, A. Quillen, and R. Wyse for informative discussions, and V. Smith for comments on the manuscript. We also thank the referee, Bruce Twarog, for a very helpful and stimulating discussion of the issues underlying the analysis here. DRG is grateful for support from NASA-LTSARP grant NAG5-7734, while HAK acknowledges support from NASA and STScI through Hubble Fellowship HF-1094.01-97A. |
# Symmetries of Excited Heavy Baryons In The Heavy Quark And Large $`N_c`$ Limit
## Abstract
We demonstrate in a model independent way that, in the combined heavy quark and large $`N_c`$ limit, there exists a new contracted U(4) symmetry which connects orbitally excited heavy baryons to the ground states.
Due to our inability to solve nonperturbative QCD from first principles, most of our quantitative understanding of low energy hadron properties is based on symmetry considerations. The most notable of these schemes is chiral perturbation theory, which is based on the fact that the QCD Lagrangian is approximately chirally invariant. On the other hand, there are emergent symmetries which are not symmetries (not even approximate symmetries) of the QCD Lagrangian, but emerge as symmetries of an effective theory obtained by taking certain limits. Two famous examples of such emergent symmetries, namely the heavy quark symmetry and the large $`N_c`$ spin-flavor symmetry, have important phenomenological implications and are well discussed in the literature.
In this paper, we will discuss a new emergent symmetry of QCD, which emerges in the heavy baryon (baryon with a single heavy quark) sector in the combined heavy quark and large $`N_c`$ limit. As we will see below, this contracted U(4) symmetry connects the ground state baryon to some of its orbitally excited states. As a result, static properties like the axial current couplings and the moments of the weak form factors of these orbitally excited states can be related to their counterparts of the ground state. While some of these results have been discussed before in the literature, this is the first time that they are presented as symmetry predictions. Moreover, unlike previous studies, the analysis here is essentially model independent, depending only on the heavy quark and large $`N_c`$ limits. We will outline the steps through which the existence of this symmetry can be demonstrated, while the details of the construction will be reported in the longer paper.
It has been pointed out that, in the combined heavy quark and large $`N_c`$ limit, a heavy baryon can be regarded as a bound state of a heavy meson and a light baryon (a baryon for which all valence quarks are light). (Similar models have been studied elsewhere.) For concreteness, we will always adopt the prescription that the heavy quark limit is taken before the large $`N_c`$ limit. Since both constituents are infinitely massive (the heavy meson in the heavy quark limit, the light baryon in the large $`N_c`$ limit), a small attraction between them is sufficient to ensure the existence of a bound state. By the usual large $`N_c`$ counting rules it can be shown that the binding potential is of order $`N_c^0`$. Moreover, as the kinetic energy term is suppressed by the large reduced mass of the bound state, the wave function does not spread and is instead localized at the bottom of the potential. As a result, it can be approximated by a simple harmonic potential: $`V(x)=V_0+\frac{1}{2}\kappa \vec{x}^2`$. By the large $`N_c`$ counting rules, both $`V_0`$ and $`\kappa `$ are of order $`N_c^0`$, and it has been shown that in the models studied in these references $`V_0<0`$, $`\kappa >0`$, i.e., the potential is attractive and can support bound states. When the bound state is the ground state of the simple harmonic oscillator, it is a $`\mathrm{\Lambda }_Q`$, the lightest heavy baryon containing the heavy quark $`Q`$. On the other hand, excited states in the simple harmonic oscillator are orbitally excited heavy baryons. We emphasize that this description of a heavy baryon is a model, which is not directly related to QCD.
After describing the physical picture of this model, which we will refer to as the bound state picture, we make the crucial observation that the excitation energy $`\omega =\sqrt{\kappa /\mu }`$ is small, where $`\mu `$ is the reduced mass of the bound state. By first taking the heavy quark limit, $`\mu =m_N`$ (the mass of the light baryon) scales like $`N_c`$. Since the spring constant $`\kappa `$ is of order $`N_c^0`$, $`\omega `$ scales like $`N_c^{-1/2}`$ and vanishes in the large $`N_c`$ limit. This implies that when $`N_c\to \mathrm{\infty }`$, the whole tower of excited states becomes degenerate with the ground state, a classic signature of an emergent symmetry.
What is the symmetry group of this emergent symmetry then? It has to contain, as a subgroup, the symmetry group of a three-dimensional simple harmonic oscillator, namely the U(3) generated by $`T_{ij}=a_i^{\dagger }a_j`$ ($`i,j=1`$, 2, 3), where $`a_j`$ is the annihilation operator in the $`j`$-th direction. These $`T_{ij}`$'s satisfy the U(3) commutation relations.
$$[T_{ij},T_{kl}]=-\delta _{il}T_{kj}+\delta _{kj}T_{il}.$$
(1)
When $`N_c\mathrm{}`$ and the excited states become degenerate with the ground state, the annihilation and creation operators $`a_j`$ and $`a_i^{}`$ ($`i,j=1`$, 2, 3) also become generators of the emergent symmetry. The additional commutation relations are
$$[a_j,T_{kl}]=\delta _{kj}a_l,\qquad [a_i^{\dagger },T_{kl}]=-\delta _{il}a_k^{\dagger },\qquad [a_j,a_i^{\dagger }]=\delta _{ij}\mathbf{1},$$
(2)
where $`\mathbf{1}`$ is the identity operator. These sixteen generators $`\{T_{ij},a_l,a_k^{\dagger },\mathbf{1}\}`$ form the spectrum generating algebra of a three-dimensional harmonic oscillator. It is related to the usual U(4) algebra, generated by $`T_{ij}`$ ($`i,j=1`$, 2, 3, 4) satisfying commutation relations (1), by the following limiting procedure:
$$a_j=\lim_{R\to \mathrm{\infty }}T_{4j}/R,\qquad a_i^{\dagger }=\lim_{R\to \mathrm{\infty }}T_{i4}/R,\qquad \mathbf{1}=\lim_{R\to \mathrm{\infty }}T_{44}/R^2.$$
(3)
Such a limiting procedure is called a group contraction, and hence the group generated by $`\{T_{ij},a_l,a_k^{\dagger },\mathbf{1}\}`$ is called a contracted U(4) group.
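As a simple consistency check (our own algebra, combining Eqs. (1) and (3)), the contraction indeed reproduces the last commutator in Eq. (2):

$$[a_j,a_i^{\dagger }]=\lim_{R\to \mathrm{\infty }}\frac{1}{R^2}[T_{4j},T_{i4}]=\lim_{R\to \mathrm{\infty }}\frac{1}{R^2}\left(\delta _{ij}T_{44}-T_{ij}\right)=\delta _{ij}\mathbf{1},$$

since $`T_{44}/R^2\to \mathbf{1}`$ while $`T_{ij}/R^2\to 0`$ for $`i,j=1`$, 2, 3.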
So we have shown that the contracted U(4) is a symmetry of the bound state picture. But is it also a symmetry of QCD itself? We claim the answer is affirmative, and it is the aim of this paper to demonstrate the existence of this U(4) symmetry in QCD itself. We will first construct operators $`p_j`$ and $`x_j`$ ($`j=1`$, 2, 3), which correspond to the momentum and central position of the “brown muck” (light degrees of freedom) of the heavy baryon. From them operators $`a_j`$ and $`a_j^{\dagger }`$ are constructed. By considering the double and triple commutators of the QCD Hamiltonian $`\mathcal{H}`$ with these operators, one can show that $`\mathcal{H}`$ is at most a bilinear in $`a_j`$ and $`a_j^{\dagger }`$ as $`N_c\to \mathrm{\infty }`$. Then it is straightforward to show that in the large $`N_c`$ limit,
$$\mathcal{H}=H+H^{\prime },\qquad H=\omega \sum_{j=1}^{3}a_j^{\dagger }a_j,$$
(4)
where $`\omega `$ is a parameter of order $`N_c^{-1/2}`$ and $`H^{\prime }`$ is an operator which commutes with all the $`a_j`$ and $`a_j^{\dagger }`$. This Hamiltonian clearly has a contracted U(4) emergent symmetry in the large $`N_c`$ limit as $`\omega \to 0`$.
Due to the conservation of baryon number and heavy quark number (in the heavy quark limit), it is well-defined to restrict our attention to the heavy baryon Hilbert space, i.e., the subspace with both heavy quark number and baryon number equal to unity. In this subspace, we will define operators which correspond to the momenta and positions of the heavy quark and brown muck of the heavy baryon. So let $`\mathcal{H}`$ be the QCD Hamiltonian in this heavy baryon Hilbert space, $`P_j`$ ($`j=1`$, 2, 3) be the momentum operators, and $`X_j`$ be the operators conjugate to $`P_j`$. On the other hand, in the heavy quark limit, both the heavy quark mass $`m_Q`$ and the heavy quark momentum operators $`P_{Q,j}`$ ($`j=1`$, 2, 3) are well-defined up to order $`N_c^0`$ ambiguities, and $`X_{Q,j}`$ are the operators conjugate to $`P_{Q,j}`$. These operators satisfy the following operator identities:
$$[X_j,\mathcal{H}]=iP_j,\qquad [m_QX_{Q,j},\mathcal{H}]=iP_{Q,j},$$
(5)
where the first identity follows from Poincaré invariance, and the second from heavy quark symmetry where $`\mathrm{\Lambda }_{QCD}/m_Q`$ corrections are dropped. Note that in the heavy quark limit, both $`X_j`$ and $`X_{Q,j}`$ commute with $`\mathcal{H}`$ and hence are constants of motion. Lastly, we define the brown muck momentum operators $`p_j=P_j-P_{Q,j}`$, and $`x_j`$ to be their conjugate operators.
The Hamiltonian $`\mathcal{H}`$ can be decomposed into three pieces: $`\mathcal{H}=m_Q+m_N+\stackrel{~}{H}`$, where by the large $`N_c`$ scaling rules $`m_N\sim N_c`$ and $`\stackrel{~}{H}\sim N_c^0`$. Moreover, since $`X_j=m_QX_{Q,j}+(m_N+\stackrel{~}{H})x_j`$, one can subtract the operator identities (5) to obtain
$$[m_Nx_j,\mathcal{H}]=[X_j-m_QX_{Q,j},\mathcal{H}]=i(P_j-P_{Q,j})=ip_j,$$
(6)
with the term proportional to $`\stackrel{~}{H}`$ dropped, as it is suppressed by order $`N_c^{-1}`$ relative to the leading order. So one has
$$[x_k,[x_j,\mathcal{H}]]=-\delta _{jk}/m_N\sim \mathcal{O}(N_c^{-1})\left(1+\mathcal{O}(N_c^{-1})\right).$$
(7)
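Explicitly, this is a one-line consequence of Eq. (6) (our own check): with $`[x_j,\mathcal{H}]=ip_j/m_N`$ up to $`1/N_c`$ corrections and the canonical commutator $`[x_k,p_j]=i\delta _{kj}`$,

$$[x_k,[x_j,\mathcal{H}]]=\frac{i}{m_N}[x_k,p_j]=-\frac{\delta _{jk}}{m_N}.$$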
On the other hand, the double commutator $`[p_k,[p_j,\mathcal{H}]]=[p_k,[p_j,\stackrel{~}{H}]]`$ measures the second order energy change when the heavy quark is spatially moved with respect to the brown muck. Since $`\stackrel{~}{H}`$ is of order $`N_c^0`$, the double commutator is generically also of the same order. (For later use, we will mention in passing that, by the same logic, all multiple commutators like $`[p_i,[p_j,\ldots [p_k,\mathcal{H}]\ldots ]]`$ are also of order $`N_c^0`$.) We will define
$$\widehat{\kappa }=-[p_k,[p_j,\mathcal{H}]]\sim \mathcal{O}(N_c^0)\left(1+\mathcal{O}(N_c^{-1})\right),\qquad \kappa =\langle G|\widehat{\kappa }|G\rangle $$
(8)
where $`|G\rangle `$ is the ground state of the QCD Hamiltonian $`\mathcal{H}`$.
Now let us take the heavy quark limit, and without loss of generality, set the constants of motion $`X_j=X_{Q,j}=0`$, so that the heavy baryon is sitting at the origin, and $`x_j`$ becomes the position of the center of the brown muck relative to the heavy quark, which is the center of mass of the heavy baryon. Then note that both Eqs. (7) and (8) will still hold if the QCD Hamiltonian $`\mathcal{H}`$ in the double commutators is replaced by $`H`$, the Hamiltonian of a simple harmonic oscillator.
$$H=\sum_{j=1}^{3}\left[\frac{(p_j)^2}{2m_N}+\frac{\kappa (x_j)^2}{2}-\frac{\omega }{2}\right]=\omega \sum_{j=1}^{3}a_j^{\dagger }a_j,\qquad a_j=\sqrt{\frac{m_N\omega }{2}}\,x_j+i\sqrt{\frac{1}{2m_N\omega }}\,p_j,$$
(9)
where $`a_j^{\dagger }`$ is the Hermitian conjugate of $`a_j`$ and $`\omega =\sqrt{\kappa /m_N}\sim \mathcal{O}(N_c^{-1/2})`$. The contracted U(4) symmetry mentioned above is precisely the spectrum generating algebra of $`H`$ and becomes an emergent symmetry as $`\omega \to 0`$ in the large $`N_c`$ limit. On the other hand, to demonstrate that this contracted U(4) is a symmetry of QCD, one needs to show that the generators of the contracted U(4) commute with the QCD Hamiltonian $`\mathcal{H}`$, or equivalently, show that $`\mathcal{H}=H+H^{\prime }`$, where $`H^{\prime }`$ commutes with $`a_j`$ and $`a_j^{\dagger }`$ in the large $`N_c`$ limit, i.e., $`[a_j,H^{\prime }]=[a_j^{\dagger },H^{\prime }]=0`$.
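These combinations are canonically normalized; a short check (our own algebra), using $`[x_j,p_k]=i\delta _{jk}`$, gives

$$[a_j,a_k^{\dagger }]=-\frac{i}{2}[x_j,p_k]+\frac{i}{2}[p_j,x_k]=\frac{\delta _{jk}}{2}+\frac{\delta _{jk}}{2}=\delta _{jk}.$$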
Our strategy is to study all possible double and triple commutators of $`\mathcal{H}`$ with $`a_j`$ and $`a_j^{\dagger }`$. It is straightforward to show that the vanishing of the triple commutators $`[a_i,[a_j,[a_k,\mathcal{H}]]]`$ and $`[a_i^{\dagger },[a_j,[a_k,\mathcal{H}]]]`$ implies that $`[a_k,\mathcal{H}]=Ca_k+Da_k^{\dagger }`$, where both $`C`$ and $`D`$ commute with both $`a_k`$ and $`a_k^{\dagger }`$. (A possible constant term is ruled out by parity.) But then $`\mathcal{H}`$ is at most a bilinear in $`a_k`$ and $`a_k^{\dagger }`$:
$$\mathcal{H}=\sum_{k=1}^{3}\left[Ca_k^{\dagger }a_k+D\left(a_ka_k+a_k^{\dagger }a_k^{\dagger }\right)\right]+H^{\prime },$$
(10)
where $`H^{\prime }`$ commutes with all the $`a_k`$ and $`a_k^{\dagger }`$. Moreover, the forms of the operators $`C`$ and $`D`$ can be fixed by the double commutators $`[a_j^{\dagger },[a_k,\mathcal{H}]]`$ and $`[a_j,[a_k,\mathcal{H}]]`$, which are proportional to $`C\delta _{jk}`$ and $`D\delta _{jk}`$ respectively. Hence if $`C=\omega `$ and $`D=0`$, $`\mathcal{H}=H+H^{\prime }`$ with both terms invariant under the contracted U(4) symmetry, completing the argument that this is a symmetry of QCD.
The triple commutators $`[a_i,[a_j,[a_k,\mathcal{H}]]]`$ and $`[a_i^{\dagger },[a_j,[a_k,\mathcal{H}]]]`$ are both linear combinations (with coefficients of order $`\mathcal{O}(N_c^0)`$) of these four terms:
$$T^{(0)}=(m_N\omega )^{-3/2}[p_i,[p_j,[p_k,\mathcal{H}]]],\qquad T^{(1)}=(m_N\omega )^{-1/2}[p_i,[p_j,[x_k,\mathcal{H}]]],$$
(11)
$$T^{(2)}=(m_N\omega )^{1/2}[p_i,[x_j,[x_k,\mathcal{H}]]],\qquad T^{(3)}=(m_N\omega )^{3/2}[x_i,[x_j,[x_k,\mathcal{H}]]].$$
(12)
As mentioned before, the triple $`p`$ commutator is at most of order $`N_c^0`$, and hence $`T^{(0)}\sim \mathcal{O}(N_c^{-3/4})`$. All of the other three triple commutators are also small, as $`[x_k,\mathcal{H}]=ip_k/m_N+\mathcal{O}(N_c^{-2})`$, and the first term does not contribute to the triple commutators. So $`T^{(1)}\sim N_c^{-9/4}`$, $`T^{(2)}\sim N_c^{-7/4}`$ and $`T^{(3)}\sim N_c^{-5/4}`$. All four terms are smaller than $`\omega \sim N_c^{-1/2}`$ in the large $`N_c`$ limit, and hence so are the triple commutators $`[a_i,[a_j,[a_k,\mathcal{H}]]]`$ and $`[a_i^{\dagger },[a_j,[a_k,\mathcal{H}]]]`$. By the strategy outlined above, the vanishing of these triple commutators in the large $`N_c`$ limit implies that $`\mathcal{H}`$ is of the form in Eq. (10).
Lastly, we want to show that $`C=\omega `$ and $`D=0`$, which is equivalent to showing that $`[x_j,[x_j,]]=1/m_N`$ and $`[p_j,[p_j,]]=\kappa `$. (The index $`j`$ is not summed over.) While the former is true from Eq. (7), the latter is not: from Eq. (8) the double $`p`$ commutator is $`\widehat{\kappa }`$, which is in general not identical with its ground state expectation value $`\kappa `$. However, it is true for states in the ground state band, which is the subspace spanned by states of the form $`(a_x^{})^{n_x}(a_y^{})^{n_y}(a_z^{})^{n_z}|G`$. So we conclude that, in the ground state band, $``$ does have the form stated in Eq. (4), and hence is invariant under the contracted U(4) group in the large $`N_c`$ limit. Note that this symmetry, like the familiar large $`N_c`$ spin-flavor symmetry , only applies to a particular subspace of the theory โ in this case the ground state band.
So we have demonstrated that this contracted U(4) symmetry is not only a symmetry of the bound state picture, but indeed a symmetry of QCD. Near the combined heavy quark and large $`N_c`$ limit, there exists a band of low-lying heavy baryons, labeled by $`n_x`$, $`n_y`$ and $`n_z`$, the number of excitation quanta in the $`x`$, $`y`$ and $`z`$ directions, and the excitation energies are $`(n_x+n_y+n_z)\omega `$. As $`N_c\mathrm{}`$, $`\omega 0`$ and the whole band become degenerate.
Such a symmetry has interesting phenomenological implications. For example, consider light quark form factors of the form $`J_{\mathrm{}}=\overline{q}\mathrm{\Gamma }q`$, where $`\mathrm{\Gamma }`$ is an arbitrary combination of the gamma matrices, with momentum transfer of order $`N_c\mathrm{\Lambda }_{QCD}`$. (In general the $`\overline{q}`$ and $`q`$ may have different flavors.) Since $`J_{\mathrm{}}`$ is a single quark operator while the U(4) generators are collective coordinates involving $`N_c`$ quarks, it follows that the commutators like $`[a_j,J_{\mathrm{}}]`$ are of order $`๐ช(N_c^1)`$ and vanish as $`N_c\mathrm{}`$. As a result, the light quark form factor is diagonal in the large $`N_c`$ limit.
$$\mathrm{\Lambda }_Q^{(n_x^{},n_y^{},n_z^{})}(p^{})|J_{\mathrm{}}|\mathrm{\Lambda }_Q^{(n_x,n_y,n_z)}(p)=\delta _{n_xn_x^{}}\delta _{n_yn_y^{}}\delta _{n_zn_z^{}}f(q^2),q^2=(pp^{})^2N_c^0\mathrm{\Lambda }_{QCD},$$
(13)
where $`F(q^2)`$ is an order $`N_c`$ form factor, which is a function of $`q^2`$, the square of the momentum transfer. Consequently, $`g_{\pi \mathrm{\Lambda }_Q\mathrm{\Lambda }_Q}F(q^2)/f_\pi N_c^{1/2}`$, as the pion decay constant $`f_\pi N_c^{1/2}`$. On the other hand, $`g_{\pi \mathrm{\Lambda }_Q\mathrm{\Lambda }_Q^{}}`$ is suppressed in the large $`N_c`$ limit, a result which has been discussed in in the context of the bound state picture, but here presented as an implication of the contracted U(4) group theory.
Like the light quark currents, heavy quark currents $`J_h=\overline{Q}^{}\mathrm{\Gamma }Q`$ is also invariant under the contracted U(4) symmetry. Naively, one may conclude that the Isgur-Wise form factors, which are the matrix elements of such heavy quark currents, also only connect initial and final states with the same $`(n_x,n_y,n_z)`$. Such conclusions are erroneous, however, as the final state is boosted relative to the initial state, and hence
$$\eta (w)\mathrm{\Lambda }_Q^{}^{(n_x^{},n_y^{},n_z^{})}(v^{})|J_h|\mathrm{\Lambda }_Q^{(n_x,n_y,n_z)}(v)=\mathrm{\Lambda }_Q^{}^{(n_x^{},n_y^{},n_z^{})}(v)|B_{vv^{}}^{}J_h|\mathrm{\Lambda }_Q^{(n_x,n_y,n_z)}(v),$$
(14)
where $`w`$ is the scalar product of the four-velocities $`v`$ and $`v^{}`$, which is related to three-velocities $`v_j`$ and $`v_j^{}`$ by $`w=1+|v_jv_j^{}|^2/2`$ in the non-relativistic limit. The boost operator $`B_{vv^{}}=\mathrm{exp}\left(iX_j(vv^{})_j\right)`$ boosts the heavy baryon from velocity $`v`$ to $`v^{}`$. In the large $`N_c`$ limit, $`X_j=m_Q^{}X_{Q}^{}{}_{j}{}^{}+m_Nx_j`$, where the first term commutes with $`a_j`$ and $`a_j^{}`$ but the second term does not. As a result, $`B_{vv^{}}`$ does not commute with $`a_j`$ and $`a_j^{}`$, and the Isgur-Wise form factors between states with different $`(n_x,n_y,n_z)`$ do not vanish:
$$\eta (w)=n_x^{},n_y^{},n_z|\mathrm{exp}\left(im_Nx_j(vv^{})_j\right)|n_x,n_y,n_z,$$
(15)
where $`|n_x,n_y,n_z`$ is the simple harmonic eigenstates, and $`x_j=(a_j+a_j^{})/\sqrt{2m_N\omega }`$. This is simply a group theoretical expression which only depends on two parameters: $`m_N`$ and $`\omega `$, which can be fixed by measuring the excitation energy of the first excited state to be around 330 MeV. All Isgur-Wise form factors (or more exactly, all their derivatives at the point of zero recoil $`w=1`$) between different initial and final heavy baryon states can be expressed as calculable functions of $`m_N`$ and $`\omega `$. For example, at the point of zero recoil ($`v=v^{}`$ and $`w=1`$), the boost operator reduces to an identity operator and the Isgur-Wise form factors are non-zero if and only if $`(n_x,n_y,n_z)=(n_x^{},n_y^{},n_z^{})`$; i.e., the ground state $`\mathrm{\Lambda }_Q`$ can only decay into a ground state $`\mathrm{\Lambda }_Q^{}`$, a well-known prediction of heavy quark symmetry . When the velocity transfer is non-zero but small, $`w=1+ฯต`$, the ground state $`\mathrm{\Lambda }_Q`$ can decay into excited $`\mathrm{\Lambda }_Q^{}`$. Since $`x_j`$ is linear in $`a_j^{}`$, however, at order $`ฯต`$ the only non-vanishing excited state Isgur-Wise form factor is that to the first excited state, and it saturates both the Bjorken and Voloshin sum rules . This analysis of the Isgur-Wise form factors recasts the studies of Refs. in a model independent way.
In the above, we have ignored the spins and flavors of the heavy baryons. The inclusion of these quantum numbers does not change the above analysis. In particular, both the spin-flavor symmetry for the brown muck and the flavor symmetry for the heavy quark, being generated by one-quark operators, commute with our contracted U(4) in the large $`N_c`$ limit. These extra excitation modes live in the $`H^{}`$ term in Eq. (4). For example, with two light flavors, $`H^{}=\sigma I^2`$, with $`\sigma N_c^1`$ for states with $`I+s_{\mathrm{}}=0`$, where $`I`$ is isospin and $`s_{\mathrm{}}`$ is the spin of brown muck (excluding any possible orbital angular momentum) . So the eigenstates of $`H^{}`$ is a tower of states labeled by $`I=0,1,\mathrm{}`$, where the states with $`I=0`$ and 1 are the $`\mathrm{\Lambda }_Q`$ and $`\mathrm{\Sigma }_Q^{()}`$, respectively. Since $`\sigma \omega `$ in the large $`N_c`$ limit, inclusion of such a $`H^{}`$ splits each simple harmonic eigenstate of $`H`$ into a whole tower of $`I=s_{\mathrm{}}`$ states.
In conclusion, we have demonstrated that in the combined heavy quark and large $`N_c`$ limit, there exists a new emergent symmetry which connects orbitally excited states to the ground states. While such a symmetry has interesting phenomenological implications, its utility for quantitative predictions may be limited by potentially large corrections which are typically of order $`N_c^{1/2}`$. For example, the effect of an anharmonic term $`ax^4`$ ($`aN_c^0`$) leads to order $`N_c^{1/2}`$ mixings between the simple harmonic oscillator states. However, even though the corrections may be large, the existence of such a symmetry provides a useful organization principle for low energy properties of heavy baryons, and may provide qualitative or semi-quantitative predictions.
This work is supported by the U.S. Department of Energy grant DE-FG02-93ER-40762. |
no-problem/9912/math9912173.html | ar5iv | text | # Introduction
## Introduction
In Kauffman defines an extension of classical knot diagrams to virtual knot diagrams, motivated by Gauss codes on the one hand and knots in thickened surfaces on the other hand. Several classical knot invariants can be generalized to the virtual theory without much effort, e.g., the knot group and derived invariants such as the Alexander polynomial, the bracket and Jones polynomials, and Vassiliev invariants (which can be introduced in different ways, see and ).
The present paper deals with a polynomial invariant that is derived from an invariant of links in thickened surfaces introduced by Jaeger, Kauffman, and Saleur in . The determinant formulation of the polynomial immediately generalizes to virtual link diagrams. It is a Laurent polynomial in two variables with integral coefficients that vanishes on the class of classical link diagrams but gives non-trivial information for diagrams that represent non-classical virtual links. Especially, examples can be given that the invariant is sensitive with respect to changes of orientation of a virtual knot. Furthermore, the polynomial fulfills a Conway-type skein relation in one variable and thus it is denoted by the term Conway polynomial.
In the same way as in the classical case, the one-variable Alexander polynomial of a virtual link can be derived from the virtual link group, but the skein-relation for (a normalized version of) the classical Alexander polynomial cannot be extended to the class of virtual links. Therefore, this Alexander polynomial is different from the Conway polynomial mentioned above in a non-trivial way โ in contrast to the classical case (and certain generalizations to links 3-manifolds, see for example , Theorem 5.2.11).
This paper is organized as follows. After, in Section 1, a short introduction into the field has been given, the determinant formulation of the Conway polynomial for virtual links is described in Section 2. Some properties of the polynomial are deduced, especially, that the invariant fulfills a Conway-type skein relation, and several example calculations are given. Then, in Section 3, the Alexander invariants derived from the link group, namely, Alexander matrix, Alexander ideal, and Alexander polynomial, are defined and it is shown that the Alexander polynomial does not fulfill any linear skein relation. Finally, in Section 4, general problems in extending certain invariants of classical links to the virtual category are described and the direction of further investigations is indicated.
## 1 Virtual Knots and Links
In classical knot theory, knots and links in 3-dimensional space are examined. As a main tool, projections of such links to an appropriate plane are considered, namely, the so-called link diagrams (see, for example, , , , , , ). The idea of virtual knot theory is to consider link diagrams where an additional crossing type is allowed. Thus, for an oriented diagram, there are three types of crossings: the classical positive or negative crossings and the virtual crossings (see Fig. 1).
For a motivation to introduce virtual crossings resulting from examing knots in thickened surfaces see .
Technically, a virtual link diagram is an oriented 4-valent planar graph embedded in the plane with appropriate orientations of edges and additional crossing information at each vertex as depicted in Fig. 1. Denote the set of virtual link diagrams by $`๐ฑ๐`$. Two diagrams $`D,D^{}๐ฑ๐`$ are called equivalent if one can be transformed into the other by a finite sequence of extended Reidemeister moves (see Fig. 2)
combined with orientation preserving homeomorphisms of the plane to itself. A virtual link is an equivalence class of virtual link diagrams. This gives a purely combinatorial definition of virtual links and, indeed, it is not allowed to perform all the modifications of a virtual link diagram that are inspired by moving objects in 3-space as in classical knot theory. For example, the virtual knots depicted in Fig. 3
are not equivalent, which can be shown by calculating their Conway polynomials defined in Section 2, though they could easily be seen to be equivalent by performing a โflypeโ if there were arbitrary classical crossings instead of the two virtual ones.
In the Jones polynomial is extended to virtual links by making use of the connection with the bracket polynomial. The latter one can be defined for virtual link diagrams in the same way as for classical link diagrams using the bracket skein relation, i.e., every classical crossing is cut open in the two possible ways which results in diagrams having only virtual crossings and therefore being equivalent to crossing-free diagrams.
Extending other well-known link polynomials that can be defined via skein relations, such as HOMFLY and Kauffman polynomials, is much more difficult because it is in general not possible to get a diagram of a trivial link from an arbitrary virtual link diagram by changing some of the diagramโs classical crossings. Therefore, it is a priori not clear which basis can be chosen for the skein module (see ) corresponding to a given skein relation. This problem also arises when defining the Conway polynomial $`_D(z)[z]`$ for a diagram $`D`$ via the skein relation
$$_{D_+}(z)_D_{}(z)=z_{D_0}(z)$$
where $`(D_+,D_{},D_0)`$ is a skein triple, i.e., the three diagrams are identical except in a small disk where they differ as depicted in Fig. 4.
In the following two sections two different extensions of the classical one-variable Alexander-Conway polynomial are examined. The first one satisfies a Conway-type skein relation in one variable and thus is called Conway polynomial, but it is introduced without using skein theory. The second one is the Alexander polynomial derived from the virtual link group.
## 2 A Polynomial Invariant of Virtual Links
A general method to define invariants of virtual links that very often does work is to apply the definition of an invariant for classical link diagrams to virtual link diagrams by ignoring the virtual crossings. This method is used in what follows to define an invariant for virtual links which is essentialy identical to an invariant for links in thickened surfaces that has been introduced in .
Let $`D`$ be a virtual link diagram with $`n1`$ classical crossings $`c_1,\mathrm{},c_n`$. Define
$$M_+:=\left(\begin{array}{cc}1x& y\\ xy^1& 0\end{array}\right)\text{and}M_{}:=\left(\begin{array}{cc}0& x^1y\\ y^1& 1x^1\end{array}\right).$$
For $`i=1,\mathrm{},n`$, let $`M_i:=M_+`$ if $`c_i`$ is positive and let $`M_i:=M_{}`$ otherwise. Define the $`2n\times 2n`$ matrix $`M`$ as a block matrix by $`M:=diag(M_1,\mathrm{},M_n)`$.
Furthermore, consider the graph belonging to the virtual link diagram where the virtual crossings are ignored, i.e., the graph consists of $`n`$ vertices $`v_1,\mathrm{},v_n`$ corresponding to the classical crossings and $`2n`$ edges corresponding to the arcs connecting two classical crossings (the edges possibly intersect in virtual crossings). Subdivide each edge into two half-edges and label the four half-edges belonging to the vertex $`v_i`$ by $`i_l^{}`$, $`i_r^{}`$, $`i_r^+`$, $`i_l^+`$ as depicted in Fig. 5.
A permutation of the set $`\{1,\mathrm{},n\}\times \{l,r\}`$ is given by the following assignment: $`(i,a)(j,b)`$ if the half-edges $`i_a^+`$ and $`j_b^{}`$ belong to the same edge of the virtual diagramโs graph. Let $`P`$ denote the corresponding $`2n\times 2n`$ permutation matrix where rows and columns are enumerated $`(1,l)`$, $`(1,r)`$, $`(2,l)`$, $`(2,r)`$, โฆ, $`(n,l)`$, $`(n,r)`$.
Finally, define $`Z_D(x,y):=(1)^{w(D)}det(MP)`$, where $`w(D)`$ denotes the writhe of $`D`$, i.e, the number of positive crossings minus the number of negative crossings in $`D`$. (If $`D`$ has no classical crossings then $`Z_D(x,y)`$ can be defined by $`Z_D(x,y):=0`$, see Theorem 3.)
###### Theorem 1
$`Z:๐ฑ๐[x^{\pm 1},y^{\pm 1}]`$ is an invariant of virtual links up to multiplication by powers of $`x^{\pm 1}`$.
Proof: The independence of the definition with respect to the ordering of the classical crossings and the invariance of $`det(MP)`$ under Reidemeister moves of type II and III follows exactly as in , the only difference being an exchange of the variable $`x`$ for $`x`$. The behaviour under Reidemeister moves of type I is depicted in Fig. 6.
Since the changes of sign corresponding to Reidemeister moves of type I are compensated by the factor $`(1)^{w(D)}`$ in $`Z_D(x,y)`$ and since the definition of $`M`$ and $`P`$ does not depend on the virtual crossings of the diagram, the statement on $`Z`$ follows immediately.
$`\mathrm{}`$
Remark A normalization of the polynomial $`Z_D(x,y)`$ using rotation numbers, as done in , is not possible in the case of virtual link diagrams because of Reidemeister move Iโ. But due to the change of the variable โ$`x`$โ to โ$`x`$โ in comparison with , at least the sign of the polynomial is determined.
Define the normalized polynomial $`\stackrel{~}{Z}_D(x,y)`$ as follows. If $`Z_D(x,y)`$ is a non-vanishing polynomial and $`N`$ is the lowest exponent in the variable $`x`$ then define $`\stackrel{~}{Z}_D(x,y):=x^NZ_D(x,y)`$. Otherwise let $`\stackrel{~}{Z}_D(x,y):=Z_D(x,y)=0`$.
###### Corollary 2
$`\stackrel{~}{Z}:๐ฑ๐[x,y^{\pm 1}]`$ is an invariant of virtual links.
$`\mathrm{}`$
Example Let $`D`$ be the virtual knot diagram depicted in Fig. 7.
The diagram arises from a diagram of the right-handed trefoil where a positive crossing has been replaced by a virtual one. The corresponding polynomial $`Z_D(x,y)`$ can be calculated by using the definition:
$$(1)^2det\left(\begin{array}{cccc}1x& y& 0& 1\\ xy^1& 0& 1& 0\\ 1& 0& 1x& y\\ 0& 1& xy^1& 0\end{array}\right)=x^2+x^2y^1+xyxy^1y1$$
The normalized polynomial $`\stackrel{~}{Z}_D(x,y)`$ is identical to the polynomial $`Z_D(x,y)`$ in this case.
###### Theorem 3
Let $`D`$, $`D_1`$, $`D_2`$ be virtual link diagrams and let $`D_1D_2`$ denote the disconnected sum of the diagrams $`D_1`$ and $`D_2`$. Then the following hold.
1. $`Z_D(x,y)=\stackrel{~}{Z}_D(x,y)=0`$ if $`D`$ has no virtual crossings
2. $`Z_{D_1D_2}(x,y)=Z_{D_1}(x,y)Z_{D_2}(x,y)`$, $`\stackrel{~}{Z}_{D_1D_2}(x,y)=\stackrel{~}{Z}_{D_1}(x,y)\stackrel{~}{Z}_{D_2}(x,y)`$
Proof: Part a) follows as in . Part b) is an immediate consequence of the definition of the matrix $`MP`$.
$`\mathrm{}`$
Remark For a connected sum $`D_1\mathrm{\#}D_2`$ of virtual link diagrams $`D_1`$ and $`D_2`$, a formula of the form
$$Z_{D_1\mathrm{\#}D_2}(x,y)=cZ_{D_1}(x,y)Z_{D_2}(x,y)$$
with a constant $`c`$ does not hold in general, in contrast to what could be expected at first glance. For example, let $`D_1`$ be a diagram with non-vanishing $`Z`$โpolynomial and let $`D_2`$ be a diagram of the trivial knot. Then the equation above obviously gives a contradiction.
Example The opposite of Theorem 3 a) does not hold in general. A counter-example is given in Fig. 8.
The diagram represents a non-classical link since reversing the orientation of one of the two components yields a virtual link diagram with non-trivial $`Z`$โpolynomial, whereas the $`Z`$โpolynomial of the diagram with the original choice of orientations vanishes.
###### Theorem 4
Let ($`D_+`$, $`D_{}`$, $`D_0`$) be a skein triple of virtual link diagrams. Then the following skein relation holds:
$$x^{\frac{1}{2}}Z_{D_+}(x,y)x^{\frac{1}{2}}Z_D_{}(x,y)=(x^{\frac{1}{2}}x^{\frac{1}{2}})Z_{D_0}(x,y)$$
Proof: The formula can easily be checked by verifying the corresponding relation for every state $`<v|f>`$ in the state sum model used in to define the โpartition functionโ $`Z_D(x,y)`$.
$`\mathrm{}`$
Since a skein relation as in Theorem 4 is fulfilled, in the following $`Z_D(x,y)`$ and $`\stackrel{~}{Z}_D(x,y)`$ will be called Conway polynomial and normalized Conway polynomial, respectively. A more familiar version of the Conway skein relation can be achieved by setting $`x:=t^2`$ and defining $`Z_D^{}(t,y):=t^{w(D)}Z_D(t,y)`$. Then, considering $`w(D_0)=w(D_+)1=w(D_{})+1`$, Theorem 4 immediately gives the following.
###### Corollary 5
Let ($`D_+`$, $`D_{}`$, $`D_0`$) be a skein triple of virtual link diagrams. Then the following skein relation holds:
$$Z_{D_+}^{}(t,y)Z_D_{}^{}(t,y)=(t^1t)Z_{D_0}^{}(t,y)$$
$`\mathrm{}`$
Remark Applying the skein relation of Theorem 4 is, of course, very helpful for calculating the Conway polynomial. For example, the virtual link depicted in Fig. 8 can immediately be seen to have vanishing Conway polynomial since changing an arbitrary crossing in the diagram yields the diagram of a Hopf link and cutting open the same crossing gives a diagram of the trivial knot. For the diagram that arises from changing the orientation of one link component, the corresponding skein tree has a branch that ends in a diagram with two virtual and two classical crossings which cannot be simplified by any changes of the classical crossings. Therefore, latter diagram has to be calculated by using the definition of the Conway polynomial.
###### Theorem 6
Let $`D`$ be a virtual link diagram. Then:
1. $`Z_D(x,x)=0`$
2. $`Z_D(x,1)=0`$
3. $`Z_D(1,y)`$ does not depend on the over-under information of the diagramโs classical crossings.
Proof: Part a) and b) follow from the fact that summing up the columns (rows) of the determinant belonging to $`Z_D(x,x)`$ ($`Z_D(x,1)`$) gives the trivial column (row) vector. Part c) is an immediate consequence of setting $`x=1`$ in Theorem 4 (or in the definition).
$`\mathrm{}`$
Remark By Theorem 6, $`Z_D(1,y)[y^{\pm 1}]`$ is an invariant of virtual links that is invariant with respect to changes of classical crossings, too. Thus $`Z_D(1,y)`$ can give information to distinguish those โbasic linksโ whose Conway polynomials must be calculated by the determinant formulation instead of applying the skein relation, i.e., generating elements of the virtual skein module related to the Conway skein relation. For example, the value of $`Z_D(1,y)`$ for the non-trivial diagram at the leaf of the skein tree that has been mentioned in the previous remark is non-zero. Therefore, the corresponding generating element is different from the trivial knot.
For a virtual link diagram $`D`$, let $`\overline{D}`$ denote the mirror image of $`D`$, i.e., the diagram that arises from $`D`$ by changing the over-under information of every classical crossing. If $`D`$ and $`\overline{D}`$ are equivalent then $`D`$ is called amphicheiral and otherwise chiral. Furthermore, let $`D^{}`$ be the inverse of $`D`$, i.e., the diagram with inverse orientation for every component of $`D`$. If $`D`$ and $`D^{}`$ are equivalent then $`D`$ is called invertible and otherwise non-invertible.
In the case of classical links, it is known that Kauffman and HOMFLY polynomials, and therefore the Conway polynomial too, are insensitive with respect to invertibility of a knot or link (indeed, this is true for any invariant derived from quantum groups, see , ). Surprisingly, the Conway polynomial of a virtual knot can be different from the polynomial of its inverse.
Example The virtual knot with diagram $`D`$ that is depicted in Fig. 9
is chiral as well as non-invertible. Calculating normalized Conway polynomials of $`D`$, $`\overline{D}`$, $`D^{}`$, $`\overline{D}^{}`$ gives the following result:
$$\stackrel{~}{Z}_D(x,y)=(x1)^2(y+1)(x^2y^21),\stackrel{~}{Z}_D^{}(x,y)=y\stackrel{~}{Z}_D(x,y)$$
$$\stackrel{~}{Z}_{\overline{D}}(x,y)=(x1)^2(y^21)(xy^1+1),\stackrel{~}{Z}_{\overline{D}^{}}(x,y)=y^1\stackrel{~}{Z}_D(x,y)$$
Thus the four diagrams under consideration represent pairwise different virtual knots.
## 3 The Alexander Polynomial Derived from the Link Group
In the same way as for classical links, the (virtual) link group can be defined via the Wirtinger presentation of a virtual link diagram, i.e., a group generator is assigned to each arc of the diagram and a group relation can be read off at each classical crossing corresponding to the rules shown in Fig. 10
(for details see ). The link group is an invariant of virtual links.
The above definition of the link group is a purely combinatorial one and it has, in general, nothing to do with the complement of a link in 3-space as in classical knot theory. Indeed, it can be shown that there exists a virtual knot group which is not the fundamental group of any 3-manifold. Related work on virtual knot groups can be found in and .
As explained in , the Alexander matrix corresponding to a group presentation can be calculated via a differential calculus. For a presentation with $`n`$ generators and $`m`$ relations, it is a $`m\times n`$ matrix with entries from the ring $`[t^{\pm 1}]`$. The (first) Alexander ideal $`(D)`$ of a diagram $`D`$ is generated by the $`(n1)\times (n1)`$ minors of the Alexander matrix for the corresponding link group and the (first) Alexander polynomial $`\mathrm{\Delta }_D(t)[t^{\pm 1}]`$ is the greatest common divisor of $`(D)`$. The Alexander ideal is an invariant of virtual links and the Alexander polynomial is an invariant up to sign and up to multiplication by powers of $`t^{\pm 1}`$.
Remark In Alexander polynomials derived from an extended Alexander group of a virtual link diagram are considered. Indeed, the Conway polynomial $`Z_D`$ is related to the 0th virtual Alexander polynomial of , see Remark 4.2 therein.
In contrast to the classical Alexander polynomial, the Alexander polynomial for virtual links does not fulfill any linear skein relation as stated in the next theorem. Therefore it is crucially different from the Conway polynomial defined in Section 2.
###### Theorem 7
For any normalization $`A_D(t)`$ of the polynomial $`\mathrm{\Delta }_D(t)`$, i.e., $`A_D(t)=\epsilon _Dt^{n_D}\mathrm{\Delta }_D(t)`$ with some $`\epsilon _D\{1,1\}`$ and $`n_D`$, the equation
$$p_1(t)A_{D_+}(t)+p_2(t)A_D_{}(t)+p_3(t)A_{D_0}(t)=0\text{with }p_1(t),p_2(t),p_3(t)[t^{\pm 1}]$$
has only the trivial solution $`p_1(t)=p_2(t)=p_3(t)=0`$.
Proof: Assume the above skein relation has a non-trivial solution. Consider the skein triples $`(D_+,D_{},D_0)`$ where $`D`$ is a classical knot diagram with one crossing and $`(D_+^{},D_{}^{},D_0^{})`$ where $`D^{}`$ is a standard diagram, with arbitrary orientations of the components, of the Hopf link (the latter triple corresponding to an arbitrary of the diagramโs two crossings). Then the related Alexander polynomials have values, up to normalization, as follows.
$$\mathrm{\Delta }_{D_+}(t)=\mathrm{\Delta }_D_{}(t)=\mathrm{\Delta }_{D_0^{}}(t)=1,\mathrm{\Delta }_{D_0}(t)=\mathrm{\Delta }_D_{}^{}(t)=0,\mathrm{\Delta }_{D_+^{}}(t)=t1$$
Inserting these values, each multiplied by a factor $`\epsilon t^n`$, into the skein relation immediately shows that, up to normalization, $`p_2(t)=p_1(t)`$ and $`p_3(t)=(t1)p_1(t)`$. Thus the skein relation is equivalent to the classical Alexander skein relation:
$$A_{D_+}(t)A_D_{}(t)=(t1)A_{D_0}(t)$$
Finally, let $`D^{\prime \prime }`$ be the virtual link diagram arising from $`D^{}`$ by changing an arbitrary classical crossing to a virtual crossing. Then $`\mathrm{\Delta }_{D_+^{\prime \prime }}(t)=\mathrm{\Delta }_{D_{}^{\prime \prime }}(t)=t1`$ and $`\mathrm{\Delta }_{D_0^{\prime \prime }}(t)=1`$. Inserting these values, each multiplied by a factor $`\epsilon t^n`$, into the Alexander skein relation yields a contradiction.
$`\mathrm{}`$
Remark The proof of Theorem 7 shows that the classical Alexander skein relation is, up to a factor, the unique one that holds for the Alexander polynomial of classical links, and this skein relation cannot be extended to the class of virtual links.
Example The virtual knot with diagram $`D`$ which is depicted in Fig. 9 cannot be distinguished from its inverse by Alexander polynomials since the Alexander ideals are identical:
$$(D)=(D^{})=(2,t^2+t+1)$$
Example
The virtual knot with trivial knot group and trivial Jones polynomial investigated in , see Fig. 11, has non-vanishing normalized Conway polynomial
$$(x1)(x^2y^2)(1+y^1)$$
and therefore it does not represent a classical knot. This is a result that could not be achieved with the means of .
## 4 Concluding Remarks
As mentioned in , the Alexander-Conway polynomial for links in thickened surfaces can be defined in the multivariable case as well and the construction analogously yields a multivariable Conway polynomial for virtual links. Also the multivariable Alexander polynomial can be derived from the virtual link group and it is an invariant that is different from the Conway polynomial as has been seen above for the one-variable case.
It is quite natural to consider generalizations of HOMFLY and Kauffman polynomials to virtual links next. Besides the definition of these polynomials via skein relations, there are several state models for them, see , , , , , , and also and citations therein. But, when trying to generalize any of these approaches, one meets with at least one of the following three obstacles. First, a virtual link diagram in general cannot be unknotted by classical crossing changes. Therefore it is necessary to find a basis for the virtual skein module corresponding to the HOMFLY and Kauffman skein relations, respectively, which is not yet known. Secondly, some models make use of the signed graph corresponding to a black-and-white colouring of the plane, but it is not clear how to handle virtual crossings in a generalized model. And finally, state models mostly rely on rotation numbers which are not invariant with respect to Reidemeister moves of type Iโ.
It should be mentioned that missing invariance under Reidemeister moves of type I and Iโ is a more serious problem when defining virtual link invariants than missing invariance under Reidemeister moves of type I when defining classical link invariants. For example, having in mind Theorem 7, it is not possible to derive a linear skein relation for the Alexander polynomial from the virtual link group. The proof that is given by Hartley for the classical case cannot be extended to virtual link diagrams because it intrinsically uses a normalization of the Alexander polynomial via rotation numbers during the proof. Observe that in classical knot theory a result by B. Trace assures that a regular isotopy invariant can always be made to an ambient isotopy invariant using writhe and rotation number of an oriented link diagram.
A more general class of invariants that may be generalized to virtual links are quantum link invariants. In it is described how to define such invariants via $`R`$-matrices and the abstract tensor approach, see also . Again, the invariance under Reidemeister moves of types I and Iโ causes difficulties and thus the virtual quantum link invariants under consideration are only invariant with respect to virtual regular isotopy, i.e., the two troublemaking moves are avoided.
The Conway polynomial for virtual links that has been introduced in the present paper also arises from a quantum link invariant (see ) but it is not difficult to get control over its behaviour with respect to Reidemeister moves of types I and Iโ. It is left to further investigations which (quantum link) invariants have similar properties and can be extended to full invariants of virtual links.
## Acknowledgement
The author would like to thank Saziye Bayram for finding an error in a calculation, and Dan Silver for informing him about his results. |
no-problem/9912/cond-mat9912256.html | ar5iv | text | # Josephson-plasma and vortex modes in layered superconductors
\[
## Abstract
The Josephson-plasma and vortex modes in layered superconductors have been studied theoretically for low magnetic fields parallel and perpendicular to layers. The two modes belong to the same lowest-frequency branch of collective-mode spectrum localized near vortices. This is the Josephson-plasma resonance if pancakes are strongly pinned and cannot move. Otherwise the lowest-frequency mode is a vortex mode governed by pancake pinning and the vortex mass. The latter is strongly enhanced by wandering of a vortex line around its average direction. The recently observed jump of the magnetoabsorption-resonance frequency at the vortex phase transition line is interpreted as a transition from the Josephson-plasma to the vortex mode.
\]
Magnetoabsorption microwave resonances observed in Bi compounds (see and references therein) have become one of the most important subjects for studying the interlayer Josephson coupling. The resonances were observed for magnetic fields normal and nearly parallel to superconducting layers, and in the absence of the field. A remarkable feature of the resonances was that at high magnetic fields $`H`$ the resonance frequency decreases roughly as $`1/\sqrt{H}`$ (the anticyclotronic behavior). Recently the magnetoabsorption resonances were observed in low perpendicular magnetic fields $`H`$ . In low fields the resonance frequency very weakly depends on $`H`$, but at a field about a few hundred G, the magnetoabsorption resonance frequency jumps to lower values with the anticyclotronic dependence, as observed in earlier experiments in higher fields.
The first attempt to explain these resonances for high perpendicular magnetic fields was done by Kopnin et al. who related the resonances to a vortex mode governed by surface pinning and the Magnus force. They were able to explain the anticyclotronic behavior, but the essential role of surface pinning and the Magnus force was not been confirmed by further experiments.
Later the majority of researchers came to conclusion that the magnetoabsorption resonances are related to the Josephson Plasma Resonances (JPR) (see and references therein) with the frequency
$$\omega _0^2=\omega _p^2\mathrm{cos}\phi _0(\stackrel{}{r}),$$
(1)
where $`\omega _p`$ is the Josephson-plasma frequency at zero magnetic field, $`\stackrel{}{r}`$ is the inplane coordinate, and $`\phi _0(\stackrel{}{r})`$ is the stationary gauge-invariant phase difference between layers, which is nonzero due to misalignment of pancakes in neighboring layers by thermal fluctuations and disorder.
However, this interpretation suffers from a number of inconsistencies (see discussion in Refs. ), and the original idea to relate the magnetoabsorption resonances with a vortex mode should not be ruled out, though properties of this vortex mode must be different from those assumed in Ref. . In Ref. the magnetoabsorption resonances in high parallel magnetic fields were interpreted as a vortex mode with the frequency determined by the balance of the pinning force on pancakes and the inertia force determined by the vortex mass. The Magnus force is not effective in the parallel geometry.
In the present work I investigate how a growing low magnetic field, either parallel or perpendicular to layers, changes the collective-mode spectrum in layered superconductors. The goal is to find correspondence of the JPR and the vortex mode to branches of this spectrum. In order to avoid a semantic ambiguity, I shall use the name JPR only for the mode with the frequency given by Eq. (1), as done in all recent experimental papers, though in old papers this name had not the same meaning (see below). The vortex mode is defined as a Goldstone mode which has a finite frequency only due to pinning. I shall show that the JPR mode and the vortex mode belong to the same branch of the spectrum which is the lowest-frequency oscillation mode localized near vortices. This mode becomes the JPR only if pancakes cannot move because of very strong pinning. If the pinning strength is finite, the lowest-frequency mode is a vortex mode governed by pinning and the vortex mass. The vortex mass is related with the electric energy in the interlayer spacing, and the misalignment of pancakes in the perpendicular field essentially increases the vortex mass. On the basis of the presented analysis I suggest that in the recent magnetoabsorption experiments in low perpendicular magnetic fields they observed a transition from the JPR to the vortex mode accompanied by a frequency jump.
The analysis will be done for a single long Josephson junction, but results of the analysis are relevant also for a layered superconductor after slight modifications (see below). The state of the junction is determined from the sine-Gordon equation for the phase:
$$\frac{1}{\lambda _J^2}\mathrm{sin}\phi ^2\phi =\frac{1}{c_s^2}\ddot{\phi },$$
(2)
where $`c_s=c\sqrt{s/2\epsilon \lambda }`$ is the Swihart velocity, $`s`$ is the barrier thickness, or the interlayer spacing, $`\lambda `$ is the London penetration depth for bulk superconductors forming the junction ($`\lambda s`$), $`\epsilon `$ is the dielectric permeability, $`\lambda _J^2=\mathrm{\Phi }_0^2/32\pi ^3\lambda E_J`$ is the Josephson length, $`\mathrm{\Phi }_0`$ is the magnetic-flux quantum, and $`E_J`$ is the Josephson coupling energy.
The collective modes correspond to small oscillations around the stationary solution $`\phi _0(x)`$: $`\phi (x,t)=\phi _0(x)+\phi ^{}(x,t)`$. The small phase $`\phi ^{}e^{i\omega t}`$ is determined by the equation (we shall omit the prime later on)
$$^2\phi \frac{1\mathrm{cos}\phi _0}{\lambda _J^2}\phi =\frac{\omega ^2\omega _p^2}{c_s^2}\phi .$$
(3)
In the absence of the magnetic field the ground state corresponds to $`\phi _0=0`$, and the solutions of Eq. (3) are plane waves $`\mathrm{exp}(i\stackrel{}{k}\stackrel{}{r}i\omega t)`$. The oscillation spectrum has a plasma edge, i.e., $`\omega =\sqrt{\omega _p^2+c_sk^2}>\omega _p`$, where $`\omega _p^2=e^2E_Js/\epsilon \mathrm{}^2=c_s^2/\lambda _J^2`$ is the Josephson plasma frequency.
If the magnetic field is strictly parallel to the junction plane, the phase $`\phi (x,t)`$ depends only on one spatial coordinate $`x`$. In the stationary case ($`\dot{\phi }=0`$) the sine-Gordon equation has exact periodic solutions $`\phi _0(x)`$ which correspond to chains of Josephson vortices and the period (intervortex distance) $`a`$ determines the magnetic field: $`H=\mathrm{\Phi }_0/2\lambda a`$. For such $`\phi _0(x)`$, Eq. (3) can be also solved exactly for any intervortex distance . However, we introduce small periodic $`\phi _0(x)`$ which is not a solution of the sine-Gordon equation and vanishes at distances more than $`\lambda _J/2`$ from vortex centers. This yields a small periodic attractive potential $`U=[1\mathrm{cos}\phi _0(x)]/\lambda _J^2`$ in Eq. (3). We assume that $`a\lambda _J`$ (low magnetic fields), and look for a periodic solution symmetric in the interval $`a/2<x<a/2`$. The coordinate $`x=0`$ corresponds to the center of the potential well. At $`|x|<\lambda _J/2`$ the perturbation theory yields $`\phi (x)1_0^x๐x_1_0^{x_1}๐x_2U(x_2)`$. At $`\lambda _J/2<|x|<a/2`$ $`\phi (x)\mathrm{cos}k(a/2x)`$. The lowest-frequency mode corresponds to the bound state with an imaginary $`k=ip`$. Continuity of the phase and its derivative at $`x\pm \lambda _J/2`$ requires that
$$p\mathrm{tanh}pa=\frac{1}{\lambda _J^2}_{a/2}^{a/2}๐x[1\mathrm{cos}\phi _0(x)].$$
(4)
At very low magnetic fields when $`pa1`$, the bound state corresponds to the oscillation frequency
$$\omega _0^2=\omega _p^2c_s^2p^2=\omega _p\left\{1\frac{a^2}{\lambda _J^2}\left[1\mathrm{cos}\phi _0(x)\right]^2\right\}.$$
(5)
But if the inequality $`pa1`$ holds, expansion in the left-hand side of Eq. (4) yields
$$p^2\frac{1}{a\lambda _J^2}_{a/2}^{a/2}๐x[1\mathrm{cos}\phi _0(x)]=\frac{1\mathrm{cos}\phi _0(x)}{\lambda _J^2},$$
(6)
and the squared mode frequency $`\omega _0^2`$ is given by Eq. (1).
Thus the JPR belongs to the lowest-frequency mode localized near the potential well. However, the potential is very weak, the size of the bound state $`1/p`$ (the localization length) is larger than the intervortex distance $`a`$, and therefore localization is not so pronounced.
Let us consider also the lowest-frequency oscillation which belongs to the continuum spectrum with a real $`k`$. The phase distribution for a continuum mode must have at least one node in the vicinity of the vortex, in order to satisfy the condition of orthogonality to the ground bound state. Thus roughly $`k=\pi /a`$, and the lowest frequency $`\omega _c`$ of the continuum spectrum (delocalized phase oscillation) exceeds the zero-field plasma edge $`\omega _p`$:
$$\omega _c^2=\omega _p^2+c_s^2k^2=\omega _p^2\left(1+\frac{\pi ^2\lambda _J^2}{a^2}\right).$$
(7)
The frequency of this mode grows at increasing the magnetic field. Note that in the old papers this mode with $`\omega _c>\omega _p`$, but not the localized mode with $`\omega _0<\omega _p`$, was called the plasma mode.
Our calculation has not revealed the Goldstone vortex mode, because our potential does not correspond to a real Josephson vortex with the phase $`\phi _0`$ which satisfies the stationary sine-Gordon equation. For real vortices the perturbation theory is invalid, but the exact theory yields that the lowest frequency $`\omega _0`$ vanishes. The phase in the bound state is proportional to the derivative of the stationary phase distribution $`d\phi _0(x)/dx`$.
So the procedure to tune the potential from zero to values of the order $`1/\lambda _J^2`$ goes through states which are not realized physically, since the vortex is a topological nonlinear excitation, and its topological charge cannot change continuously. However, this procedure clearly demonstrates genesis of the JPR and the vortex mode from the original spectrum of the spatially uniform state. The JPR and the vortex mode belong to the same spectrum branch related to a phase oscillation localized near the defect (vortex), but the JPR with the frequency Eq. (1) is possible only for an artificial weak potential. Thus in parallel fields the JPR mode doesnot exist at all .
In a layered superconductor a perpendicular field creates vortices which are chains of pancakes. If pancakes lie on an ideally straight line normal to layers, there is no phase difference across Josephson junctions. But because of disorder and thermal fluctuations, pancakes do not form an ideally straight line, and pancakes in neighboring layers are separated by random distances . For a single Josephson junction this means that the vortex line in two banks of the junction meets the junction plane in two different points with the distance $`r_w`$ between them, and a phase difference $`\phi _0(\stackrel{}{r})`$ across the junction appears. If $`r_w\lambda _J`$, the phase distribution corresponds to a Josephson string, which is a segment of a Josephson vortex of width $`\lambda _J`$ stretched between the points ($`x=0,y=0`$) and ($`x=0,y=r_w`$). If $`r_w\lambda _J`$, the statinary sine-Gordon equation yields that at $`r\lambda _J`$ $`\phi _0(\stackrel{}{r})`$ is a dipole field:
$$\phi _0(\stackrel{}{r})=\mathrm{arctan}\frac{y}{x}\mathrm{arctan}\frac{yr_w}{x}.$$
(8)
At large distances $`r=\sqrt{x^2+y^2}r_w`$ (but not larger than $`\lambda _J`$) the phase is $`\phi _0(\stackrel{}{r})=r_wx/r^2`$. At distances $`r\lambda _J`$ the phase $`\phi _0(\stackrel{}{r})`$ is exponentially small. We shall call such a phase distribution a short Josephson string.
We look for the lowest-frequency mode for a short Josephson string assuming for the sake of simplicity that the potential $`(1\mathrm{cos}\phi _0)/\lambda _J^2`$ in Eq. (3) is axisymmetric. The most important perturbation originates from distances $`r_wr\lambda _J`$ where the axisymmetric part of $`1\mathrm{cos}\phi _0(r)\phi _0(r)^2/2`$ is equal to $`r_w^2/4r^2`$. An axisymmetric mode is determined from the equation
$$\frac{1}{r}\frac{d}{dr}\left(r\frac{d\phi }{dr}\right)\frac{r_w^2}{4\lambda _J^2}\frac{1}{r^2}\phi =\frac{\omega ^2\omega _p^2}{c_s^2}\phi .$$
(9)
For a bound state in a weak potential one may neglect the right-hand side of Eq. (9) for $`r<\lambda _J`$, and the perturbation theory yields
$$\phi (r)=1\frac{r_w^2}{8\lambda _J^2}\left(\mathrm{ln}\frac{r}{r_w}\right)^2.$$
(10)
Outside the potential well, at $`r>\lambda _J`$, the axisymmetric bound-state oscillation is $`\phi =AI_0(pr)+BK_0(pr)`$, or, if $`pa1`$, $`\phi A\left(1+p^2r^2/4\right)B\mathrm{ln}pr`$. The periodical boundary conditions in a real vortex array can be approximately simulated by the condition $`\phi ^{}(r)=0`$ at the boundary of the Wigner-Zeitz cell $`r=a=\sqrt{\mathrm{\Phi }_0/\pi H}`$. Then
$$\phi A\left(1+\frac{p^2a^2}{2}\mathrm{ln}\frac{1}{pr}\right).$$
(11)
The continuity conditions at the border of the potential well ($`r=\lambda _J`$) yield
$$\frac{r_w^2}{2\lambda _J^2}\mathrm{ln}\frac{\lambda _J}{r_w}=\frac{p^2a^2}{1+\frac{p^2a^2}{2}\mathrm{ln}\frac{1}{p\lambda _J}}.$$
(12)
In very low magnetic fields ($`a\mathrm{}`$)
$$p\frac{1}{\lambda _J}\mathrm{exp}\left[\frac{4\lambda _J^2}{r_w^2}\frac{1}{\mathrm{ln}(\lambda _J/r_w)}\right].$$
(13)
Then the squared frequency $`\omega _0^2=\omega _p^2c_s^2p^2`$ is
$$\omega _0^2=\omega _p^2\left\{1\mathrm{exp}\left[\frac{8\lambda _J^2}{r_w^2}\frac{1}{\mathrm{ln}(\lambda _J/r_w)}\right]\right\}.$$
(14)
For a long Josephson string $`r_w\lambda _J`$ one maynot use the perturbation theory, but the lowest-frequency mode can be calculated as a bending oscillation for an elastic string of the length $`r_w`$ and width $`\lambda _J`$ with its ends fixed. The line tension of the string (the string energy per unit length) is $`ฯต=8E_J\lambda _J`$. The mass of the moving string is determined by the electric energy:
$$M=\frac{2s}{v_L^2}\frac{\epsilon E^2}{8\pi }(d\stackrel{}{r})_2=\frac{\epsilon \mathrm{\Phi }_0^2}{16\pi ^3c^2s}\left(\frac{\phi _0}{x}\right)^2(d\stackrel{}{r})_2.$$
(15)
where $`E=(\mathrm{}/2es)\dot{\phi }=(\mathrm{}/2es)(\stackrel{}{v}_L\stackrel{}{})\phi _0`$ is the electric field parallel to the $`c`$ axis, and $`\stackrel{}{v}_L`$ is the string velocity. Neglecting the logarithmic contribution from vicinities of the string ends, this expression yields for the mass per unit length $`\mu _l=M/r_w=\mathrm{\Phi }_0^2/2\pi ^3c^2s\lambda _J`$ . Finally, the frequency is $`\omega _0^2=\pi ^2ฯต/\mu _lr_w^2=\pi ^2\omega _p^2\lambda _J^2/r_w^2=\pi ^2c_s^2/r_w^2`$. At $`r_w\lambda _J`$ where the crossover from a short to a long Josephson string takes place, this frequency is of the same order as given by Eq. (14).
We see that low magnetic fields do not affect the lowest-frequency localized mode. But at increasing $`H`$, i.e., decreasing the parameter $`pa`$ in Eq. (12), the logarithmic term in the denominator becomes unimportant. Then the squared frequency is given by Eq. (1), with
$$1\mathrm{cos}\phi _0(x)\frac{\phi _0(x)^2}{2}=\frac{r_w^2}{2a^2}\mathrm{ln}\frac{\lambda _J}{r_w},$$
(16)
and the bound state corresponds to the JPR mode. The restriction on observation of the JPR mode is
$$p^2a^2\mathrm{ln}\frac{1}{p\lambda _J}=\frac{r_w^2}{2\lambda _J^2}\mathrm{ln}\frac{\lambda _J}{r_w}\mathrm{ln}\frac{a^2}{r_w^2\mathrm{ln}(\lambda _J/r_w)}1.$$
(17)
Thus the JPR is a weakly localized mode with imaginary $`k`$, contrary to Bulaevskii et al. who related the JPR to the most homogeneous delocalized state. Delocalized states belong to the continuum spectrum, and as well as in the parallel 1D geometry \[see Eq. (7)\], the lowest-frequency mode of the continuum spectrum exceeds the plasma frequency and grows with the magnetic field.
Up to now we considered oscillations for a string with fixed (strongly pinned) ends, since if the ends moved, a singular contribution would appear in $`\phi `$. Decreasing the pinning strength, the frequency of the localized mode decreases, and if pinning vanishes, the frequency must vanish also. The string ends correspond to pancakes in a layered superconductor. Thus the localized mode is a JPR if pinning of pancakes is infinitely strong. But for a finite pinning strength the localized mode is a vortex mode with pancakes and strings moving together and with the frequency which depends on pinning. Since pinning varies from a sample to a sample, it is difficult to suggest an universal estimation of the frequency of the vortex mode. But, as already discussed in Ref. , the fact that the experimentally measured $`c`$-axis critical current has the same dependence on the magnetic field as the squared magnetoabsorption resonance frequency, confirms our suggestion that the magnetoabsorption resonances in high fields correspond not to the JPR, but to the vortex mode governed by pinning.
In order to observe a vortex-mode resonance in a layered superconductor, the inertia force proportional to the vortex mass (we neglect the Magnus force here) should not be small compared to the friction force. A moving vortex line is a chain of pancakes connected by random strings of average length $`r_w`$ , and $`r_w`$ may be considered as a size of an extended vortex core. The Josephson length $`\lambda _J=\gamma s`$ ($`\gamma `$ is the anisotropy ratio) usually exceeds $`r_w`$. The mass $`M`$ of the short string is determined mostly by the vicinity of two pancakes, and in Eq. (15) $`\phi _0/x=y/r^2`$ \[or $`(yr_w)/r^2`$\]. Then the mass per unit length is (cf. Ref. )
$$\mu _s=\frac{M}{s}=\frac{\epsilon \mathrm{\Phi }_0^2}{8\pi ^2s^2c^2}\mathrm{ln}\frac{r_w}{\xi _{ab}},$$
(18)
where $`\xi _{ab}`$ is the coherence length in the $`ab`$ plane. On the other hand, the dissipation rate for string motion is also proportional to the electric energy: $`\eta v_L^2=\sigma _cE^2(d\stackrel{}{r})_2`$, where $`\sigma _c`$ is the normal conductivity along the $`c`$ axis and $`\eta `$ is the friction coefficient per unit vortex length along the $`c`$ axis. Then the quality factor of the resonance (the ratio of the inertia force to the friction force) is $`Q=\mu _l\omega _0/\eta =\epsilon \omega _0/4\pi \sigma _c`$. It is identical to the quality factor of the uniform plasma oscillation as one can see comparing the supercurrent and the normal current in the expression for the total current $`j=(\epsilon \omega _p^2/4\pi i\omega )\stackrel{}{E}+\sigma _c\stackrel{}{E}`$. Thus normal-current dissipation is not more dangerous for the vortex mode than for the plasma mode.
The assumption of infinitely strong pancake pinning cannot be valid everywhere, and therefore the JPR mode cannot be an universal explanation of the magnetoabsorption resonances. This explanation was based on Eqs. (1) and (16). In Refs. I argued that in order to derive the observed anticyclotronic dependence from these equations, one must make a too strong assumption that $`r_wa`$ in a wide interval of fields. The condition $`r_wa`$ means a complete disintegration of vortex lines. Note that now Bulaevskii et al. also believe that this condition does not take place and line structure retains in the vortex liquid. Then they must accept that it is impossible to explain the anticyclotronic behavior in the liquid phase in terms of JPR. As discussed in Ref. , the line structure in the vortex liquid was confirmed by experiments on the microwave absorption .
The analysis given above shows that Eq. (1) itself is not always valid. In addition, in high magnetic fields ($`a\lambda _J`$) one may not use Eq. (16) for the decoupled state of the vortex matter in which pancakes positions in neighboring layers are uncorrelated, but the 2D itralayer crystalline order still exists. Then the Josephson strings form an ordered 2D lattice in any interlayer spacing. In high magnetic fields when $`\lambda _J`$ exceeds $`a`$, Eq. (16) is valid if orientations of the Josephson strings are random, and there is no interference between contributions of distant strings (at distances larger than $`a`$) to $`\phi _0(\stackrel{}{r})^2`$ . But if the strings form a lattice, the distribution of $`\phi _0(\stackrel{}{r})`$ is similar to the distribution of the current component $`j_y`$ for the vortex lattice in the mixed state. This means that the contributions from distant strings to $`\phi _0(\stackrel{}{r})^2`$ cancel, and the logarithm $`\mathrm{ln}(\lambda _J/r_w)`$ in Eq. (16) must be replaced by $`\mathrm{ln}(a/r_w)`$. Then even the assumption $`r_wa`$ does not help to obtain the anticyclotronic behavior.
In summary, we explain the experimental dependence of magnetoabsorption resonances on a low perpendicular magnetic field as the following. In very low fields, before the frequency jump, the magnetoabsorption resonance weakly depends on a field as was predicted for the JPR , and therefore is the JPR mode. After the jump which apparently coincides with the vortex-matter phase transition (the second-peak line at low temperatures), the magnetoabsorption resonance becomes the vortex mode governed by pancake pinning. This explains the anticycltronic dependence after the jump. The nature of the frequency jump depends on the structure of the vortex matter in the high-field phase which is not well established up to now. It maybe related with a jump of $`r_w`$ as supposed by the decoupling scenario of the phase transition . The jump of $`r_w`$ should result in a jump of vortex mass, or in weakening of pinning, or both.
Any scenario of the phase transition supposes that in the high-field phase directions of vortex lines strongly oscillate , Then even the electric field strictly parallel to the $`c`$ axis and to the average direction of vortices can make some tilted segments of vortex lines to oscillate. Thus the Lorentz-force-free geometry of the magnetoabsorption experiments in perpendicular fields does not rule out a possibility to excite the vortex mode.
The work was supported by the grant of the Israel Academy of Sciences and Humanities. |
no-problem/9912/cond-mat9912043.html | ar5iv | text | # Condensation Energy and Spectral Functions in High Temperature Superconductors
## I Introduction
The origin of high temperature superconductivity in the cuprates is still a matter of great debate. Recently, there have been several different theoretical proposals for the mechanism of high $`T_c`$ superconductivity, each of which leads to a characteristically different reason for the lowering of the free energy. This has focused attention on how various spectroscopic probes can yield information on the source of the condensation energy which drives the formation of the superconducting ground state.
The first, and perhaps the most radical, proposal is the interlayer tunneling theory of Anderson and co-workers, where it is conjectured that the condensation energy is due to a gain in the c-axis kinetic energy in the superconducting state . Some measurements of the c-axis penetration depth are in conflict with the predictions of this theory. Others, such as recent c-axis optical conductivity data indicating a violation of the optical sum rule, are in support of this hypothesis, although alternative explanations have been proposed for these observations . An even more unusual suggestion has been recently made by Hirsch and Marsiglio, where they argue that the bulk of the condensation energy comes from a gain in the in-plane kinetic energy. A rather different approach proposes the lowering of the Coulomb energy in the long wavelength, infrared region , which has not been experimentally tested as yet. A fourth approach advocates a lowering of the exchange energy in the superconducting state due to the formation of a resonant mode in the dynamic spin susceptibility near $`๐ช=(\pi ,\pi ,\pi )`$, and has recently received experimental support from neutron scattering studies .
We note that all of the above proposals focus on a part of the Hamiltonian describing the system: either a part of the kinetic energy, or a part of the interaction energy. Correspondingly, the experiments to test these ideas focus on two-particle correlation functions in a specific region of momentum and frequency space.
In this paper we propose to exploit a very general exact relation between the one-particle Greenโs function of a system and its internal energy (see Eq. 1 below). This approach, in principle, allows us to determine the โsourceโ of the condensation energy without making any a priori assumptions about which piece of the Hamiltonian is responsible for the gain in condensation energy. The exact expression used involves moments of the occupied part of the one-electron spectral function, and since this quantity is directly related to angle-resolved photoemission spectroscopy (ARPES) measurements, our approach also appears very promising from a practical point of view.
As a specific illustration of this general framework, we study the condensation energy for a very simple self-energy for the normal and superconducting states which captures the essential features of the observed ARPES lineshapes, the so-called mode model . This analysis leads to several interesting conclusions as discussed below, but most importantly, it suggests an intimate connection between the nature of the normal state spectral function (Fermi liquid or non Fermi liquid) and the formation of sharply defined quasiparticle excitations below $`T_c`$, and the gain in free energy in the superconducting state.
This paper is organized as follows. In Section II, the formalism relating the condensation energy to the spectral function is developed. In Section III, the mode model is introduced, and the nature of the resulting condensation energy is discussed. In Section IV, our observations concerning ARPES spectra are used to comment on the results of previous spectroscopic studies, as well as the origin of the condensation energy. In Section V, we address the question of the nature of the superconducting transition versus hole doping. In Section VI, we offer some concluding remarks. Finally, we include two appendices. Appendix A further explores questions raised in Section II in regards to the full Hamiltonian and the virial theorem. In Appendix B, we comment on the applicability of the formalism of Section II to experimental data (ARPES and tunneling).
## II Formalism
We begin with the assumption that the condensation energy does not have a component due to phonons, though as we mention below, this condition can be relaxed. We note that at optimal doping, the isotope exponent $`\alpha `$ is essentially zero, and Chester proved that the change in ion kinetic energy between superconducting and normal states vanishes for $`\alpha =0`$. To proceed, we assume an effective single-band Hamiltonian which involves only two particle interactions. Then, simply exploiting standard formulas for the internal energy $`U=\langle H\rangle -\mu N`$ ($`\mu `$ is the chemical potential, and $`N`$ the number of particles) in terms of the one-particle Green's function, we obtain
$`U_NU_S={\displaystyle \underset{\mathbf{k}}{\sum }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}d\omega (\omega +\epsilon _k)f(\omega )\left[A_N(\mathbf{k},\omega )-A_S(\mathbf{k},\omega )\right]`$ (1)
Here and below the subscript $`N`$ stands for the normal state, $`S`$ for the superconducting state. $`A(\mathbf{k},\omega )`$ is the single-particle spectral function, $`f(\omega )`$ the Fermi function, and $`\epsilon _k`$ the bare energy dispersion which defines the kinetic energy part of the Hamiltonian. Note that the $`\mu N`$ term has been absorbed into $`\omega `$ and $`\epsilon _k`$, that is, these quantities are defined relative to the appropriate chemical potential, $`\mu _N`$ or $`\mu _S`$. In general, $`\mu _N`$ and $`\mu _S`$ will be different. This difference has to be taken into account, since the condensation energy is small.
The condensation energy is defined by the zero temperature limit of $`U_N-U_S`$ in the above expression. Note that this involves defining (or somehow extrapolating to) the normal state spectral function at $`T=0`$. Such an extrapolation, which we return to below, is not specific to our approach, but required in all estimates of the condensation energy. We remark that Eq. 1 yields the correct condensation energy, $`N(0)\mathrm{\Delta }^2/2`$, for the BCS theory of superconductivity.
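This BCS check is easy to reproduce numerically. The following is a minimal sketch of ours (not code from any reference): it assumes the BCS spectral functions $`A_N=\delta (\omega -\epsilon )`$ and $`A_S=u^2\delta (\omega -E)+v^2\delta (\omega +E)`$, a flat density of states $`N(0)=1`$, and a band half-width $`W`$ much larger than $`\mathrm{\Delta }`$; the $`\omega `$ integral in Eq. 1 is then trivial at $`T=0`$, leaving a single integral over the bare energy.

```python
import numpy as np
from scipy.integrate import quad

Delta = 1.0        # gap (energies in units of Delta); our illustrative choice
W = 200.0          # band half-width cutoff, W >> Delta

def integrand(eps):
    E = np.hypot(eps, Delta)                 # E_k = sqrt(eps^2 + Delta^2)
    v2 = 0.5 * (1.0 - eps / E)               # BCS coherence factor v_k^2
    normal = 2.0 * eps if eps < 0 else 0.0   # (w + eps) f(w) A_N at the pole w = eps
    super_ = v2 * (eps - E)                  # (w + eps) f(w) A_S at the pole w = -E
    return normal - super_                   # per unit N(0)

dU, _ = quad(integrand, -W, W, points=[0.0])
print(dU, Delta**2 / 2)   # -> 0.49999... vs 0.5, i.e. N(0) Delta^2 / 2
```

As the cutoff $`W`$ grows, the numerical result approaches $`\mathrm{\Delta }^2/2`$ per unit $`N(0)`$, as stated in the text.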
We also note that Eq. 1 can be broken up into two pieces to individually yield the thermal expectation value of the kinetic energy (using $`2\epsilon _k`$ in the parentheses in front of $`f(\omega )`$), and that of the potential energy (using $`\omega -\epsilon _k`$ instead). Further, this expression can also be generalized to the free energy by including the entropy term as discussed by Wada. Moreover, if the phonons can be treated in a harmonic approximation, the terms missing in Eq. 1 (half the electron-phonon interaction, and all other phonon terms) reduce to twice the phonon kinetic energy. The phonon kinetic energy can then be determined if the isotope coefficient is known. For $`\alpha =1/2`$, the missing terms in this approximation reduce to twice the condensation energy, so that Eq. 1 is realized again, but with a negative sign.
The great advantage of Eq. 1 is that it involves just the occupied part of the single particle spectral function, which is measured by angle resolved photoemission spectroscopy (ARPES). Therefore, in principle, one should be able to derive the condensation energy from such data, if an appropriate extrapolation of the normal state spectral function to T=0 can be made. On the other hand, a disadvantage is that the bare energies, $`\epsilon _k`$, are a priori unknown. Note that these are not directly obtained from the measured ARPES dispersion, which already includes many-body renormalizations, nor are they simply determined by the eigenvalues of a band calculation, as such calculations also include an effective potential term. Rather, they could be determined by projecting the kinetic energy operator onto the single-band subspace. Methodologies for doing this when reducing to an effective single-band Hubbard model have been worked out for the cuprates, and could be exploited for this purpose.
Eq. 1 trivially reduces to the following:
$`U_NU_S={\displaystyle \underset{\mathbf{k}}{\sum }}\epsilon _k\left[n_N(\mathbf{k})-n_S(\mathbf{k})\right]+{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}d\omega \omega f(\omega )\left[N_N(\omega )-N_S(\omega )\right]`$ (2)
where $`n(\mathbf{k})`$ is the momentum distribution function, and $`N(\omega )`$ the single-particle density of states. While ARPES has the advantage of giving information on both terms in this expression, other techniques could be exploited as well for the individual terms in Eq. 2. For instance, $`n(\mathbf{k})`$ in principle can be obtained from positron annihilation or Compton scattering, while $`N(\omega )`$ could be determined from tunneling data, although matrix elements could be a major complication for both tunneling and ARPES.
We conclude this Section with some remarks about a low-energy effective single-band Hamiltonian used to derive Eq. 1 versus the full Hamiltonian of the solid which includes quadratic dispersions for all (valence and core) electrons, ionic kinetic energies, together with all Coulombic interactions (see e.g., Ref. ). As shown by Chester, the full $`H`$ can be very useful for studying the condensation energy. We discuss some points related to such a description in the Appendix.
Here we only wish to emphasize one important point which will come up later in our analysis. In terms of the full Hamiltonian, the transition to the superconducting state must be driven by a gain in the potential energy (ignoring ion terms for this argument), as is intuitively obvious and also rigorously shown by Chester using the virial theorem. However, the kinetic energy terms in the effective single-band Hamiltonian can (and in general do) incorporate effects of the potential energy terms of the full Hamiltonian. Further, there is no virial theorem restriction on the expectation values of the kinetic and potential terms of the effective Hamiltonian (since these do not, in general, obey the requisite homogeneity conditions). As a consequence, there is nothing preventing the effective low-energy Hamiltonian from having a superconducting transition driven by a lowering of the (effective) kinetic energy.
## III The Mode Model
To illustrate the power of the formalism, as well as some of the subtleties discussed above, we now analyze the condensation energy arising from a spectral function described by a simple model self-energy which captures some of the essential features of the ARPES data in the important region of the Brillouin zone near $`(\pi ,0)`$ in the cuprates. These features are: (1) a broad normal state spectral function $`A`$ which seems $`T`$-independent in the normal state (except in the underdoped case, where there is a pseudogap which fills in as $`T`$ increases), and thus can be used as the extrapolated "normal" state $`A_N`$ down to $`T=0`$ in Eq. 1. (2) A superconducting state spectral function $`A_S`$ which shows a gap, a sharp quasiparticle peak, and a dip-hump structure at higher energies. At a later stage, we will have to make some reasonable assumptions about the $`\mathbf{k}`$-dependence of the spectral functions to perform the zone sum in Eq. 1.
These nontrivial changes in the ARPES lineshape going from the normal to the superconducting state have been attributed to the interaction of an electron with an electronic resonant mode below $`T_c`$, which itself arises self-consistently from the lineshape change. Strong arguments have been given which identify this resonant mode with one observed by magnetic neutron scattering. Thus our analysis below will also have bearing upon the arguments mentioned in the Introduction which relate the resonant mode directly to changes in the exchange energy.
The simplest version of the resonant mode model is a self-energy of the form
$$\mathrm{\Sigma }=\frac{\mathrm{\Gamma }}{\pi }\mathrm{ln}\left|\frac{\omega -\omega _0-\mathrm{\Delta }}{\omega +\omega _0+\mathrm{\Delta }}\right|+i\mathrm{\Gamma }\mathrm{\Theta }(|\omega |-\omega _0-\mathrm{\Delta }),$$
(3)
where $`\omega _0`$ is the resonant mode energy, $`\mathrm{\Delta }`$ the superconducting energy gap, and $`\mathrm{\Theta }`$ the step function. (A more complicated form has been presented in earlier work.) This self-energy is then used in the superconducting state spectral function
$$A=\frac{1}{\pi }\mathrm{Im}\frac{Z\omega +\epsilon }{Z^2(\omega ^2-\mathrm{\Delta }^2)-\epsilon ^2}$$
(4)
where $`Z=1-\mathrm{\Sigma }/\omega `$. We note that for this form of $`\mathrm{\Sigma }`$, the spectral function $`A_S`$ will consist of two delta functions located at $`\pm E`$, where $`E`$ satisfies two conditions: (1) it has a value less than $`\omega _0+\mathrm{\Delta }`$ and (2) the denominator of Eq. 4 vanishes. The weights of the delta functions are then determined from $`|dA^{-1}(\pm E)/d\omega |`$. In addition, there are incoherent pieces for $`|\omega |`$ greater than $`\omega _0+\mathrm{\Delta }`$. We use the same self-energy for the (extrapolated) normal state with $`\mathrm{\Delta }=0`$ and $`\omega _0=0`$, so that $`A_N`$ reduces to a Lorentzian centered at $`\epsilon `$ with a full width at half maximum of $`2\mathrm{\Gamma }`$.
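For readers who want to reproduce these spectra, the sketch below (ours, not code from the paper) implements Eqs. 3 and 4 and locates the quasiparticle pole by finding the zero of the denominator, which is real inside the gap $`|\omega |<\omega _0+\mathrm{\Delta }`$. The values of $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }`$ follow those quoted later in this Section; the resonance energy $`\omega _0=41`$ meV is an assumed value consistent with $`c|\mathrm{\Delta }_0|`$ for the quoted parameters.

```python
import numpy as np
from scipy.optimize import brentq

Gamma, Delta, omega0 = 230.0, 32.0, 41.0   # meV; omega0 is our assumption
Og = omega0 + Delta                        # edge of the incoherent gap

def sigma(w):                              # model self-energy of Eq. 3
    re = (Gamma / np.pi) * np.log(abs((w - Og) / (w + Og)))
    im = Gamma if abs(w) > Og else 0.0
    return re + 1j * im

def denom(w, eps):                         # denominator of Eq. 4 (real for |w| < Og)
    Z = 1.0 - sigma(w) / w
    return (Z**2 * (w**2 - Delta**2) - eps**2).real

eps = -30.0                                # a state just below the Fermi surface
E = brentq(denom, Delta + 1e-6, Og - 1e-6, args=(eps,))
print("quasiparticle poles at +/-%.1f meV, below omega0 + Delta = %.0f meV" % (E, Og))
```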
To begin with, for simplicity, we treat both $`\omega _0`$ and $`\mathrm{\Delta }`$ as momentum independent. It is straightforward to evaluate Eq. 1 with the sum over momentum reducing to an integral over $`\epsilon `$. In Fig. 1a, we plot the integrand of the $`\epsilon `$ integral (i.e., after the $`\omega `$ integral has been done). The parameters used are the same ones used earlier to fit ARPES data near optimal doping at the $`(\pi ,0)`$ point. The result is somewhat surprising. The integrand is negative for $`\epsilon `$ near zero (i.e., $`k`$ near $`k_F`$) and positive for $`\epsilon `$ far enough away. This should be contrasted with the BCS result, shown in Fig. 1b, where the contribution at $`k_F`$ (which is $`\mathrm{\Delta }/2`$) is maximal and positive.
To gain insight into this unusual result, we also show in Fig. 1a the decomposition of this result into kinetic and potential energy pieces. Unlike BCS theory (Fig. 1b), where the condensation is driven by the potential energy, in the mode model case, it is kinetic energy driven. To understand the unusual decrease in the kinetic energy as one goes below $`T_c`$, we show in Fig. 2 the momentum distribution function $`n(\mathbf{k})`$ plotted versus $`\epsilon `$. Note that in contrast to BCS theory, $`n(\mathbf{k})`$ is sharper in the superconducting state than in the normal state. The reason is very simple. The (extrapolated) normal state is subject to a large broadening $`\mathrm{\Gamma }`$ all the way down to $`T=0`$ which smears out $`n(\mathbf{k})`$ on the scale of $`\mathrm{\Gamma }`$. At $`T=0`$ the result is simply: $`n_N(\mathbf{k})=1/2-\mathrm{tan}^{-1}\left(\epsilon /\mathrm{\Gamma }\right)/\pi `$. In the superconducting state, although $`\mathrm{\Delta }`$ broadens $`n(\mathbf{k})`$ as in BCS theory, one now has quasiparticle peaks. The effect of this on sharpening $`n(\mathbf{k})`$ is much larger than the broadening due to $`\mathrm{\Delta }`$ (for $`\mathrm{\Delta }\ll \mathrm{\Gamma }`$), so the net effect is a significant sharpening. As a consequence, the kinetic energy is lowered in the superconducting state.
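Continuing the sketch above, the occupied weight $`n(\mathbf{k})`$ at $`T=0`$ can be assembled from the pole weight at $`\omega =-E`$ plus the occupied incoherent weight, while the broadened normal state has the closed form quoted in the text. The band cutoff `Wband` below is an arbitrary choice of ours.

```python
from scipy.integrate import quad   # reuses sigma, denom, Gamma, Delta, Og above

def A_inc(w, eps):                 # incoherent piece of Eq. 4, nonzero for |w| > Og
    Z = 1.0 - sigma(w) / w
    G = (Z * w + eps) / (Z**2 * (w**2 - Delta**2) - eps**2)
    return G.imag / np.pi

def n_super(eps, Wband=1500.0):
    E = brentq(denom, Delta + 1e-6, Og - 1e-6, args=(eps,))
    dD = (denom(-E + 1e-4, eps) - denom(-E - 1e-4, eps)) / 2e-4
    Z = 1.0 - sigma(-E) / (-E)
    w_pole = abs((Z * (-E) + eps) / dD)      # weight of the delta function at w = -E
    inc, _ = quad(lambda w: A_inc(w, eps), -Wband, -Og - 1e-3, limit=200)
    return w_pole + inc                      # occupied weight at T = 0

def n_normal(eps):                           # closed form quoted in the text
    return 0.5 - np.arctan(eps / Gamma) / np.pi

print(n_normal(-30.0), n_super(-30.0))       # the superconducting n(k) is the sharper one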
Note that these counterintuitive results would not have been obtained had $`\omega _0`$ retained the same (non-zero) value in the normal state. In this case, sharp quasiparticles would exist in the normal state, and all of our usual expectations are fulfilled: $`n_N(\mathbf{k})`$ would have had a step discontinuity (also illustrated in Fig. 2), and the normal state kinetic energy would have been considerably lower than the superconducting one. In fact, for this situation, the model is equivalent to that of Einstein phonons in an approximation where the gap is treated as a (real) constant in frequency. However, the normal state ARPES data near $`(\pi ,0)`$ are clearly consistent with $`\omega _0=0`$ and are $`T`$-independent with a $`\mathrm{\Gamma }\gg T`$, which suggests that the T=0 extrapolation used here is reasonable.
These points are further illustrated in Fig. 1c, where we show the energy difference between the normal state with $`\omega _0`$ non-zero, and the normal state with $`\omega _0`$ zero. Note the similarity to Fig. 1a, i.e., the unusual behavior in Fig. 1a is due to the formation of a gap in the incoherent part of the spectral function, with the resulting appearance of quasiparticle states, and thus not simply due to the presence of a superconducting energy gap, $`\mathrm{\Delta }`$. Now, in the real system, it is the transition to a phase coherent superconducting state which leads to the appearance of the resonant mode at non-zero energy, which causes the gap in $`\mathrm{Im}\mathrm{\Sigma }`$, which results in the incoherent gap and quasiparticles, which in turn generates the mode. Although this self-consistency loop clearly indicates the electron-electron nature of the interaction (as opposed to an electron-phonon one), the connection of these effects with the onset of phase coherence (as opposed to the opening of a spectral gap, which is known to occur at a higher temperature, $`T^{*}`$) is not understood at this time. That is, the mode model is a crude simulation of the consequences of some underlying microscopic theory which has yet to be developed.
As for the potential energy piece, we note that the contribution to Eq. 1 at $`k_F`$ (where $`\epsilon _k=0`$) reduces to the first moment of the spectral function. In Fig. 3a, we plot the spectral function at $`k_F`$ in both the normal and superconducting states. (For illustrative purposes, we have replaced the delta function peaks in the superconducting state by Lorentzians of half width at half maximum 10 meV.) From this plot, we note that the quasiparticle peaks give a positive contribution to the condensation energy, but that at higher energies (large $`|\omega |`$), there is a negative contribution. This negative contribution is very important because it is weighted by $`\omega `$ in the integrand of Eq. 1. To see this quantitatively, we plot in Fig. 3b the first moment difference at the Fermi surface ($`\epsilon _k=0`$) as a function of the lower cut-off on the $`\omega `$ integration (the upper cutoff at $`T=0`$ is $`\omega =0`$). We clearly see the positive contribution due to the quasiparticle peak and the (five times larger) negative contribution due to the incoherent tail. This explains why the net contribution from the potential energy term is negative. We can contrast this with BCS theory, where only the quasiparticle part exists, and so the net contribution is positive.
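The cumulative first moment at $`k_F`$ can be sketched in the same way (again our illustration, reusing `sigma` and `A_inc` from the sketches above): the delta function at $`\omega =-\mathrm{\Delta }`$ is broadened by 10 meV as in Fig. 3a, and the printed cutoffs are arbitrary choices of ours.

```python
eta = 10.0                                 # Lorentzian HWHM replacing the delta peak
Zg = (1.0 - sigma(-Delta) / (-Delta)).real
w_pole = 1.0 / (2.0 * Zg)                  # pole weight at w = -Delta for eps = 0

def A_S0(w):
    peak = w_pole * (eta / np.pi) / ((w + Delta)**2 + eta**2)
    return peak + (A_inc(w, 0.0) if abs(w) > Og else 0.0)

def A_N0(w):                               # Lorentzian of HWHM Gamma at eps = 0
    return (Gamma / np.pi) / (w**2 + Gamma**2)

def M1(cut):                               # int_cut^0 w [A_N - A_S] dw, cf. Fig. 3b
    pts = [p for p in (-Og, -Delta) if cut < p < 0.0]
    val, _ = quad(lambda w: w * (A_N0(w) - A_S0(w)), cut, 0.0,
                  points=pts or None, limit=200)
    return val

for cut in (-60.0, -150.0, -400.0, -1000.0):
    print(cut, M1(cut))   # positive for shallow cuts, turning negative as the tail enters
```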
An interesting question concerns what happens in this model as the broadening, $`\mathrm{\Gamma }`$, is reduced. In Fig. 4, we show results like those of Fig. 1a, but for various $`\mathrm{\Gamma }`$ values. As $`\mathrm{\Gamma }`$ is reduced and becomes comparable to $`\mathrm{\Delta }`$, one crosses over from the unusual behavior in Fig. 1a to a behavior very similar to that of BCS theory in Fig. 1b. That is, the condensation energy crosses over from being kinetic energy driven to being potential energy driven. This is not a surprise, since in the limit $`\mathrm{\Gamma }`$ goes to zero, the model reduces to BCS theory. The physics behind this, though, is quite interesting. For large $`\mathrm{\Gamma }`$, the normal state is very non Fermi liquid like. As $`\mathrm{\Gamma }`$ is reduced, though, the normal state becomes more Fermi liquid like. As a consequence, one crosses over from being kinetic energy driven to potential energy driven (when $`\mathrm{\Gamma }\sim \mathrm{\Delta }`$). The relation of kinetic energy driven behavior with the presence of a non Fermi liquid normal state, and a Fermi liquid superconducting state, was realized early on by Anderson, and will be returned to in Section IV of the paper. Fig. 4 also draws attention to the fact that being kinetic or potential energy driven is a relative point. Note in Fig. 4b that near $`k_F`$, the two contributions have the same sign. Individual terms, such as the potential energy in Fig. 4b, and the kinetic energy in other cases we have explored, can even change sign as a function of $`\epsilon _k`$.
A potential worry which emerges from the above calculations is the large contribution in Figs. 1 and 4 at large $`|\epsilon |`$. In particular, in most cases, the bulk of the contribution to the condensation energy comes from well away from the Fermi surface, in contrast to BCS theory. In Fig. 1a, this is due to the large $`\mathrm{\Gamma }`$, which leads to a substantial rearrangement of the spectral function even for large $`|\epsilon |`$, causing large contributions to both the potential and kinetic energy pieces. Even in the case of Fig. 4a, where $`\mathrm{\Gamma }`$ is quite small, there is still a potential energy contribution at large $`|\epsilon |`$. This can be traced to the gap in the incoherent part of the spectral function, with the resulting spectral weight being recovered around $`\omega =\epsilon `$, leading to a potential energy shift. Even in the BCS case, Fig. 1b, the individual potential and kinetic energy pieces would not converge if integrated over an infinite range in $`\epsilon `$ (even though their difference would). In BCS theory, this is corrected by an ultraviolet cut-off (the Debye frequency). We elect not to include such a cut-off in the mode model, since it would lead to another adjustable parameter, and the $`\epsilon `$ integral is bounded by the band edges, and so is convergent. In the real system, the "mode" effects in the spectral function disappear as one approaches the band edges, and as discussed in the following paragraph, this effect can be crudely simulated by setting the mode energy proportional to $`\mathrm{\Delta }_k`$, the latter quantity in the d-wave case vanishing along the zone diagonal where the band edges are located.
Although we plot only the differences in Figs. 1, 3b, and 4, the individual normal and superconducting state terms are quite large. This raises the question of what the value of Eq. 1 would actually be if summed over the zone. To do this, we must make some assumptions about what the momentum dependence of various quantities are. For simplicity sake, we will treat $`\mathrm{\Gamma }`$ as $`k`$ independent, though we note that available ARPES data are consistent with this quantity being reduced in size as one moves from $`(\pi ,0)`$ towards the Fermi crossing along the $`(\pi ,\pi )`$ direction. In the first sum, denoted by case (a), we treat $`\mathrm{\Delta }`$ and $`\omega _0`$ as $`k`$ independent. In the second sum, denoted by case (b), we replace $`\mathrm{\Delta }`$ by $`\mathrm{\Delta }_k=\mathrm{\Delta }_0(\mathrm{cos}(k_xa)\mathrm{cos}(k_ya))/2`$, where $`\mathrm{\Delta }_k`$ is the standard d-wave gap function, but still retain a $`k`$-independent $`\omega _0`$. In the third sum (c), in addition to the d-wave $`\mathrm{\Delta }_k`$ we also take $`\omega _0=c|\mathrm{\Delta }_k|`$, with the $`k`$ dependence of $`\omega _0`$ crudely simulating the fact that the mode effects in the spectral function are reduced as one moves away from the $`(\pi ,0)`$ points of the zone . The values of these parameters are the same as used in Fig. 1a, and are consistent with ARPES and neutron data for Bi2212 ($`\mathrm{\Gamma }`$=230 meV, $`\mathrm{\Delta }_0`$=32 meV, $`c`$=1.3). To perform the zone sum, we have to make some assumptions on what the $`ฯต_k`$ are. As the mode model is designed to account for the difference between the normal state and superconducting state, we elect to use normal state ARPES dispersions for $`ฯต_k`$ , though we caution that this represents a different choice for the โkineticโ energy part of the effective single-band Hamiltonian than is typically used . Because this dispersion has particle-hole asymmetry, the chemical potential will not be the same in the superconducting state as in the normal state. The chemical potential is thus tuned to achieve the same density (a hole doping x=0.16) as the normal state. Note that the normal state density itself is a function of $`\mathrm{\Gamma }`$ (we assume $`\omega _0`$=0 for the normal state).
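The three parameterizations can be laid out on a Brillouin-zone grid as in the sketch below (ours; the full zone sum would in addition require the ARPES-derived $`\epsilon _k`$ and the retuning of the chemical potential described above, which we do not reproduce here). Each $`(\epsilon _k,\mathrm{\Delta }_k,\omega _{0,k})`$ triple would then feed the per-$`k`$ integrals sketched earlier.

```python
# Gap and mode-energy maps for the three zone-sum cases (a sketch of ours).
a = 1.0                                                # lattice constant (units)
k = np.linspace(-np.pi, np.pi, 64)
kx, ky = np.meshgrid(k, k)
Delta0, c = 32.0, 1.3                                  # meV, as quoted in the text
Dk_a  = np.full_like(kx, Delta0)                       # case (a): isotropic gap
Dk_bc = Delta0 * (np.cos(kx * a) - np.cos(ky * a)) / 2 # cases (b), (c): d-wave gap
om_ab = np.full_like(kx, omega0)                       # cases (a), (b): flat mode energy
om_c  = c * np.abs(Dk_bc)                              # case (c): mode ~ |Delta_k|
```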
Performing the zone sum, we find condensation energies of +3.6, +3.3, and +1.1 meV per CuO$`_2`$ plane for cases (a), (b) and (c), respectively. We note that the last result is the most physically appropriate, and though small, is somewhat larger than the condensation energy of 1/4 meV per plane estimated by Loram et al. from specific heat data for optimally doped YBCO. The above values will be reduced if a more realistic $`k,\omega `$ dependence is used for $`\mathrm{\Gamma }`$, since, as we noted above, $`\mathrm{\Gamma }`$ decreases as one moves away from $`(\pi ,0)`$. Consistent with Fig. 1a, the contribution to the condensation energy is negative for an anisotropic shell around the Fermi surface (due to the anisotropy of $`\mathrm{\Delta }_k`$ and $`\epsilon _k`$), and positive outside of this shell. Again, this will be sensitive to the $`k`$ dependence of $`\mathrm{\Gamma }`$, as can be seen from Fig. 4. We also remark that there are chemical potential shifts of +2.6, +2.1, and +1.4 meV, respectively, for cases (a), (b) and (c). Again, the last value is the most physically appropriate. It is very interesting to note that somewhat smaller positive shifts (around +0.6 meV) have been seen experimentally in YBCO. These shifts are a consequence of particle-hole asymmetry and the change in $`n_k`$ when going into the superconducting state.
## IV Connections With Previous Work
While a quantitative evaluation of Eq. 1 using experimental data as input on the right hand side must await further progress as discussed in Appendix B, several qualitative points can be made even at this stage. From Eq. 1, there is a one to one correspondence between the changes in the spectral function and the condensation energy. That is, the condensation energy is due to the profound change in lineshape seen in photoemission data when going below $`T_c`$. When summed over the zone, this in turn leads to changes in the tunneling density of states (second part of Eq. 2). These spectral function changes cause, and are themselves caused by, changes of various two particle correlation functions, such as the optical conductivity and the dynamic spin susceptibility, which have previously been used by others to comment about the nature of the condensation energy.
In this context, we now discuss the earlier work concerning the c-axis conductivity. The most dramatic changes in the ARPES lineshape when going below $`T_c`$ occur near the $`(\pi ,0)`$ points of the zone. It is exactly these points of the zone which appear to have the largest c-axis tunneling matrix elements associated with them. Previous work has found a strong correlation between the c-axis conductivity and ARPES spectra near the $`(\pi ,0)`$ points of the zone. Therefore, it is rather straightforward to speculate that it is the formation of strong quasiparticle peaks in these regions of the zone, and the resulting changes in the spectral function at higher binding energy, which is responsible for the lowering of the c-axis kinetic energy. We note that earlier, Anderson had remarked that if the quasiparticle weight is coming from high binding energy, then one would expect a lowering of the kinetic energy. This in fact is what is occurring in the mode model calculations, though we note from our work that the true quantity which determines the sign of the kinetic energy change in the vicinity of $`k_F`$ is the gradient of the momentum distribution function at $`k_F`$.
We also remark that the change in c-axis kinetic energy has been recently addressed by Ioffe and Millis in the context of the same mode model used in the current paper. These effects would enter directly in Eq. 1 by including a c-axis tunneling contribution to $`\epsilon _k`$. As for the in-plane kinetic energy, it is so large that it is difficult to determine its contribution to Eq. 1 from optical conductivity data because of some of the same normalization concerns mentioned in Appendix B in regards to ARPES and tunneling data. Still, if the mode model calculation is a reflection of reality, we can speculate that $`n(\mathbf{k})`$ will probably sharpen in the superconducting state, leading to a lowering of the in-plane kinetic energy. How large the effect will be is somewhat difficult to determine, in that the same regions of the zone where large changes are seen in the ARPES lineshape are also characterized by small Fermi velocities (the optical conductivity involves a zone sum weighted by $`v_F^2`$). Along the $`(\pi ,\pi )`$ direction, for instance, there is still some controversy concerning how dramatic the lineshape change is below $`T_c`$. Also, as can be seen from Fig. 4, this question is very dependent on the variation of the normal state lineshape in the zone. Although the lineshape near $`(\pi ,0)`$ is highly non Fermi liquid like, the behavior along the $`(\pi ,\pi )`$ direction appears to be marginal Fermi liquid like. As remarked in Section III, the more Fermi liquid like the normal state lineshape is, the greater the tendency is to switch over to potential energy driven behavior instead. Improved experimentation should again lead to a resolution of these issues.
This brings us to the question concerning the relation of the magnetic resonant mode observed by neutron scattering to the condensation energy. All calculations of the resonant mode assume the existence of quasiparticle peaks. In the absence of such quasiparticle peaks, a sharp resonance is not expected. That is, the sharp resonance observed by neutron scattering, and the resulting lowering in the exchange energy part of the t-J Hamiltonian, is again a consequence of the formation of quasiparticle states. In this context, it is important to note that the d-wave coherence factors associated with quasiparticle states are important for the formation of the resonance, whether in the context of calculations in the particle-hole channel, or in the particle-particle scenario proposed by Demler and Zhang. In any case, this again supports our statement, motivated by Eq. 1, that it is the dramatic change in the ARPES spectra below $`T_c`$ which is the source of the condensation energy.
In this regard, we note a puzzling feature in connection with the mode model. Although it was designed to take into account the effect of the magnetic resonance mode on the spectral function, the condensation in the mode model is kinetic energy driven. This is in contrast to the potential energy driven nature of the condensation with the resonant mode discussed in the context of the t-J model, despite the same underlying physics. There are two possibilities for this apparent discrepancy. First, the break-up of the Hamiltonian into potential and kinetic energy pieces depends on the particular single-band reduction which is done. The superexchange energy, which is a kinetic energy effect at the level of the Hubbard model, appears as a potential energy term when reduced to the t-J Hamiltonian. In the mode model, the kinetic energy is equated to $`\epsilon _k`$ based on normal state ARPES dispersions, while the potential energy term leads to effects described by the $`\mathrm{\Sigma }`$ of Eq. 3.
The second possibility is that the argument of Ref. is confined to low energies of order $`\mathrm{\Delta }`$. As demonstrated in Fig. 3b, if the mode model is confined to such energy scales, the first moment (i.e., the potential) term would reverse sign, since the quasiparticle peak always gives a positive contribution to the first moment. That is, one would expect the resonance to lower the exchange energy since it is a consequence of the quasiparticle states, which lower the potential energy in Eq. 1. It is the difference in the high energy incoherent tails (Fig. 3), though, which is ultimately responsible for the increase of the net potential energy in Fig. 1a. This would imply that the neutron scattering results may change if more complete data at higher energies and other $`q`$ values are obtained. That is, the true answer will depend on where the weight for the neutron resonance is coming from, in complete analogy to the earlier mentioned argument of Anderson in regards to where the quasiparticle weight is coming from.
This discussion again emphasizes that the current debate concerning kinetic energy driven superconductivity versus potential energy driven superconductivity must be kept in proper context, as the very definition of the kinetic and potential pieces is dependent upon what effective low energy Hamiltonian one employs, and what energy range one considers.
## V Doping Dependence
The condensation energy as estimated from specific heat is known to decrease strongly as the doping is reduced. This is despite the increase of the spectral gap. There are two reasons for this suggested by the above line of reasoning. First, the normal state itself at $`T_c`$ already exhibits a large spectral gap, the so-called pseudogap, which acts to reduce the difference in Eq. 1. Second, the weight of the quasiparticle peak strongly decreases as the doping is reduced. This reduces the quasiparticle contribution to both the first moment and to $`n(\mathbf{k})`$. We caution that the normal state extrapolation down to T=0 will be more difficult to estimate for underdoped experimental data because of the influence of the pseudogap, which is known to fill in as a function of temperature. Still, the available underdoped ARPES and tunneling data are certainly in support of a smaller condensation energy than overdoped data due to the pseudogap, which is in agreement with conclusions based on specific heat data. The new contribution to these arguments is the strong reduction of the weight of the quasiparticle peak in the underdoped case which makes the condensation energy smaller still. In fact, based on our arguments, the strong reduction of the superfluid density upon underdoping is almost certainly connected with the strong reduction in the quasiparticle weight.
Finally, Anderson has speculated that the superconducting transition temperature is potential energy driven on the overdoped side, kinetic energy driven on the underdoped side. This is a distinct possibility, since $`\mathrm{\Gamma }`$ is known from ARPES data to be strongly reduced as the doping increases on the overdoped side, and as Fig. 4 demonstrates, one might expect (if the mode model is a reflection of reality) a crossover from kinetic energy driven behavior to potential energy driven behavior as $`\mathrm{\Gamma }`$ is reduced. In this context, we note the result of Basov et al. that the lowering of the c-axis kinetic energy appears to be confined to the underdoped side of the phase diagram. Moreover, if one attributes $`T^{*}`$ on the underdoped side to the onset of pairing correlations, then one anticipates a potential energy gain due to pairing to occur at this finite temperature crossover. At $`T_c`$, phase coherence in the pair field is established, and the resulting quasiparticle formation and related spectral changes could lead to a kinetic energy driven transition of the sort discussed above. We emphasize "could", since in the context of Eq. 1, there is no unambiguous evidence yet from real ARPES data that such is the case.
## VI Concluding Remarks
We conclude this paper by noting that the above arguments based on condensation energy considerations highlight one of the key questions of the high $`T_c`$ problem: why do quasiparticle peaks only appear below $`T_c`$? This is especially relevant in the underdoped case, since the spectral gap turns on at a considerably higher temperature than $`T_c`$, but the quasiparticle peaks again form only at $`T_c`$. This implies that there is a deep connection between the onset of phase coherence in the pair field and the onset of coherence in the single electron degrees of freedom. We suggest that the understanding of this connection will be central to solving the high $`T_c`$ problem. The result of the current paper is that Eq. 1 brings this issue into much sharper focus. In particular, as a cautionary note, the incoherent part of the spectral function is likely to be as important as the quasiparticle component in determining the condensation energy (Fig. 3). That is, it is the overall shape of the spectral function (the peak-dip-hump behavior of Fig. 3a), rather than just the quasiparticle part, which is ultimately responsible for the total condensation energy. We believe that experimental data analyzed in the context of Eq. 1 will play an important role in providing a solution to the high $`T_c`$ problem.
###### Acknowledgements.
We thank Hong Ding, Helen Fretwell, Adam Kaminski, and Joel Mesot for discussions concerning the ARPES data, and Laura Greene, Christophe Renner, and John Zasadzinski for providing their tunneling data. This work was supported by the U. S. Dept. of Energy, Basic Energy Sciences, under contract W-31-109-ENG-38, the National Science Foundation DMR 9624048, and DMR 91-20000 through the Science and Technology Center for Superconductivity. MR is supported in part by the Indian DST through the Swarnajayanti scheme.
## A The Full Hamiltonian and the Virial Theorem
In this Appendix, we make further comments on some issues which were briefly discussed at the end of Section II, relating to the use of the full Hamiltonian versus an effective single-band Hamiltonian.
We note that as written, Eq. 1 does not apply to the full Hamiltonian of the solid which includes all the electronic and ionic degrees of freedom together with their Coulombic interactions as discussed in Ref. . In principle an expression similar to Eq. 1 could be written if the quantities in Eq. 1 were replaced by matrices in reciprocal lattice space . For our purposes, where an energy difference is being looked at, a unitary transformation to band index space would be desirable. The resulting off-diagonal terms would then represent interband transitions. These could be of potential importance, even for the energy difference. For example, the violation of the c-axis optical conductivity sum rule implies a change in interband terms so that the total optical sum rule is satisfied.
The usefulness of the full Hamiltonian is that one can use the virial theorem $`2K-nV-3P\mathrm{\Omega }=0`$, exploiting the fact that the kinetic energy $`K`$ is a homogeneous function of order 2 in momentum, and the potential energy $`V`$ is a homogeneous function of order n in position. Here $`P`$ is the pressure and $`\mathrm{\Omega }`$ denotes the volume. For Coulomb forces $`n=-1`$, and ignoring the pressure terms (which are negligible at ambient pressure), this reduces to $`2K+V=0`$.
If we assume that the form of Eq. 1 applies to the full Hamiltonian (which could be possible if all interband terms dropped out of the energy difference, as well as all electron-ion and ion-ion terms) then by using the virial theorem, the right hand side of Eq. 2 can be shown to reduce to 2/3 the first moment of the density of states at $`T=0`$. In addition, the change in the kinetic energy would be the negative of the condensation energy, with the potential energy twice the condensation energy.
This reduced form of Eq. 2, though, must be treated with extreme caution, and is likely not useful to the problem at hand. The reason is that the kinetic energy and potential energy terms of the full Hamiltonian are not the same as the kinetic and potential energy terms of the effective single-band Hamiltonian. It is only for the former that the virial theorem manipulations would be allowed. As an example, BCS theory obeys Eq. 2, but not the reduced form.
## B Comments on ARPES and Tunneling
The purpose of Section III was to demonstrate how Eq. 1 works out in practice for a model where exact calculations could be done. This is important when considering real experimental data. We have spent considerable effort analyzing Eqs. 1 and 2 using experimental data from ARPES and tunneling as input, and plan to report on these endeavors in a future publication. But given what we have learned from the mode model, some of the problems associated with an analysis based on experimental data can be appreciated. First, the condensation energy is obtained by subtracting two large numbers. Therefore, normalization of the data becomes a central concern. Problems in this regard when considering $`n(\mathbf{k})`$, which is the zeroth moment of the ARPES data, were discussed in a previous experimental paper. For the first moment, these problems are further amplified due to the $`\omega `$ weighting in the integrand. This can be appreciated from Fig. 3, where the bulk of the contribution in the mode model comes from the mismatch in the high energy tails of the normal state and superconducting state spectral functions. When analyzing real data, we have found that the tail contribution, either from ARPES or from tunneling, is very sensitive to how the data are normalized. Different choices of normalization can even lead to changes in sign of the first moment.
A further concern involves the $`\mathbf{k}`$ sum in Eq. 1. Both ARPES and tunneling have (their own distinct) $`\mathbf{k}`$-dependent matrix elements, which lead to weighting factors not present in Eq. 1. For ARPES, these effects can in principle be factored out by either theoretical estimates of the matrix elements, or by comparing data at different photon energies to obtain information on them. For tunneling, information on matrix elements can be obtained by comparing different types of tunneling (STM, tunnel junction, point contact), or by employing directional tunneling methods.
Another issue in connection with experimental data is an appropriate extrapolation of the normal state to zero temperature. Information on this can be obtained by analyzing the temperature dependence of the normal state data, remembering that the Fermi function will cause a temperature dependence of the data which should be factored out before attempting the $`T=0`$ extrapolation. We finally note that the temperature dependence issue is strongly coupled to the normalization problem mentioned above. In ARPES, the absolute intensity can change due to temperature dependent changes in adsorbed gases, surface doping level, and sample location. In tunneling, the absolute conductance can change due to temperature dependent changes in junction characteristics. In both cases, changes of the background emission with temperature are another potential problem.
Despite these concerns, we believe that with careful experimentation, many of these difficulties can be overcome, and even if an exact determination of Eq. 1 is not possible, insights into the origin of the condensation energy will certainly be forthcoming from the data. This is particularly true for ARPES, which has the advantage of being $`\mathbf{k}`$-resolved and thus giving one information on the relative contribution of different $`\mathbf{k}`$ vectors to the condensation energy.
# On bottom mixing with exotic quarks
## Abstract
In this paper we present a calculation of the effects of bottom mixing with new exotic quarks on the forward-backward and left-right asymmetries, the bottom branching ratio and the QCD coupling constant. A global fit with the recent data on these quantities is done and stringent bounds are obtained. We discuss the effects of different isospin signatures for the new possible exotic quarks. The consequences for superstring-inspired $`E_6`$ models are discussed. Constraints on the bottom mixing with the isosinglet quarks of the fundamental 27-plet are presented.
Some extensions of the standard model predict the existence of new quarks and leptons. This is the case of the superstring-inspired $`E_6`$ models. In the fundamental $`27`$ representation we have a new $`Q=-1/3`$ isosinglet quark which can be mixed with the standard bottom quark. In $`SO(n)`$ we can also have mirror fermions. These models have gained a renewed interest with the recent possibility that neutrinos have non-zero mass. From general arguments, the present neutrino masses can be a hint to the physics at the grand unification scale at $`10^{15}`$ GeV. A natural scenario for the neutrino mass spectrum is the possibility of new isosinglet neutral leptons as indicated in the 27-plet. Other scenarios could also be possible and it is a fundamental problem in present day elementary particle physics to find evidence for the other consequences of any model which could generalize the standard model. However, these theoretical expectations have not received, so far, any direct experimental support. The validity of the standard model has been tested at the level of quantum corrections and this puts strong limits on new physics. New particles, if they exist, must be at a scale well above 100 GeV. It is well known that deviations from the standard model couplings must necessarily be very small.
In this paper we suggest that an important step in the search for an extended model, such as $`E_6`$, is the possibility of new mixing in the quark sector. Of particular interest are the bounds involving the bottom quark. Since the third family has high mass states, there is a theoretical prejudice that deviations from the standard model will be more important here. In recent years, this seemed to be the case with the bottom asymmetries and hadronic branching ratio $`R_b`$. The more recent data have reduced these deviations to an acceptable level of less than $`2\sigma `$. In this paper we present a calculation of the effects of bottom mixing with new exotic quarks in the forward-backward and left-right asymmetries, the bottom hadronic branching ratio and the QCD coupling constant. A global fit with the recent data on these quantities is done and stringent bounds are obtained.
We consider three models, which differ by the $`SU(2)\otimes U(1)`$ assignment to new quarks and keep the same standard model attributes for the bottom:
1. \- Vector singlet model (VSM). This is the case of the fundamental $`27`$ of $`E_6`$, with a new isosinglet quark, usually called "$`h`$".
2. \- Vector doublet model (VDM). In this model, two new doublets are introduced, with left and right helicities.
3. \- Fermion-mirror-fermion model (FMF). In this model we have a new doublet and singlet with opposite left and right assignments relative to the standard model.
In all models we allow different left and right mixing angles. It is then straightforward to calculate, at tree level, the change in the $`Zbb`$ couplings, given in table I, where $`s_i^2=sin^2\theta _i`$.
We now discuss the general hypotheses behind our results. The first point to clarify is that we have taken into account in our calculation only the effects of mixing. In the class of models considered above one has many other effects, such as new gauge bosons and scalars. A complete treatment of all the contributions is highly desirable, but we will then face the problem of a large number of unknown parameters. It is known that there are no significant contributions of these new particles to radiative corrections, and so we have decided to study only the mixing angle effects. This is equivalent to considering the weak isospin difference for each model. We are also supposing that mixing effects are small, as shown by several authors. It is well known at present that radiative quantum corrections in the standard model are necessary in order to fit the experimental high precision data. With this in mind, we consider that the changes in the physical observables due to new bottom mixing will make small contributions to the full standard model calculations, including first order corrections. So, we present the results of our calculation as powers of a small mixing angle and keep only the first term. They are displayed in equations 1 and 2. In equations 1, "$`A`$" means the forward-backward and the left-right asymmetries. We call attention to the difference of signs in each expression. This is a direct consequence of the isospin content of each extended model. Our calculation was performed in the on-shell scheme, with $`sin^2\theta _W=0.2230`$ and $`m_t=174.3`$ GeV. In equations 2, for $`R_b`$, all models tend to reduce the standard model prediction. For the asymmetries, the VDM and VSM models show corrections of opposite sign, whereas in the FMF model the two corrections tend to cancel each other.
$`A_{VSM}`$ $`=`$ $`A_{SM}(1-0.1551s_L^2)`$
$`A_{VDM}`$ $`=`$ $`A_{SM}(1+0.8544s_R^2)`$ (1)
$`A_{FMF}`$ $`=`$ $`A_{SM}(1+0.8544s_R^2-0.1551s_L^2)`$
$`R_{VSM}`$ $`=`$ $`R_{SM}(1-2.2768s_L^2)`$
$`R_{VDM}`$ $`=`$ $`R_{SM}(1-0.4132s_R^2)`$ (2)
$`R_{FMF}`$ $`=`$ $`R_{SM}(1-0.4132s_R^2-2.2768s_L^2)`$
We have also performed a global fit to the present experimental data. The fit is compatible with zero mixing for all models. We have obtained, at the $`95\%`$ confidence level, the following upper bounds for each model:
$`s_L^2<0.046`$ for VSM
$`s_R^2<0.002`$ for VDM (3)
$`s_R^2<0.087,s_L^2<0.046`$ for FMF
The numerical results for the data are shown in table II, for the Particle Data Group average in their 1999 update. They clearly show no evidence for mixing within the models discussed. We have checked the change in $`\alpha _s`$ due to the bottom mixing. There is a correlation between $`\alpha _s`$ and the changes in $`Zbb`$. With the upper bounds above, there is a small contribution, bounded by $`\mathrm{\Delta }\alpha _s<0.005`$.
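The linearized corrections of equations 1 and 2 are trivial to evaluate at the bounds of equation 3. The following sketch (ours, for illustration only) gives the maximal deviations from the standard model predictions allowed by the fit:

```python
# Linearized shifts of Eqs. (1)-(2), evaluated at the 95% C.L. bounds of Eq. (3).
def asym_ratio(model, sL2=0.0, sR2=0.0):      # A_model / A_SM
    return {"VSM": 1 - 0.1551 * sL2,
            "VDM": 1 + 0.8544 * sR2,
            "FMF": 1 + 0.8544 * sR2 - 0.1551 * sL2}[model]

def rb_ratio(model, sL2=0.0, sR2=0.0):        # R_model / R_SM
    return {"VSM": 1 - 2.2768 * sL2,
            "VDM": 1 - 0.4132 * sR2,
            "FMF": 1 - 0.4132 * sR2 - 2.2768 * sL2}[model]

for model, kw in (("VSM", dict(sL2=0.046)), ("VDM", dict(sR2=0.002)),
                  ("FMF", dict(sL2=0.046, sR2=0.087))):
    print(model, asym_ratio(model, **kw), rb_ratio(model, **kw))
```

For example, the VSM bound allows at most a 0.7% reduction of the asymmetries but a roughly 10% reduction of $`R_b`$, which illustrates why $`R_b`$ dominates the fit for left-handed mixing.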
It is well known that $`A^b`$ is related to $`A_{FB}^{0,b}`$ and to $`A_{leptonic}`$. For the SLC measurements, we have $`A_b=0.892\pm 0.016`$ and for the LEP results alone we have $`A_b=0.904\pm 0.018`$. The global fit for these cases shows no significant difference from the PDG average.
In conclusion, the search for deviations from the standard model predictions in the bottom parameters can be a window to test the predictions of grand-unified models. As there is an enormous experimental activity on b-physics, it is expected that the uncertainties on the basic b-parameters will be strongly reduced very soon. If the future data are closer to the standard model predictions, our results will imply new, stronger bounds on mixing angles. If, on the contrary, there are significant experimental discrepancies with the standard model, our calculation could be very useful in establishing their theoretical origin. For example, if $`R_b`$ turns out to be above the standard model prediction, then this effect cannot be attributed to mixing with new quarks in any of the models considered here. For the asymmetries, one must look in equation (1) for the correct sign of any possible deviation. In particular, for the fundamental $`27`$ in $`E_6`$, with a new isosinglet exotic quark, one expects that the experimental asymmetries should be smaller than the standard model predictions.
Acknowledgments: This work was partially supported by the following Brazilian agencies: CNPq, FUJB, FAPERJ and FINEP. |
# Chemical composition of 90 F and G disk dwarfs

Based on observations carried out at the Beijing Astronomical Observatory (Xinglong, P.R. China). Tables 3, 4 and 5 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
## 1 Introduction
The chemical abundances of long-lived F and G main sequence stars, combined with kinematical data and ages, provide a powerful way to probe the chemical and dynamical evolution of the Galaxy.
As far as the disk stars are concerned, many general trends have been discovered during the past decades. Most notable results are correlations of metallicity with age, Galactocentric distance, and vertical distance from the Galactic plane based on photometric or low-resolution observations (e.g. Eggen et al. 1962; Twarog 1980). In addition, the abundance patterns for some elements have been derived for small samples of stars: oxygen and $`\alpha `$ elements relative to iron vary systematically from overabundances at $`\text{[Fe/H]}\sim -1.0`$ to a solar ratio at $`\text{[Fe/H]}\sim 0.0`$, while most iron-peak elements follow iron for the whole metallicity range of the disk. These results have provided important constraints on chemical evolution models for the Galactic disk.
With improved observation and analysis techniques, which make it possible to study the Galactic chemical evolution (GCE) in detail, some old conclusions have, however, been challenged and new questions have arisen. Particularly important is the detailed abundance analysis of 189 F and G dwarfs with $`-1.1<\text{[Fe/H]}<0.25`$ by Edvardsson et al. (1993a, hereafter EAGLNT). The main results from this work may be summarized as follows: (1) There are no tight relations between age, metallicity and kinematics of disk stars, but substantial dispersions imposed on weak statistical trends. (2) There exists a real scatter in the run of \[$`\alpha `$/Fe\] vs. \[Fe/H\], possibly due to the mixture of stars with different origins. The scatter seems to increase with decreasing metallicity starting at $`\text{[Fe/H]}\sim -0.4`$. Together with a possible increase in the dispersion of $`W_{\mathrm{LSR}}`$ (the stellar velocity perpendicular to the Galactic plane with respect to the Local Standard of Rest, LSR) at this point, the result suggests a dual model for disk formation. It is, however, unclear if the transition at $`\text{[Fe/H]}\sim -0.4`$ represents the division between the thin disk and the thick disk. (3) A group of metal-poor disk stars with $`R_\mathrm{m}<7`$ kpc is found to have larger \[$`\alpha `$/Fe\] values than stars with $`R_\mathrm{m}>9`$ kpc, indicating a higher star formation rate (SFR) in the inner disk than in the outer disk. Since essentially all the oldest stars in EAGLNT have small $`R_\mathrm{m}`$, it is, however, difficult to know whether the main dependence of \[$`\alpha `$/Fe\] is on $`R_\mathrm{m}`$ or on age. (4) At a given age and $`R_\mathrm{m}`$, the scatter in \[$`\alpha `$/Fe\] is negligible while \[Fe/H\] does show a significant scatter. The former implies that the products of supernovae of different types are thoroughly mixed into the interstellar medium (ISM) before significant star formation occurs. Based on this, the large scatter in \[Fe/H\] may be explained by infall of unprocessed gas with a characteristic mixing time much longer than that of the gas from supernovae of different types. (5) The Galactic scatter may be different for individual $`\alpha `$ elements; \[Mg/Fe\] and \[Ti/Fe\] show a larger scatter at a given metallicity than \[Si/Fe\] and \[Ca/Fe\]. This suggests that individual $`\alpha `$ elements may have different origins. (6) A new stellar group, rich in Na, Mg and Al, was found among the metal-rich disk stars, suggesting additional synthesis sources for these elements.
Given that the study of EAGLNT was based on a limited sample of stars with certain selection effects and that the analysis technique induced uncertainties in the final abundances, some subtle results need further investigation before they can provide reliable constraints on theory. For example, it is somewhat unclear if the difference in \[$`\alpha `$/Fe\] at a given metallicity between the inner disk and the outer disk stars is real and if old disk stars are always located in the inner disk. Moreover, recent work by Tomkin et al. (1997) argued against the existence of NaMgAl stars. In addition, a number of elements, which are highly interesting from a nucleosynthetic point of view, were not included in the work of EAGLNT.
The present work, based on a large differently selected sample of disk stars, aims at exploring and extending the results of EAGLNT with improved analysis techniques. Firstly, we now have more reliable atmospheric parameters. The effective temperature is derived from the Strömgren $`b-y`$ color index using a recent infrared-flux calibration and the surface gravity is based on the Hipparcos parallax. About one hundred iron lines (instead of $`\sim 30`$ in EAGLNT) are used to provide better determinations of metallicity and microturbulence. Secondly, the abundance calculation is anchored at the most reliable theoretical or experimental oscillator strengths presently available in the literature. Thirdly, greater numbers of Fe ii, Si i and Ca i lines in our study should allow better abundance determinations, and new elements (K, Sc, V, Cr and Mn) will give additional information on Galactic evolution. Lastly, the stellar age determination is based on new evolutionary tracks, and the space velocity is derived from more reliable distance and proper motion values.
In the following sections 2 to 6, we describe the observations and methods of analysis in detail and present the derived abundances, ages and kinematics. The results are discussed in Sect. 7 and compared to those of EAGLNT. Two elements, Sc and Mn, not included in EAGLNT and represented by lines showing significant hyperfine structure (HFS) effects, are discussed in a separate paper (Nissen et al. 1999), which includes results for halo stars from Nissen & Schuster (1997).
## 2 Observations
### 2.1 Selection of stars
The stars were selected from the $`uvby\beta `$ photometric catalogues of Olsen (1983, 1993) according to the criteria of $`5800\le T_{\mathrm{eff}}\le 6400\mathrm{K}`$, $`4.0\le \text{log}g\le 4.5`$ and $`-1.0\le \text{[Fe/H]}\le +0.3`$ with approximately equal numbers of stars in every metallicity interval of 0.1 dex. In this selection, the temperature was determined from the $`b-y`$ index with the calibration of Magain (1987), gravity was calculated from the $`c_1`$ index as described in EAGLNT, and metallicity was derived from the $`m_1`$ index using the calibrations of Schuster & Nissen (1989). The later redeterminations of the temperature with the calibration of Alonso et al. (1996) and the gravity from the Hipparcos parallax lead to slight deviations from the selection criteria for some stars.
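Schematically, the selection amounts to the cuts and binning in the sketch below (ours; the array names and the number of stars drawn per 0.1 dex bin are illustrative assumptions, not the actual selection procedure):

```python
import numpy as np

def select_sample(teff, logg, feh, per_bin=8, rng=np.random.default_rng(1)):
    # apply the photometric parameter cuts of the text
    ok = ((5800 <= teff) & (teff <= 6400) & (4.0 <= logg) & (logg <= 4.5)
          & (-1.0 <= feh) & (feh <= 0.3))
    idx = np.flatnonzero(ok)
    picked = []
    for lo in np.arange(-1.0, 0.3, 0.1):      # 13 bins of 0.1 dex in [Fe/H]
        in_bin = idx[(feh[idx] >= lo) & (feh[idx] < lo + 0.1)]
        n = min(per_bin, in_bin.size)         # roughly equal numbers per bin
        picked.extend(rng.choice(in_bin, n, replace=False))
    return np.asarray(picked)
```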
Based on the above selection, 104 F and G stars were observed, but 3 high-rotation ($`V\mathrm{sin}i\ge 25\text{ km\hspace{0.17em}s}^{-1}`$) stars and 9 double-line spectroscopic binaries were excluded from the sample. Another 11 stars have radial velocity dispersions higher than the measurement error of the CORAVEL survey. HD 106516A and HD 97916 are suspected binaries (Carney et al. 1994), and HD 25998 and HD 206301 are possibly variables (e.g. Petit 1990; Morris & Mutel 1988). These 15 stars (marked in the column "Rem" of Table 3) are being carefully used in our study. The remaining stars are considered as single stars, but are checked for differences in iron abundances between Fe i and Fe ii lines using gravities from Hipparcos parallaxes as suggested by Fuhrmann (1998). As described later, 2 additional stars were excluded during the analysis and thus the sample contains 90 stars for the final discussion and conclusions.
### 2.2 Observations and data reduction
The observations were carried out with the Coudé Echelle Spectrograph attached to the 2.16m telescope at Beijing Astronomical Observatory (Xinglong, P.R. China). The detector was a Tek CCD ($`1024\times 1024`$ pixels with $`24\times 24\mu m^2`$ each in size). The red arm of the spectrograph with a 31.6 grooves/mm grating was used in combination with a prism as cross-disperser, providing a good separation between the echelle orders. With a 0.5 mm slit (1.1 arcsec), the resolving power was of the order of 40 000 in the middle focus camera system.
The program stars were observed during three runs: March 21-27, 1997 (56 stars), October 21-23, 1997 (27 stars) and August 5-13, 1998 (21 stars). The exposure time was chosen in order to obtain a signal-to-noise ratio of at least 150 over the entire spectral range. Most bright stars have S/N $`\sim `$ 200 - 400. Figure 1 shows the spectra in the region of the oxygen triplet for two representative stars, HD 142373 and HD 106516A. In addition, the solar flux spectrum as reflected from the Moon was observed with an S/N $`\sim `$ 250 and used as one of the "standard" stars in determining oscillator strengths for some lines (see Sect. 4.2).
The spectra were reduced with standard MIDAS routines for order identification, background subtraction, flat-field correction, order extraction and wavelength calibration. Bias, dark current and scattered light corrections are included in the background subtraction. If an early B-type star could be observed close to the program stars, it was used instead of the flat-field in order to remove interference fringes more efficiently. The spectrum was then normalized by a continuum function determined by fitting a spline curve to a set of pre-selected continuum windows estimated from the solar atlas. Finally, correction for radial velocity shift, measured from at least 20 lines, was applied before the measurement of equivalent widths.
### 2.3 Equivalent widths and comparison with EAGLNT
The equivalent widths were measured by three methods: direct integration, Gaussian and Voigt function fitting, depending on which method gave the best fit of the line profile. Usually, weak lines are well fitted by a Gaussian, whereas stronger lines in which the damping wings contribute significantly to their equivalent widths need the Voigt function to reproduce their profiles. If unblended lines are well separated from nearby lines, direct integration is the best method. In the case of some intermediate-strong lines, weighted averages of Gaussian and Voigt fitting were adopted.
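For a Gaussian fit the equivalent width follows analytically from the fitted central depth and width, while a Voigt profile is integrated numerically. The small sketch below is ours (wavelength offsets and widths in the same units, e.g. mÅ, so the returned areas carry those units):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import voigt_profile

def ew_gauss(depth, sigma):
    # area under a Gaussian absorption profile of central depth `depth`
    return depth * sigma * np.sqrt(2.0 * np.pi)

def ew_voigt(amplitude, sigma, gamma, half_range=5.0):
    # numerical area of a Voigt profile (Gaussian sigma, Lorentzian gamma)
    f = lambda dl: amplitude * voigt_profile(dl, sigma, gamma)
    area, _ = quad(f, -half_range, half_range)
    return area
```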
The accuracy of the equivalent widths is estimated by comparing them to the independent measurements by EAGLNT for 25 stars in common. Five of them were observed at the ESO Observatory ($`R\approx \mathrm{60\hspace{0.17em}000}`$, S/N ≈ 200) and 23 were observed at the McDonald Observatory ($`R\approx \mathrm{30\hspace{0.17em}000}`$, S/N ≈ 200–500).
The systematic difference between the two sets of measurements is small, and linear least squares fits give:

$`EW_{\mathrm{Xl}}`$ $`=`$ $`1.025\left(\pm 0.012\right)EW_{\mathrm{ESO}}+0.89\left(\pm 0.56\right)(\mathrm{m\AA })`$

$`EW_{\mathrm{Xl}}`$ $`=`$ $`1.083\left(\pm 0.006\right)EW_{\mathrm{McD}}-0.94\left(\pm 0.28\right)(\mathrm{m\AA })`$
The standard deviations around the two relations are 3.8 mÅ (for 129 lines in common with ESO) and 4.3 mÅ (for 575 lines in common with McDonald). Given that the error of the equivalent widths in EAGLNT is around 2 mÅ, we estimate an rms error of about 3 mÅ in our equivalent widths. As shown by the comparison of our equivalent widths with the ESO data in Fig. 2, the equivalent widths below 50 mÅ are consistent with the one-to-one relation. The deviations for the stronger lines may be due to the fact that all lines in EAGLNT were measured by Gaussian fitting, which underestimates the equivalent widths of intermediate-strong lines because their damping wings are neglected. We conclude that the Xinglong data may be more reliable than the EAGLNT data for lines in the range $`50<EW<100`$ mÅ.
## 3 Stellar atmospheric parameters
### 3.1 Effective temperature and metallicity
The effective temperature was determined from the Strömgren indices ($`b-y`$ and $`c_1`$) and \[Fe/H\] using the calibration of Alonso et al. (Alonso96 (1996)). If the color excess $`E(b-y)`$, as calculated from the $`H_\beta `$ index calibration by Olsen (Olsen88 (1988)), is larger than 0.01, then a reddening correction was applied.
The metallicity, required as input for the temperature and abundance calculations, was first derived from the Strömgren $`m_1`$ index using the calibration of Schuster & Nissen (Schuster89 (1989)). The spectroscopic metallicity obtained later was then used to iterate the whole procedure.
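Because the temperature calibration needs \[Fe/H\] while the spectroscopic \[Fe/H\] needs a temperature, the procedure closes on itself. A minimal sketch of the loop is given below; the three callables stand in for the Alonso et al. (1996) calibration, the Schuster & Nissen (1989) calibration and the full spectroscopic analysis, none of which is reproduced here.

```python
def iterate_parameters(by, c1, m1, calib_teff, calib_feh_phot, feh_spec, tol=1.0):
    """Iterate T_eff and [Fe/H] to mutual consistency.

    calib_teff(by, c1, feh)    -- photometric T_eff calibration
    calib_feh_phot(by, m1, c1) -- photometric [Fe/H] calibration
    feh_spec(teff)             -- spectroscopic [Fe/H] at the trial T_eff
    (all three are user-supplied; the published polynomials are assumed)
    """
    feh = calib_feh_phot(by, m1, c1)   # photometric starting value
    teff_old = None
    while True:
        teff = calib_teff(by, c1, feh)
        if teff_old is not None and abs(teff - teff_old) < tol:
            return teff, feh           # converged to ~1 K
        teff_old = teff
        feh = feh_spec(teff)           # redo the abundance analysis
```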
The errors of the photometric data are $`\sigma (b-y)=0.004`$ and $`\sigma (c_1)=0.008`$ according to Olsen (Olsen93 (1993)). Adopting $`\sigma (\text{[Fe/H]})`$ = 0.1 from the spectroscopic analysis, the statistical error of $`T_{\mathrm{eff}}`$ is estimated to be about $`\pm 50`$ K. Considering a possible error of $`\pm `$50 K in the calibration, the error in temperature could reach $`\pm 70`$ K. We do not adopt the excitation temperature, determined by requiring a consistent abundance from Fe i lines with different excitation potentials, because errors induced by incorrect damping parameters (Ryan Ryan98 (1998)) or non-LTE effects can depend strongly on excitation potential, leading to an error in effective temperature as high as 100 K.
### 3.2 Gravity
In most works, gravities are determined from the abundance analysis by requiring that Fe i and Fe ii lines give the same iron abundance. But it is well known that the derivation of iron abundance from Fe i and Fe ii lines may be affected by many factors such as unreliable oscillator strengths, possible non-LTE effects and uncertainties in the temperature structure of the model atmospheres. From the Hipparcos parallaxes, we can determine more reliable gravities using the relations:
$`\mathrm{log}{\displaystyle \frac{g}{g_{\odot }}}`$ $`=`$ $`\mathrm{log}{\displaystyle \frac{\mathcal{M}}{\mathcal{M}_{\odot }}}+4\mathrm{log}{\displaystyle \frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot }}}+0.4\left(M_{\mathrm{bol}}-M_{\mathrm{bol},\odot }\right)`$ (1)
and
$`M_{\mathrm{bol}}`$ $`=`$ $`V+BC+5\mathrm{log}\pi +5,`$ (2)
where $`\mathcal{M}`$ is the stellar mass, $`M_{\mathrm{bol}}`$ the absolute bolometric magnitude, $`V`$ the visual magnitude, $`BC`$ the bolometric correction, and $`\pi `$ the parallax (in arcsec).
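In code, Eqs. (1) and (2) reduce to a few lines. The solar constants below ($`T_{\mathrm{eff},\odot }=5777`$ K, log $`g_{\odot }=4.44`$) are assumed values added by us; as noted for $`M_{\mathrm{bol},\odot }`$ in the text, only the differential terms actually matter.

```python
import math

TEFF_SUN = 5777.0   # K; assumed solar effective temperature
LOGG_SUN = 4.44     # cgs; assumed solar surface gravity
MBOL_SUN = 4.75     # IAU (1999) zero point; cancels in the difference

def log_g(mass, teff, v_mag, bc, parallax):
    """log g (cgs); mass in solar masses, parallax in arcsec."""
    m_bol = v_mag + bc + 5.0 * math.log10(parallax) + 5.0   # Eq. (2)
    return (LOGG_SUN
            + math.log10(mass)                    # log(M/M_sun)
            + 4.0 * math.log10(teff / TEFF_SUN)
            + 0.4 * (m_bol - MBOL_SUN))           # Eq. (1)
```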
The parallaxes are taken from the Hipparcos Satellite observations (ESA ESA97 (1997)). For most program stars, the relative error of the parallax is of the order of 5%; only two stars in our sample have errors larger than 10%. From these accurate parallaxes, stellar distances and absolute magnitudes were obtained. Note, however, that our sample includes some binaries, for which the absolute magnitude from the Hipparcos parallax could be significantly in error: an offset of $`-0.75`$ mag is introduced for a binary with two equal components through the visual magnitude in Eq. (2). Thus, we also calculated absolute magnitudes from the photometric indices $`\beta `$ and $`c_1`$ using the relations found by EAGLNT. Although the absolute magnitude of a binary derived by the photometric method is also not very accurate, due to the different spectral types and thus different flux distributions of the components, it may be better than the value from the parallax method. Hence, for the few stars with large differences between the photometric and parallax-based absolute magnitudes, we adopt the photometric values.
The bolometric correction was interpolated from the new BC grids of Alonso et al. (Alonso95 (1995)) determined from line-blanketed flux distributions of ATLAS9 models. It is noted that the zero-point of the bolometric correction adopted by Alonso et al., $`BC_{\odot }=-0.12`$, is not consistent with the bolometric magnitude of the Sun, $`M_{\mathrm{bol},\odot }=4.75`$, recently recommended by the IAU (IAU99 (1999)). But the gravity determination from Eq. (1) depends only on the $`M_{\mathrm{bol}}`$ difference between the stars and the Sun, and thus the zero-point is irrelevant.
The derivation of the mass is described in Sect. 6. The estimated error of 0.05 $`M_{\odot }`$ in mass corresponds to an error of 0.03 dex in gravity, while errors of $`0.05`$ mag in BC and 70 K in temperature each lead to an uncertainty of 0.02 dex in log$`g`$. The largest uncertainty of the gravity comes from the parallax: a typical relative error of 5% corresponds to an error of 0.04 dex in log$`g`$. In total, the error of log$`g`$ is less than 0.10 dex.
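Assuming these error sources are independent, they combine in quadrature to $`\sigma (\text{log}g)\approx \sqrt{0.03^2+0.02^2+0.02^2+0.04^2}\approx 0.06`$ dex, comfortably within the quoted limit of 0.10 dex and leaving room for systematic effects in the calibrations.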
The surface gravity was also estimated from the Balmer discontinuity index $`c_1`$ as described in EAGLNT. We find a small systematic shift (about 0.1 dex) between the two sets of log$`g`$, with lower gravities from the parallaxes. There is no corresponding shift between $`M_V`$(par) and $`M_V`$(phot); the mean deviation is only 0.03 mag, which indicates that the systematic deviation in log$`g`$ comes from the gravity calibration used in EAGLNT.
### 3.3 Microturbulence
The microturbulence, $`\xi _t`$, was determined from the abundance analysis by requiring a zero slope of \[Fe/H\] vs. EW. The large number of Fe i lines in this study enables us to choose a set of lines with accurate oscillator strengths, similar excitation potentials ($`\chi _{low}\ge 4.0`$ eV) and a large range of equivalent widths (10–100 mÅ) for the determination. With this selection, we hope to reduce the errors from oscillator strengths and potential non-LTE effects for Fe i lines with low excitation potentials. The error of the microturbulence is about 0.3 km s<sup>-1</sup>.
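Operationally this is a one-dimensional root-finding problem, sketched below. Here `abundance_fn` is a hypothetical wrapper that re-derives the per-line \[Fe/H\] for a trial $`\xi _t`$ with the model atmosphere otherwise fixed.

```python
import numpy as np
from scipy.optimize import brentq

def microturbulence(ew, abundance_fn, xi_lo=0.5, xi_hi=3.0):
    """xi_t (km/s) giving zero slope of [Fe/H] vs. EW.

    ew -- equivalent widths (mA) of the selected Fe I lines;
    abundance_fn(xi) -- per-line [Fe/H] recomputed with trial xi.
    Assumes the slope changes sign between xi_lo and xi_hi.
    """
    def slope(xi):
        return np.polyfit(ew, abundance_fn(xi), 1)[0]   # d[Fe/H]/dEW
    return brentq(slope, xi_lo, xi_hi)
```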
The relation of $`\xi _t`$ as a function of $`T_{\mathrm{eff}}`$ and log$`g`$ derived by EAGLNT corresponds to about 0.3 km s<sup>-1</sup> lower values than those derived from our spectroscopic analysis. No obvious dependence of the difference on temperature, gravity and metallicity can be found. In particular, the value for the Sun in our work is 1.44 km s<sup>-1</sup>, also 0.3 km s<sup>-1</sup> higher than the value of 1.15 found from the EAGLNT relation. The difference in $`\xi _t`$ between EAGLNT and the present work is probably related to the difference in equivalent widths of intermediate-strong lines discussed in Sect. 2.3. EAGLNT measured these lines by fitting a Gaussian function and hence underestimated their equivalent widths, leading to a lower microturbulence.
Finally, given that the atmospheric parameters were not determined independently, the whole procedure of deriving $`T_{\mathrm{eff}}`$, log$`g`$, \[Fe/H\] and $`\xi _t`$ was iterated to consistency. The atmospheric parameters of 90 stars are presented in Table 3. The uncertainties of the parameters are: $`\sigma (T_{\mathrm{eff}})=70`$ K, $`\sigma (\text{log}g)=0.1`$, $`\sigma (\text{[Fe/H]})=0.1`$, and $`\sigma (\xi _t)=0.3`$ km s<sup>-1</sup>.
## 4 Atomic line data
### 4.1 Spectral lines
All unblended lines with symmetric profiles having equivalent widths larger than 20 mÅ in the solar atlas (Moore et al. Moore66 (1966)) were cautiously selected. The equivalent width limit ensures that the lines do not disappear in the most metal-poor disk stars at $`\text{[Fe/H]}=-1.0`$. Given that very weak lines would increase the random errors of the abundance determination and that too strong lines are very sensitive to the damping constants, only weak and intermediate-strong lines with $`3<EW<100`$ mÅ in the stellar spectra were adopted in our abundance analysis, except for potassium, for which only one line ($`K\text{i}`$ $`\lambda `$7699), with an equivalent width range of 50–190 mÅ, is available.
### 4.2 Oscillator strengths
Due to the large number of measurable lines in the spectra, Fe i lines were used for the microturbulence determination and the temperature consistency check. Hence, a careful selection of oscillator strengths for these lines is of particular importance. Of the many experimental or theoretical calculations of oscillator strengths for Fe i lines, only three sources with precise $`gf`$ values were chosen: Blackwell et al. (1982b ; 1982c ), O'Brian et al. (OBrian91 (1991)) and Bard & Kock (Bard91 (1991)) or Bard et al. (Bard94 (1994)). The agreement between these sources is very satisfactory, and thus mean log$`gf`$ values were adopted if oscillator strengths are available in more than one of the three sources. A few oscillator strengths with large differences between the sources were excluded.
References for the other elements are: O i (Lambert Lambert78 (1978)), Na i (Lambert & Warner Lambert68 (1968)), Mg i (Chang Chang90 (1990)), Al i (Lambert & Warner Lambert68 (1968)), Si i (Garz Garz73 (1973)), K i (Lambert & Warner Lambert68 (1968)), Ca i (Smith & Raggett Smith81 (1981)), Ti i (Blackwell et al. 1982a ; 1986a ), V i (Whaling et al. Whaling85 (1985)), Cr i (Blackwell et al. 1986b ), Fe ii (Biémont et al. Biemond91 (1991); Hannaford et al. Hannaford92 (1992)), Ni i (Kostyk Kostyk82 (1982); Wickliffe & Lawler Wickliffe97 (1997)) and Ba ii (Wiese & Martin Wiese80 (1980)).
These experimental or theoretical $`gf`$ values were inspected to see if they give reliable abundances by evaluating the deviation between the abundance derived from a given line and the mean abundance from all lines of the same species. A significant mean deviation in the same direction for all stars (excluding the suspected binaries) was used to correct the $`gf`$ value. Lines with large deviations in different directions for different stars were discarded from the experimental or theoretical log$`gf`$ list. The two sets of values are presented in columns "abs." and "cor." in Table 4. Based on these $`gf`$ values, we derived abundances for 10 "standard" stars: HD 60319, HD 49732, HD 215257, HD 58551, HD 101676, HD 76349, HD 58855, HD 22484, the Sun and HD 34411A (in order of metallicity). Oscillator strengths for lines with unknown $`gf`$ values were then determined from an inverse abundance analysis of these 10 stars, which are distributed over the metallicity range $`-0.9\le \text{[Fe/H]}\le 0.0`$ dex and were observed at high S/N ≈ 250–400. Generally, the $`gf`$ values for a given line derived from different "standard" stars agree well, and a mean value (given in the column "dif" of Table 4) was thus adopted.
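The inverse analysis can be summarized in a few lines. In this sketch, `loggf_from_ew` is a hypothetical inversion of the model-atmosphere calculation (the EQWIDTH computation of Sect. 5.1 run backwards), not an existing routine.

```python
import numpy as np

def astrophysical_loggf(ew_by_star, mean_abund, loggf_from_ew):
    """Mean astrophysical log gf of one line from the 'standard' stars.

    ew_by_star -- {star: measured EW of this line};
    mean_abund -- {star: mean abundance from lines with known gf};
    loggf_from_ew(star, ew, abund) -- the log gf that makes the line
        reproduce the abundance 'abund' in that star (hypothetical).
    """
    vals = [loggf_from_ew(s, w, mean_abund[s]) for s, w in ew_by_star.items()]
    return np.mean(vals), np.std(vals)   # std serves as a consistency check
```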
### 4.3 Empirical enhancement factor
It has long been recognized that the line broadening derived from Unsöld's (Unsold55 (1955)) approximation to the van der Waals interaction is too weak, and that an enhancement factor, $`E_\gamma `$, should be applied to the damping parameter, $`\gamma `$. Usually, enhancement factors of Fe i lines with an excitation potential of the lower energy level ($`\chi _{low}`$) less than 2.6 eV are taken from the empirical calibration by Simmons & Blackwell (Simmons82 (1982)). For Fe i lines with $`\chi _{low}>2.6`$ eV, $`E_\gamma =1.4`$ is generally used in abundance analyses. Recently, Anstee & O'Mara (Anstee95 (1995)) computed the broadening cross sections for s-p and p-s transitions and found that $`E_\gamma `$ should be 2.5–3.2 for lines with $`\chi _{low}>3.0`$ eV, whereas lines with $`\chi _{low}<2.6`$ eV have broadening cross sections more consistent with Simmons & Blackwell's (Simmons82 (1982)) work.
Following EAGLNT and other works, we adopted Simmons & Blackwell's (Simmons82 (1982)) $`E_\gamma `$ for Fe i lines with $`\chi _{low}<2.6`$ eV and $`E_\gamma =1.4`$ for the remaining Fe i lines, while a value of 2.5 was applied for Fe ii lines as suggested by Holweger et al. (Holweger90 (1990)). Enhancement factors for Na i, Si i, Ca i and Ba ii were taken from EAGLNT (see references therein). Finally, a value of 1.5 was adopted for the K i, Ti i and V i lines considering their low excitation potentials, and a factor of 2.5 was applied to the remaining elements following Mäckle et al. (Mackle75 (1975)). The effects of changing these values by 50% on the derived abundances are discussed in Sect. 5.2.
The atomic line data are given in Table 4.
## 5 Abundances and their uncertainties
### 5.1 Model atmospheres and abundance calculations
The abundance analysis is based on a grid of flux constant, homogeneous, LTE model atmospheres, kindly supplied by Bengt Edvardsson (Uppsala). The models were computed with the MARCS code using the updated continuous opacities by Asplund et al. (Asplund97 (1997)) including UV line blanketing by millions of absorption lines and many molecular lines.
The abundance was calculated with the program EQWIDTH (also made available by the stellar atmosphere group in Uppsala) by requiring that the equivalent width calculated from the model match the observed value. The calculation includes natural broadening, thermal broadening, van der Waals damping, and microturbulent Doppler broadening. The mean abundance was derived from all available lines, giving equal weight to each line. Finally, solar abundances, calculated from the Moon spectrum, were used to derive stellar abundances relative to the solar values (Table 5). Such differential abundances are generally more reliable than absolute abundances because many systematic errors nearly cancel out.
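For each line, the abundance is thus the root of a one-dimensional matching problem. A sketch follows, with `model_ew` a hypothetical wrapper around an EQWIDTH-like calculation for the given line and model atmosphere.

```python
import numpy as np
from scipy.optimize import brentq

def line_abundance(ew_obs, model_ew, lo=-2.0, hi=1.0):
    # Root of model_ew(A) - ew_obs over a bracketing abundance interval
    return brentq(lambda a: model_ew(a) - ew_obs, lo, hi)

def mean_abundance(lines):
    # Equal weight to each line, as in the text; 'lines' yields
    # (ew_obs, model_ew) pairs for all measured lines of one species
    return np.mean([line_abundance(ew, f) for ew, f in lines])
```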
### 5.2 Uncertainties of abundances
There are two kinds of uncertainties in the abundance determination: one acts on individual lines and includes the random errors of the equivalent widths, oscillator strengths and damping constants; the other acts on the whole set of lines, with the main contribution coming from the uncertainties of the atmospheric parameters.
#### 5.2.1 Errors from equivalent widths and atomic data
The comparison of equivalent widths in Sect. 2.3 indicates that the typical uncertainty of the equivalent width is about 3 mÅ, which leads to an error of about 0.06 dex in the elemental ratio X/H derived from a single line with an equivalent width around 50 mÅ. For an element represented by N lines, the error decreases by a factor $`\sqrt{N}`$. In this way, the errors from equivalent widths were estimated for elements with only one or a few lines. Alternatively, the scatter of the abundances deduced from a large number of lines with reliable oscillator strengths gives another estimate of the uncertainty from equivalent widths. With over 100 Fe i lines for most stars, the scatter varies somewhat from star to star, with a mean value of 0.07 dex, corresponding to an error of 0.007 dex in \[Fe/H\]. Other elements with significant numbers of lines, such as Ca, Ni and Si, have even smaller mean line-to-line scatters.
The uncertainties in the atomic data are more difficult to evaluate, but errors in the differential abundances caused by errors in the $`gf`$ values are nearly excluded, thanks to the correction of some experimental or theoretical $`gf`$ values and the adoption of mean $`gf`$ values from the 10 "standard" stars. Concerning the uncertainties in the damping constants, we have estimated their effects by increasing the adopted enhancement factors by 50%. The microturbulence was adjusted accordingly, because of the coupling between the two parameters. The net effect on the differential abundances with respect to the Sun is rather small, as seen from Table 1.
#### 5.2.2 Consistency check of atmospheric parameters
As a check of the photometric temperature, the iron abundance derived from individual Fe i lines was studied as a function of the excitation potential. To reduce the influence of the microturbulence, only lines with equivalent widths less than 70 mÅ were included. A linear least squares fit of the abundance derived from each line vs. the lower excitation potential determines the slope in the relation $`\text{[Fe/H]}=a+b\chi _{low}`$. The mean slope coefficient for all stars is $`b=0.004\pm 0.013`$. There is only a very small (if any) dependence of $`b`$ on effective temperature, surface gravity or metallicity. A suspected binary, HD 15814, has a strongly deviating slope coefficient ($`b=0.056`$) and is excluded from further analysis.
The agreement between the iron abundances derived from Fe i and Fe ii lines is satisfactory when gravities based on Hipparcos parallaxes are used (see Fig. 3). The deviation is less than 0.1 dex for most stars, with a mean value of $`0.009\pm 0.07`$ dex. From $`\text{[Fe/H]}=0.0`$ to $`\text{[Fe/H]}=-0.5`$, the mean deviation ($`\text{[Fe/H]}_{\mathrm{II}}-\text{[Fe/H]}_\mathrm{I}`$) seems, however, to increase by about 0.1 dex, in rough agreement with predictions from non-LTE computations (see Sect. 5.3).
The deviation between the iron abundances based on Fe i and Fe ii lines provides a way to identify binaries and to estimate the influence of the companion on the primary. The suspected binaries are marked with an additional square around the filled circles in Fig. 3. It shows that there is no significant influence from the companion for these binaries, except in the case of HD 15814, which was already excluded on the basis of its $`b`$-coefficient in the excitation equilibrium of the Fe i lines. Thus, the other possible binaries were included in our analysis. It is, however, surprising that HD 186257 shows a higher iron abundance from Fe ii lines than from Fe i lines, with a deviation as large as 0.28 dex. We discard this star in the final analysis and thus have 90 stars left in our sample.
#### 5.2.3 Errors in resulting abundances
Table 1 shows the effects on the derived abundances of a change by 70 K in effective temperature, 0.1 dex in gravity, 0.1 dex in metallicity, and 0.3 km s<sup>-1</sup> in microturbulence, along with errors from equivalent widths and enhancement factors, for two representative stars.
It is seen that the relative abundances with respect to iron are quite insensitive to variations of the atmospheric parameters. One exception is \[O/Fe\], due to the well-known fact that the oxygen abundance derived from the infrared triplet has a temperature dependence opposite to that of the iron abundance. After rescaling our oxygen abundances to the results from the forbidden line at $`\lambda 6300`$ (see next section), the error is somewhat reduced. Therefore, the error for \[O/Fe\] in Table 1 may be overestimated.
In all, the uncertainties of the atmospheric parameters give errors of less than 0.06 dex in the resulting \[Fe/H\] values and less than 0.04 dex in the relative abundance ratios. For an elemental abundance derived from many lines, this is the dominant error, while for an abundance derived from a few lines, the uncertainty in the equivalent widths may be more significant. Note that the uncertainties of equivalent widths for V and Cr (possibly also Ti) might be underestimated given that their lines are generally weak in this work. In addition, with only one strong line for the K abundance determination, the errors from equivalent widths, microturbulence and atomic line data are relatively large.
Lastly, we have explored the HFS effect on one Al i line at $`\lambda `$6698, one Mg i line at $`\lambda `$5711, and two Ba ii lines at $`\lambda `$6141 and $`\lambda `$6496. The HFS data are taken from three sources: Biehl (Biehl76 (1976)) for Al, Steffen (Steffen85 (1985)) for Mg and François (Francois96 (1996)) for Ba. The results indicate that the HFS effects are very small for all these lines, with values less than 0.01 dex.
### 5.3 Non-LTE effects and inhomogeneous models
The assumption of LTE and the use of homogeneous model atmospheres may introduce systematic errors, especially on the slope of various abundance ratios \[X/Fe\] vs. \[Fe/H\]. These problems were discussed at quite some length by EAGLNT. Here we add some remarks based on recent non-LTE studies and computations of 3D hydrodynamical model atmospheres.
Based on a number of studies, EAGLNT concluded that the maximum non-LTE correction of \[Fe/H\], as derived from Fe i lines, is 0.05 to 0.1 dex for metal-poor F and G disk dwarfs. Recently, Thévenin & Idiart (Thevenin99 (1999)) computed non-LTE corrections of the order of 0.1 to 0.2 dex at $`\text{[Fe/H]}=-1.0`$. Fig. 3 suggests that the maximum correction to \[Fe/H\] derived from Fe i lines is around 0.1 dex, but we emphasize that this empirical check may depend on the adopted $`T_{\mathrm{eff}}`$ calibration as a function of \[Fe/H\].
The oxygen infrared triplet lines are suspected to be affected by non-LTE formation, because they give systematically higher abundances than the forbidden lines. Recent work by Reetz (Reetz98 (1999)) indicates that non-LTE effects are insignificant ($`<0.05`$ dex) for metal-poor and cool stars, but become important for warm and metal-rich stars. For stars with $`\text{[Fe/H]}>-0.5`$ and $`T_{\mathrm{eff}}>6000`$ K in our sample, non-LTE effects could reduce the oxygen abundances by 0.1–0.2 dex. For this reason, we use Eq. (11) of EAGLNT to scale the oxygen abundances derived from the infrared triplet to those derived by Nissen & Edvardsson (Nissen92 (1992)) from the forbidden \[O i\] line at $`\lambda `$6300.
The two weak Na i lines ($`\lambda 6154`$ and $`\lambda 6160`$) used for our Na abundance determination are only marginally affected by departures from LTE (Baumüller et al. Baumuller98 (1998)). The situation for Al may, however, be different. The non-LTE analysis by Baumüller & Gehren (Baumuller97 (1997)) of one of the Al i lines used in the present work ($`\lambda 6698`$) leads to about 0.15 dex higher Al abundances for the metal-poor disk dwarfs than those calculated in LTE. No non-LTE study is available for the other two lines used in the present work. We find, however, that the derived Al abundances depend on $`T_{\mathrm{eff}}`$, with lower \[Al/Fe\] for hotter stars. This may be due to the neglect of non-LTE effects in our work. Hence, we suspect that the trend of \[Al/Fe\] vs. \[Fe/H\] could be seriously affected by non-LTE effects.
The recent non-LTE analyses of neutral magnesium in the solar atmosphere by Zhao et al. (Zhao98 (1998)) and in metal-poor stars by Zhao & Gehren (Zhao99 (1999)) lead to non-LTE corrections of 0.05 dex for the Sun and 0.10 dex for a $`\text{[Fe/H]}=-1.0`$ dwarf, when the abundance of Mg is derived from the $`\lambda 5711`$ Mg i line. Similar corrections are obtained for some of the other lines used in the present work. Hence, we conclude that the derived trend of \[Mg/Fe\] vs. \[Fe/H\] is not significantly affected by non-LTE.
The line-profile analysis of the K i resonance line at $`\lambda `$7699 by Takeda et al. (Takeda96 (1996)) shows that the non-LTE correction is $`-0.4`$ dex for the Sun and $`-0.7`$ dex for Procyon. There are no computations for metal-poor stars, but given the very large corrections for the Sun and Procyon, one may expect that the slope of \[K/Fe\] vs. \[Fe/H\] could be seriously affected by differential non-LTE effects between the Sun and the metal-poor stars.
The non-LTE study of Ba lines by Mashonkina et al. (Mashonkina99 (1999)), which includes two of our three Ba ii lines ($`\lambda `$5853 and $`\lambda `$6496), gives rather small corrections ($`<0.10`$ dex) to the LTE abundances, and the corrections are very similar for solar-metallicity and $`\text{[Fe/H]}\approx -1.0`$ dwarfs. Hence, \[Ba/Fe\] is not affected significantly.
In addition to possible non-LTE effects, the derived abundances may also be affected by the representation of the stellar atmospheres by plane-parallel, homogeneous models. The recent 3D hydrodynamical model atmospheres of metal-poor stars by Asplund et al. (Asplund99 (1999)) have substantially lower temperatures in the upper photosphere than 1D models, due to the dominance of adiabatic cooling over radiative heating. Consequently, the iron abundance derived from Fe i lines in a star like HD 84937 ($`T_{\mathrm{eff}}\approx 6300`$ K, $`\text{log}g\approx 4.0`$ and $`\text{[Fe/H]}\approx -2.3`$) is 0.4 dex lower than the value based on a 1D model. Although the effect will be smaller in a $`\text{[Fe/H]}\approx -1.0`$ star, and the derived abundance ratios are not so sensitive to the temperature structure of the model, we clearly have to worry about this problem.
### 5.4 Abundance comparison of this work with EAGLNT
A comparison of the abundances derived in this work and by EAGLNT for the 25 stars in common provides an independent estimate of the errors of the derived abundances. The results are summarized in Table 2.
The agreement in the iron abundance derived from Fe i lines is satisfactory, with deviations within $`\pm `$0.1 dex for the 25 common stars. These small deviations are mainly explained by different temperatures, given that the abundance differences increase with the temperature deviations between the two works. The rms deviation in the iron abundance derived from Fe ii lines is slightly larger than that from Fe i lines. The use of different gravities partly explains this. But the small line-to-line scatter from 8 Fe ii lines in our work indicates a more reliable abundance than that of EAGLNT, who used only 2 Fe ii lines.
Our oxygen abundances are systematically higher by 0.15 dex than those of EAGLNT for the 5 common stars. Clearly, the temperature deviation is the main reason: our temperatures are systematically lower by 70 K, which increases \[O/Fe\] by 0.10 dex (see Table 1).
The mean abundance differences for Mg, Al, Si, Ca and Ni between the two works are hardly significant. The systematic differences (this work minus EAGLNT) of $`-`$0.08 dex for \[Na/Fe\] and \[Ti/Fe\] and $`+`$0.07 dex for \[Ba/Fe\] are difficult to explain, but we note that when the abundances are based on a few lines only, a systematic offset of the stars relative to the Sun may occur simply because of errors in the solar equivalent widths.
## 6 Stellar masses, ages and kinematics
### 6.1 Masses and ages
As described in Sect. 3.2, the stellar mass is required in the determination of the gravity from the Hipparcos parallax. With the derived temperature and absolute magnitude, the mass was estimated from the stellar position in the $`M_V\mathrm{log}T_{\mathrm{eff}}`$ diagram (see Fig. 4) by interpolating in the evolutionary tracks of VandenBerg et al. (VandenBerg99 (1999)), which are distributed in metallicity with a step of $`0.1`$ dex. These new tracks are based on the recent OPAL opacities (Rogers & Iglesias Rogers92 (1992)), using a varying helium abundance with \[$`\alpha `$/Fe\] = 0.30 for $`\text{[Fe/H]}\le -0.3`$ and a constant helium abundance ($`Y=0.2715`$) without $`\alpha `$-element enhancement for $`\text{[Fe/H]}\ge -0.2`$. Fig. 4 shows the positions of our program stars with $`-0.77<\text{[Fe/H]}<-0.66`$ compared to the evolutionary tracks for Z = 0.004 ($`\text{[Fe/H]}=-0.71`$). The errors in $`T_{\mathrm{eff}}`$, $`M_V`$, and \[Fe/H\] translate into an error of $`0.06M_{\odot }`$ in the mass.
Stellar age is an important parameter when studying the chemical evolution of the Galaxy as a function of time. Specifically, the age is useful for interpreting abundance ratios as functions of metallicity. In this work, the stellar age was obtained simultaneously with the mass from the interpolation in the evolutionary tracks of VandenBerg et al. (VandenBerg99 (1999)). It was checked that practically the same age is derived from the corresponding isochrones. As an example, a set of stars is compared to isochrones in Fig. 5. The error of the age due to the uncertainties of $`T_{\mathrm{eff}}`$, $`M_V`$, and \[Fe/H\] is about 15% ($`\sigma (\mathrm{log}\tau )`$ = 0.07), except for a few stars which have relatively large errors in the Hipparcos parallaxes.
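A simple way to implement this is to scatter the track points of the grid nearest in metallicity into the $`M_V`$ versus log $`T_{\mathrm{eff}}`$ plane and interpolate linearly, as sketched below; the sampling is assumed dense enough for linear interpolation to be adequate.

```python
import numpy as np
from scipy.interpolate import griddata

def mass_and_age(log_teff, m_v, track_pts, track_mass, track_age):
    """Interpolate mass (M_sun) and age (Gyr) in the M_V vs. log T_eff plane.

    track_pts -- (N, 2) array of (log T_eff, M_V) sampled along the
        evolutionary tracks closest in [Fe/H] to the star;
    track_mass, track_age -- the mass and age at each of those points.
    """
    star = (log_teff, m_v)
    mass = griddata(track_pts, track_mass, star, method="linear")
    age = griddata(track_pts, track_age, star, method="linear")
    return float(mass), float(age)
```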
### 6.2 Kinematics
Stars presently near the Sun may come from a wide range of Galactic locations, and information on their origin helps us to understand their abundance ratios. Therefore, the stellar space velocity, as a clue to the origin of a star in the Galaxy, is of considerable interest.
The accurate distances and proper motions available in the Hipparcos Catalogue (ESA ESA97 (1997)), combined with the stellar radial velocities, make it possible to derive reliable space velocities. Radial velocities from the CORAVEL survey for 53 stars were kindly made available by Nordström (Copenhagen) before publication. These velocities are compared with our values derived from the Doppler shifts of the spectral lines. A linear least squares fit for 40 stars (excluding the suspected binaries) gives:
$`RV=0.997\left(\pm 0.002\right)RV_{\mathrm{CORAVEL}}+0.26\left(\pm 0.12\right)\text{ km\hspace{0.17em}s}\text{-1}`$
The rms scatter around the relation is 0.72 km s<sup>-1</sup>, showing that our radial velocities are accurate to about 0.5 km s<sup>-1</sup>. Hence, our values are adopted for stars not included in the CORAVEL survey.
The calculation of the space velocity with respect to the Sun is based on the method presented by Johnson & Soderblom (Johnson87 (1987)). The correction of the space velocity to the Local Standard of Rest is based on a solar motion of ($`-`$10.0, $`+`$5.2, $`+`$7.2) km s<sup>-1</sup> in (U,V,W)<sup>1</sup><sup>1</sup>1In the present work, U is defined to be positive in the anticentric direction., as derived from Hipparcos data by Dehnen & Binney (Dehnen98 (1998)). The error in the space velocity arising from the uncertainties of distance, proper motion and radial velocity is very small, about $`\pm 1`$ km s<sup>-1</sup>.
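For reference, a compact implementation of the Johnson & Soderblom (1987) recipe is sketched below. The numerical equatorial-to-Galactic rotation matrix is the standard J2000 one as we recall it and should be verified against the Hipparcos volumes before use; the sign of U is flipped at the end to match the anticentric convention of this paper.

```python
import numpy as np

K = 4.74057  # km/s per (arcsec/yr) at a distance of 1 pc

# Standard J2000 equatorial -> Galactic rotation (verify before use)
T = np.array([[-0.0548756, -0.8734371, -0.4838350],
              [ 0.4941094, -0.4448296,  0.7469823],
              [-0.8676661, -0.1980764,  0.4559838]])

def uvw(ra, dec, parallax, pm_ra, pm_dec, rv):
    """(U, V, W) in km/s; ra, dec in radians, parallax in arcsec,
    proper motions in arcsec/yr (pm_ra including cos dec), rv in km/s."""
    cra, sra = np.cos(ra), np.sin(ra)
    cd, sd = np.cos(dec), np.sin(dec)
    # Columns: unit vectors along the radial, RA and Dec directions
    A = np.array([[cra * cd, -sra, -cra * sd],
                  [sra * cd,  cra, -sra * sd],
                  [sd,        0.0,  cd]])
    u, v, w = T @ A @ np.array([rv, K * pm_ra / parallax, K * pm_dec / parallax])
    return -u, v, w   # flip U to the anticentric convention used here
```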
The ages and space velocities derived in the present work are generally consistent with those of EAGLNT. But the more accurate absolute magnitudes, as well as the new set of theoretical isochrones, used in our study should give more reliable ages than those determined by EAGLNT from photometric absolute magnitudes and the old isochrones of VandenBerg & Bell (VandenBerg85 (1985)). The same holds for the space velocities, since our results are based on the distances and proper motions now available from Hipparcos.
## 7 Results and discussion
### 7.1 Relations between abundances, kinematics and ages
The observed relations between abundances, kinematics and ages provide the most important information for theories of Galactic evolution. In particular, EAGLNT provided many new results on this issue. For example, the substantial dispersion in the AMR found by EAGLNT argues against the assumption of chemical homogeneity adopted in many chemical evolution models. It is, however, important to test the results of EAGLNT on a different sample of disk stars. Based on more reliable ages and kinematics, the present study makes such an investigation.
#### 7.1.1 Age-metallicity relation in the disk
Fig. 6 shows the age-metallicity relations for the $`\alpha `$ elements, iron<sup>2</sup><sup>2</sup>2Here and in the following sections and figures, the iron abundance is the mean abundance derived from all Fe i and Fe ii lines with equal weight given to each line. and barium, where $`\alpha `$ represents the mean abundance of Mg, Si, Ca and Ti. Generally, there is a loose correlation between age and abundance. Stars younger than 5 Gyr ($`\mathrm{log}\tau _9<0.7`$) are more metal-rich than $`\text{[Fe/H]}\approx -0.3`$, and stars with $`\text{[Fe/H]}<-0.5`$ are not younger than 6-7 Gyr ($`\mathrm{log}\tau _9>0.8`$). The deviating young halo star HD 97916 (indicated by an asterisk in Fig. 6) is discussed in Sect. 8.
The correlation between age and abundance is, however, seriously blurred by a considerable scatter. Stars with solar metallicity have an age spread as large as 10 Gyr, and coeval stars at 10 Gyr show metallicity differences as high as 0.8 dex. Such a dispersion cannot be explained by either the abundance error ($`<0.1`$ dex) or the age uncertainty ($`\sim 15`$%) in the AMR. This is an important constraint on GCE models, which must reproduce both the weak correlation and the substantial dispersion.
It is seen from Fig. 6 that Ba has the steepest slope in the AMR, Fe has an intermediate slope, and the $`\alpha `$ elements show only a very weak trend. This was also found by EAGLNT and is consistent with nucleosynthesis theory, which suggests that the main synthesis sites of Ba, Fe and the $`\alpha `$ elements are AGB stars (1-3 $`M_{\odot }`$), SNe Ia (6-8 $`M_{\odot }`$) and SNe II ($`>8M_{\odot }`$), respectively. Due to their longer lifetimes, lower mass stars contribute to the enrichment of the Galaxy at a later epoch, i.e. after the massive stars have polluted the ISM with their products. Hence, the Ba abundance is relatively low at the beginning of the disk evolution and increases quickly in the late stages, leading to a steeper slope.
It is interesting that there is a hint of a smaller metallicity spread for the young stars with $`\mathrm{log}\tau _9=0.4`$–$`0.8`$ in this work than in EAGLNT, while the spread is similar for the old stars. If we are not misled by our sample (fewer young stars than in the EAGLNT sample and a lack of stars with $`\mathrm{log}\tau _9<0.4`$), it seems that there are metal-rich stars at any time in the solar neighbourhood, while metal-poor stars are always old. Another interesting feature of the young stars is that \[Ba/H\] has a smaller spread than \[Fe/H\] and \[$`\alpha `$/H\]. This could be due to the dependence of the elemental yields on the progenitor mass: Ba is produced by AGB stars in a rather narrow mass range of 1-3 $`M_{\odot }`$, while Fe and the $`\alpha `$ elements are synthesized by SNe covering a mass range of $`\sim `$6-30 $`M_{\odot }`$.
#### 7.1.2 Stellar kinematics as functions of age and metallicity
The dispersion of the kinematical parameters as a function of Galactic time is more interesting to study than the kinematical data alone, because any abrupt increase in dispersion may indicate special Galactic processes occurring during the evolution. Generally, the dispersions in $`V_{\mathrm{LSR}}`$, $`W_{\mathrm{LSR}}`$ and the total velocity increase with stellar age. We do not have enough stars at $`\sim `$2.5 Gyr and $`\sim `$10 Gyr to confirm the abrupt increases in the $`W_{\mathrm{LSR}}`$ dispersion found by EAGLNT at these ages. Instead, our data seem to indicate that the kinematical dispersion (possibly also the metallicity dispersion) is fairly constant for stars younger than 5 Gyr ($`\mathrm{log}\tau _9=0.7`$), but increases with age for stars with $`\mathrm{log}\tau _9>0.7`$. Coincidentally, 5 Gyr corresponds to $`\text{[Fe/H]}\approx -0.4`$ dex, the metallicity where EAGLNT suggested an abundance transition related to a dual formation process of the Galactic disk. The abundance transition at $`\text{[Fe/H]}\approx -0.4`$ dex is confirmed by our data, but the increase of the $`W_{\mathrm{LSR}}`$ dispersion at $`\text{[Fe/H]}\approx -0.4`$ found by EAGLNT is less obvious in our data.
When the velocity component in the direction of Galactic rotation, $`V_{\mathrm{LSR}}`$, is investigated as a function of the metallicity (see Fig. 7), we find that there are two subpopulations for $`\text{[Fe/H]}\le -0.6`$, with positive $`V_{\mathrm{LSR}}`$ in group A and negative $`V_{\mathrm{LSR}}`$ in group C, while stars with $`\text{[Fe/H]}\ge -0.6`$ have velocities around $`V_{\mathrm{LSR}}=-10\text{ km\hspace{0.17em}s}\text{-1}`$ (group B). The pattern persists when other elements are substituted for Fe. As shown in Edvardsson et al. (1993b ), there is a tight correlation between $`V_{\mathrm{LSR}}`$ and the mean Galactocentric distance of the stellar orbit, $`R_m`$. Hence, we can trace the metallicity at different Galactocentric distances, assuming that $`R_m`$ is a reasonable estimator of the radius of the star's original orbit. Note, however, that the lower metallicity toward the Galactic center ($`V_{\mathrm{LSR}}\approx -50\text{ km\hspace{0.17em}s}\text{-1}`$) for the group C stars may be due to their large ages. Excluding these stars, a trend of decreasing metallicity with increasing $`V_{\mathrm{LSR}}`$ for stars of similar age is found, which indicates a radial abundance gradient in the disk and thus suggests a faster evolution in the inner disk than in the outer disk. This is compatible with a higher SFR, due to the higher density, in the inner disk.
There are two possibilities to explain the stars in group C. One is anchored to the fact that the oldest stars ($`>`$ 10 Gyr) in our sample have the lowest $`V_{\mathrm{LSR}}`$, i.e. the smallest $`R_m`$, indicating that the Galactic disk did not extend out to the Sun 10 Gyr ago, in accordance with an inside-out formation process of the Galaxy. The other is that these stars come from the thick disk, which is older and more metal-poor than the thin disk.
### 7.2 Relative abundances
The general trends of the elemental abundances with respect to iron, as functions of metallicity, age and kinematics, are studied here in connection with Galactic evolution models and nucleosynthesis theory. The main results are shown in Fig. 8 and are discussed together with those of EAGLNT.
#### 7.2.1 Oxygen and magnesium
In agreement with most works, \[O/Fe\] decreases steadily with increasing metallicity for disk stars. As oxygen is only produced in the massive progenitors of SNe II, Ib and Ic, it is mainly built up at early times in the Galaxy, leading to an overabundance of oxygen in halo stars. The \[O/Fe\] ratio gradually declines in the disk stars as iron is produced by the long-lived SNe Ia. The time delay of SNe Ia relative to SNe II is responsible for the continuous decrease of \[O/Fe\] in disk stars. The tendency for \[O/Fe\] to continue to decrease at $`\text{[Fe/H]}>-0.3`$ argues for an increasing ratio of SNe Ia to SNe II also at the later stages of the disk evolution.
In general, the relation of \[O/Fe\] vs. $`V_{\mathrm{LSR}}`$ reflects the variation of \[Fe/H\] with $`V_{\mathrm{LSR}}`$ (see Fig. 7): \[O/Fe\] decreases with increasing $`V_{\mathrm{LSR}}`$ for stars with $`V_{\mathrm{LSR}}<0`$ and increases slowly with still larger $`V_{\mathrm{LSR}}`$. Considering their similar ages, the decreasing \[O/Fe\] from group A to group B stars may be attributed to the increasing $`V_{\mathrm{LSR}}`$, whereas the higher \[O/Fe\] of group C is due to an older age.
The magnesium abundance shows a decreasing trend with increasing metallicity, like oxygen, for $`\text{[Fe/H]}<-0.3`$, but it tends to flatten out at higher metallicities. Given that magnesium is theoretically predicted to be formed only in SNe II, the decreasing trend, similar to that of oxygen, is easily understood, but the flat \[Mg/Fe\] for $`\text{[Fe/H]}>-0.3`$ is unexpected. It seems that SNe II are not the only source of Mg; perhaps SNe Ia also contribute to the enrichment of Mg during the disk evolution.
The flat trend of \[Mg/Fe\] vs. \[Fe/H\] for $`\text{[Fe/H]}>-0.3`$ is also evident in the data of EAGLNT if the high Mg/Fe ratios of their NaMgAl stars are reduced to a solar ratio, as found by Tomkin et al. (Tomkin97 (1997)). Feltzing and Gustafsson (Feltzing99 (1999)) also find \[Mg/Fe\] to be independent of metallicity for their more metal-rich stars, although the scatter is large.
With more magnesium lines than EAGLNT, we obtain a similar scatter in \[Mg/Fe\]. The scatter is slightly larger than that of oxygen, and in particular much larger than those of Si and Ca. Although we do not find a large line-to-line scatter in the Mg abundance determination, it is still unclear whether the scatter in \[Mg/Fe\] is cosmic: only 3 Mg i lines are available for most stars, while Si and Ca are represented by 20-30 lines. There is no obvious evidence that the scatter is an effect of different $`V_{\mathrm{LSR}}`$. Nor do we find a clear separation of thick disk stars from thin disk stars in the diagram of \[Mg/Fe\] vs. \[Mg/H\], as has been found by Fuhrmann (Fuhrmann98 (1998)). It seems that neither observation nor theory is satisfactory for Mg.
#### 7.2.2 Silicon, calcium and titanium
Like \[Mg/Fe\], \[Si/Fe\] and \[Ca/Fe\] decrease with increasing metallicity for $`\text{[Fe/H]}<-0.4`$ and then flatten out with further increasing \[Fe/H\]. The result is in agreement with EAGLNT, who found a "kink" at $`\text{[Fe/H]}=-0.3`$ to $`-0.2`$. But \[Ca/Fe\] possibly continues to decrease for $`\text{[Fe/H]}>-0.4`$ according to our data. The suspicion that Si is about 0.05 dex overabundant relative to Ca for $`\text{[Fe/H]}>-0.2`$ and the possible upturn of silicon at higher metallicity in EAGLNT are not supported by our data.
Both Si and Ca have a very small star-to-star scatter (0.03 dex) at a given metallicity for thin disk stars. The scatter is slightly larger among the thick disk stars. Since the scatter corresponds to the expected error from the analysis, we conclude that the Galactic scatter for \[Si/Fe\] and \[Ca/Fe\] is less than 0.03 dex in the thin disk.
\[Ti/Fe\] was shown by EAGLNT to be a slowly decreasing function of \[Fe/H\], with the decrease continuing to higher metallicity. Our data show a similar trend, but the continued decrease toward higher metallicity is less obvious, with a comparatively large star-to-star scatter. There is no evidence that the scatter is correlated with $`V_{\mathrm{LSR}}`$. We note that Feltzing & Gustafsson (Feltzing99 (1999)) find a similar scatter in \[Ti/Fe\] for metal-rich stars, with the Ti abundance based on 10-12 Ti i lines.
#### 7.2.3 Sodium and aluminum
Na and Al are generally thought to be products of Ne and C burning in massive stars. The synthesis is controlled by the neutron flux, which in turn depends on the initial metallicity, primarily the initial O abundance. Therefore, one expects a rapid increase of \[Na/Mg\] and \[Al/Mg\] with metallicity. But our data show that both Na and Al are poorly correlated with Mg, in agreement with EAGLNT. This means that the odd-even effect has been greatly reduced in the nucleosynthesis processes during the formation of the disk.
When iron is taken as the reference element, we find that \[Na/Fe\] and \[Al/Fe\] are close to zero for $`\text{[Fe/H]}<-0.2`$, while EAGLNT found 0.1-0.2 dex differences between $`\text{[Fe/H]}=-0.2`$ and $`\text{[Fe/H]}=-1.0`$. Our results support the old data of Wallerstein (Wallerstein62 (1962)) and Tomkin et al. (Tomkin85 (1985)), who suggested \[Na/Fe\] $`\approx `$ 0.0 for the whole metallicity range of the disk stars. The situation is the same for Al; EAGLNT found an overabundance of \[Al/Fe\] $`\approx 0.2`$ for $`\text{[Fe/H]}<-0.5`$, whereas we find a solar ratio for the low-metallicity stars. As discussed in Sect. 5.3, this may, however, be due to a non-LTE effect.
For the more metal-rich stars the abundance results for Na and Al are rather confusing. EAGLNT found that some metal-rich stars in the solar neighbourhood are rich in Na, Mg and Al, but the existence of such NaMgAl stars was rejected by Tomkin et al. (Tomkin97 (1997)). Several further studies, however, again found overabundances of some of these elements. Porte de Morte (Porte96 (1996)) found an overabundance of Mg but not of Na. Feltzing & Gustafsson (Feltzing99 (1999)) confirmed the upturn of \[Na/Fe\], but their metal-rich stars did not show Mg and Al overabundances. In the present work we find a solar ratio of Na/Fe up to $`\text{[Fe/H]}\approx 0.1`$, and a rather steep upturn of \[Al/Fe\] beginning at $`\text{[Fe/H]}\approx -0.2`$. As discussed in Sect. 5.3, our Al abundances may, however, be severely affected by non-LTE effects. We conclude that more accurate data on the Na and Al abundances are needed.
#### 7.2.4 Potassium
\[K/Fe\] shows a decreasing trend with increasing metallicity for disk stars. The result supports the previous work by Gratton & Sneden (Gratton87 (1987)), but our data have a smaller scatter. Assuming that potassium is a product of explosive oxygen burning in massive stars, Samland (Samland98 (1998)) reproduces the observed trend rather well. Timmes et al. (Timmes95 (1995)), on the other hand, predict \[K/Fe\]$`<0.0`$ for $`\text{[Fe/H]}<-0.6`$, in sharp contrast to the observations. Given that the K i resonance line at $`\lambda `$7699, which is used to derive the K abundances, is affected by non-LTE as discussed in Sect. 5.3, it seems premature to count K among the $`\alpha `$ elements.
#### 7.2.5 Vanadium, chromium and nickel
V and Cr seem to follow Fe over the whole metallicity range, with some star-to-star scatter. The scatter is not a result of mixing stars with different $`V_{\mathrm{LSR}}`$, and the few, generally very weak, lines used to determine these abundances prevent us from investigating the detailed dependence on metallicity and from deciding whether the scatter is cosmic or due to errors.
Ni follows iron quite well at all metallicities, with a star-to-star scatter of less than 0.03 dex. Two features may be noticed after careful inspection. Firstly, there is a hint that \[Ni/Fe\] decreases slightly with increasing metallicity for $`-1.0<\text{[Fe/H]}<-0.2`$. The trend is clearer than in EAGLNT, due to the smaller star-to-star scatter. Secondly, there is a subtle increase of \[Ni/Fe\] for $`\text{[Fe/H]}>-0.2`$. Interestingly, Feltzing & Gustafsson (Feltzing99 (1999)) found a slight increase of \[Ni/Fe\] towards even more metal-rich stars.
#### 7.2.6 Barium
The abundance pattern of Ba is very similar to that of EAGLNT, except for a systematic shift of about +0.07 dex in \[Ba/Fe\]. Both works indicate a complicated dependence of \[Ba/Fe\] on metallicity: \[Ba/Fe\] seems to increase slightly with metallicity for $`\text{[Fe/H]}<-0.7`$, then keeps a constant small overabundance until $`\text{[Fe/H]}\approx -0.2`$, after which \[Ba/Fe\] decreases towards higher metallicities.
Barium is thought to be synthesized by the s-process (neutron capture) in low-mass AGB stars, with an evolutionary timescale longer than that of the iron-producing SNe Ia. Therefore, \[Ba/Fe\] is still slightly underabundant at $`\text{[Fe/H]}=-1.0`$. Ba is then enriched significantly at the later stages of the disk evolution, but the decrease of \[Ba/Fe\] for the more metal-rich stars, beginning at $`\text{[Fe/H]}\approx -0.2`$, is unexpected.
Given that the low \[Ba/Fe\] of some stars may be related to their ages, the relation of \[Ba/Fe\] vs. \[Fe/H\] in different age ranges was investigated (see Fig. 9). In agreement with EAGLNT, the run of \[Ba/Fe\] vs. \[Fe/H\] for old stars with $`\mathrm{log}\tau _9>0.9`$ ($`\gtrsim `$8 Gyr) and $`0.7<\mathrm{log}\tau _9<0.9`$ shows a flat distribution for $`\text{[Fe/H]}<-0.3`$ and a negative slope for $`\text{[Fe/H]}>-0.3`$. All young stars with $`\mathrm{log}\tau _9<0.7`$ ($`\lesssim `$5 Gyr) have $`\text{[Fe/H]}>-0.3`$, and a clear decreasing trend of \[Ba/Fe\] with \[Fe/H\] is seen. In addition, there is a hint of higher \[Ba/Fe\] for younger stars, both in the interval $`-0.7<\text{[Fe/H]}<-0.3`$, where \[Ba/Fe\] is constant, and for $`\text{[Fe/H]}>-0.3`$, where \[Ba/Fe\] is decreasing. This is consistent with the formation of young stars at a later stage of the disk evolution, when the long-lived AGB stars have enhanced the Ba abundance of the ISM. The flat \[Ba/Fe\] for $`\text{[Fe/H]}<-0.3`$ may be explained by the suggestion of EAGLNT that the synthesis of Ba in AGB stars is independent of metallicity, i.e. that Ba shows a primary behaviour during the evolution of the disk. But the age effect alone cannot explain the underabundance of \[Ba/Fe\] in metal-rich stars, because \[Ba/Fe\] decreases with metallicity at all ages beyond $`\text{[Fe/H]}=-0.3`$. One reason could be that s-element synthesis occurs less efficiently in metal-rich AGB stars, possibly because the high mass loss terminates their evolution earlier.
## 8 Concluding remarks
One of the interesting results of this study is that the oldest stars presently located in the solar neighbourhood have $`V_{\mathrm{LSR}}\approx -50\text{ km\hspace{0.17em}s}\text{-1}`$. Hence, they probably originate from the inner disk, having $`R_\mathrm{m}<7`$ kpc. This is not found in our study only: the EAGLNT sample contains about 20 such stars. As shown in both works, these stars are generally more metal-poor than the other stars, and they show a larger spread in \[Fe/H\] and \[Ba/H\] than in \[$`\alpha `$/H\] (see Fig. 6). According to EAGLNT, they have higher \[$`\alpha `$/Fe\] than other disk stars at a metallicity around $`\text{[Fe/H]}=-0.7`$.
Considering these different properties, we suggest that they do not belong to the thin disk. Firstly, they are older (10-18 Gyr) than the other stars. Secondly, if they were thin disk stars, it would be hard to understand why stars coming from both sides of the solar annulus have lower metallicity than the local region. Thirdly, these stars show a relatively small metallicity dispersion at such an early Galactic time, i.e. smaller than stars at 8-10 Gyr. This is not in agreement with the effect of orbital diffusion operating during the evolution of the thin disk, which would imply a larger metallicity dispersion for older stars. Finally, the $`W_{\mathrm{LSR}}`$ dispersion of these stars is about 40 km s<sup>-1</sup>, considerably larger than the typical value of about 20 km s<sup>-1</sup> for thin disk stars. Consistently, the kinematics, age, metallicity and abundance ratios of these stars follow the characteristics of the thick disk: $`V_{\mathrm{LSR}}\approx -50\text{ km\hspace{0.17em}s}\text{-1}`$, $`\sigma (W_{\mathrm{LSR}})\approx 40\text{ km\hspace{0.17em}s}\text{-1}`$, $`\tau >10`$ Gyr, $`\text{[Fe/H]}<-0.5`$ and \[$`\alpha `$/Fe\] $`\approx 0.2`$. We conclude that these oldest stars in both EAGLNT and this work are thick disk stars. Hence, they probably do not result from an inside-out formation of the Galactic disk, but have been formed in connection with a merger of satellite components with the Galaxy.
Concerning the abundance connection between the thick disk and the thin disk, our data for \[$`\alpha `$/Fe\], shown in Fig. 10, suggest a smoother trend than those of EAGLNT, who found a correlation between \[$`\alpha `$/Fe\] and $`R_\mathrm{m}`$ at $`\text{[Fe/H]}\approx -0.7`$. We leave the issue open considering the small number of such stars in our work. Two stars marked by their names in Fig. 10 may be particularly interesting, because they show significantly higher \[$`\alpha `$/Fe\] than the other stars. Fuhrmann & Bernkopf (Fuhrmann99 (1999)) suggest that one of them, HD 106516A, is a thick-disk field blue straggler. It is unclear whether this can explain the higher \[$`\alpha `$/Fe\]. HD 97916 is a nitrogen-rich binary (Beveridge & Sneden Beveridge94 (1994)) with $`U_{\mathrm{LSR}}=117\text{ km\hspace{0.17em}s}\text{-1}`$ and $`W_{\mathrm{LSR}}=101\text{ km\hspace{0.17em}s}\text{-1}`$ (typical for halo stars), but with $`V_{\mathrm{LSR}}=22\text{ km\hspace{0.17em}s}\text{-1}`$, similar to the values for thin disk stars. Surprisingly, this star is also very young (5.5 Gyr) for its metallicity.
It is interesting to re-inspect the observational results for the thin disk stars after excluding the thick disk stars; more direct information on the evolution of the Galactic thin disk is then obtained. In summary, the thin disk is younger (not older than 12 Gyr), more metal-rich ($`\text{[Fe/H]}>-0.8`$) and has a smaller \[$`\alpha `$/Fe\] spread (0.1 dex) without the admixture of the thick disk stars. In particular, the AMR is weaker, and there seems to exist a radial metallicity gradient. All these features agree better with present evolutionary models for the Galactic disk.
We emphasize here that there is no obvious gradient in \[$`\alpha `$/Fe\] for the thin disk at a given metallicity. Such a gradient was suggested by EAGLNT on the basis of the higher \[$`\alpha `$/Fe\] of the oldest stars with $`R_\mathrm{m}<7`$ kpc compared to stars with $`R_\mathrm{m}>7`$ kpc (see their Fig. 21). After ascribing these oldest stars to the thick disk, the abundance gradient disappears.
Our study of the relative abundance ratios as functions of \[Fe/H\] suggests that there are subtle differences in origin and enrichment history both within the group of $`\alpha `$ elements and within the iron-peak elements. Nucleosynthesis theory predicts that Si and Ca are partly synthesized in SNe Ia, while O and Mg are only produced in SNe II (Tsujimoto et al. Tsujimoto95 (1995)). Our data suggest, however, that SNe Ia may also be a significant synthesis site of Mg, because \[Mg/Fe\] shows a trend more similar to \[Si/Fe\] and \[Ca/Fe\] than to \[O/Fe\]. Ti may not lie on a smooth extension of Si and Ca, because there is a hint of a decrease of \[Ti/Fe\] for $`\text{[Fe/H]}>-0.4`$ not seen in the case of Si and Ca. The situation for the odd-Z elements is more complicated. The available data for Na and Al show confusing disagreements: EAGLNT find an overabundance of 0.1 to 0.2 dex for \[Na/Fe\] and \[Al/Fe\] among the metal-poor disk stars, whereas our study points at solar ratios. Two other odd-Z elements, K and Sc (Nissen et al. Nissen99 (1999)), behave like $`\alpha `$ elements, but the result for K is sensitive to the assumption of LTE. The iron-peak elements also show different behaviours: V, Cr and Ni follow Fe very well, while \[Mn/Fe\] (Nissen et al. Nissen99 (1999)) decreases with decreasing metallicity, from \[Mn/Fe\] $`\approx 0.0`$ at $`\text{[Fe/H]}=0.0`$ to \[Mn/Fe\] $`\approx -0.4`$ at $`\text{[Fe/H]}=-1.0`$. We conclude that the terms "$`\alpha `$ elements" and "iron-peak elements" do not indicate production in single processes, and that each element seems to have a unique enrichment history.
## Acknowledgements
This research was supported by the Danish Research Academy and the Chinese Academy of Sciences. Bengt Edvardsson is thanked for providing a grid of the Uppsala new MARCS model atmospheres, and Birgitta Nordström for communicating CORAVEL radial velocities in advance of publication.
## I The $`r`$-Mode Instability
The $`r`$-mode instability has been the subject of about thirty papers over the past two years. I will not be able to do them all justice here.<sup>1</sup><sup>1</sup>1For a recent review with more emphasis on completeness, see Friedman and Lockitch fl . Instead I will summarize the most important (as I see them) results with a direct impact on gravitational-wave detection, beginning with the basic model worked out in 1998 and ending with the latest (end of 1999) developments in this rapidly changing field.
The reason for the excitement is a version of the CFS instability, named for Chandrasekhar, who discovered it in a special case c70 , and for Friedman and Schutz, who investigated it in detail and found that it is generic to rotating perfect fluids fs78 . The CFS instability allows some oscillation modes of a fluid body to be driven rather than damped by radiation reaction, essentially due to a disagreement between two frames of reference.
The mechanism can be explained heuristically as follows. In a non-rotating star, gravitational waves radiate positive angular momentum from a forward-moving mode and negative angular momentum from a backward-moving mode, damping both as expected. However, when the star rotates, the radiation still lives in a non-rotating frame. If a mode moves backward in the rotating frame but forward in the non-rotating frame, gravitational radiation still removes positive angular momentum; but since the fluid sees the mode as having negative angular momentum, the radiation drives the mode rather than damps it. Another example of such an effect due to a disagreement between frames of reference is the well-known Kelvin-Helmholtz instability, which leads to rough airplane rides over the jet stream and pounding surf on the California coast.
Mathematically, the criterion for the CFS instability is
$$\omega (\omega +m\mathrm{\Omega })<0,$$
(1)
with the mode angular frequency $`\omega `$ (in an inertial frame) in general a function of the azimuthal quantum number $`m`$ and rotation angular frequency $`\mathrm{\Omega }`$. For any set of modes of a perfect fluid, there will be some modes unstable above some minimum $`m`$ and $`\mathrm{\Omega }`$. However, fluid viscosity generally grows with $`m`$ and there is a maximum value of $`\mathrm{\Omega }`$ (known as the Kepler frequency $`\mathrm{\Omega }_K`$) above which a rotating star flies apart. Therefore the instability is only astrophysically relevant if there is some range of frequencies and temperatures (viscosity generally depends strongly on temperature) in which it survives.
The $`r`$-modes are a set of fluid oscillations with dynamics dominated by rotation. They are in some respects similar to the Rossby waves found in the Earth's oceans and have been studied by astrophysicists since the 1970s pp78 . The restoring force is the Coriolis inertial "force" which is perpendicular to the velocity. As a consequence, the fluid motion resembles (oscillating) circulation patterns. The (Eulerian) velocity perturbation is
$$\delta \vec{v}=\alpha \mathrm{\Omega }R(r/R)^m\vec{r}\times \vec{\nabla }Y_{mm}(\theta ,\varphi )+O(\mathrm{\Omega }^3),$$
(2)
where $`\alpha `$ is a dimensionless amplitude (roughly $`\delta v/v`$) and $`R`$ is the radius of the star. Since $`\delta \vec{v}`$ is an axial vector, mass-current perturbations are large compared to the density perturbations. The Coriolis restoring force guarantees that the $`r`$-mode frequencies are comparable to the rotation frequency,
$$\omega +m\mathrm{\Omega }=\frac{2}{m+1}\mathrm{\Omega }+O(\mathrm{\Omega }^3).$$
(3)
It was not until the time of the last Amaldi Conference in mid-1997 that Andersson a98 noticed that the $`r`$-mode frequencies satisfy the mode instability criterion (1) for all $`m`$ and $`\mathrm{\Omega }`$, and that Friedman and Morsink fm98 showed the instability is not an artifact of the assumption of discrete modes but exists for generic initial data. In other words, all rotating perfect fluids are subject to the instability.
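This observation can be made concrete with a few lines of arithmetic. The following minimal sketch (our own illustration, assuming only the lowest-order frequency (3)) verifies that the product in Eq. (1) is negative for every azimuthal number m ≥ 2 and every rotation rate.

```python
# Minimal check of the criterion (1) using the lowest-order r-mode
# frequency (3): in the inertial frame omega = 2*Omega/(m+1) - m*Omega,
# so omega*(omega + m*Omega) < 0 for all m >= 2 and Omega > 0.
def cfs_product(m, Omega):
    omega = 2.0 * Omega / (m + 1) - m * Omega    # inertial-frame frequency
    return omega * (omega + m * Omega)           # negative <=> unstable

for m in range(2, 7):
    for Omega in (0.1, 1.0, 10.0):
        assert cfs_product(m, Omega) < 0.0
print("every sampled r-mode satisfies the CFS criterion (1)")
```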
## II Driving vs. Damping
The universe is inhabited not by balls of perfect fluid, but by stars subject to internal viscous processes which tend to damp out oscillation modes. To evaluate the stability of modes in realistic neutron stars, we must compare driving and damping timescales.
In the small-amplitude limit, a mode is a driven, damped harmonic oscillator with an exponential damping timescale il91
$$\frac{1}{\tau }=-\frac{1}{2E}\frac{dE}{dt}=-\frac{1}{2E}\left[\left(\frac{dE}{dt}\right)_G+\sum _V\left(\frac{dE}{dt}\right)_V\right]=\frac{1}{\tau _G}+\sum _V\frac{1}{\tau _V}.$$
(4)
Here $`E`$ is the energy of the mode in the rotating frame and $`dE/dt`$ is the sum of contributions from gravitational radiation (subscript $`G`$) and all viscous processes (subscript $`V`$). The mode is stable if the damping timescale $`\tau `$ is positive, unstable if $`\tau `$ is negative. The gravitational radiation timescale $`\tau _G`$ depends on the rotation frequency $`\mathrm{\Omega }`$, and the viscous timescales generally depend also on the temperature $`T`$. Therefore we define a critical frequency $`\mathrm{\Omega }_c`$ such that
$$\frac{1}{\tau (\mathrm{\Omega }_c,T)}=0$$
(5)
and decide if a given mode is astrophysically interesting by examining the curve $`\mathrm{\Omega }_c(T)`$.
Neutron stars are complicated objects, but a simple model suffices to estimate the most important driving and damping timescales in the very young ones. When hotter than $`10^9`$K (younger than about a year), most of the star is a ball of ordinary, barotropic (equation of state independent of temperature) fluid. Given a putative equation of state, the gravitational radiation timescale $`\tau _G`$ can be calculated by standard multipole integrals t80 , although the $`r`$-modes are nonstandard in that the leading-order (in $`\mathrm{\Omega }`$) contribution is not from the mass multipoles but from the mass-current multipoles lom98 . Viscous damping is due both to shearing of the fluid and to compression and rarefaction of individual fluid elements (bulk viscosity). The shear viscosity is stronger (timescale is shorter) at lower temperatures (like everyday experience with motor oil) and can be calculated from neutron-neutron scattering cross-sections cl87 . The bulk viscosity is a weak nuclear interaction effect and thus is much stronger at higher temperatures. Compression and rarefaction of the fluid by the mode disturbs the density-dependent equilibrium $`p+e\leftrightarrow n`$, generating neutrinos which efficiently carry energy away s89 . As the star cools the viscous mechanisms change (see Sec. V), but this model is good enough for a first look.
The net damping timescale of the most unstable ($`m=2`$) $`r`$-mode can be written in terms of fiducial timescales (written with tildes)
$$\frac{1}{\tau (\mathrm{\Omega },T)}=-\frac{1}{\stackrel{~}{\tau }_G}\frac{\mathrm{\Omega }^6}{(\pi G\overline{\rho })^3}+\frac{1}{\stackrel{~}{\tau }_S}\left(\frac{10^9\text{K}}{T}\right)^2+\frac{1}{\stackrel{~}{\tau }_B}\left(\frac{T}{10^9\text{K}}\right)^6\frac{\mathrm{\Omega }^2}{\pi G\overline{\rho }},$$
(6)
where $`\overline{\rho }`$ is the mean density of the equilibrium star. The numerical values of the fiducial timescales (for a simplistic equation of state) have been evaluated as lom98 ; aks99 ; lmo99
$$\stackrel{~}{\tau }_G=3.3\text{ s},\stackrel{~}{\tau }_S=2.5\times 10^8\text{s},\stackrel{~}{\tau }_B=2.0\times 10^{11}\text{s}.$$
(7)
The numbers change by factors of two or so for different neutron-star models, but the curve plotted in Fig. 1 and its conclusion are very robust. The $`r`$-modes are unstable in realistic neutron stars over an interesting range of frequencies and temperatures. Neutron stars born rotating at or near the Kepler frequency will spin down and emit gravitational radiation in the process; the question now is how much.
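For readers who want to reproduce the instability window, the short script below solves Eq. (5) with the parametrization (6) and the fiducial values (7). It is a sketch only: the temperature grid and the bisection tolerance are arbitrary illustrative choices, and the fiducial timescales themselves carry the factor-of-two model dependence just mentioned.

```python
import math

# Sketch of the critical curve Omega_c(T) from Eqs. (5)-(7): bisect
# 1/tau = 0 in the dimensionless variable x = Omega^2/(pi*G*rho_bar).
tG, tS, tB = 3.3, 2.5e8, 2.0e11          # fiducial timescales in s, Eq. (7)

def inv_tau(x, T9):                      # T9 = T / 10^9 K
    return -x**3 / tG + T9**-2 / tS + T9**6 * x / tB

for T9 in (0.3, 1.0, 3.0, 10.0):
    lo, hi = 1e-12, 1.0                  # stable at lo, unstable at hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inv_tau(mid, T9) > 0.0 else (lo, mid)
    print(f"T = {T9:4.1f}e9 K  ->  Omega_c / sqrt(pi G rho_bar) = {math.sqrt(lo):.3f}")
```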
## III Spindown-Cooling Model
To detect gravitational waves from the $`r`$-modes, we need to know how the modes grow beyond the limits of perturbation theory and spin down the neutron star as it cools during the first year of its life. Even in the simple approximation of an ordinary fluid ball, this involves nonlinear hydrodynamics and radiation reaction, which are both tricky subjects and could take years to explore properly. In the meantime we make do with a simple model o-98 developed to make the first, rough estimates of detectability.
In this model, we consider three coupled systems—a uniformly rotating fluid background (with angular velocity $`\mathrm{\Omega }`$), the most unstable $`r`$-mode (with dimensionless amplitude $`\alpha `$), and the rest of the universe. The two systems in the star (mode and background) couple to each other by viscosity and nonlinear fluid effects. The mode couples to the universe by gravitational radiation; the background does not. The mode's energy evolves by gravitational radiation and viscous damping; the behavior of the background is determined by conservation of energy and angular momentum. Although some of the mode's energy goes into heating the star, the standard neutrino cooling law ($`T/10^9`$K) $`\simeq `$ (1 yr$`/t`$)<sup>1/6</sup> is all but unaffected.
If a star is born spinning at $`\mathrm{\Omega }_K`$ at temperature $`10^{11}`$K (and recent observations fast suggest that some stars are), the evolution falls into three distinct phases. The growth phase begins when the $`r`$-modes go unstable of order one second after the supernova. During this phase a small initial perturbation $`\alpha `$ grows exponentially on a timescale of order one minute while $`\mathrm{\Omega }`$ remains almost constant (the mode is too small to emit much angular momentum). In this regime linearized hydrodynamics (which is all we know at the moment) is a good approximation. Within at most a few minutes after the supernova, $`\alpha `$ becomes so large that nonlinear hydrodynamic effects can no longer be neglected. Previous studies of other modes dl77 indicate that the main effect might be a saturation of the mode amplitude at some constant value, which we can treat as a phenomenological parameter. In this saturation phase, the star spins down very rapidly ($`d\mathrm{\Omega }/dt\propto -\mathrm{\Omega }^7`$) and emits gravitational radiation of strain amplitude
$$h(t)=4\times 10^{-24}\left(\frac{\mathrm{\Omega }}{\sqrt{\pi G\overline{\rho }}}\right)^3\left(\frac{20\text{ Mpc}}{D}\right)\alpha _{\mathrm{max}},$$
(8)
at a detector at distance $`D`$ (normalized here to the distance at which we expect several events per year). As the star spins down, the gravitational radiation gets much weaker (recall $`1/\tau _G\propto \mathrm{\Omega }^6`$). Also, viscous damping becomes stronger, especially since other mechanisms come into play—for instance, when the neutrons become superfluid after cooling to about $`10^9`$K. Thus, within a year the star has moved along a track such as that in Fig. 1 and entered the decay phase, where the $`r`$-modes are stabilized by viscosity and $`\alpha `$ slowly dies away without changing $`\mathrm{\Omega }`$ much. The final spin frequency $`\mathrm{\Omega }_{\mathrm{end}}`$ is in practice another phenomenological parameter, since it depends on the more complicated viscous processes of cooler neutron stars as well as on $`\alpha _{\mathrm{max}}`$.
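A toy version of the saturation phase makes the numbers concrete. In the sketch below the angular-momentum bookkeeping factor of 1/3 and the value $`\alpha _{\mathrm{max}}=1`$ are assumptions for illustration, not the model's calibrated values; the point is only that $`d\mathrm{\Omega }/dt\propto -\mathrm{\Omega }^7`$ plus Eq. (8) produces a rapidly weakening signal.

```python
# Toy integration of the saturation phase (illustration only: the 1/3
# factor and alpha_max = 1 are assumptions).  With 1/tau_G = x^3/tG for
# x = Omega^2/(pi G rho_bar), this gives dOmega/dt ~ -Omega^7, and the
# strain follows Eq. (8).
tG, alpha_max, D = 3.3, 1.0, 20.0        # s, dimensionless, Mpc
t, x, report = 0.0, 0.9, 1.0
while x > 0.05:
    inv_tauG = x**3 / tG                 # gravitational-radiation rate
    dt = 1e-3 / inv_tauG                 # adaptive step: 0.1% of local timescale
    x -= dt * x * inv_tauG / 3.0         # assumed spin-down law
    t += dt
    if t >= report:
        h = 4e-24 * x**1.5 * (20.0 / D) * alpha_max    # Eq. (8)
        print(f"t = {t:10.1f} s   Omega^2/(pi G rho) = {x:.3f}   h = {h:.2e}")
        report *= 10.0
```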
## IV Detectability of Gravitational Waves
Even at its strongest, an $`r`$-mode signal is below the strain noise of a gravitational-wave detector. But electromagnetic astronomers have been pulling faint pulsar signals out of noisy data for decades, and their data analysis techniques can be adapted for the $`r`$-modes.
Surprisingly, even the crude model of the source given in Sec. III is good enough to estimate the detectability of the gravitational waves. The quantity of interest is not the raw strain $`h(t)`$ but rather a characteristic strain
$$h_c(f)=h[t(f)]\sqrt{f^2/|df/dt|},$$
(9)
where $`df/dt`$ is the time derivative of the gravitational wave frequency. The optimal (filtered) signal-to-noise ratio is
$$(S/N)^2=2\int (d\mathrm{ln}f)(h_c/h_{\mathrm{rms}})^2,$$
(10)
where the rms strain noise is related to the detector's power spectral noise density by
$$h_{\mathrm{rms}}=\sqrt{fS_h(f)}.$$
(11)
Thus $`(S/N)^2`$ can be estimated by looking at a plot such as Fig. 2. For the $`r`$-modes, we find that o-98
$$h_c=6\times 10^{-22}\left(\frac{f}{\text{1 kHz}}\right)^{1/2}\left(\frac{20\text{ Mpc}}{D}\right)$$
(12)
with $`(S/N)=8`$ for the projected LIGO-II (enhanced) noise curve as of 1998. This result is independent of much of the detailed physics of the source, including $`\alpha _{\mathrm{max}}`$.<sup>2</sup><sup>2</sup>2To my knowledge the argument was first made by R. D. Blandford in 1984 (but never published) that such a robust result holds for any system evolving mainly via gravitational radiation—like the $`r`$-modes in the saturation phase. However, it does depend on the detailed astrophysics through the final low-frequency cutoff, which does not change much even if the viscous damping changes by orders of magnitude.
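The estimate can be reproduced with a few lines of numerical integration. In the sketch below the flat noise level and the 100 Hz–1 kHz band are toy stand-ins for a real interferometer curve, so only the scaling with $`h_c`$ and the bandwidth—not the particular value—should be taken seriously.

```python
import numpy as np

# Back-of-envelope S/N from Eqs. (10)-(12).  The flat noise level and the
# band edges are assumed toy values, not a real detector curve.
f = np.logspace(2.0, 3.0, 2000)                  # Hz, 100 Hz - 1 kHz
D = 20.0                                         # Mpc
h_c = 6e-22 * (f / 1000.0) ** 0.5 * (20.0 / D)   # Eq. (12)
h_rms = np.full_like(f, 1e-22)                   # assumed toy noise, Eq. (11)
snr2 = 2.0 * np.trapz((h_c / h_rms) ** 2, np.log(f))   # Eq. (10)
print(f"optimal S/N ~ {np.sqrt(snr2):.1f} (matched filtering, toy noise)")
```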
The optimal signal-to-noise ratio is only an upper limit—it assumes matched filtering, which requires precise tracking of the signal phase. While our knowledge of astrophysics will never be good enough to track an $`r`$-mode signal to within one cycle out of $`10^9`$, there are alternatives. The lower limit has been set by Brady and Creighton bc using the simplest possible search algorithm, patterned on the techniques used to find pulsar signals in electromagnetic data. Assuming the supernova has been observed optically and a sky position is available, the Doppler shifts due to the Earth's motion can be removed to obtain a signal which is sinusoidal but for the (slow) intrinsic frequency evolution of the source. Even without any modeling of this evolution, i.e. by expanding
$$f(t)=f_0\left(1+\sum _kf_kt^k\right)$$
(13)
for short Fourier transforms (integrating for a year is computationally too expensive) and combining the transforms in some way for different trial values of the spindown parameters $`f_k`$, it is possible to obtain one fifth of the optimal signal-to-noise. This is in spite of the fact that the search is computationally limited by the requirement that data analysis keep pace with data acquisition and by the fact that the $`r`$-modes evolve so quickly that many terms $`f_k`$ are needed. With constraints—even rough ones—from a physical model, the $`f_k`$ are no longer all independent and the efficiency of data analysis could be increased. Since event rate goes roughly as $`(S/N)^3`$, it is important to beat the lower limit.
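To see why so many spindown terms are needed, one can fit truncations of Eq. (13) to the saturation-phase frequency law and count the residual cycles. In the sketch below the law $`f(t)=f_0(1+t/\tau )^{-1/6}`$ follows from $`d\mathrm{\Omega }/dt\propto -\mathrm{\Omega }^7`$, but the values $`f_0=1`$ kHz and $`\tau =10^5`$ s are assumed for illustration.

```python
import numpy as np

# Why Eq. (13) needs many terms: fit polynomials of increasing order to
# f(t) = f0*(1 + t/tau)**(-1/6) (f0 and tau are assumed values) and count
# the leftover phase cycles accumulated over one day of data.
f0, tau, T = 1000.0, 1.0e5, 86400.0
s = np.linspace(0.0, 1.0, 20001)                 # dimensionless time t/T
f = f0 * (1.0 + s * T / tau) ** (-1.0 / 6.0)

for order in range(1, 7):
    c = np.polynomial.polynomial.polyfit(s, f, order)
    resid = np.abs(f - np.polynomial.polynomial.polyval(s, c))
    cycles = np.trapz(resid, s) * T              # integral of |delta f| dt
    print(f"order {order}: residual phase ~ {cycles:.1e} cycles")
```

Even a sixth-order truncation leaves far more than one cycle of unmodeled phase, which is why a grid over many $`f_k`$ (or a physical model) is unavoidable.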
A stochastic background from the superposition of many faint $`r`$-mode signals out to cosmological distances will also exist. However, it is much fainter than a single signal and thus detectable only by (advanced) LIGO-III o-98 ; fms99 .
## V Open Questions
We are now (at the end of 1999) in the midst of a renewed flurry of activity on $`r`$-mode astrophysics. Several effects neglected in the first simple scenario are being worked out. Some of them could damp the $`r`$-modes much more effectively than previously thought, pushing the detectability of the gravitational waves from LIGO-II to LIGO-III. However, this is far from certain and the astrophysicists are having an exciting time working it out. Here is a list of the effects that (I think) have the most direct impact on detection prospects.
Superfluid viscosity. One of the most eagerly awaited papers has been the calculation of the damping effects of "mutual friction", a process which paradoxically increases the viscous damping when a neutron star cools to a superfluid. At temperatures below about $`10^9`$K this viscous mechanism was expected to dominate, and the big question was whether the damping was sufficient to stabilize the $`r`$-modes in stars older than about a year—especially the low-mass x-ray binaries (see the review by G. Ushomirsky in this volume). The answer lm is a definite maybe. The damping timescale varies by several orders of magnitude, depending on a parameter of superfluid physics (the neutron-proton entrainment coefficient) which is as yet poorly known. More work is in progress, but recently mutual friction has been upstaged by other issues.
Relativistic effects. Most $`r`$-mode calculations to date have assumed Newtonian gravity. Relativity was thought to simply multiply various numbers by redshift factors of order unity, but there are two important qualitative differences with Newtonian gravity. First there is the claim by Kojima k98 that the $`r`$-mode frequency becomes smeared over a finite bandwidth. This claim is contradicted, however, by Lockitch l99 . With Andersson and Friedman, he alf finds that relativistic $`r`$-modes do however have an increased coupling to bulk viscosity similar to that of the "generalized $`r`$-modes" li99 ; lf99 of Newtonian stars, which are still unstable but less so.
Nonlinear fluid dynamics. At least two groups r- ; fsk are working on codes to numerically solve the fully nonlinear fluid equations for the $`r`$-modes and determine the saturation amplitude. However, the problem is complicated and the investment of coding and formalism is large, so expect results in a year or two at best. Order of magnitude arguments small have been made to claim that the $`r`$-modes saturate due to mode-mode coupling at a very small amplitude $`\alpha \sim 10^{-5}`$, which would render the signal undetectable. However, these arguments neglect the unique symmetries of the $`r`$-modes; and based on work on the $`g`$-modes of white dwarfs gw99 it seems that the $`r`$-modes could indeed grow much larger. Semi-analytical analyses mmc of mode-mode coupling may give some indications about mode saturation while we wait for numerical results.
Magnetic fields. If the growth of the $`r`$-modes leads to substantial differential rotation, it could wind up magnetic field lines frozen into the fluid of a young neutron star, amplifying any seed field and saturating the $`r`$-modes at a small amplitude. Two recent papers rls ; lu claim that the $`r`$-modes produce differential rotation, but this is a nonlinear effect which the authors have tried to treat with linear perturbation theory. Strictly speaking the gravitational radiation is also nonlinear (quadratic in $`\alpha `$), but the canonical energy and angular momentum are global quantities whose perturbation can be derived self-consistently from a Lagrangian principle. It is not clear how to do this for local, dynamical quantities such as vorticity.
Crust formation. Perhaps the most important new result is that the formation of a solid crust (below about $`10^{10}`$K) can act to strongly stabilize the mode. Bildsten and Ushomirsky bu find that shear viscosity in the fluid boundary layer just below the crust decreases the damping timescale by a factor of $`10^5`$–$`10^7`$. They conclude that the $`r`$-modes are completely suppressed in low-mass x-ray binaries and that the signal-to-noise ratio is reduced by a factor of three for newborn neutron stars. But it is not clear to me that this result is correct for newborn neutron stars. If an $`r`$-mode is already excited when the crust starts to form (of order a minute after the supernova), the intense and localized shear heating in the boundary layer can re-melt the crust if the pre-existing $`r`$-mode is strong enough. In this case, the outer layers stay in a self-regulating equilibrium at the melting temperature and the old model of the evolution is largely unaffected. I estimate that, in this case, "strong enough" means an $`r`$-mode amplitude of $`\alpha =10^{-3}`$. This points out some interesting questions for future research: First, what is the initial value of $`\alpha `$ when the $`r`$-modes first go unstable? The first model o-98 used gratuitously small values to make a point, but no one knows yet what are reasonable values. Also, what exactly is the melting temperature of a new crust? If it is $`8\times 10^9`$K rather than $`10^{10}`$K then the $`r`$-mode could have plenty of time to grow, and in astrophysics a 20% error is considered high precision.
Although I have skipped over many astrophysics issues, I realize even this short list may be bewildering to the experimenters and data analysts who are the main audience at the Amaldi Conference. If I had to distill my presentation into one sentence, I would say: Let the theorists argue for another two years; the $`r`$-modes are not as good a bet as binaries, but they may not be far behind.
## VI Acknowledgments
I am grateful to many colleagues for discussions of published and especially unpublished work which enabled me to give a good review: H. Asada, É. Flanagan, J.-A. Font, J. Friedman, J. Ipser, F. Lamb, Y. Levin, K. Lockitch, G. Mendell, S. Morsink, E. S. Phinney, L. Rezzolla, N. Stergioulas, K. Thorne, G. Ushomirsky, and especially L. Lindblom.
no-problem/9912/cond-mat9912232.html | ar5iv | text | # Relation between the superconducting gap energy and the two-magnon Raman peak energy in Bi2Sr2Ca1-xYxCu2O8+δ
## Abstract
The relation between the electronic excitation and the magnetic excitation for the superconductivity in Bi<sub>2</sub>Sr<sub>2</sub>Ca<sub>1-x</sub>Y<sub>x</sub>Cu<sub>2</sub>O<sub>8+δ</sub> was investigated by wide-energy Raman spectroscopy. In the underdoping region the $`B_{1\mathrm{g}}`$ scattering intensity is depleted below the two-magnon peak energy due to the "hot spots" effects. The depleted region shrinks in accordance with the decrease of the two-magnon peak energy as the carrier concentration increases. This two-magnon peak energy also determines the $`B_{1\mathrm{g}}`$ superconducting gap energy as $`2\mathrm{\Delta }=\alpha \hbar \omega _{\mathrm{Two}\mathrm{Magnon}}\simeq J_{\mathrm{effective}}`$ ($`\alpha =0.34`$–$`0.41`$) from underdoped to overdoped hole concentrations.
The effects of strong electron-spin interactions in high $`T_\mathrm{C}`$ superconductors manifest themselves both in charge excitation spectra and spin excitation spectra. For example, the angle-resolved photoemission spectroscopy (ARPES) revealed that electronic states near $`(\pi ,0)`$ ("hot spots") are depleted in the underdoping region, because large parts of the electronic states become incoherent as the result of interactions with electrons near $`(0,\pi )`$ via collective magnetic excitations near $`(\pi ,\pi )`$ . The resonance peak energy of this magnetic excitation, which is observed in the neutron scattering spectroscopy, appears in ARPES as the energy difference between the peak and the dip in the superconducting gap structure . Raman scattering can measure both charge and spin excitations simultaneously in the superconducting state, because the two-magnon scattering persists even in the metallic states . The present experiment aims to obtain the relation among the two-magnon scattering energy, which is the measure of the effective exchange energy, the superconducting gap energy, and the boundary energy of the depleted $`B_{1g}`$ scattering intensity in the underdoping region.
Electronic Raman scattering can detect selected parts in the $`k`$-space by choosing the combination of the incident and scattered light polarizations . When the polarizations of the incident and scattered light are parallel to the $`x`$ and $`y`$ directions, respectively, ($`(xy)`$-polarization configuration), the allowed symmetry is the $`B_{1\mathrm{g}}`$ and the observable electronic excitations are mainly near the region connecting (0, 0) and $`(\pi ,0)`$ in the $`k`$-space. The $`d(a^2-b^2)`$ superconducting gap has the maximum in the (0, 0)-$`(\pi ,0)`$ direction. Here the $`x`$ and $`y`$ axes are at $`45^{\circ }`$ from the $`a`$ and $`b`$ axes which are along the direction connecting Cu-O-Cu. In the $`(ab)`$-polarization configuration, the allowed symmetry is $`B_{2\mathrm{g}}`$ and the electronic excitations mainly near (0, 0)-$`(\pi ,\pi )`$ are detected. The $`d(a^2-b^2)`$ superconducting gap has the node in this direction. The observed gap energy is larger in $`B_{1\mathrm{g}}`$ than in $`B_{2\mathrm{g}}`$, consistently with the $`d(a^2-b^2)`$ superconductivity . The $`B_{1\mathrm{g}}`$ gap energy increases as the carrier concentration decreases , while the $`B_{2\mathrm{g}}`$ gap energy follows the $`T_\mathrm{C}`$ . On going to the overdoping region, the $`B_{2\mathrm{g}}`$ superconducting gap approaches the $`B_{1\mathrm{g}}`$ gap in energy, looking like a loss of the anisotropy , while the ARPES observed the clear gap node . The $`B_{1\mathrm{g}}`$ gap structure as well as the $`B_{1\mathrm{g}}`$ scattering intensity itself decreases, as the carrier concentration decreases . It can be attributed to the "hot spots" effects . Many Raman scattering studies on Bi<sub>2</sub>Sr<sub>2</sub>Ca<sub>1-x</sub>Y<sub>x</sub>Cu<sub>2</sub>O<sub>8+δ</sub> have been performed . However, the relation among the effective exchange energy obtained from the two-magnon scattering, the energy width of the incoherent electronic states estimated from the depleted electronic scattering intensity, and the superconducting gap energy has not been reported.
Single crystals of Bi<sub>2</sub>Sr<sub>2</sub>Ca<sub>1-x</sub>Y<sub>x</sub>Cu<sub>2</sub>O<sub>8+δ</sub> were synthesized by the travelling solvent floating zone method utilizing an infrared radiation furnace with quaternary oval mirrors. The starting composition for the feed and seed rods is Bi<sub>2.1</sub>Sr<sub>1.9</sub>Ca<sub>1-x</sub>Y<sub>x</sub>Cu<sub>2</sub>O<sub>8+δ</sub> and that for the solvent is Bi<sub>2.2</sub>Sr<sub>1.6</sub>Ca<sub>0.85</sub>Cu<sub>2.2</sub>O<sub>z</sub> . The crystals used in this experiment are an overdoped sample ($`x=0`$, mid-point of the transition $`T_\mathrm{C}`$=84 K, hole concentration$`/`$Cu $`p=0.20`$), an optimally doped sample ($`x=0.1`$, $`T_\mathrm{C}`$=95 K, $`p=0.16`$), and underdoped samples ($`x=0.2`$, $`T_\mathrm{C}`$=87 K, $`p=0.13`$ and $`x=0.3`$, $`T_\mathrm{C}`$=75 K, $`p=0.11`$). Here $`p`$ is estimated using $`T_\mathrm{C}/T_\mathrm{C}^{\mathrm{max}}=1-82.6(p-0.16)^2`$ with $`T_\mathrm{C}^{\mathrm{max}}=95`$ K .
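The parabolic $`T_\mathrm{C}`$–$`p`$ relation above is easy to invert numerically. The following is a minimal sketch (assuming the overdoped branch for the $`x=0`$ sample and the underdoped branch otherwise, as the text implies); it reproduces the quoted hole concentrations.

```python
import math

# Invert the empirical parabola T_C/T_C^max = 1 - 82.6*(p - 0.16)^2
# with T_C^max = 95 K, choosing the doping branch per sample.
def hole_concentration(Tc, Tc_max=95.0, overdoped=False):
    dp = math.sqrt((1.0 - Tc / Tc_max) / 82.6)
    return 0.16 + dp if overdoped else 0.16 - dp

for x, Tc, over in [(0.0, 84.0, True), (0.1, 95.0, False),
                    (0.2, 87.0, False), (0.3, 75.0, False)]:
    p = hole_concentration(Tc, overdoped=over)
    print(f"x = {x:.1f}: T_C = {Tc:4.1f} K  ->  p = {p:.2f}")
```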
Raman spectra were measured on fresh cleaved surfaces in a quasi-back scattering configuration utilizing a triple monochromator, a liquid-nitrogen-cooled CCD detector, and a 5145 Å Ar-ion laser. The laser beam was focused on the area of 50 $`\mu `$m$`\times `$500 $`\mu `$m. The wide-energy spectra were measured at the laser power 25 mW and the low-energy spectra at 10 mW. The increase of temperature was less than 2 K for the 10 mW excitation. The same spectra were measured four times to remove the cosmic ray noise by comparing the intensities at each channel. The wide-energy spectrum covering 12–7000 cm<sup>-1</sup> was obtained by connecting 45 spectra with narrow energy ranges after correcting for the spectroscopic efficiency of the optical system. The same spot on the surface was measured during the temperature variation by correcting the sample position viewed through a TV camera inside the spectrometer. The experimental fluctuation of the intensity is less than $`\pm 5`$ % throughout the present experiments.
Figure 1 shows the $`B_{1\mathrm{g}}`$, $`B_{2\mathrm{g}}`$ and $`A_{1\mathrm{g}}`$ Raman spectra in the superconducting state at 20 K and in the normal state at 100 K. The $`A_{1\mathrm{g}}`$ spectrum is obtained by subtracting the $`(xy)`$ spectrum from the $`(aa)`$ spectrum. As the hole concentration decreases, the $`B_{1\mathrm{g}}`$ two-magnon peak shifts to higher energy and connects with the two-magnon energy 3050 cm<sup>-1</sup> in the antiferromagnetic insulator Bi<sub>2</sub>Sr<sub>2</sub>Ca<sub>0.5</sub>Y<sub>0.5</sub>Cu<sub>2</sub>O<sub>8+δ</sub> . The intensity of the $`B_{1\mathrm{g}}`$ electronic scattering from 150 to 650 cm<sup>-1</sup> at $`x=0`$ is larger than the intensity of the two-magnon peak at 1000 cm<sup>-1</sup>. The low-energy electronic scattering intensity below the two-magnon peak energy decreases, as the hole concentration decreases. This depletion of the scattering intensity is consistent with the depletion of the low-energy spectral intensity at the "hot spots" near $`(\pi ,0)`$ observed by ARPES . The finding that the $`B_{1\mathrm{g}}`$ Raman intensity is depleted below the two-magnon peak energy indicates that the electron self-energy includes the process of transition from $`(\pi ,0)`$ to $`(0,\pi )`$ by emitting the $`(\pi ,\pi )`$ magnon and back to $`(\pi ,0)`$ by emitting the $`(-\pi ,-\pi )`$ magnon. The creation of the $`(\pi ,\pi )`$ and $`(-\pi ,-\pi )`$ magnons is the same as the process of the two-magnon Raman scattering peak. In the superconducting state at 20 K, the gap structure is strongly enhanced, as the hole concentration increases to the optimum and the overdoping region.
The hole concentration dependence of the $`B_{2\mathrm{g}}`$ spectrum is quite different from the $`B_{1\mathrm{g}}`$ spectrum. In the 100 K spectrum at $`x=0.3`$, the scattering intensity decreases monotonically from 200 cm<sup>-1</sup> to over 7000 cm<sup>-1</sup>. As the hole concentration increases above that of $`x=0.2`$, the scattering intensity decreases below 1500 cm<sup>-1</sup> in the form of a step function. The drop of the intensity is observed near 4000 cm<sup>-1</sup> besides the step-like decrease at 20 K. The interaction of electrons at $`(\pi /2,\pi /2)`$ and at $`(-\pi /2,-\pi /2)`$ via the $`(\pi ,\pi )`$ magnetic excitation may contribute to these structures. It may be noted that 1500 cm<sup>-1</sup> is the upper limit of the strong resonant two-phonon scattering. At 20 K, the gap structure appears with almost the same strength for all samples.
The $`A_{1\mathrm{g}}`$ spectrum shows an intermediate carrier concentration dependence between the $`B_{1\mathrm{g}}`$ and $`B_{2\mathrm{g}}`$ spectra. The electronic Raman intensity decreases below 1300 cm<sup>-1</sup> at $`x=0.3`$. The low-energy scattering intensity increases as the carrier concentration increases.
Figure 2 shows the $`B_{1\mathrm{g}}`$, $`B_{2\mathrm{g}}`$, and $`A_{1\mathrm{g}}`$ \[$`(xx)`$-$`(ab)`$\] spectra at 20 K and 100 K. It is obvious that the gap structure is strongly enhanced in the optimally doped and overdoped samples in the $`B_{1\mathrm{g}}`$ and $`A_{1\mathrm{g}}`$ spectra, but the intensity is almost the same in the $`B_{2\mathrm{g}}`$ spectra. In order to eliminate the phonon peaks, the difference spectra between 20 K and 100 K are plotted in Fig. 3. The peak energy is assigned to the superconducting gap energy. The gap energy increases in $`B_{1\mathrm{g}}`$ and $`A_{1\mathrm{g}}`$ as the hole concentration decreases, but decreases in $`B_{2\mathrm{g}}`$ except for the small increase from $`x=0`$ to $`x=0.1`$.
The energies of the superconducting gap and the two-magnon peak at 20 K are plotted in the upper panel of Fig. 4. Integrated relative intensities of the superconducting gap peak for $`I(20\mathrm{K})>I(100\mathrm{K})`$ are plotted in the lower panel of Fig. 4.
The $`k`$-dependent gap structure was investigated by ARPES . The results are (1) the gap energy on the Fermi surface has a node on the (0, 0)-$`(\pi ,\pi )`$ line irrespective of the hole concentration, (2) the maximum gap energy on the (0, 0)-$`(\pi ,0)`$ direction increases as hole concentration decreases, (3) the angular-dependent gap energy from the (0, 0)-$`(\pi ,\pi )`$ direction to the (0, 0)-$`(\pi ,0)`$ direction changes from the linearly increasing function of the angle near (0, 0)-$`(\pi ,\pi )`$ in the optimum and overdoped samples to the function with positive curvature in the underdoped samples. The hole concentration dependence of the $`B_{1\mathrm{g}}`$ and $`B_{2\mathrm{g}}`$ gap energies observed by Raman scattering is consistent with the results of ARPES, that is, the energy of the $`B_{1\mathrm{g}}`$ gap which represents mainly the gap near (0, 0)-$`(\pi ,0)`$ increases as the hole concentration decreases, and the energy of the $`B_{2\mathrm{g}}`$ gap which represents the gap near (0, 0)-$`(\pi ,\pi )`$ decreases. However, the following experimental results cannot be explained in the simple picture. The first is that the $`B_{2\mathrm{g}}`$ gap energy is almost the same as the $`B_{1\mathrm{g}}`$ gap energy in the overdoped sample at $`x=0`$, although the ARPES clearly observed the node on the (0, 0)-$`(\pi ,\pi )`$ direction . The second is the $`A_{1\mathrm{g}}`$ gap energy, which is predicted to lie in between the $`B_{1\mathrm{g}}`$ and $`B_{2\mathrm{g}}`$ gap energies, differently from the experimental results. The theory noted that the gap energy observed by Raman scattering is sensitive to the structure of the Fermi surface . Further theoretical investigation is expected.
The important point shown in Fig. 4 is that the $`B_{1\mathrm{g}}`$ gap energy is proportional to the $`B_{1\mathrm{g}}`$ two-magnon energy
$`2\mathrm{\Delta }(B_{1\mathrm{g}})=\alpha \hbar \omega (B_{1\mathrm{g}}\text{two-magnon}),`$ (1)
for the wide carrier concentration region from underdoping to overdoping. The proportionality coefficient $`\alpha `$ is about 0.4, gradually increasing from 0.34 at $`x=0`$ to 0.41 at $`x=0.3`$. The two-magnon peak energy is about 3$`J`$ in the insulating phase, where $`J`$ is the exchange energy between Cu atoms. If this relation holds into the metallic phase, the energy gap equals about the effective exchange energy. In addition the upper limit energy for the depletion in the $`B_{1\mathrm{g}}`$ spectrum is just the two-magnon peak energy in the underdoping region. These experimental results indicate that the superconductivity is directly induced by the interaction with the magnetic excitation at $`(\pi ,\pi )`$. As for the depletion picture in the underdoping region near the insulator-metal transition, we can present the example where there is no depletion in both $`B_{1\mathrm{g}}`$ and $`B_{2\mathrm{g}}`$ spectra. BaCo<sub>1-x</sub>Ni<sub>x</sub>S<sub>2</sub> is the case, where the spectrum like the $`B_{2\mathrm{g}}`$ spectrum of Fig. 1 appear abruptly both in $`B_{1\mathrm{g}}`$ and $`B_{2\mathrm{g}}`$ spectra, when the phase changes into the paramagnetic metallic state $`(x>0.22)`$ from the antiferromagnetic state $`(x<0.22)`$ . This abrupt increase of the electronic scattering intensity at the transition to the paramagnetic metallic phase is related to the enhancement of the electronic specific heat ($`T`$-linear coefficient $`(\gamma )`$ of the low-temperature specific heat) in the metallic phase near the transition. It is known that the antiferromagnetic insulator-metal transition in high $`T_\mathrm{C}`$ superconductors is characterized by no enhancement of $`\gamma `$ . Thus it can be concluded that $`\gamma `$ is not enhanced at the insulator-metal transition, if the electronic Raman scattering intensity, or the electronic density of states, is depleted by the โhot spotsโ effects, and vice versa.
In conclusion the present experiment elucidates the relation among the $`B_{1\mathrm{g}}`$ two-magnon peak energy, the $`B_{1\mathrm{g}}`$ superconducting gap energy, and the upper-limit energy of the depleted electronic density of states near $`(\pi ,0)`$ due to the "hot spots" effects. These experimental results indicate that the $`(\pi ,\pi )`$ magnetic excitation plays the crucial role for the high $`T_\mathrm{C}`$ superconductivity.
Acknowledgments - The authors would like to thank K. Takenaka for the characterization of single crystals. This work was supported by CREST of the Japan Science and Technology Corporation. |
no-problem/9912/hep-th9912273.html | ar5iv | text | # UT-KOMABA 99-21 hep-th/9912273 Brane Cube Realization of Three-dimensional Nonabelian Orbifolds
## 1 Introduction
During recent years supersymmetric field theories have been investigated using D-branes on a singular manifold such as orbifolds and a conifold. In this setup field theories arise as world volume theories of the D-branes. Aspects of field theories in question are encoded in geometric information of the singularity.
On the other hand there has been another approach of construction of supersymmetric field theories using branes which appear in string theory. In this framework quantities of field theories are determined by configurations of branes. This approach has an advantage that aspects of field theories can be visualized. For some cases relations between the two approaches have been discussed, and investigations on brane configurations are also helpful to the study of geometries around singularities probed by D-branes.
In this paper we study brane realizations of supersymmetric field theories corresponding to D-branes on an orbifold $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }`$ a nonabelian finite subgroup of $`SU(3)`$. Finite subgroups of $`SU(3)`$ are classified into ADE-like series just like the finite subgroups of $`SU(2)`$. For A-type subgroups, which are abelian, D-branes on the orbifold were investigated in and brane configurations corresponding to this case are known as brane box models . For nonabelian cases, gauge groups and field content were investigated in . Brane configurations for D-type subgroups $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ were also proposed in <sup>1</sup><sup>1</sup>1Brane configurations for other types of nonabelian orbifolds were discussed in .. The crucial point of the construction of the configuration is the correspondence between quiver diagrams of $`\mathrm{\Gamma }`$ and brane configurations for $`\mathbf{C}^3/\mathrm{\Gamma }`$. According to the fact that the quiver diagram of $`\mathrm{\Delta }(3n^2)`$ ($`\mathrm{\Delta }(6n^2)`$) is a $`\mathbf{Z}_3`$ ($`S_3`$) quotient of the quiver diagram of $`\mathbf{Z}_n\times \mathbf{Z}_n`$, the configuration for D-branes on $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$ ($`\mathbf{C}^3/\mathrm{\Delta }(6n^2)`$) is obtained from the configuration for D-branes on $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$ by taking a $`\mathbf{Z}_3`$ ($`S_3`$) quotient. In order that the $`\mathbf{Z}_3`$ quotient can be defined, the brane configuration for $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$ must have a $`\mathbf{Z}_3`$ symmetry. In , the $`\mathbf{Z}_3`$ symmetry was realized by a web of $`(p,q)`$ 5-branes.
It is interesting to study whether a similar approach can be applied to the cases in which $`\mathrm{\Gamma }`$ are exceptional type subgroups of $`SU(3)`$. We can see that quiver diagrams of such subgroups can be obtained from that of $`\mathrm{\Delta }(3\times 3^2)`$ by certain quotients: for instance, the quiver diagram of an exceptional type subgroup $`\mathrm{\Sigma }(648)`$ is obtained by a quotient by the tetrahedral group. From the correspondence between quiver diagrams and brane configurations, we expect that a brane configuration for $`\mathbf{C}^3/\mathrm{\Sigma }(648)`$ is a quotient of the configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3\times 3^2)`$ by the tetrahedral group. It is however difficult to realize such a quotient on the brane configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3\times 3^2)`$ since it has essentially two-dimensional structure and it is not clear how the group acts on it. The situation is similar to the case of the brane configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$. In that case, we constructed a brane configuration for $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$ with a $`\mathbf{Z}_3`$ symmetry in order that we can define a $`\mathbf{Z}_3`$ quotient. In the present case, it would be natural that the brane configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3\times 3^2)`$ on which the tetrahedral group acts has three-dimensional structure and manifest symmetry under the tetrahedral group.
In this paper, as a step to the construction of brane configurations corresponding to the E-type subgroups, we propose brane configurations for $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$ and $`\mathbf{C}^3/\mathrm{\Delta }(6n^2)`$ with three-dimensional structure. The point of the construction is to lift the quiver diagram of $`\mathbf{Z}_n\times \mathbf{Z}_n`$, which has essentially two-dimensional structure, into three dimensions. The configuration corresponding to this quiver diagram consists of three kinds of NS5-branes intersecting each other and D4-branes whose three directions are bounded by NS5-branes. Naively, the configuration seems to be inappropriate since the number of supersymmetries of the configuration is half of the number the field theory should have: the configuration has two supercharges while field theories of D-branes on an orbifold $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }\subset SU(3)`$ have four supercharges. We argue, however, that the supersymmetry of the field theory on D-branes is enhanced and the number of supersymmetries is the same as that of D-branes on $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }\subset SU(3)`$. By taking quotients on the configuration, brane configurations for $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$ and $`\mathbf{C}^3/\mathrm{\Delta }(6n^2)`$ can be constructed.
The organization of this paper is as follows. In Section 2, we present the three-dimensional realization of the quiver diagrams of $`\mathrm{\Gamma }=\mathbf{Z}_n\times \mathbf{Z}_n,\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$. In Section 3, we first review relations between quiver diagrams and brane configurations. Based on the relations, we construct brane configurations corresponding to D1-branes on the orbifolds $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$, $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$ and $`\mathbf{C}^3/\mathrm{\Delta }(6n^2)`$. We also argue the enhancement of supersymmetries of the field theory on D-branes. In Section 4, we comment on a possibility of constructing configurations for $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }`$ an E-type subgroup of $`SU(3)`$ based on the configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3\times 3^2)`$ constructed in Section 3.
## 2 Three-dimensional realization of quiver diagrams
In this section, we draw quiver diagrams of finite subgroups of $`SU(3)`$ in three-dimensional form. A quiver diagram of a finite group $`\mathrm{\Gamma }`$, which consists of nodes and arrows, represents the algebraic structure of irreducible representations of $`\mathrm{\Gamma }`$,
$$R_3\otimes R^a=\underset{b}{\oplus }n_{ab}R^b.$$
(2.1)
Here $`R^a`$ denotes the irreducible representations of $`\mathrm{\Gamma }`$ and $`R_3`$ is a faithful three-dimensional representation of $`\mathrm{\Gamma }`$. Each node of the quiver diagram represents an irreducible representation of $`\mathrm{\Gamma }`$, and the coefficient $`n_{ab}`$ is the number of arrows starting from the $`a`$-th node and ending at the $`b`$-th node.
### 2.1 Quiver diagrams of $`\mathbf{Z}_n\times \mathbf{Z}_n`$
We first present the quiver diagram of an abelian subgroup $`\mathbf{Z}_n\times \mathbf{Z}_n`$ of $`SU(3)`$. Irreducible representations of $`\mathbf{Z}_n\times \mathbf{Z}_n`$ consist of $`n^2`$ one-dimensional representations $`R_1^{(l_1,l_2)}`$ with $`(l_1,l_2)\in \mathbf{Z}_n\times \mathbf{Z}_n`$. We assign an irreducible representation $`R_1^{(l_1,l_2)}`$ to points of a two-dimensional lattice $`\mathbf{Z}\times \mathbf{Z}`$ represented as
$$(n_1,n_2)=(l_1+n\mathbf{Z},l_2+n\mathbf{Z}).$$
(2.2)
If we choose the three-dimensional representation to be
$$R_3=R_1^{(1,0)}\oplus R_1^{(0,1)}\oplus R_1^{(-1,-1)},$$
(2.3)
the decomposition of the product of $`R_3`$ and $`R_1^{(l_1,l_2)}`$ becomes
$$R_3\otimes R_1^{(l_1,l_2)}=R_1^{(l_1+1,l_2)}\oplus R_1^{(l_1,l_2+1)}\oplus R_1^{(l_1-1,l_2-1)}.$$
(2.4)
This equation implies that there are three arrows which start from the node $`(l_1,l_2)`$; the three end points are $`(l_1+1,l_2)`$, $`(l_1,l_2+1)`$ and $`(l_1-1,l_2-1)`$. The quiver diagram of the group $`\mathbf{Z}_n\times \mathbf{Z}_n`$ is depicted in Figure 1.
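The arrow rule of Eq. (2.4) is easy to mechanize. The following minimal sketch (the function name and the choice $`n=3`$ are our illustrative choices) generates the $`\mathbf{Z}_n\times \mathbf{Z}_n`$ quiver as an adjacency map.

```python
from itertools import product

# Generator for the Z_n x Z_n quiver of Eq. (2.4): each node (l1, l2)
# emits three arrows shifted by the weights of R_3 in Eq. (2.3).
def quiver(n):
    steps = [(1, 0), (0, 1), (-1, -1)]
    return {(l1, l2): [((l1 + a) % n, (l2 + b) % n) for a, b in steps]
            for l1, l2 in product(range(n), repeat=2)}

print(quiver(3)[(0, 0)])   # -> [(1, 0), (0, 1), (2, 2)]; note (-1,-1) = (2,2) mod 3
```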
Now we redraw the quiver diagram of Figure 1 in three-dimensional form. The idea is that we consider the quiver diagram of Figure 1 as a projection of a certain quiver diagram in three-dimensional space onto a certain plane. Concretely speaking, we consider a part of the quiver diagram in Figure 2(a) as a projection of a cube in Figure 2(b) along a diagonal direction of the cube.
Under this consideration, the assignment (2.2) of the irreducible representations to a two-dimensional lattice $`\mathbf{Z}^2`$ is translated into the assignment of $`R_1^{(l_1,l_2)}`$ to points on a three-dimensional lattice $`\mathbf{Z}^3`$ represented as
$$(n_1,n_2,n_3)=(l_1+n\mathbf{Z}+m,l_2+n\mathbf{Z}+m,m),$$
(2.5)
where $`m`$ is an integer. As indicated in Figure 2(b), three arrows going from the nodes $`(n_1,n_2,n_3)`$ have end points at $`(n_1+1,n_2,n_3)`$, $`(n_1,n_2+1,n_3)`$ and $`(n_1,n_2,n_3+1)`$. An example of the quiver diagram is depicted in Figure 3.
Note that the quiver diagram is uniform along the direction $`n_1=n_2=n_3`$, which becomes a key point in the discussion of supersymmetry enhancement.
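The lift can be summarized in a two-line function: by Eq. (2.5) a lattice point carries the label $`(l_1,l_2)=(n_1-n_3,n_2-n_3)`$ mod $`n`$, which is constant along the diagonal. The sketch below (an illustration, not part of the construction) checks this and shows that the three cubic arrows reproduce the three two-dimensional arrows.

```python
# Label map of the lift, Eq. (2.5): (n1, n2, n3) carries
# (l1, l2) = (n1 - n3, n2 - n3) mod n, constant along the (1,1,1) diagonal.
def label(node, n):
    n1, n2, n3 = node
    return ((n1 - n3) % n, (n2 - n3) % n)

n, node = 3, (2, 0, 1)
shifted = tuple(c + 1 for c in node)
assert label(node, n) == label(shifted, n)       # uniform along the diagonal
for step in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:   # the three cubic arrows
    target = tuple(a + b for a, b in zip(node, step))
    print(label(node, n), "->", label(target, n))
```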
### 2.2 Quiver diagrams of $`\mathrm{\Delta }(3n^2)`$
As discussed in , the quiver diagram of the group $`\mathrm{\Delta }(3n^2)`$ is obtained by a $`\mathbf{Z}_3`$ quotient of the quiver diagram of $`\mathbf{Z}_n\times \mathbf{Z}_n`$. The $`\mathbf{Z}_3`$ acts on the label of irreducible representations as
$$(l_1,l_2)\to (-l_2,l_1-l_2)\to (-l_1+l_2,-l_1).$$
(2.6)
If the three nodes related by the $`\mathbf{Z}_3`$ action correspond to different representations, the $`\mathbf{Z}_3`$ quotient means that the three nodes must be identified. For instance, the three nodes labeled by (3,1), (3,2) and (2,1) must be identified if $`n=4`$. Such nodes represent a three-dimensional irreducible representation of $`\mathrm{\Delta }(3n^2)`$. For representations invariant under the $`\mathbf{Z}_3`$ action such as $`R_1^{(0,0)}`$, the $`\mathbf{Z}_3`$ quotient must be understood as a split of the node into three nodes, each of which represents a one-dimensional irreducible representation of $`\mathrm{\Delta }(3n^2)`$. The quiver diagram of the group $`\mathrm{\Delta }(3n^2)`$ is depicted in Figure 4(a).
We now consider a three-dimensional realization of the quiver diagram. The idea is the same as the two-dimensional realization of the quiver diagram. That is, the quiver diagram of the group $`\mathrm{\Delta }(3n^2)`$ is obtained by a $`\mathbf{Z}_3`$ quotient of the quiver diagram of $`\mathbf{Z}_n\times \mathbf{Z}_n`$ given in Figure 3. The action of $`\mathbf{Z}_3`$ on the lattice points of $`\mathbf{Z}^3`$ is defined as follows.
$$(n_1,n_2,n_3)\to (n_2,n_3,n_1)\to (n_3,n_1,n_2).$$
(2.7)
It means that $`\mathbf{Z}_3`$ acts on the three-dimensional space as a $`2\pi /3`$ rotation about the line specified by $`n_1=n_2=n_3`$. Combining with the assignment (2.5) of the irreducible representations of $`\mathbf{Z}_n\times \mathbf{Z}_n`$ on the lattice $`\mathbf{Z}^3`$, one can see that the action (2.7) is equivalent to the action of $`\mathbf{Z}_3`$ given in (2.6). The three-dimensional version of the quiver diagram is depicted in Figure 4(b).
Now we comment on the special case in which $`n/3`$ is an integer. When $`n/3`$ is not an integer, the only fixed node under the $`\mathbf{Z}_3`$ action is $`(0,0)`$. On the other hand, when $`n/3`$ is an integer, there are additional fixed nodes $`(2n/3,n/3)`$ and $`(n/3,2n/3)`$, as one can see from Figure 5.
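Both statements are easy to verify by enumeration. The sketch below applies the action (2.6) to all $`n^2`$ labels and lists fixed nodes and orbits for $`n=4`$ and $`n=6`$; the orbit count is an extra output we added for illustration.

```python
# Enumerate the Z_3 action (2.6): (0,0) is always fixed, and
# (2n/3, n/3), (n/3, 2n/3) appear exactly when 3 divides n.
def z3(l, n):
    l1, l2 = l
    return ((-l2) % n, (l1 - l2) % n)

for n in (4, 6):
    nodes = [(i, j) for i in range(n) for j in range(n)]
    fixed = [l for l in nodes if z3(l, n) == l]
    orbits = {frozenset({l, z3(l, n), z3(z3(l, n), n)}) for l in nodes}
    print(f"n = {n}: fixed nodes {fixed}, {len(orbits)} orbits")
```

For $`n=4`$ the same code reproduces the orbit {(3,1), (3,2), (2,1)} quoted above.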
### 2.3 Quiver diagrams of $`\mathrm{\Delta }(6n^2)`$
The quiver diagram of the group $`\mathrm{\Delta }(6n^2)`$ is obtained by a further $`\mathbf{Z}_2`$ quotient on the quiver diagram of $`\mathrm{\Delta }(3n^2)`$. The $`\mathbf{Z}_2`$ acts on the label of irreducible representations as
$$(l_1,l_2)\to (l_2,l_1).$$
(2.8)
It means that $`\mathbf{Z}_2`$ acts as a reflection with respect to the line extending along the diagonal direction. If the two nodes related by the $`\mathbf{Z}_2`$ action correspond to different representations, they must be identified. For instance, the nodes (1,2) and (2,1), both of which are three-dimensional irreducible representations of $`\mathrm{\Delta }(3n^2)`$ for $`n=4`$, must be identified. Such nodes represent a six-dimensional irreducible representation of $`\mathrm{\Delta }(6n^2)`$. For nodes invariant under the $`\mathbf{Z}_2`$ action, $`\mathbf{Z}_2`$ acts as a split of the node into a certain set of nodes. For details on the irreducible representations of $`\mathrm{\Delta }(6n^2)`$, see . The quiver diagram of the group $`\mathrm{\Delta }(6n^2)`$ is depicted in Figure 6(a).
In the three-dimensional version of the quiver diagram, $`\mathbf{Z}_2`$ acts on a three-dimensional lattice $`\mathbf{Z}^3`$ as
$$(n_1,n_2,n_3)\to (n_2,n_1,n_3).$$
(2.9)
It means that $`\mathbf{Z}_2`$ acts as a reflection with respect to the plane specified by $`n_1=n_2`$. Combining with the $`\mathbf{Z}_3`$ action (2.7), it is equivalent to the reflections with respect to the planes $`n_2=n_3`$ and $`n_3=n_1`$. The three-dimensional version of the quiver diagram of the group $`\mathrm{\Delta }(6n^2)`$ is depicted in Figure 6(b).
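The same enumeration extends to the full $`S_3`$ quotient: closing the set of images under the $`\mathbf{Z}_3`$ rotation (2.6) and the exchange (2.8) produces the orbits that label the irreducible content of $`\mathrm{\Delta }(6n^2)`$. The sketch below is illustrative only; for $`n=4`$ it reproduces, e.g., the six-element orbit containing (1,2) and (2,1).

```python
# Orbits under the full S_3 generated by the rotation (2.6) and the
# exchange (2.8), computed by closure.
def s3_orbit(l, n):
    orbit, frontier = {l}, [l]
    while frontier:
        l1, l2 = frontier.pop()
        for img in (((-l2) % n, (l1 - l2) % n), (l2, l1)):
            if img not in orbit:
                orbit.add(img)
                frontier.append(img)
    return frozenset(orbit)

n = 4
orbits = {s3_orbit((i, j), n) for i in range(n) for j in range(n)}
for o in sorted(map(sorted, orbits)):
    print(o)
```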
## 3 Brane cube configurations for $`\mathbf{C}^3/\mathrm{\Gamma }`$
As discussed in , brane box type realization of D-brane gauge theory on an orbifold $`\mathbf{C}^3/\mathrm{\Gamma }`$ has a direct correspondence to the quiver diagram of $`\mathrm{\Gamma }`$. In the D-brane gauge theory on the orbifold $`\mathbf{C}^3/\mathrm{\Gamma }`$, each node in the quiver diagram corresponds to a gauge group factor $`U(N_a)`$ where $`N_a`$ is the dimension of the irreducible representation. In the brane box type realization of the gauge theory, it corresponds to a box with $`N_a`$ D-branes. Arrows in the quiver diagram represent matter contents of the D-brane gauge theory on $`\mathbf{C}^3/\mathrm{\Gamma }`$. That is, an arrow from the node $`a`$ to the node $`b`$ represents a bifundamental matter transforming as $`(N_a,\overline{N}_b)`$ under $`U(N_a)\times U(N_b)`$. In the brane box type configuration, it comes from an oriented open string starting from D-branes on the $`a`$-th box and ending on D-branes on the $`b`$-th box. It implies that the $`a`$-th box and $`b`$-th box must adjoin each other. These two boxes are separated by another brane, for example, NS 5-branes or $`(p,q)`$ 5-branes. Due to the orientation of such branes at the boundary, only one orientation of open strings is allowed and it induces a particular set of bifundamental matters . Thus the arrows in the quiver diagram indicate how to connect boxes with D-branes. We summarize the correspondence in Table 1.
It is also important that the configuration provides the same supersymmetry as that of the D-brane gauge theory on the orbifold. Several brane configurations satisfying these requirements were constructed. In , a brane configuration is constructed for $`\mathrm{\Gamma }=\mathbf{Z}_n\times \mathbf{Z}_m`$ by using D5-branes and two kinds of NS5-branes. In , brane configurations were constructed for the nonabelian groups $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ as well as the abelian group $`\mathbf{Z}_n\times \mathbf{Z}_n`$ using $`(p,q)`$ 5-branes and D3-branes.
In this section, we construct another kind of brane configurations for $`\mathrm{\Gamma }=\mathbf{Z}_n\times \mathbf{Z}_n`$, $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ based on the three-dimensional version of the quiver diagrams given in the last section.
### 3.1 Brane cube configurations for $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$
In this subsection, we construct brane configurations for $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$ based on the three-dimensional version of the quiver diagram given in Figure 3. As the nodes in the quiver diagram lie at the lattice points of $`\mathbf{Z}^3`$, boxes also lie at the lattice points of $`\mathbf{Z}^3`$. The node at $`(n_1,n_2,n_3)`$ is connected to the nodes at $`(n_1+1,n_2,n_3)`$, $`(n_1,n_2+1,n_3)`$ and $`(n_1,n_2,n_3+1)`$ in the quiver diagram by three outgoing arrows, so the box at $`(n_1,n_2,n_3)`$ must adjoin the boxes at $`(n_1+1,n_2,n_3)`$, $`(n_1,n_2+1,n_3)`$ and $`(n_1,n_2,n_3+1)`$. A natural brane configuration satisfying these requirements is depicted in Figure 7. Note that the cube at $`(n_1,n_2,n_3)`$ has the same label as the cube at $`(n_1+1,n_2+1,n_3+1)`$.
The configuration consists of the following branes:
* NS5-branes located along 012345 directions.
* NS′5-branes located along 012367 directions.
* NS″5-branes located along 014567 directions.
* D4-branes located along 01246 directions.
D4-branes are bounded in the direction 6 by the NS5-branes, in the direction 4 by the NS′5-branes, and in the direction 2 by the NS″5-branes. Thus the non-compact directions of the D4-branes are 0 and 1, and hence the low energy theory becomes two-dimensional. The numbers written in the boxes are the labels of the irreducible representations of $`\mathbf{Z}_n\times \mathbf{Z}_n`$. Due to the orientation of NS5-branes, only one orientation of open strings is allowed and it gives a bifundamental matter corresponding to the arrow of the quiver diagram.
There is however a subtlety on the number of supersymmetries. The configuration consists of four kinds of branes, each of which breaks 1/2 of the supersymmetries. Thus the brane cube configuration has two supercharges. Since the number of supersymmetries of the gauge theory of D-branes on $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }\subset SU(3)`$ is four, it seems that the two gauge theories have different numbers of supersymmetries. As we will discuss, however, the low energy field theory of D-branes obtained from the brane cube configuration has the required number of supersymmetries due to a certain enhancement of supersymmetry.
To explain the supersymmetry enhancement, we derive the brane cube configuration from a different point of view. It is obtained from D1-branes on $`\mathbf{C}^4/\mathrm{\Gamma }`$ by performing T-duality three times. In fact, a similar but different configuration was discussed in . Both of them have a property that the configuration is uniform along one direction. It is the key point of the supersymmetry enhancement.
We start with the D1-brane gauge theory on $`\mathbf{C}^4/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }`$ an abelian subgroup of $`SU(4)`$ . We consider $`|\mathrm{\Gamma }|`$ D1-branes on $`\mathbf{C}^4`$ and let the D1-branes extend along 01 directions. The low energy effective action of the D1-brane is given by the dimensional reduction of ten-dimensional $`\mathcal{N}=1`$ supersymmetric $`U(|\mathrm{\Gamma }|)`$ Yang-Mills theory to two dimensions. The theory contains gauge field $`A`$, four complex scalar fields $`Z^\mu `$ ($`\mu =1,2,3,4`$), eight left-handed fermions $`\lambda _{-}`$ and eight right-handed fermions $`\psi _+`$. These fields transform in the adjoint of $`U(|\mathrm{\Gamma }|)`$. Then we project the theory onto $`\mathrm{\Gamma }`$ invariant states. To perform the projection, one must define how $`\mathrm{\Gamma }`$ acts on $`\mathbf{C}^4`$. This is specified by a four-dimensional representation $`R_4`$. We denote the decomposition of $`R_4`$ into irreducible representations $`R^a`$ ($`a=1,\mathrm{},|\mathrm{\Gamma }|`$) of $`\mathrm{\Gamma }`$ as
$$R_4=R^{a_1}\oplus R^{a_2}\oplus R^{a_3}\oplus R^{a_4}.$$
(3.1)
(As $`\mathrm{\Gamma }`$ is abelian, its irreducible representations are one-dimensional.) Each irreducible representation $`R^{a_\mu }`$ acts on the coordinates $`z^\mu `$ of $`\mathbf{C}^4`$ as $`z^\mu \to R^{a_\mu }z^\mu `$. The requirement that $`\mathrm{\Gamma }`$ is a subgroup of $`SU(4)`$ can be stated as
$$R^{a_1}\otimes R^{a_2}\otimes R^{a_3}\otimes R^{a_4}=R^0$$
(3.2)
where $`R^0`$ is the trivial representation. If we write $`R^aR^b=R^{ab}`$, the condition is represented as $`R^{a_4}=(R^{a_1a_2a_3})^{-1}`$. To define D-branes on $`\mathbf{C}^4/\mathrm{\Gamma }`$, one must also determine how $`\mathrm{\Gamma }`$ acts on the Chan-Paton indices. We define the action of $`\mathrm{\Gamma }`$ on the Chan-Paton indices as the regular representation of $`\mathrm{\Gamma }`$. Note that the theory obtained by the projection is a two-dimensional (0,2) supersymmetric theory with the gauge group $`U(1)^{|\mathrm{\Gamma }|}=\mathrm{\Pi }_aU(1)_a`$, where $`U(1)_a`$ represents the gauge group $`U(1)`$ corresponding to the representation $`R^a`$.
The field content surviving the projection is as follows:
* a gauge field $`A_a`$ of $`U(1)_a`$,
* four complex bosons $`Z_a^\mu `$ which transform in the $`(\Box ,\overline{\Box })`$ of $`U(1)_a\times U(1)_{aa_\mu }`$,
* a left-handed Dirac fermion $`\lambda _a`$ which transforms in the adjoint of $`U(1)_a`$,
* three left-handed Dirac fermions $`\lambda _a^{\mu \nu }`$ which transform in the $`(\Box ,\overline{\Box })`$ of $`U(1)_a\times U(1)_{aa_\mu a_\nu }`$,
* four right-handed Dirac fermions $`\psi _{+a}^\mu `$ which transform in the $`(\Box ,\overline{\Box })`$ of $`U(1)_a\times U(1)_{aa_\mu }`$,
where $`a`$ runs from 1 to $`|\mathrm{\Gamma }|`$. Here $`\lambda _a^{\mu \nu }`$ is antisymmetric in $`\mu \nu `$, so there are six fields for each $`a`$, three of which are independent. There is an ambiguity in determining the three independent left-handed Dirac fermions. We take the three fields $`\lambda _a^{i4}`$ ($`i=1,2,3`$) to be independent.
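This spectrum can be read off mechanically from the projection. In the sketch below we take the example $`\mathrm{\Gamma }=\mathbf{Z}_3`$ with charges $`a_\mu =(1,1,1,0)`$ (an assumption satisfying Eq. (3.2)); with the regular representation on the Chan-Paton indices, the block $`(a,b)`$ of $`Z^\mu `$ survives precisely when $`b=a+a_\mu `$, which reproduces the bifundamentals listed above.

```python
from itertools import product

# Orbifold-projection sketch for abelian Gamma = Z_N with additive charges
# a_mu: the block (a, b) of the adjoint field Z^mu survives the projection
# iff b = a + a_mu mod N.
N, a_mu = 3, (1, 1, 1, 0)
assert sum(a_mu) % N == 0                        # Gamma inside SU(4), Eq. (3.2)

for mu, q in enumerate(a_mu, start=1):
    blocks = [(a, b) for a, b in product(range(N), repeat=2)
              if (a + q) % N == b]
    print(f"Z^{mu}: surviving blocks {blocks}")
```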
These fields form (0,2) multiplets as follows <sup>2</sup><sup>2</sup>2For details on supersymmetric field theories in two-dimensions, see .:
* The gauge field $`A_a`$, the left-handed fermion $`\lambda _a`$ and an auxiliary field $`D_a`$ form a (0,2) gauge multiplet $`V_a`$,
$$V_a=A_{0a}-A_{1a}-2i\theta ^+\overline{\lambda }_a-2i\overline{\theta }^+\lambda _a+2\theta ^+\overline{\theta }^+D_a,$$
(3.3)
* Four complex bosons $`Z_a^\mu `$ and four right-handed fermions $`\psi _{+a}^\mu `$ form four (0,2) chiral multiplets $`\mathrm{\Phi }_a^\mu `$,
$$\mathrm{\Phi }_a^\mu =Z_a^\mu +\sqrt{2}\theta ^+\psi _{+a}^\mu -i\theta ^+\overline{\theta }^+(D_0+D_1)Z_a^\mu ,$$
(3.4)
* Three left-handed fermions $`\lambda _a^{i4}`$ and auxiliary fields $`G_a^i`$ form three (0,2) Fermi multiplets $`\mathrm{\Lambda }_a^i`$,
$$\mathrm{\Lambda }_a^i=\lambda _a^{i4}-\sqrt{2}\theta ^+G_a^i-i\theta ^+\overline{\theta }^+(D_0+D_1)\lambda _a^{i4}-\sqrt{2}\overline{\theta }^+E_a^i$$
(3.5)
where $`D_\alpha `$ is the supersymmetric derivative and $`E_a`$ is a function of chiral superfields which will be defined below.
The two-dimensional (0,2) supersymmetric gauge theories are described by a Lagrangian of the form,
$$L=L_{gauge}+L_{ch}+L_F+L_J+L_{D,\theta },$$
(3.6)
$$L_{gauge}=\frac{1}{8e_a^2}\int d^2xd\theta ^+d\overline{\theta }^+\overline{\mathrm{\Upsilon }}_a\mathrm{\Upsilon }_a,$$
(3.7)
$$L_{ch}=-\frac{i}{2}\int d^2xd\theta ^+d\overline{\theta }^+\overline{\mathrm{\Phi }}_a^\mu (\mathcal{D}_0-\mathcal{D}_1)\mathrm{\Phi }_a^\mu ,$$
(3.8)
$$L_F=-\frac{1}{2}\int d^2xd\theta ^+d\overline{\theta }^+\overline{\mathrm{\Lambda }}_a^i\mathrm{\Lambda }_a^i,$$
(3.9)
$$L_J=-\frac{1}{\sqrt{2}}\int d^2xd\theta ^+\mathrm{\Lambda }_a^iJ_a^i|_{\overline{\theta }^+=0}-h.c.,$$
(3.10)
$$L_{D,\theta }=\frac{t_a}{4}\int d^2xd\theta ^+\mathrm{\Upsilon }_a|_{\overline{\theta }^+=0}+h.c.,$$
(3.11)
where $`\mathrm{\Upsilon }_a`$ is the field strength of the superspace gauge field $`V_a`$, $`\mathcal{D}_\alpha `$ is the gauge covariant derivative and $`t_a=\theta _a/2\pi +ir_a`$ represents a Fayet-Iliopoulos parameter and a theta parameter of $`U(1)_a`$. The interactions of the theory are completely defined by the functions $`J_a^i`$ and $`E_a^i`$. In the present case, $`J_a^i`$ and $`E_a^i`$ take the following form,
$$J_a^i=\epsilon _{ijk}\mathrm{\Phi }_{aa_i}^j\mathrm{\Phi }_{aa_ia_j}^k,$$
(3.12)
$$E_a^i=\mathrm{\Phi }_a^4\mathrm{\Phi }_{aa_4}^i-\mathrm{\Phi }_a^i\mathrm{\Phi }_{aa_i}^4.$$
(3.13)
They satisfy the following relation
$$\sum _a\sum _iJ_a^iE_a^i=0.$$
(3.14)
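The relation can be checked symbolically. The sketch below verifies Eq. (3.14) for the example $`\mathrm{\Gamma }=\mathbf{Z}_3`$ with charges $`a_\mu =(1,1,1,0)`$ (our choice; the identity is claimed for any admissible charges), treating the lowest components of the superfields as commuting symbols.

```python
import sympy as sp

# Symbolic check of Eq. (3.14) for Gamma = Z_3 with additive charges
# a = (1, 1, 1, 0); Phi^mu_g are commuting symbols, labels are mod 3.
N, a = 3, (1, 1, 1, 0)
Phi = {(mu, g): sp.Symbol(f"Phi{mu}_{g}")
       for mu in (1, 2, 3, 4) for g in range(N)}

total = sp.Integer(0)
for g in range(N):
    for i in (1, 2, 3):
        J = sum(sp.LeviCivita(i, j, k)
                * Phi[(j, (g + a[i - 1]) % N)]
                * Phi[(k, (g + a[i - 1] + a[j - 1]) % N)]
                for j in (1, 2, 3) for k in (1, 2, 3))
        E = (Phi[(4, g)] * Phi[(i, (g + a[3]) % N)]
             - Phi[(i, g)] * Phi[(4, (g + a[i - 1]) % N)])
        total += J * E
print(sp.expand(total))    # prints 0
```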
To realize the gauge theory in terms of brane configurations, we perform T-dualities three times along $`U(1)`$ orbits associated with three complex planes <sup>3</sup><sup>3</sup>3To be precise, we must replace $`\mathbf{C}^4/\mathrm{\Gamma }`$ by a manifold with the same singularity but different asymptotics to make the radius of the $`U(1)`$ orbits finite at infinity.. There is an ambiguity in choosing three directions out of the four complex coordinates. If we T-dualize along three compact directions associated with $`z^1`$, $`z^2`$ and $`z^3`$, we obtain a brane cube configuration depicted in Figure 8.
It consists of three kinds of NS5-branes and D4-branes. The (0,2) chiral multiplet $`\mathrm{\Phi }_a^\mu `$ transforming in the $`(\Box _a,\overline{\Box }_{aa_\mu })`$ comes from open strings extending from the box labeled by $`a`$ to the box labeled by $`aa_\mu `$. We can represent the four chiral multiplets $`\mathrm{\Phi }_a^1`$, $`\mathrm{\Phi }_a^2`$, $`\mathrm{\Phi }_a^3`$ and $`\mathrm{\Phi }_a^4`$ by arrows with components $`(1,0,0)`$, $`(0,1,0)`$, $`(0,0,1)`$ and $`(-1,-1,-1)`$. The (0,2) Fermi multiplet $`\mathrm{\Lambda }_a^i`$ transforming in the $`(\Box _a,\overline{\Box }_{aa_ia_4})`$ comes from open strings from the box $`a`$ to the box $`aa_ia_4`$. We can represent the three Fermi multiplets $`\mathrm{\Lambda }_a^1`$, $`\mathrm{\Lambda }_a^2`$ and $`\mathrm{\Lambda }_a^3`$ by arrows with components $`(0,-1,-1)`$, $`(-1,0,-1)`$ and $`(-1,-1,0)`$. The arrows are depicted in Figure 9.
Now we restrict the model to the cases with $`R^{a_4}=R^0`$. Then the condition for $`\mathrm{\Gamma }\subset SU(4)`$ in (3.2) becomes
$$R^{a_1}R^{a_2}R^{a_3}=R^0.$$
(3.15)
It means that the orbifold $`\mathbf{C}^4/\mathrm{\Gamma }`$ is $`\mathbf{C}^3/\mathrm{\Gamma }\times \mathbf{C}`$, where $`\mathrm{\Gamma }`$ now acts on $`\mathbf{C}^3`$ as a subgroup of $`SU(3)`$. In this case, the supersymmetry of the two-dimensional theory is (2,2) instead of (0,2), and the (0,2) multiplets are combined into (2,2) multiplets as follows:
* The (0,2) gauge multiplet $`V_a`$ and the adjoint (0,2) chiral multiplet $`\mathrm{\Phi }_a^4`$ are combined to form the (2,2) vector multiplet $`V_a^{\prime }`$. ($`\mathrm{\Phi }_a^4`$ transforms in the adjoint of $`U(1)_a`$ due to $`R^{a_4}=R^0`$.)
$`V_a^{\prime }=\theta ^{-}\overline{\theta }^{-}(A_{0a}-A_{1a})+\theta ^+\overline{\theta }^+(A_{0a}+A_{1a})-\sqrt{2}\theta ^{-}\overline{\theta }^+Z_a^4-\sqrt{2}\theta ^+\overline{\theta }^{-}\overline{Z}_a^4`$
$`+2i\theta ^{-}\theta ^+(\overline{\theta }^{-}\overline{\lambda }_a+\overline{\theta }^+\psi _{+a}^4)-2i\overline{\theta }^{-}\overline{\theta }^+(\theta ^{-}\lambda _a+\theta ^+\overline{\psi }_{+a}^4)-2\theta ^{-}\theta ^+\overline{\theta }^{-}\overline{\theta }^+D_a`$ (3.16)
* The (0,2) chiral multiplets $`\mathrm{\Phi }_a^i`$ and the (0,2) Fermi multiplets $`\mathrm{\Lambda }_a^i`$ form three (2,2) chiral multiplets $`\mathrm{\Phi }_a^{\prime i}`$. ($`\mathrm{\Lambda }_a^i`$ transforms in the $`(\square _a,\overline{\square }_{aa_i})`$ due to $`R^{a_4}=R^0`$.)
$$\mathrm{\Phi }_a^{\prime i}=Z_a^i(y)+\sqrt{2}\theta ^{-}\lambda _a^{i4}(y)+\sqrt{2}\theta ^+\psi _{+a}^i(y)-2\theta ^{-}\theta ^+G_a^i(y)$$
(3.17)
where $`y^0=x^0-i(\theta ^+\overline{\theta }^++\theta ^{-}\overline{\theta }^{-})`$ and $`y^1=x^1-i(\theta ^+\overline{\theta }^+-\theta ^{-}\overline{\theta }^{-})`$.
In the brane cube configuration in Figure 8, the cube at $`(n_1,n_2,n_3)`$ and the cube at $`(n_1+1,n_2+1,n_3+1)`$ are equivalent due to the condition $`R^{a_4}=R^0`$. This is nothing but the configuration considered in Figure 7 if we take $`\mathrm{\Gamma }=\mathbf{Z}_3\times \mathbf{Z}_3`$, $`R^{a_1}=R_1^{(1,0)}`$, $`R^{a_2}=R_1^{(0,1)}`$ and $`R^{a_3}=R_1^{(1,1)}`$. The (2,2) chiral multiplets $`\mathrm{\Phi }_a^1`$, $`\mathrm{\Phi }_a^2`$ and $`\mathrm{\Phi }_a^3`$ can be represented by arrows with components $`(1,0,0)`$, $`(0,1,0)`$ and $`(0,0,1)`$. The (2,2) vector multiplet $`V_a`$ can be represented by an arrow with components $`(1,1,1)`$.
The action $`L_{gauge}`$ (3.7) and the $`\mu =4`$ part of $`L_{ch}`$ (3.8) of the (0,2) supersymmetric theory are combined into $`L_{gauge}`$ of the (2,2) supersymmetric theory
$$L_{gauge}=\frac{1}{4e_a^2}\int d^2x\,d^4\theta \,\overline{\mathrm{\Sigma }}_a\mathrm{\Sigma }_a$$
(3.18)
with some interaction terms. Here $`\mathrm{\Sigma }_a`$ is the field strength of the (2,2) superspace gauge field $`V_a^{\prime }`$. The remaining part of $`L_{ch}`$ (3.8) and $`L_F`$ (3.9) become $`L_{ch}`$ of the (2,2) supersymmetric theory,
$$L_{ch}=\int d^2x\,d^4\theta \,\overline{\mathrm{\Phi }}_a^{\prime i}e^{V^{\prime }}\mathrm{\Phi }_a^{\prime i}.$$
(3.19)
$`L_J`$ (3.10) corresponds to the superpotential term $`L_W`$ with $`W=\epsilon _{ijk}\mathrm{\Phi }_a^{\prime i}\mathrm{\Phi }_{aa_i}^{\prime j}\mathrm{\Phi }_{aa_ia_j}^{\prime k}`$,
$$L_W=\int d^2x\,d\theta ^+d\theta ^{-}\,W(\mathrm{\Phi }_a^{\prime i})|_{\overline{\theta }^+=\overline{\theta }^{-}=0}-\mathrm{h.c.}$$
(3.20)
The three arrows representing the three chiral multiplets appearing in the superpotential form a triangle, up to the identification along the diagonal direction. Note that the superpotential $`W(\mathrm{\Phi }_a^{\prime i})`$ is related to $`J_a^i`$ as
$$J_a^i=\frac{\partial W}{\partial \mathrm{\Phi }_a^{\prime i}},$$
(3.21)
and the equation (3.14) implies the gauge invariance of the superpotential. Combining with the D-term part
$$L_{D,\theta }=\frac{t_a}{4}\int d^2x\,d\theta ^+d\overline{\theta }^{-}\,\mathrm{\Sigma }_a|_{\theta ^{-}=\overline{\theta }^+=0}+\mathrm{h.c.},$$
(3.22)
one obtains a two-dimensional (2,2) supersymmetric gauge theory. Thus, although the brane cube configuration preserves only two supercharges, the two-dimensional field theory on the D-branes has four.
To understand the reason for the supersymmetry enhancement, it is useful to compare the configuration of Figure 7 with a brane configuration obtained by another T-duality. As noted earlier, there is an ambiguity in choosing the three directions, out of the four complex coordinates, along which to perform the T-duality. If we T-dualize along the three compact directions associated with $`z^1`$, $`z^2`$ and $`z^4`$, we obtain the brane cube configuration depicted in Figure 10.
The chiral multiplet $`\mathrm{\Phi }_a^\mu `$, transforming in the bifundamental $`(\square _a,\overline{\square }_{aa_\mu })`$, comes from open strings from the box $`a`$ to the box $`aa_\mu `$. We can represent the four chiral multiplets $`\mathrm{\Phi }_a^1`$, $`\mathrm{\Phi }_a^2`$, $`\mathrm{\Phi }_a^3`$ and $`\mathrm{\Phi }_a^4`$ by arrows with components $`(1,0,0)`$, $`(0,1,0)`$, $`(1,1,1)`$ and $`(0,0,1)`$. The Fermi multiplet $`\mathrm{\Lambda }_a^i`$, transforming in the $`(\square _a,\overline{\square }_{aa_ia_4})`$, comes from open strings from the box $`a`$ to the box $`aa_ia_4`$. We can represent the three Fermi multiplets $`\mathrm{\Lambda }_a^1`$, $`\mathrm{\Lambda }_a^2`$ and $`\mathrm{\Lambda }_a^3`$ by arrows with components $`(1,0,1)`$, $`(0,1,1)`$ and $`(1,1,0)`$. The arrows are depicted in Figure 11.
We would like to emphasize that the two configurations in Figure 8 and Figure 10 give the same field theory, since both are obtained from D-branes on $`\mathbf{C}^4/\mathrm{\Gamma }`$ by T-dualities, and hence are related to each other by T-duality.
In the brane cube configuration given in Figure 10, the cube at $`(n_1,n_2,n_3)`$ and the cube at $`(n_1,n_2,n_3+1)`$ are equivalent. If we set $`R^{a_4}=R^0`$, the (2,2) chiral multiplets $`\mathrm{\Phi }_a^1`$, $`\mathrm{\Phi }_a^2`$ and $`\mathrm{\Phi }_a^3`$ can be represented by arrows with components $`(1,0,0)`$, $`(0,1,0)`$ and $`(1,1,1)`$, while the (2,2) vector multiplet $`V_a`$ can be represented by an arrow with components $`(0,0,1)`$. This is the model considered in Section 3.2 of . That is, if we take $`\mathrm{\Gamma }=\mathbf{Z}_3\times \mathbf{Z}_3`$, $`R^{a_1}=R_1^{(1,0)}`$, $`R^{a_2}=R_1^{(0,1)}`$ and $`R^{a_3}=R_1^{(1,1)}`$, we obtain the configuration in Figure 12.
In the configuration of Figure 12, we can remove the NS′5-branes extending along the 012345 directions without changing the matter contents and gauge groups, since the configuration of Figure 12 is trivial along the direction 8. The resulting configuration is equivalent (up to a certain T-duality) to the usual brane box model with two kinds of NS5-branes. Therefore the field theory obtained from the configuration in Figure 12 is equivalent to the field theory obtained from the usual brane box model, so the field theory has four supercharges. As stated above, the configuration of Figure 7 is related to Figure 12 by T-duality, so the field theory realized by the configuration has the required number of supersymmetries.
Note that the configuration of Figure 7 is more natural than that of Figure 12 in the sense that the three coordinates of $`\mathbf{C}^3`$ (the three chiral multiplets $`\mathrm{\Phi }^{\prime i}`$) are treated equivalently. This is the key point of the construction of the configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$. Due to the symmetric assignment of the multiplets, however, the enhancement of supersymmetry becomes rather nontrivial.
### 3.2 Brane cube configurations for $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$
The brane configuration for the orbifold $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$ is obtained from that for $`\mathbf{C}^3/\mathbf{Z}_n\times \mathbf{Z}_n`$ by a $`\mathbf{Z}_3`$ quotient. The $`\mathbf{Z}_3`$ acts as a $`2\pi /3`$ rotation along the line $`x^2=x^4=x^6`$. The brane configuration is given in Figure 13.
The reason that the configuration gives the structure of the irreducible representations of $`\mathrm{\Delta }(3n^2)`$ is the same as in . The gauge group coming from each box depends on whether the box includes fixed points of the $`\mathbf{Z}_3`$ action. When $`n/3`$ is not an integer, only the boxes with index (0,0) include a fixed line of the $`\mathbf{Z}_3`$ action. When $`n/3`$ is an integer, the boxes with indices $`(2n/3,n/3)`$ and $`(n/3,2n/3)`$ also include fixed lines, in addition to (0,0). For such boxes, we must take a $`\mathbf{Z}_3`$ quotient, which leads the gauge group to be $`U(1)^3`$, since the gauge field on the box takes the following form,
$$A=\left(\begin{array}{ccc}a_1& a_2& a_3\\ a_3& a_1& a_2\\ a_2& a_3& a_1\end{array}\right).$$
(3.23)
It implies that such boxes correspond to a sum of three one-dimensional representations of $`\mathrm{\Delta }(3n^2)`$. On the other hand, for the boxes which do not include the fixed lines of the $`\mathbf{Z}_3`$ action, three D-branes simply pile up and give the gauge group $`U(3)`$. It implies that such boxes correspond to three-dimensional irreducible representations of $`\mathrm{\Delta }(3n^2)`$. One can see that the quotienting procedure precisely reproduces the structure of the irreducible representations of $`\mathrm{\Delta }(3n^2)`$. We can also verify that the matter contents obtained after the $`\mathbf{Z}_3`$ quotient coincide with those specified by the quiver diagram of $`\mathrm{\Delta }(3n^2)`$.
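The step from (3.23) to $`U(1)^3`$ is just the statement that circulant matrices are simultaneously diagonalized by the discrete Fourier transform. A minimal numerical check (an illustrative Python sketch, not part of the original construction):

```python
import numpy as np

# An arbitrary gauge-field matrix of the Z3-invariant circulant form (3.23).
a1, a2, a3 = np.random.randn(3) + 1j * np.random.randn(3)
A = np.array([[a1, a2, a3],
              [a3, a1, a2],
              [a2, a3, a1]])

# The columns of the discrete Fourier transform matrix are its eigenvectors.
w = np.exp(2j * np.pi / 3)
F = np.array([[w ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)

D = F.conj().T @ A @ F
print(np.round(D, 12))  # diagonal: three decoupled U(1) gauge fields
```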
### 3.3 Brane cube configurations for $`\mathbf{C}^3/\mathrm{\Delta }(6n^2)`$
The brane configuration for the orbifold $`\mathbf{C}^3/\mathrm{\Delta }(6n^2)`$ is obtained from that for $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$ by a $`\mathbf{Z}_2`$ quotient. The $`\mathbf{Z}_2`$ acts as a reflection with respect to the plane $`x^2=x^4`$, as indicated in the quiver diagram of $`\mathrm{\Delta }(6n^2)`$ in Figure 6(b). Combined with the $`\mathbf{Z}_3`$ identification for the configuration corresponding to $`\mathbf{C}^3/\mathrm{\Delta }(3n^2)`$, it is equivalent to the reflections with respect to the planes $`x^4=x^6`$ and $`x^6=x^2`$. The brane configuration is given in Figure 14. One can verify that the configuration reproduces the structure of the quiver diagram of $`\mathrm{\Delta }(6n^2)`$. For details on the structure of the irreducible representations, see .
## 4 Discussions
In this paper, we have proposed brane cube configurations corresponding to D1-branes on nonabelian orbifolds $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }\subset SU(3)`$. This is based on the three-dimensional version of the quiver diagrams of $`\mathrm{\Gamma }`$. The configuration consists of three kinds of NS5-branes and D4-branes. Since the D4-branes are bounded by NS5-branes along three directions, the low energy theory becomes two-dimensional. Due to the fact that the configuration is uniform along a diagonal direction, the supersymmetry of the field theory is enhanced to twice the naively expected amount.
The original motivation to consider configurations with three-dimensional structure comes from the study of configurations for an orbifold $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }`$ an E-type subgroup of $`SU(3)`$. As mentioned in the introduction, the quiver diagram of $`\mathrm{\Sigma }(648)`$ is a quotient of the quiver diagram of $`\mathrm{\Delta }(3\times 3^2)`$ by the tetrahedral group. According to the correspondence between quiver diagrams and brane configurations for $`\mathbf{C}^3/\mathrm{\Gamma }`$, we expect that a brane configuration for $`\mathbf{C}^3/\mathrm{\Sigma }(648)`$ is obtained from the configuration for $`\mathbf{C}^3/\mathrm{\Delta }(3\times 3^2)`$ given in Figure 13 by taking a quotient by the tetrahedral group. In fact, the configuration in Figure 13 has the structure of the tetrahedral group: the configuration consists of cubes, which are in a sense dual to tetrahedra, so we can define an action of the tetrahedral group on the configuration. It is now under investigation whether such a quotient actually reproduces the required properties of the gauge theories.
Finally we would like to compare the configurations in this paper with those given in . The configuration given in consists of a web of $`(p,q)`$ 5-branes and D3-branes. As the D3-branes are bounded along two directions by the $`(p,q)`$ 5-branes, they provide a two-dimensional field theory, in agreement with the brane cube configurations. Thus it seems that we should start with D1-branes on $`\mathbf{C}^4/\mathrm{\Gamma }\simeq \mathbf{C}^3/\mathrm{\Gamma }\times \mathbf{C}`$ to realize the $`\mathbf{Z}_3`$ symmetry of brane configurations. From this viewpoint, the fact that the brane box model gives a four-dimensional theory is due to the special feature of the abelian case, in which an explicit $`\mathbf{Z}_3`$ symmetry is not necessary.
As we have discussed, the configurations given in have essentially two-dimensional structure, while the brane cube configurations are three-dimensional. It is interesting to ask whether there is a duality-like relation between the two types of configurations. If such a relation can be found, it may be possible to construct brane configurations corresponding to $`\mathbf{C}^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }`$ an E-type subgroup, based on the brane configurations made out of a web of $`(p,q)`$ 5-branes.
In , it was also argued that the brane configurations are dual to toric diagrams of $`\mathbf{C}^3/\mathrm{\Gamma }`$ and that the three-dimensional McKay correspondence may be understood as T-duality. We hope that the three-dimensional version of the quiver diagrams and the brane cube configurations provide some hints for investigations along these lines.
Acknowledgements
I would like to thank T. Kitao for valuable discussions. This work is supported in part by the Japan Society for the Promotion of Science (No. 10-3815).
# Cross Sections for $`\pi `$- and $`\rho `$-induced Dissociation of $`J/\psi `$ and $`\psi ^{\prime }`$
## Acknowledgments
This research was supported by the Division of Nuclear Physics, DOE, under Contract No. DE-AC05-96OR21400 managed by Lockheed Martin Energy Research Corp. ES acknowledges support from the DOE under grant DE-FG02-96ER40944 and DOE contract DE-AC05-84ER40150 under which the Southeastern Universities Research Association operates the Thomas Jefferson National Accelerator Facility. TB acknowledges additional support from the Deutsche Forschungsgemeinschaft DFG under contract Bo 56/153-1. The authors would also like to thank D. B. Blaschke, C. M. Ko, G. Rรถpke and S. Sorensen for useful discussions. |
# Evidence for a discrete source contribution to low-energy continuum Galactic $`\gamma `$-rays
## I Introduction
Although "diffuse" emission dominates the COMPTEL all-sky maps in the energy range 1–30 MeV, its origin is not yet firmly established; in fact it is not even clear whether it is truly diffuse in nature. This is in contrast to the situation at higher energies where the close correlation of the EGRET maps with HI and CO surveys establishes a major component as cosmic-ray interactions with interstellar gas. This paper discusses recent studies of the low-energy diffuse continuum emission based on the modelling approach described in . The high energy ($`>`$ 1 GeV) situation is addressed in ; . The present work uses observational results reported in ; new imaging and spectral results from COMPTEL are presented in but differences are not important for our conclusions.
## II Electrons, $`\gamma `$-rays and synchrotron
Conventionally the low-energy $`\gamma `$-ray continuum spectrum has been explained by invoking a soft electron injection spectrum with index 2.1–2.4, which could reproduce the 1–30 MeV emission as bremsstrahlung plus inverse Compton emission (see e.g. ). Fig 1 shows a range of electron spectra which result from the propagation of injection spectra with indices 2.0–2.4; the model is from ; in order to illustrate the effect more clearly, these spectra are computed without reacceleration. The nucleon spectrum is consistent with local observations and is described in . Fig 2 shows the inner Galaxy $`\gamma `$-ray spectrum for the same electron spectra. The best fit is evidently obtained for index 2.2–2.3.
A problem with this, which was noted earlier but has become clearer with more refined analyses, is the constraint from the observed Galactic synchrotron spectrum on the electron spectral index above 100 MeV. The synchrotron index is hard to measure because of baseline effects and thermal emission, but there has been a lot of new work in this area, in part because of interest in the cosmic microwave background. Fig 3 summarizes relevant measurements of the synchrotron index together with the predictions for the range of electron spectra in Fig 1. The new 22–408 MHz value from is of particular importance here; it is consistent with that derived earlier in a detailed synchrotron modelling study . The $`\gamma `$-rays fit best for an injection index 2.2–2.3, but the synchrotron index for 100–1000 MHz is then about 0.8, which is above the measured range. Although we illustrate this for just one family of spectra for a particular set of propagation parameters, it is clear that it covers the range of plausible spectra, so that changing the propagation model would not alter the conclusion. Hence we are unable to find an electron spectrum which reproduces the $`\gamma `$-rays without violating the synchrotron constraints. If there were a very sharp upturn in the electron injection spectrum below 200 MeV, as illustrated in Fig 1, then we could explain the $`\gamma `$-rays as bremsstrahlung emission without violating the synchrotron constraints, but even then it would not reproduce the intensities below 1 MeV measured by OSSE .
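For orientation, the textbook synchrotron relation underlying this argument: a power-law electron spectrum $`N(E)\propto E^{-p}`$ radiates a flux density $`S_\nu \propto \nu ^{-\alpha }`$ with

$$\alpha =\frac{p-1}{2},$$

so $`\alpha \simeq 0.8`$ at 100–1000 MHz corresponds to a propagated electron index $`p\simeq 2.6`$; for interstellar fields of a few $`\mu `$G, these frequencies probe electrons of roughly 1–3 GeV.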
## III An unresolved source population?
In view of the problems with diffuse emission we suggest that an important component (at least 50%) of the $`\gamma `$-ray emission below 10 MeV originates in a population of unresolved point sources; it is clear that these must anyway dominate eventually as we go down in energy from $`\gamma `$-rays to hard X-rays (see e.g. ), so we propose that the changeover occurs at MeV energies. For illustration we have tried adding (with arbitrary normalization) to the diffuse emission possible spectra for the unresolved population (Fig 4): a low-state Cyg X-1 type appears too steep, but a Crab-like type ($`E^{-2.1}`$) would be satisfactory, and would require a few dozen Crab-like sources in the inner Galaxy. These would not be detectable as individual sources by COMPTEL, and such a model would not violate any observational constraints which we know of. In the examples in Fig 4 we have used the hard electron injection spectrum (index 1.8) required to fit the $`>`$1 GeV excess ; so that with Crab-like sources we can finally reproduce the entire spectrum from 100 keV to 10 GeV.
This hypothesis has many observational consequences which can only be investigated by detailed modelling of source populations. |
# Differential light scattering: probing the sonoluminescence collapse
## Abstract
We have developed a light scattering technique based on differential measurement and polarization (differential light scattering, DLS) capable in principle of retrieving timing information with picosecond resolution without the need for fast electronics. DLS was applied to sonoluminescence, duplicating known results (sharp turnaround, self-similar collapse); the resolution was limited by intensity noise to about 0.5 ns. Preliminary evidence indicates a smooth turnaround on a $`0.5`$-ns time scale, and suggests the existence of subnanosecond features within a few nanoseconds of the turnaround.
Since Gaitan's seminal work , significant advances have been made in our understanding of single-bubble sonoluminescence (SL). The dissociation hypothesis (DH) introduced by Lohse et al. combines the merits of an intuitive approach based on the relatively tractable Rayleigh-Plesset equation (RPE) of bubble dynamics with an impressive ability to reproduce a wide range of observables . More sophisticated theories yield a more realistic picture of the phenomenon that is in general agreement with the results of the DH-RPE treatment. Experimentally, however, information on the bubble interior is still very scarce. In particular, no direct and conclusive evidence exists yet for either plasma formation or shock waves inside the collapsing bubble. As a result, many competing theories still vie with the adiabatic or shock-wave heating theory for the distinction of accurately describing the SL phenomenon. It seems desirable, then, to explore additional ways of probing the interior of the bubble with a time resolution comparable to the duration of the flash, measured to be 40–380 ps . Light scattering has already been shown to be a useful probe of the bubble dynamics, sensitive as it is to the dielectric interface at the bubble wall. It is also a promising candidate for detection of either plasma or shock waves, since both features can modulate substantially the local dielectric constant.
Our goal was to push measurements of the light scattering cross section of the collapsing bubble to a greater temporal resolution than that afforded by the pulsed Mie scattering technique , which appears to be limited to around $`\frac{1}{2}`$ ns by low light levels and the need for averaging. In order to achieve higher resolution, we have developed a technique called differential light scattering (DLS) that reduces statistical uncertainty in the detection process by making use of more powerful ultrafast laser pulses. Since DLS does not rely on a fiducial time reference, it, too, is completely insensitive to electronic timing noise, allowing the use of relatively slow detectors. The DLS technique is based on two central concepts: (i) using a differential measurement to yield jitter-free timing information, and (ii) using polarized light to generate such a measurement through scattering.
The differential measurement concept was recently introduced for the first time by Rella et al. in the context of ultrafast gating of optical pulses . The technique they invented, differential optical gating (DOG), was used to measure the shape of a midinfrared pulse with subpicosecond resolution. Our technique applies the DOG concept to light scattering. DLS relies on collecting many pairs of correlated samples of the same periodic event $`I(t)`$ (see Fig. 1). Each pair $`i`$ consists of the first sample $`I(t_i)`$ and the second sample $`I(t_i+\delta t)`$, where $`t_i`$ is the time of the first sample (modulo the period $`T`$), and $`\delta t`$ is an appropriately chosen (and short) time delay. From each such pair, an intensity difference
$$\delta I_i=I(t_i+\delta t)-I(t_i)$$
(1)
is produced and plotted against the first sample $`I(t_i)`$, generating what we will call a DLS plot. When enough event pairs are collected, the points representing them in the DLS plot will join together in defining a continuous curve.
The DLS plot can be thought of as a predictor: given an intensity $`I`$ at some time $`t`$, the plot yields what $`I`$ will be after a time $`\delta t`$. One can numerically step along the curve on this plot to retrieve the desired direct function $`I(t)`$. Depending on the nature of the features of interest, in some cases it is more fruitful to plot, for each pair, $`\delta I_i`$ against the second sample $`I(t_i+\delta t)`$. The reconstruction of $`I(t)`$ is then carried out backwards in time. When the data are particularly noisy, however, features apparent in the DLS plot will be lost in the reconstruction process, eliminating any benefit of the technique. In this case, it is better to work directly in DLS space. Using a model for $`I(t)`$, a DLS curve can be generated from it by applying map (1), then fit to the data points in the DLS plot with a minimization algorithm.
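As a minimal sketch of the forward-stepping reconstruction just described (illustrative Python, not the code used for the analysis; `I_samples` and `dI_samples` stand for the measured pair values, and the linear interpolation is an assumption):

```python
import numpy as np

def dls_curve(I, I_samples, dI_samples):
    """Empirical DLS predictor: the expected difference I(t + dt) - I(t)
    at intensity I, interpolated from the measured (I_i, dI_i) pairs."""
    order = np.argsort(I_samples)
    return np.interp(I, I_samples[order], dI_samples[order])

def reconstruct(I0, dt, n_steps, I_samples, dI_samples):
    """Rebuild I(t) on a grid of spacing dt by stepping along the DLS curve."""
    t = np.arange(n_steps + 1) * dt
    I = np.empty(n_steps + 1)
    I[0] = I0
    for k in range(n_steps):
        I[k + 1] = I[k] + dls_curve(I[k], I_samples, dI_samples)
    return t, I
```

Noise in the measured differences accumulates along the walk, which is why fitting a model directly in DLS space is preferable when the data are noisy.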
To implement the DLS concept (Fig. 2), a laser pulse is split equally in two and recombined as in a Michelson interferometer, with one pulse having traveled a longer path. The two-pulse train is focused onto the bubble, yielding two bursts of scattered light separated by an adjustable time delay $`\delta t`$. In order to distinguish between the two pulses during detection, an additional degree of freedom is needed. Color discrimination is a possibility, but with several drawbacks, among which the need for frequency doubling and the strong wavelength dependence of light scattering. Polarization, on the other hand, is perfectly suited to this technique. Calculations using Mie scattering theory , which is rigorously valid for spheres of arbitrary size, show that scattered intensities are highly polarization dependent. For the case of linearly polarized light, scattering at $`\theta =90^o`$ (where zero is forward) vanishes if the polarization vector e is parallel to the scattering plane. Therefore, the polarization of one of the pulses is made to rotate by $`90^o`$ (with the quarter-wave plate shown in Fig. 2) before the two pulses are recombined. Figure 3 shows the sequence of the two pulses scattering from the bubble. The first pulse scatters preferentially in the plane containing one photomultiplier tube (PMT), while the second pulse does so in the perpendicular plane, which contains the other PMT.
Combining the time delay and the polarization dependence results in the ability to assign the scattered intensity recorded by one detector to the earlier pulse, and that recorded by the other detector to the later pulse. Therefore, each laser burst yields an ordered pair of scattered intensities. By scanning the arrival time of the pulse pairs over some portion of the acoustic cycle, enough data can be collected to generate a DLS plot of the desired interval of the bubbleโs evolution.
The experiments were performed in a 100-ml spherical boiling flask filled with distilled, deionized water (resistivity $`\rho =15.9`$ M$`\mathrm{\Omega }`$-cm) at $`20\pm 1`$ °C. The water was prepared in a gas-handling system under an air pressure of 0.2 bar, then loaded into the flask without further exposure to air. A sealed connection to a volume reservoir (kept at atmospheric pressure for these experiments), inspired by Ref. , provided pressure release from volume changes induced by temperature fluctuations. The entire assembly was leak tested; with the flask under vacuum, the rate of pressure rise was conservatively determined to be 16 nbar s⁻¹. However, under normal operating conditions the flask is filled with water and repressurized to 1 bar: this forces outside air to diffuse into the undersaturated water through microscopic interfaces, resulting in a substantially lower rate of contamination. The experiments described here took place 110 days after loading. Within that time, the pressure in an initially empty flask would have risen to roughly 0.2 bar; the air concentration in the water-filled flask can instead be expected to have risen by perhaps a few percent of atmospheric saturation.
The first acoustic resonance of the flask was determined to be 26.9 kHz. The acoustic drive was provided by an audio amplifier, the output of which (typically $`4.5`$ W rms) was fed through an impedance-matching network before being delivered in parallel to two disc-shaped piezoelectric transducers (PZTs) epoxied to diametrically opposite points on the flask. A third, smaller PZT cemented to the flask provided acoustic pickup, used to map the normal modes of the flask and to monitor the behavior of the bubble through its filtered acoustic signature.
The laser used to probe the bubble was a regenerative Ti:sapphire amplifier pumped by a Q-switched, frequency-doubled neodymium-doped yttrium lithium fluoride (Nd:YLF) laser operated at 1 kHz, 10 mJ/pulse (Positive Light Spitfire and Merlin, respectively) and seeded by an 82-MHz mode-locked Ti:sapphire oscillator, in turn pumped by an Ar⁺ cw laser (Spectra Physics Tsunami and Beamlok 2080, respectively). The oscillator provided 60-fs, 800-nm pulses at 82 MHz; the amplifier output consisted of partially uncompressed (chirped) 50-ps, 800-nm pulses at 1 kHz with approximately 1 mJ/pulse. The dominantly TEM₀₀ mode beam was sent through a spatial filter to clean up mode asymmetries and yielded a nearly Gaussian profile. To eliminate gross beam distortion caused by the irregular flask surfaces, a laser beam input port was made by cutting a hole in the flask and cementing in place a custom-made fused silica powerless meniscus. The light scattered by the bubble was collected with a relay system, passed through polarizers (appropriate for each branch) and 800-nm narrow bandpass filters, and delivered to two PMTs (Hamamatsu R955P and R636). The PMT signals were integrated by SRS SR250 boxcar averagers, which were in turn sampled by a 1-MHz A/D board on a personal computer.
The synchronization scheme (shown schematically in Fig. 2) involved generating a logic signal at $`f_{acous}=26.9`$ kHz, and digitally dividing its frequency by 27 to yield another logic signal at approximately 1 kHz. The 26.9 kHz logic signal was filtered before being fed to the audio amplifier to serve as the acoustic drive, while the 1 kHz signal was used to trigger the laser and the data acquisition electronics. This ensured that the SL drive signal and the regenerative amplifier pulse trains would be synchronized to each other to about 1-ns precision. Additional timing circuitry allowed for the delay between the SL flash (which occurs very nearly at the same point of the acoustic cycle, within 0.5 ns of turnaround ) and the laser pulse pairs to be varied continuously by up to 50 $`\mu `$s, either manually or automatically. This allowed us to probe the bubble at any given phase of the acoustic cycle.
In order to obtain values for the ambient bubble radius $`R_0`$ and the acoustic drive amplitude $`P_a`$, we developed a time-stamp technique that yielded a time series of scattered intensities $`I(t)`$ over the whole acoustic cycle. A time-to-amplitude converter (TAC, 566 EG&G ORTEC) measured the interval (up to a constant offset) elapsed between the arrival of the laser pulse and the SL flash, as signaled by an additional PMT sensitive to SL light only. The TAC output was logged through a boxcar along with the signal from one of the PMTs used in DLS, and used for time-stamping. Scattering events were recorded as the delay between the laser and SL was scanned automatically through a whole acoustic cycle.
Since the result of this procedure was a time series of intensities, a calibration was performed to establish a conversion from $`I(t)`$ to $`R(t)`$. This was done using a stroboscopic imaging system similar to that of Ref. , except that in our case the drive for the LED was locked to the same frequency $`f_{acous}`$ as that driving the bubble. We obtained $`R_{max}`$ by fixing the LED time delay so that the bubble was shown on the monitor screen at maximum size, and $`I_{max}`$ from the scattering data. A calculation based on Mie theory provided the $`I(R)`$ map necessary to complete the calibration.
In practice, uncertainties in the calibration of the imaging system, as well as in the actual measurement of the bubble size, prompted us to use our measurement of $`R_{max}`$ as an estimate with $`\pm `$ 10% uncertainty. The $`R(t)`$ data were then fed to a fitting algorithm that established $`R_0`$, $`P_a`$, and an appropriate overall scale factor in a nonlinear least-squares calculation using the RPE. The fact that the scale factor for the best fit was determined in this way to be $`1.09\pm 0.05`$ gave us confidence in the validity of our imaging method. The bubble parameter values thus found were $`R_0=5.3\pm 0.2\mu `$m and $`P_a=1.34\pm 0.04`$ bar.
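As an indication of the kind of forward model entering such a fit, a minimal Rayleigh-Plesset integration is sketched below (illustrative Python with a polytropic gas law; the actual fit may include additional terms, e.g. a van der Waals hard core, that are omitted here):

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO, MU, SIG = 998.0, 1.0e-3, 0.0725   # water: density, viscosity, surface tension (SI)
P0, GAM = 1.013e5, 1.4                 # ambient pressure, polytropic exponent
F, PA, R0 = 26.9e3, 1.34e5, 5.3e-6     # drive frequency/amplitude, ambient radius

def rpe(t, y):
    """Rayleigh-Plesset equation for y = (R, dR/dt)."""
    R, Rd = y
    p_gas = (P0 + 2 * SIG / R0) * (R0 / R) ** (3 * GAM)
    p_ext = P0 + PA * np.sin(2 * np.pi * F * t)
    Rdd = ((p_gas - p_ext - 4 * MU * Rd / R - 2 * SIG / R) / RHO
           - 1.5 * Rd ** 2) / R
    return [Rd, Rdd]

# One acoustic period; the collapse makes the ODE stiff, hence the implicit solver.
sol = solve_ivp(rpe, (0.0, 1.0 / F), [R0, 0.0], method="Radau",
                rtol=1e-10, atol=1e-12, dense_output=True)
```

A least-squares loop then varies $`R_0`$, $`P_a`$ and the overall scale factor until the computed $`R(t)`$ matches the calibrated data.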
It is worth mentioning that the time-stamp technique described above can be used directly to obtain light scattering data from a collapsing SL bubble. The drawback is that, unlike in DLS, the electronic response of the measuring instruments is the limiting factor. We used this procedure to collect rough timing information as a cross-check in our analysis of DLS data; with the devices at our disposal, 2-ns resolution was achieved. We estimate that with two microchannel plate PMTs, two constant-fraction discriminators, and a faster TAC, an overall timing uncertainty of 50 ps should be achievable .
In Fig. 4 we show representative results from our DLS experiments. In these plots, as in Fig. 1 (b), the abscissa is $`I(t_i)`$ and the ordinate is $`\delta I_i`$. In Fig. 4(a) the delay between pulses was 5 ns, and in Fig. 4(b) it was 1 ns. The range of $`\delta t`$ for which useful information can be gathered is dictated by the physical process under study: delays much shorter than 1 ns yielded DLS plots unresolved into a discernible structure, while delays much longer than 5 ns are not well suited for investigating short time scales.
To aid interpretation, we divide the plots into three regions: A (the collapse, $`t<0`$), B (the transition region), and C (the rebound, $`t>0`$). The approximately flat region A corresponds to the collapse, since $`\delta I_i<0`$, indicating that the bubble is shrinking. In C the rebounding bubble expands, but at a much lower rate than during collapse, so $`\delta I_i`$ is positive and smaller in magnitude than in A. However, a greater spread in the data there results in fuzzy clustering across $`\delta I_i=0`$. The straight "wall" in region B forms when the two pulses straddle $`t=0`$ (compare to the open squares in Fig. 1).
The straight section A in Fig. 4(a) is due to a constant slope of $`I(t)`$ during collapse. This critical behavior has been previously observed for time scales ranging from 1 $`\mu `$s to 20 ns prior to turnaround ; our observations extend it to $`t=-5`$ ns. In Fig. 4(a) sections A and B join rather abruptly, indicating a sharp cusp in $`I(t)`$ on the time scale of the measurement (5 ns). This was expected given the measurements in Ref. . In Fig. 4(b), however, section A appears to show a slight upturn before joining section B, indicating a smooth transition on a time scale less than the pulse delay of 1 ns (since the "wall" section B is still discernible). Scatter in the data prevents a conclusive interpretation, but the available evidence would support an estimate of the bubble turnaround time at a few hundred picoseconds.
The collection lenses used have an f-number of 1.5; the finite acceptance cone they subtend introduces a pollution, or cross talk, of unwanted light from the other pulse in each detector. Because of the strong polarization dependence of scattering, this cross talk is quite small: it was calculated from Mie theory, and confirmed experimentally, to be less than 5% of the total scattered intensity. Electrical cross talk was measured to be less than 5%. The resulting overall intensity uncertainty is therefore around 7%; the difference uncertainty varies across the plot. While in sections A and B the error estimates are consistent with the observed spread in the DLS data, in section C the spread is significantly larger. This has been observed before in Xe-filled bubbles and ascribed to nonsphericity ; such asymmetry is reported here as regularly occurring in air-filled bubbles.
In conclusion, we have introduced DLS, a light scattering technique based on the DOG concept of differential measurement and on sensitivity to polarization that uses intense ultrashort laser pulses to bypass the problem of electronic timing jitter. The intensity spread in the data is currently the limiting factor in the resolution achieved with this technique. Effectively, intensity noise is translated into timing noise by the mapping that a DLS plot generates. Accordingly, the resolution in the data shown is approximately 0.5 ns. The intrinsic resolution of DLS, however, is given by the laser pulse width used: with our equipment that can be made as low as 500 fs. Data collected from a collapsing SL bubble confirm earlier findings of a self-similar solution and of subnanosecond turnaround time; our preliminary results suggest that the turnaround is smooth on a time scale of a few hundred picoseconds.
We are very grateful to H. A. Schwettman and the Stanford Picosecond Free Electron Laser Center for supporting this research. We also gratefully acknowledge generous equipment loans by J. R. Willison of Stanford Research Systems. We thank B. P. Barber, B. I. Barker, F. L. Degertekin, R. A. Hiller, G. M. H. Knippels, G. Perรงin, S. J. Putterman, C. W. Rella, H. L. Stรถrmer, and members of the Stanford FEL for technical assistance and valuable discussions. |
# A stronger classical definition of Confidence Limits
## I Introduction
The concept of Confidence Region for a parameter at a given Confidence Level is a centerpiece of classical statistics and was first introduced by Neyman. It gives a definite meaning to the making of statistical inferences about the region where the value of an unknown parameter might fall, without any assumption on whether the parameter can be attributed some probability distribution and what it might be. An alternative approach to setting acceptance regions for a parameter is the Bayesian one, which on the contrary assumes and explicitly incorporates a probability distribution of the parameter, supposed to be known before the measurement of the data set in hand, and therefore called the "a priori" distribution.
With regard to the choice between the two methods of statistical inference, the Author shares the common opinion that Bayesian methods are very useful whenever there is solid ground for establishing the "a priori" parameter distribution, which this method readily exploits in an optimal way, but that classical methods are the only reasonable choice whenever this does not happen. Unfortunately, the measurements of physics quantities belong almost always to the second class. The widespread preference of physicists for classical methods of setting acceptance regions seemed recently to weaken when it was realized that the usual procedures for setting limits in the classical framework can lead in some cases to highly counter-intuitive results.
Several solutions to this unpleasant situation have been proposed, some of them requiring a partial fallback on Bayesian concepts; some authors even argued that the classical method is fundamentally weak, and cannot work without the supplement of some Bayesian ingredient.
Other authors defended the classical point of view by proposing alternative methods for setting limits that eliminate the unpleasant results while still adhering to Neyman's prescription. The present work follows that same line of looking for meaningful results within the classical approach, avoiding any Bayesian contamination. However, I argue that none of the previous proposals is completely satisfying, and that a deeper revision of current ideas is needed in order to really solve the difficulties, leading to very different conclusions from past work on the subject.
It is worth noting that the insistence on classical methods should not be taken to imply that Bayesian methods are not very useful in the more limited field where they are unambiguously applicable.
In Sec. II a few examples of problematic limits are discussed, some of which appear not to have been previously considered. In Sec. III A I analyze the reasons for the physicist's dissatisfaction and what they reveal about the incompleteness of the classical CL definition by Neyman, and in Sec. III B I propose a general solution of these issues, completely contained in the realm of classical statistics. In Sec. IV the most important features of the proposed approach are discussed, with brief notes on some specific examples.
## II Problems with standard classical limits
### A Definitions and notations
Let $`\mu \in M`$ indicate some unknown parameters, and $`x\in X`$ a random variable we can observe, whose probability distribution $`p(x|\mu )`$ (pdf for short) depends in some way on the unknowns $`\mu `$. Both $`\mu `$ and $`x`$ can be arbitrary objects, e.g. they can be vectors of real numbers of any length. When the observable is continuous, a probability density rather than a discrete distribution is necessary to describe it, but for simplicity the same notation $`p(x|\mu )`$ will be used, and the distinction will be explicitly noted only when necessary. In both cases $`p(x\in S|\mu )`$ will indicate the total probability for the observable to fall in a given subset $`S\subset X`$, independently of whether it is obtained by a sum (discrete variable) or an integration (continuous), or both. (Note that $`p(x\in \{\overline{x}\}|\mu )=p(\overline{x}|\mu )`$ for discrete variables, while $`p(x\in \{\overline{x}\}|\mu )=0`$ for continuous variables, independently of the value of the density $`p(\overline{x}|\mu )`$ at the point $`\overline{x}`$.)
Let $`B(x)`$ be any function associating to each possible observed value of $`x`$ a subset of values of $`\mu `$ ($`B`$ is intended to represent some algorithm to select "plausible" values of the unknown $`\mu `$ on the basis of our observation). The classical definition of CL from Neyman can then be stated as follows: the function $`B`$ ("confidence band") is said to have "Confidence Level" equal to $`CL`$ if, whatever the value of $`\mu `$, the probability of obtaining a value of $`x`$ such that $`\mu `$ is included in the accepted region $`B(x)`$ is (at least) CL. In short:
$$CL(B)=\underset{\mu }{inf}\,p(\mu \in B(x)|\mu )=1-\underset{\mu }{sup}\,p(\mu \notin B(x)|\mu )$$
(1)
Obviously the Confidence Level is a property of the band $`B`$ as a whole, not of a confidence region associated to a particular value of $`x`$: it is quite possible for two different algorithms $`B`$ and $`B^{\prime }`$ to give the same confidence region for some $`\overline{x}`$, and still have very different Confidence Levels. This is the reason for the need of always deciding the algorithm $`B`$ before making the actual measurement, clearly implied by the original formulation, but apparently often forgotten, and only recently clearly pinpointed.
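For discrete problems, definition (1) can be checked directly; a minimal sketch (illustrative Python, where `band` encodes the algorithm $`B`$ as a function from observations to sets of parameter values):

```python
def confidence_level(band, pmf, mu_values, x_values):
    """CL of a band, per Eq. (1): the infimum over mu of the probability
    that the observed x yields a region containing the true mu."""
    def cov(mu):
        return sum(pmf(x, mu) for x in x_values if mu in band(x))
    return min(cov(mu) for mu in mu_values)
```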
Neyman's definition is so general that, after choosing the desired CL, there is a very wide variety of bands $`B`$ satisfying it. In a generic case, confidence regions can be arbitrarily complicated subsets of the $`\mu `$ space. One can even construct fractal confidence regions if one likes to do so.
For this reason, some "rules" were soon invented to easily obtain simple confidence regions with desirable properties. Most of them are based on ordering all possible values of the observable $`x`$ according to some rule, and then determining the confidence region by adding up in order as many values as needed for reaching the desired coverage, that is, the integral of the pdf over the accepted region. Common examples of rules are upper/lower limits, based on ordering for increasing/decreasing value of $`x`$ (assumed a number), "centered" limits (for unidimensional $`x`$, order by decreasing tail probability, which yields equal probabilities in the upper and lower excluded regions), and the band obtained by ordering for decreasing $`p(x|\mu )`$ ("narrowest band", or "Crow band" in the following).
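All of these ordering rules fit a single template: for each $`\mu `$, fill the acceptance region in decreasing order of rank until the desired coverage is reached, then invert. A sketch, shown for the Crow rule on a plain Poisson (illustrative Python; `scipy` assumed):

```python
from scipy.stats import poisson

def acceptance(pmf, rank, cl=0.95, n_max=200):
    """Acceptance region in n for one parameter point: add values of n
    in decreasing rank until their summed probability reaches cl."""
    region, prob = set(), 0.0
    for n in sorted(range(n_max), key=rank, reverse=True):
        region.add(n)
        prob += pmf(n)
        if prob >= cl:
            break
    return region

def confidence_region(n_obs, mu_grid, cl=0.95):
    """'Crow' (narrowest) band: rank n by p(n|mu) itself, then invert."""
    return [mu for mu in mu_grid
            if n_obs in acceptance(lambda n: poisson.pmf(n, mu),
                                   lambda n: poisson.pmf(n, mu), cl)]
```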
These rules really have nothing fundamental, but they have been so commonly used that they have been sometimes identified with the very essence of the CL concept. For this reason, when some examples were found that showed serious limitations of these rules, their failure has been sometimes perceived as a failure of classical statistics as a whole, and alternative solutions often looked for in Bayesian concepts.
Obviously, other choices can be singled out within the huge space of classical solutions, to give satisfactory solutions to those cases. In order to overcome the limitations of the other methods, the new method of Likelihood Ratio (LR) ordering has recently been proposed . (This is often referred to in the literature as the "unified approach", due to its capability of producing a single band containing both "central" and "upper/lower" intervals, but that property is not of particular relevance in the present context, therefore the more explicit expression LR-ordering is adopted.) This amounts to ordering the observable values by decreasing $`p(x|\mu )/p(x|\widehat{\mu })`$, where $`\widehat{\mu }`$ represents the maximum likelihood estimate of $`\mu `$, given $`x`$. This method appears to have distinct advantages over the previous ones, and has stirred great interest around this problem. However, it does have limitations, which have inspired some amendments.
I will argue in the next subsection that the LR-ordering method and its modifications have pitfalls as serious as those of the methods they are intended to replace, and cannot therefore be considered a genuine solution.
### B Specific examples
I proceed now to examine some examples of problematic confidence bands.
The pathologies encountered are essentially of two kinds. The first and more obvious is when the confidence region happens to be the empty set. I avoid speaking of "unphysical values" of the parameters because I find it a confusing terminology: in every problem the parameters can assume values inside some domain, determined by the nature of the problem. If that domain actually describes all conceivable values of parameters for which a $`p(x|\mu )`$ exists, then there is no meaning in referring to hypothetical values outside that domain: they just do not exist as possible values for $`\mu `$. On the other hand, if the formulation of the physical problem allows one to attach a meaning to other values of the parameters, they should be taken into account from the start, and cannot be called "unphysical". Similar considerations apply to the expression "the maximum of the likelihood function lies outside the physical region": the expression usually really means that the maximum occurs on the border of the parameter space, which does not pose particular problems and certainly does not suggest arbitrary extrapolations of the likelihood function outside its domain of existence.
The other possible pathology is to have "unreasonably small" confidence regions, which is actually just a softer version of the previous one. It is less obvious to detect, but it should be clear that it is just as unacceptable from the physicist's point of view. Also, it is potentially more dangerous, since the experimental result will superficially appear to convey a great deal of information. How do we know that a limit is too tight? A possible symptom of this situation is when the limits become tighter with decreasing experiment sensitivity, as in the example of Poisson with background below.
#### 1 Poisson with background: a sensitivity paradox
Let us briefly examine this problem of Confidence Limits, one of great practical importance. The probability distribution is given by:
$$p_b(n|\mu )=e^{-(\mu +b)}\frac{(\mu +b)^n}{n!}$$
(2)
While the observed number of counts $`n`$ can only be positive, the presence of a background $`b`$ constrains the overall mean $`\mu +b`$ of the Poisson to be larger than $`b`$, and therefore creates the possibility of "negative fluctuations", in the form of the occurrence of far fewer observed counts than the average level of background. The "usual" ordering rules mentioned in sec. II A readily produce empty confidence regions in that case.
The LR-ordering prevents this, but its results are counter-intuitive and hard to interpret as well.
The problem appears clearly when comparing the results of experiments observing the same number of counts, but affected by different levels of background. It is easy to see that with the LR method the upper limit on $`\mu `$ goes to zero for every $`n`$ as $`b`$ goes to infinity, so that a low fluctuation of the background entitles one to claim a very stringent limit on the signal. This means that the limit can be much more stringent than in the case of zero observed events and zero background. This is clearly hard to accept.
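The effect is easy to reproduce numerically with the LR rank $`p_b(n|\mu )/p_b(n|\widehat{\mu })`$, $`\widehat{\mu }=\mathrm{max}(0,n-b)`$ (illustrative Python; the grid and cutoffs are arbitrary choices, and the `acceptance` helper repeats the construction sketched above):

```python
from scipy.stats import poisson

def acceptance(pmf, rank, cl=0.90, n_max=200):
    region, prob = set(), 0.0
    for n in sorted(range(n_max), key=rank, reverse=True):
        region.add(n)
        prob += pmf(n)
        if prob >= cl:
            break
    return region

def lr_rank(mu, b):
    """LR ordering for a Poisson signal mu on top of a known background b."""
    def rank(n):
        mu_best = max(0.0, n - b)
        return poisson.pmf(n, mu + b) / poisson.pmf(n, mu_best + b)
    return rank

def upper_limit(n_obs, b, cl=0.90):
    mu_grid = [0.01 * k for k in range(1, 1501)]
    kept = [mu for mu in mu_grid
            if n_obs in acceptance(lambda n: poisson.pmf(n, mu + b),
                                   lr_rank(mu, b), cl)]
    return max(kept)

for b in (0.0, 3.0, 10.0):
    print(b, upper_limit(0, b))   # the limit tightens as b grows, with n = 0 fixed
```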
The modification proposed in only softens this behavior, and in addition uses Bayesian concepts in its formulation, therefore the uncompromising classical physicist will not want to consider it.
The absurdity of the result is best seen by looking at the case of zero observed events. This has been clearly pointed out in.
If there is no background, and one observes zero events, one knows that no signal event showed in the sample at hand, and one can deduce an upper limit on $`\mu `$ from this fact. If there is some level of background and one observes zero events, that implies two facts:
* a) no signal event showed in the current sample
* b) no background event showed in the current sample
The two occurrences are statistically independent, by assumption of Poisson distribution, therefore they can be considered separately. Fact b) is totally uninteresting for what concerns the signal: our only help in making decisions about $`\mu `$ is fact a), which is exactly the same information we had in the case of no background. A sensible algorithm must therefore give the same upper limit on $`\mu `$ in the zero-count case, whatever the expected background.
This failure is particularly important if one considers that this behavior stems from the same root as the other problem that the LR proposal is intended to cure. In fact, the problem can be summarized by saying that the low likelihood of occurrence of event b) "fools" the algorithm into making up a very narrow confidence region that has no basis in what we actually learned from the experiment. This is exactly the same mechanism that leads to empty regions with the older rules: the rarity of a set of results is taken as a good reason for rejecting values of the parameter even if that rarity is uncorrelated with the value of the parameter. This should make us dubious about whether the approach of LR ordering really addresses the issue.
This problem was not missed by the proponents of the method, who devote a section of their paper to it . They maintain that the concern for this problem is motivated by "a misplaced Bayesian interpretation of classical intervals", but nonetheless suggest that in this kind of case the experimentalist should not publish just the limits, but also an additional quantity representing the "sensitivity" of the experiment. This, however, avoids the question of how to provide an interval that properly and completely represents the results of the measurement, including all information about the sensitivity, which is the question the present work tries to address.
A nice classical solution to this dilemma has been presented in Ref. , based on explicitly eliminating the spurious information from the calculation of the coverage, while still ordering the observable values according to LR. The amount of background events in the sample is forced to be less than the total number of observed events. This modification removes the paradoxical behavior of the limits, and produces results which seem reasonable from all points of view, so the particular problem of the Poisson with background might be considered as solved.
However, the above procedure appears to be ad hoc, and it is not clear how to apply it to different situations, like the other examples of this section. In addition, the example that follows will show an important weakness of that variant, and of any other variant based on the LR ordering rule.
#### 2 Gaussian with positive mean
This is another very important example: $`p(x|\mu )`$ is gaussian, but the condition $`\mu >0`$ holds. If one tries to apply the Crow band, which is the usual choice for the unbounded case, one gets empty confidence regions for $`x<-1.96`$ at 95% CL. This does not happen if one uses the LR ordering rule, as extensively discussed in .
This example makes a very good case for the LR method, but unfortunately it is easy to expose its instability. Consider a modification of the gaussian pdf obtained by adding a second, very narrow gaussian of the same height but negligible width and area. Let the second gaussian be centered at a different location, for instance $`\mu _2=-1/\mu `$. What is important is just that $`\mu _2\to -\infty `$ as $`\mu \to 0`$.
Intuitively, this is a very small change of the problem: it just means that in a negligible fraction of cases the measurement $`x`$ will fall in a different, narrowly determined location. This is not so artificial an example as it may seem, since it is quite possible for an experimental apparatus to have rare occurrences of singular responses.
How should the confidence regions change, according to common sense? If the probability of this occurrence is very small (let's say $`\ll 1-CL`$), one would just ignore the possibility and quote the same confidence limits as before. One would therefore want a sound algorithm to yield a band very similar to that of the unperturbed case. Unfortunately, this does not happen with the LR ordering method: since the ordering is based on the value of the maximum of the likelihood, the narrow peak of negligible physical meaning is capable of altering the ordering completely: the maximum of the Likelihood is now a constant for every value of $`x`$, and the resulting band goes back suddenly to something very similar to the old Crow band, that is, just ordering by $`p(x|\mu )`$. For large negative deviations, the intervals are not exactly empty, but contain a tiny interval centered around the peak of the second gaussian. However, this hardly makes the result satisfying from a physicist's point of view. When observing a large negative deviation, it is much more likely that it comes from the tail of the main gaussian rather than from the "extremely rare" second gaussian, and one would like the confidence limits to reflect this fact. The response of the LR method, which instead "completely forgets" the main gaussian to focus on the secondary peak, no matter how narrow, appears as a crucial failure. From a practical point of view, this kind of instability of the solution means that the response of the apparatus must be known with infinite precision in order to be able to use the algorithm.
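The collapse of the ordering is easy to verify numerically (a minimal sketch; parameter values are illustrative and the $`O(w)`$ normalization correction is ignored):

```python
import numpy as np

def pdf(x, mu, w=1e-4):
    """Unit-width gaussian at mu > 0 plus an equal-height satellite of width w at -1/mu."""
    g = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / np.sqrt(2 * np.pi)
    return g(x - mu, 1.0) + g(x + 1.0 / mu, w)

for x in (-2.0, -5.0, -10.0):
    mus = np.concatenate([np.linspace(1e-3, 20.0, 20001), [-1.0 / x]])
    # ~1/sqrt(2*pi) = 0.3989, instead of the unperturbed exp(-x**2/2)/sqrt(2*pi)
    print(x, pdf(x, mus).max())
```

With $`\mathrm{sup}_\mu p(x|\mu )`$ pinned near the same constant for all negative deviations, the LR rank there reduces essentially to $`p(x|\mu )`$ itself, i.e. the Crow ordering, however small $`w`$ is.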
Note that the problem is intrinsic to the ordering, therefore any modification of the method acting only on the coverage criteria, as the one proposed in for handling the Poisson case, will be plagued by the same problem.
It is also worth reflecting on what happens if the second peak is not so narrow, but rather comparable to the main peak. In that situation, the LR algorithm might give a result which is not so violently in contrast with common sense. Yet, it is hard to avoid the suspicion that also in that case the result will be, in some ill-defined way, not what a physicist wants.
#### 3 Empty confidence regions are not ruled out by the LR method
The previous example showed a case where LR ordering yields negligibly narrow confidence regions. For completeness, it is worth noting that it is also possible to formulate examples where LR-ordering produces completely empty confidence regions on wide ranges of the observable, contrary to what is generally assumed.
This can be obtained, for instance, by adding to the pdf a narrow, wiggling ridge of ever increasing height, still of negligible area. For instance, in the previous example one might simply add to the pdf the function:
$$\epsilon N(x_0+\delta \mathrm{sin}(\mu ),\frac{\alpha }{1+\mu })$$
where $`N(m,\sigma )`$ stands for the Gaussian function with unit area, mean $`m`$ and standard deviation $`\sigma `$, and $`\delta `$, $`\alpha `$, and $`\epsilon `$ are real numbers ($`\alpha `$ and $`\epsilon `$ are "small"). It is easy to see that the likelihood function for any $`x\in [x_0-\delta ,x_0+\delta ]`$ has periodic "spikes" with a height that increases without limit as $`\mu \to \infty `$, therefore the maximum likelihood is infinite, and the LR is zero for all $`x\in [x_0-\delta ,x_0+\delta ]`$ and all values of $`\mu `$, including the points on the spikes themselves. As a consequence, all points in the interval $`x\in [x_0-\delta ,x_0+\delta ]`$ will get the lowest possible rank in the ordering, for all $`\mu `$, so they will be the last to be added to the accepted region. If the interval is chosen in such a way that $`p(x\in [x_0-\delta ,x_0+\delta ]|\mu )<1-CL`$ (which is always possible whatever the pdf), then the confidence region will be the empty set for all $`x\in [x_0-\delta ,x_0+\delta ]`$.
The example is clearly very artificial, but is nonetheless valuable in signaling the existence of a problem.
#### 4 Uniform distribution
An example which is simpler than the previous ones and entirely plausible in practice, yet presents unexpected difficulties, is the uniform distribution:
$$p(x|\mu )=1\text{ if }\mu <x<\mu +1\text{, otherwise 0}.$$
(3)
Let us consider the case in which the domain of $`\mu `$ is the full set of real numbers. The upper/lower limits present no trouble in this case, but both the Crow band and the LR band are indeterminate, since every value of $`x`$ gets assigned the same rank, whatever $`\mu `$; therefore any band satisfying Neyman's condition is compatible with both orderings. In particular, note that LR-ordering does not exclude empty confidence intervals in this example. Again this indicates that the root of the difficulties that motivated this approach has not, in fact, been eliminated.
Here, too, we are confronted with instability of the solution: a very small perturbation of this pdf, obtained by adding an arbitrary "infinitesimal" function with zero total integral, will resolve the ambiguity in a way that depends entirely on the exact form of the perturbation, however small its size. In this case it is not even necessary to consider narrow spikes as in the previous examples: the instability can be obtained with perfectly smooth and slowly varying functions.
Also, there is no obvious way to extend the modifications suggested in for the Poisson-with-background example to this case.
#### 5 Indifferent distributions
In order to better illuminate the nature of the problem that is frustrating the attempts at obtaining sound classical limits, it is useful to examine a "trivial" example: a probability distribution that does not depend on the value of $`\mu `$:
$$p(x|\mu )=p(x)$$
(4)
For simplicity, consider the specific case of a distribution of a discrete observable with just two values ("A" and "B") depending on a parameter with just two possible values ("P" and "Q"), given by the following table:
| | P | Q |
| --- | --- | --- |
| A | 0.95 | 0.95 |
| B | 0.05 | 0.05 |
(5)
Clearly in this case the observable is not providing any information on the parameter. What is a "sensible" band in this case? Obviously no conclusion can be drawn, so it should be clear that the only acceptable band is the one that includes the whole table. On the other hand, most rules will yield an empty region in case "B".
The LR is constant everywhere, so LR-ordering allows you to choose any Neyman band. By virtue of the economy principle that unneeded overcoverage is to be avoided, the best solution appears to be the band that covers only the upper row of the table and leaves an empty region for case "B", just as the Crow rule does.
In principle, nothing forbids one from even choosing arbitrarily to reject one of the two values "P" and "Q" and keep the other when "B" is observed, thus accepting some overcoverage. That choice is very unreasonable from a physicist's viewpoint: it means one can conclude essentially anything from the occurrence of event "B". For instance, when investigating the neutrino mass, one can perform an "experiment" by doing something completely unrelated, for instance by throwing a pair of dice. Since the probability of getting, say, 6 on both dice is $`(1/6)^2=1/36<3\%`$, if that event actually occurs, one is entitled to exclude a mass range of one's choice at 97% CL. I think very few persons would accept this as a sensible inference, yet the procedure is perfectly correct from the point of view of Neyman's definition, and is compatible with LR-ordering, too.
Here the coverage criterion clearly shows its inadequacy: to obtain a sensible answer it is not enough that no more than 5% of the outcomes be excluded for every $`\mu `$; it would also be necessary to make sure in some way that the choice one makes is not based on information irrelevant for distinguishing different values of the parameter.
It should be clear at this point that this is the fundamental weakness of Neyman's definition (1), from which all the problems arise. As for the LR ordering rule, it appears to be going somehow in the right direction, but it is unable to provide a clear-cut answer to a problem as simple as this one.
Things get even worse if a small perturbation of the indifferent distribution is introduced, leading to the following situation:
| | P | Q |
| --- | --- | --- |
| A | $`0.95+\epsilon `$ | $`0.95-\epsilon `$ |
| B | $`0.05-\epsilon `$ | $`0.05+\epsilon `$ |
Common sense clearly suggests drawing no conclusion in this case either (not at 95% CL, at least).
The LR method instead now unambiguously provides the answer of a confidence region covering all but the lower-left cell. This means that no conclusion is drawn from observing event "A", but "P" is excluded if event "B" is observed.
Admittedly, "Q" is now the maximum-Likelihood estimate of the parameter, but the difference with the previous case of "crazy inferences" is infinitesimal. When we claim that the conclusion has 95% CL, what meaning can we attach to this number if, however small the difference, the CL is always 95%? It looks like too strong a statement for an infinitesimal difference between the two hypotheses. Note that the band obtained for this case is exactly the same that would have been obtained, at the same CL, from the following distribution:
| | P | Q |
| --- | --- | --- |
| A | $`0.95+\epsilon `$ | $`0.05+\epsilon `$ |
| B | $`0.05-\epsilon `$ | $`0.95-\epsilon `$ |
yet the two situations are intuitively very different in terms of the sensitivity of the experiment to the value of the parameter.
This example casts serious doubt on the meaningfulness of evaluations of the sensitivity of a planned experiment based on expected confidence limits calculated with any current rule. Again, this is a very serious drawback, and the failure to handle such a simple example should make us suspicious of many other bands, or perhaps of all Neyman confidence bands.
## III Proposal of a classical solution
### A Nature of the problem
All proposed classical rules for building confidence bands meet with severe difficulties when confronted even with simple problems. This is true also for the recently proposed LR-ordering, which appears to do only slightly better than older recipes.
It is worth noting that the characteristics of the most common pdf's taken as examples of the difficulties (the first two of the previous section) have often led people to speak of a "problem of bounded regions" or of "small signals". However, the additional examples provided should be sufficient to clarify that the presence of a boundary, or the smallness of the number of counts, are just accidents with no connection to the root of the problem.
One should ask what exactly the reason is for considering the previous examples of confidence limits unacceptable. Their results are obviously mathematically correct. The problem is not of a "mathematical" or "statistical" nature but of a "physics" nature: one is led to setting confidence limits which are intuitively "unpleasant" to the physicist, sometimes even paradoxical. We do not want to accept a result like an empty confidence region, which we know is false no matter what $`\mu `$ is, because we feel we could make better inferences by somehow taking that fact into account. Indeed, a result of this kind does not convey much useful information to the reader. The same can be said for the softer pathology of "unreasonably small" confidence regions.
It is hard to avoid the suspicion that problems of the kind exemplified above might be occurring even in other cases that we usually regard as problem-free, simply because the problem is not so apparent to intuition.
Each of the problems encountered lies in the choice of a particular confidence band, and in principle can be cured by simply choosing a different band. However, one cannot content oneself with avoiding the problem case by case by rejecting unreasonable results "by hand". The weaknesses described above are important enough to undermine the physicist's belief in the meaning of the CL. It is therefore necessary to find a general way to avoid any such "unwanted" conclusion, even in possibly softer, hidden forms.
The question is: can we state precisely what properties we require of a confidence band in order to call it "physically sensible"? Does a single well-defined procedure exist to construct one in a generic case?
Let us look in more detail at the meaning of confidence limits.
Neyman's definition of CL can be phrased in the following way:
"An algorithm is said to have Confidence Level CL if it provides correct answers with probability at least CL, whatever the value of $`\mu `$ (or whatever its probability distribution, if it has one)"
From a practical point of view, this means that if one considers, for instance, the set of all published limits at 95% CL, the expected fraction of them which is actually "wrong" (that is, for which the limits do not include the true value) is 5% (or smaller, if there is some overcoverage).
For comparison, the definition of the Bayesian credibility level, when phrased in a similar way, sounds like: "An algorithm is said to have a Bayesian credibility level BL if all answers it produces have at least a probability BL of being correct, provided $`\mu `$ has the (known) probability distribution $`\pi (\mu )`$". In exchange for an additional assumption (the a-priori distribution), the Bayesian method provides a probability statement about each measurement.
The classical approach cannot possibly do that, since the concept of the probability of a single result being correct simply cannot be formulated in the classical language: each particular result is either true or false, since the unknown parameter is taken to have one (if unknown) value, rather than a distribution of possible values. Superficially, however, the classical method seems to provide a close performance, when saying that the whole set of results contains only a limited fraction of wrong results ($`<1-CL`$).
There is, however, a subtle difference between a statement extracted from a sample containing 95% correct statements, and a statement that has a 95% probability of being true. The difference is that in the first case some manifestly false or very unlikely statements are allowed to be part of the set (e.g., empty confidence regions), provided they are a minority, while in the second case this is not possible: every single possible Bayesian inferred result is forced to be as likely as all the others.
This is the fundamental reason for the absence of pathological conclusions in the Bayesian approach, which keeps tempting the classical physicist. Its appeal is so strong that even the purest classical papers show some slight inclination toward Bayesianism.
As an example, the method suggested in for evaluating an experiment's sensitivity uses the concept of "average limit", which in general requires an a-priori distribution of the parameter to be assumed, even if the paper considers only the special case of no signal for that purpose. In , after a nice suggestion for solving the Poisson problem classically, the results are compared to Bayesian results and their similarity is taken as support for their soundness, notwithstanding the fact that if one were to change the a-priori distribution to something different, the Bayesian result would change completely, while the classical result would always stay the same.
It therefore becomes imperative to ask the question: is there any way to give the classical method the same solidity without introducing any Bayesian element? If there is none, then it may be simpler to abandon the classical method completely and use Bayesian concepts instead.
The purpose of this paper is to suggest that there is indeed a way to obtain the desired properties in the classical framework.
The definition of CL ensures that the result will be correct in at least a fraction CL of the cases. An empty region is never a correct conclusion, because $`\mu `$ has some value by hypothesis. The definition of CL is not meant to prevent wrong conclusions: it just makes sure that they happen rarely, the only limitation on empty CRs being that they must not occur with probability greater than 1-CL, whatever the value of $`\mu `$. In fact, it is easy to see that, given a set of values of $`x`$ that has total probability $`<1-CL`$ for every $`\mu `$, it is always possible to assign the empty set as the confidence region for all $`x`$ in the set, provided the rest of the band is properly adjusted.
This fact may even appear to be a kind of inescapable "law" of classical statistics. After all, in formal logic, from a contradictory (impossible) assumption one can rigorously derive any statement. We might have to accept a kind of probabilistic analogue as well: that from the occurrence of an unlikely event one can statistically infer any statement.
However, this is actually not the case. What disturbs the physicist is not the mere possibility of getting wrong results, which he obviously has to accept, but that one might get a wrong result and know it. One could say that those are "unlucky" experimental results. But there are good reasons to refuse to surrender to the occurrence of "unlucky" results: common sense suggests that once we get a result that we know to be uncommon, there should be a way to correctly account for its rarity, rather than being confused by it.
In a way, what we really need is to make sure that all experimental outcomes get uniform treatment, as in the Bayesian method.
In this respect, it is worth noting that the strength of the definition of CL lies in its invariance under any transformation of the space of the parameters $`\mu `$, even a non-continuous one. That is, all points of the parameter space get the same treatment, the metric and even the topology of the parameter space being irrelevant. We could even say that this is the essence of classical statistics. This is to be contrasted with the Bayesian approach, where the a-priori distribution sets a well-defined metric in the parameter space. Why is it, then, that the classical methods seem to be so much worse than the Bayesian ones at assuring invariance in the outcome space?
As a matter of fact, Neyman's definition of CL (1) is symmetric for all values of $`x`$. However, most rules for constructing confidence bands break this symmetry: it is easy to see that by performing a change of variable in $`x`$ one obtains different bands (LR ordering is an important exception; see Sec. IV for a discussion of this point). This allows the mere fact that a particular experimental outcome is unlikely for some parameter value to be used to exclude that value, regardless of the fact that the outcome might be unlikely for reasons entirely different from the value of the parameter being sought. That probability might be low for every value of the parameters, so the exclusion of that particular value of $`\mu `$ is made on the basis of irrelevant information. Neyman's construction, while compatible with total symmetry in $`x`$, does not explicitly enforce it, because it applies independently to each value of $`\mu `$, and there is no way to tell whether the distribution of $`x`$ has any dependence on the value of $`\mu `$.
We therefore need to find a way to prevent the introduction of information irrelevant to the determination of the parameters into the choice of the confidence band. It should be intuitively clear that there is a connection between the contamination from irrelevant information and the unequal treatment of the various possible experimental outcomes that is at the basis of the paradoxical results.
The present approach is in some way the opposite of the attempts to improve the classical method by the addition of Bayesian elements: it goes in the direction of an even stricter classical orthodoxy. The use of any metric or topological property of the $`x`$ space is regarded as an "a priori bias" producing unequal treatment of some values. That is a kind of contamination from "Bayesianism" that needs to be eradicated from a pure classical method, which ought to use only the information contained in the pdf.
### B A stronger concept of Confidence
We formalize the request that the choice of Confidence Regions must not be based on irrelevant information as the following requirement.
Suppose we take a subset of $`x`$ values and rescale all the likelihoods $`p(x|\mu )`$ by the same arbitrary factor (we have to re-normalize the pdf over the rest of the $`x`$ values after that, of course). A physically sensible rule for constructing confidence bands must be invariant under this kind of transformation, since the overall absolute level of probability of the events $`x`$ does not affect the information that can be obtained on $`\mu `$. More precisely, we want to restrict the set of all possible confidence bands to a subset that satisfies the following property, which will be called local scale invariance:
DEFINITION: Let $`x\in X`$ be an observable and $`\mu \in M`$ a parameter. Let $`R`$ be a rule for selecting confidence bands, that is, a function that associates to each possible distribution $`p(x|\mu )`$ a set of Neyman confidence bands with a given CL. We say that $`R`$ is a locally scale-invariant rule if for any two pdf's $`p(x|\mu )`$ and $`p^{\prime }(x|\mu )`$ such that $`p^{\prime }(x|\mu )=cp(x|\mu )`$ for all $`\mu \in M`$ and for all $`x\in \chi \subset X`$ (with $`c`$ a positive constant), and for every confidence band $`B\in R(p)`$, there exists a band $`B^{\prime }\in R(p^{\prime })`$ such that $`B(x)=B^{\prime }(x)`$ for every $`x\in \chi `$.
This requirement is simple, general, and intuitively satisfying: it says that whatever algorithm we use to choose a CR for a certain set of possible observations, it must not be influenced by anything other than the dependence on $`\mu `$ of the probability of the observations in question. Note that both the observable and the parameter space can be completely generic sets. We keep requiring all bands to comply with Neyman's condition, which however does not by itself guarantee the above property; neither does any of the proposed algorithms for producing confidence bands, including LR-ordering. The latter point appears clearly from our previous discussion of the example of the Poisson with background.
It is interesting to observe that the rank assigned to $`x`$ by the LR-ordering rule is indeed invariant under the above transformation, but the coverage criterion used to decide when to stop adding values of $`x`$ to the acceptance region is not. The normalization constant creates the difficulty here, since a region rejected in one case may not be rejectable in the other, because its contribution to the total integral may be too large.
We will now show that this seemingly weak requirement is actually very stringent in determining the set of allowed confidence bands, and that it can be turned into a well-defined procedure for constructing bands.
This is seen from the following theorem.
THEOREM: The largest set of locally scale-invariant bands coincides with the set of bands satisfying the following requirement:
For every $`\mu \in M`$ and every $`\chi \subset X`$:
$$\frac{p(x\in \chi \wedge \mu \notin B(x)|\mu )}{sup_\mu p(x\in \chi |\mu )}\le 1-CL.$$
(6)
whenever the denominator is non-zero.
PROOF:
Part 1: All bands in a locally scale-invariant set satisfy condition (6).
Suppose (6) does not hold. Then there is a band $`B`$, a subset $`\chi `$ and a parameter value $`\overline{\mu }`$ such that
$$\frac{p(x\in \chi \wedge \overline{\mu }\notin B(x)|\overline{\mu })}{sup_\mu p(x\in \chi |\mu )}>1-CL$$
(7)
Then consider a new pdf defined inside $`\chi `$ by:
$$p^{\prime }(x|\mu )=\frac{p(x|\mu )}{sup_\mu p(x\in \chi |\mu )}$$
and arbitrarily extended outside $`\chi `$. This is always possible since by construction $`\int _\chi p^{\prime }(x|\mu )dx\le 1`$ for every $`\mu `$.
Obviously, for every $`\mu `$:
$$p^{\prime }(\mu \notin B(x)|\mu )\ge p^{\prime }(x\in \chi \wedge \mu \notin B(x)|\mu )$$
And from (7):
$$p^{\prime }(x\in \chi \wedge \overline{\mu }\notin B(x)|\overline{\mu })=\frac{p(x\in \chi \wedge \overline{\mu }\notin B(x)|\overline{\mu })}{sup_\mu p(x\in \chi |\mu )}>1-CL$$
then
$$p^{\prime }(\overline{\mu }\notin B(x)|\overline{\mu })>1-CL$$
which contradicts Neyman's condition. Therefore $`B`$ could not be part of an invariant set, contradicting the hypothesis. This proves eq. (6).
Part 2: The set of all bands satisfying (6) is a locally scale-invariant rule.
First of all, note that (6) implies Neyman's condition as a special case (just take $`\chi =X`$).
Take any $`p`$, $`B`$, $`\chi \subset X`$, $`c>0`$, and $`p^{\prime }=cp`$ for all $`x\in \chi `$. Note that the ratio in (6) does not change when the pdf is scaled by a constant, so if $`B`$ satisfies (6) for $`p`$ in $`\chi `$ and all its subsets, it will also satisfy it for $`p^{\prime }`$. Let us define $`B^{\prime }=B`$ in $`\chi `$ and $`B^{\prime }=M`$ (the whole parameter space) outside $`\chi `$. Then, for any $`\xi \subset X`$ we have:
$`{\displaystyle \frac{p^{\prime }(x\in \xi \wedge \mu \notin B^{\prime }(x)|\mu )}{sup_\mu p^{\prime }(x\in \chi |\mu )}}=`$
$`={\displaystyle \frac{p^{\prime }(x\in (\xi \cap \chi )\wedge \mu \notin B^{\prime }(x)|\mu )}{sup_\mu p^{\prime }(x\in \chi |\mu )}}`$
$`\le {\displaystyle \frac{p^{\prime }(x\in (\xi \cap \chi )\wedge \mu \notin B^{\prime }(x)|\mu )}{sup_\mu p^{\prime }(x\in (\xi \cap \chi )|\mu )}}=`$
$`={\displaystyle \frac{p(x\in (\xi \cap \chi )\wedge \mu \notin B(x)|\mu )}{sup_\mu p(x\in (\xi \cap \chi )|\mu )}}\le 1-CL`$
This means $`B^{\prime }`$ satisfies (6) for the distribution $`p^{\prime }`$. Since we defined $`B^{\prime }=B`$ in $`\chi `$, this proves the local scale-invariance of the set of bands given by (6).
Parts 1 and 2 together show that the two sets coincide, concluding the proof. Note that they implicitly prove that the "largest set of locally scale-invariant bands" indeed exists, which was not granted a priori. (We could have proved the existence beforehand by observing that the union of any number of invariant rules is still an invariant rule; therefore the largest invariant rule is immediately identified as the union of all possible invariant rules.)
Condition (6) is clearly connected with the intuitive concept of uniform treatment of all experimental results, and offers a much clearer indication than the equivalent scale-invariance requirement of how to construct a satisfactory confidence band in practice.
It also appears as a natural extension of Neyman's CL concept, because it amounts to simply applying at the local level the same requirement Neyman imposed on the observable space as a whole.
This fact suggests an alternative formulation: rather than regarding (6) as a rule for identifying a particular subset of confidence bands, we can take this condition as a new, more restrictive definition of limits within the classical framework ("Strong Confidence Limits") and define a new quantity ("strong CL", or "sCL") in analogy with the usual CL (eq. (1)):
$$sCL(B)=1-\underset{\mu ,\chi }{sup}\frac{p(x\in \chi \wedge \mu \notin B(x)|\mu )}{sup_\mu p(x\in \chi |\mu )}$$
(8)
The strong CL is then a quantity that can be evaluated for a completely arbitrary band, just like the regular CL. Note that $`sCL\le CL`$ always holds, in accordance with the greater strength of the concept.
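For a discretized problem, eq. (8) can be evaluated directly, at least for a small number of outcome regions. The following sketch is an illustrative brute-force implementation (the function name and the matrix representation are our own choices, not taken from the text); its cost grows exponentially with the number of $`x`$ regions:

```python
import itertools
import numpy as np

def strong_cl(P, accepted):
    """Brute-force evaluation of the strong confidence level of eq. (8).

    P[i, j]        = p(x_j | mu_i) for a discretized pdf.
    accepted[i, j] = True when mu_i belongs to the region B(x_j).
    """
    n_mu, n_x = P.shape
    worst = 0.0
    for r in range(1, n_x + 1):
        for chi in map(list, itertools.combinations(range(n_x), r)):
            den = P[:, chi].sum(axis=1).max()   # sup_mu p(x in chi | mu)
            if den == 0.0:
                continue                        # condition imposed only for non-zero denominators
            for i in range(n_mu):
                # numerator: p(x in chi AND mu_i excluded | mu_i)
                num = P[i, chi][~accepted[i, chi]].sum()
                worst = max(worst, num / den)
    return 1.0 - worst
```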
## IV Properties of Strong Confidence Regions
The meaning of strong confidence can be summarized as follows: take a subsample of possible experimental results, however defined. While it is still not guaranteed that the probability of their being correct is at least CL, as with Bayesian methods, what we have gained over Neyman's CL is that, independently of the a-priori distribution of $`\mu `$, the number of wrong results is a small fraction of the maximum expected number of results of that kind. That is, there may be distributions of $`\mu `$ that lead all results in that category to be false, but in that case those results will present themselves much more rarely than when they lead to correct conclusions, and this holds for all possible results in the same way. This is basically as far as one can go within the classical framework in terms of getting "individually certified" results. (Note that, even if one assumes that a distribution of the parameter exists, a probability statement about each result is impossible to obtain in the classical framework without knowing the distribution a priori, unless one chooses the trivial solution of the band covering the whole space.)
The possibility of empty confidence regions is ruled out here in full generality, unlike in the case of LR ordering: if there is a set $`\chi `$ of values for which the confidence region is empty, then obviously there exists a $`\mu `$ for which the ratio on the left side of (6) is arbitrarily close to 1. Unless CL=0, this means the total probability of $`\chi `$ is identically zero for all $`\mu `$.
It is also easy to see that strong bands are stable under small perturbations of the pdf like those previously discussed in the examples. This is due to the fact that the requirements being made are based on integrals of the pdf rather than on its pointwise values. The integrals over all subsets that are not too small stay nearly the same after the addition of perturbations of small total probability. The effect is only seen on small scales, and it just forces the addition of the small region where the perturbation is large to the unperturbed band.
### A Independence from change of variable
The strong bands defined above have another interesting property: they are invariant under any change of variable in $`x`$-space. This is obvious since the probabilities appearing in the ratio in (6) scale proportionally under any change of variable. We stressed before that the strength of the classical approach lies in its independence from the metric in $`\mu `$-space, that is, in its equanimity with respect to every value of $`\mu `$, in contrast with the Bayesian approach, where all values are explicitly weighted for relative a-priori importance.
It should be clear as well that the use of a particular metric of $`x`$ space in constructing a CR is a way to introduce a-priori discriminations between values of $`x`$, that is, to introduce arbitrary (irrelevant) information into the choice of the confidence band, so it is not surprising that this invariance is a consequence of our approach.
Amongst all common rules for selecting CRs, LR-ordering is the only one that is independent of transformations of $`x`$. The partial success of the LR-ordering principle might in the end be traced back to its compliance with this requirement of independence from the metric in $`x`$ space. In fact, the LR-ordering rule is equivalent to the narrowest band in that particular metric in parameter space that makes the maximum likelihood value constant for all $`\mu `$.
We have seen, however, that while this property is desirable, and probably necessary for physically sensible results, it is not sufficient to ensure them.
### B Construction of Strong Confidence Regions
A simple and useful corollary of (6) is:
COROLLARY: If the observable is discrete, then for every value of $`x`$, any strong band always includes all values of $`\mu `$ such that $`p(x|\mu )>p(x|\widehat{\mu })(1-sCL)`$. (This does not hold in full generality for a continuous variable, since it is always possible to choose an arbitrary band for single isolated $`x`$ without affecting any of the formulas above, which always refer to events or sets of events of non-zero total probability. However, it is intuitively expected to hold for continuous variables too, provided some regularity condition is imposed.)
PROOF: just put $`\chi =\{x\}`$ in (6).
This immediately shows another appealing feature of strong CL: it is forbidden to exclude any value of $`\mu `$ having a likelihood "close" to the maximum-likelihood value. Again, this is not a property of any other method (LR-ordering merely tends to give such values somewhat high ranks, and $`\widehat{\mu }`$ gets the highest rank whenever it exists, but there is no guarantee of their actual inclusion in the band; other rules do even less than that).
Note that, just as with Neyman's CL, in a generic case there may be many different "legal" bands for a given pdf and sCL; therefore the question of the choice between them reappears. However, since there is now no fear of unreasonable results, the only reason for pinpointing a general and unique choice is to prevent the possible distorted practice of choosing the band after the experiment, and the question is largely a matter of convenience. In order to be coherent with the spirit of the current approach, however, the choice must be formulated in such a way as to be invariant under any transformation of $`x`$.
For instance, a good choice might be to minimize the coverage for every value of $`\mu `$ independently. This yields the lowest possible CL for the given sCL. Conversely, the bands chosen in this way will have the highest sCL for a given CL. They can with good reason be considered the "best bands" for a given CL, in case an experimenter wishes to fix the desired value of the CL as usual, rather than the sCL. Obviously, if the maximum sCL corresponding to a given CL is small or even zero, this means that no physically sensible band is possible without increasing the CL (that is, "overcovering" all values of $`\mu `$ is necessary).
In practice the freedom of choice is often very limited, since the "core region" identified by the above corollary must be completely included in any legal strong band at the given sCL. That core region is defined only by the pdf for the local values of $`x`$, and is therefore not affected by changes in the pdf for other values.
The actual determination of the bands in all but the simplest cases requires numerical calculation. We now describe a simple algorithm for constructing, in practice, a band satisfying the criteria.
In order to carry out numerical calculations, the pdf must be discretized if the parameter or the observable is continuous. This is achieved by sampling the parameter space with an N-dimensional grid, and splitting the space $`X`$ of the observable into a finite number of regions. Those regions are treated as possible discrete outcomes, and their probabilities are obtained by integrating the density $`p(x|\mu )`$ over each of the regions. In this way, a rectangular matrix is obtained, independently of the dimensionality of the $`x`$ and $`\mu `$ spaces, which may both be arbitrary-length vectors of numbers. This matrix is used as input to the following simple algorithm.
All intervals of $`x`$ are initially assigned to the rejected region; that is, the band is initialized to be empty. For each value of $`\mu `$, one loops over all possible sets composed of any number of the chosen $`x`$ regions. Condition (6) is checked on all sets in turn, and if it is found to be violated, one of the regions in the current set is added to the confidence band and removed from any further checks. The set of accepted regions obtained upon completion of this procedure for all values of $`\mu `$ is a strong band. The freedom in the choice of the region to be added to the band is what allows different solutions to be generated.
It is not obvious how to achieve the minimal-coverage requirement suggested above within this stepwise procedure. There are, however, simple and reasonable recipes for performing the choice step of the algorithm. One can, for instance, systematically choose the lowest/highest $`x`$ to get the analogue of lower/upper limits in the standard approach, or choose the region with the highest value of the ratio tested by condition (6). The latter appears particularly natural and has the interesting characteristic of representing an extension of the LR-ordering rule to the sCL context, even if the result might depend slightly on the order in which the sets of regions are checked by the algorithm.
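A minimal sketch of this construction for a discrete problem is given below (our own illustrative code, with the "highest ratio" choice implemented per single region rather than per set, which is one of several possible readings of the recipe). Since it enumerates all subsets of the still-rejected regions, it is only practical for a small number of outcome regions; note that subsets containing already-accepted regions never need to be checked, because accepted regions enlarge the denominator of (6) without contributing to the numerator.

```python
import itertools
import numpy as np

def strong_band(P, sCL):
    """Construct a strong confidence band for a discretized pdf.

    P[i, j] = p(x_j | mu_i); returns accepted[i, j] = True when mu_i is
    kept in the confidence region quoted after observing x_j.
    """
    n_mu, n_x = P.shape
    sup = P.max(axis=0)                       # sup_mu p(x_j | mu), per region
    accepted = np.zeros((n_mu, n_x), dtype=bool)
    for i in range(n_mu):
        rejected = set(range(n_x))            # start from an empty band
        violated = True
        while violated:
            violated = False
            for r in range(1, len(rejected) + 1):
                for chi in map(list, itertools.combinations(sorted(rejected), r)):
                    num = P[i, chi].sum()                     # p(x in chi, mu_i rejected | mu_i)
                    den = P[:, chi].sum(axis=1).max()         # sup_mu p(x in chi | mu)
                    if den > 0 and num > (1.0 - sCL) * den:   # condition (6) violated
                        # accept the region with the highest single-region ratio
                        j = max(chi, key=lambda k: P[i, k] / sup[k] if sup[k] > 0 else 0.0)
                        accepted[i, j] = True
                        rejected.discard(j)
                        violated = True
                        break
                if violated:
                    break
    return accepted

# the "indifferent" two-by-two table of the text: nothing can be excluded
P = np.array([[0.95, 0.05],
              [0.95, 0.05]])
print(strong_band(P, sCL=0.95))   # all entries True, as required
```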
### C Sample Applications
The definition of strong CL gives satisfying answers to all problems listed in Sec. II B.
In some cases the solution follows immediately from the corollary above.
One of them is the "indifferent" pdf, for which the conclusion that no value of the parameter can be excluded, whatever the required sCL, follows immediately, and it is stable under small perturbations of the pdf.
For the uniform distribution, the full range of $`\mu `$ for which $`L(\mu )>0`$ gets included, whatever the chosen $`sCL`$. This strong statement reflects the intuitive arbitrariness of any choice that tries to exclude some values of a parameter in favor of others having exactly the same likelihood. In fact, when a problem with a uniform pdf is encountered, most physicists do not even formulate a question of Confidence Limits, but just quote the absolute extrema of the allowed interval for $`\mu `$.
For the Poisson with background, it is easy to see that the result for the case of zero observed events will be independent of the background. The probability of zero events is $`e^{-\mu }e^{-b}`$, so by changing the expected background $`b`$ one changes the likelihood by a simple multiplicative constant. From the definition of local scale invariance one has immediately that the limits for this case cannot depend on $`b`$. This statement needs a bit of clarification: we have remarked that the strong band is not uniquely identified in a general case, and therefore one can make various choices. What is guaranteed here is that all possible choices for the limits from zero counts for a given value of $`b`$ are also acceptable choices for any other value of $`b`$. This does not imply that one must necessarily make the same choice in the two cases.
We have calculated the confidence limits in the special case of $`b`$=3.0 using the simple method outlined in the previous section, and compared the results with other classical methods in Table I. The upper, lower, and LR-ordering-analogue choices mentioned above are shown. The intervals obtained are wider than with any other method.
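A calculation of this kind can be reproduced, at least approximately, by feeding the strong_band sketch above a discretized Poisson-with-background matrix. The grid boundaries and the truncation of the count spectrum below are our own arbitrary choices (the truncation slightly distorts the normalization, so this is only a rough illustration, not a reproduction of Table I):

```python
import numpy as np
from math import exp, factorial

def poisson_matrix(b, mu_grid, n_max):
    """P[i, n] = probability of n counts for signal mu_i over background b."""
    return np.array([[exp(-(mu + b)) * (mu + b) ** n / factorial(n)
                      for n in range(n_max + 1)] for mu in mu_grid])

mu_grid = np.linspace(0.0, 10.0, 101)       # illustrative signal grid
P = poisson_matrix(3.0, mu_grid, n_max=10)  # small n_max keeps the subset search feasible
band = strong_band(P, sCL=0.90)
for n in [0, 3, 6]:                         # crude limits: extremes of the accepted mu values
    mus = mu_grid[band[:, n]]
    print(f"n = {n}: mu in [{mus.min():.2f}, {mus.max():.2f}]")
```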
This should not be considered a loss of power, but rather regarded as a reflection of the higher standard of quality required of the result. The parts of the band that would be excluded by other methods are included here on the same basis that yields the correct conclusion for the zero-count case, and that prevents crazy conclusions from the indifferent distribution: their likelihood is not low enough with respect to the maximum value. These considerations suggest that one should not consider this widening of the band a loss of power, unless one also considers the inability to draw conclusions on the mass of neutrinos by throwing dice a loss of power.
## V Summary
The current methods for determining classical Confidence Limits produce counter-intuitive results in a variety of situations. This includes the recent proposals based on Likelihood Ratio ordering, which are not immune from the problem of empty confidence regions.
By imposing the requirement that only the information contained in the shape of the Likelihood function be used in determining the limits, a stronger definition of classical limits is derived, which is a natural extension of Neyman's original condition.
These "strong confidence limits" turn out to be immune to the problem of empty accepted regions, and stable under small perturbations of the probability distribution, at the price of some widening of the usual limits.
###### Acknowledgements.
I wish to thank Luciano Ristori of Istituto Nazionale di Fisica Nucleare in Pisa for useful discussions and comments on the manuscript.
# Implications of the Optical Observations of Isolated Neutron Stars
## 1. Introduction
Since the first optical observations of the Crab pulsar in the late 1960s (Cocke et al 1969), only 4 more pulsars have been seen to pulsate optically (Middleditch and Pennypacker (1985); Wallace et al (1977); Shearer et al (1997); Shearer et al (1998)). Four of these five pulsars are probably too distant to have any detectable thermal emission. For the fifth (and faintest) pulsar, PSR 0633+17, the emission has been shown to be non-thermal (Martin et al 1998). Many suggestions have been made concerning the optical emission process for these young and middle-aged pulsars. However, the most successful phenomenological model (both in terms of its simplicity and its longevity) has been that proposed by Pacini (1971) and, in modified form, by Pacini and Salvati (1983, 1987). In general they proposed that the high energy emission comes from electrons radiating via synchrotron processes in the outer regions of the magnetosphere.
In recent years a number of groups have carried out detailed simulations of the high-energy processes. These models divide into two groups: those with emission low in the magnetosphere (polar cap models) and those with the acceleration nearer to the light cylinder (outer-gap models). Both types have problems explaining the observed features of the limited selection of high energy emitters. Both suffer from arbitrary assumptions concerning the sustainability of the outer gap and the orientation of the pulsar's magnetic field to both the observer's line of sight and the rotation axis. Furthermore, observational evidence (see for example Eikenberry & Fazio (1997)) severely limits the applicability of the outer gap to the emission from the Crab. However, they have their successes: the total polar-cap emission can be understood in terms of the Goldreich and Julian current from in or around the cap, and the Crab polarisation sweep is accurately reproduced by an outer-gap variant (Romani & Yadigaroglu (1995)).
It is the failure of the detailed models to explain the high energy emission which has prompted this work. We have taken a phenomenological approach to test whether Pacini-type scaling is still applicable. Our approach has been to try to restrict the effects of geometry by taking the peak luminosity as a scaling parameter rather than the total luminosity. In this regard we are removing the duty-cycle term from PS87. Furthermore, it is our opinion that, to first order, the peak emission represents the local power density along the observer's line of sight.
## 2. The Phenomenology of Magnetospheric Emission
The three brightest pulsars (Crab, Vela and PSR 0540-69) are also amongst the youngest. Table 1 shows the basic parameters for these objects. However, all the pulsars have very different pulse shapes, resulting in very different ratios between the integrated flux and the peak flux. Table 1 also shows the peak emission (taken as the emission at the top of the largest peak). Their distances imply that the thermal emission should be low (in all cases $`<`$ 1% of the observed emission).
Of all the optical pulsars, PSR 0633+17 is perhaps the most controversial. Early observations (Halpern & Tytler (1988), and Bignami et al (1988)) indicated that Geminga was a ~25.5 m<sub>V</sub> object. Subsequent observations, including HST photometry, appeared to support a thermal origin for the optical photons, albeit requiring an arbitrary assumption of a cyclotron resonance feature in the optical (Mignani et al, 1998). The optical observations of Shearer et al (1998) combined with spectroscopic observations (Martin et al, 1998) contradict this view. Figures 1 and 2 show how this misunderstanding could have arisen. Figure 1, based upon data from Bignami et al (1998), shows the integrated photometry. It would be possible to fit a black-body curve through this, but only with the a posteriori fitting of a cyclotron resonance feature at about 5500 Å. Figure 2, however, shows the same point plotted on top of the Martin et al spectra; we have also included the pulsed B point. This combined data set indicates a flat spectrum consistent with magnetospheric emission, without the requirement for such an ad hoc feature. It was on the basis of these results that Golden & Shearer (1999) were able to give an upper limit on R of about 10 km.
With PSR 0656+14 there is a discrepancy between the radio distance based upon the dispersion measure and the best fits to the X-ray data. From the radio dispersion measure a distance of $`760\pm 190`$ pc can be derived, at odds with the X-ray distance of 250-280 pc from $`N_H`$ galactic models. Clearly more observations are needed to determine a parallax.
Figure 3 shows the relationship between the peak luminosity and the outer magnetic field. A regression of the form $`PeakLuminosity=aB^b`$ was performed; this led to a relationship of the form Peak Luminosity $`\propto B^{2.86\pm 0.12}`$, significant at the 99.5% level. From PS87 we would expect a relationship of the form Peak Luminosity $`\propto B^4`$ for acceptable values of the energy-spectrum exponent of the emitting electrons, in reasonable agreement with our derived relationship. The flattening of the peak-luminosity relationship for the older, slower pulsars is consistent with their having a steeper energy spectrum than the younger pulsars. However, we can state that from both polarisation studies (Smith (1988); Romani & Yadigaroglu (1995)) and from this work we expect the optical emission zone to be sited towards the outer magnetosphere. Timing studies of the size of the Crab pulse plateau indicate a restricted emission volume (~45 km in lateral extent) (Golden et al (2000)). This third point, if consistent with the first two, probably points to emission coming from a geometrically defined cusp along our line of sight. Finally, there is no evidence of optical thermal emission from these 5 pulsing optical neutron stars (Martin et al (1998); this work).
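For reference, a power-law regression of this kind is linear in log-log space, so the exponent $`b`$ can be recovered with an ordinary least-squares fit of $`\mathrm{log}L`$ against $`\mathrm{log}B`$. The sketch below illustrates the procedure with placeholder arrays; the actual pulsar values belong to Table 1 and are not reproduced here:

```python
import numpy as np

# Placeholder data, NOT the measured values: five pulsars with a roughly
# power-law trend, to show the mechanics of the fit quoted in the text.
B = np.array([1e5, 3e5, 1e6, 3e6, 1e7])        # outer magnetic field (arbitrary units)
L = np.array([2e28, 5e29, 2e31, 4e32, 1e34])   # peak luminosity (arbitrary units)

b, log_a = np.polyfit(np.log10(B), np.log10(L), 1)   # slope = power-law exponent
print(f"Peak Luminosity ~ B^{b:.2f}")                # compare with 2.86 +/- 0.12
```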
## 3. Conclusion
Over the next few years, with the advent of larger telescopes and more sensitive detectors (see for example Perryman et al (1999) and Romani et al (1999)), we can confidently expect the number of optical detections of isolated neutron stars to increase. In this region of the spectrum any potential thermal component can be separated from the strongly pulsed magnetospheric emission, allowing reliable estimates of the neutron star radius, with consequent implications for equation-of-state models. One word of caution, however: our studies (see Golden et al (1999) and this conference) indicate that the optical emission (at least from the Crab pulsar) also exhibits an unpulsed component.
## References
Bignami, G. F., Caraveo, P. A. & Paul, J. A., 1988, A&A, 202, L1
Cocke, W. J., Disney, M. J. & Taylor, D. J., 1969, Nature, 221, 525
Caraveo, P., Bignami, G. F., Mignani, R. & Taff, L. G., 1996, A&AS, 120, 65
Eikenberry, S. S. & Fazio, G. G., 1997, ApJ, 476, 281
Golden, A. & Shearer, A., 1999, A&A, 342, L5
Golden, A. et al, 2000, submitted to ApJ
Martin, C., Halpern, J. P. & Schiminovich, D., 1998, ApJ, 494, L211
Middleditch, J. & Pennypacker, C., 1985, Nature, 313, 659
Mignani, R. P., Caraveo, P. A., & Bignami, G. F., 1998, A&A, 332, L37
Pacini, F., 1971, ApJ, 163, 17
Pacini, F. and Salvati, M., 1983, ApJ, 274, 369
Pacini, F. and Salvati, M., 1987, ApJ, 321, 445
Perryman, M. A. C., Favata, F., Peacock, A., Rando, N. & Taylor, B. G., 1999, A&A, 346, 30
Romani, R. W., & Yadigaroglu, I.-A., 1995, ApJ, 438, 314
Romani, R. W. et al, 1999, ApJ, 521, L151
Shearer, A. et al, 1997, ApJ, 487, L181
Shearer, A. et al, 1998, A&A, 335, L21
Smith, F., Jones, D., Dick, J. S. P. & Pike, C. D., 1988, MNRAS, 233, 305
Wallace, P. T. et al. 1977, Nature, 266, 692
Working with Arrays of Inexpensive EIDE Disk Drives
(Including an Appendix with a December 1999 Update)
David Sanders, Chris Riley, Lucien Cremaldi, and Don Summers
University of Mississippi-Oxford
Don Petravick
Fermilab
Abstract:
In today's marketplace, the cost per Terabyte of disks with EIDE interfaces is about a third that of disks with SCSI. Hence, three times as many particle physics events could be put online with EIDE. The modern EIDE interface includes many of the performance features that appeared earlier in SCSI. EIDE bus speeds approach 33 Megabytes/s and need only be shared between two disks rather than seven disks. The internal I/O rate of very fast (and expensive) SCSI disks is only 50 per cent greater than that of EIDE disks. Hence, two EIDE disks whose combined cost is much less than one very fast SCSI disk can actually give more data throughput, due to the advantage of multiple spindles and head actuators. We explore the use of 12 and 16 Gigabyte EIDE disks with motherboard and PCI bus card interfaces on a number of operating systems and CPUs. These include Red Hat Linux and Windows 95/98 on a Pentium, MacOS and Apple's Rhapsody/NeXT/UNIX on a PowerPC, and Sun Solaris on an UltraSparc 10 workstation.
Computing in High Energy Physics Conference (CHEP '98)
August 31 - September 4, 1998
Hotel Inter-Continental
Chicago, Illinois, USA
Contact: sanders@relativity.phy.olemiss.edu
This work was supported by the U.S. Department of Energy under
grant DE-FG02-91ER40622 and contract DE-AC02-76CH03000.
Introduction
In today's marketplace, the cost per Terabyte of disks with EIDE (Enhanced Integrated Drive Electronics) interfaces is about a third that of disks with SCSI (Small Computer System Interface). Hence, three times as many particle physics events could be put online with EIDE. The modern EIDE interface includes many of the performance features that appeared earlier in SCSI. EIDE bus speeds approach 33 Megabytes/s and need only be shared between two EIDE disks rather than seven SCSI disks. The internal I/O rate of very fast (and expensive) SCSI disks is only 50 percent greater than that of EIDE disks. Direct Memory Access (DMA), scatter/gather data transfers without intervention of the Central Processor Unit (CPU), elevator seeks, and command queuing are now available for EIDE, as well as support for disks larger than 8.4 Gigabytes. PCI (Peripheral Component Interconnect) cards allow the addition of even more EIDE interfaces, beyond those already on the motherboard.
Motivation
There are a number of High Energy Physics experiments that have produced Terabytes of data. A few examples as of 12/95 are:
The efficiency of data analysis is greatly enhanced by using disk-based files of filtered Data Summary Tapes (DSTs) rather than continually loading files from tape. However, the high cost of disks has hindered more widespread use. Low-cost EIDE disks are improving this situation.
Big Disks
Tests Performed
For this paper we tested two large-capacity EIDE disks with six different operating systems and a PCI EIDE disk controller card. The six operating systems are Mac OS 8.1, Apple Rhapsody DR2, Sun Solaris 2.6, Windows 95b, Windows 98, and Red Hat LINUX 5.1 (kernel 2.0.34). The two disk drives and the disk controller card are described below:
* Quantum Bigfoot<sup>TM</sup> TX: 12 GB, 4000 RPM, 142 Mbits/sec maximum internal data rate, 12 ms average seek time.
* IBM Deskstar<sup>TM</sup> 16GP: 16.8 GB, 5400 RPM, 162 Mbits/sec maximum internal data rate, 9.5 ms average seek time.
* Promise Technologies Ultra 33<sup>TM</sup> PCI EIDE controller card: supports 4 drives, Ultra ATA/EIDE/Fast ATA-2. Cost: $50.
Both the Quantum Bigfoot<sup>TM</sup> TX 12 GB and the IBM Deskstar<sup>TM</sup> 16GP 16 GB disks were successfully tested with the following systems:
Ten Terabyte EIDE Disk Architecture
The recipe for a simple 10 Terabyte EIDE Disk Architecture is as follows:
* Attach eight 16 GB EIDE disks to each of 75 CPUs with the help of Promise PCI controller cards; the capacity arithmetic is checked in the sketch after this list.
* Since EIDE cables have a maximum length of 18 inches, it is easier to run extra DC power cables into a computer tower than to run EIDE cables out.
* Load data on these disk arrays.
* Plan to usually run analysis jobs on the same machine as the data.
* Use fast Ethernet switches to allow for remote jobs at a modest level.
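As a quick sanity check of the recipe's capacity (our own arithmetic, using the 16.8 GB Deskstar figure quoted above):

```python
disks_per_cpu, gb_per_disk, cpus = 8, 16.8, 75
total_gb = disks_per_cpu * gb_per_disk * cpus
print(total_gb, "GB total")            # 10,080 GB
print(total_gb / 1000.0, "Terabytes")  # ~10 TB, the stated target
```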
Future
Future plans may include testing the drives with Apple Rhapsody, Sun Solaris, and newer releases of Red Hat LINUX. (The 8 GB limit seen so far on Rhapsody DR2 and Solaris 2.6 may be fixed in later releases.) New technologies worth investigating include both "Lazy RAID" and Firewire<sup>TM</sup>.
Lazy RAID
Lazy RAID (Redundant Array of Inexpensive Disks) is an idea for using disk arrays that offers protection in the event of a catastrophic failure of one disk in the array. This system uses a number of data disks (say 7) plus one parity disk. If one disk dies, the parity disk allows the recovery of the data from the dead disk. One could use the RAW DEVICE interface to calculate parity with the CPU. If a disk fails, the operator would swap out the dead EIDE drive and reconstitute its contents onto the replacement drive using the parity disk and the remaining data disks. This system is well suited for use as scratch disks, where a filtered DST is placed on disk once and then read and analyzed many times. Using this scheme, the one parity disk is updated only when a file is written to (or erased from) a disk.
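The parity operation itself is a plain XOR across the data disks; reconstruction of a dead disk is the same XOR over the survivors plus the parity disk. A minimal sketch (illustrative code of ours, not part of the original scheme):

```python
from functools import reduce

def parity_block(blocks):
    """XOR parity over equal-length byte blocks, one from each data disk."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

def rebuild_block(surviving_blocks, parity):
    """Recover the dead disk's block: XOR of the parity with the survivors."""
    return parity_block(surviving_blocks + [parity])

# seven hypothetical data disks, one parity block; lose disk 3 and rebuild it
data = [bytes([i] * 512) for i in range(7)]
parity = parity_block(data)
assert rebuild_block(data[:3] + data[4:], parity) == data[3]
```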
Firewire
Firewire IEEE 1394 Specifications:
* Up to 25 or 50 Megabyte/s.
* Up to 63 devices per interface.
* Uses two twisted pair data lines.
* "Fairness" bus arbitration.
* Supported by MacOS and Windows 98.
A printed circuit board and DSP driver software would have to be developed using the TI chip set. Shown below is a Firewire-to-EIDE-disk block diagram that might allow one Terabyte per PCI slot:
Conclusion
EIDE disk arrays are an inexpensive way to add large amounts of disk space both to single workstations (and PCs) and to multiprocessor computing farms. They provide an additional layer to the data storage "cake".
1. S. Bracker, K. Gounder, K. Hendrix, and D. Summers, A Simple Multiprocessor Management System for Event-Parallel Computing, IEEE NS-43 (1996) 2457.
2. http://www.quantum.com/products/hdd/bigfoot_tx/
3. http://www.storage.ibm.com/hardsoft/diskdrdl/desk/1614data.htm
4. http://www.promise.com/html/sales/Ultra33.html
5. http://www.apple.com/powermac/g3/
6. http://www.dell.com/products/dim/xpsr/specs/index.htm
7. http://www.skipstone.com/info.html
8. http://www.ti.com/sc/docs/news/1998/98029.htm; http://www.ti.com/sc/docs/dsps/details/43/flash.htm; http://www.ti.com/sc/docs/msp/1394/41lv0x.htm; http://www.ti.com/sc/docs/storage/products/cont.htm
EIDE/ATA Disk Drive Update - December 1999
Costs of disk drives with EIDE/ATA interfaces have fallen since the Chicago CHEP '98 conference. Drive capacity and I/O rates have risen. EIDE/ATA disks beyond the old 8 Gigabyte limit now work with more platforms and cards. EIDE disks remain more than twice as cost effective as SCSI disks and now equal the internal I/O speeds of many SCSI disks. We have run RAID 5 on EIDE disks under Linux, which both stripes data across disks for speed and provides parity bits for data recovery. Tape backup may no longer be required to recover from one disk failure in a set. Arrays of EIDE disks with Linux PCs serving as disk controllers are attractive and may provide many Terabytes of economical rotating online storage. The cost of a quarter-Petabyte EIDE disk farm is approaching the cost of a StorageTek PowderHorn silo with 5000 50-Gigabyte RedWood tapes. Finally, we include more information on IEEE-1394 FireWire, which may allow Terabyte arrays of EIDE/ATA disks to be directly connected to one or more computers at up to 50 Megabytes/second per interface.
We now have our 12 GB Quantum Bigfoot TX and 16.8 GB IBM Deskstar 16GP EIDE drives running in our Sun Ultra 10 workstation. We put in a newer motherboard (Sun Part No. 375-0009-09, Date Code: 9843 DARWIN M/B) and upgraded the operating system from Solaris 2.6 to Solaris 7. We did not test the two changes individually, but one or both put us over the old 8 Gigabyte limit.
Promise Technology now sells its Ultra66 EIDE-to-PCI controller card. The Ultra66 provides up to 66 MB/s on each of two channels, with two drives per channel. The card costs $40. We use Promise's previous 33 MB/s EIDE-to-PCI card daily in our Linux PC server (mail, backup, ...) with IBM Deskstar disks.
ProMAX sells a TurboMAX/ATA 33 Host Adapter for Macintosh PCI buses. It allows adding four EIDE drives to an Apple Macintosh. Disks can be striped in pairs. FirmTek is working on an Apple Macintosh software driver for the Promise Ultra66 EIDE to PCI card.
Terabytes of Linux RAID 5 Disks
RAID stands for Redundant Array of Inexpensive Disks. Many industry offerings meet all of the qualifications except the inexpensive part, severely limiting the size of an array for a given budget. This may change. RAID on EIDE disks under Linux software that both stripes data across disks for speed and provides parity bits for data recovery (RAID 5) is now available. With redundant disk arrays, tape backup is not needed to recover from the failure of one disk in a set. This removes a major obstacle to building large arrays of EIDE disks. A RAID 5 set of eight 41 Gigabyte disks fits in a full tower case of a PC running Linux. This provides over a quarter of a Terabyte per box. The boxes would be connected using 100 Megabit/second Fast Ethernet PCI cards in each box plus Ethernet switches. This looks to be very doable.
We have done a quick test of the Linux RAID 5 software using two 25 Gigabyte IBM Deskstar 25GP EIDE disks. The host was a Pentium II with Red Hat Linux 6.0 and a Promise Technology Ultra 33 EIDE/PCI card. The test ran as expected. Naturally, half the disk space is devoted to parity with only two disks. For a real RAID 5 system, eight or more disks would be a more efficient use of space: the fraction of disk space devoted to parity equals the inverse of the number of disks in a set.
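For a two-disk test of this kind, the raidtools of that era took their configuration from /etc/raidtab; a minimal file along the following lines would describe the setup (the partition names are assumptions for illustration, not the ones used in our test):

```
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
```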
On a mundane note, disk drives typically draw two amps at 12 volts for 15 seconds when starting. Thus eight drives can draw 16 amps at 12 volts. This can tax the ratings of an inexpensive commodity 300 watt PC power supply. Care needs to be taken to choose a supply with a large portion of its wattage devoted to 12 volts. The U.S. EPA Energy Star/Green PC initiative has led to the development of a Standby command for disks that might allow a staggered startup of a disk array. The command "/sbin/hdparm -S n" will spin down disks under Linux. As array size grows, a second commodity 300 watt PC power supply might be required per case.
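One possible staggered-startup scheme, sketched below under our own assumptions (the device names are illustrative), lets idle drives spin down via the hdparm standby timeout and then wakes them one at a time by touching each disk in turn, so the 2 A spin-up currents do not overlap:

```python
import subprocess
import time

DISKS = ["/dev/hda", "/dev/hdb", "/dev/hdc", "/dev/hdd"]  # hypothetical array

for disk in DISKS:
    # reading one block forces a standby drive to spin up
    subprocess.run(["dd", f"if={disk}", "of=/dev/null", "count=1"], check=False)
    time.sleep(15)  # wait out one drive's ~15 second spin-up before the next
```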
Comparison of Quarter Petabyte Disk and Tape Storage Systems
In Table 4, we compare a quarter-Petabyte EIDE disk farm to an automated StorageTek PowderHorn tape silo with eight RedWood tape drives and 5000 50-Gigabyte tapes. The disk farm estimate includes disks, parity disks, Linux and RAID software, CPUs, motherboards, cases, power supplies, memory, Fast Ethernet PCI cards, Promise Ultra66 cards, Ethernet switches, and racks. The Linux PC that runs each disk set costs about the same as a high-end SCSI-to-PCI controller card.
To achieve a quarter Petabyte, 873 Linux PCs are required, with eight 40.9 GB disks each. One eighth of the disk space is devoted to parity for data recovery from disk failure. Care must be taken to write-protect files and disks to prevent accidental deletion. Physically, the PCs form a wall 4 high by 2 deep by 110 wide (2.4 $`\times `$ 1.1 $`\times `$ 24 meters). Each Linux PC consumes about 90 watts, equally divided between the disks and the CPU/motherboard. A dozen 24 000 BTU window air conditioners would suffice to remove this 80 Kilowatt heat load. Much less heat is generated in standby mode. The first-level network consists of 288 $75 fast Ethernet switches. A single high-end switch with 288 Fast Ethernet ports is used for the network backbone. The disks themselves can be used to transport data between sites. A high-rate experiment might generate a Terabyte of data a day which one wished to move. A Terabyte fits on 25 disks, which easily fit in a suitcase for shipping.
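The quoted totals are easy to verify (our own arithmetic check of the numbers in the text):

```python
pcs, disks_per_pc, gb_per_disk, watts_per_pc = 873, 8, 40.9, 90

raw_gb = pcs * disks_per_pc * gb_per_disk      # ~285,646 GB before parity
usable_gb = raw_gb * (1 - 1.0 / disks_per_pc)  # one disk in eight holds parity
print(usable_gb / 1e6, "PB usable")            # ~0.25 Petabyte
print(pcs * watts_per_pc / 1000.0, "kW")       # ~78.6 kW, the ~80 kW heat load
```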
In summary, the disk farm cost is not too much greater than the tape silo cost, and the performance of the disks is far better. One also gets a Teraflop of computing power as a free bonus, and the disk farm encapsulates data with instructions in physical computing objects, which can be exploited to increase efficiency. Disk farm sizes can be scaled with great flexibility! It is sometimes difficult for a university to buy a whole tape silo. Now everyone can have the benefit of online data.
Terabyte Arrays of IEEEโ1394 FireWire Disks
The amount of disk space one can connect to a CPU directly with 18โ EIDE cables is currently less than a Terabyte. In some applications, one might want to access more data with fast local disk and not suffer the overhead of network software. One may be able to use four IEEEโ1394 FireWire buses with a single CPU to attach up to 63 inexpensive EIDE disks per bus for a total of 10 Terabytes of local storage at 200 MB/s. FireWireโs peerโtoโpeer topology also adds significant new functionality by allowing multiple computers to share multiple disks directly on a single bus.
Symbios Logic/LSI Logic has a $13 IEEEโ1394 to ATA/ATAPI controller chip . The SYM13FW500 integrates a 400 Mbits/s IEEEโ1394 (FireWire) physical interface (PHY) with an ATA/ATAPI interface, all on a lowโpower CMOS IC. Each SYM13FW500 supports two ATA/ATAPI devices. Wyle Electronics distributes the part. Recently, Oxford Semiconductor has also introduced an IEEEโ1394 to ATA/ATAPI controller chip, the OXFW900 . Texas Instruments has decided not to market their prototype IEEEโ1394 to ATA/ATAPI chip.
EIDE disks with EIDE to IEEE 1394 FireWire interfaces are available from LaCie and VST Technologies. VST Technologies has also shown FireWire RAID arrays with mirroring and striping, but not parity. All the disks are EIDE.
An interesting possibility might be to put eight EIDE drives in an inexpensive PC case with a 300 watt power supply. Then add four EIDE to FireWire interface chips to a circuit board with the same form factor as a PC motherboard. The color of the PC case can even be special ordered for Apple Macintosh users.
Andreas Bombe, Sebastien Rougeaux, and Emanuel Pirker are in the process of writing GNU Linux software drivers for IEEE 1394/FireWire devices.
Pieces appear to be converging. It may, in the not too distant future, be possible to directly connect Terabyte arrays of EIDE/ATA disks to one or more computers at up to 50 Megabytes/second per FireWire interface. Some computers come with FireWire on the motherboard. FireWire interfaces can also be added with cards such as OrangeMicro's HotLink FireWire PCI Board.
REFERENCES
1. Friedhelm Schmidt, The SCSI Bus and IDE Interface: Protocols, Applications, and Programming, Second edition, Addison-Wesley (1997) ISBN 0-201-17514-2. EIDE stands for Enhanced Integrated Drive Electronics.
2. http://www.storagetek.com/StorageTek/hardware/tape/9310/9310\_sp.html http://www.storagetek.com/StorageTek/hardware/tape/SD3/SD3\_sp.html
3. http://www.shopper.com http://www.gateway.com/spotshop/ Validation Number KM120 10% off
4. http://www.maxtor.com/diamondmax/40.html
5. http://www.storage.ibm.com/hardsoft/diskdrdl/desk/37gp34gxpdata.htm
6. http://www.seagate.com/cda/products/discsales/personal/family/0,1128,154,00.html
7. http://www.westerndigital.com/company/releases/990426.html http://www.westerndigital.com/products/drives/specs/wd273bas.html
8. http://www.western-digital.com/products/drives/specs/wd307aas.html
9. http://www.quantum.com/products/hdd/fireball\_lct10/fireball\_lct10\_specs.htm
10. http://www.maxtor.com/diamondmaxplus/40p.html
11. http://www.seagate.com/products/discsales/discselect/A1a1.html http://www.seagate.com/cda/products/discsales/enterprise/family/0,1130,43,00.html
12. http://www.quantum.com/products/hdd/atlas\_10k/atlas\_10k\_specs.htm http://www.quantum.com/products/hdd/atlas\_10kii/atlas\_10kii\_specs.htm
13. http://www.storage.ibm.com/hardsoft/diskdrdl/ultra/36xpdata.htm
14. http://www.storage.ibm.com/hardsoft/diskdrdl/ultra/72zxdata.htm
15. http://www.seagate.com/products/discsales/discselect/A1a1.html http://www.seagate.com/cda/products/discsales/enterprise/tech/0,1131,214,00.shtml
16. http://www.quantum.com/products/archive/bigfoot\_tx/bigfoot\_tx\_overview.htm
17. http://www.storage.ibm.com/hardsoft/diskdrdl/desk/1614data.htm http://www.storage.ibm.com/hardsoft/diskdrdl/prod/14gx16gppr.htm
18. http://www.promise.com/Products/idecards/u66.htm http://www.promise.com/Products/idecards/u66compat.htm http://www.promise.com/Latest/latedrivers.htm#linuxu66
19. http://www.storage.ibm.com/hardsoft/diskdrdl/prod/ds25gp22.htm
20. ProMAX, 16 Technology Dr., Irvine CA 92618 800-977-6629 http://www.promax.com/
21. http://www.firmtek.com Chi Kim Nguyen ckn@firmtek.com
22. David A. Patterson, Garth Gibson, and Randy H. Katz (UC-Berkeley), A Case for Redundant Arrays of Inexpensive Disks (RAID), Proceedings of the ACM Conference on Management of Data (SIGMOD), Chicago, Illinois, USA (June 1988) Pages 109-116, Sigmod Record 17 (1988) 109.
23. Miguel de Icaza, Ingo Molnar, and Gadi Oxman, The LINUX RAID-1,4,5 Code, 3rd Annual Linux Expo '97, Research Triangle Park, North Carolina, USA (April 1997); http://luthien.nuclecu.unam.mx/~miguel/raid http://linas.org/linux/Software-RAID/Software-RAID.html http://linas.org/linux/Software-RAID/Software-RAID-3.html
24. We use the Lucent Cajun P550 Gigabit Switch with 23 Gigabits per second of switching throughput capacity to connect our Fast Ethernet computers. Up to six cards may be installed in this switch. One option is a card with 48 full duplex 10/100Base-TX ports. http://public1.lucent.com/dns/products/p550.html
25. One might quadruple the number of ports of a high end switch like the Cajun P550 by adding a commodity switch to each of its ports. Several Ethernet switches with 5 full duplex 10/100Base-TX ports cost under $100. All feature a store-and-forward packet switching architecture to help reduce latency. http://www.addtron.com/ ADS-1005 $69 http://www.dlink.com/products/switches/dss5plus/ DSS-5+ $90 http://www.hawkingtech.com/ PN505ES $79 http://www.linksys.com/scripts/features.asp?part=EZXS55W EtherFast $90 http://netgear.baynetworks.com/products/fs105ds.shtml FS105 $83 http://smc.com/smc/pages\_html/switch.html EZNET-5SW/6305TX $78
26. http://www.redhat.com
27. IW-Q600 ATX Full Tower Case, 11 bays, 300 watt power supply - $77. Q600 case dimensions: 600mm high $`\times `$ 200mm wide $`\times `$ 432mm deep. IW-Q2000 ATX Full Tower Case, 11 bays, two 300 watt power supplies - $208. Q2000 case dimensions: 600mm high $`\times `$ 200mm wide $`\times `$ 476mm deep. "Available in different color for OEM customers." http://www.in-win.com/framecode/index.html http://www.pricewatch.com
28. Advanced Micro Devices and Pentium III CPUs can perform four single precision floating point adds or multiplies per clock cycle with their 3DNow! or Streaming SIMD Extensions units, respectively. Both 3DNow! and SSE are implementations of SIMD (Single Instruction, Multiple Data) processors. http://www.amd.com/products/cpg/athlon/index.html http://www.amd.com/products/cpg/k6iii/index.html http://www.amd.com/products/cpg/k623d/index.html http://www.intel.com/home/prodserv/pentiumiii/prodinfo.htm
29. For situations with more people than money, manually loaded tapes provide the way to store and move data with the lowest initial investment. The lowest media cost is given by 112 meter long 8mm tapes storing 5 Gigabytes uncompressed on an Exabyte Eliant 820 at 1 MB/s. The tapes cost 53 cents per Gigabyte at the Fermilab stockroom and the Eliant 820 tape drive costs $1300. Used Exabyte 8500 and 8505 tape drives are even cheaper on ebay.com. Using pairs of drives, with one running and the other waiting on deck with a tape ready to go, gives operators time to load tapes. http://www.exabyte.com/products/8mm/eliant/ http://www-stock.fnal.gov/stock/ http://www.exabyte.com/products/
30. IEEE Standard for a High Performance Serial Bus, ISBN 1-55937-583-3. http://standards.ieee.org/catalog/bus.html#1394-1995 http://standards.ieee.org/reading/ieee/std\_public/description /busarch/1394-1995\_desc.html
31. SYM13FW500 ATA/ATAPI to 1394 Native Bridge Data Manual Version 1.02 ftp://ftp.symbios.com/pub/symchips/1394 http://www.symbios.com/news/pr/80330ata.htm
32. http://www.oxsemi.com/products/products.html
33. http://www.lacie.com/scripts/harddrive/drive.cfm?which=30
34. VST Technologies, 125 Nagog Park, Acton, MA 01720 978-635-8282 http://www.vsttech.com/vst/products.nsf/pl\_firewire http://www.macnn.com/thereview/reviews/vst/fwhd.shtml http://www.elgato.com/products.html FireWire Disk Control 1.01
35. http://www.vsttech.com/vst/press.nsf/default 08/31/99 09/17/99 http://www.softraid.com
36. http://eclipt.uni-klu.ac.at/ieee1394/ http://www.kt.opensrc.org/kt19990722\_28.html#15
37. http://www.orangemicro.com/firewire.html
38. FNAL E769: C. Stoughton and D. Summers, Using Multiple RISC CPUs in Parallel to Study Charm Quarks, Comput. Phys. 6 (1992) 371; C. Gay and S. Bracker, IEEE NS-34 (1987) 870; S. Hansen et al., IEEE NS-34 (1987) 1003; G. Alves et al., Phys. Rev. Lett. 69 (1992) 3147; 77 (1996) 2388; 77 (1996) 2392; Phys. Rev. D56 (1997) 6003.
39. FNAL E791: S. Bracker, K. Gounder, K. Hendrix, and D. Summers, A Simple Multiprocessor Management System for Event-Parallel Computing, IEEE NS-43 (1996) 2457; S. Amato et al., Nucl. Instrum. Meth. A324 (1993) 535;
E. M. Aitala et al., Phys. Rev. Lett. 76 (1996) 364; 81 (1998) 44; hep-ex/9809026; hep-ex/9809029; hep-ex/9912003.
# THE ORIGIN OF $`\mathrm{Ly}\alpha `$ ABSORPTION SYSTEMS AT $`z>1`$—IMPLICATIONS FROM THE HUBBLE DEEP FIELD
## 1 INTRODUCTION
Comparison of galaxy and absorber redshifts along common lines of sight demonstrates that a significant and perhaps dominant fraction of low-redshift ($`z<1`$) $`\mathrm{Ly}\alpha `$-forest absorption systems arise in extended gaseous envelopes of galaxies (Lanzetta et al. 1995a; Chen et al. 1998). But it has not yet been possible to extend this comparison to higher redshifts, because normal surveys fail to identify galaxies at redshifts much beyond $`z=1`$. Nevertheless, determining the origin of high-redshift $`\mathrm{Ly}\alpha `$-forest absorption systems bears crucially on all efforts to apply the absorbers as a means to study galaxies in the very distant universe and to study how the extended gas of galaxies evolves with redshift. Previous arguments based on the apparent low metal content and lack of clustering of the absorbers suggested an intergalactic origin (Sargent et al. 1980), but these arguments are weakened by recent Keck measurements of the actual metal content and clustering of the absorbers (Songaila & Cowie 1996; Fernández-Soto et al. 1996).
To extend the comparison of galaxies and absorbers to higher redshifts, it is necessary to systematically identify normal galaxies at redshifts $`z>1`$. Over the past few years, it has become clear that high-redshift galaxies can be reliably identified by means of broad-band photometric techniques. For example, several groups have determined photometric redshifts of galaxies in the Hubble Deep Field (HDF) to redshifts as large as $`z\sim 6`$ (Lanzetta, Yahil, & Fernández-Soto 1996; Sawicki, Lin, & Yee 1997; Connolly et al. 1997; Lanzetta, Fernández-Soto, & Yahil 1998; Fernández-Soto, Lanzetta, & Yahil 1999). Spectroscopic redshifts of nearly 120 of these galaxies obtained using the Keck telescope (see Fernández-Soto et al. 1999 for a complete list of references) have demonstrated that the photometric redshifts are both reliable enough and accurate enough to establish galaxy surface density versus redshift to large redshifts (Lanzetta et al. 1998; Fernández-Soto et al. 1999).
Here we combine an empirical measure of the galaxy surface density versus redshift, obtained from the HDF, with an empirical measure of the gaseous extent of galaxies, obtained by Chen et al. (1998), to predict the number density of $`\mathrm{Ly}\alpha `$ absorption systems that originate in extended gaseous envelopes of galaxies versus redshift. Previous comparison of galaxies and $`\mathrm{Ly}\alpha `$ absorption systems at redshifts $`z<1`$ showed that (1) galaxies of all morphological types possess extended gaseous envelopes and (2) the gaseous extent of galaxies scales with galaxy $`B`$-band luminosity but does not depend sensitively on redshift (Chen et al. 1998). If these results apply to galaxies at all redshifts, then known galaxies of known gaseous extent must produce some fraction of $`\mathrm{Ly}\alpha `$ absorption systems at all redshifts.
On the basis of our analysis, we find that this fraction is significant. Specifically, considering $`\mathrm{Ly}\alpha `$ absorption systems of absorption equivalent width $`W\ge 0.32`$ Å, we find that galaxies can account for nearly all observed $`\mathrm{Ly}\alpha `$ absorption systems at $`z<2`$ and that galaxies of luminosity $`L_B\ge 0.05L_{B_{\ast}}`$ can account for approximately 50% of the observed $`\mathrm{Ly}\alpha `$ absorption systems at higher redshifts. We further argue that if the gaseous extent of galaxies does not decrease with increasing redshift, then known galaxies must produce at least as many $`\mathrm{Ly}\alpha `$ absorption systems as our predictions. We show that we can already explain 70% and perhaps all of the observed $`\mathrm{Ly}\alpha `$ absorption systems at $`z>2.0`$ after correcting for faint galaxies that are below the detection threshold of the HDF images.
We adopt a Hubble constant, $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and a deceleration parameter, $`q_0=0.5`$, throughout the paper, unless otherwise noted.
## 2 METHODS
To calculate the predicted number density of $`\mathrm{Ly}\alpha `$ absorption systems that originate in extended gaseous envelopes of galaxies versus redshift, we multiply the galaxy number density per unit comoving volume by the absorption gas cross section of galaxies, which is known to scale with properties of galaxies. Our previous comparison of galaxies and $`\mathrm{Ly}\alpha `$ absorption systems at $`z<1`$ showed that the amount of gas encountered along a line of sight depends on galaxy impact parameter $`\rho `$ and $`B`$-band luminosity $`L_B`$ but does not depend strongly on galaxy average surface brightness $`\mu `$, disk-to-bulge ratio $`D/B`$, or redshift $`z`$ (Chen et al. 1998). Namely, galaxies of all morphological types possess extended gaseous envelopes, and the absorption gas cross section scales with galaxy $`B`$-band luminosity but does not evolve significantly with redshift. The left panel of Figure 1 shows a statistically significant correlation between galaxy $`B`$-band magnitude and the residuals of the $`W`$ versus $`\rho `$ anti-correlation. In contrast, the right panel of Figure 1 shows no correlation at all between galaxy redshift and the residuals of the $`W`$ versus $`\rho `$ anti-correlation after accounting for galaxy $`B`$-band luminosity.
The predicted number density of $`\mathrm{Ly}\alpha `$ absorption systems $`n(z)`$ originating in extended gaseous envelopes of galaxies can now be written
$$n(z)=\frac{c}{H_0}(1+z)(1+2q_0z)^{-1/2}\int _{L_{B_{\mathrm{min}}}}^{\infty }dL_B\,\Phi (L_B,z)\,\pi R^2(L_B),$$
(1)
where $`c`$ is the speed of light, $`\mathrm{\Phi }(L_B,z)`$ is the galaxy luminosity function, and $`R`$ is the gaseous extent. The minimum galaxy $`B`$-band luminosity $`L_{B_{\mathrm{min}}}`$ is determined by the detection threshold of either the galaxy redshift survey or the $`\mathrm{Ly}\alpha `$ absorbing galaxy survey. It is taken to be the brighter of the magnitude limits at which the galaxy luminosity function or the scaling relation was determined. Supplementing results of Chen et al. (1998) with new measurements (Chen et al., in preparation), we find that galaxy gaseous radius $`R`$ scales with galaxy $`B`$-band luminosity $`L_B`$ as
$$\frac{R}{R_{\ast }}=\left(\frac{L_B}{L_{B_{\ast }}}\right)^t,$$
(2)
with
$$t=0.39\pm 0.09,$$
(3)
and with $`R_{\ast }=172\pm 27h^{-1}\mathrm{kpc}`$ at $`W=0.32`$ Å for galaxies of luminosity $`L_B>0.03L_{B_{\ast }}`$.
Given a complete galaxy sample, we can evaluate equation (1) by establishing an empirical galaxy luminosity function without adopting any particular functional form. First, we write the galaxy luminosity function as a discrete sum of $`\delta `$ functions,
$$\Phi (L_B,z)=\left(\frac{c}{H_0}\right)^{-1}(1+z)^{-1}(1+2q_0z)^{1/2}\frac{1}{\Delta z\,\Omega \,D_A(z)^2}\sum _i\delta (L_B-L_{B_i}),$$
(4)
where $`\mathrm{\Omega }`$ is the angular survey area, $`D_A`$ is the angular diameter distance, and $`L_{B_i}`$ is the $`B`$-band luminosity of galaxy $`i`$. Next, we substitute equation (4) into equation (1) and rearrange terms to yield
$$n(z)=\frac{1}{\Omega }\frac{1}{(z_2-z_1)}\sum _i\frac{\pi R^2(L_{B_i})}{D_A^2(z_i)},$$
(5)
where $`(z_1,z_2)`$ marks the boundary of each redshift bin. Equation (5) indicates that the number density of $`\mathrm{Ly}\alpha `$ absorption systems originating in the extended gaseous envelopes of galaxies is equal to the product of the galaxy surface density and the average of luminosity weighted gas cross sections.
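As an illustration of how equation (5) is evaluated in practice, the following Python fragment sketches the estimator for a single redshift bin, assuming $`q_0=0.5`$ (for which the angular diameter distance takes the simple closed form $`D_A=(2c/H_0)[1-(1+z)^{-1/2}]/(1+z)`$) and the scaling relation of equations (2) and (3); the three catalogue entries in the usage line are fictitious.

```python
import math

C_KMS = 2.998e5        # speed of light, km/s
H0 = 100.0             # h = 1 units, km/s/Mpc
R_STAR_KPC = 172.0     # h^-1 kpc, gaseous radius of an L* galaxy
T_SLOPE = 0.39         # R/R* = (L_B/L_B*)^t

def d_ang_mpc(z):
    # angular diameter distance for q0 = 0.5 (Einstein-de Sitter)
    return 2.0 * C_KMS / H0 * (1.0 - 1.0 / math.sqrt(1.0 + z)) / (1.0 + z)

def gas_radius_mpc(l_over_lstar):
    return R_STAR_KPC * l_over_lstar ** T_SLOPE / 1000.0

def dn_dz(galaxies, z1, z2, omega_sr):
    """Equation (5): the predicted absorber number density in one bin.

    galaxies : (z_i, L_i/L*) pairs for the catalogue galaxies in the bin
    omega_sr : survey solid angle in steradians
    """
    total = sum(math.pi * gas_radius_mpc(l) ** 2 / d_ang_mpc(z) ** 2
                for z, l in galaxies)
    return total / (omega_sr * (z2 - z1))

# fictitious three-galaxy bin; 3.3e-7 sr is the 3.93 arcmin^2 HDF zone
print(dn_dz([(2.1, 1.0), (2.3, 0.3), (2.4, 0.1)], 2.0, 2.5, 3.3e-7))
```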
## 3 GALAXY SAMPLE
Here we adopt a galaxy sample from the Hubble Deep Field (HDF) galaxy catalog published by Fernández-Soto et al. (1999), which contains coordinates, optical and near-infrared photometry, and photometric redshift measurements of 1067 galaxies. This galaxy catalog is unique for studying galaxy statistics at high redshifts, because it is sensitive to objects of surface brightness down to $`26.1\mathrm{mag}\mathrm{arcsec}^{-2}`$ and is complete to $`AB(8140)=28.0`$ within the central $`3.93\mathrm{arcmin}^2`$ area of the HDF image (Zone 1 of Fernández-Soto et al. 1999). Photometric redshifts ranging through $`z\sim 6`$ were measured from broad-band spectral energy distributions established from the optical (Williams et al. 1996) and infrared (Dickinson et al. 1999, in preparation) images. Spectroscopic redshifts of nearly 120 of these galaxies have been obtained using the Keck telescope (see Fernández-Soto et al. 1999 for a complete list of references). Comparison of the photometric and spectroscopic redshifts shows that the photometric redshifts are accurate to within an RMS relative uncertainty of $`\mathrm{\Delta }z/(1+z)<0.1`$ at all redshifts $`z<6`$ (Lanzetta et al. 1998; Fernández-Soto et al. 1999), except that there are no spectroscopic redshifts available for comparison at redshifts $`z=1.5`$ to $`z=2.0`$. Reliable galaxy statistics as a function of redshift can therefore be measured on the basis of the HDF galaxy catalog.
## 4 ANALYSIS
The goal of the analysis is to compare the predicted number density of $`\mathrm{Ly}\alpha `$ absorption systems originating in extended gaseous envelopes of galaxies (using the HDF galaxy catalog) with the observed number density of $`\mathrm{Ly}\alpha `$ absorption systems.
Here we evaluate equation (5) for galaxies observed in the HDF, given the photometric redshift measurements. Galaxy $`B`$-band luminosity is calculated from the apparent F814W magnitude corrected for luminosity distance, color $`k`$ correction (between rest-frame $`B`$ band and observed-frame F814W band, calculated based on the spectral type determined from the photometric redshift techniques), and bandpass $`k`$ correction. Errors in the predicted number density are estimated using a standard bootstrap method, which combines the sampling error with the uncertainties of the photometric redshift measurements and of the scaling relation. Lanzetta et al. (1998) have shown that, to model the uncertainty of photometric redshift measurements correctly, both photometric error and cosmic variance with respect to the spectral templates must be accounted for. The first can be simulated by perturbing the galaxy photometry in the different bands within the photometric errors. The second can be characterized by the RMS dispersion of photometric and spectroscopic redshift measurements, which is 0.08 at $`z<2`$ and 0.32 at $`z>2`$ (Lanzetta et al. 1998). The uncertainty of the photometric redshift measurements is then calculated by adding the two in quadrature.
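This error budget can be sketched as follows; the resampling routine, the Gaussian form of the redshift perturbation, and the estimator interface are our illustrative assumptions, with only the RMS values 0.08 and 0.32 taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def z_sigma(z, photo_err):
    # photometric error and template ("cosmic") variance combined in
    # quadrature; RMS values 0.08 (z<2) and 0.32 (z>2) from the text
    return float(np.hypot(photo_err, 0.08 if z < 2.0 else 0.32))

def bootstrap_error(zs, photo_errs, estimator, n_boot=500):
    """RMS spread of `estimator` over bootstrap resamplings of the
    catalogue, with each redshift perturbed within its uncertainty."""
    zs, photo_errs = np.asarray(zs), np.asarray(photo_errs)
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(zs), len(zs))
        perturbed = [max(0.0, rng.normal(zs[i], z_sigma(zs[i], photo_errs[i])))
                     for i in idx]
        vals.append(estimator(perturbed))
    return float(np.std(vals))
```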
The results are shown in Table 1 and Figure 2 in comparison with observations. In Figure 2, filled circles represent the observations and open circles represent our predictions. The observations at $`z<1.5`$ are taken from sample 2 of Weymann et al. (1998), which we normalize to refer to $`\mathrm{Ly}\alpha `$ absorption lines of $`W\ge 0.32`$ Å. The observations at $`z>1.6`$ are taken from sample 9 of Bechtold (1994). (Note that the data from Weymann et al. included all the $`\mathrm{Ly}\alpha `$ absorption lines with or without associated metal absorption lines, while the data from Bechtold included only the $`\mathrm{Ly}\alpha `$ absorption lines without associated metal absorption lines. However, the correction for including metal-line associated systems in the high-redshift sample is expected to be approximately a few percent (Bechtold 1994; Frye et al. 1993). In comparison with the large uncertainty of the number density of high-redshift $`\mathrm{Ly}\alpha `$ absorption systems, it is clear that the absence of metal-line associated systems in the high-redshift sample does not change the redshift distribution of $`\mathrm{Ly}\alpha `$ absorption systems by a noticeable amount.) We select a bin size, $`\mathrm{\Delta }z=0.5`$, when calculating the predicted values. Experiments with different bin sizes show that the redshift distribution of the predicted number density of $`\mathrm{Ly}\alpha `$ absorption systems is insensitive to the selected bin size. We also repeat the calculation using a deceleration parameter $`q_0=0.0`$; it turns out that the predicted number density of $`\mathrm{Ly}\alpha `$ absorption systems does not depend sensitively on the adopted deceleration parameter either.
Our predictions have so far been limited to including galaxies of luminosity $`L_B>L_{B_{\mathrm{min}}}`$. To estimate the contribution of faint galaxies to the predicted gas cross section, we calculate the number density of $`\mathrm{Ly}\alpha `$ absorption systems originating in galaxies of luminosity $`L_B<L_{B_{\mathrm{min}}}`$ by adopting a Schechter luminosity function. We evaluate equation (1) by adopting a faint-end slope obtained by Ellis et al. (1996), which is $`\alpha =-1.41_{-0.07}^{+0.12}`$, and by extrapolating the scaling relation (equations 2 and 3) to obtain the extended gaseous radii for galaxies of luminosity $`0\le L_B<0.03L_{B_{\ast }}`$. Increments of the predicted number density versus redshift due to the inclusion of faint galaxies are shown in Figure 2 as well (crosses with dashed horizontal bars).
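For concreteness, the faint-galaxy correction amounts to comparing the luminosity-weighted cross-section integral $`\int dL\,\Phi (L)\,\pi R^2(L)`$ with and without the faint cutoff. A minimal numerical sketch follows, assuming a Schechter form with the slope quoted above and the $`R\propto L^{0.39}`$ scaling (normalizations cancel in the ratio):

```python
import math

ALPHA = -1.41   # Schechter faint-end slope (Ellis et al. 1996)
T = 0.39        # R ~ L^T, so the cross section scales as L^(2T)

def sigma_integral(x_min, x_max=30.0, n=200000):
    # integral over x = L/L* of x^ALPHA e^-x * x^(2T) dx on a log grid;
    # it converges as x_min -> 0 because ALPHA + 2T = -0.63 > -1
    x_min = max(x_min, 1e-8)
    lo, hi = math.log(x_min), math.log(x_max)
    h = (hi - lo) / n
    s = 0.0
    for k in range(n):
        x = math.exp(lo + (k + 0.5) * h)
        s += x ** (ALPHA + 2 * T) * math.exp(-x) * x * h  # dx = x dlnx
    return s

boost = sigma_integral(0.0) / sigma_integral(0.05)
print("cross-section boost from galaxies below 0.05 L*: %.2f" % boost)
```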
## 5 DISCUSSION
The HDF images have provided us with a unique chance to relate statistical properties of high-redshift galaxies to statistical properties of $`\mathrm{Ly}\alpha `$ absorption systems, thereby studying the origin of $`\mathrm{Ly}\alpha `$ absorption systems at high redshifts. Combining an empirical measure of galaxy surface density versus redshift with an empirical measure of gaseous extent of galaxies, we have predicted the number density of $`\mathrm{Ly}\alpha `$ absorption systems that originate in extended gaseous envelopes of galaxies versus redshift.
For $`\mathrm{Ly}\alpha `$ absorption systems of absorption equivalent width $`W\ge 0.32`$ Å, comparison of the predicted and observed number densities of $`\mathrm{Ly}\alpha `$ absorption systems shows that (1) known galaxies with known gas cross sections can account for all the observed $`\mathrm{Ly}\alpha `$ absorption systems to within measurement errors at redshifts $`0<z<1`$, (2) known galaxies of luminosity $`L_B\ge 0.03L_{B_{\ast }}`$ with unevolved gas cross sections can account for all $`\mathrm{Ly}\alpha `$ absorption systems at redshifts $`1<z<2`$, and (3) known galaxies of luminosity $`L_B\ge 0.05L_{B_{\ast }}`$ with unevolved gas cross sections can account for approximately 50% of observed $`\mathrm{Ly}\alpha `$ absorption systems at $`z>2`$. Apparently, bright galaxies alone can account for a significant portion of the observed $`\mathrm{Ly}\alpha `$ absorption systems at all redshifts. After correcting for faint galaxies that are below the detection threshold of the HDF image, we can already explain 70% and perhaps all of the observed $`\mathrm{Ly}\alpha `$ absorption systems at $`z>2.0`$ to within errors. Here we discuss three factors that could affect this result.
First, we consider evolution of the galaxy luminosity function. There is some evidence that the faint end of the galaxy luminosity function evolves significantly with redshift (e.g. Ellis 1997, and references therein); namely, faint galaxies may be more numerous at higher redshifts. To address this issue, we evaluate equation (1) with $`L_{B_{\mathrm{min}}}\to 0`$ to calculate the relative change in the predicted number density, given a different faint-end slope. It turns out that the predicted number density of $`\mathrm{Ly}\alpha `$ absorption systems increases by as much as $`\sim 0.3`$ dex if we vary the faint-end slope from $`\alpha =-1.4`$ to $`\alpha =-1.6`$. Apparently, a steeper faint-end slope brings the predicted values closer to the observed ones. We conclude that known galaxies must produce at least as many $`\mathrm{Ly}\alpha `$ absorption systems as the predictions.
Next, we consider evolution of neutral gas surrounding galaxies. In the right panel of Figure 1, we show that the residuals of the $`W`$ versus $`\rho `$ anti-correlation after accounting for the scaling of galaxy $`B`$-band luminosity $`L_B`$ do not correlate with galaxy redshift. Although there is no direct observational evidence to support the extension of the redshift independence of the scaling relation to redshifts beyond $`z=1`$, the gaseous extent of galaxies is unlikely to decrease with increasing redshift in these early epochs. Theoretically, it is believed that galaxies were formed through accretion of cooled gas over time, indicating a larger gaseous extent of galaxies at higher redshifts. Observationally, the mass density of neutral gas measured from damped $`\mathrm{Ly}\alpha `$ absorption systems increases significantly with increasing redshift at redshifts between $`z=1.6`$ and $`z=3.5`$ (Lanzetta, Wolfe, & Turnshek 1995; Storrie-Lombardi, McMahon, & Irwin 1996), supporting the view that galaxies may possess more neutral gas, and therefore no smaller gaseous extent, at higher redshifts. We again conclude that known galaxies must produce at least as many $`\mathrm{Ly}\alpha `$ absorption systems as the predictions.
Finally, we consider galaxy clustering effects. Due to the limited resolution of spectroscopic observations, $`\mathrm{Ly}\alpha `$ absorption lines with small velocity separation are sometimes blended together and counted as one absorption system. Lanzetta, Webb, & Barcons (1996) first reported an absorption system which may arise in a group or cluster of galaxies. It was later confirmed by Ortiz-Gíl et al. (1999), who identified eight absorption features in this system based on a spectrum of higher spectral resolution. Their analysis suggested that the degree of clustering of $`\mathrm{Ly}\alpha `$ absorption systems may be underestimated on velocity scales of several hundred kilometers per second (see also Fernández-Soto et al. 1996). Similarly, the number density of $`\mathrm{Ly}\alpha `$ absorption lines derived from a known galaxy population would be in excess of the observed number density of $`\mathrm{Ly}\alpha `$ absorption lines, because galaxies are strongly clustered and the predicted number density does not suffer from the line blending effect. As a result, the fraction of $`\mathrm{Ly}\alpha `$ absorption systems originating in extended gaseous envelopes of galaxies may be overestimated.
To estimate the amount of excess at different redshifts, we calculate the expected number of neighbouring galaxies of luminosity $`L_B>L_{B_{\mathrm{min}}}`$, $`N_{\mathrm{neighbours}}`$, that are seen at impact parameters $`\rho \le 200`$ kpc along a line of sight with a velocity difference of $`\mathrm{\Delta }v`$ from an absorption redshift. We adopt a two-point correlation function measured by Magliocchetti & Maddox (1999) and Arnouts et al. (1999) for the HDF galaxies at redshifts $`0\le z\le 4.8`$, which is characterized by
$$\xi (r,z)=(r/r_0)^{-\gamma }(1+z)^{-(3+\epsilon )}$$
(6)
with $`\gamma =3+\epsilon =1.8`$ and $`r_0\simeq 1.7h^{-1}`$ Mpc, and evaluate the integral of the two-point correlation function over a comoving volume spanned by $`\mathrm{\Delta }v`$ in depth and 200 kpc in radius. The velocity span, $`\mathrm{\Delta }v`$, is taken to be the spectral resolution at which the observations were carried out. It is $`250\mathrm{km}\mathrm{s}^{-1}`$ for the sample from Weymann et al. (i.e. absorption systems at $`z<1.5`$) and $`75\mathrm{km}\mathrm{s}^{-1}`$ for the sample from Bechtold (i.e. absorption systems at $`z>1.5`$). We show in Table 2 that the estimated excess in our predictions (represented by the number of neighbouring galaxies) can be at most a factor of four at $`z<2`$, but is negligible at $`z>2`$. Therefore, we conclude that the fraction of $`\mathrm{Ly}\alpha `$ absorption systems originating in extended gaseous envelopes of galaxies remains significant even after correcting for galaxy clustering.
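A rough numerical version of this neighbour count is sketched below; the cylindrical integration volume, the Einstein-de Sitter $`H(z)=H_0(1+z)^{3/2}`$, and the illustrative galaxy density are our simplifying assumptions and are not meant to reproduce Table 2 exactly.

```python
import math

R0, GAMMA = 1.7, 1.8        # h^-1 Mpc; epsilon = gamma - 3 (comoving clustering)
H0, EPS = 100.0, GAMMA - 3.0

def xi(r, z):
    return (r / R0) ** (-GAMMA) * (1.0 + z) ** (-(3.0 + EPS))

def n_neighbours(n_gal, z, dv_kms, rho_max=0.2, n_rho=200, n_l=200):
    # N = n_gal * integral of xi over a cylinder of radius rho_max
    # (200 kpc) and half-depth (dv/2)/H(z); midpoint rule in (rho, l)
    l_max = 0.5 * dv_kms / (H0 * (1.0 + z) ** 1.5)
    d_rho, d_l = rho_max / n_rho, l_max / n_l
    total = 0.0
    for i in range(n_rho):
        rho = (i + 0.5) * d_rho
        for j in range(n_l):
            r = math.hypot(rho, (j + 0.5) * d_l)
            total += xi(r, z) * 2.0 * math.pi * rho * d_rho * 2.0 * d_l
    return n_gal * total

print(n_neighbours(0.05, z=0.5, dv_kms=250.0))   # illustrative density, Mpc^-3
```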
In summary, we present our first attempt at relating statistical properties of galaxies to statistical properties of $`\mathrm{Ly}\alpha `$ absorption systems at high redshifts based on the HDF galaxy catalog. Combining an empirical measure of galaxy surface density versus redshift with an empirical measure of gaseous extent of galaxies, we have predicted the number density of $`\mathrm{Ly}\alpha `$ absorption systems that originate in extended gaseous envelopes of galaxies versus redshift. We show that approximately 50% and as much as 100% of observed $`\mathrm{Ly}\alpha `$ absorption systems of $`W\ge 0.32`$ Å can be explained by extended gaseous envelopes of galaxies. The result remains valid after taking into account possible evolution of faint galaxies and of the extended gas around galaxies, as well as galaxy clustering effects. Therefore, we conclude that known galaxies of known gaseous extent must produce a significant fraction and perhaps all of $`\mathrm{Ly}\alpha `$ absorption systems over a large redshift range.
###### Acknowledgements.
The authors thank John Webb for helpful discussions. HWC and KML were supported by NASA grant NAGW-4422 and NSF grant AST-9624216. AF was supported by a grant from the Australian Research Council.
## 1 Introduction
The mass function of local ($`z\lesssim 0.1`$) galaxy clusters has been used as a stringent constraint for cosmological models. Independent analyses have shown that $`\sigma _8\mathrm{\Omega }_m^{\gamma (\mathrm{\Omega }_m)}\simeq 0.5`$–0.6, where $`\mathrm{\Omega }_m`$ is the density parameter, $`\sigma _8`$ the r.m.s. fluctuation amplitude within a sphere of $`8h^{-1}\mathrm{Mpc}`$ ($`h=H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$) radius and $`\gamma (\mathrm{\Omega }_m)\simeq 0.4`$–0.6. The increasing availability of $`X`$-ray temperatures for distant ($`z\gtrsim 0.3`$) clusters is providing a handle to estimate the density parameter which best reproduces the evolution of the cluster abundance (see also Henry, this volume, for a review). A limitation of this approach comes from the small size of the current samples.
An alternative way to trace the evolution of the cluster abundance is to rely on the luminosity and redshift distribution of $`X`$-ray flux-limited cluster samples. The advantage of this approach lies in the availability of large samples, with well understood selection functions. As a limitation, however, one has to contend with the uncertain relation between cluster masses and $`X`$-ray luminosities. The ROSAT Deep Cluster Survey (RDCS) provides a flux-limited complete sample of clusters identified in the ROSAT PSPC archive and including $`\gtrsim 100`$ spectroscopically confirmed systems. In the following we will outline the main results of a comparison between the RDCS sample and the predictions of cosmological models. The analysis of RDCS for constraining the evolution of the $`X`$-ray luminosity function is contained in a separate paper (Rosati et al., this volume).
## 2 $`X`$-ray cluster bias: from luminosity to mass
The Press-Schechter approach is used in our analysis, as it provides an accurate mass function in the range of masses probed by the RDCS. The conversion from masses to X-ray luminosities, which is required in the analysis of any flux-limited sample, is implemented as follows: (a) convert mass into temperature by assuming virialization, hydrostatic equilibrium and an isothermal gas distribution; (b) convert temperature into bolometric luminosity according to $`L_{bol}\propto T^\alpha \left(1+z\right)^A`$; (c) compute the bolometric correction to the 0.5-2.0 keV band.
The critical step is the choice of the $`L_{bol}`$-$`T_X`$ relation. Low-redshift data for $`T\gtrsim 3`$ keV indicate that $`\alpha \simeq 2.7`$–3.5, depending on the sample and the data analysis technique, with a reduction of the scatter after accounting for the effect of cooling flows in central cluster regions. At lower temperatures, evidence has been found for a steepening of the $`L_{bol}`$-$`T_X`$ relation below 1 keV. As for the evolution of the $`L_{bol}`$-$`T_X`$ relation, existing data out to $`z\simeq 0.4`$ and, possibly, out to $`z\simeq 0.8`$ are consistent with no evolution (i.e., $`A\simeq 0`$). Instead of assuming a unique mass-luminosity conversion, in the following we will show how the final constraints on cosmological parameters change as the $`L_{bol}`$-$`T_X`$ and $`M`$-$`T_X`$ relations are varied.
## 3 Analysis and results
The RDCS subsample which we will use in the following analysis has a flux limit of $`S_{lim}=3.5\times 10^{-14}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ and contains 81 clusters with measured redshifts out to $`z=0.85`$ over a 33 sq. deg. area. In order to fully exploit the information provided by the RDCS, we resort to a maximum-likelihood approach, in which model predictions are compared to the RDCS cluster distribution on the $`(L,z)`$ plane. To this purpose, let $`\varphi (L,z)`$ be the Press-Schechter based luminosity function, as predicted by a given model, so that $`\varphi (L,z)\left(dV/dz\right)dzdL`$ is the expected number of clusters in the comoving volume element $`\left(dV/dz\right)dz`$ and in the luminosity interval $`dL`$. Therefore, the expected number of clusters in RDCS lying in the $`dzdL`$ element of the $`(L,z)`$ plane is $`\lambda (z,L)dzdL=\rho (z,L)f_{sky}\left[S(z,L)\right]\left(dV/dz\right)dzdL`$. Here $`f_{sky}`$ is the flux-dependent RDCS sky coverage.
The likelihood function $`\mathcal{L}`$ is defined as the product of the probabilities of observing exactly one cluster in $`dzdL`$ at each of the $`(z_i,L_i)`$ positions occupied by the RDCS clusters, and of the probabilities of observing zero clusters in all the other differential elements of the $`(z,L)`$ plane which are accessible to RDCS. Assuming Poisson statistics for such probabilities and defining $`S=-2\mathrm{ln}\mathcal{L}`$, it is $`S=-2\sum _{i=1}^{N_{occ}}\mathrm{ln}\left[\rho (z_i,L_i)\right]+2\int dz\int dL\,\lambda (z,L)`$, where the sum runs over the occupied elements of the $`(z,L)`$ plane. Model predictions are also convolved with statistical errors on measured fluxes, as well as with uncertainties in the luminosity-mass relation associated to a $`30\%`$ scatter in the $`L_{bol}`$-$`T_X`$ relation and to a $`20\%`$ uncertainty in the mass-temperature conversion. Best estimates of the model parameters are obtained by minimizing $`S`$.
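For illustration, the unbinned Poisson statistic $`S`$ can be coded in a few lines; the toy rate function and the rectangular integration grid below are our assumptions, standing in for the full Press-Schechter prediction folded with the RDCS sky coverage.

```python
import math

def statistic_S(observed, lam, z_grid, l_grid):
    """S = -2 ln(likelihood) for an unbinned Poisson process on the
    (z, L) plane: -2 sum_i ln lam(z_i, L_i) + 2 * integral of lam.
    The grids are assumed uniformly spaced."""
    s = -2.0 * sum(math.log(lam(z, l)) for z, l in observed)
    dz, dl = z_grid[1] - z_grid[0], l_grid[1] - l_grid[0]
    s += 2.0 * sum(lam(z, l) for z in z_grid for l in l_grid) * dz * dl
    return s

# toy rate standing in for the model prediction times sky coverage
lam = lambda z, l: 50.0 * math.exp(-2.0 * z - 3.0 * l)
zs = [0.05 + 0.01 * k for k in range(120)]
ls = [0.05 + 0.01 * k for k in range(120)]
print(statistic_S([(0.3, 0.2), (0.6, 0.4), (0.85, 0.3)], lam, zs, ls))
```

In practice one would minimize this statistic over the model parameters, which is how the constraints discussed next are obtained.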
In Figure 1 we show the resulting constraints on the $`\sigma _8`$-$`\mathrm{\Omega }_m`$ plane for different values of the shape parameter $`\mathrm{\Gamma }`$, based on assuming $`\alpha =3.5`$ and $`A=0`$ for the $`L_{bol}`$-$`T_X`$ relation. It is clear that low-density models are always preferred, quite independent of $`\mathrm{\Gamma }`$. We find $`\mathrm{\Omega }_m=0.35_{-0.25}^{+0.35}`$ and $`\sigma _8=0.76_{-0.14}^{+0.38}`$ ($`\mathrm{\Omega }_m=0.42_{-0.27}^{+0.35}`$ and $`\sigma _8=0.68_{-0.12}^{+0.21}`$) for flat (open) models, where uncertainties correspond to the $`3\sigma `$ confidence level for three significant fitting parameters. No significant constraints are instead found for $`\mathrm{\Gamma }`$. In order to verify under which circumstances a critical-density model may still be viable, we show in Figure 2 the effect of changing the parameters of the $`L_{bol}`$-$`T_X`$ relation. Although best-fitting values of $`\mathrm{\Omega }_m`$ and $`\sigma _8`$ move somewhat in the parameter space, neither a rather strong evolution nor a quite steep profile for the $`L_{bol}`$-$`T_X`$ relation can accommodate a critical-density Universe: an $`\mathrm{\Omega }_m=1`$ Universe is always a $`>3\sigma `$ event, even allowing for values of the $`A`$ and $`\alpha `$ parameters which are strongly disfavored by present data.
Based on these results, we point out that deep flux-limited $`X`$-ray cluster samples, like RDCS, which cover a large redshift baseline ($`0.1\lesssim z\lesssim 1.2`$) and include a fairly large number of clusters ($`\gtrsim 100`$), do indeed place significant constraints on cosmological models. To this aim, some knowledge of the $`L_{bol}`$-$`T_X`$ evolution is needed from a (not necessarily complete) sample of distant clusters out to $`z\simeq 1`$.
# Magnetic history dependence of metastable states in systems with dipolar interactions
## I Introduction
The magnetic and transport properties of thin films and nanostructured materials have been the subject of intense research during at least the past two decades because of their interest for magnetic recording and technological applications. Among the interesting phenomena observed in these materials are the reorientation transitions of the magnetization with increasing temperature or film thickness, the giant magnetoresistance effect, and the wide variety of magnetic patterns that can be stabilized depending on the interplay between the perpendicular induced surface anisotropy, the exchange interaction and the long-range dipolar forces between the microscopic entities.
The development of different microscopy techniques like MFM, AFM, LEM or SEMPA has helped experimentalists to understand the interplay between the transport properties and the magnetic structure of the materials. Moreover, these techniques have shown that the magnetic structures formed in thin films are strongly influenced by the magnetic history of the sample. Thus, the observed patterns may display either out-of-plane ordering with labyrinthian striped domains or bubble-like patterns, as well as in-plane vortex-like structures.
Most of the experimental samples being studied at present are epitaxial magnetic thin films of thickness ranging from $`10`$ to $`500`$ nm or granular alloy films from $`200`$ nm to $`1\mu `$m thickness. Current experimental techniques allow control of the ferromagnetic content of granular alloys deposited on nonmagnetic metallic matrices and therefore allow one to vary the range of dipolar and exchange interactions between the grains. It seems clear that these experimental systems should be efficiently modeled by a two-dimensional lattice of spins coupled by exchange interaction and shape anisotropy perpendicular to the plane of the sample, and that the consideration of long-range forces between the spins, mainly of dipolar origin, is essential to understand the magnetic properties of these systems. The finite thickness of the samples can be taken into account by decreasing the value of the effective local anisotropy constant.
From the theoretical point of view the current understanding of dipolar spin systems can be summarized as follows. It is well established that the ground state of a pure planar dipolar system (no anisotropy and no exchange interaction) is a continuously degenerate manifold of antiferromagnetic (AF) states (checkerboard phase). The introduction of exchange interactions ($`J>0`$) between the spins establishes a competition between the short-range ferromagnetic (FM) order and the long-range dipolar interaction that favours AF order. Contrary to what one would expect, increasing $`J`$ does not serve to stabilize a FM ground state but instead results in the appearance of striped phases of increasing width that do not disappear at high $`J`$. A finite perpendicular anisotropy $`K`$ favours the formation of out-of-plane configurations against the in-plane configurations induced by the dipolar interaction. Monte Carlo simulations as well as theoretical analysis have shown that there is a reorientation transition from out- to in-plane order or from in- to out-of-plane order depending on the ratio of $`K`$ to $`J`$ as the temperature is increased. In the above-mentioned works attention focused mainly on the determination of phase diagrams, but apparently an accurate description of the detailed structure of the ground state has not been reported.
In this work we present the results of extensive Monte Carlo simulations of a model of a thin film with the aim of explaining the variety of magnetic behaviours seen in the above-mentioned experimental observations. We start with the description of the model Hamiltonian in Sec. II; in Sec. III the phase diagram and ground state configurations of the model are presented, demonstrating that it qualitatively reproduces the patterns observed in experiments. In Sec. IV we present the results of two simulations that show the effect of two different magnetic histories on the magnetic order of the system, and we conclude in Sec. V.
## II Numerical Model and Hamiltonian
Our general model of a thin film consists of a two-dimensional square lattice (lattice spacing $`a`$ and linear size $`N`$) of continuous Heisenberg spins $`\mathbf{S}_i`$ with magnetic moment $`\mu `$ and uniaxial anisotropy $`K`$ perpendicular to the lattice plane, described by the Hamiltonian
$$\mathcal{H}=-\overline{J}\mathcal{H}_{exch}-\overline{K}\mathcal{H}_{anis}+\mathcal{H}_{dip}$$
(1)
where $`\overline{J}`$, $`\overline{K}`$, and $`\mathcal{H}`$ are given in units of
$$g\equiv \frac{\mu ^2}{a^3}$$
(2)
with $`a`$ the lattice spacing. The spin $`\mathbf{S}_i`$ may represent either an atomic spin or the total spin of a grain.
The three terms in the Hamiltonian correspond to short-range nearest neighbour exchange interaction (direct between atomic spins or indirect through the matrix), uniaxial anisotropy energy perpendicular to the lattice plane, and long-range dipolar interaction
$`\mathcal{H}_{exch}={\displaystyle \underset{n.n.}{\sum }}(\mathbf{S}_i\cdot \mathbf{S}_j)`$ (3)
$`\mathcal{H}_{anis}={\displaystyle \underset{n=1}{\overset{N^2}{\sum }}}(S_n^z)^2`$ (4)
$`\mathcal{H}_{dip}={\displaystyle \underset{n\ne m}{\overset{N^2}{\sum }}}{\displaystyle \underset{\alpha ,\beta =1}{\overset{3}{\sum }}}S_n^\alpha W_{nm}^{(\alpha \beta )}S_m^\beta .`$ (5)
In the last term we have defined the following set of dipolar interaction matrices
$$W_{nm}^{(\alpha \beta )}=\frac{1}{r_{nm}^3}\left(\delta _{\alpha \beta }-\frac{3\delta _{\alpha \gamma }\delta _{\beta \eta }r_{nm}^\gamma r_{nm}^\eta }{r_{nm}^2}\right),$$
(6)
with $`r_{nm}`$ a vector connecting spins at sites $`n`$ and $`m`$. The matrices $`W_{nm}^{(\alpha \beta )}`$ depend only on the lattice geometry and boundary conditions and not on the particular spin configuration. In this expression the first term favours antiferromagnetic long-range order while the second one introduces an effective easy-plane anisotropy.
Thus, the properties of the model depend only on two parameters: $`\overline{J}`$, which accounts for the competition between the ferromagnetic ($`J>0`$) interaction and the antiferromagnetic order induced by $`g`$, and $`\overline{K}`$, which accounts for the competition between the out-of-plane alignment favoured by $`K`$ and the in-plane order induced by $`g`$.
This model reduces to the Ising model in the limit $`K=+\infty `$ when the out-of-plane components are restricted to $`S_i^z=\pm 1`$, and to the planar model when $`K=-\infty `$ and the spins are forced to lie in the lattice plane. Both cases depend only on one parameter $`J/g`$, the ratio of exchange to dipolar energies, and are described by the same formal Hamiltonian $`\mathcal{H}=-\overline{J}\mathcal{H}_{exch}+\mathcal{H}_{dip}`$. All the simulations have been performed on a system of size $`50\times 50`$ with periodic boundary conditions.
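To make the model concrete, the fragment below sketches the evaluation of the Hamiltonian of equations (1)-(6) for an arbitrary spin configuration; for simplicity it uses open boundaries and energies in units of $`g`$, whereas the simulations described here use periodic boundary conditions (a dipolar sum over periodic images would be required to reproduce them exactly).

```python
import numpy as np

def dipolar_tensor(N):
    # W[n, m] is the 3x3 matrix of equation (6) for sites n, m of an
    # N x N lattice (a = 1, open boundaries); it depends only on the
    # geometry, so it is tabulated once and reused.
    xy = np.array([(i, j) for i in range(N) for j in range(N)], float)
    W = np.zeros((N * N, N * N, 3, 3))
    for n in range(N * N):
        for m in range(N * N):
            if n != m:
                r = np.zeros(3)
                r[:2] = xy[m] - xy[n]
                d2 = r @ r
                W[n, m] = (np.eye(3) - 3.0 * np.outer(r, r) / d2) / d2 ** 1.5
    return W

def hamiltonian(S, W, Jbar, Kbar, N):
    # equation (1), in units of g; S has shape (N*N, 3)
    conf = S.reshape(N, N, 3)
    h_exch = (conf[1:] * conf[:-1]).sum() + (conf[:, 1:] * conf[:, :-1]).sum()
    h_anis = (S[:, 2] ** 2).sum()
    h_dip = np.einsum('na,nmab,mb->', S, W, S)   # double sum over n != m
    return -Jbar * h_exch - Kbar * h_anis + h_dip
```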
## III Ground State properties
We start by studying the properties of the ground state of the model. For this purpose we have followed the simulated thermal annealing method: starting at a high temperature with a configuration of randomly oriented spins, the temperature is slowly decreased by a constant factor (we started with a temperature of 10 and used a reduction factor of $`0.9`$). At every temperature step the system is allowed to evolve during a number $`t`$ of MC steps long enough (usually between 200 and 250 MCS per spin in our system) to reach thermal equilibrium at that temperature, and the process is continued until the number of accepted trial jumps falls below a small percentage. In this way we have obtained the ground state energies and configurations of the system.
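A minimal sketch of this annealing protocol follows; the single-spin Metropolis move, the random seed and the stopping threshold of 1% accepted moves are our illustrative choices (the text only specifies "a small percentage"), and recomputing the total energy at every trial move is grossly inefficient compared with updating local fields, but keeps the sketch short. The `energy` callable could be, e.g., the `hamiltonian` function sketched above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spin():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def anneal(S, energy, t0=10.0, factor=0.9, sweeps=250, t_min=1e-3,
           accept_floor=0.01):
    # Schedule from the text: T starts at 10 and is multiplied by 0.9
    # at each step, with 200-250 MC sweeps per spin at each temperature.
    T, n = t0, len(S)
    while T > t_min:
        accepted = 0
        for _ in range(sweeps * n):
            i = rng.integers(n)
            trial = S.copy()
            trial[i] = random_spin()
            dE = energy(trial) - energy(S)   # in practice: local fields only
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                S, accepted = trial, accepted + 1
        if accepted < accept_floor * sweeps * n:
            break
        T *= factor
    return S
```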
### A Phase diagram
In Fig. 1 we present the results of the simulations for the ground state energies of the finite $`K`$ model as a function of the reduced exchange parameter $`\overline{J}`$ and for different values of the anisotropy constant (open symbols). Filled circles give the energy of the corresponding Ising model ($`K=+\infty `$) for comparison; the dashed lines correspond to the same curve with the corrections for the different finite anisotropy values added. The continuous line corresponds to the same calculation for the planar (XY) model.
A characteristic feature of the finite $`\overline{K}`$ model is that it behaves in a bimodal way. For small $`\overline{J}`$ ($`\overline{J}<\overline{J}^{\ast }`$) it is completely equivalent to the corresponding Ising model, displaying out-of-plane order, while for $`\overline{J}>\overline{J}^{\ast }`$ it behaves like the planar model and it orders in-plane. The value of $`\overline{J}^{\ast }`$ at which the crossover occurs is simply the one for which the ground state energy of the planar model (thick solid line) equals the energy of the corresponding Ising model with the finite anisotropy correction corresponding to a given value of $`K`$ (dashed lines). Therefore, the phase diagram for the finite $`K`$ model can be directly obtained from the results for the planar model by shifting the energy of the Ising model with the corresponding value of $`K`$. Consequently, the Heisenberg model for finite $`K`$ is the combination of the simpler Ising and planar models and its behaviour is dominated by the one having the lowest ground state energy.
The region of parameters of interest to us is precisely that of values of $`\overline{J}`$ around the crossover points $`\overline{J}^{\ast }`$, since in this region the out- and in-plane configurations are quasi-degenerate in energy and the system displays metastability; that is to say, different kinds of ordering may be induced depending on how the system is driven to the quasi-equilibrium state, as we will show in the next sections.
### B Configurations
Before proceeding further let us analyze the ground state configurations in more detail. On the one hand, for $`\overline{J}<\overline{J}^{}`$ (Ising regime) all the curves show three characteristic regions corresponding to different kinds of-out-of plane order. For small values of $`\overline{J}`$ the dipolar energy dominates over the exchange energy and the system orders antiferromagnetically (AF) in a checkerboarded phase of increasing energy as $`\overline{J}`$ increases (two left columns in Fig. 2). At intermediate $`\overline{J}`$ values the system enters a constant energy region in which a phase of AF stripes of width $`h=1`$ is stabilized by the exchange interaction (third column from the left in Fig. 2). Out from this plateau the FM ordering increases resulting in a widening of the stripes and an almost linear decrease of the energy with $`\overline{J}`$ (forth and fifth columns for $`\overline{K}=5.0,10.0`$ in the same figure).
In the metastable region ($`\overline{J}\simeq \overline{J}^{\ast }`$) the system starts to turn to the planar regime, first displaying configurations with a regular array of vortices with remanent out-of-plane internal order. This fact can be observed for the case $`\overline{K}=3.3,\overline{J}=1,1.33`$ in Fig. 2 and Fig. 3, where the in-plane projections of the spins are displayed with arrows. The diameter of the vortices progressively increases with increasing $`\overline{J}`$ until the FM order induced by the exchange energy stabilizes the in-plane FM configuration.
Moreover, at small values of $`\overline{K}`$, the system can change to the planar phase without entering some or any of the above mentioned Ising regions. On the contrary, at high enough $`\overline{K}`$ the system never crosses to the planar phase, behaving as the Ising model for any value of $`\overline{J}`$.
## IV Magnetic history dependence simulations
In this section we will show simulations that mimic some of the processes found in recent experiments on thin films. In all of them the measuring protocol consists of taking the sample out of its equilibrium state by the application of an external perturbation that is subsequently removed. Usually this external perturbation is a magnetic field applied perpendicular to or in the thin film plane. The final state induced by this procedure may be very different from the initial one; magnetic domains may be erased or created depending on the magnetic history to which the sample has been submitted.
### A Relaxation after perpendicular saturation
In the first simulation experiment we have considered a system with $`\overline{K}=3.3`$ and $`\overline{J}=1.0`$, which is just at the crossover between the Ising and planar regimes and has a ground state configuration with in-plane vortices. We simulate the application of a saturating field perpendicular to the film plane by starting with a configuration with all the spins pointing along the positive easy-axis direction ($`S_i^z=+1`$), and we let the system relax to equilibrium in zero applied field during a long sequence of Monte Carlo steps, recording the intermediate system configurations. One example of the temporal evolution is shown in Fig. 4, where we can see a sequence of snapshots of the system taken at different stages of the relaxation. In the first stages of the evolution the system forms striped out-of-plane structures that evolve very slowly towards the equilibrium configuration of in-plane vortices. Similar results are obtained for other values of $`\overline{J}`$ close to the intersection point of the Ising and planar regimes. For values of $`\overline{J}`$ smaller than $`\overline{J}^{\ast }`$ the system reaches the same state as after an annealing process.
Defects and imperfections in real thin films (commonly present in granular alloys) may act as pinning centers for these intermediate structures. This may explain the change observed in MFM images of granular films with a low concentration of the FM content, which in the virgin state show no out-of-plane order but, after the application of $`10`$ kOe perpendicular to the film plane, display striped and bubble-like domains similar to the ones obtained in the simulation.
### B Demagnetizing field cycling
The second kind of process consists of the application of a demagnetizing field cycling in the perpendicular direction. The parameters used for the simulation are in this case $`\overline{K}=5.0,\overline{J}=2.1`$, also in the metastable region. As in the previous case, in the initial state the system is saturated in the positive perpendicular direction with a field $`H_0`$, but now the field is cycled from the positive to the negative direction and progressively reduced in magnitude with a period $`T`$ (see the drawing at the top of Fig. 5). In the simulation the initial field has been chosen as the minimum allowing the system to escape from the initial saturated state ($`H_0=10.4`$), and the period ($`T=40`$ MCS) is such that the system has time to reverse its magnetization during the reversal of the field.
The results are displayed in the sequence of snapshots of Fig. 5. At the first stages of the process (not shown in the figure) the spins reverse following the field at each reversal. As time elapses and the field decreases, some reversed groups of spins start to nucleate (black spots in the first two rows). They continue to grow, forming out-of-plane labyrinthian configurations separated by in-plane ordered zones (grey areas) arranged in vortices. As in the preceding experiment, we find that a system with an in-plane ordered ground state may be driven to a very different ordering state by the magnetic history. What is more remarkable in this case is that the state attained after the cycling process is not lost with time. Far from relaxing to the in-plane ordered ground state, when the system is allowed to relax in zero field, the incipient structure formed during the demagnetizing cycle is stabilized. The narrow adjacent stripes coalesce one with another to form wider stripes separated by narrower regions of in-plane spins.
## V Conclusions
We have shown that a model of two-dimensional Heisenberg spins with anisotropy perpendicular to the plane and interacting via exchange and long-range dipolar forces is able to reproduce the different magnetic patterns observed in experiments, from out-of-plane labyrinthian and striped domains to in-plane FM and vortex structures. The long-range character of the dipolar interaction plays an essential role in understanding the ground state properties of this system; the results would have been very different if the dipolar field acting on the spins had been replaced by a mean-field demagnetizing field, as is usually done in some works. An interesting characteristic of the model is that it behaves as the limiting Ising or planar models depending on the values of $`\overline{J}`$ for a given value of $`\overline{K}`$. However, for values of $`\overline{J}`$ around the intersection between the Ising and the planar ground state lines, in-plane and out-of-plane configurations are quasi-degenerate in energy and metastable. In this range of parameters our simulations are able to reproduce a surprising experimental observation: if a magnetic field perpendicular to the film plane is applied to a virgin sample with in-plane FM domains, the out-of-plane component of the magnetization increases by a factor of 10 and the magnetic pattern displays well contrasted domains. A similar situation happens after perpendicular demagnetizing cycles; now the domains elongate and become wider. The last case could be thought of as a dynamical phase transition from in- to out-of-plane order induced by a driving time-dependent magnetic field, similar to that observed for Ising spins. Therefore, the application of an external perturbation that changes momentarily the energy landscape, together with the existence of highly metastable states, facilitates the driving of the system to a new stable configuration that is nonetheless not the equilibrium one.
## Acknowledgements
We acknowledge CESCA and CEPBA under coordination of $`C^4`$ for the computer facilities. This work was supported by CICYT through project MAT97-0404 and CIRIT under project SR-119.
# Avalanches at rough surfaces
## I Dynamical scaling for sandpile cellular automata
It is customary in the study of generalised surfaces to examine the widths generated by kinetic roughening, and then establish properties related to dynamical scaling. However, the kinetic roughening of sandpile cellular automata has never been investigated; to begin with, therefore, we postulate a principle of dynamical scaling for sandpile cellular automata in terms of the surface width $`W`$ of the sandpile automaton:
$`W(t)\sim t^\beta ,\quad t\ll t_{crossover}\sim L^z`$ (2)
$`W(L)\sim L^\alpha ,\quad L\to \infty `$ (3)
As in the case of interfacial widths, these equations signify the following sequence of roughening regimes:
1. To start with, roughening occurs at the CA sandpile surface in a time-dependent way; after an initial transient, the width scales asymptotically with time $`t`$ as $`t^\beta `$, where $`\beta `$ is the temporal roughening exponent. This regime is appropriate for all times less than the crossover time $`t_{crossover}\sim L^z`$, where $`z`$ = $`\alpha /\beta `$ is the dynamical exponent and $`L`$ the system size.
2. After the surface has saturated, i.e. its width no longer grows with time, the spatial roughening characteristics of the mature interface can be measured in terms of $`\alpha `$, an exponent characterising the dependence of the width on $`L`$.
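In practice $`\beta `$ and $`\alpha `$ are read off as slopes in log-log coordinates; a minimal fitting sketch (ours) follows, where the caller is responsible for restricting the data to the appropriate regime.

```python
import numpy as np

def scaling_exponent(x, w):
    # slope of log W against log x; pass times t (pre-saturation
    # widths) to obtain beta, or system sizes L (saturated widths)
    # to obtain alpha
    slope, _ = np.polyfit(np.log(x), np.log(w), 1)
    return slope
```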
We define the surface width $`W(t)`$ for a sandpile automaton in terms of the mean-squared deviations from a suitably defined mean surface; in analogy with the conventional counterpart for interface growth , we define the instantaneous mean surface of a sandpile automaton as the surface about which the sum of column height fluctuations vanishes. Clearly, in an evolving surface, this must be a function of time; hence all quantities in the following analysis will be presumed to be instantaneous.
The mean slope $`<s(t)>`$ defines expected column heights, $`h_{av}(i,t)`$, according to
$$h_{av}(i,t)=i<s(t)>$$
(4)
where we have assumed that column $`1`$ is at the bottom of the pile. Column height deviations are defined by
$$dh(i,t)=h(i,t)-h_{av}(i,t)=h(i,t)-i<s(t)>$$
(5)
The mean slope must therefore satisfy
$$\mathrm{\Sigma }_i[h(i,t)-i<s(t)>]=0$$
(6)
since the instantaneous deviations about it vanish; thus
$$<s(t)>=2\mathrm{\Sigma }_i[h(i,t)]/[L(L+1)]$$
(7)
(We note that this slope is distinct from the quantity $`<s^{\prime }(t)>=h(L,t)/L`$ that is obtained from the average of all the local slopes $`s(i,t)=h(i,t)-h(i-1,t)`$, about which slope fluctuations would vanish on average.)
The instantaneous width of the surface of a sandpile automaton, $`W(t)`$, can be defined as:
$$W(t)=\sqrt{\mathrm{\Sigma }_i[dh(i,t)^2]/L}$$
(8)
which can in turn be averaged over several realizations to give $`<W>`$, the average surface width in the steady state.
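These steady-state averages are straightforward to compute from a height profile; the following minimal Python sketch (our own illustration, not part of the original analysis) implements Eqs. (4)-(8) directly:

```python
import numpy as np

def surface_width(h):
    """Instantaneous surface width W(t) of Eq. (8) for a height
    profile h(i,t), with column 1 at the bottom of the pile."""
    L = len(h)
    i = np.arange(1, L + 1)                  # column labels 1..L
    s_mean = 2.0 * h.sum() / (L * (L + 1))   # mean slope, Eq. (7)
    dh = h - i * s_mean                      # deviations, Eq. (5)
    return np.sqrt(np.mean(dh ** 2))         # width, Eq. (8)
```

Averaging `surface_width` over independent realizations then gives $`<W>`$.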
We also compute here the height-height correlation function, $`C(j,t)`$, which is defined by
$$C(j,t)=<dh(i,t)dh(i+j,t)>/<dh(i,t)^2>$$
(9)
where the mean values are evaluated over all pairs of surface sites separated by $`j`$ lattice spacings:
$$<dh(i,t)dh(i+j,t)>=\mathrm{\Sigma }_i(dh(i,t)dh(i+j,t))/(L-j)$$
(10)
for $`0\le j<L`$. This function is symmetric and can be averaged over several realizations to give the average correlation function $`<C(j)>`$.
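A matching sketch for the correlation function, again purely illustrative, reuses the deviations defined above:

```python
def height_correlation(h, j):
    """Normalised height-height correlation C(j,t) of Eqs. (9)-(10)."""
    L = len(h)
    i = np.arange(1, L + 1)
    s_mean = 2.0 * h.sum() / (L * (L + 1))
    dh = h - i * s_mean
    pair_mean = np.sum(dh[:L - j] * dh[j:]) / (L - j)   # Eq. (10)
    return pair_mean / np.mean(dh ** 2)                  # Eq. (9)
```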
## II Qualitative effects of avalanching on surfaces
Before moving on to the quantitative descriptors of sandpile avalanching and surface roughening, we present some results using more qualitative indicators. Recent experiments on sandpile avalanches have indicated that there are at least two broad categories: "uphill" avalanches, which are typically large, and "triangular" avalanches, which are generally smaller in size. We have found evidence of this in a $`(2+1)d`$ disordered model of sandpile avalanches, which will be presented elsewhere; but in this work we discuss analogues in $`(1+1)d`$, which are respectively "wedge-shaped" and "flat" avalanches. The following data indicate that it is the larger wedge-shaped avalanches which alter surface slope and width, while the flatter, smaller avalanches alter neither very much. This is in accord with earlier work, where it was found that larger avalanches are the consequence of accumulated disorder, while the smaller ones can cause disordered regions to build up along the sandpile surface.
Figure 1(a) shows a time series for the mass of a large ($`L=256`$) evolving disordered sandpile automaton. The series has a typical quasiperiodicity. The vertical line denotes the position of a particular "large" event, while Figure 1(b) shows the avalanche size distribution for the sandpile. Note the peak, corresponding to the preferred large avalanches, which was analysed extensively in earlier work. Our data show that the avalanche highlighted in Figure 1(a) drained off approximately $`5`$ per cent of the mass of the sandpile, placing it close to the "second peak" of Figure 1(b). Figure 1(c) shows the outline of the full avalanche before and after this event, with its initiation site marked by an arrow; we note that, as is often the case in one dimension, the avalanche is "uphill". The inset shows the relative motion of the surface during this event; we note that the signatures of smoothing by avalanches are already evident, as the precursor state in the inset is much rougher than the final state. Finally we show in Figure 1(d) the grain-by-grain picture of the aftermath pile superposed on the precursor pile, which is shown in shadow. An examination of the aftermath pile and the precursor pile shows that the propagation of the avalanche across the upper half of the pile has left only very few disordered sites in its wake (i.e. the majority of the remaining sites are of $`0`$ type), whereas the lower half (which was undisturbed by the avalanche) still contains many disordered, i.e. $`1`$ type, sites in the boundary layer. This leads us to suggest that the larger avalanches rid the boundary layer of its disorder-induced roughness, a fact that is borne out by our more quantitative investigations.
In fact, our studies have revealed that the very largest avalanches, which are system-spanning, remove virtually all disordered sites from the surface layer; one is then left with a normal "ordered" sandpile, where the avalanches have their usual scaling form for as long as it takes for a layer of disorder to build up. When the disordered layer reaches a critical size, another large event is unleashed; this is the underlying reason for the quasiperiodic form of the time series shown in Figure 1(a).
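The precise disordered update rules of our model are given in the earlier work cited above; purely to fix ideas, a minimal (1+1)d limited-slope automaton of the general kind discussed here might be sketched as follows (the threshold `s_c` and the open-boundary treatment are illustrative assumptions of ours, not the rules of the disordered model):

```python
def drive_and_relax(h, s_c=2):
    """Drop one grain on the top column of a 1d pile h, then topple
    until every local slope is at most s_c; grains pushed past the
    last column leave the system.  Returns the avalanche size
    (total number of topplings)."""
    h[0] += 1
    size = 0
    active = True
    while active:
        active = False
        for i in range(len(h)):
            right = h[i + 1] if i + 1 < len(h) else 0   # open edge
            if h[i] - right > s_c:
                h[i] -= 1
                if i + 1 < len(h):
                    h[i + 1] += 1
                size += 1
                active = True
    return size
```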
Before moving on to more quantitative features, we show for comparison the sequence of Figure 1, for
1. an ordered pile - Figure 2(a-d)
2. a small disordered pile - Figure 3(a-d)
We note the following features:
* The small disordered pile has a mass time series (Figure 3(a)) that is midway between the scale-invariance of the ordered pile (Figure 2(a)) and the quasiperiodicity of the large disordered pile (Figure 1(a)).
* The avalanche size distribution of the small disordered pile (Figure 3(b)) is likewise intermediate between that of the ordered pile (which shows the scale invariance observed by Kadanoff et al.) and the two-peaked distribution characteristic of the disordered pile.
* In both small and large disordered piles, we see evidence of large โuphillโ avalanches which shave off a thick boundary layer containing large numbers of disordered sites, and leave behind a largely ordered pile (see Figure 1(c-d) and Figure 3(c-d)). By contrast the ordered pile loses typically two commensurate layers even in the largest avalanche, with a correspondingly unexciting aftermath state left behind in its wake (Figure 2(c-d)).
We conclude from this that there is, even at a qualitative level, a post-avalanche smoothing of the sandpile surface beyond a crossover length, as found in earlier work on continuum models; importantly, our discrete model reveals that this is achieved by the removal of (orientational) disorder, the implications of which we will discuss in our concluding section. The existence of the crossover length, in terms of the mass time series, has also been observed in experiment.
## III Quantitative effects of avalanching on surfaces
### A Intrinsic properties of sandpile surfaces
Inspired by the picture of smoothing avalanches, we have investigated many of the material properties of the sandpile in the special pre- and post-avalanche configurations. From these we have drawn the following conclusions:
* The mean slope of the disordered sandpile peaks (see Table I) before a large avalanche and drops immediately after; this statement is true for events of any size and thus remains trivially true for the ordered sandpile.
* The packing fraction $`\varphi `$ of the disordered sandpile increases after a large event, i.e. effective consolidation occurs during avalanching (see Table I). This consolidation via avalanching mirrors that which occurs when a sandpile is shaken with low-intensity vibrations.
* However, a far deeper statement can be made about the comparison of the surface width for pre- and post-large-event sandpiles; Table I shows that the surface width goes down considerably during an event, once again suggesting that a rough precursor pile is smoothed by the propagation of a large avalanche.
We have also investigated the dependence of various material properties of a disordered sandpile on the aspect ratio of the grains. Table II shows our results, and Figure 4 illustrates the variation of the avalanche size distribution.
There is a transition as aspect ratios of $`0.7`$ are approached from above or below; we have shown above that piles with these "critical" aspect ratios manifest strong disorder in the sense of:
* a "second peak" in the avalanche size distribution, denoting a preferred size of large avalanches
* large surface widths denoting an increased surface roughness
* a strong correlation between interfacial roughness and avalanche flow, since the mean surface width varies dramatically in the pre- and post-large-event piles.
Clearly, sandpiles containing grains with aspect ratios close to unity act essentially as totally ordered piles; there is however a significant symmetry in the shape of the avalanche size distribution curves above and below the transition region (see Figure 4(a) and (d)). These size distributions are reminiscent of those obtained in earlier work for the case of "uniform disorder" (which referred to piles that have disorder throughout their volume rather than, as in the present case, disorder concentrated in a boundary layer). These observations lead us to speculate that there exist at least three types of avalanche spectra:
1. the scale-invariant statistics characteristic of ordered sandpiles
2. the strongly disordered statistics characterized by a second peak in the distribution, which we have obtained for specific values of the aspect ratio in the case where the disorder is concentrated in a boundary layer
3. the more weakly disordered region (characterized by a flatter size distribution of avalanche sizes which is, nevertheless, not scale-invariant) obtained in the intermediate regimes of aspect ratio (as well as in the case of uniform disorder).
It is clear that the presence of inherent inhomogeneities in grain shape (which we describe quantitatively by aspect ratio) or bulk structure (which we describe by the classifications of "uniform" or "boundary" disorder) in a sandpile induces the presence of strong disorder in avalanche statistics.
Additionally we present, in Figure 5, the mass-mass correlation function of a particular disordered sandpile; the curve has a peak, which indicates the average time between avalanches. Since the avalanche size distribution for this sandpile includes a preponderance of large events, we conclude that the peak in the correlation function corresponds approximately to the time between large avalanches. We also expect this timescale to manifest itself in the power spectrum of the avalanches; and we expect it to vary strongly with the level and nature of disorder in the sandpile. This work is in progress, as are efforts to relate the timescale found above to a characteristic spatial signature for large events.
We present in Figure 6 the normalised equal-time height-height correlation function $`<dh(r+r_0)dh(r_0)>/<dh(r_0)^2>`$ for a disordered sandpile. This shows that the height deviations (from the instantaneous expected column heights), in a disordered sandpile with $`L=256`$, are positively correlated over about $`80`$ columns, but also have a range where they are negatively correlated. In the inset we plot the related function $`1-<dh(r+r_0)dh(r_0)>/<dh(r_0)^2>=<(dh(r+r_0)-dh(r_0))^2>/2<dh(r_0)^2>`$. For separations $`r`$ much less than the correlation length of the system, we should have:
$$<(dh(r+r_0)-dh(r_0))^2>\sim |r|^{2\alpha }$$
(11)
and, therefore, we would expect the function in the inset of Figure 6 to manifest a similar $`r`$-dependence. A linear fit to points with $`r<30`$ (shown by the line in the inset of Figure 6) indicates a power-law dependence of the form $`r^{0.67}`$ for $`r<<L`$, implying that $`\alpha \approx 0.34`$. As we will see below, this corresponds to the spatial roughening exponent of an ordered sandpile. An explanation of this behaviour is included in the next section.
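For reference, the fit just quoted amounts to nothing more than a least-squares line in log-log coordinates; a sketch using the correlation helper defined earlier (with `h` a saturated height profile, assumed available):

```python
r = np.arange(1, 30)
y = np.array([1.0 - height_correlation(h, j) for j in r])
slope, _ = np.polyfit(np.log(r), np.log(y), 1)   # expect slope ~ 0.67
alpha_estimate = slope / 2.0                     # ~ 0.34 for r << L
```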
### B Spatial and temporal roughening of sandpile surfaces
The hypothesis of dynamical scaling for sandpiles assumes that the roughening process occurs in two stages. First, the surface roughening is time-dependent, Eq. (2); then once the roughness becomes temporally constant, the surface is said to saturate, and all further deposition results in surface fluctuations governed by Eq. (3).
However, there is a subtlety concerning the first (i.e. time-dependent) stage; "early" model sandpiles are wedge-shaped, and the transition to saturation is accompanied by a gradual build-up to a pile that has a single, sloping surface with a suitable angle of repose.
We have taken this process into account to measure the dynamic exponent $`\beta `$, Eq. (2); in this case surface widths are evaluated from the sloping portion of the pile. For the roughening exponent $`\alpha `$, Eq. (3), we have measured surface widths from mature piles that have only a sloping surface.
Our results are:
* For disordered sandpiles ($`L=2048`$) we find $`\beta =0.42\pm 0.05`$; for ordered sandpiles ($`L=2048`$) $`\beta =0.17\pm 0.05`$.
* For disordered sandpiles above a crossover size of $`L_c=90`$ we find $`\alpha =0.723\pm 0.04`$; while for ordered piles we find $`\alpha =0.356\pm 0.05`$.
* Based on the above values we find that the dynamical exponent $`z`$ has values of $`1.72\pm 0.29`$ and $`2.09\pm 0.84`$ for the disordered and ordered sandpiles, respectively (see the brief error estimate below this list).
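The quoted uncertainty on $`z`$ follows from propagating the fitted errors on $`\alpha `$ and $`\beta `$; for the disordered pile, adding the relative errors linearly gives

$$\delta z\simeq z\left(\frac{\delta \alpha }{\alpha }+\frac{\delta \beta }{\beta }\right)\simeq 1.72\left(\frac{0.04}{0.723}+\frac{0.05}{0.42}\right)\simeq 0.30,$$

consistent with the value $`\pm 0.29`$ quoted above.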
The variation of the surface width, $`W`$, as a function of $`L`$ is shown in a log-log plot in Figure 7. This figure shows clearly the crossover in $`\alpha `$ as a function of system size for disordered sandpiles; the scaling behaviour of ordered sandpiles is shown for comparison. Disordered sandpiles with sizes below $`L_c`$ have $`\alpha =0.37\pm 0.05`$; this is in accord with earlier work, where the second peak in the avalanche spectrum appeared only for disordered piles above crossover. The existence of this crossover length has been variously interpreted as a length related to reorganisation in the boundary layer of a sandpile or to variations in the angle of repose in a (disordered) sandpile. Disorder appears to be crucial for the existence of such experimentally observed crossovers since, for example, ordered models show no crossover in their measurements of $`\alpha `$. The crossover effect in a disordered sandpile is also indicated by the height-height correlation function (Figure 6). For separations $`r<<L_c\approx 90`$ in disordered sandpiles (with length $`L>>L_c`$), the exponent $`\alpha `$ obtained from the small-$`r`$ behaviour of the correlation function is that of the ordered sandpile. This suggests that even in disordered sandpiles grains which are within a crossover length, $`L_c`$, of each other tend to order; i.e. the examination of the height-height correlation function for separations $`r<<L_c`$ (Figure 6), or the direct measurement of $`\alpha `$ for system sizes $`L<<L_c`$ as reported above, yields the exponent of the ordered sandpile, $`\alpha \approx 0.35`$.
The above values indicate that while there is not a change of universality class as one goes from an ordered to a disordered sandpile ($`z`$ stays the same, within the error bars), the disordered pile is clearly rougher with respect to both temporal and spatial fluctuations ($`\alpha `$ and $`\beta `$ higher).
It is important to note that our measurements of surface exponents are taken over many realisations of the surfaces concerned. Thus, even though, as demonstrated in earlier sections, the surface of a disordered sandpile is temporarily smoothed by the propagation of a large avalanche, it begins to roughen again as a result of deposition; the values of $`\alpha `$ and $`\beta `$ that we measure are averages over millions of such cycles and hence reflect the roughening of the interface, in an average sense. By contrast, no abnormally large events occur for the ordered sandpiles and this is reflected by the lower values of fluctuations and exponents.
The most striking aspect of these exponents is that they indicate that our present cellular-automaton model is a discrete version of earlier continuum equations, which were formulated independently to model the pouring of grains onto a sloping surface. The exponents for our disordered pile are, within error bars, exactly those that were measured for the height fluctuations of the surface in case 2 of that work, while those for the ordered pile are exactly those that were measured for the fluctuations of the avalanches generated by the mobile grains in the same case. This is in accord with the notion that the avalanches which flow on an ordered pile generate only mobile grains on the otherwise ordered surface while, as we have demonstrated above, avalanches that flow on a disordered pile also change the configuration of the surface by altering the distribution of height fluctuations (measured by the surface widths). We are exploring these analogies further, but note that this agreement is already a strong validation of both models.
## IV Discussion and conclusions
We have presented a thorough investigation of the effects of avalanching on a sandpile surface, focusing on the interrelationship between the nature of the avalanches and the surfaces they leave behind. We have also postulated a principle of dynamical scaling for sandpile surfaces, and measured the roughening exponents for a sample disordered sandpile. Finally, we have related the characteristics of avalanching in our model system to those obtained experimentally.
Our current investigations concern several questions left unanswered above. These include the dependence of the crossover length $`L_c`$ on the disorder in the pile, as well as a fuller investigation of the effect of the nature of the disorder (i.e. whether boundary or uniform). We would expect our correlation functions to depend strongly on the nature and magnitude of the disorder, and we are undertaking a full quantitative study. Lastly, we hope that an extension of the present analysis to higher dimensions will yield more extensive comparisons with experiments than are presently available.
## ACKNOWLEDGMENTS
GCB acknowledges support from the Biotechnology and Biological Sciences Research Council, UK ($`218`$/FO$`6522`$).
# Quantum Memory for Light
## Abstract
We propose an efficient method for mapping and storage of a quantum state of propagating light in atoms. The quantum state of the light pulse is stored in two sublevels of the ground state of a macroscopic atomic ensemble by activating a synchronized Raman coupling between the light and atoms. We discuss applications of the proposal in quantum information processing and in atomic clocks operating beyond quantum limits of accuracy. The possibility of transferring the atomic state back on light via teleportation is also discussed.
Light is an ideal carrier of quantum information, but photons are difficult to store for a long time. In order to implement a storage device for quantum information transmitted as a light signal, it is necessary to faithfully map the quantum state of the light pulse onto a medium with low dissipation, allowing for storage of this quantum state. Depending on the particular application of the memory, the next step may be either a (delayed) measurement projecting the state onto a certain basis, or further processing of the stored quantum state, e.g. after a read-out via the teleportation process. The delayed projection measurement is relevant for the security of various quantum cryptography and bit commitment schemes. The teleportation read-out is relevant for full-scale quantum computing.
In this Letter we propose a method that enables quantum state transfer between propagating light and atoms with an efficiency of up to 100% for certain classes of quantum states. The long-term storage of these quantum states is achieved by utilizing atomic ground states. At the end of the paper we propose an atom-back-to-light teleportation scheme as a read-out method for our quantum memory.
We consider the stimulated Raman absorption of propagating quantum light by a cloud of $`\mathrm{\Lambda }`$ atoms. As shown in the inset of Fig.1, the weak quantum field and the strong classical field are both detuned from the upper intermediate atomic state(s) by $`\mathrm{\Delta }`$, which is much greater than the strong field Rabi frequency $`\mathrm{\Omega }_s`$, the width of an upper level $`\gamma _i`$ and the spectral width of the quantum light $`\mathrm{\Gamma }_q`$. The Raman interaction "maps" the non-classical features of the quantum field onto the coherence of the lower atomic doublet, distributed over the atomic cloud.
In our analysis we eliminate the excited intermediate states, and we treat the atoms by an effective two-level approximation. We start with the quantum Maxwell-Bloch equations in the lowest order for the slowly varying operator $`\widehat{Q}`$: $`\widehat{Q}=\widehat{\sigma _{31}}e^{i(\omega _q-\omega _s)t+i(k_q-k_s)z}`$ (it will be assumed that $`(k_q-k_s)L\ll 1`$, where $`L`$ is the length of the atomic cloud, $`z`$ is the propagation direction, and $`\omega _{q,s}`$ and $`k_{q,s}`$ are the frequencies and wavevectors of the "quantum" and "strong" fields respectively)
$`{\displaystyle \frac{d}{dt}}\widehat{Q}(z,t)=i\kappa _1^{*}\widehat{E}_q(z,t)E_s^{*}(z,t)-\mathrm{\Gamma }\widehat{Q}(z,t)+\widehat{F}(z,t)`$ (2)
$`\left({\displaystyle \frac{\partial }{\partial z}}+{\displaystyle \frac{1}{c}}{\displaystyle \frac{\partial }{\partial t}}\right)\widehat{E}_q(z,t)=i\kappa _2\widehat{Q}(z,t)E_s(z,t)`$ (3)
$`\mathrm{\Gamma }`$ is the dephasing rate of the $`1`$-$`3`$ coherence, which also includes the strong-field power broadening $`\mathrm{\Gamma }_s\simeq \omega ^3\hbar \kappa _1^2|E_s|^2/(3c^3)`$ due to spontaneous Raman scattering, $`\widehat{F}(z,t)`$ is the associated quantum Langevin force with correlation function $`\langle \widehat{F}^{\dagger }(z,t)\widehat{F}(z^{\prime },t^{\prime })\rangle =(2\mathrm{\Gamma }/n)\delta (z-z^{\prime })\delta (t-t^{\prime })`$, and $`\kappa _1=\sum _i\mu _{1i}\mu _{3i}/(\hbar ^2\mathrm{\Delta }_i)`$, $`\kappa _2=2\pi n\hbar \omega \kappa _1/c`$, where $`\mu _{ji}`$ are the dipole moments of the atomic transitions and $`n`$ is the density of the atoms. A one-dimensional wave equation is sufficient to describe the spatial propagation of light in a pencil-shaped sample with a Fresnel number $`F=A/\lambda L`$ near unity ($`A`$ is the cross-sectional area of the sample and $`\lambda `$ is the optical wavelength).
If the strong field is not depleted in the process of quantum field absorption and if most of the atomic population stays in the initial level $`1`$, Eqs.(2-3) can be integrated to get
$`\widehat{Q}(z,\tau )`$ $`=`$ $`e^{-\mathrm{\Gamma }\tau }\widehat{Q}(z,0)-e^{-\mathrm{\Gamma }\tau }{\displaystyle \int _0^z}dz^{\prime }\widehat{Q}(z^{\prime },0)\sqrt{{\displaystyle \frac{a(\tau )}{z-z^{\prime }}}}J_1(2\sqrt{a(\tau )(z-z^{\prime })})`$ (5)
$``$ $``$ $`-i\kappa _1{\displaystyle \int _0^\tau }d\tau ^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}\widehat{E}_q(0,\tau ^{\prime })E_s(\tau ^{\prime })J_0(2\sqrt{z(a(\tau )-a(\tau ^{\prime }))})+{\displaystyle \int _0^\tau }d\tau ^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}\widehat{F}(z,\tau ^{\prime })`$ (6)
$``$ $``$ $`-{\displaystyle \int _0^\tau }d\tau ^{\prime }{\displaystyle \int _0^z}dz^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}\widehat{F}(z^{\prime },\tau ^{\prime })\sqrt{{\displaystyle \frac{a(\tau )-a(\tau ^{\prime })}{z-z^{\prime }}}}J_1(2\sqrt{(a(\tau )-a(\tau ^{\prime }))(z-z^{\prime })})`$ (7)
$`\widehat{E}_q(z,\tau )`$ $`=`$ $`\widehat{E}_q(0,\tau )-i\kappa _2E_s(\tau )e^{-\mathrm{\Gamma }\tau }{\displaystyle \int _0^z}dz^{\prime }\widehat{Q}(z^{\prime },0)J_0(2\sqrt{a(\tau )(z-z^{\prime })})`$ (8)
$``$ $``$ $`-\kappa _1^{*}\kappa _2E_s(\tau ){\displaystyle \int _0^\tau }d\tau ^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}\widehat{E}_q(0,\tau ^{\prime })E_s^{*}(\tau ^{\prime })\sqrt{{\displaystyle \frac{z}{a(\tau )-a(\tau ^{\prime })}}}J_1(2\sqrt{z(a(\tau )-a(\tau ^{\prime }))})`$ (9)
$``$ $``$ $`-i\kappa _2E_s(\tau ){\displaystyle \int _0^\tau }d\tau ^{\prime }{\displaystyle \int _0^z}dz^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}\widehat{F}(z^{\prime },\tau ^{\prime })J_0(2\sqrt{(a(\tau )-a(\tau ^{\prime }))(z-z^{\prime })})`$ (10)
where $`\tau =t-z/c`$, $`a(\tau )=\kappa _1^{*}\kappa _2{\displaystyle \int _0^\tau }d\tau ^{\prime \prime }|E_s(\tau ^{\prime \prime })|^2`$, and $`\widehat{Q}(z,0)`$ is the initial atomic coherence.
Integrating Eq.(7) over space we obtain the collective atomic spin operator, which is the atomic variable on which the quantum light field is mapped.
$`\widehat{S}_L(\tau )\equiv n{\displaystyle \int _0^L}dz\widehat{Q}(z,\tau )`$ $`=`$ $`ne^{-\mathrm{\Gamma }\tau }{\displaystyle \int _0^L}dz^{\prime }J_0(2\sqrt{a(\tau )(L-z^{\prime })})\widehat{Q}(z^{\prime },0)`$ (11)
$`+`$ $`n{\displaystyle \int _0^\tau }d\tau ^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}{\displaystyle \int _0^L}dz^{\prime }J_0(2\sqrt{(a(\tau )-a(\tau ^{\prime }))(L-z^{\prime })})\widehat{F}(z^{\prime },\tau ^{\prime })`$ (12)
$`-`$ $`in\kappa _1{\displaystyle \int _0^\tau }d\tau ^{\prime }e^{-\mathrm{\Gamma }(\tau -\tau ^{\prime })}\widehat{E}_q(\tau ^{\prime })E_s(\tau ^{\prime })\sqrt{{\displaystyle \frac{L}{a(\tau )-a(\tau ^{\prime })}}}J_1(2\sqrt{(a(\tau )-a(\tau ^{\prime }))L})`$ (13)
Eq.(13) is the main result of this Letter. The first term represents the decaying memory of the initial atomic coherence in the sample, the second term is the contribution from the Langevin noise associated with the decay of the coherence, and the last term represents the contribution from the absorbed quantum light. It is thus the last term that describes the quantum memory capability of the atomic system. Note that the strong classical field $`E_s(\tau ^{\prime })`$ can be turned on and off, so that only the value of the quantum field in a certain time window is mapped onto the atomic system, where it is subsequently kept. We assume that the rate $`\mathrm{\Gamma }`$ is dominated by the power-broadening contribution $`\mathrm{\Gamma }_s`$ when the classical field is turned on, and that it can be quite small, $`\mathrm{\Gamma }=\mathrm{\Gamma }_0`$, when the classical field is turned off, to ensure long storage times. If the quantum field pulse $`\widehat{E}_q(\tau )`$ and the overlapping classical pulse $`E_s(\tau )`$ are long enough so that $`\mathrm{\Gamma }\tau \gg 1`$, the initial atomic state decays and the state determined by $`\widehat{E}_q(\tau )`$ emerges instead. After the light pulses are turned off, the atomic "memory" state decays slowly with the rate $`\mathrm{\Gamma }_0`$.
As an example of storing a quantum feature of light in atoms, let us consider storing a squeezed state, which plays an important role in quantum information with continuous variables. For infinitely broadband squeezed light the quadrature operator $`\widehat{X}_q(z,\tau )=\text{Re}\widehat{E}_q(z,\tau )`$ on the entry face of the sample can be written as $`\langle \widehat{X}_q(0,\tau )\widehat{X}_q(0,\tau ^{\prime })\rangle =2\pi \hbar \omega /cX_0^2\delta (\tau -\tau ^{\prime })`$, where $`X_0^2`$ is the dimensionless light noise, with $`X_0^2=1`$ in the case of broadband vacuum. In steady state the variance of the atomic noise $`\widehat{X}=\text{Re}\widehat{S}_L`$ becomes
$`X^2`$ $`=`$ $`nL\left(e^{-\alpha }\left(I_0(\alpha )+I_1(\alpha )\right)\right)`$ (14)
$`+`$ $`nLX_0^2\left(1-e^{-\alpha }\left(I_0(\alpha )+I_1(\alpha )\right)\right)`$ (15)
where $`\alpha =aL/\mathrm{\Gamma }`$ is the optical depth of the sample, $`a=\kappa _1^{*}\kappa _2|E_s|^2`$, and $`I_0`$ and $`I_1`$ are Bessel functions of the first kind. In the case of vacuum incident on the sample we recover the atomic vacuum noise $`X^2=nL`$, the number of atoms per unit area. The second term in (15) represents the light contribution to the atomic noise; it is reduced when the light is squeezed, and in the case of ideally squeezed light $`X_0^2=0`$ only the first term contributes to the atomic noise variance. We define the dimensionless expression in the parentheses as a mapping efficiency for Gaussian fields, $`\eta =(1-X^2/nL)/(1-X_0^2)`$ (for ideally squeezed light $`1-\eta `$ quantifies the amount of spin squeezing). The results are plotted in Fig.1 (solid line) as a function of the optical depth $`\alpha `$. Storing squeezing in atoms with an efficiency higher than 90% requires an atomic sample with an optical depth of the order of $`60`$. Note that by absorption of EPR beams in separate atomic samples we may, e.g., prepare entangled atomic gases. If $`\mathrm{\Gamma }\approx \mathrm{\Gamma }_s`$, i.e. if the decoherence is dominated by the strong field that is required for the operational memory, then $`\alpha \approx (3/2\pi )\lambda ^2nL`$, i.e. the optical depth is the same as for a resonant narrowband field. The dependence on the optical depth arises because the more squeezed light is absorbed in the sample, the more the atoms become squeezed. If only a fraction of the light field is absorbed, the atomic spins will not only be correlated with each other but also with the field leaving the sample, and thus the squeezing will be degraded.
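The efficiency $`\eta =1-e^{-\alpha }(I_0(\alpha )+I_1(\alpha ))`$ implied by Eq. (15) is easy to evaluate numerically; the following minimal sketch (our own illustration, using standard scipy Bessel routines) reproduces the optical-depth requirement quoted above:

```python
from scipy.special import ive   # ive(n, a) = exp(-a) * I_n(a)

def mapping_efficiency(alpha):
    """eta(alpha) for broadband squeezing, from Eq. (15); the
    exponentially scaled Bessel functions avoid overflow at
    large optical depth."""
    return 1.0 - (ive(0, alpha) + ive(1, alpha))

print(mapping_efficiency(60.0))   # ~0.90, i.e. 90% at alpha ~ 60
```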
Various schemes for quantum state exchange between light and atoms based on cavity QED Raman-type interactions have been proposed in the past. Quantum memory with a microwave cavity field as the storage medium has been demonstrated experimentally. The fact that the present proposal does not utilize high-finesse cavities significantly simplifies the experimental realization. The above result can be compared with the proposal, and its experimental verification, for squeezing the collective spin of an optically thick sample of $`V`$-type excited atoms via the interaction with squeezed light. As opposed to the theoretical bound of 50% mapping efficiency found in that work, the present proposal offers in principle a perfect transfer of the state of light onto atoms.
A steady-state analysis in the frequency domain leads to the following expression for the spectral collective atomic spin operator
$`\stackrel{~}{S}_L(\mathrm{\Delta })`$ $`=`$ $`{\displaystyle \frac{in}{\kappa _2E_s}}\left(1-e^{ik(\mathrm{\Delta })L}\right)\stackrel{~}{E}_q(\mathrm{\Delta })`$ (16)
$`+`$ $`{\displaystyle \int _0^L}dz{\displaystyle \frac{n}{\mathrm{\Gamma }-i\mathrm{\Delta }}}e^{ik(\mathrm{\Delta })(L-z)}\stackrel{~}{F}(z,\mathrm{\Delta })`$ (17)
where $`\mathrm{\Delta }`$ is the detuning from the two-photon resonance and $`k(\mathrm{\Delta })`$ is the Lorentzian absorption profile, $`ik(\mathrm{\Delta })=-a/(\mathrm{\Gamma }-i\mathrm{\Delta })`$. The atomic noise variance $`X^2=\int d\mathrm{\Delta }\langle \stackrel{~}{X}(\mathrm{\Delta })\stackrel{~}{X}(-\mathrm{\Delta })\rangle `$ gives the same result as Eq.(15).
The simplest approach to quantum field propagation in a medium is the model of scattering by a collection of frequency-dependent beam splitters. Each beam splitter removes a small fraction of a propagating light beam and simultaneously couples a small fraction of vacuum into the beam. The result for the noise spectrum of the transmitted light in our model coincides with such a simplified treatment and is given by
$$\stackrel{~}{X}^2(\mathrm{\Delta })=\stackrel{~}{X}_0^2(\mathrm{\Delta })e^{-\frac{a\mathrm{\Gamma }L}{\mathrm{\Gamma }^2+\mathrm{\Delta }^2}}+\left(1-e^{-\frac{a\mathrm{\Gamma }L}{\mathrm{\Gamma }^2+\mathrm{\Delta }^2}}\right)$$
(18)
For infinite bandwidth squeezed incident light this spectrum approaches the vacuum value $`1`$, for the frequencies where light is strongly attenuated. The width of this noise region grows with optical depth of the system. It is within this spectral region that quantum features of the light field are transferred onto atoms.
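Equation (18) is likewise trivial to tabulate; a hedged sketch of ours (the parameter names are illustrative only):

```python
import numpy as np

def noise_spectrum(delta, X0_sq, aL, Gamma):
    """Transmitted quadrature-noise spectrum, Eq. (18); aL is the
    product a*L and delta the two-photon detuning."""
    T = np.exp(-aL * Gamma / (Gamma**2 + delta**2))  # Lorentzian attenuation
    return X0_sq * T + (1.0 - T)
```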
In the case of a finite bandwidth of ideal squeezing, $`\langle \widehat{X}_q(0,\tau )\widehat{X}_q(0,\tau ^{\prime })\rangle \propto 2\pi \hbar \omega /c(\delta (\tau -\tau ^{\prime })-(\mathrm{\Gamma }_q/2)e^{-\mathrm{\Gamma }_q|\tau -\tau ^{\prime }|})`$, calculations based on either Eq.(13) or Eq.(17) have to be carried out numerically, and the mapping efficiencies for different spectral widths of squeezing $`\mathrm{\Gamma }_q`$ are shown in Fig.1.
The macroscopic number of atoms in our atomic sample, of which most remain in the ground state, allows us to replace the sum of fermionic atomic operators by an effective bosonic operator $`\widehat{S}_L`$ matching the bosonic operator of the light field. This restriction should be kept in mind when comparing our results to other analyses of spin squeezing.
A suitable experimental setup for realizing the storage of field correlations in atoms is the cold-atom fountain, e.g. as used in a frequency standard. A recent paper reports operation of a laser-cooled cesium fountain clock in the quantum-limited regime, meaning that the variance $`X^2=nL`$ of the collective atomic spin associated with the $`F=4,m=0`$ $`\leftrightarrow `$ $`F=3,m=0`$ two-level system has been achieved. This means that the setup is suitable for the observation of squeezing of $`X^2`$. The decoherence time $`\mathrm{\Gamma }_0^{-1}`$ of the order of a second reached in the atomic-standard setup in principle allows quantum memory on this time scale. We thus propose to prepare atoms in the $`F=3,m=0`$ state (our state $`1`$; the level $`F=4,m=0`$ plays the role of our state $`3`$) and to illuminate them with a Raman pulse containing the squeezed vacuum and the strong field, as described above. After the pulse, and after some delay, the atoms are interrogated in a microwave cavity where their collective spin state is analyzed to verify that the memory works.
We now wish to address the experimental requirements for our proposal. For our two-level analysis to be valid, we assume that $`\mathrm{\Delta }\gg \mathrm{\Gamma }_q,\mathrm{\Gamma }_s,\gamma _i`$ and $`\sigma _R\gg \sigma _{\text{2-level}}`$, where $`\sigma _R=(6\pi )^4c^8I_{\text{sat}}^2/(2\mathrm{\Gamma }_qS\omega ^{11}\hbar ^3\mathrm{\Delta }_i^2)`$ is the stimulated Raman cross section for the quantum field, $`S=I_s/I_{\text{sat}}`$ is the saturation parameter and $`I_{\text{sat}}=\omega ^6/(9\pi c^5)\mu _{1i}\mu _{3i}`$ is the saturation intensity of the strong field for the $`1-i`$, $`3-i`$ transitions, and $`\sigma _{\text{2-level}}=3\lambda ^2\gamma _i^2/(8\pi \mathrm{\Delta }_i^2)`$ is the spontaneous two-level cross section. In order to carry out the steady-state solution of (2-3) we assume $`\mathrm{\Gamma }_s\gg \tau _{\text{pulse}}^{-1}`$, where $`\tau _{\text{pulse}}`$ is the duration of the Raman pulse. Finally, the condition on the bandwidth of the quantum field, $`\mathrm{\Gamma }_q\gg \tau _{\text{pulse}}^{-1}`$, ensures that the pulse is long enough to contain all relevant correlations of the quantum state of the field. It is possible to satisfy all these conditions with the following set of parameters: $`\mathrm{\Gamma }_q=10^7`$Hz, $`\mathrm{\Delta }=10^9`$Hz, $`S>4`$, $`\tau _{\text{pulse}}=10`$ msec. With a resonant optical depth of $`20`$, achievable for $`5\times 10^5`$ atoms, a mapping efficiency exceeding 80% is possible (Fig.1). After the pulse is switched off, the memory time $`\mathrm{\Gamma }_0^{-1}`$ is set by the free evolution of the $`F=4,m=0`$ $`\leftrightarrow `$ $`F=3,m=0`$ system and, as mentioned above, it can be as long as a second.
We have analyzed the possibility of transferring (writing down) a quantum state of light onto an atomic sample, and we have suggested how to perform a delayed measurement of the quantum state. We will now briefly discuss how to map the atomic state back onto a light field by interspecies teleportation. To realize an effective teleportation of an atomic collective spin onto a light beam, we suggest an approach similar to teleportation of light with EPR-correlated light beams, using a beam-splitter-type interaction between one of the beams and the atomic collective spin. Making a homodyne measurement of the light quadrature and a Ramsey measurement of the atomic spin, we may employ the protocol used for light teleportation and restore the atomic state in the other light beam.
To realize the "beam-splitter" we send a short pulse ($`\tau _{\text{pulse}}\mathrm{\Gamma }\ll 1`$, so that dissipation processes do not take place) of one of the EPR beams through our atomic sample in the small optical depth regime ($`\alpha \simeq a\tau _{\text{pulse}}L\ll 1`$). In our scheme the switching from high to small optical depth is made simply by adjusting the intensity of the coupling field $`E_s`$. In the weak coupling regime (small optical depth) the interaction between light and atoms, (10)-(13), can be described by a linear approximation leading to a beam-splitter-type interaction. Introducing a new rescaled atomic operator $`\widehat{q}=(nL)^{-1/2}\widehat{S}_L`$ and the field "area" operator $`\widehat{\theta }=\sqrt{\lambda /2\pi \hbar \tau _{\text{pulse}}}\int _0^{\tau _{\text{pulse}}}d\tau ^{\prime }\widehat{E}_q(\tau ^{\prime })`$ we obtain:
$`\widehat{q}_{\text{out}}`$ $`=`$ $`\widehat{q}_{\text{in}}-ir\widehat{\theta }_{\text{in}}`$ (20)
$`\widehat{\theta }_{\text{out}}`$ $`=`$ $`\widehat{\theta }_{\text{in}}-ir\widehat{q}_{\text{in}}`$ (21)
The condition for such a linearization is a weak interaction; hence our "beam-splitter" is highly asymmetric, $`r=\sqrt{\alpha }=\sqrt{\sigma _R\tau _{\text{pulse}}L/\mathrm{\Gamma }_q}\ll 1`$. Teleportation with asymmetric beam splitters is possible, but it requires a higher degree of correlation in the EPR beams. A simple estimate suggests that the residual noise in the EPR pair must be smaller than $`r`$. If one assumes a stronger coupling in order to approach the symmetric beam-splitter case, the field probes a component of the atomic coherence which deviates from the uniform integral in Eq.(13) due to the spatial variation of the probe light. If, for example, the probe is damped by a factor of order 2, it is reasonable to decompose the probed atomic coherence as a roughly even mixture of the uniform integral $`\widehat{S}_L`$ and a "noise" operator which we, for simplicity, may assume to be the standard vacuum noise. This noise is comparable to the "quduty" of noise of a direct detection of the atomic ensemble and reconstruction of a corresponding field state ("classical teleportation").
We are grateful to Prof. Sam Braunstein for stimulating discussions of the atomic teleportation. This research has been funded by the Danish Research Council and by the Thomas B. Thriges Center for Quantum Information. AK acknowledges support from the ESF-QIT programme.
# Quantum Logic with a Single Trapped Electron
## I Introduction
The modern theory of information relies on the very foundations of quantum mechanics. This is because information is physical, as recently emphasised by Landauer. It implies that the laws of quantum mechanics can be used to process and store information. The elementary quantity of classical information is the bit, which is represented by a dichotomic system; therefore, any physical realization of a bit needs a system with two states. The novel characteristic of quantum information is that, by using quantum states to store information, a quantum system can be in a superposition of states. This means, in a sense, that the elementary quantity of quantum information, a quantum bit, can be in both states at the same time.
Already in 1981 Feynman pointed out the impossibility for a classical computer to simulate the evolution of a quantum system in an efficient way. This opened the search for a more efficient way to simulate quantum systems, until Deutsch provided a satisfactory theoretical description of a universal quantum computer. The quantum computer is a device which operates with simple quantum logic gates. These are analogous to the classical gates, which perform one elementary operation on two bits in a given way. Quantum logic gates differ from their classical counterparts in that they operate on quantum superpositions and perform operations on them. It has also been shown that any quantum computation can be built from a series of one-bit and two-bit quantum logic gates. The fundamental quantum logic gate is the controlled-NOT (CN) gate, in which one quantum bit (or qubit) is flipped (rotated by $`\pi `$ radians) depending upon the state of a second qubit.
A very promising candidate for quantum logic was recently introduced by Cirac and Zoller, who showed how to construct universal multibit quantum logic gates in a system of laser-cooled trapped ions. Other systems have been devised as building blocks for a quantum computer; the search for new systems is, however, still open, because none of the previous systems can yet be claimed as the best candidate. One should devise a system with very low loss, almost decoherence-free, which can be well controlled with simple operations. However, before obtaining a suitable system one has to be sure that the mathematical models of quantum logic can be easily implemented in a real physical system. Up to now the experimental realization of such logic operations has been shown to be possible with trapped ions, flying qubits, and cavity QED. There are claims that quantum logic gates have been obtained in NMR systems, but this has also been questioned. In these systems, however, the implementation of quantum logic is not at all easy and was not completely performed in all of them.
It is our aim here to show that another natural candidate for implementing quantum logic could be the trapped electron. In fact, an electron is a real two-state system and, when stored in a Penning trap, permits very accurate measurements. Furthermore, in such a system the decoherence effects, which can destroy the quantum interference that enables the quantum logic implementation, are well controlled. Moreover, electrons, being structureless, open other possibilities, e.g. the use of statistics, which has not as yet been considered in the literature.
To introduce the system, in this paper we consider a single electron trapped in a Penning trap, and we show how to obtain a controlled-NOT gate on a pair of qubits. The two qubits comprise two internal (spin) states and two external (quantized harmonic motion) states. Although this minimal system consists of only two qubits, it illustrates the basic operations necessary for, and the problems associated with, quantum logic networks with electrons. The extension to two or more electrons needs further investigation. Here we are not interested in the scalability of the system, but rather in showing the physical implementation of quantum logic in a readily controllable way with existing technologies.
## II The Model
We are considering the "geonium" system, consisting of an electron of charge $`e`$ and mass $`m`$ moving in a uniform magnetic field $`\mathbf{B}`$ along the positive $`z`$ axis, and a static quadrupole potential
$$V=V_0\frac{x^2+y^2-2z^2}{4d^2},$$
(1)
where $`d`$ characterizes the dimension of the trap and $`V_0`$ is the potential applied to the trap electrodes.
In this work, in addition to the usual trapping fields, we embed the trapped electron in a radiation field of vector potential $`\mathbf{A}_{\mathrm{ext}}`$. Traditional hyperbolic Penning traps form cavities for which it has not yet been possible even to classify the standing-wave fields. In marked contrast, the radiation modes of a simple cylindrical cavity are classified in a familiar way as either transverse magnetic or transverse electric modes. So, in the following, we always refer to such cylindrical traps.
The Hamiltonian for the trapped electron can be written as the quantum counterpart of the classical Hamiltonian with the addition of the spin term
$$H=\frac{1}{2m}\left[\mathbf{p}-e\mathbf{A}\right]^2+eV-\frac{g}{2}\frac{e\hbar }{2m}\sigma \cdot \mathbf{B},$$
(2)
where $`g`$ is the electron's $`g`$ factor, and
$$\mathbf{A}=\frac{1}{2}\mathbf{B}\times \mathbf{r}+\mathbf{A}_{\mathrm{ext}},$$
(3)
where $`\mathbf{r}\equiv (x,y,z)`$ and $`\mathbf{p}\equiv (p_x,p_y,p_z)`$ are respectively the position and the conjugate momentum operators, while $`\sigma \equiv (\sigma _x,\sigma _y,\sigma _z)`$ are the Pauli matrices in the spin space.
The motion of the electron in the absence of the external field $`\mathbf{A}_{\mathrm{ext}}`$ is the result of the motion of three harmonic oscillators, the cyclotron, the axial and the magnetron, well separated in the energy scale, plus a spin precession around the $`z`$ axis. This can be easily understood by introducing the ladder operators
$`a_z`$ $`=`$ $`\sqrt{{\displaystyle \frac{m\omega _z}{2\hbar }}}z+i\sqrt{{\displaystyle \frac{1}{2\hbar m\omega _z}}}p_z`$ (4)
$`a_c`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[\sqrt{{\displaystyle \frac{m\omega _c}{2\hbar }}}(x-iy)+\sqrt{{\displaystyle \frac{2}{\hbar m\omega _c}}}(p_y+ip_x)\right]`$ (5)
$`a_m`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[\sqrt{{\displaystyle \frac{m\omega _c}{2\hbar }}}(x+iy)-\sqrt{{\displaystyle \frac{2}{\hbar m\omega _c}}}(p_y-ip_x)\right]`$ (6)
where the indexes $`z`$, $`c`$ and $`m`$ stand for axial, cyclotron and magnetron, respectively. The above operators obey the commutation relations $`[a_i,a_j^{\dagger }]=\delta _{ij}`$, $`i,j=z,c,m`$.
When $`\mathbf{A}_{\mathrm{ext}}=0`$, the Hamiltonian (2) simply reduces to
$$H=\hbar \omega _za_z^{\dagger }a_z+\hbar \omega _ca_c^{\dagger }a_c-\hbar \omega _ma_m^{\dagger }a_m+\frac{\hbar }{2}\omega _s\sigma _z,$$
(7)
where the angular frequencies are given by
$$\omega _z=\sqrt{\frac{|e|V_0}{md^2}};\omega _c=\frac{|e|B}{m};\omega _m\simeq \frac{\omega _z^2}{2\omega _c}.$$
(8)
and $`\omega _s=g|e|B/2m`$ is the spin precession angular frequency. In the previous expression for $`\omega _c`$ we neglected very small corrections which are not relevant for our purpose. In typical experimental configurations the respective frequency ranges are $`\omega _z/2\pi \sim `$ MHz, $`\omega _c/2\pi \sim `$ GHz, and $`\omega _m/2\pi \sim `$ kHz.
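To make the hierarchy concrete, Eq. (8) can be evaluated for representative trap parameters; the numbers below are illustrative assumptions of ours, not values taken from any specific experiment:

```python
import numpy as np

e = 1.602176634e-19    # electron charge magnitude (C)
m = 9.1093837015e-31   # electron mass (kg)
B, V0, d = 0.1, 10.0, 1.0e-2   # assumed field (T), voltage (V), trap size (m)

omega_z = np.sqrt(e * V0 / (m * d**2))   # axial
omega_c = e * B / m                      # cyclotron
omega_m = omega_z**2 / (2 * omega_c)     # magnetron

for name, w in [("axial", omega_z), ("cyclotron", omega_c), ("magnetron", omega_m)]:
    print(f"{name:9s}: {w / (2 * np.pi):.3e} Hz")
# -> roughly 21 MHz, 2.8 GHz and 80 kHz, respecting omega_m << omega_z << omega_c
```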
Let us introduce the external radiation field as a standing wave along the $`z`$ direction, rotating, i.e. circularly polarized, in the $`xy`$ plane with frequency $`\mathrm{\Omega }`$. In particular, we consider a standing wave within the cylindrical cavity with wave vector $`k`$ and amplitude $`|\alpha |`$. Then, we can write
$$\mathbf{A}_{\mathrm{ext}}=(i\left[e^{i\phi +i\mathrm{\Omega }t}-e^{-i\phi -i\mathrm{\Omega }t}\right],\left[e^{i\phi +i\mathrm{\Omega }t}+e^{-i\phi -i\mathrm{\Omega }t}\right],0)\times |\alpha |\mathrm{cos}(kz+\varphi ),$$
(9)
where $`\phi `$ is the phase of the wave field, which gives the direction of the electric (or magnetic) vector in the $`xy`$ plane at the initial time. We assume this can be experimentally controlled. The amplitude $`|\alpha |`$ should depend upon the transverse spatial variables through the Bessel function, but we can consider it as a constant because of the small radius of the cyclotron motion. The phase $`\varphi `$ defines the position of the center of the axial motion with respect to the wave. Depending on its value the electron can be positioned in any place between a node and an antinode.
For frequencies $`\mathrm{\Omega }`$ close to $`\omega _c`$ and $`\omega _s`$, we can neglect the slow magnetron motion, then the Hamiltonian (2) becomes
$`H`$ $`=`$ $`\hbar \omega _za_z^{\dagger }a_z+\hbar \omega _ca_c^{\dagger }a_c+{\displaystyle \frac{\hbar }{2}}\omega _s\sigma _z`$ (10)
$`+`$ $`\hbar \epsilon \left[a_ce^{i\phi +i\mathrm{\Omega }t}+a_c^{\dagger }e^{-i\phi -i\mathrm{\Omega }t}\right]\mathrm{cos}(k\widehat{z}+\varphi )`$ (11)
$`+`$ $`\hbar \zeta \left[\sigma _{-}e^{i\phi +i\mathrm{\Omega }t}+\sigma _+e^{-i\phi -i\mathrm{\Omega }t}\right]\mathrm{sin}(k\widehat{z}+\varphi ),`$ (12)
where
$$\epsilon =\left(\frac{2|e|^3B}{\hbar m^2}\right)^{1/2}|\alpha |,\zeta =\frac{g|e|}{2m}|\alpha |k,$$
(13)
and $`\sigma _\pm =(\sigma _x\pm i\sigma _y)/2`$. The fourth and fifth terms on the right-hand side of the Hamiltonian (10) describe the interaction between the trapped electron and the standing wave, which can give rise to a coupling between the axial and cyclotron motions, as well as between the axial and spin ones. In writing Eq. (10) we omitted terms coming from $`\mathbf{A}_{\mathrm{ext}}^2`$, which give a negligible contribution (at most an axial frequency correction) when the electron is positioned in a node or antinode, as we shall do in the following.
## III Entangled States Preparation
The spin state is usually controlled through a small oscillatory magnetic field $`\mathbf{b}`$ that lies in the $`xy`$ plane
$$\mathbf{b}(t)=b(\mathrm{cos}(\omega _st+\theta ),\mathrm{sin}(\omega _st+\theta ),0),$$
(14)
which causes Rabi oscillations at frequency $`\varpi _s=g|e|b/2m`$. The phase $`\theta `$ can be experimentally controlled; it gives the direction of the field at the initial time. The Hamiltonian that follows from Eq. (14), in the absence of the standing wave and in a frame rotating at frequency $`\omega _s`$, is
$$H_s=\hbar \frac{\varpi _s}{2}\left[\sigma _+e^{-i\theta }+\sigma _{-}e^{i\theta }\right]=\hbar \frac{\varpi _s}{2}\left[\sigma _x\mathrm{cos}\theta +\sigma _y\mathrm{sin}\theta \right].$$
(15)
The other non interacting terms do not affect the spin motion and can be neglected. The evolution of the spin state $`|\chi _s=u|+v|`$, with $`|u|^2+|v|^2=1`$, under such Hamiltonian will be
$$|\chi (t)\rangle _s=\left[u\mathrm{cos}\left(\frac{\varpi _st}{2}\right)-ive^{-i\theta }\mathrm{sin}\left(\frac{\varpi _st}{2}\right)\right]|\uparrow \rangle +\left[v\mathrm{cos}\left(\frac{\varpi _st}{2}\right)-iue^{i\theta }\mathrm{sin}\left(\frac{\varpi _st}{2}\right)\right]|\downarrow \rangle .$$
(16)
Thus, depending on the interaction time, any superposition of spin states can be generated.
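As a quick consistency check of Eq. (16), one can exponentiate the 2x2 Hamiltonian of Eq. (15) numerically; a sketch of ours, with $`\hbar `$ set to 1:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def rotate_spin(u, v, theta, pulse_area):
    """Evolve u|up> + v|down> under H_s of Eq. (15) for a pulse of
    area varpi_s * t = pulse_area."""
    H = 0.5 * (np.cos(theta) * sx + np.sin(theta) * sy)
    return expm(-1j * H * pulse_area) @ np.array([u, v], dtype=complex)

# a pi/2 pulse on spin-down yields an equal-weight superposition:
print(rotate_spin(0.0, 1.0, 0.0, np.pi / 2))
```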
As far as the spatial degrees of freedom are concerned, we assume that the cyclotron and the axial motions are cooled down to their respective lowest states, i.e. $`|0_c\rangle `$ and $`|0_z\rangle `$. This should be achievable when the axial motion is decoupled from the external circuit usually used to extract information.
We now consider the spin and the axial degrees of freedom as qubits. Then, by choosing $`\varphi =0`$, i.e. positioning the electron in a node of the standing wave, Eq. (10) can be approximated by
$`H`$ $`=`$ $`\hbar \omega _za_z^{\dagger }a_z+\hbar \omega _ca_c^{\dagger }a_c+{\displaystyle \frac{\hbar }{2}}\omega _s\sigma _z`$ (17)
$`+`$ $`\hbar \epsilon \left[a_ce^{i\phi +i\mathrm{\Omega }t}+a_c^{\dagger }e^{-i\phi -i\mathrm{\Omega }t}\right]`$ (18)
$`+`$ $`\hbar \zeta k\sqrt{{\displaystyle \frac{\hbar }{2m\omega _z}}}\left[\sigma _{-}e^{i\phi +i\mathrm{\Omega }t}+\sigma _+e^{-i\phi -i\mathrm{\Omega }t}\right]\left(a_z+a_z^{\dagger }\right).`$ (19)
We distinguish two situations (in a frame rotating at frequency $`\mathrm{\Omega }`$): the first one, in which $`\mathrm{\Omega }=\omega _s-\omega _z`$, gives
$$H_{-}=\hbar \eta \left[\sigma _+a_ze^{-i\phi }+\sigma _{-}a_z^{\dagger }e^{i\phi }\right],$$
(20)
where $`\eta =k\zeta \sqrt{\hbar /2m\omega _z}`$.
The second, for which $`\mathrm{\Omega }=\omega _s+\omega _z`$, gives
$$H_+=\hbar \eta \left[\sigma _+a_z^{\dagger }e^{-i\phi }+\sigma _{-}a_ze^{i\phi }\right].$$
(21)
The action of Hamiltonian (20) for a time $`t`$ on the initial state $`|0_z\rangle |\uparrow \rangle `$ leads to
$$|0_z\rangle |\uparrow \rangle \to \mathrm{cos}(\eta t)|0_z\rangle |\uparrow \rangle -ie^{i\phi }\mathrm{sin}(\eta t)|1_z\rangle |\downarrow \rangle .$$
(22)
Instead, the action of Hamiltonian (21) for a time $`t`$ on the initial state $`|0_z\rangle |\downarrow \rangle `$ leads to
$$|0_z\rangle |\downarrow \rangle \to \mathrm{cos}(\eta t)|0_z\rangle |\downarrow \rangle -ie^{-i\phi }\mathrm{sin}(\eta t)|1_z\rangle |\uparrow \rangle .$$
(23)
Practically, if the electron enters the trap with, e.g., its spin down, by applying selectively the Hamiltonians (15), (20) and (21) for appropriate times we can get states of the form
$$\alpha |0_z\rangle |\downarrow \rangle +\beta |0_z\rangle |\uparrow \rangle +\gamma |1_z\rangle |\downarrow \rangle +\delta |1_z\rangle |\uparrow \rangle ,|\alpha |^2+|\beta |^2+|\gamma |^2+|\delta |^2=1,$$
(24)
which show entanglement between the two qubits.
Therefore, the manipulation between the four basis eigenstates spanning the two-qubit register $`\{|0_z\rangle |\downarrow \rangle ,|0_z\rangle |\uparrow \rangle ,|1_z\rangle |\downarrow \rangle ,|1_z\rangle |\uparrow \rangle \}`$ is achievable.
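These sideband dynamics are easily simulated on the truncated space spanned by the four register states; the following sketch (our illustration, with $`\hbar =\eta =1`$) reproduces the rotation of Eq. (23):

```python
import numpy as np
from scipy.linalg import expm

a  = np.array([[0, 1], [0, 0]], dtype=complex)   # axial lowering on {|0>,|1>}
sp = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_+ in the (down, up) basis

phi, t = 0.3, np.pi / 4                          # illustrative phase and pulse time
Hp = (np.exp(-1j * phi) * np.kron(a.conj().T, sp)    # sigma_+ a^dagger e^{-i phi}
      + np.exp(1j * phi) * np.kron(a, sp.conj().T))  # sigma_- a        e^{+i phi}

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                    # |0_z>|down>
psi = expm(-1j * Hp * t) @ psi0
print(np.round(psi, 3))   # cos(t)|0,down> - i e^{-i phi} sin(t)|1,up>
```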
## IV Logic Operations
Here we shall consider the spin as the "target" qubit, and the axial degree of freedom as the "control" qubit. The basic logic operations on a single qubit (e.g. the Hadamard gate) can be implemented in the target qubit by applying the Hamiltonian (15), while there is no way to directly control the axial qubit.
The CN gate represents, instead, a computation at the most fundamental level: the target qubit is flipped depending upon the state of the control qubit.
The truth table of the reduced CN gate is
$`|0_z\rangle |\downarrow \rangle `$ $`\to `$ $`|0_z\rangle |\downarrow \rangle ,`$ (25)
$`|0_z\rangle |\uparrow \rangle `$ $`\to `$ $`|0_z\rangle |\uparrow \rangle ,`$ (26)
$`|1_z\rangle |\downarrow \rangle `$ $`\to `$ $`|1_z\rangle |\uparrow \rangle ,`$ (27)
$`|1_z\rangle |\uparrow \rangle `$ $`\to `$ $`|1_z\rangle |\downarrow \rangle .`$ (28)
To implement such a transformation we consider $`\mathrm{\Omega }=\omega _s`$ and $`\varphi =\pi /2`$, i.e. the electron is positioned in an antinode (this operation is routinely performed in actual experiments). Then, the leading term of Eq. (10) (in a frame rotating at frequency $`\mathrm{\Omega }`$) turns out to be
$$H=\hbar \zeta \left[\sigma _+e^{-i\phi }+\sigma _{-}e^{i\phi }\right]\times \left[1-\frac{\hbar k^2}{4m\omega _z}-\frac{\hbar k^2}{2m\omega _z}a_z^{\dagger }a_z\right].$$
(29)
If we choose $`\phi =0`$, the above Hamiltonian reduces to
$$H=2\hbar \zeta \left(1-\frac{\hbar k^2}{4m\omega _z}\right)\sigma _x-2\hbar \zeta \frac{\hbar k^2}{2m\omega _z}a_z^{\dagger }a_z\sigma _x.$$
(30)
Of course, for logic operations on the two qubits, only the interacting part of the above Hamiltonian is relevant. On the other hand, the flipping effect of the first term of Hamiltonian (30) can be eliminated by a subsequent action of Hamiltonian (15) with $`\theta =0`$, for a time $`\tau `$ such that
$$\tau \varpi _s=-4\zeta \left(1-\frac{\hbar k^2}{4m\omega _z}\right)t^{\prime }\pm 2\pi n,$$
(31)
where $`n`$ is a natural number and $`t^{\prime }`$ is the interaction time with Hamiltonian (30).
Hence, the relevant Hamiltonian for the CN gate is
$$H=-\hbar \kappa a_z^{\dagger }a_z\sigma _x,$$
(32)
where $`\kappa =\hbar \zeta k^2/m\omega _z`$.
If we appropriately choose the interaction time $`t^{\prime }=\pi /2\kappa `$ we can apply the transformation
$$U=\mathrm{exp}\left(i\pi a_z^{\dagger }a_z\sigma _x/2\right).$$
(33)
Thus, the net unitary transformation, in the $`\{|0_z\rangle |\downarrow \rangle ,|0_z\rangle |\uparrow \rangle ,|1_z\rangle |\downarrow \rangle ,|1_z\rangle |\uparrow \rangle \}`$ basis, is
$`\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 0& i\\ 0& 0& i& 0\end{array}\right).`$ (38)
This transformation is equivalent to the reduced CN gate of Eq. (25), apart from phase factors that can be eliminated by the appropriate phase settings of subsequent logic operations. Practically, the reduced CN gate here consists of a single step, as in earlier experimental work.
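The matrix of Eq. (38) can be verified directly by exponentiating the gate Hamiltonian on the four-dimensional register space (a sketch of ours, with the basis ordered as above):

```python
import numpy as np
from scipy.linalg import expm

sx  = np.array([[0, 1], [1, 0]], dtype=complex)
n_z = np.diag([0.0, 1.0])                 # axial number operator on {|0>,|1>}

U = expm(1j * (np.pi / 2) * np.kron(n_z, sx))
print(np.round(U, 3))
# identity on the n=0 block and i*sigma_x on the n=1 block,
# i.e. the conditional spin flip of Eq. (38)
```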
## V Information Measurements
We recall that in the geonium system the measurements are performed on the axial degree of freedom, due to the nonexistence of good detectors in the microwave regime. The oscillating charged particle induces alternating image charges on the electrodes, which in turn cause an oscillating current to flow through an external circuit where the measurement is performed. The current will be proportional to the axial momentum $`p_z`$. The very act of measurement changes, however, the state of the measured observable. Then, in order not to lose any stored information because of the measurement, we shall transfer the information contained in the axial qubit into the cyclotron degree of freedom prior to the measurement procedure.
To transfer the information from the axial motion to the cyclotron one, we again use the standing wave, but with another resonance, $`\mathrm{\Omega }=\omega _c-\omega _z`$, in order to get from Eq. (10)
$$H=i\hbar \epsilon k\sqrt{\frac{\hbar }{2m\omega _z}}\left(a_c^{\dagger }a_z-a_ca_z^{\dagger }\right).$$
(39)
Here we set $`\varphi =\phi =\pi /2`$. With the action of the Hamiltonian (39) for a well-chosen interaction time, it is possible to transfer any previously entangled state as follows
$$|0_c\rangle \left[c_0|0_z\rangle |\chi \rangle _s+c_1|1_z\rangle |\chi ^{\prime }\rangle _s\right]\to \left[c_0|0_c\rangle |\chi \rangle _s+c_1|1_c\rangle |\chi ^{\prime }\rangle _s\right]|0_z\rangle ,$$
(40)
where $`|\chi \rangle `$ and $`|\chi ^{\prime }\rangle `$ represent two generic spin states. This is obtained when the interaction time is $`t=\sqrt{\pi m\omega _z/2\hbar }/(\epsilon k)`$.
Once the information is transferred to the cyclotron degree of freedom, the axial motion is coupled to the external circuit, and it will reach thermal equilibrium with the read-out apparatus.
Then, the measurements of $`a_c^{\dagger }a_c`$ and $`\sigma _z`$ can be done in the usual way with the aid of the magnetic bottle, which causes a shift of the axial resonance proportional to the respective quantum numbers
$$\mathrm{\Delta }\omega _z\simeq \stackrel{~}{\omega }_z\left(\frac{gs}{4}+n_c+\frac{1}{2}\right),$$
(41)
where $`\stackrel{~}{\omega }_z`$ is a constant, and $`n_c`$, $`s`$ are the cyclotron excitation and spin quantum numbers. This frequency shift can be measured with very high precision.
In this model it could also be possible to obtain phase information about the quantum state of the register by means of the coupling between the meter (the axial degree of freedom) and the system (cyclotron or spin), induced again by the standing waves.
## VI Conclusions
In conclusion, we have shown the possibility of using a trapped electron for fundamental quantum logic. This system has the advantage of a well-defined and simple internal structure; practically, decoherence appears only in the axial degree of freedom as a consequence of measurements, but the information stored in this degree of freedom can, prior to the measurement, be unitarily transferred into the cyclotron motion. The latter can be preserved from decoherence due to decay mechanisms by appropriately tuning the cavity. The spin is very stable against field fluctuations. Eventually, the register, in such a configuration, could suffer only from the time uncertainty in switching the interactions on and off, possibly leading to nondissipative decoherence. An effect on the fidelity of the logical operations could also arise from the impurity of the motional ground states due to an imperfect cooling process. Anyway, we maintain that the present model can be implemented with current technology, and a comparison with the results obtained in the corresponding experimental work would be useful; with respect to that work, in the present case the complete information on the state of the two-qubit register is also obtainable.
We also wish to remark that, within the model of the trapped electron, other schemes could be exploited, for example by encoding information in other degrees of freedom, or by using Schroedinger cat states as well; in fact, the latter have been shown to be achievable in such systems.
The next step would be the extension of the above formalism to the case of two or more trapped electrons, in order to investigate real possibilities for quantum registers. One should consider that the realization of a 4-qubit system would be a real advance, because of the possibility of checking error correction strategies. As a final comment, we note that with the simple system introduced here one can also implement the Deutsch problem.
The authors are grateful for a critical reading of the manuscript by I. Marzoli. This work has been partially supported by INFM (through the 1997 Advanced Research Project "CAT"), by the European Union in the framework of the TMR Network "Microlasers and Cavity QED", by MURST under the "Cofinanziamento 1997" and by the CNR-ICCTI joint programme.
# THE VARIANT PRINCIPLE
## I INTRODUCTION
Abstractly, Nature can be examined as a system of states and actions. A state is a general concept that defines the existence, structure, organization, and conservation of all systems of matter, and that stipulates the properties and inner relationships of all things and phenomena. An action is an operation that manifests the self-influence and inter-influence of states, and that presents the dynamical power and impulsion of motion and development. Generally, a state is an object on which actions act. Each state has its own action. Self-action makes a state conserve or develop itself. The action of one state on another forms an interaction between them. Self-action and interaction cause the variation of states from one to another. That variation establishes a general law of motion.
Following this line of thought, I advance a new principle, which I call the Variant Principle. Utilizing this principle as a most general principle, I hope that it will be useful in the search for a logically systematic method with which to review known laws and to predict unknown ones, and that it provides a groundwork for unifying the interactions of nature. I believe that some readers of this article will find that this principle naturally explains the inner origin of variation and rules the evolutionary processes of things, and perhaps some will be the ones to complete the quest for theories of the Universe.
The article is organized as follows. In Section 2, I advance the ideas and concepts leading to the equation of motion; this is the foundation of the variant principle. A phenomenon in physics is illustrated by this principle in Section 3. The conclusion is given in Section 4.
## II THE EQUATION OF MOTION
In Nature, any state and its action are constituent elements of a subject that I call an actor,
$$A=(\mathbb{A}\text{ }\&\text{ }\widehat{\mathrm{A}}),$$
(1)
where $`\mathbb{A}`$ is the state, and $`\widehat{\mathrm{A}}`$ is its action operator.
1. For any system in which there is only one actor $`\{A\}`$, that actor is in self-action. This causes the actor either to be conserved or to be varied by its own action with respect to all its possible inner degrees of freedom. Conservation makes the actor invariant, but variation obeys an equation of motion,
$$\widehat{\mathrm{A}}\mathbb{A}=0,$$
(2)
where the action operator $`\widehat{\mathrm{A}}`$ may include differentiation, integration, and/or other formal operations acting with respect to some degrees of freedom (such as space, time, and/or some other variable), depending on the actual physical problem, and $`\mathbb{A}`$ may naturally be a state function describing some considered object. The value "$`0`$" on the right-hand side of Eq. (2) means that the variation of the actor approaches stability (invariance), i.e. the self-action is equal to zero when the variation finishes.
The solution of the equation of motion describes the variant process of the actor. The actor varies and finally becomes a new actor, which is the solution of the equation of motion when the variation finishes.
2. For any system consisting of many actors $`\{A_1;A_2;\dots \}`$, each actor is subject to its self-action and to actions from the others. This causes each actor to be varied by the actions of itself and of the others with respect to all its possible inner and outer degrees of freedom. This variation obeys an equation of motion,
$$(\widehat{\mathrm{A}}_1;\widehat{\mathrm{A}}_2;\dots )(\mathbb{A}_1;\mathbb{A}_2;\dots )=0,$$
(3)
where the action operators $`\widehat{\mathrm{A}}_i`$ of the actors $`A_i`$ are operations acting with respect to some degrees of freedom, and the states $`\mathbb{A}_i`$ of the actors $`A_i`$ are functions characterizing the considered objects. The value "$`0`$" on the right-hand side of Eq. (3) means that the actions are equal to zero when the variations of the actors finish, i.e. the variations of the actors approach stability (invariance). In fact, Eq. (3) is an advanced form of Eq. (2).
The solutions of the equations of motion of the actors describe their variant processes. All the actors vary and finally become a new actor $`A`$, which is the solution of the equations of motion when the variations of the actors finish:
$$A=[A_1,A_2,\dots ],$$
(4)
where actors are in the same dimension of interaction.
* For a system consisting of many actors $`\{A_1;A_2;\dots \}`$, the whole system can be considered as a total actor which includes the component actors,
$$\{A\}=\{A_1;A_2;\dots \}.$$
(5)
Thereby, the actor $`A`$ is in self-action, and it either self-conserves or self-varies with respect to all its possible inner degrees of freedom, and the variation obeys the equation of motion (2).
Hence, the variant principle is stated as follows:
* *In Nature, every actor is varied by the actions of itself and others, with respect to all possible degrees of freedom, so as to become some new actor which is the solution of the equation of motion describing its variant process.*
Indeed, every variation is caused by the action of an actor on a state; variation is the way to escape from action, or in other words, the state varies so as to become agreeable to the action. This means that under actions an actor must vary somehow, over all possible degrees of freedom (its transportation facilities), to become a new actor, and that its speed of variation depends on the power of the action, which is manifested in the conservation of the actor.
The eigenvalue of an action is expressed as an instrument to promote variation, as an easiness of variation. Its value along some degree of freedom shows the probability of variation in that direction.
Any actor acted upon by some action must vary somehow over all possible degrees of freedom to become a new actor which is no longer acted upon by any action. That process shows the continuous variation of the actor from beginning to end.
Therefore, this shows that every variation must have its cause and its agent, and that the course of variation obeys the equation of motion.
Thereby, from Eqs. (2) and (3), an equation of motion can be built for any physical law. The use of Eqs. (2) and (3) for research in physics is considered in the next section. I hope that the readers will thereby understand the variant principle more profoundly.
## III The Rule of the Universe's Evolution
The simplest form of self-action is the expansion of an actor along some degree of freedom,
$$e^{\delta x\widehat{\partial }_x}f(x)=f(x+\delta x).$$
(6)
This is just the equation of motion for any quantity $`f(x)`$, with $`x`$ a degree of freedom and $`\delta x`$ an infinitesimal of $`x`$.
The Universe's evolution is described as a law of causality essentially based on just this expansion. The form of Eq. (6) is nothing but a Taylor series. The derivatives of $`f(x)`$ with respect to $`x`$ are just the variations of $`f(x)`$ along the degree of freedom $`x`$.
Eq. (6) has an important application in modelling the multiplication and the combination of quanta.
Call $`\alpha ,\beta ,\gamma ,\dots `$ quanta. For each quantum there is a rule of multiplication as follows
$$\alpha ^n\to e^{\partial _\alpha }\alpha ^n=\sum _{i=0}^{n}C_i^n\alpha ^{n-i}=(\alpha +1)^n$$
(7)
where $`n`$ is the order of combination, $`\delta \alpha =1`$, and $`C_i^n`$ is the binomial coefficient.
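This multiplication rule is easy to verify symbolically (a sketch using sympy; the order $`n=5`$ is an arbitrary choice):

```python
# Check Eq. (7): exp(d/d_alpha) alpha^n = sum_i C(n,i) alpha^(n-i) = (alpha+1)^n.
import sympy as sp

alpha = sp.symbols('alpha')
n = 5
# The exponential of the derivative operator truncates to a finite sum,
# since derivatives of alpha^n vanish beyond order n.
lhs = sum(sp.diff(alpha**n, alpha, i) / sp.factorial(i) for i in range(n + 1))
print(sp.expand(lhs - (alpha + 1)**n))   # -> 0
```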
Using Eq. (7), I consider two stages in the process of the Universe's evolution: doublet and triplet.
For two interacting quanta the rule of multiplication reads
$$\alpha ^n,\beta ^n\to \frac{1}{2}\left(e^{\beta \partial _\alpha }\alpha ^n+e^{\alpha \partial _\beta }\beta ^n\right)=\sum _{i=0}^{n}C_i^n\alpha ^{n-i}\beta ^i=(\alpha +\beta )^n.$$
(8)
And similarly for three interacting quanta
$`\alpha ^n,\beta ^n,\gamma ^n\to {\displaystyle \frac{1}{3}}\left(e^{(\beta +\gamma )\partial _\alpha }\alpha ^n+e^{(\gamma +\alpha )\partial _\beta }\beta ^n+e^{(\alpha +\beta )\partial _\gamma }\gamma ^n\right)`$ $`=`$ $`{\displaystyle \sum _{m=0}^{n}}{\displaystyle \sum _{i=0}^{m}}C_m^nC_i^m\alpha ^{n-m}\beta ^{m-i}\gamma ^i`$ (9)
$`=`$ $`(\alpha +\beta +\gamma )^n.`$ (10)
And so forth. Eqs. (8) and (10) can be drawn as schemata.
$$\begin{array}{cccccccccc}& & & & & 1& & & & \\ \underline{2}& & & & 1& & 1& & & \\ \underline{2}\otimes \underline{2}=\underline{3}\oplus \underline{1}& & & 1& & 2& & 1& & \\ \underline{2}\otimes \underline{2}\otimes \underline{2}=\underline{4}\oplus \underline{2}\oplus \underline{2}& & 1& & 3& & 3& & 1& \\ \dots & 1& & 4& & 6& & 4& & 1\end{array}$$
(11)
is the schema for Eq. (8), where $`\underline{2}`$ means the two quanta $`\alpha `$ and $`\beta `$. The numbers in the triangle are the binomial coefficients, which are called the weights of the classes. For example,
$$\underline{2}\otimes \underline{2}=\underline{3}\oplus \underline{1}=\begin{array}{ccccc}& & 1& & \\ 1& \text{---}& 1& \text{---}& 1\end{array}.$$
And similarly, for Eq. (10) it reads
$$\begin{array}{ll}\underline{3}:& 1\;\;1\;\;1\\ \underline{3}\otimes \underline{3}=\underline{6}\oplus \overline{3}:& 1\;2\;1;\;\;2\;2;\;\;1\\ \underline{3}\otimes \underline{3}\otimes \underline{3}=\underline{10}\oplus \underline{8}\oplus \underline{8}\oplus \underline{1}:& 1\;3\;3\;1;\;\;3\;6\;3;\;\;3\;3;\;\;1\\ \underline{3}\otimes \underline{3}\otimes \underline{3}\otimes \underline{3}=\dots :& 1\;4\;6\;4\;1;\;\;4\;12\;12\;4;\;\;6\;12\;6;\;\;4\;4;\;\;1\end{array}$$
(12)
where $`\underline{3}`$ means the three quanta $`\alpha `$, $`\beta `$ and $`\gamma `$, and each level of the pyramid is listed row by row. The coefficients in the pyramid are the weights of the classes,
$$\underline{3}\otimes \underline{3}=\underline{6}\oplus \overline{3},$$
$$\underline{3}\otimes \overline{3}=\underline{1}\oplus \underline{8}.$$
It is easy to see that the above schemata have forms similar to the $`SU(2)`$ and $`SU(3)`$ groups. This means that for $`n`$ quanta there is a corresponding schema according to the $`SU(n)`$ group, and that the multiplication and the combination of the Universe conform to the $`SU`$ groups. This rule is studied further in Ref.
## IV CONCLUSION
The theory of causality is very useful for understanding the cause of variation. The coexistence of two different actors causes a contradiction. The solution of the contradiction makes the contradiction vary; that variation is just each actor inclining to become a new actor. This means that the difference and the contradiction between two actors tend towards zero. Indeed, every system comes to equilibrium and stability: a state which has any immanent contradiction must vary to become a new one having no contradiction.
The variant principle deals with the law of variation of actors; it describes only actors with their actions and states, without mentioning the difference or even the contradiction between them. In this sense the variant principle is more elementary and easier to understand than the causal principle, since everything is referred to as an actor existing in nature. The self-action and interaction of actors on their states cause the world to be in motion and variation.
Although the variant principle gives a powerful foundation for research into the laws of nature, no rule has yet arisen for formulating self-action and interaction operators. However, there are some ways to introduce operators into the equation of motion, and I hope that in a future article these ways will be synthesized into a standard rule.
For instance, in quantum electrodynamics the equations of motion of the electron-positron field and the electromagnetic field are:
$$i\gamma ^\mu \partial _\mu \psi (x)+\frac{m_ec}{\hbar }\psi (x)+\frac{e}{\hbar }\gamma ^\mu A_\mu (x)\psi (x)=0,$$
$$\Box A_\mu +ie\overline{\psi }(x)\gamma _\mu \psi (x)=0.$$
The first line is the equation of motion of the electron: the first term corresponds to the variation of the electron with respect to space-time, the second gives the conservation of the electron, and the third is the action of the electromagnetic field on the electron. The second line can be rewritten as
$$\partial ^\nu F_{\nu \mu }-J_\mu =0,$$
which is nothing but the Maxwell equation, with $`F_{\nu \mu }=\partial _\nu A_\mu -\partial _\mu A_\nu `$ the electromagnetic field tensor, $`A_\mu `$ the 4-dimensional potential, and $`J_\mu =ie\overline{\psi }(x)\gamma _\mu \psi (x)`$ the 4-dimensional current density. The first term corresponds to the variation of the electromagnetic field, and the second is the external current density of the electromagnetic field (here the mass of the photon is zero, so the mass term is not present).
From this example it is easy to see that:
* A variation along some degree of freedom is expressed as differentiation with respect to that degree of freedom.
* The conservation of an actor is written as a term consisting of the actor multiplied by a constant characterizing its conservation.
* The influence of one actor on another is represented as a multiplication of the two actors.
* An external actor stands on an equal footing with the variation term when it acts on an actor as an external current, an external source, or an external force.
In conclusion, some readers may think that the variant principle belongs to philosophy rather than to physics. That is not so. Doing physics means discovering nature: not only matter composition, phenomena, and processes, but also, more importantly, laws, principles, and rules. In reality, principles are profound elements of physics, and of course they have something close to philosophy. The variant principle is one such principle for research on nature, and I hope that discovering principles will be one of the new directions in doing physics.
## Acknowledgments
We would like to thank Dr. D. M. Chi for useful discussions and valuable comments.
The present article was supported in part by the Advanced Research Project on Natural Sciences of the MT&A Center.
# Can Black Holes be Created at the Birth of the Universe?
Zhong Chao Wu
Dept. of Physics
Beijing Normal University
Beijing 100875, China
(Gravity Essay)
Abstract
We study the quantum creation of black hole pairs in the (anti-)de Sitter space background. These black hole pairs in the Kerr-Newman family are created from constrained instantons. At the $`WKB`$ level, for the chargeless and nonrotating case, the relative creation probability is the exponential of (the negative of) the entropy of the universe. Also for the remaining cases of the family, the creation probability is the exponential of (the negative of) one quarter of the sum of the inner and outer black hole horizon areas. In the absence of a general no-boundary proposal for open universes, we treat the creations of the closed and the open universes in the same way.
PACS number(s): 98.80.Hw, 98.80.Bp, 04.60.Kz, 04.70.Dy
Keywords: quantum cosmology, constrained gravitational instanton, black hole creation
e-mail: wu@axp3g9.icra.it
There are three ways of forming black holes in Nature. The first way is through the gravitational collapse of a massive body in astrophysics. In this scenario, the spacetime and matter content are treated classically, and in general the effect of the cosmological background is ignored. The second way originates from quantum fluctuations of the matter content in the very early universe. Here the spacetime is again treated classically. The black hole formation is a result of the competing effects of the expansion of the universe and the gravitational attraction of the matter fluctuation. The third way is through the quantum creation of black holes in quantum cosmology, to which this paper is addressed. Here, both the spacetime and the matter content are quantized. This is the most dramatic type of black hole formation. Indeed, the black holes are essentially created from nothing at the same moment as the birth of the universe. Therefore, only black holes created this way are genuinely primordial.
It is believed that the Planckian era of the universe underwent an inflationary stage which was approximated by the de Sitter metric. In the Planckian stage, the potential of the scalar field behaves as an effective cosmological constant $`\mathrm{\Lambda }`$. On the other hand, extended theories of supergravity in which the $`O(N)`$ group is gauged have the anti-de Sitter space as their ground or most symmetric state. Therefore, it is of great interest to study quantum creations of black holes in these backgrounds.
In the No-Boundary Universe, the wave function of a closed universe is defined as a path integral over all compact 4-metrics with matter fields. The dominant contribution to the path integral is from the stationary action solution. At the $`WKB`$ level, the wave function can be approximated as $`\mathrm{\Psi }\approx e^{-I}`$, where $`I=I_r+iI_i`$ is the complex action of the solution.
The imaginary part $`I_i`$ and real part $`I_r`$ of the action represent the Lorentzian and Euclidean evolutions in real time and imaginary time, respectively. The probability of a Lorentzian orbit remains constant during its evolution. One can identify the probability, not only as the probability of the universe created, but also as the probabilities for other Lorentzian universes obtained through an analytic continuation from it .
An instanton is defined as a stationary action orbit and satisfies the Einstein equation everywhere. It was thought that, at the $`WKB`$ level, an instanton was the seed for the creation of the universe. Very recently, it was realized that this only applied to the case of creation with a stationary probability. Therefore, in order not to exclude many interesting phenomena and more realistic models from the study, one has to appeal to the concept of constrained instantons . Constrained instantons are the orbits with an action that is stationary under some restriction. The restriction can be imposed on a spacelike 3-surface of the created Lorentzian universe. The restriction is that the 3-metric and matter content are given at the 3-surface. The relative creation probability from the instanton is the exponential of the negative of the real part of the instanton action.
One can begin with a complex solution to the Einstein equation and other field equations in the complex domain of spacetime coordinates. If an instanton exists at all, then it should be a compact singularity-free section of the solution. If there are singularities in the compact section, then, in general, the action of the section is not stationary. The action may only be stationary with respect to the variations under some restrictions mentioned above. We call this section a constrained gravitational instanton. To find the constrained instanton, one has to closely investigate the singularities. The stationary action condition is crucial to the validation of the $`WKB`$ approximation, which we use to investigate the problem of quantum creation of a black hole pair.
In contrast to the case of a closed universe, a general no-boundary proposal for the quantum state of an open universe has not been presented. However, one can use analytic continuation from a complex constrained instanton to obtain the $`WKB`$ approximation to the wave function for open universes with some kind of symmetry. At this level, both the open and closed creations of universes can be dealt with in the same way. For example, the $`S^4`$ space model with $`O(5)`$ symmetry and the $`FLRW`$ space model with $`O(4)`$ symmetry have been investigated in this way.
The constrained gravitational instantons for the pair creation of black holes in the (anti-)de Sitter space background can be obtained from the complex solutions of the Kerr-Newman-(anti-)de Sitter family
$$ds^2=\rho ^2(\mathrm{\Delta }_r^{-1}dr^2+\mathrm{\Delta }_\theta ^{-1}d\theta ^2)+\rho ^{-2}\mathrm{\Xi }^{-2}\mathrm{\Delta }_\theta \mathrm{sin}^2\theta (adt-(r^2+a^2)d\varphi )^2-\rho ^{-2}\mathrm{\Xi }^{-2}\mathrm{\Delta }_r(dt-a\mathrm{sin}^2\theta d\varphi )^2,$$
(1)
where
$$\rho ^2=r^2+a^2\mathrm{cos}^2\theta ,$$
(2)
$$\mathrm{\Delta }_r=(r^2+a^2)(1-\mathrm{\Lambda }r^2/3)-2mr+Q^2,$$
(3)
$$\mathrm{\Delta }_\theta =1+\mathrm{\Lambda }a^2\mathrm{cos}^2\theta /3,$$
(4)
$$\mathrm{\Xi }=1+\mathrm{\Lambda }a^2/3$$
(5)
and $`m,ma`$ and $`Q`$ are constants, representing the mass, angular momentum, and electric or magnetic charge. We shall not consider the dyonic case in the following. We shall refer to the cases with de Sitter and anti-de Sitter backgrounds as the closed and open models, respectively.
We use $`r_0,r_1,r_2`$ and $`r_3`$ to denote the four roots of $`\mathrm{\Delta }_r`$. For the closed model with positive $`\mathrm{\Lambda }`$, we assume all roots $`r_0,r_1,r_2`$ and $`r_3`$ are real and in ascending order. These roots are the negative, inner black hole, outer black hole and cosmological horizons, respectively. For the open model with negative $`\mathrm{\Lambda }`$, at least two roots, say $`r_0,r_1`$, are complex conjugates, and we assume $`r_2`$ and $`r_3`$ are real. If this is the case, then $`r_2`$ and $`r_3`$ must be positive and can be identified as the inner and outer black hole horizons, respectively.
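The two root patterns are easy to exhibit numerically (a sketch; the parameter values are illustrative choices in units with $`G=c=1`$):

```python
# Roots of Delta_r, Eq. (3); expanded in powers of r,
# Delta_r = -(Lambda/3) r^4 + (1 - Lambda a^2/3) r^2 - 2 m r + (a^2 + Q^2).
import numpy as np

def horizons(Lam, m, a, Q):
    coeffs = [-Lam / 3, 0.0, 1 - Lam * a**2 / 3, -2 * m, a**2 + Q**2]
    return np.sort_complex(np.roots(coeffs))

# closed model: four real roots r_0 < r_1 < r_2 < r_3
print(horizons(Lam=+0.1, m=0.5, a=0.1, Q=0.2))
# open model: r_0, r_1 complex conjugates; r_2, r_3 real and positive
print(horizons(Lam=-0.1, m=0.5, a=0.1, Q=0.2))
```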
For the closed model, the constrained instanton is constructed from the metric (1) by setting $`\tau =it`$. One makes two cuts at $`\tau =\pm \mathrm{\Delta }\tau /2`$ between the two horizons $`r_1,r_2`$ and glues them. The resultant manifold may have conical singularities at the two horizons, with an $`f_1`$-fold cover around the horizon $`r_1`$ and an $`f_2`$-fold cover around the horizon $`r_2`$.
The Lorentzian metric for the created black hole pair is obtained through analytic continuation of the time coordinate from an imaginary value to a real value at the equator. The equator consists of two joint sections $`\tau =\mathrm{const}.`$ passing through these horizons. It divides the instanton into two halves. We can impose the restriction that the 3-geometry characterized by the parameters $`m,a`$ and $`Q`$ is given at the equator for the Kerr-Newman-de Sitter family. The parameter $`\mathrm{\Delta }\tau `$ is the only degree of freedom left for the pasted manifold, since the field equation holds everywhere with the possible exception of these horizons. Thus, in order to check whether we have a stationary action solution for the given horizons, one only needs to see whether the action is stationary with respect to this parameter. The equator where the quantum transition occurs has the topology $`S^2\times S^1`$.
The action due to the horizons is
$$I_{i,horizon}=-\frac{\pi (r_i^2+a^2)(1-f_i)}{\mathrm{\Xi }},\qquad (i=1,2)$$
(6)
The action due to the volume is
$$I_v=\frac{\mathrm{\Delta }\tau \mathrm{\Lambda }}{6\mathrm{\Xi }^2}(r_2^3-r_1^3+a^2(r_2-r_1))\pm \frac{\mathrm{\Delta }\tau Q^2}{2\mathrm{\Xi }^2}\left(\frac{r_1}{r_1^2+a^2}-\frac{r_2}{r_2^2+a^2}\right),$$
(7)
where $`+()`$ is for the magnetic (electric) case.
If one naively takes the exponential of the negative of half the total action, then the exponential is not identified as the wave function at the creation moment of the black hole pair. The physical reason is that what one can observe is only the angular differentiation, or the relative rotation of the two horizons. This situation is similar to the case of a Kerr black hole pair in the asymptotically flat background. There one can only measure the rotation of the black hole horizon from spatial infinity. To find the wave function for the given mass and angular momentum one has to make the Fourier transformation
$$\mathrm{\Psi }(a,h_{ij})=\frac{1}{2\pi }\int _{-\infty }^{\infty }d\delta \,e^{i\delta J\mathrm{\Xi }^2}\mathrm{\Psi }(\delta ,h_{ij}),$$
(8)
where $`\delta `$ is the relative rotation angle for the half time period $`\mathrm{\Delta }\tau /2`$, which is canonically conjugate to the angular momentum $`J=ma`$; and the factor $`\mathrm{\Xi }^2`$ is due to the time rescaling. The angle difference $`\delta `$ can be evaluated
$$\delta =\int _0^{\mathrm{\Delta }\tau /2}d\tau \,(\mathrm{\Omega }_1-\mathrm{\Omega }_2),$$
(9)
where the angular velocities at the horizons are $`\mathrm{\Omega }_i=a(r_i^2+a^2)^{-1}`$.
In the magnetic case the vector potential determines the magnetic charge, which is the integral over the $`S^2`$ factor. However, in the electric case, one can only fix the integral
$$\omega =\oint A,$$
(10)
where the integral is around the $`S^1`$ direction, and $`A`$ is the vector potential of the electric field . So, what one obtains in this way is $`\mathrm{\Psi }(\omega ,a,h_{ij})`$. However, one can get the wave function $`\mathrm{\Psi }(Q,a,h_{ij})`$ for a given electric charge through the Fourier transformation
$$\mathrm{\Psi }(Q,a,h_{ij})=\frac{1}{2\pi }\int _{-\infty }^{\infty }d\omega \,e^{i\omega Q}\mathrm{\Psi }(\omega ,a,h_{ij}).$$
(11)
The Fourier transformations (8) and (11) for the angular momentum and the electric charge are equivalent to adding extra terms into the action for the constrained instanton, and then the total action becomes
$$I=-\pi (r_1^2+a^2)\mathrm{\Xi }^{-1}-\pi (r_2^2+a^2)\mathrm{\Xi }^{-1}.$$
(12)
It is crucial to note that the action is independent of the time identification period $`\mathrm{\Delta }\tau `$ and therefore, the manifold obtained is qualified as a constrained instanton. The relative probability of the Kerr-Newman black hole pair creation from the constrained instanton is
$$P\propto \mathrm{exp}\left(\pi (r_1^2+a^2)\mathrm{\Xi }^{-1}+\pi (r_2^2+a^2)\mathrm{\Xi }^{-1}\right).$$
(13)
This is the exponential of one quarter of the sum of the outer and inner black hole horizon areas.
These two Fourier transformations are critical. Without them one cannot even obtain the constrained gravitational instanton. The inclusion of the extra term due to the Fourier transformation for the electrically charged rotating black hole pair also recovers the duality between the magnetic and electric cases .
The construction of the constrained instanton using the inner and outer black hole horizons is quite counter-intuitive. One could also consider those constructions involving other horizons as the instantons. However, the real part of the action for our choice is always greater than that of the other choices for the given configuration, and the wave function or the probability is determined by the classical orbit with the greatest real part of the action .
By the same argument, one has to use the pair of complex horizons $`r_0,r_1`$ to construct the constrained instanton for the case of open creation of black hole pair in the anti-de Sitter background. The relative probability of the Kerr-Newman black hole pair creation takes a form similar to (13) with a replacement of $`r_1,r_2`$ by $`r_0,r_1`$. One can show that the sum of the four horizon areas is $`24\pi /\mathrm{\Lambda }`$. Therefore, one can rewrite the relative probability as
$$P\propto \mathrm{exp}\left(-\pi (r_2^2+a^2)\mathrm{\Xi }^{-1}-\pi (r_3^2+a^2)\mathrm{\Xi }^{-1}\right).$$
(14)
This is the exponential of the negative of one quarter of the sum of the outer and inner black hole horizon areas.
It is interesting to note that the difference between the relative probabilities in the closed and open creations of black hole pairs is the negative sign in the exponent. This is very reasonable on physical grounds, since in both cases the probability is then a decreasing function of the mass parameter. This conclusion should be welcomed by quantum cosmologists.
The case of the Kerr-Newman black hole family with spatially asymptotically flat infinity can be thought of as the limit of our case as we let $`\mathrm{\Lambda }`$ approach $`0`$ from below .
If one lets the angular momentum be zero, then the solution reduces to the Reissner-Nordström-(anti-)de Sitter black hole case. If one further lets the charge be zero, then it reduces to the Schwarzschild-(anti-)de Sitter black hole case. There are only three horizons for the chargeless and nonrotating case.
For the Schwarzschild-de Sitter black hole case, one has to use the black hole and cosmological horizons to construct the instanton; the creation probability is the exponential of the entropy of the universe, or the exponential of one quarter of the sum of the black hole and cosmological horizon areas. For the Schwarzschild-anti-de Sitter black hole case, one uses the pair of complex horizons to construct the instanton, and the creation probability is the exponential of the negative of the entropy. It is known that the entropy of the Schwarzschild-anti-de Sitter universe is one quarter of the black hole horizon area. It is noted that the entropy is a decreasing (increasing) function of the mass parameter for the closed (open) model.
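The closed-model statement is simple to check numerically for the Schwarzschild-de Sitter case (a sketch; $`\mathrm{\Lambda }=3`$, which sets the pure de Sitter radius to one, and the sample masses below the Nariai mass are illustrative choices):

```python
# Total entropy S = (A_bh + A_cosm)/4 for Schwarzschild-de Sitter.
# With a = Q = 0, Delta_r = r [ -(Lambda/3) r^3 + r - 2m ]; the bracket's
# two positive roots are the black hole and cosmological horizons.
import numpy as np

def entropy(m, Lam=3.0):
    roots = np.roots([-Lam / 3, 0.0, 1.0, -2 * m])
    r_bh, r_c = sorted(r.real for r in roots if r.real > 0)
    return np.pi * (r_bh**2 + r_c**2)   # one quarter of 4*pi*r^2 per horizon

for m in (0.05, 0.10, 0.15):
    print(f"m = {m:.2f}:  S = {entropy(m):.4f}")   # decreases with m
```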
From the no-hair theorem, all stationary black holes in the de Sitter, anti-de Sitter and Minkowski spacetime backgrounds are described by these Kerr-Newman families, so the problem of black hole creation in these backgrounds is completely resolved. All known cases in the closed model involve regular instantons, and they can be considered as special cases of our study. The well-known $`S^4`$ de Sitter model without a black hole and the $`S^2\times S^2`$ Nariai model with a pair of maximal black holes have the maximal and minimal creation probabilities, respectively.
Our treatment of quantum creation of the Kerr-Newman-anti-de Sitter space family using the constrained instanton can be thought of as a prototype of quantum gravity for an open system, without appealing to the background subtraction approach . The beautiful aspect of our approach is that even in the absence of a general no-boundary proposal for open universes, we treat the creations of the closed and the open universes in the same way.
It can be shown that the probability of the universe creation without a black hole is greater than that with a pair of black holes in all these backgrounds.
References:
1. J.B. Hartle and S.W. Hawking, Phys. Rev. D28, 2960 (1983).
2. S.W. Hawking and N. Turok, Phys. Lett. B425, 25 (1998), hep-th/9802030.
3. Z.C. Wu, Int. J. Mod. Phys. D6, 199 (1997), gr-qc/9801020.
4. Z.C. Wu, Gene. Relativ. Grav. 30, 1639 (1998), hep-th/9803121.
5. N. Turok and S.W. Hawking, Phys. Lett. B432, 271 (1998), hep-th/9803156.
6. Z.C. Wu, Phys. Rev. D31, 3079 (1985).
7. G.W. Gibbons and S.W. Hawking, Phys. Rev. D15, 2738 (1977).
8. Z.C. Wu, Phys. Lett. B445, 274 (1999); gr-qc/9810077.
9. S.W. Hawking and S.F. Ross, Phys. Rev. D52, 5865 (1995), hep-th/9504019.
10. R.B. Mann and S.F. Ross, Phys. Rev. D52, 2254 (1995), gr-qc/9504015.
11. R. Bousso and S.W. Hawking, hep-th/9807148.
12. S.W. Hawking and D.N. Page, Commun. Math. Phys. 87, 577 (1983).
13. R. Bousso and S.W. Hawking, Phys. Rev. D52, 5659 (1995), gr-qc/9506047.
14. F. Mellor and I. Moss, Phys. Lett. B222, 361 (1989).
15. I.J. Romans, Nucl. Phys. B383, 395 (1992).
16. S.W. Hawking, in General Relativity: An Einstein Centenary Survey, eds. S.W. Hawking and W. Israel, (Cambridge University Press, 1979).
# Anomalous scaling in a shell model of helical turbulence
## 1 Introduction
In a helical flow both energy and helicity are inviscid invariants which are cascaded from the integral scale to the dissipation scale. If these scales for the helicity are separated, there will be an inertial range in which an equivalent of the four-fifth law for helicity transfer holds. This is a scaling relation for a third order structure function with a different tensorial structure from the structure function associated with the flux of energy. For the helicity flux this is $`\delta \mathbf{v}_{\parallel }(l)\cdot [\mathbf{v}(r)\times \mathbf{v}(r+l)]=(2/15)\overline{\delta }l^2`$, where $`\overline{\delta }`$ is the mean dissipation of helicity. This relation is called the "two-fifteenth law" due to the numerical prefactor. The inertial ranges for helicity cascade and for energy cascade are different because the dissipation of helicity scales as $`D_H(k)\sim kD_E(k)`$, so the helicity will be dissipated within the inertial range for the energy cascade. By balancing the helicity flux and the helicity dissipation, a Kolmogorov scale $`\xi =K_H^{-1}`$ for helicity dissipation can be defined,
$$\xi \sim (\nu ^3\overline{\epsilon }^2/\overline{\delta }^3)^{1/7},$$
(1)
where $`\nu `$ is the kinematic viscosity, $`\overline{\epsilon }`$ is the mean energy dissipation per unit mass and $`\overline{\delta }`$ is the mean helicity dissipation per unit mass. This scale is larger than the usual Kolmogorov scale $`\eta =K_E^{-1}\sim (\nu ^3/\overline{\epsilon })^{1/4}`$.
The physical picture for fully developed helical turbulence is shown schematically in figure 1. The mean dissipations $`\overline{\delta }`$ and $`\overline{\epsilon }`$ are solely determined by the forcing at the integral scale. There will then be an inertial range with coexisting cascades of energy and helicity, with third order structure functions determined by the four-fifth and the two-fifteenth laws. This is followed by an inertial range between $`K_H`$ and $`K_E`$ corresponding to non-helical turbulence, where the dissipation of positive and negative helicity vortices balance and the two-fifteenth law is not applicable.
## 2 The anomalous scaling exponents
There is now experimental evidence that the K41 scaling relations are not exact. There are corrections for moments different from 3, expressed through anomalous scaling exponents, $`\delta v(l)^p\sim l^{\zeta (p)}`$ with $`\zeta (p)\ne p/3`$. Understanding and quantitatively determining the anomalous scaling exponents is one of the most intriguing and unsolved problems in turbulence. The intermittency corrections to the K41 scaling could depend on the transfer of helicity, maybe similar to the way the different sectors in anisotropic turbulence might give rise to sub-leading corrections to scaling exponents. Furthermore, the helicity cascade itself leads to a set of anomalous scaling exponents related to moments of the third order correlator of the two-fifteenth law. There are at present no experimental measurements from helical turbulence of the scaling exponents associated with the two-fifteenth law.
It was recently shown numerically by Biferale et al. that, in the case of a shell model, the anomalous scaling exponents for the helicity transfer show a strong difference between odd and even powers, such that the scaling exponent $`\zeta ^H(p)`$ is not a convex function.
Biferale et al. used a shell model consisting of two coupled GOY shell models. We will show here that the results obtained hold for the standard GOY model as well. Shell models are toy models of turbulence which by construction have second order inviscid invariants similar to the energy and helicity of 3D turbulence. Shell models can be investigated numerically at high Reynolds numbers, in contrast to the Navier-Stokes equation, so that high order statistics and anomalous scaling exponents are easily accessible. Shell models lack any spatial structure, so we stress that only certain aspects of the turbulent cascades have meaningful analogies in the shell models. This should especially be kept in mind when studying helicity, which is intimately linked to spatial structures, and the dissipation of helicity to the reconnection of vortex tubes. The following thus only concerns the spectral aspects of the helicity and energy cascades.
The GOY model is the most well studied shell model. It is defined from the governing equation,
$$\dot{u}_n=ik_n\left(u_{n+2}u_{n+1}-\frac{\epsilon }{\lambda }u_{n+1}u_{n-1}+\frac{\epsilon -1}{\lambda ^2}u_{n-1}u_{n-2}\right)^{*}-\nu k_n^2u_n+f_n$$
(2)
with $`n=1,\dots ,N`$, where the $`u_n`$'s are the complex shell velocities. The wave numbers are defined as $`k_n=\lambda ^n`$, where $`\lambda `$ is the shell spacing. The second and third terms are dissipation and forcing. The model has two inviscid invariants: the energy, $`E=\sum _nE_n=\sum _n|u_n|^2`$, and the "helicity", $`H=\sum _nH_n=\sum _n(\epsilon -1)^{-n}|u_n|^2`$. The model has two free parameters, $`\lambda `$ and $`\epsilon `$. The "helicity" only has the correct dimension of helicity if $`|\epsilon -1|^{-n}=k_n`$, i.e. $`1/(1-\epsilon )=\lambda `$. In this work we use the standard parameters $`(\epsilon ,\lambda )=(1/2,2)`$ for the GOY model.
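For reference, a minimal integration sketch of Eq. (2) follows (the forcing amplitude and the use of a fixed force are illustrative simplifications, not the production choices quoted below):

```python
# GOY shell model with standard parameters (eps, lambda) = (1/2, 2).
import numpy as np

N, lam, eps, nu = 26, 2.0, 0.5, 1e-9
k = lam ** np.arange(1, N + 1)
f = np.zeros(N, dtype=complex)
f[1] = 1e-3 * (1 + 1j)                 # illustrative forcing on shell n = 2

def rhs(u):
    v = np.zeros(N + 4, dtype=complex) # pad u_{-1} = u_0 = u_{N+1} = u_{N+2} = 0
    v[2:-2] = u
    nl = (v[4:] * v[3:-1]              # u_{n+2} u_{n+1}
          - (eps / lam) * v[3:-1] * v[1:-3]
          + (eps - 1) / lam**2 * v[1:-3] * v[:-4])
    return 1j * k * np.conj(nl) - nu * k**2 * u + f

def rk4_step(u, dt):
    k1 = rhs(u); k2 = rhs(u + dt / 2 * k1)
    k3 = rhs(u + dt / 2 * k2); k4 = rhs(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```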
A natural way to define the structure functions of moment $`p`$ is through the transfer rates of the inviscid invariants,
$`S_p^E(k_n)=\langle (\mathrm{\Pi }_n^E)^{p/3}\rangle k_n^{-p/3}\sim k_n^{-\zeta ^E(p)}`$ (3)
$`S_p^H(k_n)=\langle (\mathrm{\Pi }_n^H)^{p/3}\rangle k_n^{-2p/3}\sim k_n^{-\zeta ^H(p)}`$ (4)
The energy flux is defined in the usual way as $`\mathrm{\Pi }_n^E=-d/dt|_{n.l.}(\sum _{m=1}^nE_m)`$, where $`d/dt|_{n.l.}`$ is the time rate of change due to the non-linear term in (2). The helicity flux $`\mathrm{\Pi }_n^H`$ is defined similarly. By simple algebra we have the following expressions for the fluxes,
$`\mathrm{\Pi }_n^E=(1-\epsilon )\mathrm{\Delta }_n+\mathrm{\Delta }_{n+1},\qquad \langle \mathrm{\Pi }_n^E\rangle =\overline{\epsilon }`$ (5)
$`\mathrm{\Pi }_n^H=(-1)^nk_n(\mathrm{\Delta }_{n+1}-\mathrm{\Delta }_n),\qquad \langle \mathrm{\Pi }_n^H\rangle =\overline{\delta }`$ (6)
where $`\mathrm{\Delta }_n=k_{n-1}\mathrm{Im}(u_{n-1}u_nu_{n+1})`$, and $`\overline{\epsilon }`$ and $`\overline{\delta }`$ are the mean dissipations of energy and helicity, respectively. The first equalities hold without averaging as well. These equations are the shell model equivalents of the four-fifth and the two-fifteenth laws.
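Given a snapshot of shell velocities, Eqs. (5) and (6) translate directly into code (a sketch using the same array conventions as the integration above):

```python
# Instantaneous fluxes; 1-based shell n corresponds to u[n - 1].
def fluxes(u):
    Delta = np.zeros(N + 1)            # Delta_n = k_{n-1} Im(u_{n-1} u_n u_{n+1})
    for n in range(2, N):
        Delta[n] = k[n - 2] * np.imag(u[n - 2] * u[n - 1] * u[n])
    Pi_E = [(1 - eps) * Delta[n] + Delta[n + 1] for n in range(1, N - 1)]
    Pi_H = [(-1)**n * k[n - 1] * (Delta[n + 1] - Delta[n]) for n in range(1, N - 1)]
    return np.array(Pi_E), np.array(Pi_H)
```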
In the definitions (3), (4) of the structure functions there is a slight ambiguity in the definition of $`x^{p/3}`$ for negative $`x`$ and $`p`$ not a multiple of 3. The complex roots of $`(-1)^{1/3}`$ are $`(-1,1/2\pm i\sqrt{3}/2)`$ and of $`(-1)^{2/3}`$ they are $`(1,-1/2\pm i\sqrt{3}/2)`$. The common way of circumventing the ambiguity is by defining $`x^{p/3}=sgn(x)|x|^{p/3}`$, which neglects the imaginary roots<sup>1</sup><sup>1</sup>1This cannot always be done. Had we defined the structure functions from some, say, sixth order correlator, we would be in trouble since $`z=(-1)^{1/6}`$ has no real roots.. With this definition we have,
$`S_p(k_n)={\displaystyle \int _0^{\infty }}[\psi _n(x)+\psi _n(-x)]x^{p/3}dx\equiv {\displaystyle \int _0^{\infty }}\psi _n^+(x)x^{p/3}dx`$ $`(p\text{ even})`$ (7)
$`S_p(k_n)={\displaystyle \int _0^{\infty }}[\psi _n(x)-\psi _n(-x)]x^{p/3}dx\equiv {\displaystyle \int _0^{\infty }}\psi _n^{-}(x)x^{p/3}dx`$ $`(p\text{ odd})`$ (8)
where $`\psi _n(x)`$ is the probability density function (pdf) of $`\mathrm{\Pi }_n`$. $`\psi _n^+(x)`$ is (twice) the symmetric part of the pdf and $`\psi _n^{-}(x)`$ is (twice) the anti-symmetric part. Note that $`\psi _n^+(x)`$ is itself a pdf while $`\psi _n^{-}(x)`$ is not. $`\psi _n^{-}(x)`$ is, except for a normalization, a pdf only if $`\psi _n(x)>\psi _n(-x)`$ for all positive $`x`$.
The scaling exponents are determined from the scaling of the pdfโs through,
$$\int _{-\infty }^{\infty }x^{p/3}\psi _{\lambda k}(x)dx=\lambda ^{-\zeta (p)}\int _{-\infty }^{\infty }x^{p/3}\psi _k(x)dx,$$
(9)
so the scaling exponents for $`p`$ even are related to the scaling of $`\psi _n^+`$ while for $`p`$ odd they are related to the scaling of $`\psi _n^{-}`$. We have performed a simulation of the standard GOY model with $`(\epsilon ,\lambda ,\nu ,N)=(1/2,2,10^{-9},26)`$ and a forcing of the form $`f_n=0.01\,\delta _{n,2}/u_2^{*}`$, corresponding to a constant energy input. The simulation ran for about $`5000`$ large eddy turnover times.
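In practice the moments are accumulated from time samples of the fluxes with the sign convention above (a sketch; the shells used for the log-log fit are an assumed inertial range):

```python
# pi_samples: array of shape (n_times, n_shells) holding flux values Pi_n(t).
import numpy as np

def S_p(pi_samples, p):
    x = pi_samples
    return np.mean(np.sign(x) * np.abs(x)**(p / 3.0), axis=0)

def zeta(pi_samples, p, kn, shells=slice(4, 14), q=1/3):
    # q = 1/3 applies the k_n^{-p/3} factor of Eq. (3); use q = 2/3 for Eq. (4)
    sp_ = S_p(pi_samples, p) * kn**(-q * p)
    slope = np.polyfit(np.log(kn[shells]), np.log(np.abs(sp_[shells])), 1)[0]
    return -slope
```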
Figure 2 shows the anomalous scaling exponents, $`\zeta ^E(p)`$ and $`\zeta ^H(p)`$, for the energy and the helicity, calculated according to (7) and (8). Using (7), the scaling exponent $`\zeta ^H(p)`$ can be defined for any real positive $`p`$, which by the Hölder inequality gives a convex curve. Similarly, using (8) and assuming $`\psi _n^{-}(x)`$ to be a positive function, we can define a continuous curve $`\stackrel{~}{\zeta }^H(p)`$ which, again by the Hölder inequality, is convex. The scaling exponent $`\zeta ^H(p)`$ defined for integer $`p`$ jumps between the two curves shown in figure 2. The scaling exponents differ from the ones found by Biferale et al. for the two-component GOY model. We find that $`\zeta ^H(2p)`$ is slightly larger than $`\zeta ^E(2p)`$. The scaling regime in which $`\zeta ^H(p)`$ is calculated is $`K_\text{I}<k<K_H`$, while for the energy it is $`K_\text{I}<k<K_E`$. The negative part of the probability density is negligible in the case of energy transfer, $`\psi _n(x)\approx \psi _n^+(x)\approx \psi _n^{-}(x)`$ for $`x>0`$, but for helicity transfer the negative tail is large, which gives the strong even-odd oscillations between the two curves. Note that $`\zeta ^E(3)=1`$ and $`\zeta ^H(3)=2`$ are just the four-fifth and the two-fifteenth laws.
Figure 3 shows the probability distribution function (PDF) for the helicity flux, defined by $`\mathrm{\Psi }(x)=\int _{-\infty }^x\psi (y)dy`$, for shell numbers $`n=3`$ and $`n=6`$, both in the inertial range for helicity flux. The negative tail is plotted as $`1-\mathrm{\Psi }(-x)`$, which for a symmetric pdf gives two overlapping curves. We can similarly define the PDFs $`\mathrm{\Psi }_n^\pm (x)\equiv \int _0^x\psi _n^\pm (y)dy`$. Simple algebra gives $`\mathrm{\Psi }_n^+(x)=\mathrm{\Psi }_n(x)+(1-\mathrm{\Psi }_n(-x))-1`$ and $`\mathrm{\Psi }_n^{-}(x)=\mathrm{\Psi }_n(x)-(1-\mathrm{\Psi }_n(-x))+(1-2\mathrm{\Psi }_n(0))`$. So we see that the scaling of $`\mathrm{\Psi }_n^+(x)`$ is related to the scaling of the mean of the two curves in the right panels of figure 3, while the scaling of $`\mathrm{\Psi }_n^{-}(x)`$ is related to the gap between the two curves.
## 3 Conclusion
Coexisting cascades of energy and helicity are possible in the GOY shell model. The scaling of the odd order moments of the helicity transfer depends on the scaling of the anti-symmetric part of the probability density function for the helicity flux. This defines a convex anomalous scaling curve through the point $`\zeta ^H(3)=2`$, which is the two-fifteenth law. The even order moments of the helicity flux have anomalous scaling exponents close to the ones found for the energy flux. In the simulation a scale break at $`K_H`$ is not observed. This implies that the anomalous scaling exponents for the energy flux are not influenced by the cascade of helicity.
# New Solutions of the T-Matrix Theory of the Attractive Hubbard Model
## Abstract
We present a novel solution method with which we obtain the spectral functions of the attractive Hubbard model in two dimensions in a non self-consistent formulation of the T-matrix approximation. We use a partial fraction decomposition of the self energy from which we obtain the single-particle spectral functions to a remarkably high accuracy. Here we find the poles and residues of the self energy to a relative accuracy of $`10^{-80}`$ for an $`8\times 8`$ lattice, and plot the resulting density of states. Our results show pseudogap physics as $`T_c`$ is approached from above; these analytical results are in agreement with recent Quantum Monte Carlo data.
One can motivate a study of the attractive Hubbard model (AHM) by suggesting that it might provide a rudimentary description of the correlated electrons in the high-$`T_c`$ materials. One feature that theorists are keen to see reproduced in any model is the appearance of a normal-state pseudogap above $`T_c`$ that evolves smoothly into the energy gap of the superconducting state. For example, QMC studies have produced evidence of pseudogap physics in the two-dimensional AHM.
If the T-matrix approximation applied to the AHM can be shown to suppress the density of states (DOS) at the Fermi level for temperatures above $`T_c`$, this would lend credence to the notion that the pseudogap is the result of strong pairing fluctuations. In this brief report, we demonstrate that one can calculate the DOS to an arbitrarily high accuracy for the non self-consistent (NSC) version of the T-matrix approximation, and that one in fact obtains results very similar to the QMC data. We stress that this calculational method should be applicable to any non self-consistent microscopic theory involving meromorphic functions.
To begin, we must calculate the fully interacting DOS within the NSC T-matrix approximation. Such a calculation is made possible by the fact that we have access to the exact solution for the NSC pair propagator, viz. $`G_+^{\mathrm{pp}}(\vec{Q},i\nu _n)=\chi (\vec{Q},i\nu _n)/(1-|U|\chi (\vec{Q},i\nu _n))`$, where the analytic continuation of $`\chi (\vec{Q},i\nu _n)`$ to the complex plane is given by
$$\overline{\chi }(\vec{Q},z)=\frac{1}{N}\sum _{\vec{k}}\frac{f[\xi _{\vec{k}}]+f[\xi _{\vec{Q}-\vec{k}}]-1}{z-\xi _{\vec{k}}-\xi _{\vec{Q}-\vec{k}}}.$$
(1)
The latter function is meromorphic with a finite number of simple poles on the real axis (as usual, $`f`$ is the Fermi-Dirac distribution and $`\xi `$ is the non-interacting single-particle energy relative to the chemical potential). Likewise, the analytic continuation of the pair propagator,
$$\overline{G}_+^{\mathrm{pp}}(\vec{Q},z)=\frac{\frac{1}{N}\sum _{\vec{k}}\frac{f[\xi _{\vec{k}}]+f[\xi _{\vec{Q}-\vec{k}}]-1}{z-\xi _{\vec{k}}-\xi _{\vec{Q}-\vec{k}}}}{1-\frac{|U|}{N}\sum _{\vec{k}}\frac{f[\xi _{\vec{k}}]+f[\xi _{\vec{Q}-\vec{k}}]-1}{z-\xi _{\vec{k}}-\xi _{\vec{Q}-\vec{k}}}},$$
(2)
can be shown to be a meromorphic function with a finite number of simple poles. This demonstrates that the pair propagator can be written as a rational polynomial with real coefficients, a consequence of which is that there are only two possibilities for the placement of its singularities: they lie either on the real axis (corresponding to the normal state) or in conjugate pairs equally spaced above and below the real axis (corresponding to the unstable state below $`T_c`$). Moreover, since the degree of the polynomial in the denominator is larger by one than that of the polynomial in the numerator, the high-frequency asymptotic behaviour of the pair propagator is governed by $`\overline{G}_+^{\mathrm{pp}}(\vec{Q},z)\sim 1/z`$ as $`|z|\to \infty `$.
Thus, in the normal state, the pair propagator admits a partial fraction decomposition into a series of simple poles. We write this as
$`\overline{G}_+^{\mathrm{pp}}(\vec{Q},z)=\frac{\mathrm{sgn}(E_{\vec{Q}}^{(1)})R_{\vec{Q}}^{(1)}}{z-E_{\vec{Q}}^{(1)}}+\frac{\mathrm{sgn}(E_{\vec{Q}}^{(2)})R_{\vec{Q}}^{(2)}}{z-E_{\vec{Q}}^{(2)}}+\frac{\mathrm{sgn}(E_{\vec{Q}}^{(3)})R_{\vec{Q}}^{(3)}}{z-E_{\vec{Q}}^{(3)}}+\dots `$ (3)
where the energies $`E_{\vec{Q}}^{(l)}`$ are real and the residues $`R_{\vec{Q}}^{(l)}>0`$ are strictly positive. \[Below we denote the number of such poles for each $`\vec{Q}`$ component by $`s_{\vec{Q}}`$.\] Quite simply, such a decomposition can be carried out numerically in MapleV to arbitrary accuracy (without an associated increase in computing time), and in what follows we used Digits:=80 to obtain a relative accuracy of $`10^{-80}`$.
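The same decomposition can be reproduced outside MapleV; the sketch below uses mpmath at 80 digits (the lattice size, interaction, temperature, chemical potential and root guess are small illustrative choices, not the production $`8\times 8`$ setup):

```python
# High-precision pole/residue extraction for G_pp = chi/(1 - |U| chi).
from itertools import product
from mpmath import mp, mpf, cos, exp, pi, diff, findroot

mp.dps = 80                                  # ~10^{-80} relative accuracy
L, U, t, mu, T = 4, mpf(4), mpf(1), mpf(-1), mpf('0.5')

ks = [(2 * pi * i / L, 2 * pi * j / L) for i, j in product(range(L), repeat=2)]
xi = lambda kx, ky: -2 * t * (cos(kx) + cos(ky)) - mu
fd = lambda e: 1 / (exp(e / T) + 1)          # Fermi function

def chi(z, Q=(mpf(0), mpf(0))):
    s = mpf(0)
    for kx, ky in ks:
        e1, e2 = xi(kx, ky), xi(Q[0] - kx, Q[1] - ky)
        s += (fd(e1) + fd(e2) - 1) / (z - e1 - e2)
    return s / len(ks)

# a pole of G_pp is a simple zero of 1 - U*chi; the starting guess must sit
# between two adjacent poles of chi (here near the bottom of the pair band)
E = findroot(lambda z: 1 - U * chi(z), mpf('-5.7'))
R = chi(E) / (-U * diff(chi, E))             # residue = chi(E) / d/dz[1 - U chi]
print(E); print(R)
```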
We now show how this result can be exploited to give a practical, numerical method for calculating the self-energy. The first step is to put the partial fraction form of the pair propagator, Eq. (3), into the self energy using the identity $`G_+^{\mathrm{pp}}(\vec{Q},i\nu _n)=\overline{G}_+^{\mathrm{pp}}(\vec{Q},z=i\nu _n)`$ to recover the Matsubara frequency components. Then, we complete the various Matsubara frequency sums, and the final result is
$`\overline{\mathrm{\Sigma }}(\vec{k},z)=\frac{U^2}{N}\sum _{\vec{Q}}\sum _{l=1}^{s_{\vec{Q}}}\frac{\mathrm{sgn}(E_{\vec{Q}}^{(l)})R_{\vec{Q}}^{(l)}\left(f[\xi _{\vec{Q}-\vec{k}}]+b[E_{\vec{Q}}^{(l)}]\right)}{z+\xi _{\vec{Q}-\vec{k}}-E_{\vec{Q}}^{(l)}}`$ (4)
where $`b`$ is the Bose distribution function. Once the self-energy is calculated numerically by this procedure, the Green's function follows from Dyson's equation, and the single-particle spectral function follows in the usual way. The DOS can then be constructed according to
$$\mathcal{N}(\omega )=-\frac{1}{\pi }\frac{2}{N}\sum _{\vec{k}}\mathrm{Im}\,\overline{G}(\vec{k},\omega +i\eta ).$$
(5)
This yields a DOS that is a series of $`\delta `$-function peaks; when plotting our results we incorporate a small artificial broadening to smooth the curves.
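Schematically, the last two steps look as follows (a sketch; the per-$`\vec{k}`$ pole/residue lists for $`\overline{\mathrm{\Sigma }}`$ are assumed to come from the decomposition above, and $`\eta `$ is the artificial broadening just mentioned):

```python
# DOS of Eq. (5) assembled from a pole expansion of the self energy.
import numpy as np

def dos(omega, xi_k, sigma_poles, eta=0.05):
    # xi_k: band energies; sigma_poles: per-k list of (pole, weight) pairs
    total = np.zeros_like(omega)
    z = omega + 1j * eta
    for xi, poles in zip(xi_k, sigma_poles):
        sigma = sum(w / (z - p) for p, w in poles)
        G = 1.0 / (z - xi - sigma)           # Dyson's equation
        total += -G.imag / np.pi             # spectral function A(k, omega)
    return 2.0 * total / len(xi_k)           # spin factor 2 and 1/N of Eq. (5)
```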
Above we show two figures for the resulting DOS for an $`8\times 8`$ two-dimensional square lattice, with $`|U|`$ equal to half the band width. The chemical potential is varied to fix the density at that of a 1/4-filled system as the temperature is lowered. The pseudogap is clearly visible, whereas at $`T/t=1`$ (not shown) the spectrum shows no anomaly at the Fermi level. Note that in this system (fixed density) $`T_c=0`$.
This work was supported by the NSERC of Canada.
# Classifying subcategories of modules
## Introduction
A basic problem in mathematics is to classify all objects one is studying up to isomorphism. A lesson this author learned from stable homotopy theory \[HS\] is that while this is almost always impossible, it is sometimes possible, and very useful, to classify collections of objects, or certain full subcategories of the category one is working in. In particular, if $`R`$ is a commutative ring, the thick subcategories of small objects in the derived category $`๐(R)`$ have been classified. Recall that a thick subcategory is a triangulated subcategory closed under summands. Thick subcategories correspond to unions of subsets of $`\mathrm{Spec}R`$ of the form $`V(๐)`$, where $`๐`$ is a finitely generated ideal of $`R`$. In particular, when $`R`$ is Noetherian, they correspond to arbitrary unions of closed sets of $`\mathrm{Spec}R`$. This line of research was initiated by Hopkins \[Hop87\], where he wrote a down a false proof of this classification. Neeman \[Nee92\] later corrected Hopkinsโ proof in the Noetherian case, and Thomason \[Tho\] generalized the result to arbitrary commutative rings (and, in fact, to quasi-compact, quasi-separated schemes).
The author has long thought that the analogous classification in the ostensibly simpler category of $`R`$-modules is the classification of torsion theories when $`R`$ is a Noetherian and commutative ring \[Ste75, Section VI.6\]. After all, these too correspond to arbitrary unions of closed sets in $`\mathrm{Spec}R`$. However, we show in this paper that the analog of a thick subcategory in $`๐(R)`$ is not a torsion theory of $`R`$-modules, but just an Abelian subcategory of $`R`$-modules closed under extensions. We call this a wide subcategory. We use the classification of thick subcategories mentioned above to give a classification of wide subcategories of finitely presented modules over a large class of commutative coherent rings. To be precise, our classification works for quotients of regular commutative coherent rings by finitely generated ideals. Recall that a coherent ring is regular if every finitely generated ideal has finite projective dimension. Thus our classification includes, for example, the polynomial ring on countably many variables over a principal ideal domain, and all finitely generated algebras over a field. A corollary of our result is that if $`R`$ is Noetherian as well, then every wide subcategory of finitely presented $`R`$-modules is in fact the collection of all finitely presented modules in a torsion theory. It is interesting that we have no direct proof of this fact, but must resort to the rather difficult classification of thick subcategories in the derived category.
One can also attempt to classify thick subcategories closed under arbitrary coproducts, or arbitrary products. These are called localizing and colocalizing subcategories, respectively. For $`๐(R)`$ when $`R`$ is a Noetherian commutative ring, they were classified by Neeman \[Nee92\], and correspond to arbitrary subsets of $`\mathrm{Spec}R`$. We give an analogous classification of wide subcategories closed under arbitrary coproducts in the Noetherian case. Once again, the proof of this relies on comparison with the derived category.
## 1. Wide subcategories
Suppose $`R`$ is a ring. In this section, we define wide subcategories of $`R`$-modules and construct an adjunction to thick subcategories of $`๐(R)`$. The natural domain of this adjunction is actually $`_{\text{wide}}(R)`$, the lattice of wide subcategories of $`๐_0`$, the wide subcategory generated by $`R`$. When $`R`$ is coherent, we identify $`๐_0`$ with the finitely presented modules, but we do not know what it is in general.
We recall that a thick subcategory of a triangulated category like $`๐(R)`$ is a full triangulated subcategory closed under retracts (summands). This means, in particular, that if we have an exact triangle $`X\stackrel{}{}Y\stackrel{}{}Z\stackrel{}{}\mathrm{\Sigma }X`$ and two out of three of $`X`$, $`Y`$, and $`Z`$ are in the thick subcategory, so is the third.
The analogous definition for subcategories of an Abelian category is the following.
###### Definition 1.1.
A full subcategory $`\mathcal{C}`$ of $`R\text{-mod}`$, or any Abelian category, is called *wide* if it is Abelian and closed under extensions.
When we say a full subcategory $`\mathcal{C}`$ is Abelian, we mean that if $`f:M\to N`$ is a map of $`\mathcal{C}`$, then the kernel and cokernel of $`f`$ are in $`\mathcal{C}`$. Thus a wide subcategory $`\mathcal{C}`$ need not be closed under arbitrary subobjects or quotient objects. However, $`\mathcal{C}`$ is automatically closed under summands. Indeed, if $`M\cong N\oplus P`$ and $`M\in \mathcal{C}`$, then $`N`$ is the kernel of the self-map of $`M`$ that takes $`(n,p)`$ to $`(0,p)`$. Thus $`N\in \mathcal{C}`$. In particular, $`\mathcal{C}`$ is replete, in the sense that anything isomorphic to something in $`\mathcal{C}`$ is itself in $`\mathcal{C}`$.
Torsion theories and Serre classes are closely related to wide subcategories. Recall that a Serre class is just a wide subcategory closed under arbitrary subobjects, and hence arbitrary quotient objects. Similarly, a (hereditary) torsion theory is a Serre class closed under arbitrary direct sums. In particular, the empty subcategory, the $`0`$ subcategory, and the entire category of $`R`$-modules are torsion theories, and so wide subcategories. The category of all $`R`$-modules of cardinality at most $`\kappa `$ for some infinite cardinal $`\kappa `$ is a Serre class (but not a torsion theory), and hence a wide subcategory. The category of finite-dimensional rational vector spaces, as a subcategory of the category of abelian groups, is an example of a wide subcategory that is not a Serre class. The thick subcategories studied in \[HP99b, HP99a\] are, on the other hand, more general than wide subcategories.
Note that the collection of all wide subcategories of $`R\text{-mod}`$ forms a complete lattice (though it is a proper class rather than a set). Indeed, the join of a collection of wide subcategories is the wide subcategory generated by them all, and the meet of a collection of wide subcategories is their intersection.
The following proposition shows that wide subcategories are the analogue of thick subcategories.
###### Proposition 1.2.
Suppose $`\mathcal{C}`$ is a wide subcategory of $`R\text{-mod}`$. Define $`f(\mathcal{C})`$ to be the collection of all small objects $`X\in \mathbf{D}(R)`$ such that $`H_nX\in \mathcal{C}`$ for all $`n`$. Then $`f(\mathcal{C})`$ is a thick subcategory.
Note that the collection of all thick subcategories is also a complete lattice, and the map $`f`$ is clearly order-preserving.
###### Proof.
Since wide subcategories are closed under summands, $`f(\mathcal{C})`$ is closed under retracts. It is clear that $`X\in f(\mathcal{C})`$ if and only if $`\Sigma X\in f(\mathcal{C})`$. It remains to show that, if we have an exact triangle $`X\to Y\to Z\to \Sigma X`$ and $`X,Z\in f(\mathcal{C})`$, then $`Y\in f(\mathcal{C})`$. We have a short exact sequence
$$0\to A\to H_nY\to B\to 0$$
where $`A`$ is the cokernel of the map $`H_{n+1}Z\to H_nX`$, and $`B`$ is the kernel of the map $`H_nZ\to H_{n-1}X`$. Hence $`A`$ and $`B`$ are in $`\mathcal{C}`$, and so $`H_nY\in \mathcal{C}`$ as well. Thus $`Y\in f(\mathcal{C})`$. ∎
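For reference, the short exact sequence in this proof arises by breaking the long exact homology sequence of the triangle into short exact pieces:

$$\cdots \to H_{n+1}Z\to H_nX\to H_nY\to H_nZ\to H_{n-1}X\to \cdots .$$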
Note that Proposition 1.2 remains true if $`R\text{-mod}`$ is replaced by any Abelian category, or indeed, if $`\mathbf{D}(R)`$ is replaced by a stable homotopy category \[HPS97\] and $`R`$ is replaced by the homotopy of the sphere in that category.
Proposition 1.2 implies that the homology of a small object in $`\mathbf{D}(R)`$ must lie in the wide subcategory generated by $`R`$.
###### Corollary 1.3.
Let $`\mathcal{C}_0`$ be the wide subcategory generated by $`R`$. If $`X`$ is a small object of $`\mathbf{D}(R)`$, then $`H_nX\in \mathcal{C}_0`$ for all $`n`$ and $`H_nX=0`$ for all but finitely many $`n`$.
###### Proof.
By hypothesis, the complex $`S^0`$ consisting of $`R`$ concentrated in degree $`0`$ is in $`f(\mathcal{C}_0)`$. Therefore the thick subcategory $`\mathcal{D}`$ generated by $`S^0`$ is contained in $`f(\mathcal{C}_0)`$. But $`\mathcal{D}`$ is precisely the small objects in $`\mathbf{D}(R)`$. (This is proved in \[HPS97, Corollary 2.3.12\] for commutative $`R`$, but the proof does not require commutativity.) Hence $`H_nX\in \mathcal{C}_0`$ for all small objects $`X`$ and all $`n`$. It remains to prove that $`H_nX=0`$ for all but finitely many $`n`$ if $`X`$ is small. This is proved analogously; the collection of all such $`X`$ is a thick subcategory containing $`S^0`$. ∎
This corollary tells us that the proper domain of $`f`$ is $`\mathbf{L}_{\text{wide}}(R)`$, the lattice of wide subcategories of $`\mathcal{C}_0`$. We would like $`f`$ to define an isomorphism $`f:\mathbf{L}_{\text{wide}}(R)\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$, where $`\mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$ is the lattice of thick subcategories of small objects in $`\mathbf{D}(R)`$. We now construct the only possible inverse to $`f`$.
Given a thick subcategory $`\mathcal{D}\in \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$, we define $`g(\mathcal{D})`$ to be the wide subcategory generated by $`\{H_nX\}`$, where $`X`$ runs through objects of $`\mathcal{D}`$ and $`n`$ runs through $`\mathbb{Z}`$. By Corollary 1.3, $`g(\mathcal{D})\in \mathbf{L}_{\text{wide}}(R)`$. Also, $`g`$ is obviously order-preserving. We also point out that, like $`f`$, $`g`$ can be defined in considerably greater generality.
We then have the following proposition.
###### Proposition 1.4.
The lattice homomorphism $`g`$ is left adjoint to $`f`$. That is, for $`\mathcal{C}\in \mathbf{L}_{\text{wide}}(R)`$ and $`\mathcal{D}\in \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$, we have $`g(\mathcal{D})\subseteq \mathcal{C}`$ if and only if $`\mathcal{D}\subseteq f(\mathcal{C})`$.
###### Proof.
Suppose first that $`g(\mathcal{D})\subseteq \mathcal{C}`$. This means that for every $`X\in \mathcal{D}`$, we have $`H_nX\in \mathcal{C}`$ for all $`n`$. Hence $`X\in f(\mathcal{C})`$. Thus $`\mathcal{D}\subseteq f(\mathcal{C})`$. Conversely, if $`\mathcal{D}\subseteq f(\mathcal{C})`$, then for every $`X\in \mathcal{D}`$ we have $`H_nX\in \mathcal{C}`$ for all $`n`$. Thus $`g(\mathcal{D})\subseteq \mathcal{C}`$. ∎
###### Corollary 1.5.
Suppose $`R`$ is a ring. If $`\mathcal{C}\in \mathbf{L}_{\text{wide}}(R)`$, then $`gf(\mathcal{C})`$ is the smallest wide subcategory $`\mathcal{C}^{\prime}`$ such that $`f(\mathcal{C}^{\prime})=f(\mathcal{C})`$. Similarly, if $`\mathcal{D}\in \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$, then $`fg(\mathcal{D})`$ is the largest thick subcategory $`\mathcal{D}^{\prime}`$ such that $`g(\mathcal{D}^{\prime})=g(\mathcal{D})`$.
###### Proof.
This corollary is true for any adjunction between partially ordered sets. For example, if $`f(\mathcal{C}^{\prime})=f(\mathcal{C})`$, then $`gf(\mathcal{C}^{\prime})=gf(\mathcal{C})`$. But $`gf(\mathcal{C}^{\prime})\subseteq \mathcal{C}^{\prime}`$, so $`gf(\mathcal{C})\subseteq \mathcal{C}^{\prime}`$. Furthermore, combining the counit and unit of the adjunction shows that $`fgf(\mathcal{C})`$ is contained in and contains $`f(\mathcal{C})`$. The other half is similar. ∎
It follows from this corollary that $`f`$ is injective if and only if $`gf(\mathcal{C})=\mathcal{C}`$ for all $`\mathcal{C}\in \mathbf{L}_{\text{wide}}(R)`$ and that $`f`$ is surjective if and only if $`fg(\mathcal{D})=\mathcal{D}`$ for all $`\mathcal{D}\in \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$.
In order to investigate these questions, it would be a great help to understand $`\mathcal{C}_0`$, the wide subcategory generated by $`R`$. We know very little about this in general, except that $`\mathcal{C}_0`$ obviously contains all finitely presented modules and all finitely generated ideals of $`R`$. We also point out that $`\mathcal{C}_0`$ is contained in the wide subcategory consisting of all modules of cardinality at most $`\kappa `$, where $`\kappa `$ is the larger of $`\omega `$ and the cardinality of $`R`$. In particular, $`\mathcal{C}_0`$ has a small skeleton, and so there is only a set of wide subcategories of $`\mathcal{C}_0`$.
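For instance, any finitely generated left ideal $`\mathfrak{a}=(a_1,\ldots ,a_n)`$ lies in $`\mathcal{C}_0`$: the free module $`R^n`$ is an iterated extension of copies of $`R`$, so the exact sequences

$$R^n\to R\to R/\mathfrak{a}\to 0\qquad \text{and}\qquad 0\to \mathfrak{a}\to R\to R/\mathfrak{a}\to 0$$

exhibit $`R/\mathfrak{a}`$ as a cokernel, and then $`\mathfrak{a}`$ as a kernel, of maps between objects of $`\mathcal{C}_0`$.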
The only case where we can identify $`\mathcal{C}_0`$ is when $`R`$ is a coherent ring. A brief description of coherent rings can be found in \[Ste75, Section I.13\]; an excellent reference for deeper study is \[Gla89\].
###### Lemma 1.6.
A ring $`R`$ is coherent if and only if the wide subcategory $`\mathcal{C}_0`$ generated by $`R`$ consists of the finitely presented modules.
###### Proof.
Suppose first that $`\mathcal{C}_0`$ is the collection of finitely presented modules. Suppose $`\mathfrak{a}`$ is a finitely generated left ideal of $`R`$. Then $`R/\mathfrak{a}`$ is a finitely presented module, so $`\mathfrak{a}`$, as the kernel of the map $`R\to R/\mathfrak{a}`$, is in $`\mathcal{C}_0`$. Hence $`\mathfrak{a}`$ is finitely presented, and so $`R`$ is coherent.
The collection of finitely presented modules over any ring is clearly closed under cokernels and is also closed under extensions. Indeed, suppose we have a short exact sequence
$$0\to M^{\prime}\to M\to M^{\prime\prime}\to 0$$
where $`M^{\prime}`$ and $`M^{\prime\prime}`$ are finitely presented (in fact, we need only assume $`M^{\prime}`$ is finitely generated). Choose a finitely generated projective $`P`$ and a surjection $`P\to M^{\prime\prime}`$. We can lift this to a map $`P\to M`$. Then we get a surjection $`M^{\prime}\oplus P\to M`$, as is well-known. Furthermore, the kernel of this surjection is the same as the kernel of $`P\to M^{\prime\prime}`$, which is finitely generated since $`M^{\prime\prime}`$ is finitely presented. Hence $`M`$ is finitely presented.
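To spell out the well-known step: writing $`i:M^{\prime}\to M`$ for the inclusion and $`\lambda :P\to M`$ for the lift, the surjection sends $`(m^{\prime},q)`$ to $`i(m^{\prime})+\lambda (q)`$, and projection to $`P`$ identifies its kernel with $`\mathrm{ker}(P\to M^{\prime\prime})`$, giving the exact sequence

$$0\to \mathrm{ker}(P\to M^{\prime\prime})\to M^{\prime}\oplus P\to M\to 0.$$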
Now suppose that $`R`$ is coherent. We show that the kernel of a map $`f:M\to N`$ of finitely presented modules is finitely presented. The point is that the image of $`f`$ is a finitely generated submodule of the finitely presented module $`N`$. Because the ring is coherent, this means that the image of $`f`$ is finitely presented. The kernel of $`f`$ is therefore finitely generated, but it is a submodule of the finitely presented module $`M`$, so it is finitely presented, using coherence again. Thus the finitely presented modules form a wide subcategory containing $`R`$; since every finitely presented module is the cokernel of a map $`R^m\to R^n`$, any wide subcategory containing $`R`$ contains all finitely presented modules, and so $`\mathcal{C}_0`$ is exactly the finitely presented modules. ∎
Noetherian rings can be characterized in a similar manner as rings in which $`\mathcal{C}_0`$ is the collection of finitely generated modules.
## 2. Surjectivity of the adjunction
The goal of this section is to show that the map $`f:\mathbf{L}_{\text{wide}}(R)\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$ is surjective for all commutative rings $`R`$. This is a corollary of Thomason's classification of thick subcategories in $`\mathbf{D}(R)`$.
Suppose $`R`$ is a commutative ring. Denote by $`J(\mathrm{Spec}R)\subseteq 2^{\mathrm{Spec}R}`$ the collection of order ideals in $`\mathrm{Spec}R`$, so that $`S\in J(\mathrm{Spec}R)`$ if and only if $`\mathfrak{p}\in S`$ and $`\mathfrak{q}\subseteq \mathfrak{p}`$ implies that $`\mathfrak{q}\in S`$. Note that an open set in the Zariski topology of $`\mathrm{Spec}R`$ is in $`J(\mathrm{Spec}R)`$, so an arbitrary intersection of open sets is in $`J(\mathrm{Spec}R)`$. Also, note that $`J(\mathrm{Spec}R)`$ is a complete distributive lattice.
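For instance, when $`R=\mathbb{Z}`$, every nonempty order ideal in $`\mathrm{Spec}\mathbb{Z}`$ must contain $`(0)`$, since $`(0)\subseteq (p)`$ for every prime $`p`$; thus $`\{(0)\}`$ and $`\{(0),(2),(5)\}`$ are order ideals, while $`\{(2)\}`$ is not.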
We will construct a chain of maps
$$J(\mathrm{Spec}R)^{\text{op}}\stackrel{i}{\to }\mathbf{L}_{\text{tors}}(R)\stackrel{j}{\to }\mathbf{L}_{\text{Serre}}(R)\stackrel{\alpha }{\to }\mathbf{L}_{\text{wide}}(R)\stackrel{f}{\to }\mathbf{L}_{\text{thick}}(\mathbf{D}(R)),$$
each of which is a right adjoint. We have of course already constructed $`f`$.
To construct $`i`$, note that $`\mathbf{L}_{\text{tors}}(R)`$ denotes the lattice of all torsion theories of $`R`$-modules. Recall that a torsion theory is a wide subcategory closed under arbitrary submodules and arbitrary direct sums. The map $`i`$ is defined by
$$i(S)=\{M|M_{(\mathfrak{p})}=0\text{ for all }\mathfrak{p}\in S\}.$$
Its left adjoint $`r`$ has
$$r(\mathcal{T})=\{\mathfrak{p}|M_{(\mathfrak{p})}=0\text{ for all }M\in \mathcal{T}\}=\underset{M\in \mathcal{T}}{\bigcap }(\mathrm{Spec}R\setminus \mathrm{supp}M).$$
One can check that $`ri(S)=S`$, so that $`i`$ is an embedding. In case $`R`$ is a Noetherian commutative ring, $`i`$ is an isomorphism \[Ste75, Section VI.6\].
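To illustrate, take $`R=\mathbb{Z}`$ and $`S=\{(0)\}`$. Then $`i(S)`$ consists of the abelian groups $`M`$ with

$$M_{(0)}=M\otimes \mathbb{Q}=0,$$

that is, the torsion abelian groups; and since $`\mathrm{supp}(\mathbb{Z}/p)=\{(p)\}`$, one checks directly that $`ri(S)=\{(0)\}`$.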
To construct $`j`$, let $`\mathcal{S}_0`$ denote the Serre class generated by $`R`$. Recall that a Serre class is a wide subcategory closed under arbitrary subobjects. If $`R`$ is Noetherian, then $`\mathcal{S}_0`$ is the finitely generated $`R`$-modules, but in general it will be larger than this. The symbol $`\mathbf{L}_{\text{Serre}}(R)`$ denotes the lattice of Serre subclasses of $`\mathcal{S}_0`$. The map $`j`$ takes a torsion theory $`\mathcal{T}`$ to its intersection with $`\mathcal{S}_0`$. Its left adjoint $`s`$ takes a Serre subclass of $`\mathcal{S}_0`$ to the torsion theory it generates. Since a torsion theory is determined by the finitely generated modules in it (because it is closed under direct limits), the composite $`sj`$ is the identity. Thus $`j`$ is also an embedding, for an arbitrary ring $`R`$. When $`R`$ is Noetherian, $`j`$ is an isomorphism. Indeed, in this case, the collection of modules all of whose finitely generated submodules lie in a Serre subclass $`\mathcal{S}`$ of finitely generated modules is a torsion theory, and is therefore $`s(\mathcal{S})`$. Hence $`js`$ is the identity as well.
The map $`\alpha `$ takes a Serre class to its intersection with $`\mathcal{C}_0`$. Its left adjoint $`\beta `$ takes a wide subcategory to the Serre class it generates. When $`R`$ is Noetherian, $`\mathcal{C}_0`$ and $`\mathcal{S}_0`$ coincide, so one can easily see that $`\beta \alpha `$ is the identity, so that $`\alpha `$ is injective. However, $`\alpha `$ will not be injective in general, as we will see below.
Note that the composite $`f\alpha ji`$ takes $`S\in J(\mathrm{Spec}R)`$ to the collection of all small objects $`X`$ in $`\mathbf{D}(R)`$ such that $`(H_nX)_{(\mathfrak{p})}=0`$ for all $`\mathfrak{p}\in S`$ and all $`n`$. Since $`H_n(X_{(\mathfrak{p})})\cong (H_nX)_{(\mathfrak{p})}`$, this is the same as the collection of all small $`X`$ such that $`X_{(\mathfrak{p})}=0`$ for all $`\mathfrak{p}\in S`$.
The following theorem is the main result of \[Tho\]. To describe it, recall that the open subsets of the Zariski topology on $`\mathrm{Spec}R`$, where $`R`$ is commutative, are the sets $`D(\mathfrak{a})`$ where $`\mathfrak{a}`$ is an ideal of $`R`$ and $`D(\mathfrak{a})`$ consists of all primes that do not contain $`\mathfrak{a}`$. The open set $`D(\mathfrak{a})`$ is quasi-compact if and only if $`D(\mathfrak{a})=D(\mathfrak{b})`$ for some finitely generated ideal $`\mathfrak{b}`$ of $`R`$. This fact is well-known in algebraic geometry, and can be deduced from the argument at the top of p. 72 in \[Har77\]. Now we let $`\stackrel{~}{J}(\mathrm{Spec}R)`$ denote the sublattice of $`J(\mathrm{Spec}R)`$ consisting of arbitrary intersections of quasi-compact open sets.
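For example, let $`R=k[x_1,x_2,\ldots ]`$ be a polynomial ring in infinitely many variables and $`\mathfrak{m}=(x_1,x_2,\ldots )`$. If $`D(\mathfrak{m})=D(\mathfrak{b})`$ for a finitely generated ideal $`\mathfrak{b}`$, then $`\sqrt{\mathfrak{b}}=\sqrt{\mathfrak{m}}=\mathfrak{m}`$, so the generators of $`\mathfrak{b}`$ lie in $`\mathfrak{m}`$ and involve only $`x_1,\ldots ,x_N`$ for some $`N`$; but then the prime $`(x_1,\ldots ,x_N)`$ lies in $`V(\mathfrak{b})`$ and not in $`V(\mathfrak{m})`$, a contradiction. Hence $`D(\mathfrak{m})`$ is not quasi-compact.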
###### Theorem 2.1 (Thomason's theorem).
Let $`R`$ be a commutative ring. Let $`h`$ denote the restriction of $`f\alpha ji`$ to $`\stackrel{~}{J}(\mathrm{Spec}R)`$. Then $`h:\stackrel{~}{J}(\mathrm{Spec}R)^{\text{op}}\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$ is an isomorphism.
The following corollary is immediate, since $`f\alpha ji`$ is surjective.
###### Corollary 2.2.
Let $`R`$ be a commutative ring. Then the map $`f:\mathbf{L}_{\text{wide}}(R)\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$ is surjective. In particular, for any thick subcategory $`\mathcal{D}`$, we have $`fg(\mathcal{D})=\mathcal{D}`$.
Note that, since $`i`$ and $`j`$ are injective for all rings $`R`$, torsion theories and Serre classes cannot classify thick subcategories of $`\mathbf{D}(R)`$ in general. There are torsion theories and Serre classes of $`R`$-modules that do not correspond to any thick subcategory of small objects in $`\mathbf{D}(R)`$. When $`R`$ is Noetherian, we have $`\stackrel{~}{J}(\mathrm{Spec}R)=J(\mathrm{Spec}R)`$, so torsion theories and Serre classes do correspond to thick subcategories, but this will not be true in general.
## 3. Regular coherent rings
The goal of this section is to show that the map $`f:\mathbf{L}_{\text{wide}}(R)\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$ is an isomorphism when $`R`$ is a regular coherent commutative ring. Regularity means that every finitely presented module has finite projective dimension; see \[Gla89, Section 6.2\] for many results about regular coherent rings. An example of a regular coherent ring that is not Noetherian is the polynomial ring on infinitely many variables over a principal ideal domain.
As we have already seen, $`f`$ is injective if and only if $`gf(\mathcal{C})=\mathcal{C}`$ for all wide subcategories $`\mathcal{C}`$ of finitely presented modules (when $`R`$ is coherent). We start out by proving that $`gf(\mathcal{C}_0)=\mathcal{C}_0`$ when $`R`$ is coherent.
###### Proposition 3.1.
Suppose $`R`$ is a ring, and $`M`$ is a finitely presented $`R`$-module. Then there is a small object $`X\in \mathbf{D}(R)`$ such that $`H_0X\cong M`$.
###### Proof.
Write $`M`$ as the cokernel of a map $`f:R^m\to R^n`$. Recall that, given a module $`N`$, $`S^0N`$ denotes the complex that is $`N`$ concentrated in degree $`0`$. Define $`X`$ to be the cofiber of the induced map $`S^0R^m\to S^0R^n`$. Then $`X`$ is small and $`H_0X=M`$ (and $`H_1X`$ is the kernel of $`f`$). ∎
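For instance, with $`R=\mathbb{Z}`$ and $`M=\mathbb{Z}/n`$, the construction gives the small complex

$$X=(\mathbb{Z}\stackrel{n}{\to }\mathbb{Z}),$$

concentrated in degrees $`1`$ and $`0`$, with $`H_0X=\mathbb{Z}/n`$ and $`H_1X=0`$.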
###### Corollary 3.2.
Suppose $`R`$ is a coherent ring, and $`\mathcal{C}_0`$ is the subcategory of finitely presented modules. Then $`gf(\mathcal{C}_0)=\mathcal{C}_0`$.
In order to prove that $`gf(\mathcal{C})=\mathcal{C}`$ in general, however, given a finitely presented module $`M`$, we would have to find a complex $`X`$ that is small in $`\mathbf{D}(R)`$ such that $`H_0X\cong M`$ and each $`H_nX`$ is in the wide subcategory generated by $`M`$. The obvious choice is $`S^0M`$, the complex consisting of $`M`$ concentrated in degree $`0`$. However, $`S^0M`$ cannot be small in $`\mathbf{D}(R)`$ unless $`M`$ has finite projective dimension, as we show in the following lemma.
###### Lemma 3.3.
Suppose $`R`$ is a ring and $`M`$ is an $`R`$-module. If the complex $`S^0M`$ is small in $`\mathbf{D}(R)`$, then $`M`$ has finite projective dimension.
###### Proof.
Define an object $`X`$ of $`\mathbf{D}(R)`$ to have finite projective dimension if there is an $`i`$ such that $`H_jF(X,S^0N)=0`$ for all $`R`$-modules $`N`$ and all $`j`$ with $`|j|>i`$. Here $`F(X,S^0N)`$ is the function complex $`\mathrm{Hom}_R(QX,N)`$ in $`\mathbf{D}(\mathbb{Z})`$ obtained by replacing $`X`$ by a cofibrant chain complex $`QX`$ quasi-isomorphic to it. (In the terminology of \[Hov98, Chapter 4\], the model category $`\text{Ch}(R)`$ of chain complexes over $`R`$ with the projective model structure is a $`\text{Ch}(\mathbb{Z})`$-model category, and we are using that structure.) If $`X=S^0M`$, then $`H_{-i}F(S^0M,S^0N)=\mathrm{Ext}^i(M,N)`$, so $`S^0M`$ has finite projective dimension if and only if $`M`$ does. It is easy to see that complexes with finite projective dimension form a thick subcategory containing $`R`$. Therefore every small object of $`\mathbf{D}(R)`$ has finite projective dimension. ∎
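For a concrete instance of the lemma's contrapositive, take $`R=\mathbb{Z}/4`$ and $`M=\mathbb{Z}/2`$. The periodic resolution

$$\cdots \to \mathbb{Z}/4\stackrel{2}{\to }\mathbb{Z}/4\stackrel{2}{\to }\mathbb{Z}/4\to \mathbb{Z}/2\to 0$$

gives $`\mathrm{Ext}^i(\mathbb{Z}/2,\mathbb{Z}/2)=\mathbb{Z}/2`$ for all $`i\geq 0`$, so $`M`$ has infinite projective dimension and $`S^0(\mathbb{Z}/2)`$ is not small in $`\mathbf{D}(\mathbb{Z}/4)`$.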
Conversely, we have the following proposition.
###### Proposition 3.4.
Suppose $`R`$ is a coherent ring and $`M`$ is a finitely presented module of finite projective dimension. Then $`S^0M`$ is small in $`\mathbf{D}(R)`$.
###### Proof.
It may be possible to give a direct proof of this, but we prefer to use model categories. Theorem 7.4.3 of \[Hov98\] asserts that any cofibrant complex $`A`$ that is small in the category $`\text{Ch}(R)`$ of chain complexes and chain maps, in the sense that $`\text{Ch}(R)(A,-)`$ commutes with direct limits, will be small in $`\mathbf{D}(R)`$. Of course, $`S^0M`$ is small in $`\text{Ch}(R)`$, but it will not be cofibrant. To make it cofibrant, we need to replace $`M`$ by a projective resolution. Since $`M`$ is finitely presented and the ring $`R`$ is coherent, each term $`P_i`$ in a projective resolution for $`M`$ can be chosen to be finitely generated. Since $`M`$ has finite projective dimension, the resolution $`P_{\ast}`$ can also be chosen to be finite. Hence $`P_{\ast}`$ is small in $`\text{Ch}(R)`$, and so also in $`\mathbf{D}(R)`$. Since $`P_{\ast}`$ is isomorphic to $`S^0M`$ in $`\mathbf{D}(R)`$, the result follows. ∎
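For instance, if $`R=k[x]`$ and $`M=k=R/(x)`$, the finite free resolution

$$0\to R\stackrel{x}{\to }R\to k\to 0$$

is a bounded complex of finitely generated projectives, hence cofibrant and small in $`\text{Ch}(R)`$, so $`S^0k`$ is small in $`\mathbf{D}(k[x])`$.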
This proposition leads immediately to the following theorem.
###### Theorem 3.5.
Suppose $`R`$ is a regular coherent ring, and $`\mathcal{C}`$ is a wide subcategory of finitely presented $`R`$-modules. Then $`gf(\mathcal{C})=\mathcal{C}`$.
###### Proof.
We have already seen that $`gf(\mathcal{C})\subseteq \mathcal{C}`$. Suppose $`M\in \mathcal{C}`$. Then $`S^0M`$ is small by Proposition 3.4, so clearly $`S^0M\in f(\mathcal{C})`$. Thus $`M\in gf(\mathcal{C})`$. ∎
The author believes that this theorem should hold without the regularity hypothesis, though obviously a cleverer proof is required. Theorem 3.5 and Corollary 2.2 lead immediately to the following classification theorem.
###### Theorem 3.6.
Suppose $`R`$ is a regular commutative coherent ring. Then the map $`f:\mathbf{L}_{\text{wide}}(R)\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))`$ is an isomorphism. Hence the restriction of $`\alpha ji`$ defines an isomorphism $`\stackrel{~}{h}:\stackrel{~}{J}(\mathrm{Spec}R)^{\text{op}}\to \mathbf{L}_{\text{wide}}(R)`$ as well.
###### Corollary 3.7.
Suppose $`R`$ is a regular commutative coherent ring, $`\mathcal{C}`$ is a wide subcategory of finitely presented $`R`$-modules, and $`M\in \mathcal{C}`$. If $`N`$ is a finitely presented submodule or quotient module of $`M`$, then $`N\in \mathcal{C}`$. In particular, if $`R`$ is also Noetherian, every wide subcategory of finitely generated modules is a Serre class.
Indeed, the first statement is obviously true for any wide subcategory coming from $`\stackrel{~}{J}(\mathrm{Spec}R)`$.
## 4. Quotients of regular coherent rings
The goal of this section is to understand the relationship between wide subcategories of $`R`$-modules and wide subcategories of $`R/\mathfrak{a}`$-modules, where $`\mathfrak{a}`$ is a two-sided ideal of $`R`$. This will allow us to extend Theorem 3.6 to quotients of regular commutative coherent rings by finitely generated ideals.
Given $`\mathcal{D}\in \mathbf{L}_{\text{wide}}(R/\mathfrak{a})`$, we can think of $`\mathcal{D}`$ as a full subcategory of $`R`$-modules where $`\mathfrak{a}`$ happens to act trivially. As such, it will be closed under kernels and cokernels, but not extensions. Define $`u(\mathcal{D})`$ to be the wide subcategory of $`R`$-modules generated by $`\mathcal{D}`$. In order to be sure that $`u(\mathcal{D})`$ is contained in $`\mathcal{C}_0(R)`$, we need to make sure that $`R/\mathfrak{a}\in \mathcal{C}_0(R)`$. The easiest way to be certain of this is if $`\mathfrak{a}`$ is finitely generated as a left ideal; under this assumption, we have just defined a map $`u:\mathbf{L}_{\text{wide}}(R/\mathfrak{a})\to \mathbf{L}_{\text{wide}}(R)`$.
As usual, this map has a left adjoint $`v`$. Given $`\mathcal{C}\in \mathbf{L}_{\text{wide}}(R)`$, we define $`v(\mathcal{C})`$ to be the collection of all $`M\in \mathcal{C}`$ such that $`\mathfrak{a}`$ acts trivially on $`M`$, so that $`M`$ is naturally an $`R/\mathfrak{a}`$-module. Then $`v(\mathcal{C})`$ is a wide subcategory. It is not clear that $`v(\mathcal{C})\subseteq \mathcal{C}_0(R/\mathfrak{a})`$ in general. However, if $`R`$ is coherent, then any $`M`$ in $`v(\mathcal{C})`$ will be finitely presented as an $`R`$-module, and so finitely presented as an $`R/\mathfrak{a}`$-module.
Altogether then, we have the following lemma, whose proof we leave to the reader.
###### Lemma 4.1.
Suppose $`R`$ is a coherent ring and $`\mathfrak{a}`$ is a two-sided ideal of $`R`$ that is finitely generated as a left ideal. Then the map $`u:\mathbf{L}_{\text{wide}}(R/\mathfrak{a})\to \mathbf{L}_{\text{wide}}(R)`$ constructed above is right adjoint to the map $`v`$ constructed above.
We claim that $`vu(\mathcal{D})=\mathcal{D}`$ for all $`\mathcal{D}\in \mathbf{L}_{\text{wide}}(R/\mathfrak{a})`$, so that $`u`$ is in fact an embedding. To see this, we need a description of $`u(\mathcal{D})`$, or, more generally, a description of the wide subcategory generated by a full subcategory $`\mathcal{D}`$ that is already closed under kernels and cokernels. It is clear that this wide subcategory will have to contain all extensions of $`\mathcal{D}`$, so let $`\mathcal{D}_1`$ denote the full subcategory consisting of all extensions of $`\mathcal{D}`$. An object $`M`$ is in $`\mathcal{D}_1`$ if and only if there is a short exact sequence
$$0\to M^{\prime}\to M\to M^{\prime\prime}\to 0$$
where $`M^{\prime}`$ and $`M^{\prime\prime}`$ are in $`\mathcal{D}`$. Since $`\mathcal{D}`$ is closed under kernels and cokernels, $`0\in \mathcal{D}`$ (unless $`\mathcal{D}`$ is empty), so that $`\mathcal{D}\subseteq \mathcal{D}_1`$.
We claim that $`\mathcal{D}_1`$ is still closed under kernels and cokernels. We will prove this after the following lemma.
###### Lemma 4.2.
Suppose $`\mathcal{A}`$ is an abelian category and the full subcategory $`\mathcal{D}`$ of $`\mathcal{A}`$ is closed under kernels and cokernels. Let $`\mathcal{D}_1`$ be the full subcategory consisting of extensions of $`\mathcal{D}`$. Suppose $`M\in \mathcal{D}`$, $`N\in \mathcal{D}_1`$, and $`f:M\to N`$ is a map. Then $`\mathrm{ker}f\in \mathcal{D}`$ and $`\mathrm{cok}f\in \mathcal{D}_1`$.
###### Proof.
Because $`N\in \mathcal{D}_1`$, we can construct the commutative diagram below,
$$\begin{array}{ccccccccc}0& \to & 0& \to & M& =& M& \to & 0\\ & & \downarrow & & \downarrow f& & \downarrow pf& & \\ 0& \to & N^{\prime}& \to & N& \stackrel{p}{\to }& N^{\prime\prime}& \to & 0\end{array}$$
where the rows are exact and $`N^{\prime},N^{\prime\prime}\in \mathcal{D}`$. The snake lemma then gives us an exact sequence
$$0\to \mathrm{ker}f\to \mathrm{ker}(pf)\stackrel{q}{\to }N^{\prime}\to \mathrm{cok}f\to \mathrm{cok}(pf)\to 0.$$
Since $`\mathrm{ker}(pf)`$ and $`N^{\prime}`$ are in $`\mathcal{D}`$, we find that $`\mathrm{ker}f=\mathrm{ker}q`$ is in $`\mathcal{D}`$. Similarly, we find that $`\mathrm{cok}f`$ is an extension of $`\mathrm{cok}q`$ and $`\mathrm{cok}(pf)`$, so $`\mathrm{cok}f\in \mathcal{D}_1`$. ∎
###### Proposition 4.3.
Suppose $`\mathcal{A}`$ is an abelian category, and $`\mathcal{D}`$ is a full subcategory of $`\mathcal{A}`$ closed under kernels and cokernels. Let $`\mathcal{D}_1`$ be the full subcategory consisting of extensions of $`\mathcal{D}`$. Then $`\mathcal{D}_1`$ is also closed under kernels and cokernels.
###### Proof.
Suppose $`f:M\to N`$ is a map of $`\mathcal{D}_1`$. Then we have the commutative diagram below,
$$\begin{array}{ccccccccc}0& \to & M^{\prime}& \stackrel{i}{\to }& M& \to & M^{\prime\prime}& \to & 0\\ & & \downarrow fi& & \downarrow f& & \downarrow & & \\ 0& \to & N& =& N& \to & 0& \to & 0\end{array}$$
where the rows are exact and $`M^{\prime},M^{\prime\prime}\in \mathcal{D}`$. The snake lemma gives an exact sequence
$$0\to \mathrm{ker}(fi)\to \mathrm{ker}f\to M^{\prime\prime}\stackrel{g}{\to }\mathrm{cok}(fi)\to \mathrm{cok}f\to 0.$$
Hence $`\mathrm{ker}f`$ is an extension of $`\mathrm{ker}(fi)`$ and $`\mathrm{ker}g`$, both of which are in $`\mathcal{D}`$ by Lemma 4.2. So $`\mathrm{ker}f\in \mathcal{D}_1`$. Similarly, $`\mathrm{cok}f=\mathrm{cok}g`$, which is in $`\mathcal{D}_1`$ by Lemma 4.2. ∎
###### Corollary 4.4.
Suppose $`\mathcal{A}`$ is an abelian category, and $`\mathcal{D}`$ is a full subcategory of $`\mathcal{A}`$ closed under kernels and cokernels. Let $`\mathcal{D}_0=\mathcal{D}`$, and, for $`n\geq 1`$, define $`\mathcal{D}_n`$ to be the full subcategory of extensions of $`\mathcal{D}_{n-1}`$. Let $`\mathcal{E}=\bigcup _{n=0}^{\mathrm{\infty }}\mathcal{D}_n`$. Then $`\mathcal{E}`$ is the wide subcategory generated by $`\mathcal{D}`$.
###### Proof.
Note that the union that defines $`\mathcal{E}`$ is an increasing one, in the sense that $`\mathcal{D}_n\subseteq \mathcal{D}_{n+1}`$. This makes it clear that $`\mathcal{E}`$ is closed under extensions. Proposition 4.3 implies that $`\mathcal{E}`$ is closed under kernels and cokernels. Therefore $`\mathcal{E}`$ is a wide subcategory. Since any wide subcategory containing $`\mathcal{D}`$ must contain each $`\mathcal{D}_n`$, the corollary follows. ∎
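To see that the union can be strictly increasing, take $`\mathcal{A}`$ to be the category of abelian groups and $`\mathcal{D}`$ the full subcategory of finite elementary abelian $`p`$-groups, which is closed under kernels and cokernels. Then $`\mathbb{Z}/p^2\in \mathcal{D}_1`$, and the extension

$$0\to \mathbb{Z}/p\to \mathbb{Z}/p^3\to \mathbb{Z}/p^2\to 0$$

shows that $`\mathbb{Z}/p^3\in \mathcal{D}_2`$; but $`\mathbb{Z}/p^3\notin \mathcal{D}_1`$, since the only elementary abelian subgroups of $`\mathbb{Z}/p^3`$ are $`0`$ and $`\mathbb{Z}/p`$, and the corresponding quotients $`\mathbb{Z}/p^3`$ and $`\mathbb{Z}/p^2`$ are not elementary abelian.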
###### Theorem 4.5.
Suppose $`R`$ is a coherent ring, and $`\mathfrak{a}`$ is a two-sided ideal of $`R`$ that is finitely generated as a left ideal. Then the map $`u:\mathbf{L}_{\text{wide}}(R/\mathfrak{a})\to \mathbf{L}_{\text{wide}}(R)`$ is an embedding.
###### Proof.
It suffices to show that $`vu(\mathcal{D})=\mathcal{D}`$, where $`\mathcal{D}`$ is a wide subcategory of $`\mathcal{C}_0(R/\mathfrak{a})`$. According to Corollary 4.4, $`u(\mathcal{D})=\bigcup _n\mathcal{D}_n`$, where $`\mathcal{D}_n`$ is the collection of extensions of $`\mathcal{D}_{n-1}`$. We prove by induction on $`n`$ that if $`\mathfrak{a}`$ acts trivially on some $`M\in \mathcal{D}_n`$, then in fact $`M\in \mathcal{D}`$. The base case of the induction is clear, since $`\mathcal{D}_0=\mathcal{D}`$. Now suppose our claim is true for $`n-1`$, and $`\mathfrak{a}`$ acts trivially on $`M\in \mathcal{D}_n`$. Write $`M`$ as an extension
$$0\to M^{\prime}\to M\to M^{\prime\prime}\to 0$$
where $`M^{\prime},M^{\prime\prime}\in \mathcal{D}_{n-1}`$. Then $`\mathfrak{a}`$ acts trivially on $`M^{\prime}`$ and $`M^{\prime\prime}`$, so $`M^{\prime},M^{\prime\prime}\in \mathcal{D}`$ by the induction hypothesis. Furthermore, this is an extension of $`R/\mathfrak{a}`$-modules; since $`\mathcal{D}`$ is a wide subcategory of $`R/\mathfrak{a}`$-modules, $`M\in \mathcal{D}`$. The induction is complete, and we find that $`vu(\mathcal{D})=\mathcal{D}`$. ∎
###### Theorem 4.6.
Suppose $`R`$ is a commutative coherent ring such that
$$f_R:\mathbf{L}_{\text{wide}}(R)\to \mathbf{L}_{\text{thick}}(\mathbf{D}(R))$$
is an isomorphism, and $`\mathfrak{a}`$ is a finitely generated ideal of $`R`$. Then $`f_{R/\mathfrak{a}}`$ is also an isomorphism. In particular, $`f_R`$ is an isomorphism for all rings $`R`$ that are quotients of regular commutative coherent rings by finitely generated ideals.
###### Proof.
The second statement follows immediately from the first and Theorem 3.6. To prove the first statement, note that, by hypothesis and Thomason's Theorem 2.1, the map
$$G_R=\alpha ji:\stackrel{~}{J}(\mathrm{Spec}R)^{\text{op}}\to \mathbf{L}_{\text{wide}}(R)$$
is an isomorphism. It suffices to show that $`G_{R/\mathfrak{a}}`$ is a surjection. So suppose $`\mathcal{D}`$ is a wide subcategory of finitely presented $`R/\mathfrak{a}`$-modules. Then $`u\mathcal{D}=G(S)`$ for some $`S\in \stackrel{~}{J}(\mathrm{Spec}R)`$. Since $`M_{(\mathfrak{p})}=0`$ for any $`R`$-module $`M`$ such that $`\mathfrak{a}M=0`$ and any $`\mathfrak{p}`$ not containing $`\mathfrak{a}`$, we have $`S=T\cup D(\mathfrak{a})`$ for a unique $`T\subseteq V(\mathfrak{a})=\mathrm{Spec}(R/\mathfrak{a})`$. One can easily see that $`T\in J(\mathrm{Spec}R/\mathfrak{a})`$, but we claim that in fact $`T\in \stackrel{~}{J}(\mathrm{Spec}R/\mathfrak{a})`$. Indeed, $`T=S\cap V(\mathfrak{a})`$, so this claim boils down to showing that the inclusion $`V(\mathfrak{a})\subseteq \mathrm{Spec}R`$ is a proper map. This is well-known; the inclusion of any closed subset in any topological space is proper.
Naturally, we claim that $`G_{R/\mathfrak{a}}(T)=\mathcal{D}`$. We have $`\mathcal{D}\subseteq G_{R/\mathfrak{a}}(T)`$: every $`M\in \mathcal{D}`$ lies in $`u\mathcal{D}=G(S)`$, so $`M_{(\mathfrak{p})}=0`$ for all $`\mathfrak{p}\in S`$, and in particular for all $`\mathfrak{p}\in T`$, since $`T\subseteq S`$. To show the converse, it suffices to show that $`uG_{R/\mathfrak{a}}(T)\subseteq u\mathcal{D}=G(S)`$, since $`u`$ is an embedding by Theorem 4.5. But any module $`M`$ in $`G_{R/\mathfrak{a}}(T)`$ has $`M_{(\mathfrak{p})}=0`$ for all $`\mathfrak{p}\in T`$ and for all $`\mathfrak{p}\in D(\mathfrak{a})`$. Therefore $`uG_{R/\mathfrak{a}}(T)\subseteq G(T\cup D(\mathfrak{a}))=G(S)`$. ∎
###### Corollary 4.7.
Suppose $`R`$ is a finitely generated commutative $`k`$-algebra, where $`k`$ is a principal ideal domain. Then every wide subcategory of finitely generated $`R`$-modules is a Serre class.
Indeed, any such $`R`$ is a quotient of a polynomial ring in finitely many variables over $`k`$, which is regular by Hilbert's syzygy theorem, by a finitely generated ideal. This corollary covers most of the Noetherian rings in common use, though of course it does not cover all of them. We remain convinced that $`f`$ should be an isomorphism for all commutative coherent rings.
## 5. Localizing subcategories
In this section, we relate certain subcategories of $`R\text{-mod}`$ to localizing subcategories of $`\mathbf{D}(R)`$. Recall that a localizing subcategory is a thick subcategory closed under arbitrary direct sums. We use the known classification of localizing subcategories of $`\mathbf{D}(R)`$ when $`R`$ is a Noetherian commutative ring to deduce a classification of wide subcategories of $`R`$-modules closed under arbitrary coproducts. We know of no direct proof of this classification.
Let $`\mathbf{L}_{\text{wide}}^{\oplus }(R)`$ denote the lattice of wide subcategories of $`R`$-modules closed under arbitrary coproducts, and let $`\mathbf{L}_{\text{thick}}^{\oplus }(\mathbf{D}(R))`$ denote the lattice of localizing subcategories of $`\mathbf{D}(R)`$. Just as before, we can define a map $`f:\mathbf{L}_{\text{wide}}^{\oplus }(R)\to \mathbf{L}_{\text{thick}}^{\oplus }(\mathbf{D}(R))`$, where $`f(\mathcal{C})`$ is the collection of all $`X`$ such that $`H_nX\in \mathcal{C}`$ for all $`n`$. The proof of Proposition 1.2 goes through without difficulty to show that $`f(\mathcal{C})`$ is localizing.
Similarly, we can define $`g:\mathbf{L}_{\text{thick}}^{\oplus }(\mathbf{D}(R))\to \mathbf{L}_{\text{wide}}^{\oplus }(R)`$ by letting $`g(\mathcal{T})`$ be the smallest wide subcategory closed under coproducts containing all the $`H_nX`$, for $`X\in \mathcal{T}`$ and for all $`n`$. The proof of Proposition 1.4 goes through without change, showing that $`g`$ is left adjoint to $`f`$.
###### Lemma 5.1.
For any ring $`R`$ and any wide subcategory $`\mathcal{C}`$ of $`R`$-modules closed under coproducts, we have $`gf(\mathcal{C})=\mathcal{C}`$.
###### Proof.
Since $`g`$ is left adjoint to $`f`$, $`gf(\mathcal{C})\subseteq \mathcal{C}`$. But, given $`M\in \mathcal{C}`$, $`S^0M`$ is in $`f(\mathcal{C})`$, and hence $`M=H_0S^0M\in gf(\mathcal{C})`$. ∎
It would be surprising if an arbitrary localizing subcategory of $`\mathbf{D}(R)`$ were determined by the homology groups of objects in it, but this is nevertheless the case when $`R`$ is Noetherian and commutative. Given a prime ideal $`\mathfrak{p}`$ of such an $`R`$, denote by $`k_{\mathfrak{p}}`$ the residue field $`R_{(\mathfrak{p})}/\mathfrak{p}R_{(\mathfrak{p})}`$ of $`\mathfrak{p}`$.
###### Theorem 5.2.
Suppose $`R`$ is a Noetherian commutative ring. Then $`f:\mathbf{L}_{\text{wide}}^{\oplus }(R)\to \mathbf{L}_{\text{thick}}^{\oplus }(\mathbf{D}(R))`$ is an isomorphism. Furthermore, there is an isomorphism between the Boolean algebra $`2^{\mathrm{Spec}R}`$ and $`\mathbf{L}_{\text{wide}}^{\oplus }(R)`$ that takes a set $`A`$ of prime ideals to the wide subcategory closed under coproducts generated by the $`k_{\mathfrak{p}}`$ for $`\mathfrak{p}\in A`$.
###### Proof.
We have a map $`\alpha :2^{\mathrm{Spec}R}\to \mathbf{L}_{\text{wide}}^{\oplus }(R)`$, defined as in the statement of the theorem. The composition $`f\alpha `$ is proved to be an isomorphism, for $`R`$ Noetherian and commutative, in \[Nee92\] (see also \[HPS97, Sections 6 and 9\]). Since $`f`$ is injective by Lemma 5.1, we conclude that $`f`$, and hence also $`\alpha `$, is an isomorphism. ∎
For example, the wide subcategory of abelian groups closed under coproducts corresponding to the set $`\{(0)\}`$ is the collection of rational vector spaces; the wide subcategory closed under coproducts corresponding to the set $`\{(0),(p)\}`$ is the collection of $`p`$-local abelian groups.