# Gravitational Lensing of Relativistic Fireball (astro-ph/9912383)
The gravitational lensing of a relativistic fireball can produce time-delayed multiple images with quite different spectra and temporal patterns, in contrast with a nonrelativistic source, and hence can imitate source recurrence. In particular, the enigmatic four multiple gamma-ray bursts detected during two days in October 1996 in the same sky region may be due to a single fireball event multiply imaged by a foreground galactic nucleus or cluster of galaxies.
PACS numbers: 95.30.Sf; 95.85.P; 98.62.Sb
Keywords: astrophysics, general relativity, gravitational lenses, gamma-ray bursts
The identical spectra and time histories of multiple images are generally considered necessary consequences of the gravitational lens (GL) phenomenon (for a review see e.g. ). However, these signatures of gravitational lensing apply only when both the source and the GL are nonrelativistic, so that the multiple images are generated by the same region on the surface of the source, albeit shifted in time. We demonstrate in the following that the boosting of light in a relativistic source such as a gamma-ray burst (GRB) fireball, after an appropriate lensing, may produce multiple images without any similarity of spectra and light curves.
The observed cosmological GRBs are promising targets for the gravitational lensing of relativistic sources. The similarities of spectra and temporal patterns of multiple images have been used until now as observational signatures in searches for possible gravitational lensing of GRBs, with the hope of determining or constraining e.g. the content and density of compact objects in dark matter and the average redshift of GRBs . Within the framework of the standard cosmological scheme of their origin, a GRB is generated by a nearly spherical or beamed low-mass leptonic relativistic fireball resulting from the coalescence of a tight neutron star binary or the collapse of some massive star in a distant galaxy (for reviews see e.g. ). A typical expected recurrence rate of cosmological GRBs is one per million years per galaxy. Hence if a possible observation of GRB repetition were not due to time-delayed multiple lensed images, it would create hard problems for the standard cosmological scheme of GRB origin. The hunt for GRB recurrence is a difficult task because of the low angular resolution of present-day gamma-ray telescopes. Recent statistical analyses of GRB samples provide only upper limits on the possible repetition of GRBs . Meanwhile, among the more than 2500 GRBs detected so far there is a surprising group of multiple GRBs which arrived in October 1996. At that time the orbital telescopes BATSE, KONUS, TGRS and Ulysses independently detected four GRB events from the same sky region during two days . The probability of an accidental projection of these four GRBs is very small, $`3.1\times 10^{-5}\div 3.3\times 10^{-4}`$, whereas the clustering is not so significant if the four events combine into three bursts . However, in the latter case one of the bursts would be more than 20 minutes in duration. Nevertheless, the recurrence of cosmological GRBs is rather natural if some part of the GRBs are generated not by the coalescence of neutron star binaries but by accidental collisions of neutron stars in the dense central stellar clusters of distant galactic nuclei . In this case the observation of multiple GRBs is a consequence of a serendipitous event of massive black hole formation accompanied by four events of neutron star collisions during the dynamical collapse of an extremely dense central stellar cluster in some distant galactic nucleus .
Here we propose another resolution of the ‘fast GRB recurrence’ problem. We demonstrate in the following that the cluster of four GRB events observed during two days in October 1996 can all be due to a single relativistic fireball event at cosmological distance, expanding with a large bulk Lorentz factor $`\Gamma \gg 1`$ and multiply imaged by a GL. The basic idea of this specific gravitational lensing, with resulting different spectra and temporal patterns of the separate time-delayed images, is to take into account the strong aberration (boosting) of light from the relativistically expanding fireball. This boosting confines the viewed portion of the expanding fireball to the narrow confinement angle $`\theta _c\simeq \Gamma ^{-1}\ll 1`$ relative to the fireball center (see Fig. 1 for details).
The emission of the fireball beyond the confinement angle $`\theta _c`$ reaches the detector without boosting and generally sinks below the noise. In this case two ‘rays’ separated by an angle $`\theta >\theta _c`$ originate from different parts of the fireball connected by space-like world lines. These regions are causally disconnected and their emission may be generated under different physical conditions. An additional supposed requirement for the realization of this model is a small-scale turbulent structure of the emitting shocks in the fireball, with a typical length much less than the instantaneous fireball radius. In this case the outgoing rays would not retain information on the temporal structure of the central source of energy. Such a highly turbulent structure seems quite reasonable for the GRB fireball because of the development of instabilities in the relativistic shocks (see e.g. ).
Gravitational lensing of a nonrelativistic source provides multiple images with time-shifted but otherwise similar spectral and temporal structures. On the contrary, outgoing rays from a relativistic fireball separated by an angle $`\theta >\theta _c`$ can in general be generated under different physical conditions. After an appropriate gravitational lensing they would appear to a distant low-resolution telescope as recurrent separate events from the same prospective source on the sky, but with quite different spectral and temporal structures. This specific feature of the gravitational lensing of relativistic fireballs may influence the results of statistical searches for GRB repetition . In the following we determine the necessary physical parameters of a possible GL to imitate recurrent multiple GRBs with different spectra and light curves from the same GRB event.
For a general mass distribution in the GL (deflector) with Newtonian potential $`\varphi (\vec{r})`$ the deflection angle of a separate ray can be expressed (see e.g. ) as the two-dimensional vector
$$\vec{\alpha }=\frac{2}{c^2}\int ds\,\vec{n}\times (\vec{n}\times \nabla \varphi ),$$
(1)
where $`\vec{n}`$ is a unit vector along the ray and the integral is performed along the ray as well. Fig. 1 represents a schematic view of the possible GRB (i.e. relativistic fireball) lensing geometry, with the center of the GL (deflector) in general shifted from the line connecting the observer and the source, which coincides with the path of the undeflected ray (i.e. the ray in the absence of the deflector). The estimate for the potential is $`\varphi \simeq \sigma ^2`$ for a GL composed of constituent masses (e.g. stars or galaxies in a cluster) moving with a virial velocity dispersion $`\sigma `$. It then follows from Eq. (1) that the value of the deflection angle $`\alpha =|\vec{\alpha }|`$ is always small, $`\alpha \simeq \varphi /c^2\simeq \sigma ^2/c^2\ll 1`$. In the limit of a small deflection angle there are general relations for the geometric time delay $`\Delta t_g`$, which is caused by the path difference between the deflected and undeflected rays,
$$\mathrm{\Delta }t_g=(1+z_d)\frac{\alpha \xi }{2c}$$
(2)
and for the separation $`\xi `$ between the deflected and undeflected rays
$$\xi =\frac{D_dD_{ds}}{D_s}\alpha ,$$
(3)
where $`D_s`$, $`D_d`$, and $`D_{ds}`$ are the angular diameter distances from the observer to the source of the GRB, from the observer to the deflector (lens), and from the deflector to the source, respectively, and $`z_d`$ is the deflector redshift. Besides the geometric time delay $`\Delta t_g`$ there is an additional general relativistic time delay $`\Delta t_{gr}`$ due to traversing the region with a gravitational field (“Shapiro delay”)
$$\Delta t_{gr}=-\frac{2}{c^3}\int ds\,\varphi .$$
(4)
Both the geometric $`\Delta t_g`$ and gravitational $`\Delta t_{gr}`$ time delays are of the same order of magnitude for a general ray position, as can be verified from Eqs. (2) and (4) using the estimate $`\alpha \simeq \varphi /c^2`$ from Eq. (1). So the corresponding total time delay $`\Delta t=\Delta t_g+\Delta t_{gr}\simeq GM(\xi )/c^3`$ is defined by the effective mass $`M\simeq M(\xi )`$ of the GL inside the radius $`r=\xi `$. It must be borne in mind that in some degenerate cases of GL symmetry this estimate of the time delay between images may provide only an upper limit on $`\Delta t`$, because the time delays of different rays tend to compensate each other.
Now we can formulate two necessary requirements for the production of lensed images of causally disconnected regions of the relativistic fireball. For the general case of both the lens (deflector) and the relativistic fireball (source) at comparable (cosmological) distances, $`D_d\sim D_s`$, the angle between two outgoing rays is $`\theta =\alpha (D_d/D_s)\simeq \alpha `$. Using the deflection angle estimate $`\alpha \simeq \sigma ^2/c^2`$ and the causal disconnection condition $`\theta >\theta _c\simeq \Gamma ^{-1}`$ we find the first requirement on the velocity dispersion $`\sigma `$ in the GL: $`\sigma \gtrsim c/\Gamma ^{1/2}`$, with an expected $`\Gamma \simeq 10^3`$ in the GRB case. The second requirement, on the effective total mass $`M`$ of the GL, follows from the estimate of the time delay $`\Delta t(1,2)=\Delta t(1)-\Delta t(2)`$ between two images: $`M\simeq c^3\Delta t(1,2)/G`$ with $`\Delta t(1,2)\simeq 1`$ day. One of these requirements may be replaced by an equivalent one for the GL radius, $`R<\Gamma c\Delta t(1,2)`$, with the use of the virial relation $`R\simeq GM/\sigma ^2`$.
To specify these requirements more explicitly we consider a simple model of the gravitational lensing of a relativistic fireball by a singular isothermal sphere (SIS), with the center of the SIS shifted by a distance $`x`$ from the line connecting the source and the observer (which is the path of the undeflected ray), as shown in Fig. 1. It is convenient to suppose that the SIS is confined within some finite radius $`R`$. The spherically symmetric and constant temperature mass distribution $`M(r)`$ in this model of the GL is connected with the current radius $`r`$ by the relation
$$r=\frac{GM(r)}{\sigma ^2},$$
(5)
where $`\sigma =\mathrm{const}\ll c`$ is the line-of-sight velocity dispersion in the SIS. The deflection angle of this GL is independent of the impact parameter of the ray (while it is much less than $`R`$) and equals
$$\alpha =4\pi \left(\frac{\sigma }{c}\right)^2.$$
(6)
This equation specifies the first requirement for the causal disconnection of images, $`\alpha \ge \Gamma ^{-1}`$, which provides the restriction on the velocity dispersion in the GL:
$$\sigma \ge \frac{c}{(4\pi \Gamma )^{1/2}}\simeq 3\times 10^3\,\Gamma _3^{-1/2}\text{ km s}^{-1},$$
(7)
where $`\mathrm{\Gamma }_3=\mathrm{\Gamma }/10^3`$.
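As a quick numerical illustration of Eq. (7) (a sketch, not part of the original derivation), the minimal velocity dispersion can be evaluated for a few bulk Lorentz factors; the values of $`\Gamma `$ other than $`10^3`$ are shown only for orientation:

```python
import math

c_kms = 2.998e5  # speed of light in km/s

# Eq. (7): sigma >= c / sqrt(4 pi Gamma)
for Gamma in (1e2, 1e3, 1e4):
    sigma_min = c_kms / math.sqrt(4.0 * math.pi * Gamma)
    print(f"Gamma = {Gamma:.0e}:  sigma_min ~ {sigma_min:.0f} km/s")
```

For $`\Gamma =10^3`$ this reproduces the quoted $`\simeq 3\times 10^3`$ km s$`{}^{-1}`$.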
To determine the required scales of the time delay $`\Delta t(1,2)\simeq 1`$ day and the required deflection angle $`\alpha `$ we consider two images produced by the lensing of two rays, ‘ray 1’ and ‘ray 2’, coming through opposite sides of the SIS with corresponding impact parameters $`r_1`$ and $`r_2`$ such that $`\mathrm{max}(r_1,r_2)\ll R`$. The gravitational potential inside the SIS (at $`r<R`$) is
$$\varphi (r<R)=-\sigma ^2\left(1+\mathrm{ln}\frac{R}{r}\right).$$
(8)
Because $`\alpha =\mathrm{const}`$, any two rays coming through opposite sides of the SIS have the same separation $`\xi _1=\xi _2=\xi =(r_1+r_2)/2`$. As a consequence the geometric time delay between ray 1 and ray 2 equals zero according to Eq. (2). So the only time delay between the two considered rays results from the difference of the gravitational time delays, $`\Delta t(1,2)=\Delta t_{gr}(1)-\Delta t_{gr}(2)`$. After simple but lengthy calculations we find
$$\mathrm{\Delta }t(1,2)=4\pi (1+z_d)\left(\frac{\sigma }{c}\right)^2\frac{x}{c}=4\pi (1+z_d)\frac{GM(x)}{c^3},$$
(9)
where the last equality follows from Eq. (5). According to the limitation of Eq. (7), the shift of the SIS center $`x=(r_2-r_1)/2`$ satisfies
$$x\le \frac{c\Delta t(1,2)}{1+z_d}\Gamma \simeq \frac{0.84}{1+z_d}\frac{\Delta t(1,2)}{1\text{ day}}\Gamma _3\text{ pc}.$$
(10)
Using $`\mathrm{\Delta }t(1,2)`$ from Eq. (9) as an observed time delay between different GRB lensing images of the same relativistic fireball we find the required mass of the GL:
$$M\ge M(x)\simeq \frac{1.4\times 10^9}{1+z_d}\frac{\Delta t(1,2)}{1\text{ day}}\mathrm{M}_{\odot }.$$
(11)
This estimate is valid to order-of-magnitude accuracy for more complex mass distributions inside the GL and in fact provides only a lower limit on the possible GL mass because of the possible time delay compensation between different images.
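A short numerical check of Eqs. (9)–(11) is sketched below; the deflector redshift $`z_d=1`$, $`\Gamma =10^3`$ and $`\Delta t(1,2)=1`$ day are illustrative assumptions, and the script simply reproduces the quoted scales for $`x`$ and $`M(x)`$:

```python
import math

G     = 6.674e-8    # cm^3 g^-1 s^-2
c     = 2.998e10    # cm/s
M_sun = 1.989e33    # g
pc    = 3.086e18    # cm
day   = 86400.0     # s

z_d, Gamma, dt = 1.0, 1.0e3, 1.0 * day   # assumed example values

# Eq. (10): upper bound on the offset x of the SIS center
x_max = c * dt * Gamma / (1.0 + z_d)
# Eq. (9)/(11): mass inside x needed for a delay dt
M_x = c**3 * dt / (4.0 * math.pi * G * (1.0 + z_d))

print(f"x   <~ {x_max / pc:.2f} pc")
print(f"M(x) ~ {M_x / M_sun:.1e} M_sun")
```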
It follows from Eqs. (7) and (11) that the necessary restrictions on the parameters of a GL which produces (i) one-day time-delayed and (ii) causally independent images (with different spectra and temporal histories) of the same single GRB fireball are the following: the required velocity dispersion of the constituent masses in the GL for the generation of the multiple GRB events of October 1996 is $`\sigma \ge 3\times 10^3\text{ km s}^{-1}`$ and the total mass of the GL is $`M\ge 10^9\mathrm{M}_{\odot }`$ for an effective GRB redshift $`z\simeq 1\div 2`$. These required values of $`M`$ and $`\sigma `$ may be realized in real lenses such as a sufficiently dense cluster of galaxies or a massive central stellar cluster in a galactic nucleus. In addition, Eqs. (3) and (7) provide another restriction: for the rather general case of $`D_s\sim D_d`$ the deflector must be situated at a distance from the source $`D_{ds}\simeq \xi /\alpha \lesssim 10^3\Gamma _3R`$. For an angular diameter distance $`D_{ds}`$ of cosmological scale the suitable $`R\simeq 1`$ Mpc is a characteristic scale of a cluster of galaxies, and respectively for $`D_{ds}\simeq 1`$ kpc (a GRB originating in a distant galactic disk) the suitable $`R\simeq 1`$ pc is the scale of the central stellar clusters in the host galactic nucleus. Both examples of promising GL candidates again agree with the other requirements above. At the same time, both deflectors are among the most common examples of the presently observed cosmological lenses. Note that the considered model works for fireballs emerging inside (or close to) the deflectors: either (i) in one of the galaxies of the host cluster or (ii) in the galactic disk near the host galactic nucleus. This proximity of the source to the deflector increases the probability of the lensing with respect to, for example, the lensing of quasars (by a distant foreground galaxy or cluster of galaxies).
The gravitational potential of a real GL must be nonspherical to generate 4 or more images. The simplest model is the SIS potential perturbed by a quadrupole shear, e.g. caused by rotation, which in general generates one faint and four bright images. A massive rotating black hole can also produce a suitable nonspherical gravitational field.
In general we have demonstrated that identical spectra and time histories cannot serve as specific signatures of gravitational lensing in the case of relativistic sources. In particular, the gravitational lensing of a relativistic fireball from a single GRB event may imitate GRB recurrence.
# Spatial Correlation of the Topological Charge in Pure SU(3) Gauge Theory and in QCD (hep-lat/9912053)
COLO-HEP-441
December 1999.
Figure 1: Spatial correlation of the topological charge density on pure gauge configurations. The insert enlarges the large $`r`$ region. The solid line corresponds to the four parameter fit described in the text.
Anna Hasenfratz
Physics Department, University of Colorado,
Boulder, CO 80309 USA
## Abstract
We study the spatial correlator of the topological charge density operator in pure SU(3) gauge theory and in two flavor QCD. We show that the data for distances up to about 1 fm are consistent with a vacuum consisting of individual instantons and closely bound pairs. The percentage of paired objects is twice as large on the dynamical configurations as on the pure gauge ones, implying increased molecule formation due to fermionic interactions.
Our understanding of the QCD vacuum has increased considerably in the last few years. Lattice calculations using different algorithms have consistent predictions for the topological susceptibility in pure gauge QCD , and the first results in dynamical systems have been published recently ,. Results concerning the density and size distribution of instantons are less consistent, but there is increasing evidence that the pure gauge QCD vacuum is filled with instantons of average radius $`\simeq 0.3`$ fm, with a density of about 1 $`\mathrm{fm}^{-4}`$ ,,.
Instanton Liquid Models (ILM) closely describe the phenomenological properties of instantons in the QCD vacuum . Even the Random ILM provides an accurate description of the pure gauge instanton vacuum, indicating that the gauge interaction, and consequently the spatial correlation between instantons, is small. The situation is quite different for systems with dynamical light quarks. In the zero quark mass limit the topological susceptibility is zero and there are no unpaired topological objects in the vacuum. Yet one expects instantons to be present. Chiral symmetry is spontaneously broken, $`\langle \overline{\psi }\psi \rangle \ne 0`$ in the zero quark mass limit. According to the Casher-Banks formula, the chiral condensate (summed over quark flavors) is
$$\langle \overline{\psi }\psi \rangle =\pi \rho (0),$$
(1)
implying that $`\rho (0)`$, the density of the eigenmodes of the Dirac operator at eigenvalue zero, is finite. Since instantons are the leading candidates to create near-zero eigenmodes of the Dirac operator, the vacuum is likely to be filled with instantons. Fermions create an attractive force between oppositely charged instantons. It is therefore natural to expect that the instantons of the zero quark mass vacuum form instanton-antiinstanton pairs, molecules. In the case of finite quark mass the topological susceptibility does not vanish but it is proportional to the quark mass
$$\chi =\langle \overline{\psi }\psi \rangle \frac{m_q}{n_f^2}+O(m_\pi ^4)=\frac{f_\pi ^2m_\pi ^2}{4n_f}+O(m_\pi ^4),$$
(2)
where $`f_\pi =132`$MeV is the pion decay constant. The vacuum in this case has unpaired objects in addition to the molecules.
Even though the above picture describing spatial correlation between instantons sounds very natural, no evidence from lattice calculations has supported it so far. The culprit is most likely the lattice approach. In order to reveal individual topological objects, vacuum fluctuations have to be removed; the vacuum has to be smoothed. Almost all smoothing procedures distort the vacuum, and they destroy molecules especially easily. In addition, the smoothed configurations are frequently analyzed by a pattern-recognition algorithm to identify individual instantons. Most algorithms have a built-in cut-off which limits the nearest objects they can resolve, further limiting the possibility of finding closely bound pairs.
In this paper we study the topological density correlator
$$C(r)=\frac{1}{V}\int d^4x\,q(x)q(x+z),\quad r=|z|,$$
(3)
where $`q(x)`$ is the topological charge density measured using the improved charge operator of Refs. ,. We measured $`C(r)`$ on two sets of configurations with similar lattice spacings. The first set is a pure gauge ensemble of $`16^3\times 32`$ configurations at $`\beta =6.0`$; the second is a dynamical ensemble of the same size, generated with two flavors of staggered fermions at $`\beta =5.7`$ with $`ma=0.01`$. The lattice spacing of the pure gauge ensemble is $`a\simeq 0.095`$ fm, that of the dynamical ensemble is $`a\simeq 0.11`$ fm. Both ensembles are from the NERSC QCD archive. For this study 170 of the pure gauge configurations and 83 of the dynamical configurations (the entire available data set) were used.
The gauge configurations have to be smoothed to reveal the topological content. Here we used APE smearing with parameter $`c=0.45`$, as the properties of this smoothing have been extensively tested in Refs. ,. $`N`$ steps of APE smearing smooth a configuration to a distance $`d_s\simeq a\sqrt{Nc/3}`$ but preserve the properties of the vacuum at longer distances. First we will compare the topological density correlators on the pure gauge and dynamical configurations after the same number, $`N=30`$, of APE steps. Since the lattice spacings of the two ensembles are approximately the same, this corresponds to about the same physical smoothing, and even observables that change with smoothing can be compared. Afterwards we will vary $`N`$ between 10 and 60 to justify the above assumption.
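As a small illustrative check (not taken from the original paper), the smearing distance $`d_s\simeq a\sqrt{Nc/3}`$ at the $`N=30`$ comparison point can be evaluated for the two lattice spacings quoted above:

```python
import math

c_ape, N = 0.45, 30   # APE smearing parameter and number of steps

for label, a in (("pure gauge", 0.095), ("dynamical", 0.11)):   # lattice spacings in fm
    d_s = a * math.sqrt(N * c_ape / 3.0)
    print(f"{label}: d_s ~ {d_s:.2f} fm")
```

Both values come out around 0.2 fm, i.e. approximately the same physical smoothing for the two ensembles.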
First we consider the topological susceptibility. The susceptibility is largely independent of the number of smoothing steps on both configuration sets. Several previous studies found that the topological charge has a very large autocorrelation time on dynamical configurations, sometimes hundreds of molecular dynamics time units. We did not find this problem here. The average charge on the dynamical configurations is $`\langle Q\rangle =0.16\pm 0.25`$ and the charge distribution also appears standard. We found $`\chi ^{1/4}=193(4)`$ MeV on the pure gauge ensemble and $`\chi ^{1/4}=130(5)`$ MeV on the dynamical ensemble. This decrease of the susceptibility is expected according to Eq. 2. Using the published pion mass value $`m_\pi a=0.25`$ of the dynamical ensemble in Eq. 2 we obtain $`f_\pi =(105\pm 20)`$ MeV, fairly close to the experimental value $`f_\pi =132`$ MeV. (Note that Eq. 2 differs from the equivalent equation of Ref. . The author is indebted to Peter Hasenfratz for checking Eq. 2.)
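A rough numerical cross-check of this $`f_\pi `$ estimate is sketched below; converting $`m_\pi a=0.25`$ to physical units via $`a\simeq 0.11`$ fm is our assumption about how the quoted number was obtained:

```python
import math

hbar_c = 197.327              # MeV fm
a      = 0.11                 # fm, dynamical-ensemble lattice spacing
m_pi   = 0.25 * hbar_c / a    # MeV, from m_pi a = 0.25
chi14  = 130.0                # MeV, chi^{1/4} on the dynamical ensemble
n_f    = 2

# Eq. (2): chi = f_pi^2 m_pi^2 / (4 n_f)  =>  f_pi = 2 sqrt(n_f) chi^{1/2} / m_pi
f_pi = 2.0 * math.sqrt(n_f) * chi14**2 / m_pi
print(f"m_pi ~ {m_pi:.0f} MeV,  f_pi ~ {f_pi:.0f} MeV")
```

The result, roughly 107 MeV, is consistent with the quoted $`f_\pi =(105\pm 20)`$ MeV.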
Figures 1 and 2 show $`C(r)`$ versus $`r`$ for both configuration sets after $`N=30`$ APE smoothing steps. The inserts of the figures enlarge the large $`r`$ tail. At this smoothing level the vacuum fluctuations are largely removed. A few properties of the instanton vacuum can be immediately deduced from the figures. At small distances $`C(r)`$ is dominated by the auto-correlator of individual instantons and $`C(r)`$ can be approximated by a dilute gas. In the dilute gas approximation the height of the correlator is proportional to the number of topological objects of the configuration, $`N_0`$, and the width is related to the average radius of the instantons, $`\overline{\rho }`$. At distances $`r\simeq 2\overline{\rho }`$ the correlator probes the close neighbors of the instantons. If there is no spatial correlation between topological objects, $`C(r)`$ should be about zero for $`r\ge 2\overline{\rho }`$. (In the continuum, reflection positivity requires $`C(r)<0`$ for all $`r\ne 0`$. On the lattice that is not the case for small $`r`$. Since smoothing removes most of the vacuum fluctuations, one expects $`C(r)`$ to be dominated by non-perturbative vacuum structures.) If pairing of oppositely charged objects occurs, $`C(r)`$ is expected to be negative, reaching its minimum around $`r\simeq 2\overline{\rho }`$. Comparing now the pure gauge and dynamical correlators of figures 1 and 2 we observe that the widths of the correlators are almost identical but the height at $`r=0`$ is very different. This implies that $`\overline{\rho }`$ is approximately the same for the two ensembles but the dynamical configurations have about three times fewer topological objects. The tails of the two distributions reveal further differences. While $`C(r)`$ on the pure gauge ensemble is consistent with zero for $`r\gtrsim 0.7`$ fm, $`C(r)`$ on the dynamical ensemble shows an unmistakable negative dip signaling opposite-charge correlation.
To quantify these differences, we model the vacuum with a simple picture. Assume that there are $`N_0`$ topological objects in the vacuum with uniform radius $`\overline{\rho }.`$ If the topological density of a single instanton of radius $`\overline{\rho }`$ centered at the origin is $`q_0(x)`$ and the instantons are non-overlapping, the topological density of the configuration in this model is
$$\tilde{q}(x)=\sum _{i=1}^{N_0}s_iq_0(x-x_i),$$
(4)
where $`x_i`$ is the center of the $`i`$-th topological object and $`s_i=\pm 1`$ depending on whether it is an instanton or an antiinstanton. The topological correlator is
$$\tilde{C}(z)=\sum _{i,j}s_is_jC_\rho (z-x_i+x_j),$$
(5)
where $`C_\rho (z)`$ is the topological correlator of a single instanton. If the instantons are randomly distributed, averaging over configurations will give
$$\langle \tilde{C}(z)\rangle =N_0C_\rho (z).$$
(6)
If only $`N_0-2n_p`$ objects are randomly distributed and the other $`2n_p`$ objects form $`n_p`$ molecules (i.e. $`2n_p`$ oppositely charged instantons are paired), the average of the topological correlator is
$$\langle \tilde{C}(z)\rangle =N_0C_\rho (z)-n_pC_d(z),$$
(7)
where $`C_d(z)`$ is the correlator of an instanton-instanton pair separated by distance $`d`$, averaged over the direction of $`d`$. $`C_\rho `$ and $`C_d`$ can be calculated using the analytical form of the single instanton charge distribution. If the instanton size is $`\overline{\rho }0.3`$ fm and the pairs are closely bound, i.e. $`d0.6`$ fm, this model can be valid in the $`0r0.60.9`$ fm range. The measured $`C(r)`$ then can be fitted with the four parameters $`N_0,`$ $`n_p`$, $`\overline{\rho }`$ and $`d`$. In the fit we keep data with $`0r/a58`$ as for $`r/aL/2=8`$ finite size effects become important. Varying the upper range of the fit between $`r/a=5`$ and 8 hardly effects the fit parameters, it changes the $`\chi ^2`$ of the fit only. The solid lines in figures 1 and 2 correspond to the best fit. The fit parameters listed in table 1 confirm our earlier observations. The average instanton size on both ensembles is about $`0.3`$ fm. The instanton density on the pure gauge ensemble is about $`1`$ fm<sup>-4</sup>, in agreement with expectations, while the density on the dynamical configurations is about a third of that. On both ensembles we observe pairs at a distance of about $`2\overline{\rho }`$. On the pure gauge configurations 15 percent of the instantons are in pairs, while on the dynamical configurations close to 30 percent are paired up. It is not surprising to find pairs even on the pure gauge configurations, as there is an attractive gauge interaction between oppositely charged instantons, but pairing is clearly enhanced on the dynamical ensemble.
Until now we have compared the dynamical and pure gauge ensembles at the same level of APE smoothing. In the remainder of this paper we study the dependence on the level of smoothing. The smoothing has two different effects on a configuration: it removes vacuum fluctuations and it also distorts and annihilates instantons. These two effects cannot be separated. The first effect is the important one for small amounts of smoothing, but as the configuration is smoothed many times, the second one becomes more relevant. We have measured the correlator $`C(r)`$ at $`N=10,20,30,40,50`$ and $`60`$ levels of APE blocking on both ensembles, and also at $`N=15`$ on the dynamical configurations only. The same level of APE smoothing can have a very different effect at different lattice spacings. The smearing distance $`d_s=a\sqrt{Nc/3}`$ characterizes the removal of vacuum fluctuations, but it is not clear what is the variable (if one exists at all) that describes how instantons are annihilated or otherwise distorted by APE smoothing at different lattice spacings. In the following we report the results as a function of the smearing distance. The qualitative form of the correlator is the same at all levels of smearing and we fit the data with four parameters as described above. The fit is excellent in every case except at the $`N=60`$ level of smoothing, where the smearing distance is comparable to the lattice size and finite size effects become important.
Figure 3 shows the ratio of paired objects to $`N_0`$ as a function of $`d_s`$. The ratio decreases rapidly up to $`d_s\simeq 0.2`$ fm but changes more slowly after that, indicating that most of the vacuum fluctuations have been removed and further change is due to the annihilation of topological objects. Comparing data at the same smoothing distance even increases the previously observed difference between the pure gauge and dynamical ensembles; the dynamical configurations show more than twice the pairing rate of the pure gauge ones.
It is interesting to ask which instantons contribute to the topological susceptibility: all of them or only the unpaired ones? Figure 4 shows the density of the unpaired objects, $`(N_0-2n_p)/V`$, as a function of the smearing distance $`d_s`$. The horizontal lines indicate the values of the corresponding topological susceptibilities in $`\mathrm{fm}^{-4}`$. The agreement implies that the unpaired instantons follow a Poisson distribution and only they contribute to the topological susceptibility.
We have studied only one dynamical ensemble, with two quark flavors. It would be important to extend this work to include different, preferably smaller, quark masses and four quark flavors. In both cases one expects the fermionic effects to be stronger. We mention here that the NERSC archive contains two smaller (less than 50 configurations) dynamical configuration sets, both at heavier quark masses. We have analyzed the available configurations, but within errors they show no significant difference compared to the dynamical configurations discussed here.
In summary, we have demonstrated important differences between pure gauge and dynamical QCD configurations. On the dynamical configurations the topological susceptibility is considerably smaller than on the pure gauge configurations. This is due both to the decreased density of instantons and to the stronger pairing of oppositely charged objects. In the case we studied here we found that the density of instantons decreased by about a factor of three while pairing occurred twice as frequently on the dynamical configurations.
The author is indebted to T. DeGrand for carefully reading the manuscript. This calculation would not have been possible without the configurations available at NERSC. We would like to thank the Colorado High Energy experimental group for allowing us to use their work stations. This work was supported by the U.S. Department of Energy.
# Computers for Lattice QCD (hep-lat/9912009)
## 1 INTRODUCTION
The overriding objective of present work in lattice QCD is to achieve an accurate representation of continuum QCD and to use this ability to study physically important properties of the underlying theory. This research demands a combination of efficient numerical algorithms, physical quantities appropriately represented for numerical study and fast computers to carry out the needed calculations. While the development of algorithms and inventions of methods to tackle new problems are natural tasks for theoretical or computational physics, it is most often the large machines that produce the actual numerical results of importance to physics. Thus, the development, characteristics and availability of large scale computer resources are necessarily central topics in lattice QCD.
Fortunately there is much progress to report since this topic was last addressed in a plenary talk at a lattice meeting. In particular, since that review the 300 Gflops CP-PACS in Tsukuba and the 120 and 180 Gflops QCDSP machines at Columbia and the RIKEN Brookhaven Research Center have come into operation. In addition, construction is beginning on the next APE machine, APEmille. These large machines offer critical opportunities for progress on the most demanding calculations at the frontier of lattice QCD, especially exploration beyond the quenched approximation in which the full effects of dynamical fermions are incorporated.
While a survey of present computer resources (see below) shows a marked decline in the importance of large-scale commercial supercomputers (only the Hitachi machine in Tsukuba belongs to this category), there is a promising appearance of a new class of machines: workstation farms. While also commercial machines, workstation farms are often assembled by the group wishing to use them and are built from cost effective PC’s or workstations connected with a commercial network. While present workstation farms cannot deliver the high performance offered by the large projects mentioned above, they are easily assembled and can quickly exploit advances in PC technology.
In Table 1 we follow Sexton’s example and attempt to give a picture of the computer resources presently available for lattice QCD research.
## 2 THREE LARGE PROJECTS
The CP-PACS, QCDSP and APEmille machines represent somewhat varied examples of how large-scale computer resources can be provided for lattice QCD studies.
### 2.1 CP-PACS
The first of this generation of machines to be completed was CP-PACS, which has been working since the middle of 1996. It contains 2048 independent processors connected by a 3-dimensional hyper-crossbar switch joining processors and I/O nodes into a $`17\times 16\times 8`$ grid. All nodes with two out of three identical Cartesian coordinates are connected by their own 300 MB/sec crossbar switch.
The processor is an enhanced version of a standard Hewlett Packard reduced instruction set computer and carries out 64-bit floating point arithmetic. The processor enhancement involves the addition of extra floating point registers allowing the efficient transfer of long vectors into the processor. The peak speed of each processor is 300 Mflops so the peak speed of the 2048-node machine is 0.6 Teraflops.
The resulting architecture achieves remarkable efficiencies for QCD code (above 50%) and appears to be useful for other applications as well. Perhaps equally impressive is the fact that FORTRAN, $`C`$ and $`C`$++ compilers are available which support the high performance features of the machine. While much of the lattice QCD code is written in these high level languages, careful assembly language programming is used for the critical inner loops.
The machine consumes 275 KWatts and was built within a $22M budget giving a cost performance figure of $73/Mflops. The machine can be partitioned into a number of disjoint units, each with its own queue. Typically the queues are reconfigured about once per month. During their large quenched hadron mass calculation the machine was configured as a single partition about one third of the time.
### 2.2 QCDSP
The next machines of this present generation to be finished are the QCDSP machines at Columbia and the RIKEN Brookhaven Research Center. These machines are based on a very simple node made up of a commercial, 50 Mflops, 32-bit digital signal processor chip (DSP), a custom gate-array (NGA) and 2 MB of memory. These are mounted on a single 4.5$`\times `$7.5cm SIMM card as shown in the picture in Figure 1. The NGA enhances DSP performance by providing a 32-word buffer between the DSP and memory. This buffer acts in different ways depending on which of nine images of memory the DSP addresses, varying from normal memory access (with a minimum of 2 wait-states) to a fetch ahead mode which permits 25 Mword/sec data motion from memory and 0 wait-state random access by the DSP to a portion of the 32-word buffer.
The NGA also provides inter-processor communication, supporting simultaneous $`\sim `$40 Mb/sec serial communication between the node and each of its eight nearest neighbors in four dimensions. This net 40 MB/sec bandwidth per node is sufficient for efficient execution of lattice QCD code even when there are as few as $`2^4`$ sites per node. In addition, the NGA implements pass-through or “worm-hole” operations that allow efficient global floating point sums and broadcasts.
Most of the code for these QCDSP machines is written in $`C`$++ with the critical low level routines written in DSP assembly language. The fastest code for Wilson or domain wall fermions achieves 30% of peak performance or 15 Mflops on a single node. Thus the 8,192 node machine at Columbia and the 12,288 node machine at the RBRC at Brookhaven can achieve 0.12 and 0.18 Teraflops respectively. The RBRC machine was the final machine constructed and, including Brookhaven assembly labor, cost $1.85M giving a cost performance figure of $10/Mflops; it received the 1998 Gordon Bell Prize for cost performance. It consumes approximately 60 KWatts. By a rearrangement of cables, a QCDSP machine can be configured as anything from a single large machine run from a single host to a variety of smaller machines run by many hosts, with some hosts running an array of machines.
### 2.3 APEmille
APEmille is the next in a series of QCD machines built by the Italian and now DESY groups. This machine is constructed of custom floating point processors, with a peak, single-precision speed of 538 Mflops. Each processor has 32 Mbytes of memory and is connected to its six nearest neighbors in a 3-dimensional mesh. The total off-node bandwidth provided to a single processor is 132 MB/sec. The processors are organized into clusters of eight with each such cluster fed instructions SIMD-style by a single controller. Four of these eight-node clusters are then controlled by a built-in Linux “workstation” with I/O capability. One large machine can be subdivided in software into a number of independent SIMD units.
The machine will be programmed in two languages, $`C`$++ and TAO, with the latter language providing backward compatibility with APE100 TAO code. A 2048-node machine is expected to consume 20 KWatts. Such a machine would have a peak speed of 1 Teraflops and is expected to sustain at least 50%, or 0.5 Teraflops. At a cost of $2.5M this will yield a machine with a cost performance of $5/Mflops. There is a 32-processor unit working now. Two 250 Gflops and two 128 Gflops units are planned for INFN and at least one 250 Gflops unit for DESY, all by next summer.
## 3 SMALLER MACHINES
In addition to the three large projects described above, there are a number of smaller machines that provide important resources for lattice QCD calculations. The UKQCD collaboration has the use of a 152-node T3E. With 64 Mbyte, 0.9 Gflops nodes, this machine has a peak performance of 137 Gflops and sustains 41 Gflops on QCD code. Usually there is a large, 128-node partition devoted to production running. There is a 128-node Fujitsu VPP700E with 2 Gbyte, 2.4 Gflops nodes at RIKEN. This machine sustains 150 Gflops and is perhaps 20% available for lattice calculations. An 80-node, 80 Gflops VPP500 is installed at KEK with an upgrade to a 550 Gflops sustained machine expected early next year. In the U.S., Fermilab continues to use its ACP-MAPS machine which sustains 5 Gflops. The LANL group has access to a 64-node 32 Gflops Origin2000 which sustains 3.6 Gflops for QCD. Finally the MILC group has access to a variety of machines include an Origin2000, a T3E and an SP-2, yielding a total sustained performance of 5.8 Gflops.
## 4 WORKSTATION FARMS
An important recent development is the appearance of clusters of workstations, networked together to tackle a single problem. This approach to parallel computing has the strong advantage that it can be pursued with commodity components that are easily assembled and uses standard, platform-independent software, e.g. Linux and MPI. We consider four examples:
Indiana University Physics Cluster. Each node in this 32-node cluster is a 350MHz Pentium II with 64 MB of memory, joined through a switch using 100 Mb/sec Fast Ethernet. Running a straight port of MPI MILC code, S. Gottlieb reports the following benchmarks. A staggered inverter applied to a $`4^4`$ lattice on a single node yields 118 Mflops, while for a $`14^4`$ lattice the performance falls to 70 Mflops (presumably due to less efficient cache usage). Identical code run on the entire 32-node machine achieves 0.288 Gflops on an $`8^3\times 16`$ lattice ($`4^4`$ sites per node). This number increases to 1.4 Gflops for a $`28^3\times 56`$ lattice ($`14^4`$ sites per node). Thus, the effect of the somewhat slow network is to decrease the single node 118 Mflops to 9 Mflops for the smaller lattice, while for the large lattice of $`14^4`$ sites per node the decrease is less severe: 70 to 44 Mflops. While the Fast Ethernet is too slow to allow maximum efficiency, it is very inexpensive, allowing a total machine cost of $25K in Fall of ’98 and impressive cost performance numbers of $87 and $18/Mflops for the two cases.
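The price/performance figures above follow from simple arithmetic; a minimal sketch, using exactly the cost and sustained rates quoted in the text:

```python
cost_dollars = 25_000.0
sustained_mflops = {"8^3 x 16 lattice": 288.0, "28^3 x 56 lattice": 1400.0}

for lattice, mflops in sustained_mflops.items():
    print(f"{lattice}: ~${cost_dollars / mflops:.0f}/Mflops")
```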
Roadrunner Cluster. This is a 64-node machine at the University of New Mexico with each node made up of two Pentium II, 450 MHz, 128 MB processors for a cost of approximately $400K. S. Gottlieb’s benchmarks (using only one of the two processors) show single-node performance of 127 and 71 Mflops for the $`4^4`$ and $`14^4`$ cases. On a 32-node partition, he finds 1.25 and 2.0 Gflops for the $`8^3\times 16`$ and $`28^3\times 56`$ examples. This cluster has a much faster network using Myrinet (1.28 Gb/sec) and a much better 39 Mflops performance per node on the small lattice with the greatest communication demands. The higher performance is balanced by the higher cost of this commercial machine. Arbitrarily discounting the $400K by 1/3 since only one of two processors was used, we obtain $106/Mflops and $67/Mflops for these two different sized lattices.
ALICE. This is a 128-node, Alpha-based machine with a Myrinet multistage crossbar network being assembled in Wuppertal. Each node is a 466 MHz 21264 Alpha processor with single-node performance for QCD code of 175 Mflops (double precision). The full system is expected to achieve 20 Gflops (double) and $`>`$30 Gflops (single precision). Based on a list price of about $0.9M, this machine provides $45/Mflops, a number sure to be lower when the actual price is determined.
NCSA NT Myrinet Cluster. This cluster of 96, dual 300MHz Pentium II nodes has been used by D. Toussaint and K. Orginos for MILC benchmarks and by P. Mackenzie and J. Simone for a Canopy test, each using a $`12^3\times 24`$ lattice. While the performance per node decreases from 22.5 to 12.8 for the Canopy test as the number of nodes increases from 1 to 12, the MILC code runs at essentially 50 Mflops/node as the number of nodes varies between 1 and 64. Thus, a 64-node machine sustains 3.2 Gflops for the MILC code. It is impressive that the Canopy code could be ported without great difficulty and its performance is expected to increase as extensions to MPI are added which provide features needed for the efficient execution of Canopy.
## 5 THE FUTURE
In light of the above discussion it is interesting to discuss possible future directions for lattice QCD computing. Figure 2 shows the computing power and price performance of the various systems just discussed. That summary suggests that workstation farms will soon offer unprecedented, cost-effective, 10-50 Gflops computing. By allowing rapid upgrades to current technology and using standard software, this approach provides an easy migration path to increasingly cost effective hardware. If a relatively fast network is provided, the general interconnectivity supports a variety of communication patterns which are easily accessible from portable software. Workstation farms should provide increasingly accessible and powerful resources for lattice QCD calculations.
However, the largest size of a farm that can be efficiently devoted to evolving a single Markov stream is limited by communication technology. Commodity Ethernet is too slow, and the more effective 1.28 Gb/sec Myrinet product, at $1.6K/node, does not benefit from a mass market. In addition, the intrinsic latency of a general purpose network product limits its applicability for lattice QCD. For a 500 Mflops processor, 10 $`\mu `$sec is required for the application of a single staggered $`D`$ operator to the even sites of a $`2^4`$ local lattice. However, the smallest latency achieved by a Myrinet board is 5 $`\mu `$s, and sixteen independent transfers are needed for such a $`D`$ application, implying a minimum 8$`\times `$ performance decrease for such a demanding problem.
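The latency argument can be made explicit with a small sketch; the flop count per site for one staggered $`D`$ application is an assumption, chosen only to be consistent with the quoted 10 $`\mu `$sec figure:

```python
flops_per_site = 600    # assumed flops for one staggered D application per site
even_sites     = 8      # even sites of a 2^4 local lattice
cpu_flops      = 500e6  # 500 Mflops processor

t_compute_us = even_sites * flops_per_site / cpu_flops * 1e6
t_latency_us = 16 * 5.0   # sixteen independent transfers at 5 microseconds each

print(f"compute ~ {t_compute_us:.0f} us, network latency ~ {t_latency_us:.0f} us, "
      f"slowdown ~ {t_latency_us / t_compute_us:.0f}x")
```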
Thus, it seems likely that high end computing, needed for example to evolve necessarily small lattices with dynamical quarks, will be done on purpose-built machines of the CP-PACS, APEmille, QCDSP variety. In fact, all three groups are already planning their next machine. While no details are available from Tsukuba, the next APE machine, APEnext, aims for 3-6 Tflops peak speed with 64-bit precision, 1 Tbyte of memory and 1 Gb/sec disk I/O. The next Columbia machine, QCDSP10, is planned as an evolution of the QCDSP architecture. The new processor may be a 0.67 Gflops, C67 Texas Instruments DSP with 128 KB of on-chip memory. The chip also supports 64-bit IEEE floating point at 0.17 Gflops. We plan to begin development this Fall and hope to have a 16K node, 10 Tflops peak, 5 Tflops sustained machine in operation within three years. In addition, we would like to construct a larger, community machine supporting a more competitive scale of high end computing for the entire U.S. lattice QCD activity.
## 6 ACKNOWLEDGMENTS
The author is indebted to many people for providing much of the information presented here, in particular, S. Gottlieb, R. Gupta, B. Joo, K. Kanaya, R. Kenway, T. Lippert, P. Mackenzie, R. Petronzio, S. Ohta, M. Okawa, J. Simone, D. Toussaint, R. Tripiccione, and T. Yoshie.
# Supersymmetric derivation of the hard core deuteron’s bound state (quant-ph/9912066)
## Abstract
A supersymmetric construction of potentials describing the hard core interaction of the neutron-proton system at low energies is proposed. It considers only the binding energy case and uses the approximation of the Yukawa potential given by Hulthén. Recent experimental data for the binding energy of the deuteron are used to give the orders of magnitude involved.
Supersymmetry, Deuteron binding energy
34.20.Cf, 03.65.Ge, 03.65.Ca
Preprint UVA-ES-10-99
The neutron-proton (n-p) system is the simplest of the composite nuclei. It can exist in a stable bound state, the deuteron. This is a nuclide for which no excited bound states are known, i.e., it exists only in its ground state. For an n-p system in which there are no electrostatic forces (i.e., the force due to the two magnetic moments can be considered a small, irrelevant correction to the nuclear forces) the interaction is described by a short range central force. Forms of the related potentials commonly used are the square well, gaussian and Yukawa potentials (see for example ). Among them, the gaussian and the square well potentials have deserved special attention because of their mathematical simplicity. On the other hand, the Yukawa potential has a deeper theoretical significance because of its close connection with the ‘exchange forces’ responsible for the binding between nucleons. Historically, this potential is often associated with the prediction of the existence of the $`\pi `$ meson. Hulthén noticed that the Yukawa potential can be conveniently approximated by the expression
$$V(r)=-\frac{𝒱_0}{e^{r/\alpha }-1},$$
(1)
where $`r`$ is the distance between the nucleons, $`\alpha >0`$ is a fixed length (the range of the potential) and $`𝒱_0>0`$ is a fixed energy (the strength of the potential).
For low energies, the Schrödinger equation for the Yukawa potential is not solvable analytically and the potential (1) is used as a stand-in for it (see and references quoted therein). In this range of energies the problem of describing the n-p interactions can be characterized in two forms: that concerning bound states or that concerning scattering states. In the following, we shall focus only on the former case. On the other hand, it is known empirically that the deuteron has total spin 1. The obvious interpretation is that the spins of the neutron and the proton are parallel in the ground state, which would thus be described by a $`{}^{3}S_{1}`$ state. Therefore, we will restrict ourselves to the study of the nuclear binding energy $`S`$-states.
Let us stress now that the nucleons cannot approach each other closer than a certain distance; otherwise, the nucleus would not show an almost constant nuclear density. We must therefore resort to phenomenological arguments . The usual approach is to modify the nuclear potential at small distances to be consistent with the experimental data. A simpler approach considers the so-called hard core model, which is characterized by a short range infinite repulsion inside the attractive nuclear interaction . In other words, a realistic potential must contain more than a radial term of the form (1): it must also include an infinitely high barrier term.
The present paper investigates the supersymmetric nature of the hard core deuteron’s binding energy by performing calculations on the Hulthén potential. We shall show that the new potential so derived presents a repulsive barrier term and can be either isospectral (i.e., it shares the same spectrum) or almost isospectral (the same spectrum except the ground state) with the Hulthén potential. As a particular case, the new potential is chosen as a representation of the hard core nuclear force describing the n-p low energy binding interactions. The orders of magnitude involved are given by using the experimental value of $`2.22456614(41)\mathrm{MeV}`$ for the deuteron binding energy $`ℰ_d`$ recently reported in .
As usual for potentials depending only on $`r`$, the Schrödinger equation for the Hulthén potential reduces to an eigenvalue equation for a particle in a one dimensional effective potential $`V_\ell (x)=\ell (\ell +1)/x^2+V(x)`$, where $`\ell `$ is the azimuthal quantum number and $`x=r/\alpha `$ is a dimensionless radial coordinate. We are looking for the $`S`$ states of binding energy (i.e., negative energies and $`\ell =0`$); therefore the following equation holds
$$\left[\frac{d^2}{dx^2}+\frac{V_0}{e^x-1}-k^2\right]\psi (x)=0,$$
(2)
where $`\psi (x)\equiv xR(x)`$, with $`R(x)`$ the standard radial wavefunction, $`E=-k^2=(2\mu \alpha ^2/\hbar ^2)ℰ`$ and $`V_0=(2\mu \alpha ^2/\hbar ^2)𝒱_0`$, with $`\mu `$ the reduced mass and $`ℰ`$ the physical energy eigenvalue.
The standard procedure of solution yields the eigenfunctions
$$\psi _n(x)=C_ne^{-kx}(1-e^{-x})\,{}_{2}F_{1}(2k+1+n,1-n,2k+1;e^{-x}),\quad n=1,2,\mathrm{},$$
(3)
with $`C_n`$ a normalization constant
$$C_n=\alpha ^{-3/2}\frac{\mathrm{\Gamma }(n+2k)}{\mathrm{\Gamma }(n+1)\mathrm{\Gamma }(2k+1)}[2k(n+k)(n+2k)]^{1/2}.$$
The corresponding eigenvalues are given by
$$E_n=-k_n^2=-\left(\frac{V_0-n^2}{2n}\right)^2,\quad V_0>n^2,\quad n=1,2,\mathrm{}$$
(4)
Remark that the problem involves two mutually dependent parameters, $`V_0`$ and $`k`$. In practice, the eigenvalue of the energy may be given by experiment while the strength $`V_0`$ of the potential is to be determined. Therefore, equation (4) can be used to evaluate $`V_0`$ in terms of $`E_n`$.
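The content of Eq. (4) is easily explored numerically; the sketch below is purely illustrative, with sample strengths chosen for orientation (the middle value anticipates the one obtained later in the text from the deuteron data):

```python
def hulthen_levels(V0):
    """Dimensionless Hulthen energies E_n = -((V0 - n^2)/(2n))^2, defined while V0 > n^2."""
    levels, n = [], 1
    while V0 > n**2:
        levels.append(-((V0 - n**2) / (2.0 * n))**2)
        n += 1
    return levels

for V0 in (3.0, 6.7784, 10.0):
    print(f"V0 = {V0}: E_n = {[round(E, 4) for E in hulthen_levels(V0)]}")
```

For $`4<V_0<9`$ exactly two bound states appear, which is the regime used below; at $`V_0=6.7784`$ the second level is $`E_2\simeq -0.4825`$, whose magnitude is the dimensionless deuteron binding energy discussed later.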
The dotted curve on Figure 1 represents the potential (1) allowing only one bound state, the deuteron’s ground state, labeled by a subscript $`H`$. In Figure 2 we have plotted the well known corresponding probability density $`|\psi _H(x)|^2`$.
As regards the supersymmetric scheme, we have in the first place to look for a superpotential $`w(x)`$ solving the Riccati equation
$$-w^{\prime }(x)+w^2(x)=V(x)-ϵ,$$
(5)
where the prime denotes the derivative with respect to $`x`$ and the factorization energy $`ϵ=-\kappa ^2`$ is, in principle, any real number. By a simple calculation we get the particular solution
$$w(x)=\kappa -\frac{1}{e^x-1},\quad \kappa >0,$$
(6)
which is useful in the cases when the strength can be rewritten as $`V_0=1+2\kappa `$. As usual, the susy partner $`\stackrel{~}{V}`$ of $`V`$ is given by the shape invariance condition
$$\stackrel{~}{V}(x)=V(x)+2w^{\prime }(x),$$
(7)
leading to
$$\stackrel{~}{V}(x)=-\frac{1+2\kappa }{e^x-1}+\frac{1}{2\mathrm{sinh}^2(x/2)}.$$
(8)
Potential (8) is a well known result in susy quantum mechanics (see ). It has been used to study susy phase-equivalent potentials and to establish some interesting connections between the susy and the variational method .
Observe now the appearance of the r.h.s. term in (8). This term presents a singularity of order $`2/x^2`$ at the origin and behaves just as a repulsive centrifugal term with $`\ell =1`$ in the neighborhood of $`x=0`$:
$$\stackrel{~}{V}(x)\simeq -\frac{1+2\kappa }{x}+\frac{2}{x^2},\quad x\ll 1.$$
(9)
As the value $`x=1`$ implies $`r=\alpha `$, the approximation (9) holds within the range of the initial potential $`V(x)`$. In the region $`x>1`$, the potential $`\stackrel{~}{V}(x)`$ rapidly becomes negligible (see Figure 1). Here, the strength $`V_0=1+2\kappa `$ plays the role of a coupling constant.
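A small numerical sketch (not part of the original text) can verify Eqs. (5)–(9) for the particular superpotential (6); the value of $`\kappa `$ is an assumption here, set equal to the $`k_1`$ obtained later from the deuteron data, and the derivative is taken by finite differences:

```python
import numpy as np

kappa = 2.8892                  # assumed value (= k_1 found later in the text)
V0 = 1.0 + 2.0 * kappa

V       = lambda x: -V0 / (np.exp(x) - 1.0)                              # Eq. (1)/(2)
w       = lambda x: kappa - 1.0 / (np.exp(x) - 1.0)                      # Eq. (6)
V_tilde = lambda x: -V0 / (np.exp(x) - 1.0) + 0.5 / np.sinh(x / 2.0)**2  # Eq. (8)

x = np.linspace(0.05, 5.0, 200)
h = 1e-6
w_prime = (w(x + h) - w(x - h)) / (2.0 * h)

# Riccati equation (5): -w' + w^2 = V - eps, with eps = -kappa^2
# (both residuals below should vanish up to finite-difference error)
print(np.max(np.abs(-w_prime + w(x)**2 - (V(x) + kappa**2))))
# Partner relation (7)-(8): V_tilde = V + 2 w'
print(np.max(np.abs(V_tilde(x) - (V(x) + 2.0 * w_prime))))
# Small-x behaviour, Eq. (9)
x0 = 1e-3
print(V_tilde(x0), -V0 / x0 + 2.0 / x0**2)
```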
The above results can be used to determine the eigenfunctions and eigenvalues connected with the new potential $`\stackrel{~}{V}(x)`$. The procedure now consists in factorizing the related Hamiltonians as
$$H=A^{\dagger }A+ϵ,\quad \stackrel{~}{H}=AA^{\dagger }+ϵ,$$
(10)
with
$$A\equiv \frac{d}{dx}+w(x).$$
(11)
It is straightforward to check that equations (5), (7) and (11) automatically lead to (10). Then, an intertwining relationship holds:
$$\stackrel{~}{H}A=AH.$$
(12)
If $`\psi (x)`$ is an eigenfunction of $`H`$ with eigenvalue $`E`$, equation (12) gives
$$\stackrel{~}{H}(A\psi )=E(A\psi ),\quad A\psi \ne 0.$$
Therefore, if $`\psi \in L^2(𝐑)`$, we get the normalized eigenstate of $`\stackrel{~}{H}`$
$$\stackrel{~}{\psi }(x)=(E-ϵ)^{-1/2}A\psi (x).$$
(13)
Now let us stress the information displayed by equations (10)–(13). First, in the case when $`ϵ=E_n`$ for some $`n=1,2,\mathrm{}`$, the l.h.s. equation in (10) applies to $`\psi _n(x)`$ as $`H\psi _n(x)=E_n\psi _n(x)`$, and consequently $`A^{\dagger }A\psi _n(x)=0`$. It is easy to check that $`A\psi _n(x)=0`$ is a sufficient condition to get square integrable functions. Therefore, equation (13) means that $`\psi _n(x)`$ does not have a susy partner $`\stackrel{~}{\psi }_n(x)`$ and the couple of Hamiltonians (10) corresponds to a case of unbroken supersymmetry .
On the other hand, when $`ϵ\ne E_n`$ for every $`n=1,2,\mathrm{}`$, there is no square integrable eigenfunction of $`H`$ annihilated by $`A`$, and equation (13) means that every $`\psi _n(x)`$ will have a susy partner $`\stackrel{~}{\psi }_n(x)`$. Now, from the r.h.s. of (10), it is clear that a function $`\stackrel{~}{\psi }_ϵ(x)`$ obeying $`A^{\dagger }\stackrel{~}{\psi }_ϵ=0`$ leads to $`\stackrel{~}{H}\stackrel{~}{\psi }_ϵ(x)=ϵ\stackrel{~}{\psi }_ϵ(x)`$. Hence, if $`\stackrel{~}{\psi }_ϵ\in L^2(𝐑)`$, it must be added to the new set $`\{\stackrel{~}{\psi }\}`$. When $`\stackrel{~}{\psi }_ϵ(x)`$ is a square integrable function the Hamiltonians (10) present unbroken supersymmetry, otherwise they correspond to a case of broken supersymmetry .
For the superpotential we are dealing with, one gets $`\stackrel{~}{\psi }_ϵ(x)\propto e^{\kappa x}(1-e^{-x})^{-1}`$, which is obviously not square integrable in $`[0,\mathrm{\infty })`$ for $`\kappa >0`$. Then, the supersymmetric behaviour of the Hamiltonians (10) lies in the selection of $`ϵ`$, i.e., whether $`ϵ`$ is chosen to be an eigenvalue of $`H`$ or not.
In the following we shall consider the case when the initial potential $`V(x)`$ allows the binding of only two states. The purpose of this convention will become apparent in the sequel. To get a system with only two energy levels, the strength of the Hulthén potential has to be in the domain $`4<V_0<9`$. For these values of $`V_0`$ we get for the energies: $`-16<E_1<-2.25`$ and $`-1.56<E_2<0`$.
In order to get an idea of the orders of magnitude involved, we note that, although there is no a priori reason why $`\alpha `$ should not be different for the different sorts of stationary systems described by $`V(x)`$, we can take its numerical value as $`\alpha =3`$ fm. Such an assertion is justified by the fact that the mean distance between nucleons (i.e., the size of the nucleus) is in the range of 2 to 4 fm . Therefore, we get $`(\hbar ^2/\alpha ^2m_p)\simeq 4.6113\mathrm{MeV}`$. Here, we have assumed that the neutron and proton masses are both equal to $`m_p`$, hence $`2\mu =m_p`$. In this way, the experimental value of the deuteron binding energy $`ℰ_d`$ corresponds to the dimensionless value $`E_d\simeq 0.4825`$ ($`k_d\simeq 0.6946`$). Remark that, for the above selected domain of $`V_0`$, the deuteron energy lies in the domain of the excited state $`E_2`$ and not in the domain of the ground state $`E_1`$.
Let us now consider the factorization energy fixed as $`ϵ=E_1=-k_1^2`$. In this case, as discussed above, we will have a couple (10) with unbroken supersymmetry. Hence $`A\psi _1(x)=0`$, and the potential $`\stackrel{~}{V}(x)`$, with $`V_0\equiv 1+2\kappa =1+2k_1`$, misses the ground state of $`V(x)`$ and admits only one bound state (see equation (13)):
$$\stackrel{~}{\psi }(x)=(E_2-E_1)^{-1/2}\left[\frac{d}{dx}\mathrm{ln}\psi _2(x)+w(x)\right]\psi _2(x),$$
(14)
with eigenvalue
$$E_2=-\left(\frac{V_0-4}{4}\right)^2=-\left(\frac{2k_1-3}{4}\right)^2.$$
(15)
We go a step further and impose that the numerical value of $`E_2`$ be determined by experiment; setting it equal to $`E_d`$, the dimensionless value of the deuteron binding energy, we therefore obtain
$$k_1=\frac{4k_d+3}{2}\approx 2.8892,\qquad V_0=4k_d+4\approx 6.7784$$
(16)
which agrees with the previously stated domains for $`V_0`$, $`E_1`$ and $`E_2`$. Then, an initial potential (1), with range $`\alpha =3\stackrel{˚}{f}`$ and strength $`𝒱_0\approx 31.2572\mathrm{MeV}`$, has a susy partner (8) allowing the binding of a single state $`\stackrel{~}{\psi }(x)`$ with an energy exactly equal to $`\mathcal{E}_d`$. The main characteristic of this new potential is its centrifugal term, which prevents the nucleons from approaching each other closely. Therefore, we have constructed a radial potential describing the hard core binding interaction between the nucleons.
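The numbers in equation (16) and the quoted strength follow from simple arithmetic; a minimal sketch, assuming $`k_d\approx 0.6946`$ and the energy scale $`\approx 4.611`$ MeV obtained earlier:

```python
# Sketch: hard-core partner parameters from Eq. (16), assuming k_d ~ 0.6946
# and the scale hbar^2/(alpha^2 m_p) ~ 4.611 MeV computed above.
k_d = 0.6946
scale = 4.611                 # MeV

k_1 = (4 * k_d + 3) / 2       # ~2.8892
V_0 = 4 * k_d + 4             # ~6.7784, dimensionless strength
V_0_MeV = V_0 * scale         # ~31.26 MeV, the strength quoted in the text
E_1 = -k_1 ** 2               # ground-state level removed by the susy transformation
print(k_1, V_0, V_0_MeV, E_1)
```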
The probability density connected with the ground-state (and only!) eigenfunction (14) of the hard core Hamiltonian $`\stackrel{~}{H}`$, for the numerical data mentioned above, is plotted in Figure 2. Its features can be contrasted with those of the no-core Hamiltonian $`H`$, labeled by $`|\psi _H(x)|^2`$, in the same figure. Observe the displacement to the right of $`\stackrel{~}{\psi }`$ with respect to $`\psi _H`$. An easy calculation shows that $`\stackrel{~}{\psi }(x)\propto (1-e^{-x})\psi _H(x)`$; hence, near the origin, $`\stackrel{~}{\psi }`$ goes to zero as $`x^2`$ whereas $`\psi _H`$ goes only as $`x`$, and the probability of finding the state $`\stackrel{~}{\psi }(x)`$ close to the origin is smaller than the probability of finding $`\psi _H(x)`$ at the same place.
We shall now discuss some of the implications of our results. First, the susy procedure yields a short range potential $`\stackrel{~}{V}(x)`$ allowing only one bound state. This potential exhibits many of the qualitative features expected of a hard core potential. It behaves predominantly as an effective radial potential $`V_{\mathrm{\ell }}(x)`$, with $`\mathrm{\ell }=1`$, near the origin (see equation (9)) and becomes negligible as $`x`$ increases in the region $`x>1`$.
Second, from Figure 2, it is clear that there is a considerable probability of finding the two nucleons at distances larger than $`\alpha `$. Therefore, the nuclear force connected with $`\stackrel{~}{V}(x)`$ plays only a weak role, because the neutron and the proton are outside each other’s range much of the time. That is, of course, a well known feature of the behaviour of the deuteron’s ground state . It is also interesting to remark on the smoothness of $`\stackrel{~}{\psi }(x)`$: it is a nodeless function, just as one might expect for the eigenfunction of a single stable bound state. Therefore, the wavefunction (14) corresponds to the hard core ground state eigenfunction of the deuteron.
In summary, the method developed in the paper relied on the assumption that it is possible to describe the interacting system of a proton plus a neutron by a Schrödinger equation. The method allowed the construction of a hard core potential for the deuteron and the estimation of the potential energy necessary to give the observed deuteron binding energy. It is remarkable that even very refined experiments at low energies do not suffice to determine more than the range and strength of the potential involved, leaving its detailed shape completely undetermined.
Observe that the hard core hypothesis increases the strength (16) of the potential (1) to $`V_0\approx 3V_{0H}`$, where $`V_{0H}=2k_d+1`$. In general, the nuclear forces are quite complicated and the assumption of a pure $`{}_{}{}^{3}S`$-state does not suffice to explain the deuteron quadrupole moment, which makes it necessary to introduce a tensor force . However, it is well known that the Hulthén potential gives a good approximation for the binding energy in the terms discussed at the very beginning of the paper. On the other hand, as mentioned above, there are different ways to modify the nuclear potential in order to get a hard core model; the potential derived here could be tested against nucleon-nucleon scattering approaches, where the hard core hypothesis seems to be compatible with the empirical data .
As regards the unbroken susy results, we have to say that recent susy treatments, involving general solutions to the Riccati equation (5), have shown the way to introduce susy partner potentials sharing exactly the same spectra . That is, we could obtain a new Hamiltonian $`\stackrel{~}{H}`$ which, joined in a susy couple with $`H`$, presents unbroken supersymmetry, by extending the superpotential (6) to a general solution of (5). Although that method has been successfully applied in the study of some interesting potentials (see e.g. ), it is outside the present scope and will be presented elsewhere.
I would like to express my gratitude to L.M. Nieto for his unending patience during the long discussions concerning this paper. I have learnt quite a lot from him and J. Negro during my scientific stay in Valladolid (Spain). I am also greatly indebted to Miss M. Lomeli for her constant help in typing the manuscript.
This work has been partially supported by Junta de Castilla y León, Spain (project C02/97). The kind hospitality at Departamento de Física Teórica, Universidad de Valladolid, Spain, is also acknowledged.
## I Introduction
Quantum teleportation is fundamentally important as an operational test of the presence and the strength of entanglement. Moreover, a recent series of beautiful experiments , which realized teleportation in practice, opened a window for a wide range of its possible technological applications.
In this paper, teleportation is understood as any strategy which uses local quantum operations and classical communication (LOCC) to transmit an unknown state via a shared pair. In an ideal teleportation scheme, the EPR-channel is constituted by a pure, maximally entangled bipartite state:
$$\psi _{-}=\frac{1}{\sqrt{2}}(|01\rangle -|10\rangle ).$$
(1)
The state is shared by a sender (Alice) and a receiver (Bob). By sharing $`\psi _{}`$ with Alice, Bob can produce an exact replica of another (input) state originally held by Alice. In reality, however, interactions with the environment and imperfections of preparation result in Alice and Bob sharing a state which is always mixed. Consequently, at Bob’s end, the teleported state can only be a distorted copy of the input initially held by Alice. Moreover, if the bipartite state is mixed too much, it will not provide for any better transmission fidelity than that of an ordinary classical communication channel . To do better than a classical channel, the shared quantum state must be entangled. A natural question then is : can any entangled state provide better than classical fidelity of teleportation?
Early attempts to answer this question, concentrated on the characterization of the states which can offer non-classical fidelity within the original teleportation scheme supplemented by local unitary rotations. Henceforth we will call such a scheme the standard teleportation scheme (STS). Fidelity of teleportation achievable in STS is uniquely determined by the bipartite state’s fully entangled fraction. It was defined in as
$$f(\varrho )=\underset{\psi }{\mathrm{max}}\langle \psi |\varrho |\psi \rangle .$$
(2)
In the definition, the maximum is taken over all maximally entangled states $`\psi `$ i.e. over $`\psi =U_1U_2\psi _+`$, where
$$\psi _+=\frac{1}{\sqrt{d}}\sum _{i=1}^{d}|i\rangle |i\rangle $$
(3)
$`U_1`$ and $`U_2`$ are unitary transformations. Later, it was shown that in order to be useful for STS, the states acting on a Hilbert space $`C^dC^d`$ must have $`f>1/d`$ . Moreover, it was shown that no bound entangled state (see ) can offer better fidelity than classical communication . Somewhat earlier, in Refs. , the authors identified a class of states which do not permit any increase of $`f`$, neither by any trace preserving (TP) LOCC nor even by some less restricted non-TP LOCC actions. Mixtures of a maximally mixed state and $`\psi _+`$ belong, among others, to this class.
One could then be tempted to speculate that $`f`$ could not be increased by any TP LOCC operations. If so, then STS would be a unique teleportation scheme in the sense that no other scheme would provide better fidelity than STS. On the other hand, one could still suspect that by some intelligent, sophisticated LOCC operation, Alice and Bob would be able to increase $`f`$ for some states anyway. An important question was then to be answered:
Is it possible to design a teleportation scheme, for which at least some states with $`f1/d`$ would give non-classical fidelity?
In this paper, we answer this question by presenting a class of two-qubit states with $`f1/2`$, which can, nevertheless, be used for teleportation with non-classical fidelity. For that, however, one has to allow for some dissipative interaction between the states and their local environment first. This means that dissipation, which is usually associated with decoherence and destruction of teleportation, increases $`f`$ of some initially non-teleporting states to above $`1/2`$. In other words, some states can produce non-classical fidelity within the original teleportation scheme but only after being ’corrupted’ by the environment !
To our knowledge, this is a previously unknown effect. In particular, it is different than that used in the so called filtering method of improving some of the states’ parameters . Filtering includes a selection process based on a readout of measurement outcomes. In our examples, on the other hand, Alice and Bob do not need to know the outcomes at all. Hence, in particular, unlike filtering, the actions in our examples are entirely trace preserving.
We begin our presentation by recalling some of the general results on optimal teleportation fidelity in Sect.II(c.f. Ref. ). This allows us to conclude that an optimal teleportation scheme should include maximization of $`f`$ by means of TP LOCC operations. Then, in Sect. III we put the problem in the context of increasing $`f`$ by the maps of the form $`I\mathrm{\Lambda }`$. We can limit the possible successful maps by showing that, e.g., for two qubits, the bistochastic processes cannot do the job. We also show that the states with $`f`$ improvable by $`I\mathrm{\Lambda }`$ action must violate the so called reduction criterion. Subsequently, in Sect. IV we present the examples of states, for which $`f`$ can be non-trivially increased by TP LOCC operations. The paper ends with the summary of the results and the conclusions in Sect. V.
## II Optimal fidelity in a general teleportation scheme
Let Alice and Bob share a pair of particles in a given state $`\varrho `$ acting on a Hilbert space $`_A_B=C^dC^d`$. Additionally, let Alice have a third particle in an unknown pure state $`\psi _C=C^d`$ to be teleported. In the most general teleportation scheme, Bob and Alice apply some trace preserving (TP) (hence without selection of the ensemble) LOCC operation $`𝒯`$ to the particles which they share and to the third (Alice’s) particle. After the operation is completed, the final state of Bob’s particle (from the pair) is
$$\varrho _{Bob}^\psi =\mathrm{Tr}_{A,C}\left[𝒯(|\psi \psi |\varrho )\right].$$
(4)
The resulting mapping of the input state (the state of the third particle) onto $`\varrho _{Bob}(\psi )`$ establishes a teleportation channel $`\mathrm{\Lambda }`$ (it depends on both, $`𝒯`$ and $`\varrho `$):
$$\mathrm{\Lambda }(|\psi \psi |)=\varrho _{Bob}(\psi ).$$
(5)
The aim of teleportation is to bring $`\varrho _{Bob}(\psi )`$ as close to $`|\psi \psi |`$ as possible. A useful measure of the quality of teleportation is then provided by teleportation’s fidelity
$$\mathcal{F}=\overline{\langle \psi |\varrho _{Bob}(\psi )|\psi \rangle }.$$
(6)
Fidelity is a function of map $`\mathrm{\Lambda }`$ and, like $`\mathrm{\Lambda }`$, it depends on both, teleporting state $`\varrho `$ and the strategy of teleportation $`𝒯`$ . One can show that in the standard teleportation scheme, the maximal fidelity achievable from a given bipartite state $`\varrho `$ is
$$\mathcal{F}=\frac{fd+1}{d+1}$$
(7)
where $`f`$ is the fully entangled fraction of $`\rho `$ given by formula (2). To achieve this fidelity, Alice and Bob have to rotate their respective parts of the teleporting state $`\rho `$ so that the maximum of formula (2) is attained on the singlet $`\psi _{-}`$. The original teleportation scheme applied with the rotated bipartite state $`\rho `$ will then produce the maximal fidelity (7).
If, on the other hand, Alice and Bob do not share any quantum state, then their best strategy is :
1. Alice performs an optimal measurement of the system to be teleported and sends the outcome to Bob (classically).
2. On the basis of her results, Bob tries to reconstruct the state.
The optimal teleportation fidelity for this strategy is equal to the optimal fidelity of the state estimation for a single system. It is given by
$$\mathcal{F}_{cl}=\frac{2}{1+d}.$$
(8)
One can easily see now that, in order to perform better than classical communication, STS needs bipartite states with $`f>1/d`$. With $`f1/d`$, Alice and Bob can just as well discard their bipartite state and communicate classically.
There is no reason why STS should represent the most efficient teleportation scheme using states with $`f>1/d`$. One can show, however, that the optimal teleportation scheme (OTS) is a generalization of STS . OTS consists of two steps:
1. Alice and Bob try to maximize $`f`$ by applying TP LOCC (not necessarily unitary) operations to the original state $`\varrho `$.
2. They apply STS using the transformed state.
Let then $`f_{max}(\varrho )`$ denote the maximal $`f`$ attainable from $`\varrho `$ by means of TP LOCC operations. The maximal teleportation fidelity from state $`\varrho `$ is then given by
$$\mathcal{F}_{max}=\frac{f_{max}d+1}{d+1}.$$
(9)
Thus, to find the optimal teleportation fidelity for a given bipartite state $`\rho `$, one must find $`f_{max}`$. In other words, the fidelity of STS can be improved if:
1. $`f`$ can be increased by LOCC,
2. The final $`f`$ is in quantum region i.e. it is greater than $`1/d`$.
Henceforth, when referring to a process of increasing $`f`$, we will understand it as increasing $`f`$ so that the final value is above $`1/d`$ (within the range $`f\le 1/d`$, the fully entangled fraction can be increased relatively easily; this, however, does not produce any better fidelity than $`\mathcal{F}_{cl}`$).
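For concreteness, equations (7)–(9) and the classical benchmark amount to two one-line helpers; the following is a minimal sketch of our own, not code from the original analysis:

```python
# Sketch of Eqs. (7)-(9): teleportation fidelity from a fully entangled
# fraction f in dimension d, and the classical (no-entanglement) benchmark.
def fidelity(f, d):
    """Optimal standard-scheme fidelity for fully entangled fraction f."""
    return (f * d + 1) / (d + 1)

def classical_fidelity(d):
    """Best fidelity from measure-and-resend, Eq. (8)."""
    return 2 / (1 + d)

# For qubits (d = 2), f = 1/2 reproduces exactly the classical value 2/3.
assert abs(fidelity(0.5, 2) - classical_fidelity(2)) < 1e-12
```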
## III Some general results on improving $`\mathcal{F}`$ by local interactions
### A A simplified formula for maximal $`f`$ attainable by local interaction
When local TP transformations are used to increase $`f`$ of a general bipartite state $`\varrho C^dC^d`$, then the best attainable result is
$$f_A=\underset{\mathrm{\Lambda }}{\mathrm{max}}\text{Tr}\left((\mathrm{\Lambda }\otimes I)\varrho P_+\right).$$
(10)
The maximum is here taken over all TP completely positive (CP) maps $`\mathrm{\Lambda }`$ and $`P_+=|\psi _+\psi _+|`$, with $`\psi _+`$ given by (3). Stinespring decomposition of $`\mathrm{\Lambda }`$ gives
$$\mathrm{\Lambda }(\cdot )=\sum _iV_i(\cdot )V_i^{\dagger }$$
(11)
with $`\sum _iV_i^{\dagger }V_i=I`$. Moreover, we can utilize the fact that $`A\otimes I\psi _+=I\otimes A^T\psi _+`$ (superscript $`T`$ denotes transposition in the basis $`\left\{|i\rangle \right\}`$) and rewrite formula (10) as
$$f_A=\underset{\mathrm{\Gamma }}{\mathrm{max}}\text{Tr}\left(\varrho (I\otimes \mathrm{\Gamma })P_+\right),$$
(12)
with
$$\mathrm{\Gamma }(\cdot )=\sum _iW_i(\cdot )W_i^{\dagger }$$
(13)
and $`W_i=V_i^{*}`$ (the star denotes complex conjugation). Naturally, like $`\mathrm{\Lambda }`$, $`\mathrm{\Gamma }`$ is trace preserving, too.
We can now recall that there is an isomorphism between the TP CP maps and the bipartite states with one subsystem maximally mixed. The isomorphism is given by
$$\varrho ^{\prime }=(I\otimes \mathrm{\Lambda })P_+.$$
(14)
Thus, for any TP CP map, the corresponding state has a maximally mixed subsystem $`A`$ and for any state with a maximally mixed subsystem $`A`$, there exists a map that realizes it via the above formula. Consequently, we can obtain the following form for $`f_A`$
$$f_A(\varrho )=\underset{\varrho ^{}}{\mathrm{max}}\text{Tr}(\varrho \varrho ^{}),$$
(15)
where the maximum is taken over all states $`\varrho ^{}`$ with maximally mixed subsystem $`A`$. An analogous formula holds for $`f_B`$. In general, the values $`f_A`$ and $`f_B`$ are likely to be different from one another.
Formula (15) allows for identification of those maps which definitely cannot improve $`f`$. Take, for instance, the maps describing the action of random external fields . They are of the form
$$\mathrm{\Lambda }(\cdot )=\sum _ip_iU_i(\cdot )U_i^{\dagger },$$
(16)
with $`U_i`$ denoting unitary transformations. The corresponding $`\varrho ^{}=(I\mathrm{\Lambda })P_+`$ is a mixture of maximally entangled vectors. Consequently, $`\text{Tr}(\varrho \varrho ^{})`$ cannot exceed $`f(\varrho )`$ which is equal to the maximal overlap of $`\varrho `$ with one maximally entangled vector.
In addition to preserving trace, maps (16) preserve the identity, i.e. $`\mathrm{\Lambda }(I)=I`$. Maps preserving both the trace and the identity are called bistochastic. In general, the class of bistochastic maps can be wider than the class specified by (16). For two qubits, however, the two classes coincide. To see this, one can note that, in general, the set of states corresponding to the set of bistochastic maps via the isomorphism consists of the states with both subsystems maximally mixed. For two-qubit systems such states are mixtures of maximally entangled vectors . Each such vector can be written as $`IU\psi _+`$ for some unitary $`U`$. Hence, the maps corresponding to mixtures of such vectors are mixtures of unitary maps. Thus, for two qubits the bistochastic maps cannot increase $`f`$. One may conjecture that this should be the case in higher dimensions, too.
### B Increasing $`f`$ by local actions and the reduction criterion for separability
Let us now derive some constraints for the states with $`f`$ improvable by local interaction. A state suitable for a teleportation channel must be entangled, i.e., it must be impossible to represent it by a mixture of product states .
$$\varrho \ne \sum _ip_i\varrho _i\otimes \stackrel{~}{\varrho }_i.$$
(17)
Such states violate different separability criteria. Here, we consider the so called reduction criterion for separability. It is given by the following conditions satisfied by all separable states :
$$\varrho _A\otimes I-\varrho \ge 0,\qquad I\otimes \varrho _B-\varrho \ge 0.$$
(18)
The inequalities mean that the operators on the left hand sides must be positive, i.e., they must have nonnegative eigenvalues only. In a two-qubit case, the reduction criterion is equivalent to separability (hence it is also a sufficient condition for separability), while it becomes a weaker “detector” of entanglement in higher dimensions. In other words, there exist non-separable (entangled) states in higher dimensions which do not violate the reduction criterion.
Suppose now that for some state $`\varrho `$ one has $`f_A(\varrho )>f(\varrho )`$, i.e., $`f`$ can be improved by a local TP operation on subsystem $`A`$. Naturally, we require that the improvement is non-trivial, i.e., $`f_A>1/d`$. We will show now that this condition implies violation of the reduction criterion. Indeed, since $`f_A>1/d`$, then there exists a state $`\varrho ^{}`$ whose one subsystem (say, $`\varrho _A^{}`$) has maximal entropy and:
$$\text{Tr}(\varrho \varrho ^{})>1/d.$$
(19)
Maximum entropy means that $`\varrho _A^{}=I/d`$. This implies $`\text{Tr}((\varrho _AI)\varrho ^{})=\text{Tr}\left(\varrho _A\varrho _A^{}\right)=1/d`$. By putting this into inequality (19), we obtain
$$\text{Tr}\left((\varrho _A\otimes I-\varrho )\varrho ^{\prime }\right)<0$$
(20)
The trace of a composition of two positive operators is nonnegative. Operator $`\varrho ^{\prime }`$ is positive. Consequently, in order to satisfy the last inequality, the operator $`\varrho _A\otimes I-\varrho `$ cannot be positive.
Since all the entangled two-qubit states violate the reduction criterion, the condition for improvability of $`f`$ derived above, does not put any new restrictions on the class of states with improvable $`f`$ here . Nevertheless, the condition should be useful while investigating bipartite states in more dimensions. This is because not all the entangled states there violate the reduction criterion.
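The reduction criterion of equation (18) is straightforward to test numerically. The sketch below (our own illustration) checks it for the two-qubit singlet, which violates it, as every entangled two-qubit state does:

```python
# Sketch: numerical check of the reduction criterion, Eq. (18).
import numpy as np

psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(psi_minus, psi_minus)          # singlet density matrix

# partial trace over the second qubit gives rho_A = I/2
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

M = np.kron(rho_A, np.eye(2)) - rho           # rho_A (x) I - rho
print(np.linalg.eigvalsh(M))                  # a negative eigenvalue -> violation
```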
## IV Beating the standard teleportation scheme
Before showing how to do better than STS, we will still need to introduce some methods of dealing with the fully entangled fraction of two-qubit states.
### A Fully entangled fraction in the Hilbert-Schmidt representation
An arbitrary state of a two-qubit system can be represented as
$$\varrho =\frac{1}{4}(I\otimes I+𝒓\cdot 𝝈\otimes I+I\otimes 𝒔\cdot 𝝈+\sum _{m,n=1}^{3}t_{nm}\sigma _n\otimes \sigma _m).$$
(21)
Here, $`I`$ stands for the identity operator, $`𝒓`$ and $`𝒔`$ belong to $`R^3`$, $`\{\sigma _n\}_{n=1}^3`$ are standard Pauli matrices, $`𝒓\mathbf{}𝝈=_{i=1}^3r_i\sigma _i`$. Coefficients $`t_{mn}=\mathrm{Tr}(\rho \sigma _n\sigma _m)`$ form a real $`3\times 3`$ matrix later denoted by $`T`$. Note that $`𝒓`$ and $`𝒔`$ are local parameters as they determine the reductions of $`\varrho `$:
$`\varrho _1`$ $``$ $`\mathrm{Tr}__2\varrho ={\displaystyle \frac{1}{2}}(I+𝒓\mathbf{}𝝈),`$ (22)
$`\varrho _2`$ $``$ $`\mathrm{Tr}__1\varrho ={\displaystyle \frac{1}{2}}(I+𝒔\mathbf{}𝝈).`$ (23)
Matrix $`T`$ , on the other hand, is responsible for the correlations
$$E(𝒂,𝒃)\text{Tr}(\varrho 𝒂\mathbf{}𝝈𝒃\mathbf{}𝝈)=(𝒂,T𝒃).$$
(24)
One can notice now, that for any two-qubit state $`\varrho `$, one can find a product unitary transformation $`U_1U_2`$ which will transform $`\varrho `$ to a form with diagonal $`T`$. This statement follows from the fact that for any $`2\times 2`$ unitary transformation $`U`$, there is a unique $`3\times 3`$ rotation $`O`$ such that
$$U\widehat{𝒏}\mathbf{}𝝈U^{}=(O\widehat{𝒏})\mathbf{}𝝈.$$
(25)
Now, if a state is subjected to a $`U_1U_2`$ transformation, the parameters $`𝒓,𝒔`$ and $`T`$ are transformed into
$`𝒓^{}=O_1𝒓,`$ (26)
$`𝒔^{}=O_2𝒔,`$ (27)
$`T^{}=O_1TO_2^{}.`$ (28)
with $`O_i`$’s corresponding to $`U_i`$’s via formula (25). Thus, for every two-qubit state $`\rho `$, we can always find such $`U_1`$ and $`U_2`$ so that the corresponding rotations will diagonalize $`T`$ . Moreover, by selecting suitable rotations, one can make $`t_{11}`$ and $`t_{22}`$ non-positive. In what follows, the states with diagonal $`T`$ and $`t_{11},t_{22}\le 0`$ will be called canonical.
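The correspondence (25) between unitaries and rotations can be realised explicitly through $`O_{ij}=\frac{1}{2}\mathrm{Tr}(\sigma _iU\sigma _jU^{\dagger })`$. A minimal sketch (the example $`U`$ is our own choice):

```python
# Sketch of Eq. (25): the SO(3) rotation associated with an SU(2) unitary,
# O_ij = (1/2) Tr(sigma_i U sigma_j U^dagger).
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rotation_from_unitary(U):
    return np.array([[0.5 * np.trace(si @ U @ sj @ U.conj().T).real
                      for sj in sigma] for si in sigma])

theta = 0.3                                   # example: rotation about the z axis
U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
O = rotation_from_unitary(U)
print(np.allclose(O @ O.T, np.eye(3)), round(np.linalg.det(O), 6))  # True 1.0
```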
For the states with diagonal matrix $`T`$ (hence also for the canonical states), the fully entangled fraction is given by (c.f.)
$$f=\{\begin{array}{c}\frac{1}{4}(1+\sum _i|t_{ii}|)\mathrm{if}\mathrm{det}T\le 0\hfill \\ \frac{1}{4}\left(1+\mathrm{max}_{i\ne j\ne k}(|t_{ii}|+|t_{jj}|-|t_{kk}|)\right)\mathrm{if}\mathrm{det}T>0\hfill \end{array}.$$
(29)
One can show now that if $`\mathrm{det}T\ge 0`$, then $`f\le 1/2`$, i.e., $`f`$ belongs to the classical region. Thus, while analyzing $`f`$ in the quantum region, it will be convenient to investigate a relatively simple function $`N(\varrho )`$, instead of the more involved matrix $`T`$. The function $`N(\varrho )`$ is given by
$$N(\varrho )=\sum _i|t_{ii}|.$$
(30)
It has the following important properties:
1. $`f(\varrho )=\frac{1}{4}(1+N(\varrho ))`$ for $`f\ge \frac{1}{2}`$
2. $`N(\varrho )\ge 1`$ if and only if $`f\ge \frac{1}{2}`$
It then contains all the information necessary to analyze $`f`$.
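As an illustration of how the diagonal correlation coefficients, $`N(\varrho )`$ and $`f`$ are extracted in practice, the sketch below evaluates them for a Werner-like mixture of the singlet with white noise (an example state of our own):

```python
# Sketch: t_ii, N(rho) and f of Eqs. (29)-(30) for p*|psi-><psi-| + (1-p)*I/4.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
p = 0.6
rho = p * np.outer(psi, psi.conj()) + (1 - p) * np.eye(4) / 4

t = [np.trace(rho @ np.kron(s, s)).real for s in (sx, sy, sz)]
N = sum(abs(ti) for ti in t)
f = (1 + N) / 4              # valid here because det T <= 0
print(t, N, f)               # t_ii = -p, N = 3p, f = (1 + 3p)/4 = 0.7
```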
### B The canonical form in terms of the matrix elements
By applying the formula for $`t_{ij}`$, one can easily show that diagonality of $`T`$ is equivalent to the following conditions for the matrix elements of $`\varrho `$ written in the standard basis ($`|1=|00`$, $`|2=|01`$ etc.):
$`\varrho _{12}=\varrho _{34}`$ (31)
$`\varrho _{14}=\varrho _{32}`$ (32)
$`\varrho _{23}\text{ and }\varrho _{14}\text{ are real}.`$ (33)
Moreover, since $`t_{11}=2(\varrho _{14}+\varrho _{23})`$ and $`t_{22}=2(\varrho _{23}-\varrho _{14})`$, the condition $`t_{11},t_{22}\le 0`$ is equivalent to
$`\varrho _{23}\le 0`$ (34)
$`|\varrho _{23}|\ge \left|\varrho _{14}\right|`$ (35)
Thus, any state $`\varrho `$ can be locally rotated to a form with matrix elements satisfying the above constraints. This gives the following expression for $`N(\varrho )`$:
$$N(\varrho )=|1-2(\varrho _{22}+\varrho _{33})|-4\varrho _{23}.$$
(36)
Now, for
$$\varrho _{22}+\varrho _{33}\ge \frac{1}{2}$$
(37)
we have $`t_{33}\le 0`$ hence $`\mathrm{det}T\le 0`$. Consequently, by eq. (29) the fully entangled fraction is given by
$$f(\varrho )=\frac{1}{4}(1+N(\varrho ))=\frac{1}{2}(\varrho _{22}+\varrho _{33}-2\varrho _{23}).$$
(38)
Then, with $`-2\varrho _{23}`$ large enough, one has $`f\ge 1/2`$ and $`f`$ is attained on the singlet $`\psi _{-}`$: $`f=\langle \psi _{-}|\varrho |\psi _{-}\rangle `$.
### C A local action which improves $`f`$.
With the canonical form of $`\varrho `$ at hand, it is not all that difficult to eventually find examples of states with improvable $`f`$. After some trials, we focused our attention on a simple family of states which in their canonical form have $`\varrho _{24}=\varrho _{13}=0`$:
$$\varrho =\left[\begin{array}{cccc}\varrho _{11}& 0& 0& \varrho _{14}\\ 0& \varrho _{22}& -p_{23}& 0\\ 0& -p_{23}& \varrho _{33}& 0\\ \varrho _{14}& 0& 0& \varrho _{44}\end{array}\right]$$
(39)
Here $`p_{23}\ge 0`$ and $`\varrho _{14}`$ is real. We assumed also that $`\varrho `$ satisfies the condition (37) and that $`p_{23}\ge (1-\varrho _{22}-\varrho _{33})/2`$, so that the state has $`f=\langle \psi _{-}|\varrho |\psi _{-}\rangle \ge 1/2`$. Explicitly, $`f`$ is given by
$$f(\varrho )=\frac{1}{2}(\varrho _{22}+\varrho _{33}+2p_{23}).$$
(40)
We know (see Sec.III) that bistochastic maps cannot improve $`f`$. So, to improve it, we must try a non-bistochastic map. A possible simple candidate is, e.g., a map which acts on Bob’s qubit and transforms it as follows:
$$\varrho _B\to \stackrel{~}{\varrho }_B=\mathrm{\Lambda }(\varrho _B)=W_1\varrho _BW_1^{\dagger }+W_2\varrho _BW_2^{\dagger }$$
(41)
where the operators $`W_i`$ are given by
$$W_1=\left[\begin{array}{cc}1& 0\\ 0& \sqrt{1-p}\end{array}\right],\qquad W_2=\left[\begin{array}{cc}0& \sqrt{p}\\ 0& 0\end{array}\right]$$
(42)
It is easy to check that the $`W_i`$’s satisfy $`W_1^{\dagger }W_1+W_2^{\dagger }W_2=I`$, hence the operation is trace preserving. Moreover, one can notice that $`\mathrm{\Lambda }`$ can be regarded as resulting from the interaction of a two-level atom (Bob’s qubit) with electromagnetic field (an environment). Such an interaction produces the following transitions:
$$|0\rangle _a|0\rangle _e\to |0\rangle _a|0\rangle _e$$
(43)
$$|1\rangle _a|0\rangle _e\to \sqrt{p}|0\rangle _a|1\rangle _e+\sqrt{1-p}|1\rangle _a|0\rangle _e.$$
(44)
where the subscripts $`a`$ and $`e`$ denote atomic and field states respectively. The parameter $`p`$ is then interpreted as the probability of photon emission from the atom in its upper state $`|1_a`$. This kind of interaction is called the amplitude damping channel and one can check that, if repeatedly applied to a qubit, it produces an exponential decay characteristic to spontaneous emission. The completely positive map $`\mathrm{\Lambda }`$ is then obtained from the amplitude damping channel by tracing out the environment variables .
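In the standard convention for the amplitude damping channel (with $`p`$ the emission probability, as in the transitions (43)–(44); this labelling is our assumption), the single-qubit map can be sketched as follows, and repeated application indeed produces the exponential decay mentioned above:

```python
# Sketch of the single-qubit amplitude damping map: repeated application to
# an excited qubit gives exponential decay of the upper-level population.
import numpy as np

def damp(rho, p):
    W1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])   # no-emission Kraus operator
    W2 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])       # emission Kraus operator
    return W1 @ rho @ W1.T + W2 @ rho @ W2.T

rho = np.array([[0.0, 0.0], [0.0, 1.0]])                  # qubit prepared in |1>
for _ in range(5):
    rho = damp(rho, p=0.2)
print(rho[1, 1])                                          # (1 - 0.2)**5 = 0.32768
```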
Let us then put $`\sqrt{p}=\mathrm{sin}\theta `$ and apply transformation (41) to Bob’s part of the total (2-qubit) system. The 2-qubit operator corresponding to $`W_i`$ is $`A_i\equiv I\otimes W_i`$ and, consequently, we obtain
$$\varrho \to \varrho ^{\prime }=A_1\varrho A_1^{\dagger }+A_2\varrho A_2^{\dagger }$$
(45)
with
$$A_1=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& \mathrm{cos}\theta & 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& \mathrm{cos}\theta \end{array}\right]$$
(46)
and
$$A_2=\left[\begin{array}{cccc}0& \mathrm{sin}\theta & 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& \mathrm{sin}\theta \\ 0& 0& 0& 0\end{array}\right].$$
(47)
Note that like the original state $`\varrho `$, the new state $`\stackrel{~}{\varrho }`$ is in its canonical form, too.
$$\stackrel{~}{\varrho }=\left[\begin{array}{cccc}\varrho _{11}+\varrho _{22}\mathrm{sin}^2\theta & 0& 0& \varrho _{14}\mathrm{cos}\theta \\ 0& \varrho _{22}\mathrm{cos}^2\theta & -p_{23}\mathrm{cos}\theta & 0\\ 0& -p_{23}\mathrm{cos}\theta & \varrho _{33}+\varrho _{44}\mathrm{sin}^2\theta & 0\\ \varrho _{14}\mathrm{cos}\theta & 0& 0& \varrho _{44}\mathrm{cos}^2\theta \end{array}\right]$$
(48)
The change of $`f`$ associated with the transformation is now given by $`\mathrm{\Delta }_B=\langle \psi _{-}|\stackrel{~}{\varrho }|\psi _{-}\rangle -f(\varrho )`$. A simple calculation shows that
$$\mathrm{\Delta }_B=\left(1-\mathrm{cos}\theta \right)\left[\frac{1+\mathrm{cos}\theta }{2}\left(\varrho _{44}-\varrho _{22}\right)-p_{23}\right].$$
(49)
Here, the index $`B`$ indicates that Bob’s qubit has been transformed. One can check that if one transforms Alice’s qubit instead of Bob’s then the resulting $`\mathrm{\Delta }_A`$ is given by
$$\mathrm{\Delta }_A=\left(1-\mathrm{cos}\theta \right)\left[\frac{1+\mathrm{cos}\theta }{2}\left(\varrho _{44}-\varrho _{33}\right)-p_{23}\right].$$
(50)
Finally, one can swap places of $`1`$ and $`\mathrm{cos}\theta `$ on the diagonal of the first transformation matrix $`A_1`$ and adjust $`A_2`$ accordingly. This, translated into changes of $`f`$, result in expressions like (49) and (50) but with $`\varrho _{44}`$ substituted by $`\varrho _{11}`$. In other words, single qubit, trace preserving transformations like that defined by (45) can improve fidelity of states in form (29) provided that
$$\left[\mathrm{max}(\varrho _{11},\varrho _{44})-\mathrm{min}(\varrho _{22},\varrho _{33})\right]-p_{23}\ge 0.$$
(51)
The maximal increase $`\mathrm{\Delta }=\mathrm{max}\{\mathrm{\Delta }_A,\mathrm{\Delta }_B\}`$ achievable in this way is
$$\mathrm{\Delta }=\frac{\left[\mathrm{max}(\varrho _{11},\varrho _{44})-\mathrm{min}(\varrho _{22},\varrho _{33})-p_{23}\right]^2}{2\left[\mathrm{max}(\varrho _{11},\varrho _{44})-\mathrm{min}(\varrho _{22},\varrho _{33})\right]}$$
(52)
To obtain a more clear picture of the situation, let us write the diagonal elements of $`\varrho `$ as:
$$\varrho _{11}=\frac{1-\epsilon -\gamma }{4}\qquad \varrho _{44}=\frac{1-\epsilon +\gamma }{4}$$
(53)
$$\varrho _{22}=\frac{1+\epsilon -\delta }{4}\qquad \varrho _{33}=\frac{1+\epsilon +\delta }{4}$$
(54)
To satisfy $`\left(\varrho _{22}+\varrho _{33}+2p_{23}\right)\ge 1`$ (so that $`f(\varrho )=\langle \psi _{-}|\varrho |\psi _{-}\rangle \ge 1/2`$), one needs a non-negative $`\epsilon `$ and:
$$\frac{1-\epsilon }{4}\le p_{23}\le \frac{1}{4}\sqrt{\left(1+\epsilon \right)^2-\delta ^2}.$$
(55)
(the upper limit for $`p_{23}`$ guarantees positivity of $`\varrho `$). Thus, the method improves $`f`$ on states with $`0<\epsilon <1`$ and $`\left|\gamma \right|+\left|\delta \right|-2\epsilon >4p_{23}`$. One can easily check that in this class, the “most improvable” border state ($`4p_{23}=1-\epsilon `$, i.e., $`f=1/2`$) is
$$\varrho =\frac{1}{2}\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 3-2\sqrt{2}& 1-\sqrt{2}& 0\\ 0& 1-\sqrt{2}& 1& 0\\ 0& 0& 0& 2\sqrt{2}-2\end{array}\right]$$
(56)
Since $`f(\varrho )=1/2`$, the standard teleportation scheme using $`\varrho `$ does not offer any better fidelity than the classical one. On the other hand, if we transform $`\varrho `$ by transformation (45) with $`\mathrm{cos}\theta =(\sqrt{2}-1)/(4\sqrt{2}-5)`$ (this choice maximizes $`\mathrm{\Delta }`$), then the new state still satisfies the condition (37), and we obtain $`f(\stackrel{~}{\varrho })\approx 0.53>1/2`$. The new state can then be used for teleportation with non-classical fidelity
$$\frac{2.06}{3}>\frac{2}{3}$$
(57)
In other words, the state $`\varrho `$ gets “better” when corrupted by environment. The improvement is small, nevertheless it is significant. It changes the character of the state: from non-teleporting to teleporting.
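The whole construction can be verified numerically. The sketch below (our own evaluation) builds the border state (56), applies the damping map on Bob's side with the quoted value of $`\mathrm{cos}\theta `$, and evaluates the singlet overlap (a lower bound on $`f`$) before and after; it starts at exactly 1/2 and ends above 1/2:

```python
# Sketch: state (56) has f = 1/2; damping Bob's qubit with the quoted
# cos(theta) pushes the singlet overlap (hence f) above 1/2.
import numpy as np

r2 = np.sqrt(2)
rho = 0.5 * np.array([[0, 0, 0, 0],
                      [0, 3 - 2 * r2, 1 - r2, 0],
                      [0, 1 - r2, 1, 0],
                      [0, 0, 0, 2 * r2 - 2]])

c = (r2 - 1) / (4 * r2 - 5)
s = np.sqrt(1 - c ** 2)
A1 = np.kron(np.eye(2), np.diag([1.0, c]))                 # I (x) W1 on Bob's qubit
A2 = np.kron(np.eye(2), np.array([[0.0, s], [0.0, 0.0]]))  # I (x) W2 on Bob's qubit
rho_new = A1 @ rho @ A1.T + A2 @ rho @ A2.T

psi_m = np.array([0, 1, -1, 0]) / r2
print(psi_m @ rho @ psi_m, psi_m @ rho_new @ psi_m)        # 0.5, then > 0.5
```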
While analyzing this result, one may notice that the states with the fully entangled fraction improvable by the map (45) form a rather restricted class. In particular, this map cannot increase the entangled fraction of states like
$`\varrho ={\displaystyle \frac{1}{2}}|\psi _{-}\rangle \langle \psi _{-}|+{\displaystyle \frac{1}{2}}|00\rangle \langle 00|.`$
It would then be very interesting to provide a complete characterization of the class of states whose fidelity can be improved by some local process, as well as of the class of local processes capable of improving fidelity for some states. This task is, however, beyond the scope of this paper.
## V Conclusions
We have examined the problem of optimal teleportation fidelity with given bipartite quantum states. To this end, we investigated the possibility of increasing the fully entangled fraction by means of trace preserving LOCC operations and discovered a class of LOCC operations which non-trivially increase $`f`$ on some of the two-qubit states. To our surprise, the successful operations do not represent any sophisticated action by Alice or Bob. Instead, they result from a common (dissipative) interaction between the teleporting state and the local environment. The unexpected conclusion then is that a dissipative interaction, normally associated with the destruction of quantum teleportation, can sometimes facilitate it.
P.B. acknowledges stimulating discussions with Richard Bonner and Benjamin Baumslag. M.H., P.H. and R.H. are supported by Polish Committee for Scientific Research, contract No. 2 P03B 103 16. P.B. is partially supported by Svenska Institutet, project ML2000.
## 1 Introduction
The Be/X-ray and supergiant binary systems represent the class of massive X-ray binaries. A survey of the literature reveals that of the 96 proposed massive X-ray binary pulsar systems, 67% of the identified systems fall within the Be/X-ray group of binaries. The orbit of the Be star and the compact object, presumably a neutron star, is generally wide and eccentric. The optical star exhibits H$`\alpha `$ line emission and continuum free-free emission (revealed as excess flux in the IR) from a disk of circumstellar gas. Most of the Be/X-ray sources are also very transient in the emission of X-rays.
Progress towards a better understanding of the physics of these systems depends on a multi-wavelength programme of observations. From observations of the Be star in the optical and IR, the physical conditions under which the neutron star is accreting matter can be determined. In combination with hard X-ray timing and flux observations, this yields a near complete picture of the accretion process. It is thus vital to identify the optical counterparts to these X-ray systems in order to further our understanding.
The X-ray transient XTE J0111.2–7317 was discovered by the RXTE X-ray observatory in November 1998 (Chakrabarty et al. 1998a,b) as a 31s X-ray pulsar. This detection was confirmed by observations from the BATSE telescope on the CGRO spacecraft which detected the source in the 20–50 keV band. The quick-look results provided by the ASM/RXTE team indicate that the X-ray source was active for the period November 1998 – January 1999. A Be star counterpart has been proposed by Israel et al. (1999) based upon optical spectroscopy.
Reported here are optical and infra-red measurements taken while the source was X-ray active which confirm the proposed identity of the counterpart to XTE J0111.2–7317. The counterpart is shown to be most consistent with a main sequence B0–B2 star. In addition, strong evidence is presented from the H$`\alpha `$ imaging for a surrounding nebula, possibly a SNR.
## 2 Optical spectroscopy
Optical spectra were taken on 9-10 January 1999 from the SAAO 1.9m telescope. The instrument used was the grating spectrograph with the SITe CCD detector. See Table 1 for details of the observation configurations.
The only significant lines detected were the strong Hα and Hβ emission lines. The equivalent widths obtained were $`-27\pm 0.3`$Å for Hα and $`-3.8\pm 0.2`$Å for Hβ. Furthermore the average redshift of the lines from their rest position was measured and this corresponded to a recessional velocity of 165$`\pm `$15 km/s.
The other bright star in the error circle was also checked but did not show any evidence of H$`\alpha `$ in emission.
## 3 Optical and IR photometry
Photometry of the source was obtained from South Africa using the 1.9m and the 1.0m telescopes in January 1999. The 1.9m IR data were collected using the Mk III photometer in the J and H bands. The 1.0m data were obtained using the Tek8 CCD, giving a field of approximately 3 arcminutes, and a pixel scale of 0.3” per pixel. Observations were made through standard Johnson UBVRI and Strömgren-Crawford uvby$`\beta `$ filters. The exact dates and filters used are specified in Table 2. The 1.0m data were reduced using IRAF and Starlink software, and the instrumental magnitudes were corrected to the standard system using E region standards.
Figure 1 shows a V band image from our observations which shows the X-ray uncertainty (30” radius error circle) and the candidate proposed by Israel et al., 1999.
The observed $`\beta `$ index was determined by taking the ratio of the fluxes in the H$`\beta `$-wide and the H$`\beta `$-narrow filters. This was found to be 2.38 before any correction for the circumstellar emission was applied (see Section 5.1 for a further discussion of this point).
## 4 H$`\alpha `$ imaging
Of particular interest with this source is the H$`\alpha `$ image. A 1000s exposure taken on 21 January 1999 with the 1.0m telescope revealed a clear extended structure around the Be star. Consequently a deeper 2000s image was recorded on 24 January 1999. This latter image is shown in Figure 2. Also shown in the figure is an (H$`\alpha `$–R) image. This latter image was obtained by normalising the two separate images, registering the fields to sub-pixel accuracy, and then subtracting one from the other. Despite the accurate registration, variations in the PSF have caused some residual structures. Nonetheless, the result is an image which clearly shows the sources of H$`\alpha `$ within the field. In addition to the extended structure around XTE J0111.2–7317 and its associated Be star, one can see two other strong stellar H$`\alpha `$ emitters at the eastern and southern edges of the field. These are probably also both Be stars.
## 5 Discussion
### 5.1 Spectral Class
The equivalent width obtained for the target was $`-27\pm 0.3`$Å for Hα. Typical equivalent widths found in Be stars are in the range 0 to -40Å, while those found in supergiant stars lie below -4Å. Some hypergiants (luminosity class Ia+) can reach -7Å (eg Wray 977, Kaper et al. 1995), but none have been reported greater than -10Å. Thus the large Hα equivalent width is a strong indicator that this source is a Be star.
The determination of the spectral type and luminosity class in a Be star is not as straightforward as in a non-emission line B-type star due to the presence of the surrounding envelope, which distorts the characteristic photospheric spectrum. In the $`(b-y)_0`$–$`M_V`$ plane Be stars appear redder than the non-emission B stars, due to the additional reddening caused by the hydrogen free-bound and free-free recombination in the circumstellar envelope. In the $`c_0`$–$`M_V`$ plane the earlier Be stars present lower values than absorption-line B stars, which is caused by emission in the Balmer discontinuity, while the later Be stars deviate towards higher values, indicating absorption in the Balmer discontinuity of circumstellar origin (Fabregat et al. 1996).
Thus, in a Be star one has to correct for both circumstellar and interstellar reddening before any calibration can be used. There is no easy way to decouple these two reddening contributions. The iterative procedure of Fabregat & Torrejón (1998) has been used here to correct for both circumstellar and interstellar reddening.
The mean values of the interstellar and circumstellar reddening are $`E^{is}(b-y)=0.104\pm 0.033`$ and $`E^{cs}(b-y)=0.091\pm 0.025`$, respectively. The errors reflect the accuracy of the Fabregat & Torrejón method. From the relation $`E(B-V)=1.35E(b-y)`$ (Crawford & Mandwewala 1976) we find that the total (i.e., interstellar plus circumstellar) extinction is $`E(B-V)=0.26\pm 0.05`$.
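The reddening bookkeeping above amounts to simple arithmetic, sketched here for reference (the factor 1.35 is the Crawford & Mandwewala relation quoted in the text):

```python
# Sketch of the reddening arithmetic quoted above.
E_is_by = 0.104                     # interstellar E(b-y)
E_cs_by = 0.091                     # circumstellar E(b-y)

E_BV_total = 1.35 * (E_is_by + E_cs_by)   # ~0.26, total extinction
E_BV_is = 1.35 * E_is_by                  # ~0.14, interstellar component only
print(E_BV_total, E_BV_is)
```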
In support of the method we point out that the derived mean interstellar excess colour $`E^{is}(B-V)=0.14\pm 0.04`$ is compatible with the average extinction reported for the SMC. Schwering & Israel (1991) have measured the extinction E(B-V) as lying in the range 0.07–0.09, though there is no doubt that in some localised regions in the SMC the value can be as high as 0.25.
The $`c_0`$ index is the primary temperature parameter for O and B type stars (Crawford 1978). Using the calibration $`\mathrm{log}T_{eff}=0.186c_0^2-0.580c_0+4.402`$ (Reig et al. 1997) we obtain an effective temperature of 21800$`\pm `$1200 K. Using the calibration of Balona (1994) a virtually identical value of 22000 K is obtained. Such a temperature is typical of a B1-B2 star (Zombek 1992). The same spectral class is obtained from the $`(b-y)_0`$ index, which is also a temperature indicator. The derived value, –0.104, agrees with a B1-B2 main sequence star (Moon 1985).
Another way of determing the spectral type is by means of the Q method. The Q parameter is defined as Q=(U-B)-0.72(B-V) and it is independent of reddening. We obtain Q=–0.892, which according to Halbedel (1993) corresponds to a B1e star, in agreement with the above analysis.
The $`\beta `$ index (Crawford & Mander 1966) provides a measure of luminosity class for O and B type stars. However, it is also strongly affected by circumstellar emission, with the extra complication that there is no other independent index which can be related to the stellar luminosity. The value of $`\beta _0`$=2.647, or equivalently $`M_v=-2.2\pm 0.7`$ (Balona & Shobbrook 1984), is consistent with a B2V star (Crawford 1978; Moon 1985). However, the distance implied from $`M_v`$ and V is about 3 times lower than the distance to the SMC. A better approach is to make use of the knowledge of the distance modulus to the SMC. The apparent V magnitude is $`m_v`$=15.32$`\pm `$0.01. Assuming a distance modulus of $`(m_v-M_v)_0`$=18.8$`\pm `$0.1 (Westerlund 1997, Table 2.4) and the derived $`E^{is}(B-V)=`$0.14$`\pm `$0.04, the absolute magnitude $`M_v`$ comes out to be $`M_v=-3.8\pm 0.2`$, closer to a B0V than to a B2III star.
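The distance-modulus estimate can be reproduced as follows; the conversion $`A_V=3.1E(B-V)`$ is the standard Galactic value and is our assumption, since the text does not state the ratio used:

```python
# Sketch of the absolute-magnitude estimate from the SMC distance modulus.
m_v = 15.32                 # apparent V magnitude
dist_mod = 18.8             # adopted SMC distance modulus
A_V = 3.1 * 0.14            # interstellar extinction, assuming A_V = 3.1 E(B-V)

M_v = m_v - dist_mod - A_V
print(M_v)                  # ~ -3.9, consistent with the quoted -3.8 +/- 0.2
```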
If, for the sake of discussion, we assume that the correct spectral class is B1V, then the intrinsic B–V for such an object is -0.26. The photometric measurements presented here have B–V=-0.08$`\pm `$0.02, suggesting an E(B–V)=0.18$`\pm `$0.02, just about consistent with our above value from the Stromgren photometry of E(B–V)=0.26$`\pm `$0.05. If the optical/IR photometry are then dereddened by our estimate of the interstellar extinction ($`E^{is}(BV)`$=0.14) it is possible to compare the results with the model atmosphere expected for stars in the range B0-B2 ($`T_{eff}`$=30000K – 19000K and log g=4.0). These comparisons are shown in Figure 3 where the spectra have all been normalised to the B band flux. The agreement is good over the B – R range, though at longer wavelengths the usual infrared excess arising from the circumstellar disk is clearly present.
The data presented in this work include the first reported IR measurements of this system and confirm it to be an IR source similar to other Be/X-ray systems. Our data indicate an apparent colour index of (J–H) $`\approx `$ 0.40$`\pm `$0.36, very similar to that seen from other SMC X-ray transients such as RX J0117.6-7330 (Coe et al. 1998).
### 5.2 Extended emission
Two previous Be/X-ray binary systems have been shown by Hughes and Smith (1994) to exhibit extended H$`\alpha `$ emission close to, or directly linked to the Be star. In both those objects the authors expressed the view that it was unlikely that we were observing the SNR left over from the production of the neutron star in the Be/X-ray binary system. This argument was based upon the high transverse speeds that would be needed to explain the separation of the neutron star from the deduced centre of the SNR. Such high radial velocities (approximately 100 km/s) are not seen in the optical lines, though the inclination of the motion to the line of sight would obviously affect the amount that was observable. They therefore thought it more likely that the SNR was produced from another object in the same OB association that gave rise to the Be/X-ray system. However, they did express some disquiet over finding two such systems.
In the case presented here of XTE J0111.2–7317, the association of the extended structure with the Be star seems much stronger. It is morphologically much closer than the systems of Hughes and Smith (1994) and so such high SN kick velocities are not required. One observational test that will be possible when we have better orbital data, is to search for the high eccentricity that would inevitably be associated with a large SN kick.
An alternative explanation for the extended emission is the presence of a wind bow shock such as that detected by Kaper et al (1997) from Vela X-1. In that case the extended H$`\alpha `$ emssion was explained as arising from supersonic motion of the system through the interstellar medium. However, such bow shocks are more likely to occur in supergiant systems than in a Be star system. More detailed radio and H$`\alpha `$ imaging may help resolve the structure of the emission.
In addition to the above 3 sources, Yokogawa et al (1999) have identified a 4th system in the SMC (AX J0105-722) which appears to be associated with the the radio SNR DEM S128. So we now have 4 systems apparently associated with SNR out of the 16 probable Be/X-ray binary pulsars in the SMC.
### 5.3 Distance
The conclusion that this object is in the SMC is supported by the average velocity shift of the optical lines of 166$`\pm `$15 km/s. This compares favourably with the systemic value of 166$`\pm `$3 km/s obtained by Feast (1961) for the SMC.
## 6 Conclusions
In summary, the optical photometric measurements presented here indicate that the most likely candidate to the X-ray transient XTE J0111.2–7317 is a B1$`\pm `$1 main sequence star. It is noteworthy that this classification falls within the narrow range of spectral types in Be/X-ray binaries, namely O9–B2. Optical spectroscopic observations in the wavelength range 4000–4800 Å are encouraged in order to refine this classification. Further studies of the extended structure will also be important in identifying its nature.
### Acknowledgments
We are grateful to the very helpful staff at the SAAO for their support during these observations. All of the data reduction was carried out on the Southampton Starlink node which is funded by the PPARC. NJH is in receipt of a PPARC studentship and PR acknowledges support from the European Union through the Training and Mobility Research Network Grant ERBFMRX/CT98/0195.
In addition, we gratefully acknowledge helpful contributions from the referee, Dr. L. Kaper.
## 1 Introduction
Events with large rapidity gaps in the hadronic final state and a large momentum transfer across the gap, characterised by the presence of a hard jet on each side of the gap, have been observed in both $`p\overline{p}`$ collisions at the Tevatron and in $`\gamma p`$ collisions at HERA . Such events are unexpected in standard Regge phenomenology since the cross section is predicted to fall as $`s^{-\alpha |t|}`$, where $`\alpha \simeq 0.25`$ GeV<sup>-2</sup>, whilst events with $`|t|>1000\mathrm{GeV}^2`$ are routinely observed at the Tevatron. Clearly some other explanation must be sought. Uniquely in diffractive physics, high-$`t`$ events are amenable to the use of perturbative QCD since the gap producing mechanism is squeezed to small distances . Such calculations have been carried out within the leading logarithmic approximation of BFKL by Mueller and Tang , and it is the aim of this talk to present comparisons of these calculations with the latest data from the Tevatron. The situation is greatly complicated by the possibility that rapidity gaps formed by whatever process can be destroyed by multiple interactions between spectator partons in the colliding hadrons. Detailed comparisons made and conclusions drawn from any dynamic model of high-$`t`$ rapidity gap formation must therefore include a careful treatment of such physics. In this analysis, we use a model implemented in the PYTHIA Monte Carlo generator to simulate the effects of multi-parton interactions.
## 2 DØ data versus the BFKL pomeron
The analysis presented here was stimulated to some extent by the recent DØ measurements of the fraction of dijet events containing a large rapidity gap as a function of $`E_{T2}`$, the $`E_T`$ of the second hardest jet, and the rapidity difference between the two leading jets, $`\mathrm{\Delta }\eta `$. The DØ results are shown in figure 1. Jets are found using a cone algorithm with cone radius $`0.7`$ and the `OVLIM` parameter set to $`0.5`$. The inclusive dijet sample is defined by the following cuts:
* $`|\eta _1|,|\eta _2|>1.9`$, i.e. jets are forward or backward
* $`\eta _1\eta _2<0`$, i.e. opposite side jets
* $`E_{T2}>15`$ GeV
* $`\mathrm{\Delta }\eta >4`$, i.e. jets are far apart in rapidity.
The sub-sample of gap events is obtained by employing the further cut that there be no particles emitted in the central region $`|\eta |<1`$ with energy greater than 300 MeV. The BFKL curve is clearly ruled out by the data. The DØ BFKL curves are based on the calculation of Mueller and Tang implemented into the standard H ERWIG 5.9 release . In particular, the asymptotic cross-section of is used; in the limit $`y\mathrm{\Delta }\eta 1`$,
$$\frac{d\sigma (qq\to qq)}{dt}\approx (C_F\alpha _s)^4\frac{2\pi ^3}{t^2}\frac{\mathrm{exp}(2\omega _0y)}{(7\alpha _sC_A\zeta (3)y)^3}$$
(1)
where $`\omega _0=\omega (0)=C_A(4\mathrm{ln}2/\pi )\alpha _s`$. The $`\alpha _s^4`$ in the pre-factor runs with $`t`$ according to the two-loop beta function, $`\omega _0=0.3`$ and the $`\alpha _s`$ in the denominator $`=0.25`$. The falling of the BFKL curve with increasing $`E_{T2}`$ is driven by the running of the coupling in the pre-factor since the gap fraction goes like $`\alpha _s^4/\alpha _s^2`$.
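To illustrate why running the coupling in the prefactor makes the predicted gap fraction fall with $`E_T`$ while a frozen coupling keeps it flat, here is a minimal sketch; it uses one-loop running purely for illustration (the analyses discussed here use the two-loop beta function), and the numbers are not meant to reproduce the plotted curves:

```python
# Sketch: gap fraction ~ alpha_s^4 / alpha_s^2 falls with E_T if alpha_s runs,
# but is flat if alpha_s is frozen (e.g. at 0.17). One-loop running only.
import math

def alpha_s(Q2, nf=5, Lambda2=0.04):        # Lambda ~ 0.2 GeV, illustrative choice
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q2 / Lambda2))

for ET in (15.0, 25.0, 35.0, 45.0):
    a = alpha_s(ET ** 2)
    print(ET, a ** 4 / a ** 2, 0.17 ** 2)   # running ratio falls; frozen stays flat
```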
## 3 Key issues
In this analysis, we choose somewhat different parameters. We also use the full Mueller Tang calculation without the asymptotic approximation. This is also available in H ERWIG 5.9 and is available from the authors. We choose to fix $`\alpha _s=0.17`$. To leading logarithmic accuracy $`\alpha _s`$ is simply an unknown parameter. Higher order corrections will indeed cause the coupling to run, however it is not clear how this should be done in a consistent way. In this paper we restrict ourselves to the leading logarithmic approximation and treat the coupling as a free parameter. Moreover, we are guided by recent HERA data on the double dissociation process which can be described by the leading logarithmic BFKL formalism with $`\alpha _s=0.17`$. We also note that a fixed coupling constant was needed in order to explain the high-$`t`$ data on $`p\overline{p}`$ elastic scattering via three gluon exchange . Furthermore, NLO corrections suggest a fixed value for the leading eigenvalue of the BFKL equation, $`\omega (0)`$, which in turn suggests the use of a fixed coupling in the LLA kernel.
### Underlying events and gap survival
As mentioned above, it is critical in any estimate of gap formation rates to take into account the possibility that gaps can be destroyed by secondary scatters, which may be perturbative or non-perturbative, between spectator partons in the colliding hadrons. Several models are available , but it would be fair to say that all are as yet in a early stage of development and are not tuned to $`p\overline{p}`$ data. We choose the model as implemented in P YTHIA 6.127 . Here the probability to have several parton-parton interactions in the same collision is modelled using perturbative QCD. The probability for additional interactions is not fixed but varies according to an impact-parameter picture, where central collisions are more likely to have multiple interactions. The partons in the proton are assumed to be distributed according to a double-Gaussian as described in . There are several parameters in this model and we have used the default setting for each.<sup>1</sup><sup>1</sup>1Setting the switch MSTP(82)=4 in P YTHIA , with everything else default, will give the model as we have used it. Our strategy is to generate high-$`t`$ photon exchange events (hard BFKL pomeron exchange has not been implemented in P YTHIA ) with and without multiple interactions, and take the percentage change in the number of rapidity gap events, defined as in the DØ analysis, as the gap survival factor. We find that gap survival in this model is to first order independent of $`E_T^{jet}`$ and $`\mathrm{\Delta }\eta `$, i.e. it can be treated as a multiplicative factor. The gap survival factor $`𝒮`$ does vary strongly with centre of mass energy, which is not unexpected since the number density of partons in the colliding hadrons, and therefore the probability of having a secondary scatter, increases with energy. In summary, we find $`𝒮(1800\mathrm{GeV})=22\%`$, $`𝒮(630\mathrm{GeV})=35\%`$. Full details can be found in .
A key point to notice is the interplay between gap survival and underlying event : multiple interactions also give rise to the so-called jet pedestal and underlying event effects. This means that the jets measured in hadron-hadron collisions cannot be compared directly to e.g. predictions from fixed order perturbation theory. In Figure 2 we show jet profiles obtained from P YTHIA with (mi4) and without (mi0) multiple interactions (and with $`|\delta \varphi |<0.7`$). The proton remnant is at $`\delta \eta >0`$. It is clear that multiple interactions introduce a jet pedestal of more than 1 GeV of $`E_T`$ per unit rapidity. For comparison, also shown is the jet pedestal from H ERWIG . We note that H ERWIG predicts a greater amount of energy outside the jet cone than P YTHIA without multiple interactions. Again, a full discussion of these differences can be found in .
In the DØ jet measurements the excess $`E_T`$ from the underlying event is taken into account by correcting the jet $`E_T`$ using minimum bias data. In particular, the correction is determined by looking at the $`E_T`$ flow in regions away from the jets. The correction is made by subtracting approximately 1 GeV from the $`E_T`$ of each reconstructed jet . In particular, in the gap fraction measurement, this subtraction is performed for all jets, including those in gap events. But, requiring a large rapidity gap also selects events without multiple interactions, where the jet pedestal is absent, or at least much smaller; multiple interactions destroy gaps, and therefore a gap event cannot have a multiple interaction. Since jet cross sections fall faster than $`1/E_T^4`$, such a correction can decrease the measured jet rate by up to 30% for 18 GeV jets. Our contention therefore is that the jets in gap events should not be corrected for underlying event, and therefore the gap fraction should rise less steeply with $`E_T`$ than in figure 1.
## 4 Gap fractions
Figures 3 and 4 show our results for the gap fractions as functions of $`\mathrm{\Delta }\eta `$ and $`E_{T2}`$ respectively. The stars are the H ERWIG BFKL simulation with fixed $`\alpha _s=0.17`$, with $`1`$ GeV subtracted from each jet in order to simulate the DØ underlying event correction and the open circles are the DØ data. The gap fractions are constructed using a standard P YTHIA QCD simulation without colour singlet exchange, and without multiple interactions. We have used both CTEQ2M and CTEQ3M parton distribution functions , and have found the differences to be small. Our philosophy is that the DØ data have been corrected for the effects of multiple interactions in non-singlet exchange events, and we should therefore generate none, whereas we must undo the erroneous correction to the colour singlet sample. The combination of fixing $`\alpha _s`$ and correcting the gap events erroneously for multiple interactions produces the rise of the gap fraction at low jet $`E_T`$. The solid circles show the gap fraction using a running $`\alpha _s`$ in the BFKL sample. Even with the underlying event correction, this sample is unable to fit the data. The overall normalisation of the simulated gap fractions is multiplied by a factor of 0.6. That this is a reasonable thing to do can be appreciated once it is realised that our results have not been fitted to the data and that the overall normalisation is acutely sensitive to the magnitude of $`\alpha _s`$. Furthermore, the overall normalisation of the BFKL cross-section is uncertain since, within the leading logarithmic approximation, one does not know a priori the scale at which to evaluate the leading logarithms. Given these points, we conclude that the DØ data are in agreement with the leading order BFKL result. Figure 5 shows our result for the gap fraction as a function of $`\mathrm{\Delta }\eta 2\eta ^{}`$ compared to the CDF data . Note that CDF do not attempt to correct their jets to include the effect of an underlying event. We therefore generate the P YTHIA non-singlet sample with multiple interactions (labelled mi4), and do not perform the 1 GeV / jet subtraction from the H ERWIG BFKL sample. In this plot, our theory points are obtained using a renormalisation factor of unity (compared to 0.6 in the DØ case). We then find reasonable agreement with the data except at the larger values of $`\eta ^{}`$ where we are quite unable to explain a fall in the $`\eta ^{}`$ distribution. Recall however that DØ do not see a fall at large $`\mathrm{\Delta }\eta `$. Further clarification of the situation will require an increase in statistics.
We have also computed the ratio of the gap fractions at 630 GeV and 1800 GeV. We find that, even including gap survival effects, $`R(630/1800)\sim 1`$ at the parton level. When hadronisation effects are taken into account, however, we find that the ratio rises significantly to $`\sim 3`$, with a strong dependence on $`\mathrm{\Delta }\eta `$. DØ find $`R(630/1800)=3.4\pm 1.2`$, and CDF find $`R(630/1800)=2.4\pm 0.9`$. In the DØ case the effect may be attributed to the different parton $`x`$ ranges of the 630 GeV and 1800 GeV measurements (although we note that the CDF result is calculated at fixed $`x`$). The restriction $`x<1`$ forces the gap and non-gap cross-sections to fall to zero at some maximum $`\mathrm{\Delta }\eta `$, $`\mathrm{\Delta }\eta _{\mathrm{max}}`$. Now, the colour connection that exists between the jets in the non-gap sample drags the jets closer together in rapidity. This has a small effect away from $`\mathrm{\Delta }\eta _{\mathrm{max}}`$ (since the $`\mathrm{\Delta }\eta `$ spectrum is roughly flat); however, as $`\mathrm{\Delta }\eta \to \mathrm{\Delta }\eta _{\mathrm{max}}`$ it leads to a more rapid vanishing of the non-gap cross-section than occurs in the gap cross-section. This effect, combined with the fact that $`\mathrm{\Delta }\eta _{\mathrm{max}}(630\mathrm{GeV})<\mathrm{\Delta }\eta _{\mathrm{max}}(1800\mathrm{GeV})`$, leads to an enhancement of the measured 630 GeV gap fraction at large $`\mathrm{\Delta }\eta `$ at the hadron level, and hence the larger value of $`R(630/1800)`$.
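A standard leading-order estimate (added here only for orientation) makes the kinematic origin of $`\mathrm{\Delta }\eta _{\mathrm{max}}`$ explicit: for two jets of equal $`E_T`$ at rapidities $`\overline{\eta }\pm \mathrm{\Delta }\eta /2`$ the parton momentum fractions are $`x_{1,2}=(2E_T/\sqrt{s})e^{\pm \overline{\eta }}\mathrm{cosh}(\mathrm{\Delta }\eta /2)`$, so the constraint $`x_{1,2}<1`$ is least restrictive at $`\overline{\eta }=0`$ and gives

$$\mathrm{\Delta }\eta _{\mathrm{max}}=2\mathrm{cosh}^{-1}\left(\frac{\sqrt{s}}{2E_T}\right),$$

which is manifestly smaller at $`\sqrt{s}=630`$ GeV than at 1800 GeV for the same jet $`E_T`$.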
## 5 Conclusions and future possibilities
We have explicitly demonstrated that the Tevatron data on the gaps-between-jets process at both 630 GeV and 1800 GeV are in broad agreement with the predictions obtained using the leading order BFKL formalism. However, we are not able to explain the behaviour of the CDF gap fraction at large $`\mathrm{\Delta }\eta `$. Agreement is obtained using the same fixed value of $`\alpha _s=0.17`$ as was used to explain the recent HERA data on high-$`t`$ double diffraction dissociation.
Care must be taken in the interpretation of our findings, however. The BFKL formalism itself suffers from being evaluated only to leading logarithmic accuracy. The uncertainties of the overall normalisation which follow will not be removed until an understanding of BFKL dynamics at non-zero $`t`$ beyond the leading logarithmic approximation is achieved.
An understanding of the effects of underlying event and its impact on gap survival is crucial to the interpretation of the gaps between jets data, and indeed diffractive data as a whole.
As pointed out in , the gap fraction defined in terms of a region void of hadronic activity is not strictly infrared safe. A better observable defines the gap as a region that does not contain any jets with transverse momentum above some perturbatively large scale. Work along these lines has also been performed in .
One major disadvantage of the gaps between jets process arises from the need to measure both jets since this limits the reach in rapidity. In , it was suggested to focus instead on the double dissociation sample (the gaps between jets events form a subsample of this generally much larger sample). By dropping the requirement to observe jets one not only gains in rapidity reach and statistics but also from the reduced systematics associated with this more inclusive observable.
## Acknowledgements
We should like to thank Andrew Brandt, Dino Goulianos, Mark Hayes, Mike Seymour and Torbjörn Sjöstrand for helpful discussions. This work was supported by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12-MIHT). BC would like to thank the UK’s Particle Physics and Astronomy Research Council for support. |
# Quark Confinement Physics in Quantum Chromodynamics
## 1 Introduction
Recent studies of lattice QCD in the maximally abelian (MA) gauge suggest remarkable properties of the QCD vacuum, such as abelian dominance and monopole condensation, which support the dual superconductor picture of the QCD vacuum as described by the dual Ginzburg-Landau (DGL) theory. In the MA gauge, QCD is reduced to an abelian gauge theory including color-magnetic monopoles. According to the lattice QCD results, nonperturbative quantities such as the string tension and the chiral condensate are almost entirely reproduced by the diagonal gluon part alone, while the off-diagonal gluon does not contribute to such long-range physics; this is abelian dominance. Furthermore, the world-line of the color-magnetic monopole in the confinement phase appears as a global network, which indicates monopole condensation. The DGL theory can then be constructed by extracting the diagonal gluon as the relevant degrees of freedom and taking monopole condensation into account. Based on the DGL theory, quark confinement is explained by flux-tube formation through the dual Meissner effect, and chiral symmetry breaking is described as a function of the monopole condensate.
In this paper, we focus on this dual superconductor picture of the QCD vacuum in the MA gauge and confirm the connection between nonperturbative QCD and the DGL theory. We then apply the DGL theory to hadron physics, in particular to the analysis of scalar glueball properties.
## 2 Abelian dominance and monopole condensation in the MA gauge
Abelian dominance and monopole condensation in the MA gauge are the key concepts connecting QCD with the DGL theory; recent lattice QCD simulations demonstrate the former for the string tension and the chiral condensate, and the latter as the large clustering of the monopole world-line. Even so, the subject deserves further study, since the physical essence of abelian dominance is not yet understood. Furthermore, one should not jump to the conclusion that the global network of the monopole world-line is by itself evidence of monopole condensation; this should also be evaluated quantitatively.
To answer these questions, we first study the gluon propagator in the MA gauge and evaluate the mass of the off-diagonal gluon field using the SU(2) lattice QCD simulation. This study is based on the following idea. If the off-diagonal gluon has a mass like a massive vector boson, its propagator $`G_{\mu \mu }^{\mathrm{off}}(r)`$ would be described by the Yukawa-type function $`\mathrm{exp}(-M_{\mathrm{off}}r)/r^{3/2}`$, and if we find linear behavior of $`\mathrm{ln}(r^{3/2}G_{\mu \mu }^{\mathrm{off}}(r))`$, the mass $`M_{\mathrm{off}}`$ can be extracted from its slope. As a result, we find that the off-diagonal gluon has a large mass $`M_{\mathrm{off}}\simeq 1`$ GeV, as shown in Fig. 1. That is to say, the interaction range of the off-diagonal gluon is limited to the short distance corresponding to its inverse mass $`M_{\mathrm{off}}^{-1}\simeq 0.2`$ fm. Thus, the off-diagonal gluon does not contribute to the long-range physics, which predicts general infrared abelian dominance in the MA gauge.
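A minimal sketch of this extraction (with invented propagator values standing in for the lattice data) is the following: if $`G(r)\propto \mathrm{exp}(-Mr)/r^{3/2}`$, then $`\mathrm{ln}(r^{3/2}G(r))`$ is linear in $`r`$ and $`-M`$ is its slope.

```python
import numpy as np

# Invented propagator values G(r), standing in for the lattice data.
r = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])              # fm
G = np.array([4.11, 1.36, 0.535, 0.232, 0.107, 0.052])    # arbitrary units

# For a Yukawa-type form G(r) ~ exp(-M r) / r**1.5,
# ln(r**1.5 * G) is linear in r with slope -M.
y = np.log(r**1.5 * G)
slope, intercept = np.polyfit(r, y, 1)
M_off = -slope                                            # in fm^-1
print(f"M_off ~ {M_off:.2f} fm^-1 ~ {M_off * 0.1973:.2f} GeV")
```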
As for monopole condensation, we study the inter-monopole potential and evaluate the dual gluon mass using the SU(2) lattice QCD simulation. The dual gluon field $`B_\mu `$ is introduced so as to satisfy $`\partial _\mu B_\nu -\partial _\nu B_\mu ={}^{*}F_{\mu \nu }`$ and $`\partial _\mu {}^{*}F_{\mu \nu }=k_\nu `$. Here, $`k_\nu `$ is the color-magnetic monopole current. The idea used here is quite similar to the evaluation of the off-diagonal gluon mass. If monopole condensation has occurred, the dual gluon becomes massive due to the dual Higgs mechanism. Then, its mass $`m_B`$ can be extracted by fitting the Yukawa potential $`V_\mathrm{M}(r)\propto \mathrm{exp}(-m_Br)/r`$, since the dual gluon behaves as a massive vector boson. From this analysis, we find that the dual gluon acquires the mass $`m_B\simeq 0.5`$ GeV, as shown in Fig. 2, which provides quantitative evidence of monopole condensation.
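The analogous extraction of the dual gluon mass, again with invented potential values in place of the lattice data and with the sign and normalisation chosen only for illustration, is a simple nonlinear fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def yukawa(r, A, m, c):
    # Screened (Yukawa) potential; m plays the role of the dual gluon mass (fm^-1).
    return -A * np.exp(-m * r) / r + c

# Invented inter-monopole potential values standing in for the lattice data.
r = np.array([0.3, 0.5, 0.7, 0.9, 1.1, 1.3])                      # fm
V = np.array([-0.394, -0.143, -0.062, -0.029, -0.015, -0.007])    # GeV

pars, _ = curve_fit(yukawa, r, V, p0=(0.2, 2.0, 0.0))
print(f"m_B ~ {pars[1]:.2f} fm^-1 ~ {pars[1] * 0.1973:.2f} GeV")
```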
As an interesting application of abelian dominance for the inter-quark potential, we can calculate the quark single-particle potential $`U(x)`$ for the low-lying hadron ($`m_q`$=300 MeV). Here, $`U(x)`$ is defined by the superposition of the inter-quark potential $`V(r)=-c/r+\sigma r`$ ($`\sigma \simeq 1`$ GeV/fm, $`c\simeq 0.4`$) with the weight of the color charge distribution $`\rho (𝐱)=\overline{\psi }_q\gamma _0\vec{H}\psi _q\cdot \vec{Q}`$ as $`\vec{Q}^2U(x)=\int d^3x^{}\rho (𝐱^{})V(|𝐱-𝐱^{}|)`$. Solving the self-consistent equations between the quark wave function and the quark potential, we obtain the color charge distribution and the quark single-particle potential as shown in Figs. 3 and 4. The color charge distribution is spread over an intermediate region $`r\simeq 0.5`$ fm. The quark single-particle potential is found to be flat at short distances, which can be connected with the bag model.
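The defining superposition integral for $`U(x)`$ can be evaluated directly once a charge distribution is specified; the sketch below uses a Gaussian $`\rho `$ of width 0.3 fm as a stand-in for the self-consistent distribution and drops the colour factors, so it only illustrates the construction, not the actual result.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_str, c, hbar_c = 1.0, 0.4, 0.1973   # GeV/fm, dimensionless, GeV*fm

def V(r):
    # Inter-quark potential V(r) = -c/r + sigma*r (r in fm, result in GeV).
    return -c * hbar_c / r + sigma_str * r

# Stand-in colour charge distribution: normalised Gaussian of width 0.3 fm.
width = 0.3
samples = rng.normal(scale=width, size=(200_000, 3))   # points drawn from rho itself

# U(x) = Int d^3x' rho(x') V(|x - x'|); sampling from rho turns this into a mean.
for R in (0.0, 0.3, 0.6, 1.0):                          # fm
    x = np.array([0.0, 0.0, R])
    U = np.mean(V(np.linalg.norm(samples - x, axis=1)))
    print(f"U({R:.1f} fm) ~ {U:.3f} GeV")
```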
## 3 The DGL theory and application to the scalar glueball analysis
The DGL theory can be constructed by taking into account abelian dominance and monopole condensation in the MA gauge in QCD. The DGL Lagrangian is given as
$$\mathcal{L}_{\mathrm{DGL}}=-\frac{1}{4}\left(\partial _\mu \vec{B}_\nu -\partial _\nu \vec{B}_\mu -\frac{1}{n\cdot \partial }\epsilon _{\mu \nu \alpha \beta }n^\alpha \vec{j}^\beta \right)^2+\underset{\alpha =1}{\overset{3}{\sum }}\left[\left|(\partial _\mu +ig\vec{\epsilon }_\alpha \cdot \vec{B}_\mu )\chi _\alpha \right|^2-\lambda \left(\left|\chi _\alpha \right|^2-v^2\right)^2\right],$$
(1)
where $`\vec{B}_\mu `$ and $`\chi _\alpha `$ denote the dual gluon field with two components $`(B_\mu ^3,B_\mu ^8)`$ and the complex scalar monopole field, respectively. The quark field is included in the current $`\vec{j}_\mu =e\overline{q}\gamma _\mu \vec{H}q`$. Here, $`\vec{\epsilon }_\alpha `$ is a root vector of the SU(3) algebra, and $`n^\mu `$ denotes an arbitrary constant 4-vector, which corresponds to the direction of the Dirac string. The gauge coupling $`e`$ and the dual gauge coupling $`g`$ satisfy the relation $`eg=4\pi `$.
Monopole condensation is characterized by $`\langle 0|\chi _\alpha |0\rangle =v`$, and the dual gluon field acquires the mass $`m_B=\sqrt{3}gv\simeq 0.5`$ GeV through the dual Higgs mechanism. The DGL theory thus describes the QCD vacuum as a dual superconductor. Quark confinement is explained by the dual Meissner effect, which forces the color-electric field between the quarks to form a flux-tube configuration and leads to the linear inter-quark potential. This flux-tube also provides an intuitive picture of hadrons. If we apply this flux-tube picture to the glueball, it is identified with a flux-tube ring, since the glueball is considered to have no valence quarks, and the lowest state is the scalar glueball. From the flux-tube ring solution in the DGL theory, we find the mass and the size of the scalar glueball to be 1.6 GeV and 0.5 fm, respectively. It is interesting to note that this mass is consistent with the recent lattice QCD results, $`M(0^{++})`$ = 1.50 - 1.75 GeV.
Here, we find another aspect of the scalar glueball in the DGL theory, which is closely related to the dual Higgs mechanism. Taking monopole condensation into account, the monopole field can be defined as $`\chi _\alpha \equiv \left(v+\stackrel{~}{\chi }_\alpha /\sqrt{2}\right)e^{i\eta _\alpha /v}`$, where $`\stackrel{~}{\chi }_\alpha `$ and $`\eta _\alpha `$ are real variables denoting the magnitude of the vacuum fluctuation and the phase, respectively. Here, $`\alpha `$=1, 2, 3 labels the color-magnetic charge of the monopole field, dual-red, dual-blue and dual-green. Since the origin of the monopole field is the off-diagonal gluon field in the MA gauge in QCD, this field $`\stackrel{~}{\chi }_\alpha `$ would represent the scalar gluonic excitation corresponding
to the dual Higgs particle. In particular, the Weyl symmetric monopole field defined by $`\stackrel{~}{\chi }^{(0)}\equiv (\stackrel{~}{\chi }_1+\stackrel{~}{\chi }_2+\stackrel{~}{\chi }_3)/\sqrt{3}`$ is a color-singlet field, so that it can be regarded as the scalar glueball with the mass $`m_\chi =2\sqrt{\lambda }v\simeq 1.6`$ GeV. Although the relation to the flux-tube ring is not clear, this can be considered as another feature of the scalar glueball.
Here, we concentrate on the calculation of the $`\stackrel{~}{\chi }^{(0)}q\overline{q}`$ vertex function, which plays an important role in understanding how the scalar glueball interacts with quarks. The lowest diagram is shown in Fig. 5. The scalar glueball first couples to the dual gluon, and the dual gluon then interacts with the quarks. We show the typical behavior of the vertex function in the scalar channel, as a function of the coupled quark momentum, in Fig. 6. Here, we have set $`pq`$=0 for simplicity. We find that the heavy quark ($`m_c\simeq 1.6`$ GeV) interacts with the scalar glueball about four times more strongly than the light quarks ($`m_{u,d,s}\simeq 0.3`$–0.5 GeV). This seems to indicate a flavor dependence of the scalar glueball interaction. It is interesting to study how this interaction property is reflected in the scalar glueball decay into two pseudo-scalar mesons and in the glueball–quarkonium mixing states, which we are now investigating.
Fig. 5(upper). $`\stackrel{~}{\chi }^{(0)}q\overline{q}`$ vertex.
Fig. 6(right). The vertex function of $`\stackrel{~}{\chi }^{(0)}q\overline{q}`$ in the scalar channel vs. coupled quark momentum. |
# The Role of Splayed Disorder and Channel Flow on the Dynamics of Driven 3D Vortices
## 1 Introduction
In recent years, experiments have shown that the critical currents of high-temperature superconductors can be greatly enhanced by the introduction of long, straight defects into the material, particularly if these defects are splayed at some small angle with respect to the crystalline c-axis. These experiments have generally been performed such that the magnetic field is less than the matching field, although recent efforts have explored the opposite case. Explanations of this effect are incomplete. It has been suggested that random misalignment of the pins inhibits large-scale, low-energy excitations, and that entanglement further prohibits vortex motion. At the same time, tilted columns yield an increase in the bending energy of the vortices and promote hopping at defect crossings. Our results indicate that the physics of splay enhancement may be missing an important ingredient. We find that slightly splaying the columnar defects closes off channels through which vortices can flow, producing enhanced critical currents.
## 2 Model
We have conducted molecular dynamics simulations of driven, interacting vortices in three spatial dimensions. The forces on the vortices result from the vortex-vortex interaction, the vortex-defect interaction (modeled as a short-range attractive potential), elastic bending, thermal Langevin forces, damping, and an external Lorentz force. The energy scales are set by $`V_{vortex}=(\mathrm{\Phi }_0/4\pi \lambda )^2`$, with $`\mathrm{\Phi }_0`$ the flux quantum and $`\lambda `$ the magnetic penetration depth. Periodic boundary conditions are imposed in the a-b plane so as to maintain a constant flux density, while open boundary conditions are imposed along the c-axis. We have chosen to simulate 30 vortices on 20 planes in a $`16\lambda \times 16\lambda `$ periodic cell, with 20 defects per plane. At this vortex density, the vortex repulsion is strong enough to prohibit vortex entanglement. The defects were arranged as vertical columns, as columns splayed at an angle $`\mathrm{\Theta }`$ with respect to the c-axis, or as uncorrelated point defects. The defect radius is taken to be $`\lambda `$, the inter-planar spacing is 12$`\AA `$, and the coherence length $`\xi `$ is 24$`\AA `$.
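A heavily simplified, schematic version of a single overdamped update step is given below; the force laws, units, and parameter values are placeholders rather than the actual simulation code, and the defect positions are drawn at random (aligning them across planes would produce columns, offsetting them linearly with plane index would produce splay).

```python
import numpy as np

rng = np.random.default_rng(0)
Nv, Np, Nd = 30, 20, 20                 # vortices, planes, defects per plane
L, dt, eta, T = 16.0, 0.01, 1.0, 0.05   # box size (lambda), time step, damping, temperature
f_L = np.array([0.05, 0.0])             # Lorentz (driving) force per pancake element

pos  = rng.uniform(0, L, size=(Nv, Np, 2))   # in-plane pancake positions
pins = rng.uniform(0, L, size=(Np, Nd, 2))   # defect positions in each plane

def min_image(d):
    return d - L * np.rint(d / L)

def vv_force(x):
    # Schematic in-plane vortex-vortex repulsion (~1/r), minimum-image convention.
    d = min_image(x[:, None, :] - x[None, :, :])
    r2 = (d**2).sum(-1) + np.eye(Nv)    # the identity avoids division by zero on the diagonal
    return (d / r2[..., None]).sum(axis=1)

def pin_force(x, p):
    # Short-range attraction to the defects of plane p (range of order the defect radius).
    d = min_image(pins[p][None, :, :] - x[:, None, :])
    r2 = (d**2).sum(-1)
    return ((0.2 * np.exp(-r2))[..., None] * d).sum(axis=1)

def elastic(pos):
    # Harmonic coupling of each pancake element to its partners in adjacent planes.
    f = np.zeros_like(pos)
    f[:, 1:]  += min_image(pos[:, :-1] - pos[:, 1:])
    f[:, :-1] += min_image(pos[:, 1:] - pos[:, :-1])
    return f

el = elastic(pos)
for p in range(Np):                     # one overdamped Langevin step
    F = vv_force(pos[:, p]) + pin_force(pos[:, p], p) + el[:, p] + f_L
    F += np.sqrt(2 * eta * T / dt) * rng.normal(size=(Nv, 2))
    pos[:, p] = (pos[:, p] + dt * F / eta) % L
```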
## 3 Results
A typical result is shown in Fig. 1 for columnar, point, and splayed defects at low temperature. The current density is normalized by the BCS depairing current density, while the resistivity is normalized by the Bardeen-Stephen flux-flow resistivity. For columnar defects at very low currents, vortices are either pinned to defects or held in place by their mutual repulsion. As the current density increases, some of the interstitial vortices shear off into channels of vortex flow. Point defects leave no open channels for vortices, but the scattering of the defects reduces their ability to pin vortices. Splayed defects also leave no channels, and, at small angles, are able to effectively pin entire vortices.
Fig. 2 shows the critical current density $`J_c(T)`$ for each configuration, where $`J_c`$ is defined by a threshold criterion $`\rho (J_c)=0.05`$. For point defects, $`J_c`$ drops rapidly with increasing temperature, reflecting their weak pinning efficiency. For columnar defects, $`J_c`$ drops slowly with increasing temperature as more channels open for the vortices. Splayed defects have the largest $`J_c`$ for all temperatures.
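The threshold definition of $`J_c`$ amounts to a simple interpolation of the simulated $`\rho (J)`$ curve; the values below are illustrative only.

```python
import numpy as np

J   = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # drive, in units of the depairing current
rho = np.array([0.00, 0.01, 0.03, 0.09, 0.20])   # resistivity / flux-flow resistivity

# J_c is the drive at which the normalised resistivity first reaches the 0.05 threshold.
Jc = np.interp(0.05, rho, J)
print(f"J_c ~ {Jc:.3f}")
```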
We consider the variation in $`J_c`$ for different splay angles at low temperature in Fig. 3. $`J_c`$ shows an enhancement at small angles due to the reduction of available channels. For large angles, the vortices can no longer accommodate to the defects, and splayed defects become similar to point defects. The value of the maximum, roughly 5, compares extremely well to the number observed in measurements in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub>.
In summary, our simulations show that the critical current enhancement from splay is the result of reduced vortex channel flow rather than entanglement for this vortex density. We are exploring other regions of parameter space to determine optimal conditions for splay enhancement. |
# The role of advection in the accreting corona model for active galactic nuclei and Galactic black holes
## 1 Introduction
There is direct observational evidence that the gas surrounding central black holes in active galactic nuclei (AGN) and in galactic black holes (GBH) consists of two phases: a colder, optically thick phase and a hotter, optically thin phase (for a review, see Mushotzky, Done & Pounds 1993; Tanaka & Lewin 1995; Madejski 1999). The emission of radiation is clearly powered by accretion onto the black hole. However, it is still not clear which of the two phases is responsible for accretion.
In the classical disc/corona models (e.g. Liang & Price 1977; Bisnovatyi-Kogan & Blinnikov 1977; Haardt & Maraschi 1991) the accretion proceeds through the disc and the coronal gas does not contribute significantly to the angular momentum transfer, presumably because of its magnetic coupling to the disc.
In clumpy accretion flow models the accretion proceeds predominantly through the cold clumps of gas (Collin–Souffrin et al. 1996, Czerny & Dumont 1998; Krolik 1998; see also Torricelli-Ciamponi & Courvoisier 1998).
However, there are also models in which the hot gas carries most of the mass. The division of the flow into the two phases is predominantly radial in models based on advection dominated accretion flow (ADAF) solutions (e.g. Ichimaru 1977; Abramowicz et al. 1995; Narayan, Kato & Honma 1997; Esin, McClintock & Narayan 1997), in the Compton cooled solutions for ion tori (Shapiro, Lightman & Eardley 1976; hereafter SLE), or in models considering both cooling mechanisms (Björnsson et al. 1996). In those models the accretion proceeds through the cold accretion disc in the outer region and through the hot plasma region in the inner region.
In this paper we discuss the model of an accreting corona. Initial formulation of the basic assumptions was given by Życki, Collin-Souffrin & Czerny (1995) and the final formulation of the model was outlined by Witt, Czerny & Życki (1997; hereafter WCZ). In this model the accretion proceeds both through a disc and a corona, in proportions determined by the model and varying with the distance from the black hole. The model was applied to Nova Muscae 1991 (Czerny, Witt & Życki 1996) and AGN (Czerny, Witt & Życki 1997).
In this paper we address the problem of advection in the corona. An optically thin accreting corona should show similar general behavior as a general optically thin flow not accompanied by the disc, i.e. we might expect both advection dominated solutions and radiatively cooled solutions. Advection was included in the model by WCZ and was found to be always negligible. Here we explain the nature of this phenomenon.
## 2 Model
### 2.1 Corona structure
We assume that the corona itself accretes, i.e. that the energy in the corona is due to the direct release of the gravitational energy in the hot phase, without the necessity of having a mediator (e.g. magnetic field) between the disc and the corona. We further assume that the energy release can be described by the $`\alpha `$ viscosity prescription of Shakura & Sunyaev (1973), with the energy generation rate proportional to the gas pressure (we neglect radiation pressure in the corona because the corona is optically thin).
We assume a two-temperature plasma in the corona as in the classical paper of SLE, i.e. the ion temperature is different from the electron temperature. The energy balance is computed assuming that the release of gravitational energy heats the ions, the Coulomb coupling transfers this energy to the electrons, and finally the electrons cool down by the inverse Compton process, with the disc emission acting as a source of soft radiation flux. The hot corona is radiatively coupled to the disc, as described by Haardt & Maraschi (1991). We assume isotropic emission within the corona ($`\eta =1/2`$) and for the (energy integrated) disc albedo we adopt a value $`a=0.2`$. Compared to our previous paper (WCZ) we now use an accurate prescription for the Compton amplification factor, $`A`$ (defined by $`F_\mathrm{c}=AF_{\mathrm{soft}}`$), namely we compute $`A`$ from Monte Carlo simulations of Comptonization. Our Monte Carlo code employs the method described by Pozdnyakov, Sobol & Sunyaev (1983) and Górecki & Wilczewski (1984). Assuming slab geometry (Thomson optical depth $`\tau _{\mathrm{es}}`$ and electron temperature $`T_\mathrm{e}`$) and the soft photon spectrum as a black body of temperature $`T_\mathrm{s}`$, we compute $`A`$ on a grid of $`T_\mathrm{e}`$, $`\tau _{\mathrm{es}}`$ and $`T_\mathrm{s}`$ and interpolate for values of interest at each radius.
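One possible way of organising this tabulation and lookup (the grid values below are placeholders, not our Monte Carlo results) is a regular-grid interpolation:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder table of Compton amplification factors A(T_e, tau_es, T_s);
# in the actual calculation these entries come from the Monte Carlo simulations.
Te  = np.array([5e8, 1e9, 2e9])          # K
tau = np.array([0.05, 0.1, 0.2, 0.4])
Ts  = np.array([1e5, 1e6, 1e7])          # K
A_table = 1.0 + 2.0 * (Te[:, None, None] / 1e9) * tau[None, :, None] * np.ones((1, 1, Ts.size))

A_interp = RegularGridInterpolator((Te, tau, Ts), A_table)
print(A_interp([[1.5e9, 0.15, 3e6]]))    # A at an intermediate point of the grid
```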
We neglect the dynamical term in the corona since it was shown to be relatively unimportant (WCZ). Therefore we can assume that the corona is in hydrostatic equilibrium and instead of solving original complex differential equations we adopt a set of simplified equations given in Appendix D of WCZ, with modifications concerning the advection and the Compton amplification factor. We also neglect the effect of the vertical outflow of the gas from the corona discussed by WCZ, i.e. we assume that the total accretion rate (the sum of accretion rates through the disc and the corona) is constant, independent of radius.
### 2.2 Disc–corona transition
The main feature of the model is that the division of the accretion flow into the hot and cold phases is not arbitrary. The thermal instability in an irradiated medium (Krolik, McKee & Tarter 1981) results in its spontaneous stratification into two stable phases. The criterion for the transition from the cold to the hot phase is formulated in terms of a specific value of the ionization parameter $`\mathrm{\Xi }`$ which we define as
$$\mathrm{\Xi }\equiv \frac{\eta F_\mathrm{c}}{cP_{\mathrm{gas}}}$$
(1)
where $`\eta F_\mathrm{c}`$ is the fraction of the coronal flux directed towards the disc and $`P_{\mathrm{gas}}`$ is the coronal gas pressure (see also Krolik 1998). Following Begelman, McKee & Shields (1983) we adopt the following scaling,
$$\mathrm{\Xi }=0.65(T_\mathrm{e}/10^8\mathrm{K})^{3/2}$$
(2)
where $`T_\mathrm{e}`$ is the electron temperature of the corona at a given radius.
This description of the disc/corona transition can be easily argued for in a qualitative way. The essence of the transition is a change from Compton cooling mechanism to atomic cooling, including bremsstrahlung. Therefore, the transition zone corresponds to certain assumed contribution of bremsstrahlung to the total cooling. The criterion (Eq. 2) applies accurately to the systems with either Compton heating of the corona or a heating proportional to the density and fixes the bremsstrahlung contribution to 2/3 of the total cooling. Similar criterion formulated by Krolik (1998) gives the value of 3/7. Since the relative bremsstrahlung contribution decreases rapidly with the departure from the transition zone into the corona, while the pressure remains roughly constant, we use this criterion as the basic criterion for pressure and we neglect bremsstrahlung as a cooling mechanism (see also Krolik 1998).
This additional equation enables us to actually determine the fraction of energy, $`f`$, which is liberated in the corona (N.B. in our previous paper WCZ we used $`\xi \equiv 1-f`$). Since in our model $`f`$ also describes the fraction of the mass accreting through the corona, we have
$$\dot{M}_\mathrm{c}(r)=f(r)\dot{M};\dot{M}_\mathrm{d}(r)=[1-f(r)]\dot{M}$$
(3)
where $`\dot{M}_\mathrm{d}`$, $`\dot{M}_\mathrm{c}`$ and $`\dot{M}`$ are the disc, coronal and the total mass accretion rates. The total accretion rate does not depend on $`r`$ if mass loss (e.g. through a wind as in WCZ) is neglected.
The formulation of the model allows for computing the corona structure independently from the disc internal structure. The disc/corona coupling is through the surface pressure, $`P_{\mathrm{gas}}`$, the coronal radiation flux, $`F_\mathrm{c}`$, and the disc soft flux, $`F_{\mathrm{soft}}`$, uniquely specified by $`f`$ and the albedo. There is no need for subsequent iterations between the computations of the corona structure and the disc vertical structure as long as the local disc emission is well approximated by a blackbody (more generally: a thermal emission with a constant ratio of the colour and effective temperatures) and the albedo is fixed. When the local corona parameters are determined, they provide the surface boundary conditions for the equations of the cold disc structure. The disc vertical structure can be solved if the viscous transfer within the disc is specified (Różańska et al. 1999). The imposed boundary conditions are automatically satisfied by the model, and the disc structure is determined uniquely.
The radial variation of the relative proportion of the disc and coronal accretion flows may mean that either a fraction of the coronal mass cools and settles down on the disc surface thus joining the disc flow instead of falling radially into black hole, or a fraction of the disc flow evaporates from the disc surface and joins the coronal flow. These changes are forced by the requirement of the hydrostatic and thermal equilibrium between the disc and the corona at each radius. However, the dynamics of this phenomenon is beyond the scope of our present model.
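The closure provided by the transition criterion can be reduced to a single scalar condition at each radius. The snippet below encodes only Eqs. (1) and (2); the remaining corona equations (energy balance, Coulomb coupling, hydrostatic equilibrium), which supply $`F_\mathrm{c}`$, $`P_{\mathrm{gas}}`$ and $`T_\mathrm{e}`$ as functions of the unknown $`f`$, are those of Appendix D of WCZ and are not reproduced here.

```python
C_LIGHT = 2.998e10            # cm s^-1

def ionization_parameter(F_c, P_gas, eta=0.5):
    # Eq. (1): Xi = eta F_c / (c P_gas), with F_c in erg s^-1 cm^-2 and P_gas in dyn cm^-2.
    return eta * F_c / (C_LIGHT * P_gas)

def xi_critical(T_e):
    # Eq. (2): value of Xi at the cold/hot transition, T_e in K.
    return 0.65 * (T_e / 1e8) ** 1.5

def transition_residual(F_c, P_gas, T_e):
    # Vanishes when the corona base sits exactly at the thermal instability;
    # in the full model this is the condition solved for f at each radius.
    return ionization_parameter(F_c, P_gas) - xi_critical(T_e)

# Arbitrary trial values, shown only to illustrate the call signature.
print(transition_residual(1e21, 1e10, 1e9))
```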
### 2.3 Advection
In the present paper we include the advection term in the corona in all computations. The advection is described as in WCZ, i.e. the energy balance takes the form
$$F_\mathrm{c}=F_{\mathrm{CC}}+F_{\mathrm{adv}}$$
(4)
and
$$4\pi r^2F_{\mathrm{adv}}=f(r)\dot{M}c_s^2\delta ;\delta =\frac{d\mathrm{ln}P}{d\mathrm{ln}r}-\frac{5}{2}\frac{d\mathrm{ln}T_\mathrm{i}}{d\mathrm{ln}r},$$
(5)
where $`F_\mathrm{c}`$ is the energy flux generated in the corona, $`F_{\mathrm{CC}}`$ is the Compton cooling of the corona by the soft disc photons and $`c_s`$ is the sound velocity in the corona. Equation 5 can be converted to give
$$\frac{F_{\mathrm{adv}}}{F_\mathrm{c}}=\delta \frac{T_\mathrm{i}}{T_{\mathrm{vir}}},$$
(6)
where $`T_{\mathrm{vir}}`$ is the virial temperature,
$$T_{\mathrm{vir}}\equiv \frac{GM}{r}\frac{m_\mathrm{H}}{k}.$$
(7)
For the numerical solution of WCZ, $`\delta =0.75`$ (see Appendix C in that paper). In general, however, $`\delta `$ is a function of radius and should be computed consistently. In the next Section we will show solutions for a number of fixed values of $`\delta `$, while in Section 4 we will discuss solutions with $`\delta `$ determined consistently through iterations.
The parameters of the model are: the viscosity parameter $`\alpha `$ in the corona and the dimensionless accretion rate $`\dot{m}`$ measured in the Eddington units
$$\dot{M}_{\mathrm{Edd}}=3.52\frac{M}{10^8\mathrm{M}_{\odot }}[\mathrm{M}_{\odot }/\mathrm{yr}]$$
(8)
where $`M`$ is the mass of the central black hole and we assumed the pseudo–Newtonian efficiency of accretion equal 1/16 and pure hydrogen opacities.
We show the results for the value of the viscosity parameter $`\alpha =0.3`$ and the mass of the black hole $`M=10\mathrm{M}_{\odot }`$, but we discuss the trends of the solutions as these parameters are varied.
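For reference, Eq. (8) translates these dimensionless accretion rates into physical units as

$$\dot{M}_{\mathrm{Edd}}\simeq 3.5\times 10^{-7}\mathrm{M}_{\odot }\mathrm{yr}^{-1}\mathrm{for}M=10\mathrm{M}_{\odot },\dot{M}_{\mathrm{Edd}}\simeq 3.5\mathrm{M}_{\odot }\mathrm{yr}^{-1}\mathrm{for}M=10^8\mathrm{M}_{\odot },$$

so that, for example, $`\dot{m}=0.01`$ corresponds to about $`3.5\times 10^{-9}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ for a $`10\mathrm{M}_{\odot }`$ black hole.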
### 2.4 Spectra
At each radius, the equations of the structure of the corona determine the electron temperature, the optical depth of the corona and the soft photon flux from the disc. The effect of the Comptonization of the disc flux by the corona is calculated at each radius separately, using semi-analytical formulae of Czerny & Zbyszewska (1991). We neglect here the anisotropy of the Compton scattering which should be taken into account in detailed spectral modeling (e.g. Poutanen & Svensson 1996, Haardt, Maraschi & Ghisellini 1997). However, very accurate computation of the spectra is not the main goal of the present paper.
The final disc spectrum is computed by integration over the disc surface, assuming an inclination angle equal to zero (i.e. top view). This integration procedure is an essential element of our model since the corona properties are radius–dependent: the outer parts of the corona are predominantly responsible for the high energy extension of the spectrum and its hard X-ray slope, while the inner, cooler parts of the corona mostly influence the soft X-ray range by producing a moderately Comptonized disc component.
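A stripped-down version of this radial integration (black-body annuli only, face-on, with the per-annulus Comptonisation and the coronal component omitted, and with an arbitrary temperature normalisation) is:

```python
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10          # cgs

def bb_intensity(nu, T):
    # Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1.
    x = np.clip(h * nu / (k_B * T), None, 500.0)
    return 2 * h * nu**3 / c**2 / np.expm1(x)

def disc_spectrum(nu, radii, T_eff):
    # Specific luminosity of one disc face: L_nu = 2 pi^2 * Int r B_nu(T_eff(r)) dr.
    # In the full model each annulus is additionally Comptonised by the local corona.
    ann = 2 * np.pi**2 * radii * np.gradient(radii)
    return np.sum(ann[None, :] * bb_intensity(nu[:, None], T_eff[None, :]), axis=1)

# Illustrative run: T_eff ~ r^(-3/4) profile with an arbitrary normalisation.
r  = np.geomspace(3e6, 3e8, 100)                    # cm
T  = 1e7 * (r / 3e6) ** -0.75                       # K
nu = np.geomspace(1e15, 1e19, 200)                  # Hz
L_nu = disc_spectrum(nu, r, T)
print(L_nu.max())
```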
All computations are done for a non–rotating black hole and the relativistic corrections are neglected.
## 3 Corona properties for constant $`𝜹`$
In this Section we show results obtained assuming a value of the advection parameter $`\delta `$ (Eq. 5) constant with radius.
### 3.1 The relative strength of the corona
The most important prediction of the model is a strong radial dependence of the strength of the corona. This dependence changes qualitatively with the accretion rate, mainly due to the presence of advection. Examples of radial dependences of the fraction of energy generated in the corona, $`f(r)`$, are shown in Figure 1 for a number of values of $`\delta `$ and $`\dot{m}`$.
Spatial extent of the corona is always finite, independently of $`\dot{m}`$ and $`\delta `$. The corona covers only an inner part of the disc from a certain outer radius $`R_{\mathrm{max}}`$ inwards.
At $`R_{\mathrm{max}}`$ all the energy is liberated in the corona for low accretion rates, if $`\delta \ne 0`$ (and for any $`\dot{m}`$ when $`\delta =0`$, i.e. if there is no advection). Consequently, at $`r=R_{\mathrm{max}}`$ the accretion flow proceeds entirely through the corona. At larger radii no corona solutions of our equations exist since the Compton cooling provided by the disc is too large, under the adopted assumptions about the corona structure. There is therefore a strong and discontinuous change of accretion flow structure at $`R_{\mathrm{max}}`$. For $`r>R_{\mathrm{max}}`$ all the accretion proceeds through the disc whilst at $`r=R_{\mathrm{max}}`$ the accretion proceeds through the corona, with the cold disc heated only by X-ray illumination. We envision that the change occurs through rapid heating and evaporation of the disc surface which takes place in order to maintain the thermal balance condition, but the detailed dynamics of this process is beyond the scope of our present model. Closer in, the relative strength of the corona decreases and, consequently, the relative fraction of the disc accretion increases.
The dependence of the radial extension of the corona, $`R_{\mathrm{max}}`$, for $`\delta =0.75`$ on the accretion rate is shown in Figure 2. The size of the corona is considerable, covering the disc up to $`130R_{\mathrm{Schw}}`$ for an accretion rate approaching $`0.04\dot{M}_{\mathrm{Edd}}`$, but $`R_{\mathrm{max}}`$ decreases significantly for smaller accretion rates, down to about $`10R_{\mathrm{Schw}}`$ for sources radiating at 0.1 per cent of the Eddington luminosity. For accretion rates below $`4\times 10^{-4}\dot{M}_{\mathrm{Edd}}`$ the corona ceases to exist, as the Compton cooling is too strong while heating becomes inefficient (see also Section 5.3).
Although the relative fraction of energy dissipated in the corona increases with radius (up to $`R_{\mathrm{max}}`$) the actual energy flux decreases outwards. We can see that from the simple analytical solutions given in Appendix C of WCZ. Since $`f`$ increases with the radius $`r`$ approximately as $`f\propto r^{3/8}`$, the X-ray flux $`F_\mathrm{X}(r)`$ decreases with radius: $`F_\mathrm{X}(r)\propto r^{-3+3/8}`$. This may be important for detailed computations of relativistic smearing of the X-ray reprocessed component. However, we do not discuss this spectral component in the present paper.
A second branch of solutions appears when the accretion rate $`\dot{m}`$ approaches a certain critical value, $`\dot{m}_{\mathrm{adv}}`$. On this branch the cooling is dominated by advection and $`\dot{m}_{\mathrm{adv}}`$ is a function of $`\delta `$: $`\dot{m}_{\mathrm{adv}}=0.192,0.069,0.044`$ for $`\delta =0.2,0.5,0.75`$, respectively (Figure 1). The exact topology of the two solutions is a rather complex function of $`\delta `$ and $`\dot{m}`$. For $`\dot{m}`$ just above $`\dot{m}_{\mathrm{adv}}`$ the advective solution coexists with the radiative one in a range of radii. The two branches cross for somewhat higher $`\dot{m}`$ and then separate, creating two spatial regions where both solutions can exist, with an intermediate range of $`r`$, where no solution is possible. The outer ring then shrinks rapidly as $`\dot{m}`$ increases and disappears, the inner one does the same but more slowly. Mathematically, the solutions continue to (unphysical) $`f>1`$ region creating closed loops.
For comparison we also plot in Figure 1 the radial dependence of the relative strength of the corona obtained when the advection term was neglected ($`\delta =0`$). Obviously, in this case only one branch of solutions appears. It is similar to the Compton–cooled branches of solutions with $`\delta >0`$ for low accretion rates, $`\dot{m}\lesssim 0.01`$, i.e. the advection is then only a small correction to the energy balance. For larger $`\dot{m}`$ the $`\delta =0`$ solution is quite different from the advection-corrected solutions: the corona extends to much larger radii and no qualitative change with increasing $`\dot{m}`$ is seen. It is clearly the advection term which is responsible for the complex topology of solutions seen in Figure 1.
The merging of the radiatively cooled solution with the advectively cooled solution has been previously found by Chen et al. (1995) and Björnsson et al. (1996) for an optically thin flow, not accompanied by any disc. Therefore, we also constructed an analogous $`\mathrm{log}\dot{m}`$–$`\mathrm{log}\mathrm{\Sigma }`$ diagram for our coronal solution at $`5R_{\mathrm{Schw}}`$ (Figure 3). We plot the coronal surface density $`\mathrm{\Sigma }_\mathrm{c}`$ versus the coronal accretion rate $`\dot{m}_\mathrm{c}=\dot{m}f(r)`$, where the value of the factor $`f(r)`$ depends on the solution branch.
The two solution branches for a given total $`\dot{m}`$ produce two points on this plot, since $`f`$ is different on the two branches. The two solutions merge for $`\dot{m}=0.09`$ (at our assumed radius $`5R_{\mathrm{Schw}}`$, $`\alpha =0.3`$ and $`\delta =0.75`$), resulting in a continuous curve with both the fraction of energy carried by advection and the fraction of energy dissipated within the corona increasing with $`f\dot{m}`$. The uppermost point is characterized by $`f=1`$, i.e. the whole energy is generated within the corona, but advection transports $`\sim 90`$ per cent of the energy rather than 100 per cent. In our model the fully advective branch does not develop, due to the presence of the disc providing soft photons for Compton cooling. Mathematically, the fully-advective branch would appear for $`F_{\mathrm{soft}}=0`$, which would require the unphysical condition $`f=1/[1-\eta (1-a)]>1`$. Thus the necessary presence of the soft photons from the disc suppresses, in our model, the fully advective branch present in other optically thin solutions, e.g. ADAFs (see Section 5.1).
The existence of the advection dominated branch of solutions was not found by WCZ, and the reason for that is given in Section 3.2.2.
The viscosity parameter $`\alpha `$ has a strong influence on the existence and properties of the corona. For $`\alpha `$ lower than our assumed 0.3 the solutions are more limited in $`\dot{m}`$ and radius, the more so the higher the advection coefficient $`\delta `$. For example, for $`\alpha =0.03`$ and $`\delta =0.2`$ only very spatially limited solutions exist. For the same $`\alpha `$ but $`\delta =0.5`$ no solutions exist.
### 3.2 Corona properties
#### 3.2.1 Advection
In Figure 4 we show the ratio of the energy flux advected inwards with the coronal flow to the total dissipated flux for $`\delta =0.75`$. For low accretion rates only the radiatively cooled solution exists and advection is not important. However, when $`\dot{m}`$ approaches $`\dot{m}=0.1`$, up to 40 per cent of the flux on this branch is carried by advection. This fraction depends on the radius.
For high accretion rates ($`\dot{m}\gtrsim 0.04`$) the second solution appears. This solution is cooled mostly by advection and it is similar to ADAF solutions. Since in that case most of the energy (more than 90 per cent) is liberated in the corona, the accretion proceeds mostly through the corona itself.
#### 3.2.2 Ion temperature and the geometry of the corona
The ion temperature decreases almost inversely with radius (see Figure 5 in WCZ and formulae in Appendix C in that paper). The pressure scale height of the corona, defined as
$$H_\mathrm{P}=\left(\frac{kT_\mathrm{i}R^3}{GMm_\mathrm{H}}\right)^{1/2}$$
(9)
increases almost linearly with radius and the ratio $`H_\mathrm{P}/R`$ is almost constant. The coronal accretion flow actually resembles spherical accretion, similarly to the case of pure ADAF flows. In such flows the ion temperature is of the order of the virial temperature and the geometrical thickness of the flow is of the order of $`r`$. The sound velocity is close to the Keplerian velocity and is proportional to the radial velocity, where the proportionality coefficient is given by the viscosity parameter $`\alpha `$. However, our coronal solutions are generally cooler and the ratio $`H_P/R`$, although constant, is not equal to 1. Nevertheless, the dependence of the radial velocity on radius is quite similar to the ADAF case. In Figure 5 we plot the ratio of the radial to sound velocity, computed from the formula:
$$v_\mathrm{r}=\frac{f\dot{M}}{4\pi R\mathrm{\Sigma }_\mathrm{c}}.$$
(10)
We see that, far from the marginally stable orbit, the ratio of the radial to the sound velocity is of the order of $`\alpha `$ and is almost constant throughout the disc. The flow is then moderately subsonic, depending on the value of the viscosity parameter. Close to the marginally stable orbit our coronal flow, like the ADAF solution, becomes transonic and continues as a free fall onto the black hole.
In this paper we use a simplified description of the vertical hydrostatic equilibrium and we have to check afterwards whether the solution can actually be in hydrostatic equilibrium. The approximate criterion is that the ion temperature should be smaller than the local virial temperature. We see from Figure 4 that $`T_\mathrm{i}`$ is usually lower than $`T_{\mathrm{vir}}`$ on the radiative branch, but $`T_\mathrm{i}>T_{\mathrm{vir}}`$ on the advective one. The same problem refers to the corona height as a function of radius. Since $`H_\mathrm{P}/R=\sqrt{T_\mathrm{i}/T_{\mathrm{vir}}}`$, the ratio $`H_\mathrm{P}/R`$ can be larger than 1, i.e. the corona can be (very) geometrically thick. The reason for $`T_\mathrm{i}`$ exceeding $`T_{\mathrm{vir}}`$ can be seen from Equation 6: $`T_\mathrm{i}=(T_{\mathrm{vir}}/\delta )\times (F_{\mathrm{adv}}/F_\mathrm{c})`$, i.e. $`T_\mathrm{i}`$ can approach and exceed $`T_{\mathrm{vir}}`$ if advection is dominant.
The super-virial ion temperature is the reason why advection-dominated solutions were not found by WCZ. In that paper the vertical structure was calculated much more carefully, assuming the hydrostatic equilibrium at the basis of the corona and allowing for the transonic vertical outflow from the corona. These solutions automatically prohibited the violation of the hydrostatic equilibrium at the basis of the corona. In that case the set of solutions for increasing accretion rates simply terminated as soon as the ion temperature reached the virial temperature (see Figure 2 of WCZ and Section 3.1 and 4.1 therein).
The same problem was noted by Narayan & Yi (1994) in the case of pure ADAF solutions and expressed as the problem of the Bernoulli constant being positive for ADAF. Therefore, the model with a hot medium being in hydrostatic equilibrium does not offer correct solution beyond certain value of accretion rate.
#### 3.2.3 Electron temperature and the optical depth
Since the density in the corona decreases outwards the efficiency of Coulomb interaction between the ions and electrons decreases as well. However, both the disc and the corona bolometric luminosities go down rapidly with the radius. Therefore the electron temperature, $`T_\mathrm{e}`$, rises outwards and the ratio of $`T_\mathrm{i}/T_\mathrm{e}`$ decreases outwards. The highest value of $`T_\mathrm{e}`$ is of the order of $`1.5\times 10^9`$ K, and it depends only weakly on $`\dot{m}`$ and the viscosity parameter, $`\alpha `$. Such a universal value is an interesting property of our model for lower accretion rates. At higher accretion rates, however, the outer radius of the corona contracts rapidly due to the advection and the maximum corona temperature also rapidly drops down with an increase of the accretion rate.
The optical depth of the corona is practically independent of radius, and very weakly dependent on other parameters; it is always between 0.1 and 0.2 (see Appendix C in WCZ).
## 4 Solutions with consistent $`𝜹\mathbf{(}𝒓\mathbf{)}`$
### 4.1 Corona structure
The assumption that the advection parameter $`\delta `$ is constant as a function of radius is not correct in general. As can be seen from equation 5, $`\delta `$ is determined by radial derivatives of the ion temperature and pressure, hence it can be a function of radius. Since the topology of solutions depends rather sensitively on $`\delta `$ (Figure 1), we can expect significant changes of the topology if we consistently compute the $`\delta (r)`$ dependence.
As a first step, we show in Figure 6 the coefficient $`\delta `$ computed numerically from the solution obtained for an initial constant $`\delta _0=0.2`$ and $`\dot{m}=0.2`$ (labelled $`\delta _1`$). For these parameters two solution branches exist: with radiative and with advective cooling dominant (cf. Figure 1). On both branches the computed $`\delta `$ has a strong radial dependence. However, as Figure 1 shows again, the radiative branch is present for any $`\delta `$, therefore we do not expect significant changes to its character when the solution with the proper $`\delta (r)`$ is computed. The same is not true for the advective branch: the higher the $`\delta `$, the narrower the range of this branch’s existence in the $`r`$–$`\dot{m}`$ plane. Since the computed $`\delta `$ is now significantly larger than the initial one, we can expect a decrease of the importance of the advective branch.
Proper self-consistent solutions describing the flow can in principle be obtained either by solving radial differential equations containing explicitly the advection term expressed as the derivatives given by equation 5, or by an iterative procedure correcting the distribution of $`\delta (r)`$ at each iteration step. The first method was successfully applied by Chen, Abramowicz & Lasota (1997) and Narayan, Kato & Honma (1997) to obtain global solutions for the ADAF flow. We used the second method, iteratively solving the corona structure equations for a given $`\delta (r)`$ and computing the next approximation to $`\delta (r)`$. Our algorithm is very similar to the method used by Chen (1995): in order to find the solution at a given radius $`r`$, we solve the equations at $`r`$ and two auxiliary radii, $`r-\mathrm{\Delta }r`$ and $`r+\mathrm{\Delta }r`$ (assuming $`\delta (r\pm \mathrm{\Delta }r)=\delta (r)`$, but the solution is insensitive to this particular condition). We then compute the corrected $`\delta `$ from equation 5 and compute the corrected structure. When convergence is achieved, we proceed to the next radius.
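The structure of this scheme is sketched below. The routine `corona_state`, which in the real calculation returns the coronal pressure and ion temperature at a given radius by solving the equations of Section 2, is replaced here by a toy power-law model, so the sketch only illustrates the bookkeeping (central differences at the auxiliary radii, fixed-point iteration with under-relaxation), not the actual solution.

```python
import numpy as np

def corona_state(r, delta):
    # Stand-in for the full corona solution: returns (P, T_i) at radius r
    # (in R_Schw) for the current delta. Toy power laws with a weak,
    # purely illustrative delta dependence.
    T_i = 1e11 / r * (1.0 + 0.05 * delta)
    P   = 1e8 / r**2 * (1.0 + 0.02 * delta)
    return P, T_i

def delta_from_gradients(r, delta, dr_frac=0.05):
    # delta = dlnP/dlnr - (5/2) dlnT_i/dlnr, estimated by central differences
    # at the two auxiliary radii r(1 -/+ dr_frac), as in the iterative scheme.
    rm, rp = r * (1 - dr_frac), r * (1 + dr_frac)
    Pm, Tm = corona_state(rm, delta)
    Pp, Tp = corona_state(rp, delta)
    dlnr = np.log(rp / rm)
    return (np.log(Pp / Pm) - 2.5 * np.log(Tp / Tm)) / dlnr

r, delta = 30.0, 0.75                  # radius and starting guess for delta
for _ in range(50):
    new = delta_from_gradients(r, delta)
    if abs(new - delta) < 1e-6:
        break
    delta = 0.5 * (delta + new)        # under-relaxation for stability
print(delta)                           # ~0.5 for the toy power laws used here
```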
Iterating the solution for the advection-dominated branch quickly leads to its disappearance. The radiative branch shrinks somewhat ($`R_{\mathrm{max}}`$ decreases), especially for $`\dot{m}`$ such that the advected fraction close to $`R_{\mathrm{max}}`$ was substantial in the non-iterated solution. In other words, only the solutions with rather low advective cooling survive.
Figure 6 (points labelled $`\delta _{\mathrm{\infty }}`$) shows an example of the iterated $`\delta (r)`$ dependence. The iterated $`\delta (r)`$ is positive for larger radii, where advection is a cooling process, but it changes sign for smaller radii, as advection becomes locally a heating process. The same trend was obtained for optically thick discs calculated taking into account advection, departure from the Keplerian rotation and the transonic character of the flow close to the marginally stable orbit (Muchotrzeb & Paczyński 1982; Abramowicz et al. 1988). The heating role of advection increases dramatically close to the marginally stable orbit since the energy dissipation there approaches zero.
In Figure 7 we show examples of the converged solutions of the corona structure as functions of $`\dot{m}`$ and $`\alpha `$. The corona is strongest at its outer edge, although $`f`$ is not always 1 at $`r=R_{\mathrm{max}}`$. Similarly to the solutions with constant $`\delta `$, the electron temperature is $`\sim 100`$ keV while the optical thickness is $`\sim 0.15`$. Advection is never important as a cooling process. At small radii, $`r\lesssim 20R_{\mathrm{Schw}}`$, $`\delta `$ changes sign, so the advective flux contributes to heating. In this region the solutions exist for $`\dot{m}\gtrsim 4\times 10^{-4}`$, and up to $`\dot{m}_\mathrm{c}\simeq 1`$, i.e. the Eddington luminosity in the corona. The topology of solutions represented on the $`\dot{m}`$–$`\mathrm{\Sigma }`$ diagram changes as well (Figure 3): the solution forms a single branch only.
The solutions are sensitive to the value of the viscosity parameter $`\alpha `$. For low $`\alpha `$ the solutions are generally rather limited spatially. Moreover, for $`\alpha \lesssim 0.1`$ the ion temperature strongly exceeds the virial temperature, so the solutions can hardly be considered physically acceptable.
The value of the mass of the central black hole influences the results only very slightly. Solutions for $`M=10^8\mathrm{M}_{\odot }`$ practically overlap those for $`M=10\mathrm{M}_{\odot }`$. This is to be expected (see e.g. Björnsson & Svensson 1992 and references therein), since the only dependence on the central mass in our model is through the temperature of the soft flux, which can affect the Compton amplification factor, but the dependence $`A(T_0)`$ is very weak for steep (soft) spectra.
### 4.2 Radiation spectra
Spectra predicted by the model are rather soft and dominated by the disc emission. Examples are plotted in Figure 8 for parameters characteristic for AGN and GBH. Generally, the power law components are harder for lower $`\dot{m}`$.
For accretion rates of order of $`0.05\dot{M}_{\mathrm{Edd}}`$, typically expected in Seyfert galaxies, the amount of hard X-ray emission predicted by the model is negligible and the high energy index far too steep, so the present model does not offer a promising description. On the other hand, the original model of WCZ (with neglected advection) reproduced well the typical properties of AGN (see Czerny et al. 1997 for radio quiet quasars and Seyfert galaxies, and Kuraszkiewicz et al. 1999 for NLS1 galaxies; see also the corona parameters determined for MCG+8-11-11 by Grandi et al. 1998).
For the Galactic black holes the general trends are similar to those for AGN. For very low accretion rates ($`\dot{m}\lesssim 0.01`$) the model predicts considerable hard X-ray emission from the corona, although the spectra are somewhat steeper than those resulting from the original model by WCZ (Janiuk & Czerny 1999). Very steep spectra are predicted for $`\dot{m}\gtrsim 0.1`$, i.e. corresponding to the high or very high state of stellar black hole systems, while $`\mathrm{\Gamma }\simeq 2`$ in the observed spectra of e.g. soft X-ray transients (Życki, Done & Smith 1998).
## 5 Discussion
In this paper we considered the accreting corona model. Main features of the model and differences from other hot accretion flow solutions are, firstly, the presence of a cool, optically thick disc supplying soft photons for Comptonization in the corona, and secondly, the condition for (vertical) stratification of the flow into the two phases due to thermal instability. Since in this model the accreting coronal plasma is hot, optically thin and geometrically thick, it is necessary to include the radial energy transport (advection), similarly to the optically thin ADAF solutions and optically thick, slim discs.
The structure of solutions crucially depends on whether the advective flux is solved for consistently – i.e. the coefficient $`\delta `$ (Eq. 5) is a function of radius – or $`\delta (r)`$ is assumed constant, although there are certain properties of the solutions independent of the $`\delta `$-prescription.
For a constant $`\delta `$, at a given radius (smaller than a certain maximum value) there is either one solution – radiatively cooled through Compton process – or two solutions, one radiatively cooled and the other advection-dominated. For radii larger than the maximum one no coronal solutions exist. Whenever two solutions are possible, there is always a critical value of the accretion rate for which the two solutions merge and no solutions are found for higher accretion rates.
A similar effect is seen when studying the radial structure of coronal solutions. At very low accretion rates (below $`\dot{m}\simeq 4\times 10^{-4}`$) there are no coronal solutions. At larger $`\dot{m}`$ the radiatively cooled solution appears, which covers an inner part of the disc. This solution was previously found by WCZ. The fraction of the disc covered by the corona increases with the accretion rate but the fraction of radiation emitted by the corona decreases. At even higher $`\dot{m}`$ (above $`\dot{m}\simeq 0.04`$, depending on $`\delta `$) a second, advection–dominated solution emerges, if the corona structure equations allow for the ion temperature to be higher than the virial temperature. The two solutions merge at the outer edge of the corona. With further increase of the accretion rate, the region covered by the corona contracts rapidly, with no corona present for $`\dot{m}\gtrsim 0.1`$ (for the viscosity parameter $`\alpha =0.3`$). Smaller $`\alpha `$ leads to solution merger for even lower accretion rates, as found previously by e.g. Abramowicz et al. (1995) and Björnsson et al. (1996).
The existence of the advection-dominated solutions was automatically excluded in the original formulation of the model by WCZ – instead, they simply observed the disappearance of the single, radiation cooled solution with an increase of the accretion rate due to the ion temperature reaching the virial value.
When the $`\delta (r)`$ function is solved for, we obtain only one solution branch. It is cooled by radiation, i.e. with advective cooling negligible. In fact, advection changes sign for $`r\lesssim 20R_{\mathrm{Schw}}`$, i.e. it acts as a heating process there. The properties of this branch are similar to those of the radiatively cooled branch obtained for a constant $`\delta `$: the solutions exist for $`\dot{m}>4\times 10^{-4}`$, and the corona is strongest at its outer edge. For radii such that advection is a heating process there, this solution can exist up to rather high $`\dot{m}`$, even formally exceeding $`\dot{M}_{\mathrm{Edd}}`$ in the corona. It disappears only when the temperature of the disc flux reaches the (decreasing) electron temperature in the corona, but it is strongly super-virial already for lower $`\dot{m}`$.
### 5.1 Comparison with other hot, optically thin solutions
Our solutions show general similarity to other hot, optically thin disc solutions (SLE; Abramowicz et al. 1995; Chen et al. 1995; Björnsson et al. 1996; Narayan & Yi 1995; Zdziarski 1998; see Kato et al. 1998 for general discussion). There are also, however, certain important differences, as a direct consequence of the assumptions specific to our model.
When plotted on the $`\dot{m}`$–$`\mathrm{\Sigma }`$ diagram (Figure 3), our solutions for constant $`\delta `$ form two branches, merging at a certain maximum $`\dot{m}`$. Both the existence of the two branches and their merging were found previously. However, as opposed to ’conventional’ ADAF solutions, the strongly advection-dominated branch does not appear in our model, due to substantial Compton cooling. For the same reason our solutions disappear for lower viscosity, $`\alpha \lesssim 0.03`$, while no such effect is observed in the above mentioned solutions, where the adopted cooling mechanism can usually be made sufficiently inefficient (see below).
We obtain an increase of the maximum allowed coronal $`\dot{m}`$, when we solve for the advective flux (i.e. $`\delta (r)`$). This has also been found previously by Chen (1995). Again, however, the advection-dominated branch does not appear in our solution, and the solution disappears for low viscosity. None of these effects has appeared in the Chen (1995) work.
Hot discs cooled by Comptonization of external soft photons were considered by Zdziarski (1998), as a generalization of the SLE solution. However, the source of soft photons was not specified in that work, i.e. the soft flux could implicitly be assumed to be small, if required. Therefore the general properties of the solutions found were in close agreement with the previous solutions where bremsstrahlung was usually assumed as the radiative cooling process. For example, the strongly advective branch is present in the Zdziarski (1998) solution, in spite of the efficient radiative cooling. The Compton parameter $`y\equiv (4kT_\mathrm{e}/m_\mathrm{e}c^2)\tau `$ is $`\sim 1`$ in that work (it is $`\sim 0.1`$ in our solutions), which indeed requires very low $`F_{\mathrm{soft}}`$ in order not to suppress the advectively-cooled branch.
Where an ADAF-like disc is assumed to co-exist with a cold, optically thick disc (e.g. Esin et al. 1997), the transition radius is an adjustable parameter. In our work the additional equation (Eq. 1) resulting from the thermal instability condition closes the structure equations, thus enabling us to compute the transition radius.
### 5.2 Super-virial ion temperature
The ion temperature in an optically thin ADAF flow tends to be larger than the local virial temperature and the Bernoulli constant for such a flow is positive. Therefore such solutions are not possible without some kind of outflow (e.g. Narayan et al. 1997, Blandford & Begelman 1999).
The same problem affects the advection dominated branch of our coronal solutions and the iterated solutions obtained for low $`\alpha `$. Such a corona violates the assumption of the hydrostatic equilibrium. Coronal gas with ion temperature exceeding the local virial temperature cannot flow in, as assumed in our model or ADAF solutions. If strong outflow indeed developed, no accretion would take place, switching off the energy source. A moderate transonic outflow from the corona surface perpendicular to the disc does not provide a solution. It was already included in the original formulation of the model by WCZ and it did not prevent the disappearance of coronal solutions for accretion rates higher than $`\dot{m}\simeq 0.1`$. A magnetic wind can provide a solution, provided that the outflow launched at a certain radius carries away more angular momentum than what is locally necessary for accretion to proceed (Blandford & Begelman 1999). However, at this stage such solutions are rather arbitrary.
### 5.3 Corona formation at its outer edge
Our model does not provide a mechanism for disc evaporation. Instead, it allows us to check whether the corona, if formed, can exist in hydrostatic and thermal equilibrium. The outer edge of our corona is therefore the maximum radius at which those conditions are satisfied. Since we did not consider two-dimensional flow, our transition from a bare disc to a coronal solution is sharp. The transition would become smooth if the radial conduction were included (e.g. Honma 1996), but this would require two-dimensional computations, since the radial thickness of the transition zone is expected to be the same as the vertical thickness (see discussion by Dullemond 1999). The coexistence of the bare disc and the disc/corona at $`R_{\mathrm{max}}`$ does not contradict the analysis of Dullemond & Turolla (1999) since the coronal part is not strongly advection-dominated, with approximately half of the energy (or less) carried by advection and the remaining energy radiated away locally, as in SLE.
The mechanism leading to corona formation is still unspecified. It may be related either to disc instabilities, or magnetic phenomena. The coronal solutions are generally within the standard disc radiation pressure instability zone (e.g. Janiuk & Czerny 1999), but this does not seem a strong argument in favour of the first possibility.
The transition radius, $`R_{\mathrm{max}}`$, in our models is generally rather smaller than the outer radius of the ADAF flow in typical applications, which is assumed to be $`\sim 10^4R_{\mathrm{Schw}}`$ (e.g. Esin et al. 1997). Since our description of the flow applies in principle to the outer part of an ADAF solution, where the hot flow and the cold disc flow are assumed to overlap (Esin et al. 1997), it may mean that large ADAF radii would also be difficult to achieve if the disc/corona coupling were correctly included. However, our result should rather be treated as an indication of a possible problem than as a definitive answer. The model depends rather sensitively on the adopted description of the disc/corona coupling, expressed through Equation 2, as was shown for a non-advective corona by Janiuk & Czerny 2000. A simple change of the bremsstrahlung contribution from 2/3 (our assumption) to 3/7 (Krolik 1998) would only change the results quantitatively, by radially expanding the region of coronal solutions but lowering the optical depth of the corona. However, a better description of the transition region, with disc evaporation included, may change the results more significantly. It may also provide an explanation of the formation of the ADAF part. Unfortunately, although some preliminary, partial results are available (e.g. Meyer & Meyer-Hofmeister 1994, Dullemond 1999, Różańska & Czerny 1999), they are still not directly applicable to the models and further research is needed. It may also be true that the radial extension of the overlapping region will turn out to be small, as suggested by Dullemond (1999), thus further complicating the prediction of the location of the disc/ADAF transition.
## 6 Conclusions
* The advection-dominated branch of coronal solutions does not represent a physically acceptable description of the flow.
* Accreting corona solutions are predominantly Compton-cooled and, for small radii, exist for all accretion rates larger than $`4\times 10^{-4}\dot{M}_{\mathrm{Edd}}`$.
* Spectral slopes predicted by accreting corona models for AGN and GBH are too steep in comparison with observations, so the disruption of the innermost part of the disc or a magnetically driven outflow seems to be required.
## Acknowledgements
This work was supported in part by grant 2P03D01816 of the Polish State Committee for Scientific Research.
This paper has been processed by the authors using the Blackwell Scientific Publications style file.
# Field quantization by means of a single harmonic oscillator
## I Harmonic oscillator in superposition of frequencies
The standard quantization of a harmonic oscillator is based on quantization of $`p`$ and $`q`$, while $`\omega `$ is a parameter. To have, say, two different frequencies one has to consider two independent oscillators. On the other hand, it is evident that there exist oscillators which are in a superposition of different frequencies. An example is an oscillator wave packet associated with a distribution of center-of-mass momenta.
This simple observation raises the question of the role of superpositions of frequencies for a description of a single harmonic oscillator. We know that frequency is typically associated with an eigenvalue of some Hamiltonian or, which is basically the same, with boundary conditions. A natural way of incorporating different frequencies into a single harmonic oscillator is by means of the frequency operator
$`\mathrm{\Omega }={\displaystyle \underset{\omega _k,j_k}{}}\omega _k|\omega _k,j_k\omega _k,j_k|`$ (1)
where all $`\omega _k\geq 0`$. For simplicity we have limited the discussion to the discrete spectrum but it is useful to include from the outset the possibility of degeneracies. The corresponding Hamiltonian is defined by
$`H`$ $`=`$ $`\mathrm{}\mathrm{\Omega }{\displaystyle \frac{1}{2}}\left(a^{}a+aa^{}\right)`$ (2)
where $`a=_{n=0}^{\mathrm{}}\sqrt{n+1}|nn+1|`$. The eigenstates of $`H`$ are $`|\omega _k,j_k,n`$ and satisfy
$`H|\omega _k,j_k,n=\mathrm{}\omega _k\left(n+{\displaystyle \frac{1}{2}}\right)|\omega _k,j_k,n.`$ (3)
The standard case of the oscillator whose frequency is just $`\omega `$ corresponds either to $`\mathrm{\Omega }=\omega \mathrm{𝟏}`$ or to the subspace spanned by $`|\omega _k,j_k,n`$ with fixed $`\omega _k=\omega `$. Introducing the operators
$`a_{\omega _k,j_k}=|\omega _k,j_k\omega _k,j_k|a`$ (4)
we find that
$`H`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{\omega _k,j_k}{}}\mathrm{}\omega _k\left(a_{\omega _k,j_k}^{}a_{\omega _k,j_k}+a_{\omega _k,j_k}a_{\omega _k,j_k}^{}\right).`$ (5)
The algebra of the oscillator is
$`[a_{\omega _k,j_k},a_{\omega _l,j_l}^{}]`$ $`=`$ $`\delta _{\omega _k\omega _l}\delta _{j_kj_l}|\omega _k,j_k\omega _k,j_k|\mathrm{𝟏}`$ (6)
$`a_{\omega _k,j_k}a_{\omega _l,j_l}`$ $`=`$ $`\delta _{\omega _k\omega _l}\delta _{j_kj_l}(a_{\omega _k,j_k})^2`$ (7)
$`a_{\omega _k,j_k}^{}a_{\omega _l,j_l}^{}`$ $`=`$ $`\delta _{\omega _k\omega _l}\delta _{j_kj_l}(a_{\omega _k,j_k}^{})^2.`$ (8)
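The algebra (6)–(8) is easy to verify on a finite-dimensional truncation. The following sketch (ours, not part of the original construction) assumes just two frequencies, drops the degeneracy label, and truncates the Fock space at $`n_{\mathrm{max}}`$, so relation (6) holds exactly only away from the highest Fock state:

```python
import numpy as np

omegas = [1.0, 2.5]          # an arbitrary two-frequency example
n_max = 30                   # Fock-space truncation (an artifact of the sketch)
hbar = 1.0

# annihilation operator a = sum_n sqrt(n+1) |n><n+1| on the truncated Fock space
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)

def proj(k):
    """Projector |omega_k><omega_k| on the frequency label."""
    p = np.zeros((len(omegas), len(omegas)))
    p[k, k] = 1.0
    return p

# a_{omega_k} = |omega_k><omega_k| (x) a, cf. Eq. (4)
a_k = [np.kron(proj(k), a) for k in range(len(omegas))]

# relations (7) and (8): products with different frequency labels vanish
print(np.allclose(a_k[0] @ a_k[1], 0.0), np.allclose(a_k[0].T @ a_k[1].T, 0.0))

# relation (6), checked away from the truncation edge (the last Fock level)
comm = a_k[0] @ a_k[0].T - a_k[0].T @ a_k[0]
expected = np.kron(proj(0), np.eye(n_max))
keep = [k * n_max + n for k in range(len(omegas)) for n in range(n_max - 1)]
print(np.allclose(comm[np.ix_(keep, keep)], expected[np.ix_(keep, keep)]))

# Hamiltonian (5) and its spectrum hbar*omega_k*(n + 1/2), cf. Eq. (3)
N_half = np.diag(np.arange(n_max) + 0.5)   # untruncated values of (a^+a + aa^+)/2
H = hbar * np.kron(np.diag(omegas), N_half)
print(np.sort(np.linalg.eigvalsh(H))[:5])  # 0.5, 1.25, 1.5, 2.5, 3.5
```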
The dynamics in the Schrödinger picture is given by
$`i\mathrm{}_t|\mathrm{\Psi }`$ $`=`$ $`H|\mathrm{\Psi }=\mathrm{}\mathrm{\Omega }\left(a^{}a+{\displaystyle \frac{1}{2}}\mathrm{𝟏}\right)|\mathrm{\Psi }.`$ (9)
In the Heisenberg picture we obtain the important formula
$`a_{\omega _k,j_k}(t)`$ $`=`$ $`e^{iHt/\mathrm{}}a_{\omega _k,j_k}e^{-iHt/\mathrm{}}`$ (10)
$`=`$ $`|\omega _k,j_k\omega _k,j_k|e^{-i\omega _kt}a=e^{-i\omega _kt}a_{\omega _k,j_k}(0).`$ (11)
Taking a general state
$`|\psi ={\displaystyle \underset{\omega _k,j_k,n}{}}\psi (\omega _k,j_k,n)|\omega _k,j_k|n`$ (12)
we find that the average energy of the oscillator is
$`H=\psi |H|\psi ={\displaystyle \underset{\omega _k,j_k,n}{}}|\psi (\omega _k,j_k,n)|^2\mathrm{}\omega _k\left(n+{\displaystyle \frac{1}{2}}\right).`$ (13)
The average clearly looks like the average energy of an ensemble of different and independent oscillators. The ground state of the ensemble, i.e. the one with $`\psi (\omega _k,j_k,n>0)=0`$, has energy
$`H={\displaystyle \frac{1}{2}}{\displaystyle \underset{\omega _k,j_k}{}}|\psi (\omega _k,j_k,0)|^2\mathrm{}\omega _k<\mathrm{}.`$ (14)
The result is not surprising but still quite remarkable if one thinks of the problem of field quantization.
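A one-line numerical illustration of this point (the weights below are an arbitrary normalizable choice, not taken from the text): for a normalized superposition of many frequencies the ground-state energy (14) stays of order $`\mathrm{}\omega /2`$, while the sum of the individual zero-point energies grows without bound as modes are added.

```python
import numpy as np

hbar = 1.0
omegas = np.arange(1, 2001, dtype=float)     # 2000 modes, omega_k = 1, 2, ...
p = 1.0 / omegas**2                          # arbitrary weights |psi(omega_k, j_k, 0)|^2
p /= p.sum()                                 # normalization, sum_k p_k = 1

E_ground = 0.5 * hbar * np.sum(p * omegas)   # Eq. (14): ~2.5, finite
E_naive = 0.5 * hbar * np.sum(omegas)        # sum of zero-point terms: ~1.0e6
print(E_ground, E_naive)
```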
The very idea of quantizing the electromagnetic field, as put forward by Born, Heisenberg, Jordan and Dirac , is based on the observation that the mode decomposition of the electromagnetic energy is analogous to the energy of an ensemble of independent harmonic oscillators. In 1925, after the work of Heisenberg, it was clear what to do: One had to replace each classical oscillator by a quantum one. But since each oscillator had a definite frequency, to have an infinite number of different frequencies one needed an infinite number of oscillators. The price one paid for this assumption was the infinite energy of the electromagnetic vacuum.
The infinity is regarded as an “easy” one since one can get rid of it by redefining the Hamiltonian and removing the infinite term. The result looks correct and many properties typical of a quantum harmonic oscillator are indeed observed in the electromagnetic field. However, once we remove the infinite term by the procedure of “normal reordering” the resulting Hamiltonian is no longer physically equivalent to the one of the harmonic oscillators. For a single oscillator we can indeed add any finite number and the new Hamiltonian will describe the same physics. But having two or more such oscillators we cannot remove the ground state energies by a single shift of energy: Each oscillator has to be shifted by a different number and, accordingly, we change the energy differences between the levels of the global Hamiltonian describing the multi-oscillator system. And this is not just “shifting the origin of the energy scale”. Alternatively, one can add up all the ground state corrections and remove the overall energy shift by a different choice of the origin of the energy scale. This would have been acceptable if the shift were finite. Subtraction of infinite terms is in mathematics as forbidden as division by zero. (Example: $`1+\mathrm{\infty }=2+\mathrm{\infty }\Rightarrow 1=2`$ is as justified as $`1\cdot 0=2\cdot 0\Rightarrow 1=2`$.)
The oscillator which can exist in superpositions of different frequencies is a natural candidate as a starting point for Dirac-type field quantization. We do not need to remove the ground state energy since in the Hilbert space of physical states the correction is finite. The question we have to understand is whether one can obtain the well known quantum properties of the radiation field by this type of quantization.
## II Field operators: Free Maxwell fields
The energy and momentum operators of the field are defined in analogy to $`H`$ from the previous section
$`H`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}\mathrm{}\omega _\lambda |s,\stackrel{}{\kappa }_\lambda s,\stackrel{}{\kappa }_\lambda |{\displaystyle \frac{1}{2}}\left(a^{}a+aa^{}\right)`$ (15)
$`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{s,\kappa _\lambda }{}}\mathrm{}\omega _\lambda \left(a_{s,\kappa _\lambda }^{}a_{s,\kappa _\lambda }+a_{s,\kappa _\lambda }a_{s,\kappa _\lambda }^{}\right)`$ (16)
$`\stackrel{}{P}`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}\mathrm{}\stackrel{}{\kappa }_\lambda |s,\stackrel{}{\kappa }_\lambda s,\stackrel{}{\kappa }_\lambda |{\displaystyle \frac{1}{2}}\left(a^{}a+aa^{}\right)`$ (17)
$`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{s,\kappa _\lambda }{}}\mathrm{}\stackrel{}{\kappa }_\lambda \left(a_{s,\kappa _\lambda }^{}a_{s,\kappa _\lambda }+a_{s,\kappa _\lambda }a_{s,\kappa _\lambda }^{}\right)`$ (18)
where $`s=\pm 1`$ corresponds to circular polarizations. Denote $`P=(H/c,\stackrel{}{P})`$ and $`Px=Ht-\stackrel{}{P}\stackrel{}{x}`$. We employ the standard Dirac-type definitions for mode quantization in volume $`V`$
$`\widehat{\stackrel{}{A}}(t,\stackrel{}{x})`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}\sqrt{{\displaystyle \frac{\mathrm{}}{2\omega _\lambda V}}}\left(a_{s,\kappa _\lambda }e^{i\omega _\lambda t}\stackrel{}{e}_{s,\kappa _\lambda }e^{i\stackrel{}{\kappa }_\lambda \stackrel{}{x}}+a_{s,\kappa _\lambda }^{}e^{i\omega _\lambda t}\stackrel{}{e}_{s,\kappa _\lambda }^{}e^{i\stackrel{}{\kappa }_\lambda \stackrel{}{x}}\right)`$ (19)
$`=`$ $`e^{iPx/\mathrm{}}\widehat{\stackrel{}{A}}e^{iPx/\mathrm{}}`$ (20)
$`\widehat{\stackrel{}{E}}(t,\stackrel{}{x})`$ $`=`$ $`i{\displaystyle \underset{s,\kappa _\lambda }{}}\sqrt{{\displaystyle \frac{\mathrm{}\omega _\lambda }{2V}}}\left(a_{s,\kappa _\lambda }e^{i\omega _\lambda t}e^{i\stackrel{}{\kappa }_\lambda \stackrel{}{x}}\stackrel{}{e}_{s,\kappa _\lambda }a_{s,\kappa _\lambda }^{}e^{i\omega _\lambda t}e^{i\stackrel{}{\kappa }_\lambda \stackrel{}{x}}\stackrel{}{e}_{s,\kappa _\lambda }^{}\right)`$ (21)
$`=`$ $`e^{iPx/\mathrm{}}\widehat{\stackrel{}{E}}e^{iPx/\mathrm{}}`$ (22)
$`\widehat{\stackrel{}{B}}(t,\stackrel{}{x})`$ $`=`$ $`i{\displaystyle \underset{s,\kappa _\lambda }{}}\sqrt{{\displaystyle \frac{\mathrm{}\omega _\lambda }{2V}}}\stackrel{}{n}_{\kappa _\lambda }\times \left(a_{s,\kappa _\lambda }e^{i\omega _\lambda t}e^{i\stackrel{}{\kappa }_\lambda \stackrel{}{x}}\stackrel{}{e}_{s,\kappa _\lambda }a_{s,\kappa _\lambda }^{}e^{i\omega _\lambda t}e^{i\stackrel{}{\kappa }_\lambda \stackrel{}{x}}\stackrel{}{e}_{s,\kappa _\lambda }^{}\right)`$ (24)
$`=`$ $`e^{iPx/\mathrm{}}\widehat{\stackrel{}{B}}e^{iPx/\mathrm{}}.`$ (25)
Now take a state (say, in the Heisenberg picture)
$`|\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \underset{s,\stackrel{}{\kappa }_\lambda ,n}{}}\mathrm{\Psi }_{s,\stackrel{}{\kappa }_\lambda ,n}|s,\stackrel{}{\kappa }_\lambda ,n`$ (26)
$`=`$ $`{\displaystyle \underset{s,\stackrel{}{\kappa }_\lambda }{}}\mathrm{\Phi }_{s,\stackrel{}{\kappa }_\lambda }|s,\stackrel{}{\kappa }_\lambda |\alpha _{s,\stackrel{}{\kappa }_\lambda }`$ (27)
where $`|\alpha _{s,\stackrel{}{\kappa }_\lambda }`$ form a family of coherent states:
$`a|\alpha _{s,\stackrel{}{\kappa }_\lambda }=\alpha _{s,\stackrel{}{\kappa }_\lambda }|\alpha _{s,\stackrel{}{\kappa }_\lambda }`$ (28)
The averages of the field operators are
$`\mathrm{\Psi }|\widehat{\stackrel{}{A}}(t,\stackrel{}{x})|\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}|\mathrm{\Phi }_{s,\stackrel{}{\kappa }_\lambda }|^2\sqrt{{\displaystyle \frac{\mathrm{}}{2\omega _\lambda V}}}\left(\alpha _{s,\kappa _\lambda }e^{i\kappa _\lambda x}\stackrel{}{e}_{s,\kappa _\lambda }+\alpha _{s,\kappa _\lambda }^{}e^{i\kappa _\lambda x}\stackrel{}{e}_{s,\kappa _\lambda }^{}\right)`$ (29)
$`\mathrm{\Psi }|\widehat{\stackrel{}{E}}(t,\stackrel{}{x})|\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}|\mathrm{\Phi }_{s,\stackrel{}{\kappa }_\lambda }|^2\sqrt{{\displaystyle \frac{\mathrm{}\omega _\lambda }{2V}}}\left(\alpha _{s,\kappa _\lambda }(0)e^{i\kappa _\lambda x}\stackrel{}{e}_{s,\kappa _\lambda }\alpha _{s,\kappa _\lambda }^{}(0)e^{i\kappa _\lambda x}\stackrel{}{e}_{s,\kappa _\lambda }^{}\right)`$ (30)
$`\mathrm{\Psi }|\widehat{\stackrel{}{B}}(t,\stackrel{}{x})|\mathrm{\Psi }`$ $`=`$ $`i{\displaystyle \underset{s,\kappa _\lambda }{}}|\mathrm{\Phi }_{s,\stackrel{}{\kappa }_\lambda }|^2\sqrt{{\displaystyle \frac{\mathrm{}\omega _\lambda }{2V}}}\left(\alpha _{s,\kappa _\lambda }e^{i\kappa _\lambda x}\stackrel{}{n}_{\kappa _\lambda }\times \stackrel{}{e}_{s,\kappa _\lambda }\alpha _{s,\kappa _\lambda }^{}e^{i\kappa _\lambda x}\stackrel{}{n}_{\kappa _\lambda }\times \stackrel{}{e}_{s,\kappa _\lambda }^{}\right)`$ (31)
These are just the classical fields. More precisely, the fields look like averages of monochromatic coherent states with probabilities $`|\mathrm{\Phi }_{s,\stackrel{}{\kappa }_\lambda }|^2`$. The energy-momentum operators satisfy also the standard relations
$`H`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int _V}d^3x\left(\widehat{\stackrel{}{E}}(t,\stackrel{}{x})\widehat{\stackrel{}{E}}(t,\stackrel{}{x})+\widehat{\stackrel{}{B}}(t,\stackrel{}{x})\widehat{\stackrel{}{B}}(t,\stackrel{}{x})\right),`$ (32)
$`\stackrel{}{P}`$ $`=`$ $`{\displaystyle \int _V}d^3x\widehat{\stackrel{}{E}}(t,\stackrel{}{x})\times \widehat{\stackrel{}{B}}(t,\stackrel{}{x}).`$ (33)
It should be stressed, however, that these relations have a completely different mathematical origin than in the usual formalism where the integrals are necessary in order to make plane waves into an orthonormal basis. Here orthogonality follows from the presence of the projectors in the definition of $`a_{s,\kappa _\lambda }`$ and the integration in itself is trivial since
$`\widehat{\stackrel{}{E}}(t,\stackrel{}{x})\widehat{\stackrel{}{E}}(t,\stackrel{}{x})+\widehat{\stackrel{}{B}}(t,\stackrel{}{x})\widehat{\stackrel{}{B}}(t,\stackrel{}{x})`$ $`=`$ $`\widehat{\stackrel{}{E}}\widehat{\stackrel{}{E}}+\widehat{\stackrel{}{B}}\widehat{\stackrel{}{B}}`$ (34)
$`\widehat{\stackrel{}{E}}(t,\stackrel{}{x})\times \widehat{\stackrel{}{B}}(t,\stackrel{}{x})`$ $`=`$ $`\widehat{\stackrel{}{E}}\times \widehat{\stackrel{}{B}}.`$ (35)
Therefore the role of the integral is simply to produce the factor $`V`$ which cancels with $`1/V`$ arising from the term $`1/\sqrt{V}`$ occurring in the mode decomposition of the fields. To end this section let us note that
$`\mathrm{\Psi }|H|\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}\mathrm{}\omega _\lambda |\mathrm{\Phi }_{s,\kappa _\lambda }|^2\left(|\alpha _{s,\kappa _\lambda }|^2+{\displaystyle \frac{1}{2}}\right)`$ (36)
$`\mathrm{\Psi }|\stackrel{}{P}|\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \underset{s,\kappa _\lambda }{}}\mathrm{}\stackrel{}{\kappa }_\lambda |\mathrm{\Phi }_{s,\kappa _\lambda }|^2\left(|\alpha _{s,\kappa _\lambda }|^2+{\displaystyle \frac{1}{2}}\right).`$ (37)
The contribution from the vacuum fluctuations is nonzero but finite.
## III Spontaneous and stimulated emission
The next test we have to perform is to check the examples that were responsible for the success of Dirac’s quantization in atomic physics. It is clear that no differences are expected to occur for single-mode problems such as the Jaynes-Cummings model. In what follows we will therefore concentrate on spontaneous and stimulated emission from two-level atoms.
Beginning with the dipole and rotating wave approximations we arrive at the Hamiltonian
$`H`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{}\omega _0\sigma _3+{\displaystyle \frac{1}{2}}{\displaystyle \underset{s,\stackrel{}{\kappa }_\lambda }{}}\mathrm{}\omega _\lambda \left(a_{s,\stackrel{}{\kappa }_\lambda }^{}a_{s,\stackrel{}{\kappa }_\lambda }+a_{s,\stackrel{}{\kappa }_\lambda }a_{s,\stackrel{}{\kappa }_\lambda }^{}\right)+\mathrm{}\omega _0d{\displaystyle \underset{s,\stackrel{}{\kappa }_\lambda }{}}\left(g_{s,\stackrel{}{\kappa }_\lambda }a_{s,\stackrel{}{\kappa }_\lambda }\sigma _++g_{s,\stackrel{}{\kappa }_\lambda }^{}a_{s,\stackrel{}{\kappa }_\lambda }^{}\sigma _{}\right)`$ (38)
where $`d\stackrel{}{u}=\langle +|\widehat{\stackrel{}{d}}|-\rangle `$ is the matrix element of the dipole moment evaluated between the excited and ground states, and $`g_{s,\stackrel{}{\kappa }_\lambda }=i\sqrt{\frac{1}{2\mathrm{}\omega _\lambda V}}\stackrel{}{e}_{s,\stackrel{}{\kappa }_\lambda }\stackrel{}{u}`$. The Hamiltonian represents a two-level atom located at $`\stackrel{}{x}_0=0`$.
The Hamiltonian in the interaction picture has the well known form
$`H_I`$ $`=`$ $`\mathrm{}\omega _0d{\displaystyle \underset{s,\stackrel{}{\kappa }_\lambda }{}}\left(g_{s,\stackrel{}{\kappa }_\lambda }e^{i(\omega _0-\omega _\lambda )t}a_{s,\stackrel{}{\kappa }_\lambda }\sigma _++g_{s,\stackrel{}{\kappa }_\lambda }^{}e^{-i(\omega _0-\omega _\lambda )t}a_{s,\stackrel{}{\kappa }_\lambda }^{}\sigma _{}\right).`$ (39)
Consider the initial state
$`|\mathrm{\Psi }(0)`$ $`=`$ $`{\displaystyle \underset{s^{},\stackrel{}{\kappa }_\lambda ^{},m}{}}\mathrm{\Psi }_{s^{},\stackrel{}{\kappa }_\lambda ^{},m}|s^{},\stackrel{}{\kappa }_\lambda ^{},m,+`$ (40)
$`=`$ $`{\displaystyle \underset{s^{},\stackrel{}{\kappa }_0^{}}{}}\mathrm{\Psi }_{s^{},\stackrel{}{\kappa }_0^{},0}|s^{},\stackrel{}{\kappa }_0^{},0,++{\displaystyle \underset{s^{},\stackrel{}{\kappa }_n^{}}{}}\mathrm{\Psi }_{s^{},\stackrel{}{\kappa }_n^{},n}|s^{},\stackrel{}{\kappa }_n^{},n,+.`$ (41)
The states corresponding to $`n=0`$ play the role of a vacuum. As a consequence the vacuum is not represented here by a unique vector, but rather by a subspace of the Hilbert space of states. It is also clear that the energy of this vacuum may be nonzero since no normal ordering of observables is necessary.
Using the first-order time-dependent perturbative expansion we arrive at
$`|\mathrm{\Psi }(t)`$ $`=`$ $`|\mathrm{\Psi }(0)`$ (42)
$`+\omega _0d{\displaystyle \underset{s,\stackrel{}{\kappa }_0}{}}{\displaystyle \frac{e^{i(\omega _0-\omega _{\lambda _0})t}-1}{\omega _0-\omega _{\lambda _0}}}\mathrm{\Psi }_{s,\stackrel{}{\kappa }_{\lambda _0},0}g_{s,\stackrel{}{\kappa }_{\lambda _0}}^{}|s,\stackrel{}{\kappa }_{\lambda _0},1,`$ (43)
$`+\omega _0d{\displaystyle \underset{s,\stackrel{}{\kappa }_n}{}}{\displaystyle \frac{e^{i(\omega _0-\omega _{\lambda _n})t}-1}{\omega _0-\omega _{\lambda _n}}}\mathrm{\Psi }_{s,\stackrel{}{\kappa }_n,n}\sqrt{n+1}g_{s,\stackrel{}{\kappa }_n}^{}|s,\stackrel{}{\kappa }_n,n+1,.`$ (44)
One recognizes here the well known contributions from spontaneous and stimulated emissions. It should be stressed that although the final result looks familiar, the mathematical details behind the calculation are different from what we are accustomed to. For example, instead of
$`a_{s_1,\stackrel{}{\kappa }_1}^{}|s,\stackrel{}{\kappa },m|s_1,\stackrel{}{\kappa }_1,1;s,\stackrel{}{\kappa },m,`$ (45)
which would hold in the standard formalism for $`\stackrel{}{\kappa }_1\ne \stackrel{}{\kappa }`$, we get simply
$`a_{s_1,\stackrel{}{\kappa }_1}^{}|s,\stackrel{}{\kappa },m=0,`$ (46)
a consequence of $`a_{s_1,\stackrel{}{\kappa }_1}^{}a_{s,\stackrel{}{\kappa }}^{}=0`$.
###### Acknowledgements.
This work was done mainly during my stay at the Arnold Sommerfeld Institute in Clausthal. I gratefully acknowledge support from the Alexander von Humboldt Foundation.
# 2+1-dimensional black holes with momentum and angular momentum
## 1 Introduction
The discovery that 2+1-dimensional source-free Einstein gravity with a negative cosmological constant admits black hole spacetimes was initially surprising because this theory does not admit local gravitational degrees of freedom: If the Ricci tensor is constant so is the Riemann tensor, spacetime has constant negative curvature, and is therefore locally anti-de Sitter (adS). Subsequently multi-black-hole configurations were found and classified , but only in the time-symmetric context, when the gravitational momentum variables (extrinsic curvature) vanish. After a condensed review of these time-symmetric spacetimes in Section 2, we discuss in Section 3 the more general case when the momenta do not vanish. Our conclusions are summarized in Section 4.
## 2 Time-symmetric multi-black-holes
### 2.1 Initial States
By definition, time-symmetric geometries possess a spacelike surface $`S`$ such that reflection about this surface is an isometry. For multi-black-hole solutions this surface is Cauchy, so it suffices to classify the states at the moment of time-symmetry, on the two-dimensional spacelike $`S`$ whose extrinsic curvature vanishes. Because the three-dimensional spacetime has constant curvature $`\mathrm{\Lambda }<0`$ (where $`\mathrm{\Lambda }`$ is the cosmological constant; usually replaced by $`\mathrm{}^2=1/\mathrm{\Lambda }`$), the instrinsic geometry of $`S`$ is also one of constant negative curvature. Any such space can be put together out of pieces<sup>1</sup><sup>1</sup>1One standard construction uses a single piece, a fundamental domain of the discrete group $`𝒢`$ of isometries that specifies which parts of the domain’s boundary are to be identified, so that $`S=H^2/𝒢`$. We will however use an equivalent but somewhat different description. For details see . of its universal covering space, the simply-connected two-dimensional space of constant negative curvature, $`H^2`$.
The space $`H^2`$ is conveniently represented as the Poincaré disk, the region $`r<\mathrm{}`$ of the plane with polar coordinates $`(r,\theta )`$ and with the metric
$$ds^2=\frac{4}{\left(1-\frac{r^2}{\mathrm{}^2}\right)^2}\left(dr^2+r^2d\theta ^2\right)$$
(1)
The map between $`H^2`$ and the plane of polar coordinates $`(r,\theta )`$ is an equal-angle (conformal) map in which geodesics are represented as arcs of Euclidean circles normal to the “limit circle” $`r=\mathrm{}`$ (Fig. 1).
The basic time-symmetric “single” black hole is that due to Bañados, Teitelboim and Zanelli (BTZ) , with metric
$$ds^2=-\left(\frac{\rho ^2}{\mathrm{}^2}-m\right)dt^2+\left(\frac{\rho ^2}{\mathrm{}^2}-m\right)^{-1}d\rho ^2+\rho ^2d\varphi ^2.$$
(2)
Putting $`m=1=\mathrm{}`$ for simplicity we find the coordinates $`(\rho ,\varphi )`$ of the space part of (2) to be related to the $`(r,\theta )`$ of (1) by
$$r^2=\frac{\rho \mathrm{cosh}\varphi -1}{\rho \mathrm{cosh}\varphi +1},\qquad \mathrm{cos}\theta =\sqrt{\frac{\rho ^2-1}{\rho ^2\mathrm{cosh}^2\varphi -1}};$$
(3)
a polar coordinate plot of (3) as in Fig. 1b shows that the coordinates $`\rho ,\varphi `$ cover the Poincaré disk if $`\varphi `$ is given an infinite range. But the coordinate $`\varphi `$ of (2) is intended to have the usual range $`2\pi `$ of a polar angle. Fig. 1b shows in heavy outline a strip of half this size. We obtain the BTZ geometry by laying a second, identical copy of this strip on top and sewing the edges together (Fig. 1c).
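Relation (3) can be checked numerically (a small sketch we include for concreteness; the sample point is arbitrary): pulling the disk metric (1) back through (3) by finite differences reproduces the spatial part of (2) with $`m=\mathrm{}=1`$.

```python
import numpy as np

def disk_coords(rho, phi):
    """(r, theta) of the Poincare disk in terms of (rho, varphi), Eq. (3), m = l = 1."""
    r = np.sqrt((rho * np.cosh(phi) - 1.0) / (rho * np.cosh(phi) + 1.0))
    theta = np.arccos(np.sqrt((rho**2 - 1.0) / (rho**2 * np.cosh(phi)**2 - 1.0)))
    return r, theta

def disk_metric(r):
    """Poincare-disk metric (1) with l = 1, in (r, theta) coordinates."""
    f = 4.0 / (1.0 - r**2)**2
    return np.diag([f, f * r**2])

def pullback(rho, phi, h=1e-6):
    """J^T g_disk J, with the Jacobian of (3) estimated by central differences."""
    J = np.zeros((2, 2))
    for j, (dr, dp) in enumerate([(h, 0.0), (0.0, h)]):
        plus = np.array(disk_coords(rho + dr, phi + dp))
        minus = np.array(disk_coords(rho - dr, phi - dp))
        J[:, j] = (plus - minus) / (2.0 * h)
    r, _ = disk_coords(rho, phi)
    return J.T @ disk_metric(r) @ J

rho, phi = 1.7, 0.4                                   # arbitrary point with rho > 1
print(np.round(pullback(rho, phi), 6))                # ~ diag(1/(rho^2-1), rho^2)
print(np.round(np.diag([1.0 / (rho**2 - 1.0), rho**2]), 6))
```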
Multi-black-hole initial states can be constructed in an analogous way: gluing regions together by boundary geodesics of equal length makes a smooth union since the extrinsic curvature of a 2D geodesic vanishes. Any corners of the boundary should be $`90^{}`$, so that a regular neighborhood is created when four corners are glued together. Figure 2a shows an example. An identical copy is to be glued along the heavily drawn boundaries. The lightly drawn boundaries (with arrows) then become geodesic circles, and these are glued to each other. The result has the topology of a doubly punctured torus shown in Fig. 2b. Each puncture flares out to infinity, and in such a region it is isometric to an exterior region ($`\rho >\mathrm{}\sqrt{m}`$) of the BTZ initial state. One may describe this geometry as the initial state of two black holes that are joined through a common internal torus.
In the internal region of a general time-symmetric multi-black-hole initial state there are a number of minimal, homotopically inequivalent, closed geodesics (the curve with arrow and the dotted curves in Fig. 2b). Cutting the surface along these geodesics decomposes it into “flares” and trousers-shaped “cores.” The general core geometry is obtained from two geodesic hexagons as in Fig. 2c, and it is determined by three parameters, which can be taken to be the circumferences of the trousers’ legs and waist.
When assembling the general surface<sup>2</sup><sup>2</sup>2We confine attention to orientable geometries; non-orientable ones have a double covering that is orientable. out of cores and flares, we have to match the circumferences of the geodesics at the “seams,” but we can join them with an arbitrary “twist” (a rotation along the circles). When joining a flare to a core, the twist extends to an isometry $`\varphi \to \varphi +`$const of the exterior BTZ geometry and produces no new geometry; but a twist between two cores generally changes the geometry, so at each of those seams we lose one circumference parameter and gain one twist parameter, with no net change in the total number of parameters. Thus there are three parameters for each core component. If the surface has genus $`g`$ and $`k`$ exteriors, there are $`2g-2+k`$ cores, and the number of parameters is $`6g-6+3k`$. If $`g>1`$ we can have $`k=0`$, a finite, closed “universe.”
The hexagon constituents (Fig. 2c) of cores and corresponding infinite 2-gon components of flares are possible coordinate neighborhoods in which the metric can be given a standard form, such as (1). The coordinate transformations at the boundary, analogous to (3), increase in number and complexity with $`g`$ and $`k`$, but are well-defined when the gluing scheme is given by a figure like Fig. 2a, and the $`6g6+3k`$ parameters are specified. In this sense our figures (if labeled with the parameters) are similar to Feynman diagrams, representing well-defined mathematical expressions.
### 2.2 Time Development of a black hole
If we extend the metric of (2) to an infinite range of $`\varphi `$, we obtain a coordinate description of adS spacetime analogous to Rindler coordinates in Minkowski spacetime. Figure 3a shows the coordinates of (2) on the $`\varphi =0`$ section of adS spacetime as embedded in 2+1-dimensional flat space, as well as their continuation in the usual way to $`\rho <\mathrm{}\sqrt{m}`$. Here the horizontal axis is spacelike and planes perpendicular to it are timelike. The numbers label the values of the coordinate $`t`$.
Expressed in new coordinates,
$$P^2=\mathrm{}^2m-\rho ^2,T=\mathrm{}\varphi ,\mathrm{\Phi }=t/\mathrm{}$$
(4)
the metric (2) is the same expression as in the old coordinates. The new coordinates interchange the role of $`t`$ and $`\varphi `$ in the region $`\rho `$ or $`P<\mathrm{}\sqrt{m}`$, where these coordinates are timelike, and the spacelike surfaces are analogous to a Kantowski-Sachs universe. Therefore Fig. 3a can also be regarded as a picture of the $`t=0`$ section of (2). In that case the numbers label the values of $`\varphi `$, and the initial state is the curve labeled $`P=\mathrm{}\sqrt{m}`$.
To return to the interpretation of (2) as a black hole, we make $`\varphi `$ periodic by identifying $`\varphi =\pi `$ and $`\varphi =+\pi `$. Then all $`P`$ or $`\rho =`$ const curves are circles, with $`\rho =0`$ the throat of the “wormhole” geometry, which collapses to zero size at O as the timelike $`\rho `$ increases. The origin O, which previously was a coordinate singularity, becomes a non-Hausdorff singularity at $`P=0`$, as in Misner space . This singular line extends to infinity and marks the endpoint $`E`$ of null infinity. Therefore the past of null infinity has a boundary, the horizon of the black hole.
Figure 3b is a representation of adS spacetime as the interior of a cylinder in “sausage coordinates” . Each horizontal slice of the cylinder is a Poincaré disk as in Fig. 1b, and the time coordinate is one in which the adS space appears static. The mantle of the cylinder represents infinity. The heavily outlined region is half ($`\varphi =0`$ to $`\varphi =\pi `$, for example) of the BTZ spacetime (2). After doubling, the spacetime is no longer static, for example because the boundaries where the gluing takes place approach each other and intersect in a geodesic that ends at the point $`E`$ at infinity. The heavily-outlined lozenge-shaped regions on the left front and right rear of the cylinder are the two components of null infinity. The one in front has endpoint at $`E`$. The horizon (striped surface) is the backward lightcone from $`E`$.
### 2.3 Time development of multi-black-holes
Each exterior region of a multi-black-hole initial state is isometric to a BTZ exterior, therefore the time development of each exterior will also be isometric to that shown in Fig. 3b. In particular, as seen from one such exterior, the other black holes lie behind that exterior’s horizon. The whole spacetime up to the non-Hausdorff singularity can be obtained by doubling the regions of adS space corresponding to the initial neighborhoods (such as those of Fig. 2a or c). The boundaries (“seams”) of these spacetime regions are generated by timelike geodesics normal to the initial boundaries.
Such two-dimensional, timelike boundaries are totally geodesic, and therefore fit together smoothly. Their intrinsic geometry is constant negative curvature (two-dimensional adS). The normal geodesics do not generate a complete surface, but only the part of a two-dimensional adS space that lies in the domain of dependence of the initial surface. Because all normal geodesics to a time-symmetric surface in adS spacetime intersect in one point, all the seams also intersect in one point $`T`$. When they are analytically extended to complete surfaces they intersect along spacelike geodesics, which will form the non-Hausdorff singularity after gluing. Thus the top (and bottom) of the region to be doubled looks like a pyramid-shaped “tent” whose ridge lines are the singularities (Fig. 4a). In the interior the ridge lines come together at the point $`T`$. From there they run to infinity, where they define the endpoints of each exterior’s null infinity. The horizon is obtained by running a lightcone backwards from each of these endpoints to the points of intersection with another such backward lightcone. (For details of this construction see .)
## 3 Angular momentum
The general BTZ metric describes a “single” black hole (with two asymptotically adS regions) that has angular momentum $`J`$ in addition to mass $`m`$. As we will see below, the metric with $`J0`$ can be obtained from the time-symmetric one, which has $`J=0`$, by changing the rules by which its two halves are glued together. To fix ideas we first consider such rule change within the time-symmetric class.
### 3.1 Alternative ways of gluing
Consider a three-black-hole initial state, obtained by gluing together two copies of the region between three disjoint geodesics on the Poincaré disk. Previously we have identified each point of the heavily drawn curves in the upper disk of Fig. 4b with the one vertically below it on the lower disk. The geodesic’s neighborhood is invariant under “translation” isometries that move each point on the geodesic by a constant distance. Therefore we get an equally smooth surface if we move the points on the lower disk by such an isometry, so that the gluing identifies points that are connected by the arrows in Fig. 4b. The result of gluing with a shift depends on the amount of shift, for example because the size of the minimal closed geodesics around the adjacent black holes, and hence their masses, depend on it. However, a change in mass is all that can happen to the initial state, because we know that all time-symmetric three-black-hole initial states are characterized by just three mass parameters. The shift can always be “transformed away” by changing the geodesic seams that are to be glued together.<sup>3</sup><sup>3</sup>3The same circumstance in flat space is illustrated by gluing a cylinder out of a piece of paper either with or without such a shift. In either case one gets a cylinder. If there is no shift, the seam is parallel to the cylinder’s axis. If there is a shift, the cylinder’s radius is smaller, and the seam is a helix on the cylinder.
In the space-time picture of a black hole (Fig. 3b) or of multi-black-holes (Fig. 4a) the seams are timelike hypersurfaces of constant negative curvature. These surfaces are invariant under a 3-parameter group of isometries. Again we can consider different gluing rules, depending on what isometry is applied at the seam. We want to distinguish those ways of re-gluing time-symmetric black holes that lead to new types of spacetimes.
We have already seen that we do not get a new class of spacetimes from re-gluing a seam by an isometry that leaves the surface of time-symmetry invariant. Similarly, if we re-glue a time-symmetric BTZ black hole by a time translation, we again get a spacetime with a surface of time-symmetry, that is, another time-symmetric black hole. This is illustrated in Fig. 4c in the timelike subspace obtained by slicing Fig. 3b with a vertical plane from left back to right front. Two copies of this plane are shown in perspective. The two pairs of curves on the planes are the seams, at $`\varphi =0`$ and $`\pi `$ on the top plane, and $`\varphi =\pi `$ and $`2\pi `$ on the bottom plane. The arrows connect points that are to be glued together.<sup>4</sup><sup>4</sup>4This alternative to gluing along vertical arrows can also be illustrated in Fig. 3a, where the numbers at the bottom are values of $`\varphi `$: cut the figure into two halves by a vertical plane through the center and perpendicular to the picture plane, rotate one half against the other about a horizontal axis, and re-glue. The heavy lines on the planes connect smoothly to form a closed geodesic<sup>5</sup><sup>5</sup>5In an accurate plot of sausage coordinates these geodesic segments would not look straight as they do in this qualitative picture. in the new surface of time-symmetry produced by this re-gluing. (In the three-dimensional picture the new surface of time-symmetry is obtained from the old one (labeled $`t=0`$ in Fig. 3b) by a “Lorentz transformation” isometry that has the geodesic $`t=0,\varphi =\pi `$ as an axis.)
In order to obtain a new class of black holes, with angular momentum, we re-glue a time-symmetric black hole by an isometry in the seam that has a fixed point at $`t=0`$.
### 3.2 BTZ black hole with angular momentum
In the metric (2) for the static BTZ black hole, introduce new coordinates $`T,\phi ,R`$,
$`t=T+\left({\displaystyle \frac{J}{2m}}\right)\phi `$
$`\varphi =\phi +\left({\displaystyle \frac{J}{2m\mathrm{}^2}}\right)T`$ (5)
$`R^2=\rho ^2\left(1-{\displaystyle \frac{J^2}{4m^2\mathrm{}^2}}\right)+{\displaystyle \frac{J^2}{4m}}`$
where $`J<2m\mathrm{}`$ is a constant with dimension of length, and define another new constant
$$M=m+\frac{J^2}{4m\mathrm{}^2}.$$
(6)
In terms of these new quantities the metric (2) becomes
$$ds^2=-N^2dT^2+N^{-2}dR^2+R^2\left(d\phi +\frac{J}{2R^2}dT\right)^2$$
(7)
where
$$N^2=\left(\frac{R}{\mathrm{}}\right)^2-M+\left(\frac{J}{2R}\right)^2.$$
(8)
Equation (7) is the metric for a black hole with angular momentum $`J`$. In this metric, the new coordinate $`\phi `$ is taken as periodic. When it changes by its period, $`2\pi `$, the old coordinates of (2) change by
$$t\to t+\frac{\pi J}{m}\qquad \varphi \to \varphi +2\pi $$
(9)
This tells us that in order to obtain a black hole with angular momentum, by re-gluing a time-symmetric one, we should apply a “boost” by $`\pi J/m`$ at the $`\varphi =2\pi `$ seam about the (old) horizon.
The construction by re-gluing a $`J=0`$ black hole spacetime does not yield the full $`J\ne 0`$ spacetime, because the pieces that we glue together end at the ridge lines ($`r=0`$) that become non-Hausdorff singularities in the $`J=0`$ case. When $`J\ne 0`$ the spacetime can be extended beyond the ridge line, to $`R=0`$, which would correspond to negative $`r^2`$. Otherwise stated, we do not obtain a representation of the full spacetime when we cut it into two pieces, because the two cuts we make along the seams intersect each other when we get too far from the initial surface. Nevertheless, a piece of the spacetime is enough to characterize it, and we can use it to deduce the number of parameters needed.
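The substitution (5) can be verified in the same spirit (again a sketch added here, with arbitrary sample values of $`m`$ and $`J`$ and $`\mathrm{}=1`$ satisfying $`J<2m\mathrm{}`$): pushing the static metric (2) through (5) reproduces (7)–(8) at a generic point outside the outer horizon.

```python
import numpy as np

l, m, J = 1.0, 1.2, 1.5                          # arbitrary sample values, J < 2*m*l
M = m + J**2 / (4 * m * l**2)                    # Eq. (6)

def old_coords(T, R, Phi):
    """(t, rho, varphi) of the static metric (2) as functions of (T, R, phi), Eq. (5)."""
    t = T + (J / (2 * m)) * Phi
    varphi = Phi + (J / (2 * m * l**2)) * T
    rho = np.sqrt((R**2 - J**2 / (4 * m)) / (1 - J**2 / (4 * m**2 * l**2)))
    return np.array([t, rho, varphi])

def g_static(t, rho, varphi):
    f = rho**2 / l**2 - m                        # metric (2) in (t, rho, varphi)
    return np.diag([-f, 1.0 / f, rho**2])

def g_rotating(T, R, Phi):
    N2 = (R / l)**2 - M + (J / (2 * R))**2       # Eq. (8)
    g = np.zeros((3, 3))
    g[0, 0] = -N2 + (J / (2 * R))**2             # metric (7) written out
    g[1, 1] = 1.0 / N2
    g[2, 2] = R**2
    g[0, 2] = g[2, 0] = J / 2.0
    return g

def pullback(T, R, Phi, h=1e-6):
    x = np.array([T, R, Phi])
    Jac = np.zeros((3, 3))
    for j in range(3):
        dx = np.zeros(3); dx[j] = h
        Jac[:, j] = (old_coords(*(x + dx)) - old_coords(*(x - dx))) / (2.0 * h)
    t, rho, varphi = old_coords(T, R, Phi)
    return Jac.T @ g_static(t, rho, varphi) @ Jac

pt = (0.3, 2.0, 0.7)                             # arbitrary point outside the horizon
print(np.round(pullback(*pt), 6))
print(np.round(g_rotating(*pt), 6))
```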
### 3.3 Multi-black-holes with angular momenta
Our purpose is to characterize spacetimes that have angular momenta and are counterparts to the time-symmetric multi-black-hole spacetimes of section 2. The basic building block is the three-black-hole spacetime of Fig. 4a. It has three totally geodesic seams, each of which is a 1+1-dimensional adS spacetime, of constant negative curvature $`\mathrm{\Lambda }`$, exactly the same as the seams of the single black hole of section 3.2. Each of these seams can therefore be re-glued smoothly by applying an adS isometry. These isometries form a three-parameter group. As before, only a one-dimensional subset leads to spacetimes without any surface of time-symmetry, so there is one effective parameter per seam. The general three-black-hole state is therefore characterized by three configuration parameters, which can be taken to be the three masses, and three momentum parameters, the boost angles at the three seams.
The actual angular momentum of any one of the black holes (and therefore also its actual mass $`M`$, equation (6)) depends on the boost parameters of its two adjacent seams, and on the fixed point of the boost. Note that a boost at a seam is an isometry only within that seam (and in a neighborhood of the seam), but it cannot in general be extended to the whole spacetime. Once we have a three-black-hole spacetime with angular momentum, we can forget about the time-symmetric geometry that was used to construct it. The core will extend only to each leg’s local (outer) horizon at $`R_\mathrm{H}`$ (where $`R_\mathrm{H}`$ is the larger root of $`N^2(R_\mathrm{H})=0`$). The geometry is locally reflection-symmetric about $`R_\mathrm{H}`$, and it can be matched there to other cores with the same local geometry.
The general time-symmetric multi-black-hole spacetime considered here was put together out of three-black-hole cores and exteriors (Fig. 2b). We can put a spacetime together in topologically the same way if the cores have angular momenta. We only have to match masses and angular momenta at the seams, because the neighborhood of each seam (including the entire interior region, $`R<R_\mathrm{H}`$) depends only on that seam’s $`M`$ and $`J`$. So, when matching two cores we lose two of the parameters characterizing the separate cores. In the time-symmetric case we also gained one parameter, which we called a twist because it describes a rotation along the seam. In the case of space-time, the neighborhood of a seam is the same as a single black hole metric (7). It is therefore locally invariant not only under a change in $`\varphi `$ (“twist”) but also under a change in $`T`$ (“boost”). Re-identifying an internal seam with a twist and a boost gives us two additional parameters back. Thus our general multi-black-hole is characterized by twice the number of parameters necessary to specify a time-symmetric one.
## 4 Conclusions
We have seen that sourceless 2+1-dimensional Einstein theory with a negative cosmological constant admits solutions with all spatial two-dimensional topologies that can carry a constant negative curvature metric.<sup>6</sup><sup>6</sup>6In addition, the torus topology is also admitted, but not as a time-symmetric state. For example, identifying $`t`$ and $`\varphi `$ periodically for $`\rho <\mathrm{}\sqrt{m}`$ in (2) yields the torus topology. Among these are multi-black-hole spacetimes that have several asymptotically anti-de Sitter regions, each of which is characterized by a mass and an angular momentum. A number of additional parameters are needed (except for the three-black-hole configuration) to characterize the internal structure. Half of these are configuration parameters that specify internal sizes or angular relationships; the other half are momentum parameters, which vanish if the space-time is time-symmetric. |
# Semiclassical properties of eigenfunctions and occupation number distribution for a model of two interacting particles
## Abstract
Quantum-classical correspondence for the shape of eigenfunctions, local spectral density of states and occupation number distribution is studied in a chaotic model of two coupled quartic oscillators. In particular, it is shown that both classical quantities and quantum spectra determine global properties of occupation numbers and inverse participation ratio.
thanks: Present address: Max-Planck-Institut für Kernphysik, D-69117 Heidelberg, Germany. Email: Luis.Benet@mpi-hd.mpg.de
Recently, the study of the quantum manifestations of classical chaotic systems (quantum chaos) has turned from spectral statistics to properties of eigenfunctions. For the former, statistical aspects of spectral fluctuations are well established: Random matrix predictions follow for chaotic systems and Poisson-like statistics for the integrable ones . For eigenfunctions, however, the approach seems not so straightforward. The inherent difficulty here arises essentially from the dependence on the basis. This forces us either to define basis independent quantities, or to specify a basis.
In this letter we shall follow the second possibility, trying to develop a framework as general as possible. We shall concentrate on the quantum-classical correspondence of the quantities such as the shape of eigenfunctions (EF), the local spectral density of states (LDOS) and the single-particle occupation number distribution ($`n_s`$). We study a Hamiltonian that displays classical chaos and has a spreading width in the single particle basis, that is sufficiently large to allow statistical treatment of the components. We find excellent agreement between these quantities and their classical analogues in the semiclassical region. The classical analogues are defined through phase space integrals. Therefore, they do not depend on dynamical properties such as integrability or chaos of the full Hamiltonian. We shall also show that this correspondence allows one to approximately obtain some important characteristics for which quantum phases of eigenfunctions play no role. In this way, in the semiclassical limit, one can obtain mean values of single-particle operators without diagonalization of large matrices. In particular, we present calculations for the inverse participation ratio which distinguishes localized states from extended ones in the unperturbed basis. These computations require only the knowledge of the classical analogue of EF and the spectra of the unperturbed Hamiltonian. We present our approach for the two-body problem, but essential parts of the approach can be easily extended to the $`N`$-body problem.
Let us begin by considering a two-body Hamiltonian of the form $`H=H_0+V`$, and assume that its classical dynamics is fully chaotic. Here, the unperturbed Hamiltonian is separable and, therefore, integrable in terms of two one-particle Hamiltonians, i.e. $`H_0=h_1+h_2`$. In turn, $`V`$ is the potential which couples the motion of the particles. For our purposes, we shall assume that both $`H`$ and $`H_0`$ remain invariant under the particle interchange ($`h_1=h_2=h`$).
For the quantum treatment of such Hamiltonians the unperturbed basis $`H_0`$ seems a convenient choice for the representation of exact eigenfunctions. The rate of convergence certainly depends on the strength of the perturbation $`V`$. For instance, if exact eigenfunctions are extended all over the energy range considered, as is the case for certain potentials near the dissociation limit , the convergence will be very slow. We, therefore, assume that $`V`$ is such that the perturbed eigenfunctions are extended over a certain energy range so as to allow convergence, but that this range is large enough in terms of the number of principal components, i.e. that the spreading width is sufficiently large .
The unperturbed basis is defined by the eigenfunctions $`|\mathrm{\Phi }_k^0`$ of $`H_0`$, reordered according to increasing eigenvalues, $`E_k^0<E_l^0`$ for $`k<l`$. These basis functions are written as properly symmetrized linear combinations of products of single-particle basis states. The single-particle basis is defined by the Schrödinger equation, $`h|\phi _i=ϵ_i|\phi _i`$, and the combination of the basis states is such that $`E_k^0=ϵ_{i_1}+ϵ_{i_2}`$. In this sense, the basis defined by the mean field approximation coincides with the unperturbed basis when $`V`$ is the residual interaction.
Denoting by $`|\mathrm{\Psi }_i`$ the eigenstates of the total Hamiltonian and by $`E_i`$ the corresponding eigenvalues, in terms of the basis states we have
$$|\mathrm{\Psi }_i=\underset{k}{}C_k^i|\mathrm{\Phi }_k^0.$$
(1)
The expansion coefficients $`C_k^i`$ define some global quantities that we consider here. First, the shape of eigenfunctions (EF), also called the F-function, is defined as the distribution obtained by an average of the squared expansion coefficients as a function of the unperturbed energy
$$F_k^i\equiv \overline{\left|C_k^i\right|^2}=F(E_i,E_k^0).$$
(2)
Here, the average is defined over a small window of perturbed eigenstates around $`E_i`$. The average has been introduced in order to smooth the fluctuations arising from individual wave functions considered. It has been shown that the F-function defines a kind of thermodynamic partition function for systems of a finite number of interacting particles , if the components meet certain statistical requirements. These will certainly be met if the classical Hamiltonian leads to chaotic motion and the spreading width is large enough.
The second quantity of our interest, the LDOS gives the distribution of unperturbed eigenstates in terms of the perturbed ones. The LDOS is related to the EF by
$$P_k^iF(E_i,E_k^0)\rho (E_i),$$
(3)
where $`\rho (E_i)`$ is the level density for exact eigenstates, and the F-function is taken now for a fixed value of the unperturbed energy $`E_k^0`$. Therefore, the LDOS is a function of the perturbed energy $`E_i`$.
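Once the Hamiltonian has been diagonalized in the unperturbed basis, both quantities follow directly from the matrix of expansion coefficients. A schematic implementation (our own illustration; the random-matrix toy model at the end merely stands in for an actual two-body Hamiltonian) could read:

```python
import numpy as np

def ef_and_ldos(C, E, E0, i0, window=21, bins=60):
    """
    C[k, i] : coefficients C_k^i of eigenstate i in the unperturbed basis
    E[i]    : perturbed eigenvalues;  E0[k] : unperturbed eigenvalues
    Returns the EF (2) averaged over `window` states around i0 (recentred on E_i)
    and the LDOS (3) of the unperturbed state closest in energy to E[i0].
    """
    half = window // 2
    e_rel, w = [], []
    for i in range(max(0, i0 - half), min(C.shape[1], i0 + half + 1)):
        e_rel.extend(E0 - E[i])
        w.extend(np.abs(C[:, i])**2)
    ef = np.histogram(e_rel, bins=bins, weights=w, density=True)

    k0 = np.argmin(np.abs(E0 - E[i0]))
    ldos = np.histogram(E - E0[k0], bins=bins,
                        weights=np.abs(C[k0, :])**2, density=True)
    return ef, ldos

# toy usage: a full random perturbation of a diagonal "unperturbed" Hamiltonian
rng = np.random.default_rng(0)
n = 400
E0 = np.sort(rng.uniform(0.0, 10.0, n))
V = rng.normal(size=(n, n)); V = 0.02 * (V + V.T)
E, C = np.linalg.eigh(np.diag(E0) + V)            # columns of C are the C_k^i
(ef_hist, ef_edges), (ldos_hist, ldos_edges) = ef_and_ldos(C, E, E0, i0=200)
```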
These quantities have well-defined classical interpretations. For instance, the classical EF is the distribution resulting from the time-dependent unperturbed energy $`\mathcal{E}_0(t)`$, obtained by substituting the solutions of the equations of motion for the Hamiltonian $`H`$ into the expression for $`H_0`$. Since $`H`$ is assumed to generate fully chaotic and thus ergodic dynamics, we can replace the time integration along one typical orbit by a phase space integral. Therefore, one can write for the classical EF the phase space integral
$$g(\mathcal{E},\mathcal{E}_0)=A\int d𝐩d𝐪\delta (\mathcal{E}-H(𝐩,𝐪))\delta (\mathcal{E}_0-H_0(𝐩,𝐪)),$$
(4)
where $`𝐪=(q_1,q_2)`$, $`𝐩=(p_1,p_2)`$ are the position and momentum vectors, and $`A`$ is a normalization constant. For the classical EF in Eq. (4), the independent variable is $`\mathcal{E}_0`$; the total energy $`\mathcal{E}`$ is fixed.
The classical LDOS can be obtained in the same terms by integrating the equations of motion for $`H_0`$ and substituting the solutions into the expression for $`H`$. Since $`H_0`$ is an integrable Hamiltonian, one is forced to consider an average over different initial conditions. Then, Eq. (4) serves also to define the classical LDOS; $`\mathcal{E}_0`$ is now held fixed and $`\mathcal{E}`$ is the independent variable. In the following, we shall use the notation $`g(\mathcal{E}_0)`$ to indicate the classical EF and $`g(\mathcal{E})`$ the classical LDOS, when referring to Eq. (4).
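The integral (4) itself can be estimated without solving any equations of motion, by sampling a thin energy shell of $`H`$ and histogramming $`H_0`$ there. The sketch below is generic (`H_total` and `H_unpert` are placeholders for whatever concrete $`H`$ and $`H_0`$ one chooses; the shell width and sample size are ad hoc):

```python
import numpy as np

def classical_ef(H_total, H_unpert, E, box, n_samples=100_000,
                 delta=None, bins=60, seed=1):
    """
    Monte Carlo estimate of Eq. (4): histogram of H_0 over the thin shell
    |H - E| < delta, sampled uniformly in the rectangular phase-space box
    box = [(low_1, high_1), ...], which must cover the energy shell.
    """
    rng = np.random.default_rng(seed)
    delta = 0.01 * abs(E) if delta is None else delta
    lows = np.array([b[0] for b in box])
    highs = np.array([b[1] for b in box])
    pts = rng.uniform(lows, highs, size=(n_samples, len(box)))
    on_shell = np.abs(np.array([H_total(x) for x in pts]) - E) < delta
    e0 = np.array([H_unpert(x) for x in pts[on_shell]])
    return np.histogram(e0, bins=bins, density=True)
```

With `H_total` given by the quartic Hamiltonian introduced below and `H_unpert` its separable part, the resulting histogram is an estimate of the classical envelope entering the comparisons.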
We notice that the classical and quantum quantities, for instance the EF as given by Eq. (4) and the F-function Eq. (2), differ in the way they are normalized. The former, being a probability distribution, is normalized according to $`\int g(\mathcal{E},\mathcal{E}_0)d\mathcal{E}_0=1`$, which actually defines the value of the constant $`A`$. For the latter, the normalization is the unitarity condition for the expansion coefficients, i.e. $`\sum _k|C_k^i|^2=1`$. For a fair comparison of the classical and quantum results we require the normalizations to be of the same type. This is achieved by including the local density of states, i.e. $`\int \sum _k|C_k^i|^2\delta (E^0-E_k^0)dE^0=1`$. Numerically, we shall calculate this expression by replacing the local density of states by a step function, which is different from zero only in a small interval that includes some levels, where its value is one. Then, we divide the energy range into a number of bins, and associate to each bin the sum of the intensities $`|C_k^i|^2`$ of the energy levels contained in it.
We shall now turn to the comparison of the quantum EF and LDOS with their classical counterparts in the following model. We consider two indistinguishable coupled quartic oscillators with the Hamiltonian
$$H=\frac{1}{2}(p_1^2+p_2^2)+\alpha (x_1^4+x_2^4)+\beta x_1^2x_2^2+\gamma (x_1^3x_2+x_1x_2^3).$$
(5)
Here, $`\alpha >0`$ and we consider $`\beta <0`$ in order to have strongly chaotic dynamics far from the dissociation limit, which is given by $`2\alpha +\beta -2|\gamma |=0`$. For the results presented below, we have used $`\alpha =10`$, $`\beta =-5.5`$ and $`\gamma =5.6`$; the system is strongly chaotic and the phase space is quite homogeneous .
The system defined by the Hamiltonian (5) is obviously integrable for $`\beta =\gamma =0`$; we shall consider this case to define the unperturbed Hamiltonian $`H_0`$. We note that in the basis defined by $`H_0`$ there are diagonal contributions from the term $`\beta x_1^2x_2^2`$. These contributions could be incorporated in the definition of $`H_0`$ in order to improve the approximate mean field, but we avoid this complication.
Since the potential is a homogeneous polynomial, the system scales classically with the energy. This property can be carried over to the classical expressions for the EF, LDOS and $`n_s`$. For instance, for the classical EF, one finds $`g(\mathcal{E},\mathcal{E}_0)=\mathcal{E}^{-1}g(1,\mathcal{E}_0/\mathcal{E})`$, with obvious extensions to the other quantities.
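For definiteness, a short integration sketch for the model (5) (the initial condition below is an arbitrary one on a chaotic trajectory; parameter values as quoted above): recording $`H_0`$ along a trajectory of $`H`$ and histogramming it gives the time-average version of the classical EF described before.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma = 10.0, -5.5, 5.6            # parameters quoted in the text

def rhs(t, y):
    """Hamilton's equations for H of Eq. (5); y = (x1, x2, p1, p2)."""
    x1, x2, p1, p2 = y
    dV1 = 4*alpha*x1**3 + 2*beta*x1*x2**2 + gamma*(3*x1**2*x2 + x2**3)
    dV2 = 4*alpha*x2**3 + 2*beta*x1**2*x2 + gamma*(x1**3 + 3*x1*x2**2)
    return [p1, p2, -dV1, -dV2]

def H0(y):
    """Unperturbed energy: kinetic part plus the separable quartic terms."""
    x1, x2, p1, p2 = y
    return 0.5*(p1**2 + p2**2) + alpha*(x1**4 + x2**4)

y0 = [0.3, -0.1, 1.0, 0.2]                      # arbitrary initial condition
t = np.linspace(0.0, 200.0, 20000)
sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, rtol=1e-9, atol=1e-9)

E0_t = H0(sol.y)                                # time series of the unperturbed energy
ef_hist, ef_edges = np.histogram(E0_t, bins=60, density=True)
```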
In the following we present results obtained for fermions of even parity for the Hamiltonian (5); similar results were obtained for other symmetry classes. Figure 1a refers to the EF of a typical eigenstate, which extends over a certain range of unperturbed energies; Fig. 1b shows the EF for the uncommon case of an eigenfunction with a smaller number of principal components, a localized eigenfunction. This can be readily appreciated in the vertical scale (intensity) and in the apparent density of peaks (the eigenfunctions are normalized). The distinction between a localized and an extended eigenstate can be made quantitative, for instance, by considering the number of principal components: In the former case we obtain $`l=(\sum _k|C_k^i|^4)^{-1}=146.5`$, while for the latter we have $`l=18.4`$. Similar properties are also found when we consider the LDOS for individual eigenfunctions of $`H_0`$.
Since most eigenstates display similar statistical features, an average considering neighbouring eigenstates will smooth the fluctuations, and we expect a better correspondence. In Figs. 2 we present the results for the EF and the LDOS obtained for a recentered average of eigenstates containing 21 eigenstates. The average was performed by recentering each eigenstate, so the peak associated with the actual energy of the eigenstate is labeled as zero. We notice that this procedure incorporates the fact that neighbouring eigenstates are typically similar.
As shown in the insets of Figs. 1-2, for generic eigenstates there is a clear correspondence between the classical and the quantum results. As it can be appreciated in the plots, in this case, the tails of the distribution are well approximated by the classical calculations, while the central peak is the main concern for the correspondence. The results presented are actually improved as the semiclassical limit is reached. In fact, as we approach the semiclassical limit, the density of states increases and therefore the central classical peak is better resolved. As one would expect, the localized eigenstates display strong deviations from the classical results. Notice though, that after averaging the classical correspondence emerges again, since the main contributions come from the (typical) extended eigenstates. This is appreciated in Fig. 2b, where we present the LDOS obtained for the recentered average taken over 21 eigenstates, where we have chosen the central eigenstate to be localized.
At this point we shall emphasize that no free parameter has been used to fit the data. Namely, the classical energy is taken from the energy of the eigenstate under consideration (and the scaling property is used); for the results involving the average, the energy corresponds to the average energy of the eigenstates within the window. Furthermore, the similarity (under certain reflection) of the LDOS and the EF displayed in Figs. 2 is a consequence of the symmetry of $`H`$ and $`H_0`$ in Eq. (4), as given by (3).
The good correspondence found for the EF and the LDOS in the semiclassical limit can be understood by interpreting Eq. (4) as a kind of generalization of the Weyl formula for the intensities of the eigenstates, with respect to a certain basis $`H_0`$. The fact that this expression is a phase space integral, implies that it contains no information about the integrability or chaos of the classical systems.
While the above quantum-classical correspondence for the LDOS and EF, to a large extent, can be expected from previous studies of other models , in what follows we concentrate on the analysis of “single-particle” properties, in particular, the occupation numbers of single-particle states (see also where this quantity was studied for two interacting spins). This may seem of marginal importance for two-particle systems, though it is certainly of great significance as the number of particles increases . Yet, the two-particle system will be an adequate test ground to study how the semiclassical approach can be applied.
In terms of the expansion coefficients $`C_k^i`$, the single-particle occupation number distribution $`n_s`$ is defined as
$$\langle n_s^i\rangle \equiv \langle \mathrm{\Psi }_i|\widehat{n}_s|\mathrm{\Psi }_i\rangle =\sum _k|C_k^i|^2n_s^{(k)},$$
(6)
where $`\widehat{n}_s=a_s^{\dagger }a_s`$ is the occupation number operator, $`a_s^{\dagger }`$ and $`a_s`$ are the creation and annihilation operators, and $`n_s^{(k)}=\langle \mathrm{\Phi }_k^0|a_s^{\dagger }a_s|\mathrm{\Phi }_k^0\rangle `$. Aside from its significance in statistical mechanics, the interest of the occupation number operator is that it allows one to calculate mean values of any single-particle operator, $`\langle M\rangle =\sum _sn_sM_{ss}`$.
The classical $`n_s`$ is defined in the same terms as the classical EF or LDOS. Accordingly, it is obtained by computing the time-dependent single-particle energy distribution $`ϵ(t)`$, using the solutions of the classical equations of motion for $`H`$. Again, an expression similar to Eq. (4) can be written for the classical occupation number distribution, which is given by
$$g_n(ϵ,\mathcal{E})=A^{\prime }\int d𝐩d𝐪\delta (\mathcal{E}-H(𝐩,𝐪))\underset{i=1}{\overset{2}{\sum }}\delta (ϵ-h_i(p_i,q_i)).$$
(7)
Here, $`ϵ`$ is the independent variable, $`\mathcal{E}`$ is the energy of the full Hamiltonian, and $`h_i`$ represents the one-particle Hamiltonian. In the present case we have assumed the particle interchange symmetry, so the sum in Eq. (7) may be absorbed in the normalization constant, which corresponds to the number of particles.
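The same thin-shell sampling used for Eq. (4) yields Eq. (7) by histogramming the two single-particle energies instead of $`H_0`$. A self-contained sketch for the model (5) (the total energy, shell width and sampling box are ad hoc choices):

```python
import numpy as np

alpha, beta, gamma = 10.0, -5.5, 5.6
E, delta = 1.0, 0.02                          # shell |H - E| < delta

rng = np.random.default_rng(2)
n = 400_000
x = rng.uniform(-0.8, 0.8, size=(n, 2))       # box large enough to contain the shell
p = rng.uniform(-1.5, 1.5, size=(n, 2))

V = (alpha*(x[:, 0]**4 + x[:, 1]**4) + beta*(x[:, 0]*x[:, 1])**2
     + gamma*(x[:, 0]**3*x[:, 1] + x[:, 0]*x[:, 1]**3))
H = 0.5*(p**2).sum(axis=1) + V
shell = np.abs(H - E) < delta

# single-particle energies h_i = p_i^2/2 + alpha*x_i^4 on the shell, Eq. (7)
eps = 0.5*p[shell]**2 + alpha*x[shell]**4
ns_hist, ns_edges = np.histogram(eps.ravel(), bins=50, density=True)
```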
In Fig. 3 we present the results for $`n_s`$. No free parameter was used to fit the data. A good correspondence of the classical and quantum results is found, although the calculations that involve individual eigenfunctions display very large fluctuations. The correspondence is certainly better if we perform an average over some neighbouring eigenstates, and again is improved as we go deeper into the semiclassical region. It is not clear, however, to what extent the fluctuations observed for individual eigenfunctions are related to quantum localization effects.
The correspondence shown in Figs. 3 is novel and important. First, it has no special interpretation in the framework of classical mechanics for isolated systems of few interacting particles, although the single-particle occupation number distribution is an important quantity in quantum statistical mechanics, where it is linked to the Boltzmann distribution (in the thermodynamic limit). Second, we note that the tail of the $`n_s`$-distribution, which displays exponential decay, is well reproduced by Eq. (7). This is a non-trivial point if we recall that we deal with a two-particle system. Clearly, this permits one to define an analogue of the Boltzmann parameter for finite systems, although its interpretation as the inverse of the temperature is not generally accepted (see the discussion in ).
Having shown that good quantum-classical correspondence holds in the semiclassical limit, we can proceed with estimates of other quantities involving the F-function. Our calculations are based on the classical EF as calculated above, and require only knowledge of the single-particle spectra (which allow us to compute the unperturbed spectra) and of the perturbed spectra. We shall refer to our prescription as the “semiquantum approach”. Specifically, we illustrate the method with a calculation of the inverse participation ratio (IPR),
$$P^+(E_i)=\underset{k}{}|C_k^i|^4,$$
(8)
which is an important measure of the uniformity of the expansion distribution . Other quantities which involve even powers of the expansion coefficients, i.e. where the phase of eigenfunctions does not appear, can be obtained in the same terms.
In order to compute the IPR, we must express our classical (continuous) distribution as an intensity distribution, normalized like an eigenfunction. Obviously, our results will depend on how we discretize this distribution, though no free parameter is involved. The comparison between the semiquantum and the quantum results will thus give insight into the plausibility of the discretization. One possibility for this discretization is the following: we divide the unperturbed energy range into segments, such that every segment contains exactly one unperturbed eigenvalue and the segments span the whole interval. We define the limits of each such interval by the midpoints between neighbouring eigenvalues. Then, we define the classical intensity associated with an unperturbed energy as the area under the classical distribution within the segment that contains the unperturbed eigenvalue. This procedure leads to a semiquantum intensity distribution with the required normalization, and can therefore be used to obtain quantities like the $`n_s`$ or a semiquantum version of the IPR. However, the semiquantum intensities obtained in this way will display more zeros than the corresponding quantum ones. This is an obvious consequence of the finite range over which the classical density $`g(ℰ,ℰ_0)`$ is non-zero.
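The discretization just described is straightforward to implement numerically. The sketch below is only schematic: the Gaussian stand-in for the classical density and the synthetic unperturbed spectrum are assumptions for illustration, not the actual $`g(ℰ,ℰ_0)`$ of the model.

```python
import numpy as np
from scipy.integrate import quad

def semiquantum_intensities(g, E0_levels):
    """Integrate the continuous density g(E) over segments whose edges are the
    midpoints between consecutive unperturbed eigenvalues E0_levels."""
    E0 = np.sort(E0_levels)
    mid = 0.5 * (E0[1:] + E0[:-1])
    edges = np.concatenate(([-np.inf], mid, [np.inf]))
    w = np.array([quad(g, a, b)[0] for a, b in zip(edges[:-1], edges[1:])])
    return w / w.sum()                     # normalized like |C_k|^2

def ipr(weights):
    """Inverse participation ratio P+ = sum_k w_k^2."""
    return np.sum(weights ** 2)

# placeholder classical density (Gaussian) and a synthetic unperturbed spectrum
g = lambda E: np.exp(-0.5 * ((E - 10.0) / 2.0) ** 2)
E0_levels = np.cumsum(np.random.default_rng(1).uniform(0.2, 0.6, size=80))
w = semiquantum_intensities(g, E0_levels)
print("semiquantum IPR:", ipr(w))
```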
We note that from the classical scaling properties of the EF, $`g(ℰ,ℰ_0)\propto ℰ^{-1}`$, and of the mean level density, $`\rho (ℰ)\propto ℰ^{1/2}`$, we can obtain a semiquantum estimate for the IPR, which is expected to display a power-law decay of the form $`[g(ℰ,ℰ_0)\rho (ℰ)]^2\propto ℰ^{-1}`$. In Figs. 4 we present results for the semiquantum and quantum IPR. The quantum results shown correspond to the direct evaluation of Eq. (8) for individual eigenstates, that is, no average or smoothing prescription has been used. A qualitative agreement is observed between the semiquantum and the quantum data, although quantitative differences arise. The power-law decay predicted from the classical scaling properties of the system (5) is well confirmed. In fact, we have fitted curves of the form $`P_i^+\propto E_i^{-\mu }`$ to our results, and obtained that the best fit is provided by $`\mu _{sq}=1.06`$ and $`\mu _q=1.07`$ for the semiquantum and quantum data, respectively. Analogous results, obtained when considering averages over windows of the eigenstates, show better quantitative agreement. In this case, the features related to large values of the IPR (localized eigenstates) are smeared out.
It is interesting to note that certain semiquantum states display a rather large IPR, which could be associated with some localization properties (scars). In turn, the quantum results display more of these localized states and the associated IPR is also larger. This enhancement of localization is a well-known quantum effect.
In summary, we have shown that important quantities like the EF, LDOS or $`n_s`$ have, in the semiclassical limit, good correspondence to their classical analogues in our model of two interacting non-linear oscillators. The classical quantities are obtained as phase space integrals, assuming that they are applied to the case when the classical dynamics is strongly chaotic. We have used the classical quantities, in particular the classical EF, together with the quantum spectra of the perturbed and unperturbed systems in order to obtain “semiquantum” intensities associated to a given (perturbed) eigenstate. We have compared the quantum results with the semiquantum ones, specifically for the inverse participation ratio, and found good qualitative agreement, although quantitatively they may display differences. In particular, the semiquantum results underestimate the IPR for rather localized (scars) eigenstates, both in their number and magnitude. The energy dependence of the IPR can be understood from the classical scaling properties of our model.
Since quantities like the IPR involve information on the underlying classical mechanics, we believe that semiquantum properties may help to explore quantum localization effects in the semiclassical region. Moreover, expressions like Eq. (4) may help to define a reference from which the study of eigenfunction fluctuations and their relation to the underlying dynamics can be achieved. Our results directly take into account the two-body nature of the inter-particle interaction, and can be easily extended to systems with any number of particles.
The authors sincerely thank François Leyvraz for useful discussions. This work was partially supported by the DGAPA (UNAM) project IN-102597 and by CONACYT (México) Grants No. 25192-E, 26163-E and 28626-E.
# Expectations For an Interferometric Sunyaev-Zel’dovich Effect Survey for Galaxy Clusters
## 1. Introduction
The evolution of the cluster abundance is a sensitive probe of the mass density $`\mathrm{\Omega }_m`$ (e.g., Viana and Liddle (1999); Bahcall and Fan (1998); Oukbir, Bartlett, and Blanchard (1997)). X-ray cluster surveys have started to constrain $`\mathrm{\Omega }_m`$, but they are limited by their sample size and rapid decline in sensitivity with redshift, making the counts very sensitive to the selection function. While the selection function is presumably well-understood, it would be preferable to have a probe whose sensitivity does not fall off precipitously with redshift. We will show that a Sunyaev-Zel’dovich effect survey is ideal in this regard.
Hot ionized cluster gas interacts with passing cosmic microwave background (CMB) photons, distorting the CMB spectrum to create a decrement of CMB flux at lower frequencies and an excess at higher frequencies. This spectral distortion is independent of the redshift of the cluster, and only depends on the optical depth to Compton scattering and the temperature of the gas. This is the thermal Sunyaev-Zel’dovich effect (SZE) (Sunyaev and Zel’dovich (1972)).
There have been several previous predictions of the number of clusters expected in SZE surveys (e.g., Bartlett and Silk (1994); Barbosa et al. (1996)), with most earlier work focusing on the total SZE flux. As survey yields depend sensitively on the observing strategy, we focus here on yields for a proposed interferometric survey (see Mohr et al. (1999)).
A large catalog of high-redshift clusters that extends to low masses would be an extremely useful resource for several reasons. Observations suggest that the universe may no longer be matter-dominated, with a large fraction ($`\sim 70\%`$) of its present energy density either in the form of curvature (open universe) or vacuum (cosmological constant) energy. Linear theory suggests that structure formation will slow considerably when the expansion dynamics are no longer dominated by matter; this occurred around $`z\sim 1.5`$ if current measurements of $`\mathrm{\Omega }_m`$ are correct. Therefore, the cluster abundance out to $`z\sim 2`$ should be a valuable probe of the matter density of the universe.
A large collection of high-redshift, low-mass clusters also would provide an ideal sample for exploring evolution of the intra-cluster medium (ICM) and feedback from galaxy formation. The shallow potential wells of less massive clusters are more strongly affected by energy input from non-gravitational sources and are therefore the best place to search for the signatures of such processes.
Estimating the cluster yield for an SZE survey requires that we know which properties of a cluster determine its likelihood of detection. We will show that the mass of a cluster is the single most important factor in determining if a given cluster can be detected by an interferometric survey. In this case, the calculation of the expected yield separates into two distinct exercises: finding the minimum observable mass as a function of redshift and calculating the number density of clusters above a given mass threshold as a function of redshift.
We determine the minimum observable mass as a function of redshift by making synthetic observations of N-body+gas hydrodynamical simulations (§2). The number density of clusters is calculated using the Press-Schechter prescription (Press and Schechter (1974); §3). Section 4 contains estimates of the expected survey yield and the results are discussed in Section 5.
## 2. Cluster Detectability
The SZE decrement along a given line-of-sight is independent of cosmology. It is simply proportional to the integrated thermal pressure along the line-of-sight and therefore only depends on the properties of the cluster. The decrement can be written
$$\frac{\mathrm{\Delta }T}{T_{\mathrm{𝐶𝑀𝐵}}}=g(x)\int dl\,n_e(l)\frac{k_BT_e(l)}{m_ec^2}\sigma _T,$$
(1)
where $`n_e`$ is the electron number density, $`T_e`$ is the electron temperature, $`m_e`$ is the electron rest mass, $`\sigma _T`$ is the Thomson cross section, $`g(x)=x(e^x+1)/(e^x-1)-4`$ with $`x=h\nu /k_BT_{\mathrm{𝐶𝑀𝐵}}`$, and the integral is along the entire line-of-sight; by assumption, the only significant contribution to the integral comes from the cluster atmosphere. In the Rayleigh-Jeans limit ($`\nu \ll 200`$ GHz) the dimensionless frequency factor $`g(x)=-2`$.
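As a quick numerical check of the frequency factor (our arithmetic, not a result quoted in the text), evaluating $`g(x)`$ at the 30 GHz observing frequency adopted below gives a value close to the Rayleigh-Jeans limit:

```python
import numpy as np

h, k_B, T_cmb = 6.626e-34, 1.381e-23, 2.725   # SI units
nu = 30e9                                      # survey frequency, Hz
x = h * nu / (k_B * T_cmb)
g = x * (np.exp(x) + 1.0) / (np.exp(x) - 1.0) - 4.0
print(f"x = {x:.3f}, g(x) = {g:.2f}")          # x ~ 0.53, g(x) ~ -1.96, close to the RJ value -2
```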
The corresponding flux density in the Rayleigh-Jeans regime is $`S_\nu =2k_B\mathrm{\Delta }T\nu ^2/c^2d\mathrm{\Omega }`$, where $`d\mathrm{\Omega }`$ is the effective solid angle of the observations. Thus the total SZE flux decrement $`S_{tot}`$ for a galaxy cluster can be written
$$S_{tot}(z)=\frac{2k_B^2\nu ^2g(x)\sigma _TT_{\mathrm{𝐶𝑀𝐵}}}{m_ec^4d_A(z)^2}\langle T_e\rangle _n\frac{M_{200}f_{ICM}}{\mu _em_p},$$
(2)
where $`d_A(z)`$ is the angular diameter distance, $`\langle T_e\rangle _n`$ is the electron density weighted mean temperature in the cluster, $`\mu _e`$ is the mean molecular weight per electron, $`m_p`$ is the proton mass, $`f_{ICM}`$ is the ratio of total gas mass to binding mass, and $`M_{200}`$ is a measure of the cluster virial mass, defined as the mass within $`r_{200}`$, the radius where the mean interior density is 200 times the critical density. Note that we explicitly ignore contributions to the SZE flux coming from outside the virial region.
Equation 2 indicates that the total SZE flux for a cluster is directly proportional to the cluster virial mass; the only dependence on cluster structure is through the density weighted mean temperature $`\langle T_e\rangle _n`$. This is unique to a survey with a beam large enough that the cluster SZE is not resolved. However, for maximum brightness sensitivity to the SZE effect, the beam used for a survey should be well matched to the typical angular scale subtended by clusters. Such a survey will partially resolve most of the clusters and will therefore be somewhat sensitive to the internal cluster structure. We show below that for a population of clusters with similar ICM mass fraction $`f_{ICM}`$ and similar temperature structure, the detection threshold for an interferometric SZE survey is also effectively a virial mass limit.
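To give a feeling for the magnitudes involved, the sketch below evaluates Eq. 2 for an assumed fiducial cluster (virial mass $`2\times 10^{14}M_{\odot }`$, $`k_BT_e\approx 5`$ keV, $`d_A=1.3`$ Gpc, $`f_{ICM}=0.2`$, $`\mu _e\approx 1.17`$); these fiducial numbers are illustrative choices, not values taken from the text.

```python
import numpy as np

# physical constants (SI)
k_B, sigma_T, m_e, m_p, c = 1.381e-23, 6.652e-29, 9.109e-31, 1.673e-27, 2.998e8
T_cmb, Jy, Mpc, M_sun = 2.725, 1e-26, 3.086e22, 1.989e30

def s_tot(nu, g_x, T_e_K, M200_kg, f_icm, mu_e, d_A_m):
    """Total SZE flux density of Eq. 2, in W m^-2 Hz^-1 (magnitude only)."""
    pref = 2 * k_B**2 * nu**2 * abs(g_x) * sigma_T * T_cmb / (m_e * c**4 * d_A_m**2)
    return pref * T_e_K * M200_kg * f_icm / (mu_e * m_p)

# assumed fiducial cluster: 2e14 Msun, kT_e ~ 5 keV, d_A ~ 1.3 Gpc
T_e = 5e3 * 1.602e-19 / k_B                   # 5 keV expressed as a temperature in K
S = s_tot(30e9, 1.96, T_e, 2e14 * M_sun, 0.2, 1.17, 1.3e3 * Mpc)
print(f"S_tot ~ {S / (1e-3 * Jy):.1f} mJy")   # roughly a few mJy for these assumed numbers
```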
### 2.1. Mock Interferometric SZE Observations
We determine the detection threshold of an SZE survey by analyzing mock observations of numerical cluster simulations. These mock observations are appropriate for a proposed ten-element interferometer, composed of 2.5 m diameter telescopes outfitted with receivers operating at a central frequency of 30 GHz and a bandwidth of 8 GHz. We assume a correlator efficiency of 0.88 and an aperture efficiency of 0.77. We take the system temperature below the atmosphere to be 21 K; this includes contributions from spillover past the primary. We assume an atmospheric column with opacity $`\tau =0.045`$ at zenith, which is a conservative estimate appropriate for a low altitude, moderately dry site such as the Owens Valley Radio Observatory in summer. The integration time on each cluster is 42 hr, composed of six 7 hr tracks on the cluster. This exposure on each piece of the sky would allow us to survey $`\sim 1`$ deg<sup>2</sup> per month.
Interferometers measure the visibility $`V(u,v)`$, which is the Fourier transform of the sky brightness distribution multiplied by the primary beam of the telescopes. We create mock observations at a redshift $`z`$ by imaging hydrodynamical cluster simulations (described in detail below) at that redshift; we place those clusters at the appropriate angular diameter distance, multiply the resulting SZE image with the primary beam of the 2.5 m dishes (modeled as a Gaussian with FWHM = 14.7′ at 30 GHz), and then Fourier transform to produce visibilities $`V(u,v)`$. We then sample these visibilities at the same locations in $`u`$-$`v`$ space which appear in our simulated 7 hr array track and add the appropriate noise.
### 2.2. Hydrodynamical Cluster Simulations
The effects of ongoing cluster merging on the ICM density and temperature structure can be calculated self-consistently in hydrodynamical simulations. Therefore, these simulations provide a way of producing test clusters whose complexity approaches that of observed clusters. In this work we use an ensemble of 36 hydrodynamical cluster simulations carried out within three different cold dark matter (CDM) dominated cosmologies (1) SCDM ($`\mathrm{\Omega }_m=1`$, $`\sigma _8=0.6`$, $`h_{50}=1`$, $`\mathrm{\Gamma }=0.5`$), (2) OCDM ($`\mathrm{\Omega }_m=0.3`$, $`\sigma _8=1.0`$, $`h_{50}=1.6`$, $`\mathrm{\Gamma }=0.24`$), and (3) LCDM ($`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`\sigma _8=1.0`$, $`h_{50}=1.6`$, $`\mathrm{\Gamma }=0.24`$). Here $`\sigma _8`$ is the power spectrum normalization on $`8h^1`$ Mpc scales; initial conditions are Gaussian random fields consistent with a CDM transfer function with the specified $`\mathrm{\Gamma }`$ (Davis et al. (1985)). These simulations have been used previously for studies of the X-ray emission from galaxy clusters (Mohr and Evrard (1997); Mohr, Mathiesen, and Evrard (1999)).
Within each cosmological model, we use two $`128^3`$ N–body only simulations of cubic regions with scale $`400`$ Mpc to determine sites of cluster formation. Within these initial runs the virial regions of clusters with Coma–like masses of $`10^{15}M_{\odot }`$ contain $`\sim 10^3`$ particles.
Using the N–body results for each model, we choose clusters for additional study, resimulating them at higher resolution with gas dynamics and gravity, as described below. The size of the resimulated region is set by the turnaround radius for the enclosed cluster mass at the present epoch. The large wavelength modes of the initial density field are sampled from the initial conditions of the large scale N–body simulations, and power on smaller scales is sampled from the appropriate CDM power spectrum. The simulation scheme is P3MSPH (Evrard (1988)), the ICM mass fraction is fixed at $`f_{ICM}=0.2`$, and radiative cooling and heat conduction are ignored.
The high resolution, hydrodynamical simulations of individual clusters require two steps: (1) an initial, $`32^3`$, purely N–body simulation to identify which portions of the initial density field lie within the cluster virial region at the present epoch, and (2) a final $`64^3`$, three species, hydrodynamical simulation. In the final simulation, the portion of the initial density field which ends up within the cluster virial region by the present epoch is represented using dark matter and gas particles of equal number (with mass ratio 4:1), while the portions of the initial density field that do not end up within the cluster virial region by the present epoch are represented using a third, collisionless, high mass, species. The high mass species is 8 times more massive than the dark matter particles in the central, high resolution region. This approach allows us to include the tidal effects of the surrounding large scale structure and the gas dynamics of the cluster virial region with simulations that take only a few days of CPU time on a low end UltraSparc.
The scale of the simulated region surrounding each cluster is in the range 50–100 Mpc, and varies as $`M_{halo}^{1/3}`$, where $`M_{halo}`$ is approximately the mass enclosed within the present epoch turnaround radius. Thus, the 36 simulated clusters in our final sample have similar fractional mass resolution; the spatial resolution varies from 125–250 kpc. We will find that the clusters of greatest interest to us are those with mass $`2\times 10^{14}M_{\odot }`$, which have at least several thousand gas particles, even in the lowest resolution simulations (i.e., the highest mass at $`z=0`$). The masses of the final cluster sample vary by an order of magnitude. Following procedures described in Evrard (1990), we create SZE decrement images along three orthogonal lines of sight for each cluster. Each image is 128<sup>2</sup> and spans a distance of 6.7$`h_{50}^{-1}`$ Mpc in the cluster rest frame.
A strength of using numerical cluster simulations is that structural evolution consistent with the cosmological model is accounted for naturally by simply examining higher redshift outputs of the simulations.
### 2.3. Determining the Survey Detection Threshold
We attempt to detect the clusters in the mock observations by fitting the data to the Fourier transforms of spherical $`\beta `$ models (Cavaliere and Fusco-Femiano (1978)), with $`\beta =4/3`$. The results are not sensitive to the value of $`\beta `$, and in fact a simple Gaussian works well. We choose this value of $`\beta `$ because its transform is a simple exponential and because the best fits to the SZE decrement images from the simulations yield a mean value $`\beta =1.1`$. We determine the best fit core radius $`\theta _c`$ and central decrement $`\mathrm{\Delta }T_o`$ by minimizing $`\chi ^2`$; the $`\mathrm{\Delta }\chi ^2`$ difference between the best fit $`\beta `$ model and the null model (no cluster present) provides a measure of the significance of the cluster detection. We set our threshold $`\mathrm{\Delta }\chi ^2>28.7`$ (5$`\sigma `$ for two degrees of freedom).
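As a small consistency check (ours, not the paper's), the adopted threshold does correspond to a two-sided $`5\sigma `$ Gaussian probability when $`\mathrm{\Delta }\chi ^2`$ is referred to a $`\chi ^2`$ distribution with two degrees of freedom:

```python
from scipy.stats import chi2, norm

p_threshold = chi2.sf(28.7, df=2)   # survival probability of Delta-chi^2 = 28.7 with 2 dof
p_5sigma = 2 * norm.sf(5.0)         # two-sided 5-sigma Gaussian tail probability
print(f"{p_threshold:.2e} vs {p_5sigma:.2e}")   # ~5.9e-07 vs ~5.7e-07
```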
Figure 2.2 shows the detection significance for 108 mock observations of the simulated clusters output at redshift $`z=0.5`$. Observations of clusters in all three cosmologies appear on the same plot, and for this example we have imaged all the clusters at the same angular diameter distance and accounted for differences in $`H_o`$. There is a striking correlation between the detection significance and cluster virial mass, indicating that even when complex cluster dynamics are considered, the survey detection threshold can still be effectively described as a mass threshold. The scatter about this correlation is a reflection of the variation in cluster structure due to different merger histories and projection effects. We determine the mass threshold by examining the $`\mathrm{\Delta }\chi ^2`$-$`M_{200}`$ relation and finding the virial mass at which $`\mathrm{\Delta }\chi ^2=28.7`$. We calculate the RMS scatter about the relation at $`\mathrm{\Delta }\chi ^2>28.7`$, and use that scatter as an estimate of the width of the detection threshold in mass; the number of mock observations used to characterize the RMS varies from 21 in our highest redshift bin to over 100 in our low redshift bins. We model the scatter as a Gaussian distribution in $`\chi ^2`$ at each mass. In this way, the fraction of detected clusters at each mass is expressed as an integral over a Gaussian.
Note that the small scatter about the $`\mathrm{\Delta }\chi ^2`$-$`M_{200}`$ relation indicates that mass is indeed the primary factor in determining cluster detectability. Moreover, the fact that all three cosmologies produce consistent relations once differences in $`H_o`$ are accounted for indicates the relative insensitivity of the detection threshold to differences in cluster structure.
By repeating this exercise at several redshifts, we determine the survey mass threshold as a function of redshift, shown in Figure 2.3. Note that the thresholds differ for each cosmology because of differences in the angular diameter distance-redshift relation $`d_A(z)`$. The error bars on the mass limit points indicate the RMS scatter in mass about the $`\mathrm{\Delta }\chi ^2`$-$`M_{200}`$ relations (see Fig. 2.2).
The hydrodynamical simulations cannot be used to extract mass thresholds beyond redshift $`z2.3`$, because no clusters in our ensemble are massive enough to lie above the detection threshold. For higher redshift and to aid in interpolating the mass threshold at arbitrary redshift, we use mock observations of spherical $`\beta `$ models normalized to agree with the simulations (curves in Figure 2.3). We evolve the $`\beta `$-models in redshift using the spherical collapse model (Lahav et al. 1991); this evolution is self-similar, accounting for the difference in cluster structure due to evolution of the mean cosmological density. We use these smooth curves essentially as fitting functions.
The insensitivity to redshift of the minimum detectable mass in an interferometric survey follows from a balancing of several effects. The increasing angular diameter distance $`d_A`$ with redshift decreases the total cluster SZE flux as $`d_A^{-2}`$ (see equation 2), tending to increase our limiting mass. However, this effect is largely offset by cluster evolution. At higher redshifts clusters are denser, and, at constant virial mass $`M_{200}`$, have higher virial temperatures $`T`$. Both of these effects increase the total SZE flux. In addition, at higher redshift a cluster has a smaller apparent size, which enhances the cluster visibility $`V(u,v)`$ (Fourier transform of SZE decrement distribution; see $`\mathrm{\S }`$2.1) at the baselines where we make our measurements. Taken together, these effects largely explain the behavior of the limiting mass with redshift.
## 3. The Cluster Abundance and Its Redshift Evolution
Given a minimum observable mass, one can calculate the number of observed clusters as:
$$N(M>M_{thres})=c\int d\mathrm{\Omega }\int dz\int _{M_{thres}(z)}^{\mathrm{}}dM\frac{dn(M,z)}{dM}\frac{d_A(z)^2(1+z)^2}{H(z)},$$
(3)
where $`d\mathrm{\Omega }`$ is the solid angle and $`n(M,z)`$ is the comoving number density. To calculate the comoving number density of clusters, we used the Press-Schechter prescription (Press and Schechter (1974)).
The comoving number density of bound objects between masses $`M`$ and $`M+dM`$ at redshift $`z`$ is given by
$$\frac{dn(M,z)}{dM}=-\sqrt{\frac{2}{\pi }}\frac{\overline{\rho }}{M}\frac{d\sigma (M,z)}{dM}\frac{\delta _c}{\sigma ^2(M,z)}\mathrm{exp}\left[-\frac{\delta _c^2}{2\sigma ^2(M,z)}\right],$$
(4)
where $`\overline{\rho }`$ is the mean comoving background density, $`\sigma (M,z)`$ is the rms amplitude of the fluctuation spectrum filtered on mass scale $`M`$, and $`\delta _c`$ is the effective linear overdensity of a perturbation which has collapsed and virialized. In principle, $`\delta _c`$ has a modest dependence on the cosmological density parameter; the spherical collapse model predicts a variation of only $`5\%`$ over the range $`0.1\le \mathrm{\Omega }_m\le 1`$. In this work, we assume a constant threshold $`\delta _c=1.69`$ for simplicity (Peebles (1980)).
Following Viana and Liddle (1999), we take the variance in spheres of radius $`R`$ to be
$$\sigma (R,z)=\sigma _8(z)\left(\frac{R}{8h^{-1}\mathrm{Mpc}}\right)^{-\gamma (R)},$$
(5)
where
$$\gamma (R)=(0.3\mathrm{\Gamma }+0.2)\left[2.92+\mathrm{log}_{10}\left(\frac{R}{8h^{-1}\mathrm{Mpc}}\right)\right].$$
(6)
The comoving radius $`R`$ is determined as the radius which contains mass $`M`$ at the current epoch, while $`\mathrm{\Gamma }`$ is the usual CDM shape parameter, taken to be $`0.25`$ for this study (Peacock and Dodds (1994); Dodelson and Gaztanaga (1999)) unless stated otherwise; we show that the results are insensitive to the exact choice of $`\mathrm{\Gamma }`$.
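A compact numerical sketch of Eqs. 4–6 is given below. It takes $`\sigma _8(z)`$ as an input (its evolution is written out in Eqs. 7–8 below), converts mass to a comoving radius through the mean matter density, and differentiates $`\sigma (M)`$ numerically; the example mass range and output are illustrative only.

```python
import numpy as np

delta_c = 1.69
rho_crit0 = 2.775e11                    # critical density today, h^2 Msun Mpc^-3

def sigma_M(M, sigma8_z, Gamma=0.25, Omega_m=0.3, h=0.65):
    """Eqs. 5-6: rms fluctuation on the comoving scale enclosing mass M (Msun)."""
    rho_bar = rho_crit0 * h**2 * Omega_m                    # comoving matter density, Msun Mpc^-3
    R = (3.0 * M / (4.0 * np.pi * rho_bar)) ** (1.0 / 3.0)  # comoving radius, Mpc
    x = R / (8.0 / h)                                       # R in units of 8 h^-1 Mpc
    gamma = (0.3 * Gamma + 0.2) * (2.92 + np.log10(x))
    return sigma8_z * x ** (-gamma)

def dn_dM(M, sigma8_z, Gamma=0.25, Omega_m=0.3, h=0.65):
    """Eq. 4: Press-Schechter comoving mass function, Mpc^-3 Msun^-1."""
    rho_bar = rho_crit0 * h**2 * Omega_m
    sig = sigma_M(M, sigma8_z, Gamma, Omega_m, h)
    dlnM = 1e-3                                             # numerical d(sigma)/dM
    dsig = (sigma_M(M * np.exp(dlnM), sigma8_z, Gamma, Omega_m, h) - sig) / (M * dlnM)
    return (-np.sqrt(2.0 / np.pi) * (rho_bar / M) * dsig
            * delta_c / sig**2 * np.exp(-delta_c**2 / (2.0 * sig**2)))

# illustrative only: local abundance above 2e14 Msun for sigma8(z=0) = 1.0
M = np.logspace(14.3, 16.0, 400)
f = dn_dM(M, 1.0)
n_above = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(M))       # simple trapezoid
print(f"n(>2x10^14 Msun, z=0) ~ {n_above:.1e} Mpc^-3")
```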
We examine the cluster abundance in three cosmological models: oCDM ($`\mathrm{\Omega }_m=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0,h=0.65,\mathrm{\Gamma }=0.25,\sigma _8=1.0`$), $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_m=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0.7,h=0.65,\mathrm{\Gamma }=0.25,\sigma _8=1.0`$), and $`\tau `$CDM ($`\mathrm{\Omega }_m=1,\mathrm{\Omega }_\mathrm{\Lambda }=0,h=0.5,\mathrm{\Gamma }=0.25,\sigma _8=0.56`$), with the last model simply a CDM model with the transfer function modified to agree with observations of galaxy clustering. Note that these models differ slightly from the cosmologies assumed for the simulations.
All models are chosen to have a global ICM fraction $`f_{ICM}=0.2`$, in rough agreement with observed ICM mass fractions of clusters (David, Jones, and Forman (1995); White and Fabian (1995); Grego et al. (2000); Mohr, Mathiesen, and Evrard (1999)). As can be seen from equation 2, the mass limits will depend on the ICM mass fraction and therefore the expected yields will also be sensitive to $`f_{ICM}`$.
We show that the expected survey yield is very sensitive to $`\sigma _8`$, also calculating the expected yields for oCDM with a lower value of $`\sigma _8=0.85`$ (e.g., Viana and Liddle (1999)). The constraints on $`\sigma _8`$ will improve dramatically in the near future, as new X-ray telescopes are expected to provide a much better determination of the local abundance, so we do not expect uncertainties in $`\sigma _8`$ to affect interpretation of survey results.
In a critical density universe ($`\mathrm{\Omega }_m=1,\mathrm{\Omega }_\mathrm{\Lambda }=0`$), $`\sigma _8\propto (1+z)^{-1}`$. Following Carroll, Press, and Turner (1992), we express growth in alternate cosmologies through a growth suppression factor which can be approximated as
$$g(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{5}{2}\frac{\mathrm{\Omega }_m}{\left[\mathrm{\Omega }_m^{4/7}-\mathrm{\Omega }_\mathrm{\Lambda }+\left(1+\mathrm{\Omega }_m/2\right)\left(1+\mathrm{\Omega }_\mathrm{\Lambda }/70\right)\right]}.$$
(7)
In this notation, we can now express the normalization of the power spectrum as
$$\sigma _8(z)=\frac{\sigma _8(0)}{1+z}\frac{g(\mathrm{\Omega }_m(z),\mathrm{\Omega }_\mathrm{\Lambda }(z))}{g(\mathrm{\Omega }_m(0),\mathrm{\Omega }_\mathrm{\Lambda }(0))}.$$
(8)
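The evolution in Eqs. 7–8 is equally simple to evaluate. The sketch below assumes the standard Friedmann scalings for $`\mathrm{\Omega }_m(z)`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }(z)`$ (not written out in the text) and tabulates $`\sigma _8(z)`$ for the three models considered here:

```python
def omega_z(z, Om0=0.3, OL0=0.7):
    """Density parameters at redshift z (assumed standard FRW scalings)."""
    Ok0 = 1.0 - Om0 - OL0
    E2 = Om0 * (1 + z)**3 + Ok0 * (1 + z)**2 + OL0
    return Om0 * (1 + z)**3 / E2, OL0 / E2

def growth_suppression(Om, OL):
    """Eq. 7: Carroll, Press & Turner (1992) approximation."""
    return 2.5 * Om / (Om**(4.0 / 7.0) - OL + (1 + Om / 2.0) * (1 + OL / 70.0))

def sigma8_of_z(z, sigma8_0, Om0, OL0):
    """Eq. 8: linear-theory evolution of the power-spectrum normalization."""
    Om_z, OL_z = omega_z(z, Om0, OL0)
    return (sigma8_0 / (1 + z)) * growth_suppression(Om_z, OL_z) / growth_suppression(Om0, OL0)

for z in (0.0, 0.5, 1.0, 2.0):
    print(z, round(sigma8_of_z(z, 1.00, 0.3, 0.7), 3),   # LambdaCDM
             round(sigma8_of_z(z, 1.00, 0.3, 0.0), 3),   # oCDM
             round(sigma8_of_z(z, 0.56, 1.0, 0.0), 3))   # tauCDM
```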
## 4. Expected Survey Yield
Figure 4 shows both the differential counts as a function of redshift and integrated number of clusters for our three cosmologies. Cluster physics is especially uncertain at high redshift, so we have chosen to cut off all integrals at a redshift of $`z=4`$.
These results are quite exciting, because they indicate that an SZE survey will yield a large cluster catalog extending to high redshift. If the currently favored $`\mathrm{\Lambda }`$CDM model is correct, then we expect to detect roughly 300 clusters in a one-year survey. Figure 4 also shows that the width of the mass limit has little effect on the cluster yield. The yield for each cosmology is shown with two curves; the solid curve corresponds to a step function mass limit where $`\mathrm{\Delta }\chi ^2=28.7`$, and the dashed curve corresponds to a mass limit modeled as a Gaussian probability distribution in $`\mathrm{\Delta }\chi ^2`$ at each mass, with variance set by the scatter around the $`\mathrm{\Delta }\chi ^2`$-$`M_{200}`$ relation seen in the mock observations. In particular, the largest differences arise mainly in the high-redshift region, where there are few clusters above our mass threshold. While encouraging, this threshold uncertainty requires more attention before future SZE surveys can be correctly interpreted.
The differences between cosmologies at $`z\gtrsim 1`$ are very promising. Differences arise because of the different rates of structure growth, the dependence of the mass threshold on the angular diameter distance, and the differences in the volume element of the survey. Open models have both a smaller angular diameter distance and more structure at high redshift, both leading to more observed clusters. The larger volume probed in $`\mathrm{\Lambda }`$CDM is a relatively small effect.
The expected cluster yield depends on cosmology through the growth rate of perturbations (fixed mainly by the density parameter $`\mathrm{\Omega }_m`$), the volume per unit solid angle and redshift, and the shape and normalization of the initial power spectrum. Varying $`\mathrm{\Gamma }`$, the CDM shape parameter, within the $`95\%`$ confidence interval, $`0.2<\mathrm{\Gamma }<0.3`$ (Peacock and Dodds (1994); Dodelson and Gaztanaga (1999)), leads to increases ($`\mathrm{\Gamma }=0.3`$) or decreases ($`\mathrm{\Gamma }=0.2`$) in the integrated counts of $`6\%`$ for $`\mathrm{\Lambda }`$CDM, $`8\%`$ for oCDM and $`17\%`$ for $`\tau `$CDM. The effect is small, indicating that SZE survey counts alone will not strongly constrain the shape of the power spectrum. However, cluster samples constructed through SZE surveys can be used to study the shape of the power spectrum through other means, such as the two-point correlation function.
On the other hand, the normalization of the power spectrum, parameterized by $`\sigma _8`$, is quite important for predicted SZE counts, as demonstrated in Figure 4. For clarity, we have only shown the effect for oCDM. The uncertainty in $`\sigma _8`$ results in a large uncertainty in the cluster yield. Upcoming X-ray observations of nearby clusters are expected to constrain $`\sigma _8`$ to much higher accuracy, and the proposed SZE survey will provide an independent measurement. Thus, while the $`\sigma _8`$ uncertainty is significant for predicting yields, it will not be a serious impediment to interpreting results from an SZE survey. It is important to note that, even for low values of $`\sigma _8`$, we expect to find a significant number of high-redshift clusters.
## 5. Discussion
In a low-density universe, half of the clusters in an SZE survey will have $`z\gtrsim 1`$. While other surveys may yield comparable numbers of clusters at high redshift by surveying a larger fraction of the sky, the SZE catalog will be unique in having similar sensitivity at all redshifts; SZE surveys are effectively limited only by the abundance of clusters above the lowest observable mass. PLANCK is expected to find $`10^4`$ clusters, but the effective limiting mass is fairly high, resulting in relatively few high-redshift clusters (De Luca, Desert, and Puget (1995); da Silva et al. (1999)).
An interferometric survey with a synthesized beam that partially resolves clusters has several advantages. Point sources are easily identified and removed from the data; thus, it is unlikely that point sources will systematically affect the magnitude of the observed cluster decrement. In addition, the more massive clusters in our sample can be imaged with high S/N at the same time as they are detected.
Extensive follow-up will be required to make the best use of an SZE survey. Extensive optical observations will be needed to obtain redshifts for the detected clusters. This may be difficult for the highest-redshift objects, but it is not an insurmountable problem. Follow-up with X-ray telescopes would be helpful as well, but the low-mass, high-redshift objects are expected to be undetectable in the X-ray band, even with $`10^5`$ s XMM exposures. However, redshifts, ICM temperatures and X-ray images for a portion of the sample would enable direct SZE+X-ray distances, such as those being measured with the currently available data (Reese et al. (1999) and references therein). These distances constrain the angular diameter distance relation, which provides an independent measurement of the cosmological matter density; these results would be complementary to those available from consideration of the cluster counts alone.
Deep exposures such as the ones considered here could be contaminated by primary anisotropies in the CMB. A minimum separation of 2.5 m corresponds to multipole $`\mathrm{\ell }=1571`$ at 30 GHz. At this scale, the CMB anisotropy levels could well be larger than $`10\mu K`$, which would be above the noise levels that we have assumed but well below the detection threshold. A shift to higher $`\mathrm{\ell }`$ may be required to avoid these effects, which can be achieved by longer baselines (which could allow larger telescopes) or a higher observing frequency. We have found that a shift to $`\mathrm{\ell }\sim 3000`$ can be easily accommodated with only a small loss in cluster detection efficiency. The most numerous clusters will be the low-mass clusters near the detection threshold, which will be the most compact. Because of this, moving to higher multipoles would not severely affect our sensitivity to these objects, while it would decrease possible CMB contamination significantly. Indeed, larger telescopes may even be slightly more efficient for detecting high-redshift low-mass clusters, as the smaller primary beam is better matched to the compact nature of these objects.
We use an ICM mass fraction of 20% in these simulations, which is inconsistent with the observational constraints, $`f_{ICM}=0.21h_{50}^{-1.5}`$ (Mohr, Mathiesen, and Evrard (1999); Grego et al. (2000)), if $`H_o`$ is significantly higher than $`50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. However, simple virial arguments ($`T\propto M^{2/3}`$) and Eqn. 2 indicate that the limiting mass scales as $`f_{ICM}^{-0.6}`$. For $`H_o=80\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, we expect $`f_{ICM}=0.10`$. While a gas fraction of only 10% would seriously reduce the expected number of clusters (by a factor of 2.5-3), the resulting catalog would still be large and unique.
An uncertainty that we have not discussed is the effect of galaxy formation or preheating on the ICM structure and SZE detectability. While this is being addressed with numerical simulations currently in progress, we do not believe that our results should depend significantly on gas evolution. ICM evolution mainly affects the core regions of clusters (Ponman, Cannon, and Navarro (1999)), on angular scales smaller than those for which the brightness sensitivity of the interferometer is optimized, 2’ to 7’. The total SZE flux from the unresolved core depends only on the temperature and the number of electrons at that temperature (see §2). Even fairly extreme models of gas evolution lead to only small changes in the expected counts for a survey sensitive primarily to total SZE flux (Holder and Carlstrom (1999)) and we expect this to be true for any survey that does not resolve the core regions of clusters.
While the expected cluster yield should be fairly insensitive to ICM evolution, the properties of the cluster sample and their evolution with redshift will shed considerable light on the question of evolution of the ICM. At the same time, such a survey will provide determinations of key cosmological parameters that will be entirely independent of all other determinations.
We are indebted to Erik Reese for his efforts in developing some of the code that was used for this analysis. This is work supported by NASA LTSA grant number NAG5-7986. JEC acknowledges support from the David and Lucile Packard Foundation and a NSF-YI grant. GPH is supported by the DOE at Chicago and Fermilab. JJM is supported through Chandra Fellowship grant PF8-1003, awarded through the Chandra Science Center. The Chandra Science Center is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073. AEE acknowledges support from NSF AST-9803199 and NASA NAG5-8458. |
# A VLA Search for the Geminga Pulsar: A Bayesian Limit on a Scintillating Source
## 1 Introduction
Geminga has long been known as a strong X-ray and gamma-ray continuum source (Fichtel et al. (1975)) and, in 1987, was identified with an optical star (Bignami et al. (1987)). In 1992, Geminga was detected at X-ray (Halpern & Holt (1992)) and gamma-ray (Bertsch et al. (1992)) energies as a 237-ms pulsar with a period derivative of $`10.95\times 10^{-15}`$ s s<sup>-1</sup>, indicating a characteristic age of $`3.4\times 10^5`$ yr and a magnetic field of $`1.6\times 10^{12}`$ Gauss. A proper motion of $`170\pm 6`$ mas yr<sup>-1</sup> and a parallax of $`6.36\pm 1.7`$ mas have been measured for Geminga’s optical counterpart (Caraveo et al. (1996)). From these measurements, a distance of $`157_{-34}^{+59}`$ pc and a transverse velocity of $`140\pm 15`$ km s<sup>-1</sup> have been calculated.
Since its discovery as a gamma-ray source in 1975, many observers have unsuccessfully attempted radio detection of Geminga both as a continuum source (Spelstra & Hermsen (1984)) and as a pulsar (Seiradakis (1992)). Recently, however, three independent groups (Malofeev & Malov (1997); Kuz’min & Losovskii (1997); Shitov & Pugachev (1997)) have reported successful detections of pulsed radio emission from Geminga. All 3 detections were made at the Pushchino Radio Astronomy Observatory at a radio frequency of 102 MHz, a lower frequency than those used for previous searches. All three groups calculate a dispersion measure (DM) of roughly 3 pc cm<sup>-3</sup>. The measured fluxes vary greatly, perhaps due to interstellar scintillations, and range from 5-500 mJy. Reported pulse widths range from 10 to 120 ms. Recently, Shitov & Malofeev (1997) reported a 41 MHz detection of Geminga with mean flux density 300 mJy. Comparing this result with Malofeev and Malov’s quoted 102 MHz mean flux of 60 mJy, we may estimate a spectral index of $`\alpha \approx 1.8`$, where $`F(\nu )\propto \nu ^{-\alpha }`$. Comparing Malofeev and Malov’s quoted mean flux of 60 mJy with the previous 1 mJy upper limit at 1.4 GHz (Seiradakis (1992)), we calculate a lower bound on Geminga’s spectral index of $`\alpha \gtrsim 1.6`$.
Since Geminga is the only known gamma-ray pulsar which is not also a strong radio source, confirming its existence as a radio pulsar and determining the reasons for its radio weakness are essential to understanding the relationship between pulsar radio and gamma-ray emission. Geminga is also important as it may be a prototype for a class of radio-weak or radio-quiet high-energy pulsars which could account for some of the 80 EGRET sources not yet identified with known pulsars or active galactic nuclei.
We have therefore revisited the search for Geminga as a radio pulsar with a long, low frequency, fast-sampled VLA observation. In this paper, we present our results, introducing Bayesian methods for determining the pulsed flux in a signal of unknown width and phase and for estimating the intrinsic flux density of a source, accounting for interstellar scintillations.
## 2 Data
We observed Geminga (PSR J0633+1746) on 2 February 1998 with the D-configured VLA in phased array mode. The data spanned a bandwidth of 25 MHz, centered on 317.5 MHz, with 14 (not necessarily consecutive) 1-MHz channels (situated to avoid known interference frequencies). Full polarization data were recorded with the High Time Resolution Processor (Moffett (1997)) using a sampling frequency of 1152 Hz. We obtained 7 one-hour observations (scans) of Geminga which spanned a total time of 9 hours. Two other known pulsars (B0329+54 and B0950+08) were observed for 10 minutes each with the same setup as that used for Geminga. Each scan was started on a 10 second tick to ensure accurate pulse phase referencing between scans. This 10-second tick was tied to the VLA’s hydrogen maser, which is compared to Universal Time through the GPS. The array was phased, with rms phase errors less than 1, between consecutive scans. We observed a strong VLA calibrator source (B0813+482) with known flux density ($`S`$ = 45 Jy) to calculate the gain in each of the 56 channels (14 frequencies and 4 polarizations). As we expect only a 7-mJy uncertainty in our flux density calibration due to radiometer noise, our main source of uncertainty is the intrinsic source flux variation. By comparing the VLA flux measurement with those from the Westerbork and Texas surveys (Rengelink et al. (1997); Douglas et al. (1996)), we estimate a maximum intrinsic flux density variation of 10%. We therefore expect errors in our final reported flux densities to be less than 10%.
## 3 Analysis
For each of the 7 Geminga data sets, we summed polarizations to calculate the total intensity and dedispersed all 14 frequency channels using a DM of 3.2 pc cm<sup>-3</sup>, the midpoint of the DM range predicted by the TC (Taylor & Cordes (1993)) model for Galactic electron density given Geminga’s measured distance and direction. Our estimated DM is consistent with those reported for the 102 MHz Geminga detections. We note that even a 1 pc cm<sup>-3</sup> error in our assumed DM would produce only 7 ms of pulse smearing, negligible for Geminga’s period of 237 ms.
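This smearing estimate follows from the usual cold-plasma dispersion delay; a quick check of the quoted number (our arithmetic, using the standard dispersion constant and assumed band edges of 305 and 330 MHz) is:

```python
# dispersion delay difference across the band for a DM error of 1 pc cm^-3
k_DM = 4.149e3                      # s MHz^2 (pc cm^-3)^-1, standard dispersion constant
dDM = 1.0                           # assumed DM error, pc cm^-3
f_lo, f_hi = 305.0, 330.0           # MHz, assumed edges of the 317.5 +/- 12.5 MHz band
smear = k_DM * dDM * (f_lo**-2 - f_hi**-2)
print(f"{smear * 1e3:.1f} ms")      # ~6.5 ms, consistent with the ~7 ms quoted above
```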
Each one-hour dedispersed time series was folded using the gamma-ray ephemeris of Mattox et al. (1998) (P = $`0.23709574610`$ s, $`\dot{\mathrm{P}}=10.974012\times 10^{-15}`$ s s<sup>-1</sup>, epoch = 2446600). We used a single period to fold each 1 hour data set, but updated the period between scans, using the gamma-ray ephemeris and TEMPO (Taylor & Weisberg (1989)). We note that using a single period for each scan results in only 0.2 ms of pulse smearing. Table 1 lists the modified Julian date of the midpoint of each scan, the corresponding period, and the total length of the scan. The first 6 scans are 1 hr in length, while the last is 50 minutes, yielding a total integration time of 6 hr 50 min.
Figure 1 shows the folded profiles for scans 1 through 7, in addition to the composite profile formed by calculating the phase at the start of each scan, and shifting and summing the profiles accordingly. We have verified that our phase-shifting algorithm is correct by applying it to the observatory generated 19.2 Hz waveguide switch signal. There is structure in some of the profiles of Figure 1. However, no consistent pulse shape and/or phase is apparent. Furthermore, coherently summing all profiles dramatically decreases the signal-to-noise level of any features. We have also calculated Fourier transforms of all 7 dedispersed time series. We have not detected any harmonics of Geminga’s 237-ms period.
Figure 2 shows the folded pulse profiles for our test pulsars, B0329+54 (DM = 26.8 pc cm<sup>-3</sup>) and B0950+08 (DM = 3.0 pc cm<sup>-3</sup>). These data were processed with identical dedispersion and folding routines to the Geminga data. We have subtracted the off-pulse mean to calculate $`F`$, the phase-averaged pulsed flux density. For PSR B0329+54, $`F`$ = 1071 mJy, and for PSR B0950+08, $`F`$ = 8.9 mJy. We note that we also detected both pulsars with high signal-to-noise in Fourier transforms of their time series.
Because Geminga may be too weak to be detected through its time-averaged flux, we searched for Crab-like ‘giant’ pulses, or individual pulses with amplitudes much greater than the mean pulse amplitude (Lundgren et al. (1995)). We search for these aperiodic, dispersed pulses by dedispersing each data set over a range of trial DMs and, for each dedispersed time series, recording those pulses with amplitudes above some signal-to-noise threshold. We enhanced our sensitivity to broadened pulses by repeating this thresholding with different levels of time series smoothing. We found no evidence for isolated pulses from Geminga. In Figure 3, we show the results of this analysis for Geminga, PSR B0950+08, and PSR B0329+54, from which we detect a large number of individual pulses.
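The search amounts to matched filtering with boxcars of several widths applied to each trial-DM time series. A schematic version is sketched below; the trial DMs, smoothing widths, threshold, and the toy `dedisperse` callback are placeholders, not the actual search parameters.

```python
import numpy as np

def boxcar_smooth(x, width):
    """Running mean of `width` samples (simple boxcar matched filter)."""
    if width <= 1:
        return x
    return np.convolve(x, np.ones(width) / width, mode="same")

def single_pulse_search(dedisperse, trial_dms, widths=(1, 2, 4, 8, 16), threshold=5.0):
    """Return (DM, sample, width, S/N) for every event above `threshold` sigma.

    `dedisperse(dm)` is assumed to return the dedispersed time series for that trial DM.
    """
    events = []
    for dm in trial_dms:
        series = dedisperse(dm)
        for w in widths:
            s = boxcar_smooth(series, w)
            mad = 1.4826 * np.median(np.abs(s - np.median(s)))   # robust noise estimate
            snr = (s - np.median(s)) / mad
            for i in np.flatnonzero(snr > threshold):
                events.append((dm, int(i), w, float(snr[i])))
    return events

# toy demonstration: pure noise plus one injected broadened pulse
rng = np.random.default_rng(2)
fake = rng.normal(size=4096); fake[500:504] += 6.0
print(len(single_pulse_search(lambda dm: fake, trial_dms=[3.2], threshold=5.0)))
```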
### 3.1 Bayesian Pulsed Flux Estimation
In Appendix A, we derive a Bayesian procedure for estimating the pulsed flux density given a folded profile with a pulse of unknown amplitude, width, and phase. Assuming the additive noise is Gaussian, we are able to calculate a probability density function (PDF) for the phase-averaged pulsed flux density independent of pulse width, pulse amplitude, pulse phase, off-pulse mean, and off-pulse rms. Because no prior assumptions are necessary to calculate an upper limit, this method is an improvement over standard methods used to test for pulse shape uniformity and to calculate upper limits. The $`\chi ^2`$ statistic, the method most commonly used, returns only the deviation of a binned pulse profile from a uniform distribution. More sophisticated tests, such as the Z<sup>2</sup> and H tests (DeJager (1994)), developed to analyze sparse X-ray and gamma-ray profiles, are less dependent on binning and more sensitive to a wide variety of pulse shapes than the $`\chi ^2`$ statistic, but still yield only the probability that a distribution differs from uniformity. To calculate an actual upper limit, one must still assume a pulse width and phase. Furthermore, these methods do not provide a simple mechanism for calculating a PDF for the pulsed flux density. In following papers, we will use our method to calculate more accurate X-ray and gamma-ray upper limits for known radio pulsars and to search for new high-energy pulsars. The method can also be adapted to a wide range of pulse shapes, in addition to the simple square pulse presented in Appendix A.
We apply the method of Appendix A to calculate PDFs for the pulsed flux density for the folded Geminga pulse profiles shown in Figure 1. The resultant PDFs are also shown in Figure 1 and are discussed further in Section 3.2.2.
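Appendix A is not reproduced here, but its flavor can be conveyed by a brute-force numerical marginalization over the phase, width, and amplitude of a square pulse. The sketch below makes several simplifying assumptions (flat priors, a known noise level, and a baseline fixed at the profile median) and is only a schematic stand-in for the analytic treatment:

```python
import numpy as np

def pulsed_flux_pdf(profile, sigma, n_amp=60):
    """Schematic posterior for the phase-averaged pulsed flux of a square pulse
    with unknown width, phase and amplitude. Returns (flux grid, normalized PDF)."""
    nbin = len(profile)
    d = profile - np.median(profile)              # crude baseline subtraction
    max_amp = 5.0 * (d.max() - d.min() + sigma)
    amps = np.linspace(0.0, max_amp, n_amp)
    flux = np.linspace(0.0, max_amp, n_amp)       # phase-averaged flux axis
    pdf = np.zeros(n_amp)
    bins = np.arange(nbin)
    for width in range(1, nbin):                  # pulse width in bins
        for phase in range(nbin):                 # pulse starting bin
            on = (bins - phase) % nbin < width
            for a in amps:
                model = np.where(on, a, 0.0)
                loglike = -0.5 * np.sum((d - model) ** 2) / sigma**2
                F = a * width / nbin              # phase-averaged flux of this model
                pdf[min(np.searchsorted(flux, F), n_amp - 1)] += np.exp(loglike)
    pdf /= pdf.sum() * (flux[1] - flux[0])        # normalize as a density
    return flux, pdf

# toy example: a pure-noise folded profile of 32 bins with unit rms
rng = np.random.default_rng(3)
F, p = pulsed_flux_pdf(rng.normal(size=32), sigma=1.0)
cdf = np.cumsum(p) * (F[1] - F[0])
print("95% upper limit (toy units):", F[np.searchsorted(cdf, 0.95)])
```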
### 3.2 Interstellar Scintillations of the Geminga Pulsar
Since Geminga is a nearby (d $``$ 150 pc) pulsar, we expect its flux density to be strongly modulated by diffractive interstellar scintillations (DISS). We therefore present a method which allows us to calculate a PDF for a source’s flux density given several measurements of the scintillated flux density. We restrict our analysis to the strong scattering regime (a good assumption at low radio frequencies), where $`\varphi _{\mathrm{rms}}`$, the rms phase perturbation due to electron-density variations, is much greater than 1. In this case, the amplitude fluctuations are ‘saturated’ because the electric field has been sufficiently randomized to have complex Gaussian statistics.
We describe DISS through a characteristic timescale $`\mathrm{\Delta }t_\mathrm{d}`$ and characteristic bandwidth $`\mathrm{\Delta }\nu _\mathrm{d}`$. These quantities depend upon the scattering measure (SM), the integrated turbulence strength (assuming a power spectrum for electron density irregularities) along the line of sight to the pulsar. In the strong scattering regime, and assuming a uniform distribution of scattering material and a Kolmogorov power spectrum, $`\mathrm{\Delta }t_\mathrm{d}`$ and $`\mathrm{\Delta }\nu _\mathrm{d}`$ scale with SM, observing frequency, distance, and pulsar velocity as (Cordes & Lazio (1991); Cordes & Rickett (1998))
$`\mathrm{\Delta }t_\mathrm{d}=10^{2.52}\nu ^{1.2}\mathrm{SM}^{-0.6}V^{-1}\mathrm{s},`$ (1)
$`\mathrm{\Delta }\nu _\mathrm{d}=10^{-0.775}\nu ^{4.4}\mathrm{SM}^{-1.2}D^{-1}\mathrm{kHz}`$ (2)
where $`\nu `$ is the radio frequency in GHz, SM is the scattering measure in kpc m<sup>-20/3</sup>, $`V`$ is the pulsar’s transverse velocity in km s<sup>-1</sup>, and D is its distance in kpc. The scaling is such that a distant source with high velocity observed at a low frequency has a shorter characteristic bandwidth and timescale than a nearby, low velocity source observed at high frequency.
Once we have characterized the DISS properties of a source (through a measurement or an estimation using Eqs. 1 and 2), we calculate the posterior PDF of the intrinsic source flux density given several measurements of the scintillated flux density using the method described in Appendix B. The source’s intrinsic flux density, $`S`$, is modulated by interstellar scintillations by a factor $`g`$, so that the measured flux density, $`F`$, is
$`F=gS+N,`$ (3)
where $`N`$, the additive noise, may include both radiometer noise and radio frequency interference. We note that this additive noise model holds only for the case where the time-bandwidth ($`TB`$) product of the measurements (i.e., integration time $`T`$ times bandwidth $`B`$) is large, $`TB\gg 1`$. In the strong scattering regime, the scintillation time-bandwidth product, $`\mathrm{\Delta }t_\mathrm{d}\mathrm{\Delta }\nu _\mathrm{d}`$, which is the characteristic size of a “scintle” in the $`\nu `$-$`t`$ plane, may or may not be much larger than $`TB`$ for the measurement. When the time-bandwidth product of the measurement is smaller than that of one DISS ‘scintle,’ the PDF for $`g`$ is a one-sided exponential:
$`f_g(g)=e^{-g},g\ge 0`$ (4)
with CDF
$`F_g=P\{<g\}=1-e^{-g}.`$ (5)
When $`TB\gg \mathrm{\Delta }t_\mathrm{d}\mathrm{\Delta }\nu _\mathrm{d}`$, the measurements average over many scintles, increasing the number of degrees of freedom from 2 (for unquenched DISS) to 2$`N_{\mathrm{ISS}}`$, where
$`N_{\mathrm{ISS}}\approx \left(1+0.2{\displaystyle \frac{B}{\mathrm{\Delta }\nu _\mathrm{d}}}\right)\left(1+0.2{\displaystyle \frac{T}{\mathrm{\Delta }t_\mathrm{d}}}\right).`$ (6)
Then, $`g`$ is a $`\chi ^2`$ random variable with $`2N_{\mathrm{ISS}}`$ degrees of freedom whose PDF is
$`f_g(g,N_{\mathrm{ISS}})={\displaystyle \frac{(gN_{\mathrm{ISS}})^{N_{\mathrm{ISS}}}}{g\mathrm{\Gamma }\left(N_{\mathrm{ISS}}\right)}}e^{-gN_{\mathrm{ISS}}}U(g),`$ (7)
where $`U(g)`$ is the Heaviside function and $`\mathrm{\Gamma }`$ is the gamma function. For pulsars at large distances or those observed at low frequencies, the quantity $`N_{\mathrm{ISS}}`$ will approach infinity and $`f_g`$ will tend toward a delta function, $`\delta (g-1)`$. Figure 4 illustrates the dependence of $`f_g`$ on $`N_{\mathrm{ISS}}`$.
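Equations 6 and 7 are easy to evaluate directly. The sketch below combines the 25 MHz bandwidth and an assumed single 1 hr scan with the Geminga scintillation estimates quoted below ($`\mathrm{\Delta }\nu _\mathrm{d}\approx 1.5`$ MHz, $`\mathrm{\Delta }t_\mathrm{d}\approx 275`$ s), and reproduces $`N_{\mathrm{ISS}}\approx 16`$:

```python
import numpy as np
from scipy.special import gammaln

def n_iss(B, T, dnu_d, dt_d):
    """Eq. 6: number of independent scintles averaged over bandwidth B and time T."""
    return (1.0 + 0.2 * B / dnu_d) * (1.0 + 0.2 * T / dt_d)

def f_g(g, N):
    """Eq. 7: PDF of the DISS gain g when N = N_ISS scintles are averaged."""
    g = np.asarray(g, dtype=float)
    out = np.zeros_like(g)
    pos = g > 0
    out[pos] = np.exp(N * np.log(g[pos] * N) - np.log(g[pos]) - gammaln(N) - g[pos] * N)
    return out

N = n_iss(B=25e6, T=3600.0, dnu_d=1.5e6, dt_d=275.0)      # Geminga estimates from Sec. 3.2.2
print(f"N_ISS ~ {N:.1f}")                                 # ~16
g = np.linspace(1e-3, 4.0, 2000)
print("check: mean gain ~", np.sum(g * f_g(g, N)) * (g[1] - g[0]))   # ~1 by construction
```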
#### 3.2.1 Results for Known Pulsars
For the known pulsars B0329+54 and B0950+08, $`\mathrm{\Delta }t_\mathrm{d}`$ and $`\mathrm{\Delta }\nu _\mathrm{d}`$ have been measured (Stinebring et al. (1996); Phillips & Clegg (1992)). Applying the known frequency scaling of these quantities (see Eqs. 1 and 2), we estimate $`\mathrm{\Delta }t_\mathrm{d}=142`$ s, $`\mathrm{\Delta }\nu _\mathrm{d}=25`$ kHz for B0329+54 and $`\mathrm{\Delta }t_\mathrm{d}=2757`$ s, $`\mathrm{\Delta }\nu _\mathrm{d}=102`$ MHz for B0950+08 at 317 MHz. Given these values and our observing bandwidth ($`B=25`$ MHz) and time ($`T=600`$ s), we may calculate $`N_{\mathrm{ISS}}`$ from Eq. 6. For B0329+54, $`N_{\mathrm{ISS}}=371`$, $`TB\gg \mathrm{\Delta }\nu _\mathrm{d}\mathrm{\Delta }t_\mathrm{d}`$, and $`f_g`$ will approach a delta function. B0950+08 is much closer to Earth, so its time-bandwidth product $`\mathrm{\Delta }\nu _\mathrm{d}\mathrm{\Delta }t_\mathrm{d}>TB`$ and $`N_{\mathrm{ISS}}=1.095`$. Although this pulsar is likely in the transition regime from weak to strong scattering, $`f_g`$ will approximate an exponential and will have a similarly long tail.
Using the method of Appendix B, we calculate the PDFs for the intrinsic source fluxes of these pulsars given our measurements. In Figure 5, we plot $`f_S(S)`$, where $`S`$ is the intrinsic phase-averaged flux density. For B0329+54, the PDF of the intrinsic source flux density is strongly peaked at the measured flux density, and follows a roughly Gaussian distribution away from the peak. For B0950+08, the PDF for the intrinsic source flux density again peaks at the measured flux, but is characterized by a broad exponential tail. We can calculate confidence intervals for the flux densities of these pulsars from their PDFs. For B0329+54, the 95% confidence interval is 968 mJy $`<S<`$ 1190 mJy, while for B0950+08, the 95% confidence interval is 5.4 mJy $`<S<`$ 880.1 mJy. We note that these pulsars’ predicted 317 MHz flux densities, extrapolated from the Princeton Pulsar Catalog (Taylor et al. (1993)) 400 MHz fluxes and the measured spectral indices (Lorimer et al. (1995)), are 1890 mJy for PSR B0329+54 and 590 mJy for PSR B0950+08. While the predicted flux for PSR B0329+54 lies outside of our confidence interval, this discrepancy is consistent with the $`\sim 40\%`$ modulations due to refractive interstellar scintillation that have been measured for this pulsar (Stinebring et al. (1996)). Refractive ISS is probably also important in modulating the flux of PSR B0950+08, although its predicted flux does fall within our confidence interval.
#### 3.2.2 Results for Geminga
Because the scintillation bandwidth and timescale have not been measured for Geminga, we use Eqs. 1 and 2 to predict the characteristic timescale and bandwidth given our observing frequency, Geminga’s measured distance, and the SM predicted from the TC model. For Geminga, we estimate SM $`\approx 10^{-4.4}`$ kpc m<sup>-20/3</sup> and, from Eq. 2, $`\mathrm{\Delta }\nu _\mathrm{d}\approx 1.5`$ MHz. Given Geminga’s transverse velocity of $`V_{\perp }\approx 140`$ km s<sup>-1</sup>, Eq. 1 yields $`\mathrm{\Delta }t_\mathrm{d}\approx 275`$ s. Therefore, $`TB\gg \mathrm{\Delta }\nu _\mathrm{d}\mathrm{\Delta }t_\mathrm{d}`$, $`N_{\mathrm{ISS}}\approx 16`$, and we expect statistical independence between measurements. Applying Eq. B9 to the distributions for $`f_F(F)`$ (see Figure 1), we calculate $`f_S(S)`$ for each of the 7 observations individually. These PDFs are plotted in Figure 6. Using Eqs. B8 and B9, we may use all 7 Geminga observations to calculate a composite PDF, shown in Figure 7. This PDF does peak at a non-zero value of $`S`$. However, as shown in Figure 10 (see Appendix A), the PDFs of simulated noise often peak at non-zero values. We note that the rms noise levels in the Geminga pulse profiles are roughly equivalent to those of the simulated profiles in this figure. In addition, the marginalized PDFs for pulse width and phase for each of the 7 Geminga scans do not peak at a consistent value of these quantities, as would be expected for a real pulsar. We therefore treat our result as an upper limit and calculate a 95% confidence upper limit to Geminga’s pulsed flux of 3.0 mJy.
The results so far have assumed that the pulse width is completely unknown. We have also calculated upper limits assuming reasonable values for Geminga’s pulse width. If we assume Geminga’s radio pulse has the same width (roughly 180<sup>o</sup>) as its gamma-ray pulse, we calculate a 95% confidence upper limit to the pulsed flux density of 4.0 mJy. However, except in the case of the Crab, the radio pulse widths of the known EGRET pulsars are much narrower than the gamma-ray pulse widths. For this reason, we also calculate an upper limit for a pulse width of 25<sup>o</sup> (calculated assuming the $`w\propto P^{-1/2}`$ scaling of pulse width $`w`$ with period $`P`$ given by Biggs (1990)) of 1.6 mJy.
We note that the distribution of material along the line of sight will most likely not be uniform, and $`\mathrm{\Delta }t_\mathrm{d}`$ and $`\mathrm{\Delta }\nu _\mathrm{d}`$ may be somewhat different from those predicted by Eqs. 1 and 2. We therefore explore what happens in the limiting case where the scintillation timescale is longer than the 9-hr total timescale of our observations. For this case, we use Eq. B7 to calculate the PDF for intrinsic source flux density given all 7 observations, again integrating over the PDFs of Figure 1. The resultant PDF for this limiting case, also plotted in Figure 7, is much broader than that derived assuming statistically independent scintillations. The 95% confidence upper limit on Geminga’s pulsed flux for this case is 13.9 mJy. However, for the scintillation timescale to be as large as 9 hrs, the scattering measure would have to be smaller by a factor of 10<sup>4</sup> than that estimated using the TC model. Alternatively, a timescale this long could also be explained by a thin screen located only 0.021 pc from the observer (Cordes & Rickett (1998)). Obviously, both of these scenarios are very improbable, and the true 95% upper limit is likely very close to 3 mJy.
## 4 Discussion and Conclusions
Comparing our 3 mJy upper limit to Geminga’s flux density at 317 MHz with the mean flux density of 60 mJy reported by Malofeev & Malov at 102 MHz, we calculate a lower limit on the spectral index of Geminga of $`\alpha \ge 2.7`$ ($`F(\nu )\propto \nu ^{-\alpha }`$), comparable to the spectral index of the Crab. If the spectral index of Geminga is indeed this high, it would be extreme, as the Crab and Geminga pulsars have quite different ages, spin periods, and spin-down rates.
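The spectral-index limit above follows from simple two-point arithmetic. The short sketch below reproduces it using only the flux values and frequencies stated in this section; the $`F(\nu )\propto \nu ^{-\alpha }`$ convention is the one assumed here.

```python
import math

# Flux values quoted in the text.
f_102 = 60.0   # mJy, mean flux density at 102 MHz (Malofeev & Malov)
f_317 = 3.0    # mJy, 95% confidence upper limit at 317 MHz (this work)

# With F(nu) proportional to nu^(-alpha), the two frequencies give a lower
# limit on the spectral index alpha.
alpha_min = math.log(f_102 / f_317) / math.log(317.0 / 102.0)
print(f"alpha >= {alpha_min:.2f}")   # ~2.6, consistent with the ~2.7 quoted above
```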
In Figure 8, we plot our upper limit along with previously measured upper limits and detections. Unfortunately, while there have been many radio searches for Geminga, there are few published upper limits. This is partly due to Geminga’s migrating gamma-ray positional error box. For example, Boriakoff et al. (1984) searched for radio pulsars in Geminga’s error box, determined from COS B data, but did not cover the current, well-determined position. Of special interest is the recent non-detection of Geminga at both 35 and 327 MHz by Ramachandran et al. (1998). While their quoted 327 MHz upper limit of 0.3 mJy is much lower than ours, the rms sensitivities of our 7-hr VLA observation and their 6-hr Ooty observation are very similar. For consistency, we have plotted the upper limit obtained through applying our method to their data, taking into account scintillation and the unknown width and phase of Geminga’s pulse.
The reason for Geminga’s radio weakness is still debatable. One explanation is that some gamma-ray pulsars simply do not emit at radio energies. Halpern & Ruderman (1993) suggest that Geminga’s radio emission is quenched by the copious pair production in the inner magnetosphere. However, the transient dips in Geminga’s soft X-ray profile (Halpern & Wang (1997)) suggest that the processes responsible for supplying $`e^\pm `$ to the inner magnetosphere may be variable. Because the radio silence will be occasionally broken as this $`e^\pm `$ quenching plasma clears away, we may expect Geminga to be a transient radio emitter. If Geminga continues to be the only high-energy pulsar found to exhibit this transient phenomenon, studying its X-ray properties may be useful for determining if and why radio emission is suppressed.
A more likely explanation for Geminga’s radio weakness is simply that there is some misalignment between the gamma-ray and radio pulsar beams. Geminga’s pulse shape in gamma-rays (like that of the Crab and Vela pulsars) is broad and double-peaked, suggesting an origin in the “outer gaps” of the pulsar magnetosphere near the light cylinder. Because radio emission is expected to be associated with the open field line region centered on the magnetic poles, it is possible that our line-of-sight intersects the gamma-ray beam only. Romani & Yadigaroglu (1995), using the outer gap model for pulsar gamma-ray emission, have modeled the orientation and size of the radio and gamma-ray pulsar beams given radio and gamma-ray data on known pulsars. They find that, because the gamma-ray beam is much wider than the radio beam, 45% of young pulsars will be detected only at high energies, while only 19% of young pulsars will be detected at both radio and gamma-ray energies. Scaling from the 5 EGRET detections of radio pulsars, Romani & Yadigaroglu expect 12 nonradio pulsars to be visible in gamma-rays at flux levels comparable to the radio selected objects. This implies that most of the $`\sim `$ 20 unidentified Galactic EGRET sources are young radio-quiet or radio-weak pulsars like Geminga. This is consistent with other studies which find that the properties, such as flux variability (McLaughlin et al. (1996)) and spectral index (Merck et al. (1996)), of many unidentified sources are similar to those of the known EGRET pulsars. It has also been shown (Yadigaroglu & Romani (1997)) that many of the unidentified sources are associated with regions of massive star formation and death, which are expected to be breeding grounds for pulsars.
We thank J. Mattox and K. Xilouris for helpful discussions. M. A. McLaughlin acknowledges support from an NSF fellowship. The work was also supported by the National Astronomy and Ionosphere Center, which is operated by Cornell University under cooperative agreement with the National Science Foundation (NSF). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research was partially funded by NSF grants AST93-15285 and AST95-28394.
## Appendix A Bayesian Pulsed Flux Estimation
Pulsar flux densities are usually reported as phase-averaged quantities. Therefore, to calculate a pulsed flux upper limit, one must assume a pulse width, pulse phase, off-pulse mean, and off-pulse rms. To avoid trial and error estimates, we have developed a method to calculate a PDF for the pulsed flux density independent of these quantities.
We assume an $`M`$-bin folded pulse profile, where the amplitude of each bin is described by $`d_i`$ ($`i=1,\mathrm{},M`$). Our model for $`\widehat{d_i}`$, the expected amplitude within each bin, is parametrized by
w - width of square pulse
m - first bin of pulse
F - phase-averaged pulsed flux
$`\mu `$ - off-pulse mean
$`\sigma `$ - off-pulse rms
Given this model, we may express $`\widehat{d_i}`$ as
$`\widehat{d_i}=\begin{cases}FM/w+N_i& i=m,\mathrm{},m+w-1\\ N_i& \text{otherwise}\end{cases}`$ (A3)
where $`N_i`$ is the amplitude of the noise in bin $`i`$. The likelihood function of the data is given by the joint probability density function (PDF) of the noise samples. Assuming Gaussian noise with mean $`\mu `$ and rms $`\sigma `$, and statistical independence of the bins, we may write this as
$`\mathcal{L}=(2\pi \sigma ^2)^{-M/2}\mathrm{exp}\left[-\frac{1}{2\sigma ^2}\sum _{i=1}^{M}(N_i-\mu )^2\right].`$ (A4)
To find the best model parameters, we use Bayes’ Theorem to write the posterior probability $`p(F,w,m,\mu ,\sigma |𝐝,𝐈)`$ of a model parametrized by $`F,w,m,\mu ,`$ & $`\sigma `$ given the data d and any prior information I as
$`p(F,w,m,\mu ,\sigma |𝐝,𝐈)={\displaystyle \frac{p(𝐝|F,w,m,\mu ,\sigma ,𝐈)p(F,w,m,\mu ,\sigma |𝐈)}{p(𝐝|𝐈)}}.`$ (A5)
The factor $`p(𝐝|F,m,w,\mu ,\sigma ,𝐈)`$ is simply the likelihood function given in Eq. A4. Because we expect all parameter values to be equally likely a priori, we assume flat priors for $`F,w,m,`$ & $`\mu `$ (i.e. $`p(F|𝐈)`$ = constant) and a prior $`p(\sigma |𝐈)\propto 1/\sigma `$ (i.e. $`p(\mathrm{log}(\sigma )|𝐈)`$ = constant). We choose a prior that is uniform in $`\mathrm{log}\sigma `$ to express our ignorance of the scale of the noise variation. Then, we can express the posterior probability as simply
$`p(F,m,w,\mu ,\sigma |𝐝,𝐈)\propto \frac{\mathcal{L}}{\sigma }.`$ (A6)
Since we would like to calculate the PDF of the phase-averaged flux $`F`$ independent of the other parameters, we must marginalize the posterior probability over the “nuisance parameters” $`w,m,\mu ,`$ & $`\sigma `$. We write the PDF for $`F`$ as
$`f_F(F)\propto \int \frac{dw}{w}\int dm\int d\mu \int d\sigma \,p(F,w,m,\mu ,\sigma |𝐝,𝐈).`$ (A7)
Substituting Eq. A4 into Eq. A7, we find that the integrals over $`\mu `$ and $`\sigma `$ may be done analytically. After some algebra, we find
$`f_F(F)\propto \int \frac{dw}{w}\int dm\left(D_2+\frac{F^2M^2}{w}-\frac{2FMD_p}{w}-\frac{(FM-D_1)^2}{M}\right)^{(1-M)/2},`$ (A8)
where constant factors have been eliminated and
$`D_1=\sum _{i=1}^{M}d_i,\quad D_2=\sum _{i=1}^{M}d_i^2,\quad \&\quad D_p=\sum _{i=m}^{m+w-1}d_i.`$ (A9)
In the case that the width and/or phase of the pulse is known, the marginalization over $`m`$ and/or $`w`$ in Eq. A8 can be removed to further constrain the PDF for $`F`$.
To test our method, we have created many simulated profiles consisting of a pulse superimposed on Gaussian noise, and have confirmed that the PDF of Eq. A8 does maximize at the correct value of $`F`$. Some example simulation results for square wave pulses are shown in Figure 9. Figure 10 shows the results of applying our algorithm to profiles consisting only of Gaussian random noise.
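To make the marginalization of Eq. A8 concrete, the sketch below applies it to a simulated square-pulse profile of the kind just described. The bin count, pulse parameters, noise level, and the cyclic treatment of the pulse phase are illustrative assumptions, not values from the original analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated M-bin profile: square pulse of width w_true bins starting at m_true,
# on top of Gaussian noise (mean mu, rms sigma).  All numbers are illustrative.
M, w_true, m_true = 64, 8, 20
F_true, mu, sigma = 0.5, 0.0, 1.0            # phase-averaged pulsed flux F_true
d = rng.normal(mu, sigma, M)
d[m_true:m_true + w_true] += F_true * M / w_true

D1, D2 = d.sum(), (d ** 2).sum()             # D_1 and D_2 of Eq. A9

def log_fF(F, d):
    """Unnormalized log of f_F(F), Eq. A8, with the nuisance parameters
    w and m marginalized by direct summation (pulse assumed cyclic)."""
    M = len(d)
    terms = []
    for w in range(1, M):                    # pulse width in bins
        for m in range(M):                   # first bin of the pulse
            idx = np.arange(m, m + w) % M
            Dp = d[idx].sum()                # D_p of Eq. A9
            arg = D2 + F**2 * M**2 / w - 2 * F * M * Dp / w - (F * M - D1)**2 / M
            if arg <= 0:
                continue
            terms.append(0.5 * (1 - M) * np.log(arg) - np.log(w))
    terms = np.array(terms)
    tmax = terms.max()
    return tmax + np.log(np.exp(terms - tmax).sum())   # log-sum-exp

F_grid = np.linspace(0.0, 1.5, 61)
logp = np.array([log_fF(F, d) for F in F_grid])
pdf = np.exp(logp - logp.max())
pdf /= np.trapz(pdf, F_grid)
print("posterior peaks near F =", F_grid[pdf.argmax()])   # close to F_true
```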
## Appendix B Bayesian Flux Estimation of a Scintillating Source
We assume that a source’s intrinsic flux density is modulated by interstellar scintillations by a factor $`g`$, so that the measured flux density is
$`F=gS+N,`$ (B1)
where $`N`$ is the additive noise and $`S`$ is the intrinsic flux density. To calculate a PDF for $`S`$ we assume we have $`m`$ measurements of $`F`$ and $`m`$ separate estimates of $`N`$, corresponding to making on and off-pulse flux measurements. We assume that the noise PDF is Gaussian (a good assumption for radio observations with large time-bandwidth product), and that we have estimated $`\mu `$, the average value of the off-pulse mean, and $`\sigma `$, the standard deviation of this quantity, for each of $`m`$ profiles. The PDF of the phase-averaged noise, $`f_N(N)`$, is a Gaussian with mean $`\mu `$ and standard deviation $`\sigma `$, and the PDF for an individual measurement of $`F`$ is
$`f_F(F)=\int dN\,f_N(N)f_g\left(\frac{F-N}{S}\right),`$ (B2)
where $`f_g`$ is the PDF of $`g`$. The likelihood function is a combination of all the data using PDF’s of this form. So, for $`m`$ statistically independent observations, the likelihood function for $`S`$ is
$`\mathcal{L}(S)=\prod _{j=1}^{m}f_F(F_j)`$ (B3)
We cannot use this scheme for all cases because generally we need to use the joint PDF for the scintillation gain $`g`$ for the different measurements. Because there is no closed expression for this, we consider only two limiting cases.
### B.1 Case I: Statistically Independent ISS
The first case we consider is when the interstellar scintillations are statistically independent between measurements. For this case, we may write
$`\mathcal{L}_S(S)=\prod _{j=1}^{m}\int dN\,f_N(N)f_g\left(\frac{F_j-N}{S}\right).`$ (B4)
where $`f_N(N)`$ is a Gaussian described by $`\mu _j`$ and $`\sigma _j`$.
For $`\mathrm{\Delta }t_\mathrm{d}\ll T`$ (i.e. the ISS time scale $`\mathrm{\Delta }t_\mathrm{d}`$ is much less than the averaging time $`T`$) and/or $`\mathrm{\Delta }\nu _\mathrm{d}\ll B`$ (the ISS bandwidth is much less than the receiver bandwidth), the ISS is quenched and has a PDF that is related to a $`\chi ^2`$ PDF with the number of degrees of freedom given by $`2N_{\mathrm{ISS}}`$, where $`N_{\mathrm{ISS}}`$ is the number of scintles averaged over. In this case, statistical independence between observations is a good assumption.
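As a numerical check on this quenched limit, the sketch below draws gains from a $`\chi ^2`$ distribution with $`2N_{\mathrm{ISS}}`$ degrees of freedom normalized to unit mean — one common way (assumed here; the exact form is not spelled out above) of realizing a PDF “related to a $`\chi ^2`$ PDF” — and shows how the gain fluctuations shrink as $`N_{\mathrm{ISS}}`$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gain(n_iss, size):
    # Assumed parametrization: g ~ chi^2 with 2*N_ISS dof, scaled to <g> = 1.
    return rng.chisquare(2 * n_iss, size) / (2 * n_iss)

for n_iss in (1, 16):
    g = sample_gain(n_iss, 100_000)
    print(f"N_ISS={n_iss:2d}: <g>={g.mean():.3f}, rms(g)={g.std():.3f}")
# For N_ISS ~ 16 (the estimate for these observations) the modulation is
# strongly quenched: rms(g) ~ 1/sqrt(N_ISS) ~ 0.25.
```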
### B.2 Case II: Perfectly Correlated Scintillations
The second case we consider is when the time scale for interstellar scintillations is much longer than the total data acquisition time (e.g. $`mT`$ for contiguous measurements). In this case, the scintillation gain $`g`$ is identical for all measurements, and has some unknown value with PDF described by Eq. 4. In this case, individual measurements are statistically independent for the noise but completely identical with respect to $`g`$. The likelihood function is therefore simply the product of the noise PDFs, and can be expressed as
$`\mathcal{L}_S(S)=\prod _{j=1}^{m}f_F(F_j)=\prod _{j=1}^{m}f_N(F_j-S).`$ (B5)
We can use this equation to get the likelihood function for the product $`S^{\prime }\equiv gS`$ and then integrate over $`f_g(g)`$ to get the likelihood for $`S`$:
$`\mathcal{L}_S(S)`$ $`=`$ $`\int dg\,f_g(g)\,\mathcal{L}_{S^{\prime }}(gS)`$ (B6)
$`\mathrm{where}\quad \mathcal{L}_{S^{\prime }}(S^{\prime })`$ $`=`$ $`\prod _{j=1}^{m}f_N(F_j-S^{\prime }).`$ (B7)
Once we have chosen the scintillation regime of interest, and calculated $`\mathcal{L}_S(S)`$ with Eq. B4 or Eq. B7, we may calculate the PDF of $`S`$ as the normalized likelihood function
$`f_S(S)=\frac{\mathcal{L}_S(S)}{\int _0^{\infty }dS\,\mathcal{L}_S(S)}.`$ (B8)
When no source contributes to the ‘on-source’ measurement, we expect $`f_S(S)`$ to maximize at $`S=0`$ and have a width determined by $`\sigma `$ and the number of measurements. An upper bound on $`S`$ can be calculated by choosing a probability level $`1-ϵ`$ and calculating the value of $`S`$ such that the area of $`f_S(S)`$ above that value is $`ϵ`$. If $`f_S(S)`$ maximizes at a non-zero value, then a confidence interval for $`S`$ can be similarly calculated.
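A minimal numerical version of this prescription, assuming the likelihood has already been tabulated on a grid of $`S`$ (the toy half-Gaussian below merely stands in for a real $`\mathcal{L}_S(S)`$):

```python
import numpy as np

def upper_limit(S_grid, L_S, eps=0.05):
    """95% (eps=0.05) upper bound on S from an unnormalized likelihood L_S(S)
    tabulated on S_grid, following Eq. B8 and the prescription in the text."""
    f_S = L_S / np.trapz(L_S, S_grid)                       # Eq. B8
    cdf = np.concatenate(
        ([0.0], np.cumsum(0.5 * (f_S[1:] + f_S[:-1]) * np.diff(S_grid))))
    return np.interp(1.0 - eps, cdf, S_grid)

# Toy example: a half-Gaussian likelihood peaking at S = 0 with width 1.5 mJy.
S = np.linspace(0.0, 15.0, 3001)
L = np.exp(-0.5 * (S / 1.5) ** 2)
print(f"95% upper limit: {upper_limit(S, L):.2f} mJy")      # ~2.9 mJy
```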
When, as in the calculation of an upper limit using the method of Appendix A, we do not have a single measured value for $`F_j`$, but instead a PDF, we must replace $`F_j`$ by an integral over its PDF. In this case, Eq. B4 becomes
$`\mathcal{L}_S(S)=\prod _{j=1}^{m}\int f_F(F_j)\,dF_j\int dN\,f_N(N)f_g\left(\frac{F_j-N}{S}\right).`$ (B9)
no-problem/9912/cond-mat9912125.html | ar5iv | text | # A Fermi Surface study of Ba1-xKxBiO3
## I Introduction
The cubic perovskite Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> which achieves a maximum transition temperature of 32K (for $`x\approx 0.4`$) has been the subject of numerous studies. Despite some similarity to the better known high-T<sub>c</sub> cuprates, the system is three dimensional and lacks strong magnetic properties in the normal state. The vibrational breathing mode of the BiO<sub>6</sub> octahedra appears to yield a strong electron-phonon coupling which together with dielectric effects may explain superconducting properties . However, recently observed anomalous temperature dependencies of the critical magnetic fields and vanishing discontinuities in the specific heat and magnetic susceptibility suggest a fourth order transition to superconductivity . Therefore, in contrast to the standard BCS picture, thermodynamic properties seem almost unchanged through the transition.
Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> also possesses a rich structural phase diagram as a function of K doping. In the range $`0<x<0.12`$, the system assumes a monoclinic structure which can be obtained from the cubic structure via small tilting and breathing distortions of the BiO<sub>6</sub> octahedra; in the undoped compound the tilting angle along $`(1,1,0)`$ is estimated to be $`11.2^{\circ }`$ and the breathing distortion to be $`0.085`$ $`\mathrm{\AA }`$. For $`0.12<x<0.37`$, the structure is orthorhombic, admitting tilting but not breathing distortion. Finally, for $`0.37<x<0.53`$, when the cubic phase is stabilized, the system becomes metallic. Although it is widely believed that the insulating phases for $`x<0.37`$ are caused by charge density instabilities associated with the breathing and tilting distortions, it has proven difficult to establish this in terms of first principles computations. Very recent total energy calculations on distorted lattices indicate that, in contrast to earlier results, the LDA substantially underestimates the size of the breathing distortion and yields a metallic ground state. Perhaps correlation corrections beyond the LDA are necessary in order to explain the insulating phases.
In this article, we report highly accurate, all electron computations of 3D Fermi surfaces in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> for a number of different compositions. All calculations pertain to the simple cubic (SC) lattice and are parameter free except for the use of the Korringa-Kohn-Rostoker coherent potential approximation (KKR-CPA) to treat the effects of Ba/K substitution, and the local density approximation (LDA) for treating exchange-correlation effects. Our motivation for invoking the SC structure throughout the composition range is that in this way we are in a position to focus on the evolution of nesting and other features of the Fermi surface (FS) in the underlying pristine phase and to delineate how the appearance of such features correlates with the onset of various structural transitions with K doping. Note that lattice distortions in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> are relatively small, and pseudo-cubic lattice parameters are easily assigned in all cases.
Our computations show clearly that the highest occupied band in BaBiO<sub>3</sub>, in which the FS resides, remains virtually unchanged in shape upon substituting Ba with K, and that the associated states near the Fermi energy ($`E_F`$) continue to possess long lifetimes since they suffer little disorder induced scattering in the alloy. This circumstance allows us to fit this band in BaBiO<sub>3</sub> in terms of a Fourier-like expansion which accurately describes the highest occupied band in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> for all compositions x; a knowledge of the $`E_F`$ then yields the corresponding FS. In this way, we provide a useful parametrized form which permits a straightforward determination of the full 3D Fermi surface in cubic Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> for any arbitrary K doping level.
Highlights of some of the issues addressed, together with an outline of this article are as follows. Section II gives an overview of the methodology and provides associated technical details of the computations. The presentation of results in Section III is subdivided into several subsections. Subsection IIIA discusses changes in topology of the FS with K doping and attempts to correlate these changes with the observed structural transformations in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> invoking Hume-Rothery and Van Hove-Jahn Teller scenarios. Subsection IIIB takes up the question of parametrizing the FS, and gives details of the parameters which describe the doping dependent FS of cubic Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>. Subsection IIIC compares aspects of the electronic structures of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> as well as those of the end compounds BaBiO<sub>3</sub>, KBiO<sub>3</sub> and BaPbO<sub>3</sub> with an eye towards understanding some puzzling differences between the phase diagrams of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub>. Subsection IIID discusses how our theoretical FS’s for the cubic phase are relevant for analyzing experiments sensitive to the momentum density of the electron gas (positron annihilation, high resolution Compton scattering), and are thus amenable to substantial experimental verification; a recent ARPES measurement of doping dependence of the chemical potential in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> is also discussed in order to gain insight into the band renormalization at the Fermi energy. Section IV summarizes our conclusions. Finally, concerning related work, it may be noted that we are not aware of a systematic study of the evolution of the FS of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> with K doping in the literature, although aspects of the problem have been commented upon by various authors .
## II Overview of methodology, computational details
Before proceeding with the computation of the FS for a given K doping $`x`$, we first obtained the charge selfconsistent KKR-CPA crystal potential in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> assuming random substitution of Ba by K; for details of our KKR-CPA methodology, we refer to Refs. . The charge as well as the KKR-CPA selfconsistency cycles have been carried out to a high degree of convergence in all cases; for example, the final Fermi energies are accurate to about $`2`$ mRy and the total charge within each of the muffin-tin spheres to about $`10^{-3}`$ electrons. The total energies were not minimized to determine the lattice constants. The experimental lattice data was used instead, but otherwise the computations are parameter free. The simple cubic lattice constants used for BaBiO<sub>3</sub> and KBiO<sub>3</sub> are: $`4.3485`$ $`\mathrm{\AA }`$ and $`4.2886`$ $`\mathrm{\AA }`$ respectively . For intermediate compositions, lattice constants were obtained via Vegard’s Law. The muffin-tin radii of Bi and O were taken to be $`a/4`$, where $`a`$ denotes the composition dependent lattice constant. The radius of the Ba or K sphere (recall that within the KKR-CPA scheme the two radii must be equal as these atoms occupy the same site randomly) was chosen by requiring the Ba/K sphere to touch the O-sphere, which gives the value $`(1/\sqrt{2}-1/4)a`$ for the Ba/K radius. The aforementioned choices of the radii provide a good convergence of the crystal potential, and in any event, the results are not sensitive to these details. The calculations employ the Barth-Hedin exchange-correlation functional and are semi-relativistic with respect to the valence states, but the core states are treated relativistically. However, the relativistic effects on the valence states are expected to be small. In particular, the band giving rise to the FS is built mainly from the Bi-6s and O-2p orbitals which are affected little by the spin-orbit coupling; the effect of Bi $`6p`$ admixture on the bands is estimated to be on the order of $`0.1`$ eV. The maximum $`\ell `$-cut-offs used are $`\ell _{max}=3`$ for Ba, K and Bi-sites, and $`\ell _{max}=2`$ for O-atoms.
Once the selfconsistent crystal potential is determined using the preceding procedure, the Fermi surface in a disordered alloy is computed by evaluating the spectral density function $`A(𝐩,E)=-(1/\pi )\text{Im}(G(𝐩,E))`$, where $`G(𝐩,E)`$ is the one-particle ensemble averaged KKR-CPA Green function at a given momentum $`𝐩`$ and energy E. The radius of the FS along a given direction $`\widehat{𝐩}`$ is then defined by the position of the peak in $`A(p,E)`$ at $`E=E_F`$; the finite width of the spectral peaks reflects the disorder induced scattering of states, and would in general yield a FS in an alloy which is smeared or blurred. In the present case of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>, however, it turns out that the Bi-O states near the $`E_F`$ are virtually unaffected by Ba/K substitution and therefore suffer little damping ($`\sim 1`$ mRy). For this reason, the KKR-CPA variations in the $`E_F`$ in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> are also close to the rigid band values based on the BaBiO<sub>3</sub> band structure. In order to obtain highly accurate 3D maps of the FS discussed below, a uniform net of about $`10^5`$ $`𝐤`$-points in the irreducible Brillouin zone has been employed. Finally, we note that the specific parameters used in the density of states and related computations on BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> presented in this work are similar to those detailed above; the lattice constant of BaPbO<sub>3</sub> was $`4.2656`$ $`\mathrm{\AA }`$ .
## III Results and discussion
### A Evolution of the Fermi Surface with Doping, Structural Transitions
Figures 1-4 present 3D images of the FS in the cubic phase for K concentrations $`x`$ = 0.67, 0.40, 0.13 and 0.0, together with three different cross-sections in the (001) and (110) planes. With reference to these figures we will discuss how nesting features evolve and correlate with the occurrence of structural transitions in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> with K doping.
The FS for $`x=0.67`$ is shown in Fig. 1. The composition is at the upper limit of stability ($`x\approx 0.6`$–$`0.7`$) of the cubic phase. The FS is a flattened free-electron-like sphere which appears nearly cubic in shape. This is evident in the 3D rendition of Fig. 1(a) as well as in the squarish appearance of sections of Figs. 1(b)-1(c). Our computations indicate that the FS becomes even more cube-like for $`x>0.67`$ (not shown); since the cubic shape is particularly susceptible to nesting, one may speculate a connection with the aforementioned phase stability limit. Incidentally, asphericity of the FS introduces momentum dependence in the Eliashberg equation with subtle consequences for superconducting properties.
With decreasing $`x`$, the FS grows in size as seen in Fig. 2 for $`x`$=0.40; this composition has been chosen to lie close to the cubic-orthorhombic phase boundary at $`x`$=0.37. Since the orthorhombic unit cell is very similar to FCC , the associated Brillouin zone (BZ) is also drawn in Fig. 2. The FS is seen to make contact with the hexagonal face of the bcc zone; this is more clear in the (110)-section of Fig. 2(d). These results suggest that the cubic-orthorhombic transition may be viewed as a Hume-Rothery type structural instability to some larger unit cell which arises when the FS crosses the BZ of the associated supercell. We find that the FS becomes tangent to the hexagonal face at $`x=0.45`$. Note that the Hume-Rothery rules require the transition to occur not at the point of first contact with the BZ, but after the FS has grown to slightly overlap the zone boundary. In the present case these arguments would thus predict a transition to an FCC structure at $`x\approx 0.4`$ where the cubic FS has already broken through the zone boundary. Recall that the orthorhombic structure involves lattice distortions via tilting mode phonons with wavevector $`𝐑=(1,1,1)\pi /a`$ , which is consistent with Fig. 2(d) where the spanning vector (denoted by the arrow) is indeed seen to be approximately equal to $`𝐑`$.
It is noteworthy that different Hume-Rothery phases presumably involve a succession of free energy minima as a function of composition. If so, there is the possibility that the system will actually go into a mixed phase at the transition, similar to the mixed $`\alpha `$ plus $`\beta `$ phase in brasses. In the Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> system, an incommensurate modulation has been observed by electron but not neutron diffraction, suggestive of a fluctuating or nanoscale phase separation, which is reminiscent of the stripe-like phases found in the cuprates and related oxides.
We consider Fig. 3 next for $`x=0.13`$ where Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> undergoes the orthorhombic to monoclinic transition. Compared to $`x=0.40`$ (Fig. 2), the FS has become more rounded. The most striking feature however is that the FS has grown to just begin making contact with the zone boundary at the X-point. In fact, the band structure of cubic BaBiO<sub>3</sub>(see subsection C below) contains a saddle point at X which lies approximately 0.1 eV below the Fermi energy. The associated van Hove singularity (VHS) in the density of states crosses the Fermi level around $`x\approx 0.13`$ and gives rise to the change in the FS topology seen in Fig. 3. Computations of Ref. indicate that the electron-phonon coupling parameter $`\lambda `$ can increase sharply as $`x`$ decreases below 0.13 causing the breathing mode phonon to become unstable.
It is interesting to ask the question: Since the orthorhombic and monoclinic distortions possess nearly the same FCC BZ, what is the driving force behind the transformation at $`x=0.13`$? The FS of Fig. 3 suggests an interpretation in terms of a Van Hove-Jahn-Teller scenario. When the VHS intersects the Fermi energy, there are three independent VHS’s whose degeneracy cannot be lifted in the orthorhombic phase. Since each VHS involves a substantial density of states, the system can gain energy via a Jahn-Teller distortion to a lower symmetry phase (such as monoclinic) which lifts the degeneracy between the VHS’s.
Fig. 4 shows the FS of cubic BaBiO<sub>3</sub> for completeness. The FS is a distorted sphere with large necks at X. No significant changes in the topology of the FS take place over the composition range $`0<x<0.13`$.
### B A Parameterized form for the doping dependent Fermi surface
We first fit the Bi6s-O2p band $`E(k_x,k_y,k_z)`$ in BaBiO<sub>3</sub> (see Fig. 5), which gives rise to the FS, in terms of the following Fourier-like expansion:
$`E(k_x,k_y,k_z)`$ $`=`$ $`E_0+t_1(X+Y+Z)+t_2(XY+XZ+YZ)+t_3XYZ`$ (3)
$`+t_4(X_2+Y_2+Z_2)+t_5(XY_2+XZ_2+YZ_2+X_2Y+X_2Z+Y_2Z)`$
$`+t_6(X_2Y_2+X_2Z_2+Y_2Z_2),`$
where $`X=\mathrm{cos}(k_xa)`$, $`Y=\mathrm{cos}(k_ya)`$, $`Z=\mathrm{cos}(k_za)`$, $`X_2=\mathrm{cos}(2k_xa)`$, $`Y_2=\mathrm{cos}(2k_ya)`$, $`Z_2=\mathrm{cos}(2k_za)`$; $`a`$ is the lattice constant and $`E_0`$ is the average band energy. The k-dependence on the right side of Eq. (1) possesses the form of a tight-binding band and in this sense $`t_n`$ may be viewed as the $`n`$th nearest neighbor ”hopping integral”. The values of various parameters which fit the computed 3D band are (in eV): $`E_0=0.288`$ (with respect to the Fermi level of BaBiO<sub>3</sub>), $`t_1=0.6191`$, $`t_2=0.4313`$, $`t_3=0.0816`$, $`t_4=0.1034`$, $`t_5=0.1361`$ and $`t_6=0.0449`$. Higher order terms in the expansion are found to be negligibly small. The fit is valid throughout the composition range in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> since, as already noted, the Bi6s-O2p band remains essentially unchanged in shape near the $`E_F`$ with K/Ba substitution. The fact that the terms with $`t_3`$–$`t_6`$ are significant in obtaining an accurate fit indicates that the associated interaction parameters possess a fairly long range. Incidentally, supercell simulations indicate that electronic states near the Fermi level are not sensitive to short-range ordering effects .
The constant-energy surface can now be obtained for any given value of the energy by solving Eq. (1); in order to obtain the FS at a given doping, we need only specify the corresponding value of the $`E_F`$. For this purpose, we have parametrized the KKR-CPA values of $`E_F(x)`$ in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> as a second-order polynomial:
$$E_F(x)=a_1x^2+a_2x,$$
(4)
where $`E_F=0`$ for $`x=0`$, $`a_1=2.078`$ eV, and $`a_2=0.6612`$ eV. The solid lines in the sections of Figs. 1-4 show that Eqs. 1 and 2 provide an excellent fit to the 3D Fermi surface in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> over the entire composition range. Notably, the positive sign of the ratio, $`\tau =t_2/t_1`$ reflects a concave down curving of the FS in the basal plane (see Figs. 1-4); in contrast, some cuprates possess FS’s curving concave up. For the special case where only the nearest neighbor hopping term $`t_1`$ is considered in Eq. (1), the FS will be perfectly nested at half filling, and hence unstable with respect to infinitesimal perturbations; the presence of interactions with farther out neighbors smears this singularity, although some softness in the system remains as already discussed above.
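The parametrization of Eqs. 1 and 2 is straightforward to evaluate numerically. The sketch below codes the Fourier-like band fit and the Fermi-energy polynomial and searches for the FS radius along a chosen direction. The coefficient values are copied as printed above; since relation and minus signs may have been lost in this copy of the text, they should be replaced by the correctly signed values before any quantitative use — only the structure of the calculation is intended here.

```python
import numpy as np
from math import cos, pi

# Coefficients of Eq. (1) and Eq. (2) as printed above (eV).
# CAUTION: signs are uncertain in this copy; treat as placeholders.
E0, t1, t2, t3, t4, t5, t6 = 0.288, 0.6191, 0.4313, 0.0816, 0.1034, 0.1361, 0.0449
a1, a2 = 2.078, 0.6612

def band(kx, ky, kz, a=1.0):
    """E(k) of Eq. (1); k measured in units of 1/a."""
    X, Y, Z = cos(kx * a), cos(ky * a), cos(kz * a)
    X2, Y2, Z2 = cos(2 * kx * a), cos(2 * ky * a), cos(2 * kz * a)
    return (E0 + t1 * (X + Y + Z) + t2 * (X * Y + X * Z + Y * Z) + t3 * X * Y * Z
            + t4 * (X2 + Y2 + Z2)
            + t5 * (X * Y2 + X * Z2 + Y * Z2 + X2 * Y + X2 * Z + Y2 * Z)
            + t6 * (X2 * Y2 + X2 * Z2 + Y2 * Z2))

def fermi_energy(x):
    return a1 * x ** 2 + a2 * x          # Eq. (2), with E_F(0) = 0

def fs_radius(x, direction=(1.0, 0.0, 0.0), n=4000):
    """Fermi-surface radius along a direction: first k with E(k) = E_F(x)."""
    ef = fermi_energy(x)
    d = np.asarray(direction) / np.linalg.norm(direction)
    ks = np.linspace(0.0, pi, n)
    e = np.array([band(*(k * d)) for k in ks]) - ef
    cross = np.where(np.diff(np.sign(e)) != 0)[0]
    return ks[cross[0]] if cross.size else None

x = 0.4
print("E(Gamma), E(X), E_F:", band(0, 0, 0), band(pi, 0, 0), fermi_energy(x))
print("FS radius along (100):", fs_radius(x))  # None if no crossing with these signs
```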
### C Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> vs. BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub>
Despite substantial similarities, the phase diagrams of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> display significant differences. The monoclinic to orthorhombic transition occurs at roughly the same doping level in both systems, but the orthorhombic phase in BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> persists up to $`0.6`$ holes per band, and unlike Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>, it does not undergo the transition to the cubic phase. Since the FS’s of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> may be expected to be roughly similar (in view of similarities of their electronic structures), on the face of it, the explanations of Section IIIA above for Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> would appear to be applicable also to BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub>. Some insight into the puzzling behavior of BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> may be obtained by comparing the band structures near the Fermi energy of the end compounds BaBiO<sub>3</sub>, KBiO<sub>3</sub> and BaPbO<sub>3</sub> shown in Fig. 5, and the associated composition dependent densities of states in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> (Fig. 6).
The important point to note is that the band passing through the Fermi energy in BaBiO<sub>3</sub> as well as KBiO<sub>3</sub> is a hybridized Bi-O band which is affected little when Ba is substituted by K in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>; the associated density of states in Fig. 6a displays two distinct features near zero and $`1.4`$ eV (in BaBiO<sub>3</sub>) which are weakly doping dependent. In sharp contrast, in BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> the valence band changes from Bi-O to Pb-O, and the VHS’s in the end compounds BaBiO<sub>3</sub> and BaPbO<sub>3</sub> lie around $`0.1`$ eV and $`2.0`$ eV respectively, Fig. 6b. \[Structure from higher bands is evident above $`2`$ eV in Fig. 6b.\] As a result, the DOS of BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> alloys is characterized by a ”split-band” behavior: when Bi is substituted by Pb, the VHS around $`0.1`$ eV arising from the Bi-O band gradually loses spectral weight which gets transferred to the VHS around $`2.0`$ eV of the PbO band. Consequently, states near the Fermi level will suffer substantial disorder induced scattering . The FS in BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> will then be quite smeared, rendering suspect the arguments of Section IIIA which assume a sharply defined FS.
### D Comparison with experiments
We note first that the theoretically predicted FS for the cubic phase at $`x=0.4`$ (Fig. 2) is in good accord with the experimental FS deduced by Mosley et al. from positron annihilation measurements. As already mentioned in the introduction, the FS’s computed for the cubic phase at compositions outside the range of stability of the phase (Figs. 1, 3 and 4) are nevertheless relevant for experiments, especially where one probes the momentum density of the electron gas. We elaborate on this point now.
In a positron annihilation or high-resolution Compton scattering experiment , the underlying spectral function involved is the 3D momentum density $`\rho (𝐩)`$ of the ground state. The FS signatures, which are scattered throughout the momentum space in $`\rho (𝐩)`$ can, in principle, be enhanced by folding $`\rho (𝐩)`$ into the first BZ to obtain a direct map of the occupied states, i.e.
$$n(𝐤)=\sum _𝐆\rho (𝐤+𝐆),$$
(5)
where $`n(𝐤)`$ is the occupation number for the Bloch state $`𝐤`$, and the summation extends over the set $`\{𝐆\}`$ of reciprocal lattice vectors. The FS may then be defined as the surface of maximum gradient of $`n(𝐤)`$ . As already emphasized, the orthorhombic as well as the monoclinic phase of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> is derived via relatively small tilting and breathing distortions of BiO<sub>6</sub> octahedra. It will be sensible, therefore, to obtain $`n(𝐤)`$ from measured momentum densities by using vectors of the SC lattice in Eq. 5 at all compositions of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>. The evolution of the FS of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> with doping depicted in Figs. 1-4 should in this way be essentially verifiable experimentally despite the intervention of phase transitions. As the cubic symmetry is broken with doping and various gaps open up, the momentum density will be smeared over a range of approximately $`E_{gap}/v_F`$, where $`E_{gap}`$ is the energy gap and $`v_F`$ is the Fermi velocity of the associated metallic state; this should, however, only produce relatively small modulations of $`n(𝐤)`$ based on the SC structure. In this vein, disorder effects in general yield a momentum smearing, $`\mathrm{\Delta }k=\gamma /v_F`$ in terms of the disorder induced width (in energy) $`\gamma `$, although the value of $`\gamma `$ in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> is negligibly small at the Fermi energy.
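For completeness, here is a small sketch of the folding operation of Eq. 5: a toy, isotropic momentum density (standing in for measured Compton or positron data) is summed over simple-cubic reciprocal-lattice vectors to give $`n(𝐤)`$; the FS would then be located at the steepest gradient of $`n(𝐤)`$. All numbers below are placeholders.

```python
import numpy as np

a = 1.0                            # lattice constant (arbitrary units)
G0 = 2 * np.pi / a                 # reciprocal-lattice spacing
kf = 0.6 * G0 / 2                  # toy "Fermi momentum"

def rho(p):                        # toy momentum density, step-like at kf
    return 1.0 / (1.0 + np.exp((np.linalg.norm(p) - kf) / (0.02 * G0)))

def n_of_k(k, nmax=2):
    """Occupation n(k): sum of rho over reciprocal vectors G (Eq. 5)."""
    total = 0.0
    for i in range(-nmax, nmax + 1):
        for j in range(-nmax, nmax + 1):
            for l in range(-nmax, nmax + 1):
                total += rho(k + G0 * np.array([i, j, l]))
    return total

# n(k) along Gamma-X; the FS shows up where n(k) drops most steeply.
for kx in np.linspace(0.0, G0 / 2, 6):
    print(f"k=({kx:.2f},0,0)  n(k)={n_of_k(np.array([kx, 0.0, 0.0])):.3f}")
```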
An interesting recent experimental result concerns the shift in chemical potential $`\mu (x)`$ as a function of doping obtained by Kobayashi et al. via XPS core level measurements in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>. Fig. 7 shows that KKR-CPA predictions can be brought into line with the measurements provided the theoretical values are scaled down by a factor of $`0.49`$ (dashed curve), indicating that the dispersion of the quasiparticles near the Fermi energy may be given incorrectly in the underlying band structure. \[The solid curve is the fit to the KKR-CPA values given by Eq. 2\]. This is not surprising since it is well known that the excitation energies in general do not correspond to the eigenvalues of the Kohn-Sham equation . Notably, Ref. reports absence of any abrupt changes in the chemical potential through the orthorhombic and monoclinic phase transitions; however, any such jumps in $`\mu (x)`$ are expected to be small in light of the discussion of preceding sections, and lie presumably below the experimental resolution. Also, core level shifts could be affected by crystal defects which may explain part of the discrepancy between theory and experiment .
## IV Summary and Conclusions
We have obtained 3D Fermi surfaces in cubic Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> over the entire composition range; representative results for $`x=`$ $`0.67`$, $`0.4`$, $`0.13`$ and $`0.0`$ are presented and discussed. The computations employ the selfconsistent KKR-CPA approach for treating the effects of Ba/K substitution within the framework of the local density approximation, but are parameter free otherwise. An examination of changes in the topology of the FS gives insight into transformations of the cubic phase into non-cubic structures as a function of K doping. Highlights of our specific conclusions are as follows:
1. The cubic-orthorhombic transition around $`x=0.37`$ is suggested to be a Hume-Rothery type instability when the FS makes contact with the BZ of the associated fcc lattice along the (111) directions. The orthorhombic-monoclinic transition around $`x=0.13`$ is interpreted within a van Hove- Jahn Teller scenario as the FS makes contact with the X-symmetry-point of the BZ.
2. A parametrization scheme which allows an accurate determination of the 3D Fermi surface in cubic Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> for an arbitrary doping level via a straightforward use of Eqs. 1 and 2 is developed. This scheme would be useful more generally for applications requiring FS integrals (e.g. response function computations) in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>.
3. We remark on the puzzling differences between the phase diagrams of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> by comparing the KKR-CPA electronic structures of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> and BaPb<sub>x</sub>Bi<sub>1-x</sub>O<sub>3</sub> and of the end compounds BaBiO<sub>3</sub>, KBiO<sub>3</sub> and BaPbO<sub>3</sub>. The van Hove singularity in the highest occupied Bi-O band which is virtually unaffected by Ba/K substitution is found to be smeared strongly by Pb/Bi substitution, a fact which may be relevant in this connection.
4. Concerning experimental aspects, we show that the FS’s in the cubic phase will be useful in analyzing high-resolution Compton scattering and positron-annihilation measurements on the one hand, and in verifying the present theoretical predictions on the other, suggesting the value of further experimental work along these lines. We comment also on the band renormalization in Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> implied in the light of some recent photoemission experiments.
###### Acknowledgements.
This work is supported by the US Department of Energy under contract W-31-109-ENG-38 and the Academy of Finland, and benefited from the allocation of time at the NERSC, the Northeastern University Advanced Scientific Computation Center (NU-ASCC), the Center for Scientific Computing, Helsinki, and the Institute of Advanced Computing, Tampere, and a travel grant from NATO. |
no-problem/9912/cond-mat9912293.html | ar5iv | text | # Particle Systems with Stochastic Passing
## Abstract
We study a system of particles moving on a line in the same direction. Passing is allowed and when a fast particle overtakes a slow particle, it acquires a new velocity drawn from a distribution $`P_0(v)`$, while the slow particle remains unaffected. We show that the system reaches a steady state if $`P_0(v)`$ vanishes at its lower cutoff; otherwise, the system evolves indefinitely.
PACS numbers: 02.50-r, 05.40.+j, 89.40+k, 05.20.Dd
We are interested in the behavior of a system of interacting particles moving on the real line in one direction, say to the right. The system is endowed with the following simple dynamics: (i) Particles move freely between “collisions”; (ii) After a collision, or “passing” event, the velocity of the slow particle remains the same, $`v_{\mathrm{slow}}`$=const, while the fast particle instantaneously acquires some new velocity, $`v_{\mathrm{new}}>v_{\mathrm{slow}}`$, drawn from the intrinsic velocity distribution $`P_0(v)`$. We would like to answer the following basic questions about the behavior of the system: Does the velocity distribution $`P(v,t)`$ reach a steady state or the system continues to evolve indefinitely? How does the average velocity depend on time? etc.
Our motivation is primarily conceptual, as we want to understand non-equilibrium infinite particle systems with two-body interactions. Thus we have chosen the simplest dynamics – interactions occur only upon colliding, and only one particle is affected. The appealing simplicity of the model suggests that it might show itself in different natural phenomena, and indeed, we originally arrived at this model in an attempt to mimic traffic on one-lane roads. Somewhat related dynamics were already used in modeling voting systems, force fluctuations in bead packs, asset exchange processes, combinatorial processes, continuous asymmetric exclusion processes, granular gases, and aggregation-fragmentation processes.
Let us first consider discrete velocity distributions. Specifically, we assume that both initial velocities and new velocities are drawn from the same intrinsic distribution $`P_0(v)=\sum _jp_j\delta (v-v_j)`$. For the binary distribution, the system does not evolve at all, so the first non-trivial case is the ternary intrinsic distribution when the system contains slow, moderate, and fast particles. Initially,
$$P_0(v)=p_1\delta (v-v_1)+p_2\delta (v-v_2)+p_3\delta (v-v_3).$$
(1)
We set
$$p_1+p_2+p_3=1,v_1<v_2<v_3,$$
(2)
without a loss of generality. When the steady state is reached, the velocity distribution remains ternary,
$$P_{\mathrm{eq}}(v)=p_1\delta (v-v_1)+q_2\delta (v-v_2)+q_3\delta (v-v_3).$$
(3)
The density of slow particles does not change, while the densities $`q_2`$ and $`q_3`$ of moderate and fast particles differ from the initial values. The final densities are found from a simple probabilistic argument based on the requirement of stationarity. For moderate particles we get
$$p_3(v_2-v_1)q_2=p_2(v_3-v_1)q_3.$$
(4)
The left-hand side of Eq. (4) gives the loss in $`q_2`$ which happens when a moderate particle overtakes a slow particle and becomes a fast particle. The right-hand side gives the gain in $`q_2`$ which takes place when a fast particle overtakes a slow particle and converts into a moderate particle. Solving Eq. (4) together with the normalization condition, $`p_1+q_2+q_3=1`$, we find
$$q_2=p_2\frac{p_2+p_3}{p_2+\nu p_3},q_3=\nu p_3\frac{p_2+p_3}{p_2+\nu p_3}.$$
(5)
where $`\nu =(v_2-v_1)/(v_3-v_1)`$. Since $`\nu <1`$, we have $`q_2>p_2`$ and $`q_3<p_3`$. Thus the density of moderate particles increases while the density of fast particles decreases. Similarly, one can analyze discrete velocity distributions with more than three particle species. In all cases (i) the system reaches a steady state; (ii) the average velocity decreases and eventually reaches some finite value; (iii) the density of the slowest particle species remains unchanged.
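The stationarity argument behind Eqs. (4) and (5) is easy to verify by integrating the corresponding mean-field rate equations for the moderate and fast densities: only collisions with the slow species change a particle’s label, and the factors $`p_2/(p_2+p_3)`$ and $`p_3/(p_2+p_3)`$ are the conditional reassignment probabilities implied by the passing rule. A small sketch with arbitrary illustrative parameters:

```python
import numpy as np

p1, p2, p3 = 0.3, 0.3, 0.4          # intrinsic densities
v1, v2, v3 = 0.0, 1.0, 3.0          # slow, moderate, fast velocities

q2, q3 = p2, p3                     # start from the intrinsic densities
dt = 1e-3
for _ in range(200_000):
    # moderate -> fast (loss) and fast -> moderate (gain) conversion rates
    loss = p3 / (p2 + p3) * (v2 - v1) * p1 * q2
    gain = p2 / (p2 + p3) * (v3 - v1) * p1 * q3
    q2 += dt * (gain - loss)
    q3 -= dt * (gain - loss)

nu = (v2 - v1) / (v3 - v1)
q2_eq = p2 * (p2 + p3) / (p2 + nu * p3)          # Eq. (5)
q3_eq = nu * p3 * (p2 + p3) / (p2 + nu * p3)
print(f"numerical : q2={q2:.4f}  q3={q3:.4f}")
print(f"Eq. (5)   : q2={q2_eq:.4f}  q3={q3_eq:.4f}")
```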
Now we consider continuous velocity distributions. Let $`[v_{\mathrm{min}},v_{\mathrm{max}}]`$ be the support of the intrinsic velocity distribution $`P_0(v)`$. By a Galilean transform, we can set $`v_{\mathrm{min}}=0`$ without loss of generality. We consider unbounded distributions, $`v_{\mathrm{max}}=\infty `$, although the main results equally apply to the cases with finite $`v_{\mathrm{max}}`$.
The passing rule asserts that when a fast particle overtakes a slow particle moving with a velocity $`v_{\mathrm{slow}}`$, the assignment of the new velocity $`v`$ occurs with probability
$$P_0\left(v|v_{\mathrm{slow}}\right)=P_0(v)\frac{\theta (v-v_{\mathrm{slow}})}{\int _{v_{\mathrm{slow}}}^{\infty }dv^{\prime }P_0(v^{\prime })}.$$
(6)
Eq. (6) guarantees that $`v>v_{\mathrm{slow}}`$ and that the normalization requirement, $`\int dv\,P_0\left(v|v_{\mathrm{slow}}\right)=1`$, is obeyed.
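Operationally, the passing rule of Eq. (6) amounts to redrawing from $`P_0`$ until the result exceeds the slow particle’s velocity. A minimal rejection sampler (the exponential $`P_0`$ is just an example choice):

```python
import numpy as np

rng = np.random.default_rng(7)

def new_velocity(v_slow, sample_P0):
    """Draw the post-passing velocity from Eq. (6): P_0 restricted
    (and renormalized) to v > v_slow, via simple rejection."""
    while True:
        v = sample_P0()
        if v > v_slow:
            return v

# Example with an exponential intrinsic distribution P_0(v) = exp(-v):
sample_exp = lambda: rng.exponential(1.0)
draws = np.array([new_velocity(0.7, sample_exp) for _ in range(100_000)])
print(draws.min(), draws.mean())   # min > 0.7; mean ~ 1.7 by memorylessness
```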
Now we can write a Boltzmann equation for the velocity distribution $`P(v,t)`$:
$`\frac{\partial P(v,t)}{\partial t}=-P(v,t)\int _0^vdv^{\prime }(v-v^{\prime })P(v^{\prime },t)`$ (7)
$`+\int _0^vdv_2P_0(v|v_2)\int _{v_2}^{\infty }dv_1(v_1-v_2)P(v_1,t)P(v_2,t).`$ (8)
The first term on the right-hand side of Eq. (7) describes loss in $`P(v,t)`$ due to collisions with slow particles: Collisions occur with rate proportional to velocity difference, and the integration limits ensure that only collisions with slower particles are taken into account. The second, gain, term accounts for the increase of $`P(v,t)`$ due to a random assignment of velocity $`v`$ after collision.
We could not solve Eq. (7) in the general case of an arbitrary intrinsic velocity distribution $`P_0(v)`$. Attempts to find a solution even for some particularly simple $`P_0(v)`$, e.g., linear, exponential, or uniform, turned out to be fruitless as well. Thus we proceed by employing asymptotic, approximate, and numerical techniques.
We start by looking at the asymptotic behavior of $`P(v)`$ in the small velocity limit. Let $`v\ll u(t)`$, where $`u(t)`$ is the average velocity,
$`u(t)\equiv \langle v\rangle =\int _0^{\infty }dv\,vP(v,t).`$ (9)
Then Eq. (7) simplifies to
$`\frac{\partial P(v,t)}{\partial t}`$ $`=`$ $`P_0(v)u(t)\int _0^vdv_2P(v_2,t)`$ (10)
$``$ $`P(v,t)\int _0^vdv^{\prime }(v-v^{\prime })P(v^{\prime },t).`$ (11)
To probe the small $`v`$ behavior, we need to know $`P_0(v)`$ at $`v\to 0`$. Let us consider a family of intrinsic velocity distributions that behave algebraically:
$$P_0(v)\simeq Av^\mu \quad \mathrm{when}\quad v\to 0.$$
(12)
Now assume that the system reaches the steady state: $`P(v,t)\to P_{\mathrm{eq}}(v)`$ and $`u(t)\to u_{\mathrm{eq}}`$. Plugging these and Eq. (12) into Eq. (10), we find that the velocity distribution also behaves algebraically in the small velocity limit,
$$P_{\mathrm{eq}}(v)\simeq (\mu +1)Au_{\mathrm{eq}}v^{\mu -1}\quad \mathrm{when}\quad v\to 0.$$
(13)
In other words, the steady state velocity distribution scales as $`v^{-1}P_0(v)`$. Recalling the normalization requirement, $`\int _0^{\infty }dv\,P(v)=1`$, we see that it is possible only when $`\mu >0`$. Thus our assumption that the system reaches a steady state is certainly wrong when $`\mu \le 0`$. In this region, we anticipate that the system will evolve indefinitely. Note that both the exponential and uniform intrinsic distributions belong to the borderline case of $`\mu =0`$ that separates stationary and evolutionary regimes; for them an anomalously slow kinetics is anticipated.
To probe the behavior of evolving systems, we assume that in the long-time limit there is a very small fraction of “active” particles that move with velocities $`v\sim 1`$ and the vast majority of “creeping” particles that hardly move at all. We ignore collisions between active particles since their density is very low. We also ignore collisions between creeping particles since their relative velocity is very small. This picture suggests that only collisions between active and creeping particles matter. Thence, the velocity distribution of active particles obeys
$$\frac{\partial P(v,t)}{\partial t}=P_0(v)u(t)-vP(v,t).$$
(14)
Eq. (14) may at best describe the evolution process in the long-time limit. However, for the sake of tractability, we apply it to the whole time range and use the natural initial condition $`P(v,0)=P_0(v)`$. Eq. (14) is an inhomogeneous linear differential equation which is easily solved to give
$$P(v,t)=P_0(v)e^{-vt}\left[1+\int _0^tdt^{\prime }u(t^{\prime })e^{vt^{\prime }}\right].$$
(15)
This solution implies
$$P(v,t)\simeq u(t)v^{-1}P_0(v)\quad \mathrm{for}\quad v\gg \frac{1}{t},$$
(16)
which resembles Eq. (13).
To close the solution of Eq. (15), we must determine $`u(t)`$. It is possible to plug (15) into the definition of the average velocity, Eq. (9), and get an integral equation for $`u(t)`$. In the following we use another approach, which is technically simpler. Note that the density of active particles, $`\int dv\,P(v,t)`$, is manifestly conserved by Eq. (14). After integration over velocity, Eq. (15) becomes
$$1=\widehat{P}_0(t)+\int _0^tdt^{\prime }u(t^{\prime })\widehat{P}_0(t-t^{\prime }),$$
(17)
where $`\widehat{P}_0`$ is the Laplace transform of the intrinsic velocity distribution,
$$\widehat{P}_0(t)=\int _0^{\infty }dv\,P_0(v)e^{-vt}.$$
(18)
One can guess the long time behavior of $`P(v)`$ without actually solving Eq. (17). Let us assume that the average velocity varies slowly with $`t`$. Then the integral on the right-hand side of Eq. (17) can be estimated as $`u(t)\int _0^tdt^{\prime }\widehat{P}_0(t^{\prime })`$, which implies
$$u(t)\simeq \left[\int _0^tdt^{\prime }\widehat{P}_0(t^{\prime })\right]^{-1}.$$
(19)
For an intrinsic velocity distribution with an algebraic behavior (12) in the small-$`v`$ limit, we have $`\widehat{P}_0(t)\sim t^{-1-\mu }`$ for large $`t`$. Hence $`\int _0^tdt^{\prime }\widehat{P}_0(t^{\prime })\sim t^{-\mu }`$ for $`\mu <0`$, and it follows from Eq. (19) that $`u(t)\sim t^\mu `$. The above derivation is quite careless, though the final result is correct. Now we derive this result in a more rigorous way.
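The integral equation (17) can also be solved numerically, which gives an independent check of this scaling. The sketch below discretizes the convolution with a simple rectangle rule for the gamma-family distribution $`P_0(v)=v^\mu e^{-v}/\mathrm{\Gamma }(\mu +1)`$ with $`\mu =-1/2`$ (the same family used in the numerical section below), whose Laplace transform is $`\widehat{P}_0(t)=(1+t)^{-(\mu +1)}`$; the product $`u(t)t^{1/2}`$ should then flatten at large $`t`$, consistent with $`u\sim t^\mu `$. The grid parameters are arbitrary illustrative choices.

```python
import numpy as np

mu = -0.5
P0hat = lambda t: (1.0 + t) ** (-(mu + 1.0))    # Laplace transform of this P_0

h, nsteps = 0.02, 20_000                        # time grid up to t = 400
u = np.zeros(nsteps + 1)
for n in range(1, nsteps + 1):
    tn = n * h
    # rectangle-rule discretization of 1 - P0hat(tn) = int_0^tn u(t') P0hat(tn-t') dt',
    # solved for u[n] (note P0hat(0) = 1)
    s = np.dot(u[1:n], P0hat(tn - h * np.arange(1, n))) if n > 1 else 0.0
    u[n] = (1.0 - P0hat(tn)) / h - s

for tt in (10, 50, 200, 400):
    i = int(tt / h)
    print(f"t={tt:4d}   u(t)*t^0.5 = {u[i] * tt ** 0.5:.3f}")
# roughly constant at large t, confirming u(t) ~ t^mu = t^(-1/2)
```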
The convolution form of the integral in Eq. (17) suggests applying the Laplace transform once more. It yields
$$\frac{1}{s}=\int _0^{\infty }dv\frac{P_0(v)}{s+v}+\widehat{u}(s)\int _0^{\infty }dv\frac{P_0(v)}{s+v}.$$
(20)
Here
$$\widehat{u}(s)=\int _0^{\infty }dt\,u(t)e^{-st}$$
(21)
is the Laplace transform of the average velocity. The double Laplace transform of the intrinsic velocity distribution has been simplified:
$$\int _0^{\infty }dt\,e^{-st}\int _0^{\infty }dv\,P_0(v)e^{-vt}=\int _0^{\infty }dv\frac{P_0(v)}{s+v}.$$
(22)
Thus the Laplace transform of the average velocity is
$$\widehat{u}(s)=-1+\left[s\int _0^{\infty }dv\frac{P_0(v)}{s+v}\right]^{-1}.$$
(23)
Generally, one cannot obtain more explicit results. Given that the above approach describes only the long-time asymptotics, let us focus on this regime. To probe the long-time behavior, one should determine the small $`s`$ asymptotics of $`\widehat{u}(s)`$. For algebraic intrinsic velocity distributions (12), the asymptotics of (22) reads
$`\int _0^{\infty }dv\frac{P_0(v)}{s+v}\simeq As^\mu \int _0^{\infty }dw\frac{w^\mu }{w+1}=-\frac{A\pi }{\mathrm{sin}(\pi \mu )}s^\mu .`$ (24)
This applies for $`-1<\mu <0`$ (the lower bound comes from the normalization requirement, $`\int dv\,P_0(v)=1`$). Plugging (24) into (23) yields
$$\widehat{u}(s)\simeq -\frac{\mathrm{sin}(\pi \mu )}{A\pi }s^{-1-\mu }\quad \mathrm{for}\quad s\to 0,$$
(25)
and by making the inverse Laplace transform, we finally arrive at
$$u(t)\simeq -\frac{\mathrm{sin}(\pi \mu )}{A\pi \mathrm{\Gamma }(1+\mu )}t^\mu \quad \mathrm{for}\quad t\to \infty .$$
(26)
This result agrees with the asymptotics we naively derived earlier.
A special consideration is required for the borderline case of $`\mu =0`$. For concreteness, consider the exponential intrinsic distribution, $`P_0(v)=\mathrm{exp}(-v)`$.  Its double Laplace transform reads
$`\int _0^{\infty }dv\frac{e^{-v}}{s+v}=e^sE_1(s),`$
where
$`E_1(s)=\int _1^{\infty }dx\frac{e^{-xs}}{x}`$
is the exponential integral. As a result, Eq. (23) becomes
$$\widehat{u}(s)=-1+\frac{1}{se^sE_1(s)}.$$
(27)
Using the well-known asymptotics of the exponential integral, $`E_1(s)=-\mathrm{ln}s-\gamma +𝒪(s)`$ (where $`\gamma \approx 0.5772`$ is Euler’s constant), we transform Eq. (27) into
$$\widehat{u}(s)=-\frac{1}{s(\mathrm{ln}s+\gamma )}+𝒪\left(\frac{1}{\mathrm{ln}s}\right).$$
(28)
Performing the inverse Laplace transform gives
$$u(t)\simeq \frac{1}{\mathrm{ln}t}\quad \mathrm{for}\quad t\to \infty .$$
(29)
To summarize, for the family of intrinsic velocity distribution with algebraic behavior in the small $`v`$ limit (12), our predictions for the long-time asymptotics of the average velocity $`u(t)`$ are:
$$u(t)\sim \begin{cases}\mathrm{const},& \text{for }\mu >0\text{;}\\ (\mathrm{ln}t)^{-1},& \text{for }\mu =0\text{;}\\ t^\mu ,& \text{for }-1<\mu <0\text{.}\end{cases}$$
(30)
To check the validity of asymptotic predictions and, more generally, to see if the mean-field theory is applicable at all, we perform molecular dynamics simulations and solve the Boltzmann equation (7) numerically. To sample distinct regimes predicted in (30) we consider the intrinsic velocity distribution
$`P_0(v)=\frac{v^\mu e^{-v}}{\mathrm{\Gamma }(\mu +1)}`$ (31)
with $`\mu =1,0,-1/2`$.
In molecular dynamics simulations, we place $`N`$ particles onto the ring of length $`L=N`$ so that the average density is equal to one. Most of our simulations are performed for $`N=5\times 10^4`$ particles, but we also simulated a twice larger system and found no appreciable difference. Initially, particle velocities are randomly drawn from the distribution $`P_0(v)`$. The model is updated according to the collision-time-list algorithm suggested in Ref..
To solve Eq. (7) numerically, we use Euler’s time update with both uniform and non-uniform grid; in the latter case, we take the $`v_N=(N/N_{max})^4v_{max}`$ velocity grid with $`v_{max}=15`$ and $`N_{max}=300`$–$`500`$. Integrals on the right-hand side of Eq. (7) are calculated using the trapezoid rule; time increment $`\delta t=0.1`$ was found to be suitable for all three $`P_0(v)`$.
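A stripped-down version of this scheme — explicit Euler in time, trapezoid rule for the collision integrals of Eq. (7) on a uniform velocity grid — is sketched below for the $`\mu =1`$ case; the grid sizes and time step here are illustrative and cruder than those quoted above.

```python
import numpy as np

mu = 1.0                                     # P_0(v) = v e^{-v}: steady-state case
vmax, N = 15.0, 600
v = np.linspace(0.0, vmax, N)
dv = v[1] - v[0]
P0 = v ** mu * np.exp(-v)                    # Gamma(mu+1) = 1 for mu = 1
P0 /= np.trapz(P0, v)                        # renormalize on the finite grid

def cumtrap(f):                              # cumulative trapezoid from v = 0
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * dv)
    return out

S = np.trapz(P0, v) - cumtrap(P0)            # S(v2) = int_{v2}^inf P_0
P = P0.copy()
dt, nsteps = 0.05, 2000
for _ in range(nsteps):
    C0, C1 = cumtrap(P), cumtrap(v * P)
    loss = P * (v * C0 - C1)                                  # first term of Eq. (7)
    Q = (np.trapz(v * P, v) - C1) - v * (np.trapz(P, v) - C0) # int_{v2}(v1-v2)P dv1
    gain = P0 * cumtrap(np.where(S > 1e-12, Q * P / np.maximum(S, 1e-12), 0.0))
    P = P + dt * (gain - loss)

u = np.trapz(v * P, v) / np.trapz(P, v)
print(f"u(t={dt*nsteps:.0f}) = {u:.3f}, down from u(0) = {np.trapz(v*P0, v):.3f}")
# the average velocity decreases toward its steady-state value, as expected for mu > 0
```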
The main conclusion is that results of molecular dynamics simulations and numerical solutions of the mean-field equation are virtually identical (see, e.g., Fig. 1). It confirms our assumption that the system remains well-stirred and no appreciable spatial correlations develop. Additionally, Fig. 1 shows that for $`P_0(v)=ve^{-v}`$, the approach of $`P(v,t)`$ to the steady state is non-uniform in velocity. This is caused by the obvious fact that for any finite time, the velocity distribution $`P(v,t)`$ must still vanish at the lower cutoff as $`P_0(v)`$ does. In other words, the steady state (13) is reached outside the “boundary layer”, $`v\gg v_{*}(t)`$, while within the boundary layer, $`v\ll v_{*}(t)`$, the velocity distribution continues to evolve. The threshold velocity $`v_{*}(t)`$ is estimated by evaluating the first, leading term in the right-hand side of Eq. (10): $`t^{-1}P\sim v_{*}^{\mu +1}P`$, which implies $`v_{*}\sim t^{-1/(\mu +1)}`$. The width of the boundary layer shrinks with time but the boundary layer still exists ad infinitum.
Figs. 2–4 plot the average velocity vs. time for the intrinsic velocity distributions (31) with $`\mu =1,0,-1/2`$, respectively. We find good agreement with the theoretical prediction of Eq. (30) when $`\mu \ge 0`$. In Fig. 5 the plot of the local exponent $`\alpha (t)\equiv d\mathrm{ln}[u(t)]/d\mathrm{ln}[t]`$ vs. $`t^{-1/2}`$ is shown for $`\mu =-1/2`$. The results of extrapolation of $`\alpha (t)`$ to the $`t\to \infty `$ limit do not contradict the theoretical prediction, $`\alpha =-1/2`$.
In summary, we have shown that the fate of the system of passing particles is determined by the behavior of the intrinsic velocity distribution near its lower cutoff: If $`P_0(v)`$ vanishes in this limit, the system reaches a steady state; otherwise, the evolution continues forever. Comparison between solutions of the mean-field Boltzmann equation and results of molecular dynamics simulations suggests that the mean-field theory description is exact. It will be interesting to confirm this result rigorously.
We gratefully acknowledge partial support from NSF and ARO. |
no-problem/9912/math-ph9912012.html | ar5iv | text | # Trapping of a model-system for a soliton in a well
## 1 Introduction
Topological solitons arise as nontrivial solutions in field theories with nonlinear interactions. These solutions are stable against dispersion. Topology enters through the absolute conservation of a topological charge, or winding number.
It is for this reason they become so important in the description of phenomena like, optical self-focusing, magnetic flux in Josephson junctions or even the very existence of stable elementary particles, such as the skyrmion , as a model of hadrons.
Interactions of solitons with external agents become extremely important. These interactions allow us to test the validity of such models in real situations.
In a previous work the interaction of a soliton in one space dimension with finite size impurities was investigated.
In the works of Kivshar et al. (see also ref. ), it was found that the soliton displays unique phenomena when it interacts with an external impurity. The existence of trapped solutions for positive energy or, bound states in the continuum, is a very distinctive effect for the soliton in interaction with an attractive well.
We can understand the origin of impurity interactions of a soliton by looking at the impurity as a nontrivial medium in which the soliton propagates. An easy way to visualize these interactions consists in introducing a nontrivial metric for the relevant spacetime. The metric carries the information of the medium characteristics.
A 1+1 dimensional scalar field theory supporting topological solitons in flat space, immersed in a background determined by the metric $`g_{\mu \nu }`$ through minimal coupling, is given by
$$\mathcal{L}=\sqrt{g}\left[g^{\mu \nu }\frac{1}{2}\partial _\mu \varphi \,\partial _\nu \varphi -U(\varphi )\right]$$
(1)
where $`g`$ is the absolute value of the determinant of the metric, and $`U`$ is the self-interaction potential that enables the existence of the soliton. For a weak potential we have
$`g_{00}`$ $`\approx `$ $`1+V(x)`$
$`g_{11}`$ $`=`$ $`-1`$
$`g_{01}`$ $`=`$ $`g_{10}=0`$ (2)
Where $`V(x)`$ is the external space dependent potential. The equation of motion of the soliton in the background space becomes
$$\frac{\partial ^2\varphi }{\partial t^2}-\sqrt{g}^{-1}\frac{\partial }{\partial x}\left[\sqrt{g}\frac{\partial \varphi }{\partial x}\right]+g_{00}\frac{\partial U}{\partial \varphi }=0.$$
This equation is identical, for slowly varying potentials, to the equation of motion of a soliton interacting with an impurity $`V(x)`$. Impurity interactions are therefore acceptable couplings of a soliton to an external potential. It is also the only way to couple the soliton without spoiling the topological boundary conditions. The source term generated by the metric is essentially a space dependent mass term.
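To make the last statement explicit (a short expansion not given in the original text, valid to first order in a weak, slowly varying $`V(x)`$, for which $`\sqrt{g}\approx 1+V/2`$):

$$\frac{\partial ^2\varphi }{\partial t^2}-\frac{\partial ^2\varphi }{\partial x^2}-\frac{1}{2}\frac{dV}{dx}\frac{\partial \varphi }{\partial x}+\left[1+V(x)\right]\frac{\partial U}{\partial \varphi }\simeq 0,$$

so that when the gradient term is negligible, the metric enters only through the space dependent factor $`1+V(x)`$ multiplying the self-interaction term.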
The interaction of a soliton with an attractive impurity shows, however, some puzzling effects. A soliton can be trapped in the well even when it impinges onto it with positive kinetic energy. Energy conservation demands that the soliton fluctuates and distorts in trapped states inside the well. Even more counterintuitive is the fact that the soliton can be reflected by the well.
Neither of these effects is possible for classical point particles. The difference must obviously be due to the extended character of the soliton.
This was indeed put in evidence in the works of Kivshar et al. It was shown there that the bulk soliton behavior may be reproduced qualitatively by taking the center of the soliton as a time dependent collective coordinate coupled to the major excitation modes of the soliton in the well: the impurity mode and a shape distortion mode. The first mode is excited inside the well only and appears as an oscillating packet centered at the well. The shape mode is an excitation that accompanies the soliton with a time dependent amplitude along its scattering, and accounts for the distortions of the free soliton. The dynamics of the soliton was then replaced by the equations of motion of three classical particle-like excitations: the center of the soliton, the amplitude of the impurity mode and the amplitude of the shape mode. For a $`\delta `$-function type of well, it was found that the system behaves analogously to the soliton. Kinetic energy of the soliton center can be transferred to the impurity and the shape modes. The system of three effective degrees of freedom can resonate inside the well and be trapped. Reflection by the attractive well is also observed. The system captures the essential features of the behavior of the soliton.
However, both the interactions between the collective degrees of freedom and the dynamics of the system are derived from the soliton itself. Moreover, the behavior was shown to hold for $`\delta `$ type of wells only.
Later it was shown that all the features of the scattering of a soliton show up in the case of a finite-size well too.
In the present work we show that the effects are not specific to the soliton-generated dynamics and interactions.
We are able to reproduce here all the abovementioned effects with a classical model for an extended object, regardless of the soliton dynamics. In the next section we show that a simple classical model exhibits the same behavior as the soliton. It can be trapped and reflected by an attractive well. The system will also show chaotic behavior.
The classical system can serve as a nice introductory example for the surprising behavior of solitons. It demands only basic Lagrangian mechanics knowledge, but it has many interesting features that are easily visualized by undergraduate students. It may also serve as an example of chaotic behavior in classical mechanics.
## 2 A classical model of trapping
Kinks and other topological defects are sometimes used to model classical systems of masses. We will find here that the analogy is more than mathematical. The very behavior of the soliton is exactly the one observed in a two-body system interacting with an external potential.
In ref., the dynamics of a soliton interacting with a well was also studied by selecting the collective coordinates of the soliton center, its shape-mode excitation amplitude and an impurity mode. Using this scheme, it was found that this classical system shows behavior analogous to the soliton itself, namely trapped states and reflection by the attractive impurity. The specific dynamics of the soliton in terms of its collective coordinate and its excitations, including the shape mode, were crucial for obtaining the abovementioned results.
We show here that the effects are generic: any system sharing the main feature of the soliton, namely its extended character, will indeed yield similar results. This is an unexpected situation whose motivation arose entirely from the behavior of the soliton, and whose relevance goes beyond it, as will be explained below.
In order to justify the connection with the behavior of the model we appeal to a physical scenario in which both Sine-Gordon solitons and Kinks arise. Sine-Gordon solitons arise in large Josephson junctions and in the motion of dislocations in a one-dimensional crystal. Kinks arise in the latter also when the substrate potential is nonperiodic.
Consider the Hamiltonian of dislocations in a crystal with nearest neighbor interactions
$`H={\displaystyle \sum _n\left[\frac{1}{2}m\left(\frac{du_n}{dt}\right)^2+\frac{G}{2}\left(u_{n+1}-u_n\right)^2+V(u_n)\right]}`$ (3)
Where $`u_n`$ is the displacement of the dislocation at site $`n`$, $`G`$ is the spring constant between particles and $`V`$ is a site potential generated by the substrate chain of fixed particles upon which the mobile dislocations move.
The above Hamiltonian supports solitons in the continuum, strong coupling limit. In particular for kinks, only a few dislocation centers are needed to generate the desired effect of the moving soliton. The minimal set would then be a couple of dislocation ’particles’ moving along the substrate. Now suppose the above model is applied to a substrate for which the parameters of the substrate potential vary. This is analogous to the variation of the metric in the description of the previous section. In such a scenario we have to modify the substrate potential by adding a local interaction at fixed sites in the lattice. This is essentially the procedure in soliton-impurity scattering. If the effects found by Kivshar et al. are indeed based on the above simple dislocation model, then these should appear clearly when a couple of dislocation degrees of freedom scatter off an external potential. We will see below that this is borne out in a simple two-particle model that imitates the soliton behavior.
Consider a system of two classical point particles (two of the dislocations above) connected by a massless spring, together with a repulsive force between them, needed to prevent their collapse to zero size, that subsumes the behavior of the rest of the chain of dislocations. With only two degrees of freedom, we are eliminating the rigidity of the chain, thereby introducing the spurious possibility of complete overlap between the two sites, which does not occur when the chain is infinite. Hence the need for a repulsive interaction.
The above simplistic model is deliberately not directly related to the collective coordinate treatment of ref. . The aim is to show that the particular dynamics of the soliton is not the cause of the peculiar phenomena previously found.
When each particle in the system is allowed to interact with an external potential, we are imitating the impurity force or the local change in the dislocation potential.
The classical nonrelativistic one-dimensional Lagrangian for the system of equal masses $`m_1=m_2=1`$ becomes:
$$\mathcal{L}_{sys}=\frac{\dot{x}_1^2}{2}+\frac{\dot{x}_2^2}{2}-k\frac{(x_1-x_2)^2}{2}-\frac{\alpha }{|x_1-x_2|^n}+V(x_1)+V(x_2)$$
(4)
For the potential well we take
$$V(x)=Ae^{-\beta x^2}$$
(5)
although any finite size well may serve for this purpose. We prepare the two-particle system at its equilibrium interparticle separation, at a large distance from the well, and launch it towards the well with an initial speed $`v`$. The equilibrium separation satisfies $`r_0^{n+2}=\frac{n\alpha }{k}`$. We here use $`n=2`$.
The equations of motion are not solvable analytically. However, we can show that for a well large compared to the equilibrium distance $`r_0`$ the system may be trapped and oscillate inside it. Transforming to relative and center of mass coordinates, $`r=\frac{x_2-x_1}{2},R=\frac{x_1+x_2}{2}`$, and using the ansatz $`r=r_0+\delta (t)`$, with $`\delta `$ a small parameter, we find the equations of motion near the center of the well $`R=0`$
$`\ddot{\delta }+2k\delta +2Ae^{-\beta r_0^2}\beta (r_0+\delta )=0`$
$`\ddot{R}+2A\beta Re^{-\beta r_0^2}=0`$ (6)
Where we have used $`\beta r_0^2\ll 1`$, a wide well as compared to the equilibrium distance of the system. Passing to a new coordinate
$`\delta (t)=-{\displaystyle \frac{r_0}{1+\frac{k}{A\beta }e^{\beta r_0^2}}}+ϵ(t)`$ (7)
the first of equations (6) becomes
$$\ddot{ϵ}+2kϵ+2Ae^{-\beta r_0^2}\beta ϵ=0$$
(8)
It is clear from the above equations that the center of mass coordinate of the system oscillates around the center of the well, while the relative coordinate oscillates too, with the small amplitude $`ϵ`$. Moreover, the system shrinks inside the well. The oscillations of the relative coordinate $`r`$ compensate for the loss of translational kinetic energy of the system, which impinged from infinity with a fixed relative separation. The above treatment demonstrates that there is at least room for the trapping to occur.
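For later reference, the small-oscillation frequencies implied by Eqs. (6) and (8) (simply read off from the linearized equations above, under the same $`\beta r_0^2\ll 1`$ assumption) are

$$\omega _R^2=2A\beta e^{-\beta r_0^2},\qquad \omega _ϵ^2=2k+2A\beta e^{-\beta r_0^2},$$

so the internal mode always oscillates faster than the center of mass mode for any $`k>0`$.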
We now proceed to show the numerical results of the exact calculation of the development of the model.
Using the numerical method of ref. , we can find the outcome of the scattering events as a function of the initial speed. The system starts with a certain initial center of mass velocity $`v`$ far away from the well. It is prepared with the equilibrium relative separation $`r_0`$, and the outcome of the scattering is monitored as a function of the initial conditions.
Figure 1 exemplifies the results for the choice of parameters $`k=1,\alpha =1,n=2,A=2,\beta =1`$.
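As a rough indication of how such a scan can be set up (a minimal sketch rather than the actual program of ref. ; the use of SciPy’s general-purpose integrator and the outcome-classification thresholds are choices made here for illustration only):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters quoted in the text for Figure 1.
k, alpha, n, A, beta = 1.0, 1.0, 2, 2.0, 1.0
r0 = (n * alpha / k) ** (1.0 / (n + 2))          # equilibrium separation

def dV(x):
    """Derivative of the well term A*exp(-beta*x^2) that enters the Lagrangian with a + sign."""
    return -2.0 * A * beta * x * np.exp(-beta * x * x)

def rhs(t, y):
    x1, x2, v1, v2 = y
    s = x1 - x2
    f_int = -k * s + n * alpha * np.sign(s) / abs(s) ** (n + 1)   # spring + short-range repulsion
    return [v1, v2, f_int + dV(x1), -f_int + dV(x2)]

def outcome(v, x_start=-20.0, t_max=2000.0):
    """Crude classification of one scattering event; thresholds are illustrative only."""
    y0 = [x_start + r0 / 2.0, x_start - r0 / 2.0, v, v]
    sol = solve_ivp(rhs, (0.0, t_max), y0, max_step=0.1, rtol=1e-8)
    R = 0.5 * (sol.y[0, -1] + sol.y[1, -1])       # final centre-of-mass position
    if abs(R) < 5.0:
        return "trapped"
    return "transmitted" if R > 0 else "reflected"

for v0 in np.arange(0.05, 0.31, 0.01):
    print(f"v = {v0:.2f}: {outcome(v0)}")
```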
Quite unexpectedly, it is found that the system behaves exactly like the soliton.
The system can be trapped ($`v_{final}=0`$), reflected ($`v_{final}<0`$) or transmitted ($`v_{final}>0`$) through the well by varying the initial speed.
When the system is trapped, it oscillates with a null average speed, the kinetic energy being stored in the vibrational and deformation modes.
Minute changes of the initial speed around a value leading to a trapped state may generate reflection or transmission events.
The effects are independent of the functional form of the interactions and of the external potential, as well as of the values of the parameters.
In Figure 1 we used a grid in $`v`$ of $`dv=0.001`$. Using a finer grid, each region of reflection-transmission unfolds into more islands of trapping, reflection and transmission.
Finer and finer grids show more and more structure.
Figure 2 shows a detailed expansion of the velocity range around $`v=0.12`$ with a grid $`dv=0.0002`$. The system is chaotic: an infinitesimal change in the initial speed produces diverging results.
Many of the phenomena related to chaotic behavior may be identified in the system, namely scaling, bifurcation and perhaps even fractal structure.
It is now safer to claim that the unexpected behavior of a soliton interacting with an attractive well may be traced back to its extended nature. If we regard each $`\varphi (x)`$ as a classical pointlike object, we will find interactions between neighboring particles of attractive and repulsive character. The basic attractive interaction is provided by the space derivative of the soliton Lagrangian and a piece of the self-interaction potential, while the repulsive interaction is provided by the latter and the coupling to the remainder of the soliton, or linear chain in a discrete model.
## 3 Final remarks
The simplest implementation of the system studied would be a toy-like system of two masses tied to a spring, sliding on a frictionless table with a well carved into it. Two atoms in a molecule scattering off an external Van der Waals potential might show the same effects in a quasi-classical approximation. However, quantum effects can blur the picture due to interference.
Turning the process backwards: an extended object, be it a soliton or a classical assembly of bound particles, in a trapped state can suddenly be freed from it provided some random interaction causes the reversal of the process of trapping, a process reminiscent of the decay of metastable states in quantum mechanics. The concept of trapping in general is not discussed in the classical mechanics literature. Soliton physics has taught us that an extended object made of particles in interaction has a much richer behavior than expected.
Acknowledgements
This work was supported in part by the Department of Energy under grant DE-FG03-93ER40773 and by the National Science Foundation under grant PHY-9413872.
# Cosmic Ray Origin, Acceleration and Propagation
## Introduction
A review of a collection of papers on cosmic ray origin, acceleration and propagation is necessarily broad. Historically, International Cosmic Ray Conferences have separated the papers in these extensive subjects for consideration by different Rapporteurs. However, since the Rome conference in 1995, a new precedent has been established with a review of all these fields becoming the responsibility of one individual. This has perhaps been propelled by the burgeoning number of astrophysics-related contributions to the meetings, and has reduced the comprehensiveness possible in a Rapporteur’s written summary. This tract represents my attempt to assemble a description of interesting and new results presented at the Salt Lake City conference pertaining to the origin, acceleration and propagation themes. Space limitations preclude completeness, and accordingly I ask for forbearance from authors who feel their work is not given a sufficient exposure here. I also offer the standard disclaimer: that the views expressed here are personal, and may not reflect the perspective of “The Management,” i.e. the contributing authors whose research has provided such a rewarding experience for this Rapporteur.
The material I was asked to report upon can be grouped into five categories: origin and composition, for which there were $`11`$ papers, propagation of ions and electrons (26 papers), acceleration theory and astrophysical applications of acceleration models (33 papers), and discussions of ultra-high energy cosmic rays (UHECRs, 12 papers), with about 5 papers falling into the miscellaneous pot. These themes define the structure of this review, and there have been varying degrees of advancement in these fields. The citation scheme adopted here identifies conference papers by OG 3.\*.\* designations, for which the reader should refer to the proceedings volumes, either hardcopy or on-line at http://krusty.physics.utah.edu/~icrc1999/proceedings.html.
## Origin and Composition
The power supply for the acceleration of galactic cosmic rays is traditionally attributed to supernova remnants, yet there is much debate as to the mass and type of the progenitor stars, the specific nature of the circumstellar environment, and the galactic origin of the material accelerated. This discussion has spawned a field rich with ideas, with diagnostics largely provided by cosmic ray primary compositional data. The papers presented at this meeting generally relate to one of two problems: (i) the discussion of whether fresh supernova ejecta or environmental dust grains provide the seeds for cosmic ray acceleration, and (ii) explanations of the Li, Be, and B abundances, the well-known LiBeB problem.
Early ideas on cosmic rays focused on the environment for their acceleration, assuming some pre-existing seed population, rather than addressing the question of the origin of such seed material. Over a period of time, it became clear that the galactic cosmic ray (GCR)/solar photosphere abundance ratios provided valuable clues to the origin of galactic cosmic ray matter. Such ratios exhibit (e.g. see Silberberg, Tsao & Barghouty OG 3.1.06) a general enrichment of refractory elements (i.e. those with high condensation temperatures: Mg, Al, Si, Fe and Ni) relative to highly volatile ones (principally H, He, N, Ne and Ar). Two competing interpretations of this property emerged. The first is that low energy ions are pre-accelerated in stellar coronae to enrich the interstellar medium (ISM) before participating in acceleration at proximate SNR shells. In early work, Cassé & Goret CasseGoret78 and Meyer Meyer85 suggested that enrichment correlates with elemental first ionization potential (FIP; see also Silberberg et al. OG 3.1.06), with high-FIP elements being somewhat suppressed. The FIP interpretation was largely driven by the discovery that FIP biases the composition of solar energetic particles; hence the connection to stellar coronae was made. The second proposal originated with Bibring & Cesarsky bibces81 and Epstein Epstein80 , where erosion products of grains formed from old material seed the acceleration process, so that enrichment should correlate with volatility mde97 . The acceleration process is then naturally enhanced in non-linear SNR shocks with increasing mass-to-charge (A/Q) ratio of the species edm97 in a manner commensurate with observed abundances.
While FIP proponents are invoking atomic physics concepts and a volatility interpretation appeals to molecular physics, the two views are not entirely opposite: FIP and volatility are clearly related quantities, albeit in a rather subtle manner. For many light and heavier sub-Fe elements, these two scenarios provide comparable GCR/solar abundance ratios. Yet success of the FIP-based models is contingent upon a number of disconnected and controversial assumptions, pertaining mostly to H, He, and <sup>22</sup>Ne and the contribution of Wolf-Rayet winds. In contrast, the volatility description offers a more coherent picture with fewer debatable assumptions, depending principally on the chemistry and composition of interstellar grains. It is therefore becoming the more widely-accepted description, with the work of Lingenfelter & Ramaty (OG 3.1.05) coming out in support of volatility as the descriptor of cosmic ray abundances. Nevertheless, their research group had previously advocated lrk98 fresh supernova ejecta as the seeds for acceleration, as opposed to the grains created from older matter in the model of Meyer, Drury & Ellison mde97 ; edm97 . This was a major point of controversy that was addressed and resolved at the Conference, based on two discriminating pieces of information.
The first diagnostic concerned the C and O ratios. These elements provide critical diagnostics since they possess intermediate FIPs and are moderately volatile, and hence are bridge elements between the volatiles and the refractories. Both are key products of nucleosynthesis in massive O and B stars, which are the progenitors of the type Ib and II supernova that dominate the observed supernova population. The property crucial to the success of the grain-acceleration proposition is that these two species are present in grains (e.g. various oxides and graphite) in just the appropriate amounts to explain their abundances mde97 . Consequently, it becomes apparent that interstellar grain chemistry is the important parameter for the composition problem, and should be a focus of future research efforts.
The second decisive indicator concerned the age of the seeds for acceleration. Since grains can be much older than the SNRs that tap them, the grain-induced cosmic ray composition picture mde97 ; edm97 is less subject to temporal restrictions provided by unstable nuclei that offer markers of the chronology of nucleosynthesis. Foremost among these is the electron K-capture decay of <sup>59</sup>Ni to <sup>59</sup>Co, with a half-life of around $`10^5`$ years, for which the ACE experiment has recently provided discriminating information: the low abundance of <sup>59</sup>Ni relative to Fe and the high abundance of <sup>59</sup>Co (Weidenbeck et al., OG 1.1.01) imply a passing of at least $`10^5`$ years between nucleosynthesis and acceleration. Meyer, Drury & Ellison have consistently argued that grains are easily old enough to satisfy the ACE temporal constraints. While Lingenfelter et al. lrk98 had advocated a fresh ejecta scenario, Higdon, Lingenfelter & Ramaty’s contribution (OG 3.1.04) indicated an evolution in their position so that the two groups concurred that the ACE dataset does indeed provide age lower bounds that render fresh (i.e. young) ejecta unlikely seeds for the acceleration process at SNR shocks. Focus has now turned to timescales of ejecta mixing well in excess of $`10^6`$ years, which can be suitably probed (Waddington, OG 3.2.33) by abundance measurements of actinides such as Th, Np, Cm and Pu. Westphal (OG 3.1.09) discussed the potential for the ECCO experiment aboard the International Space Station to provide such discriminating data.
The mixing question is pertinent to the discussion of whether superbubbles with many SNIb/SNII explosions as opposed to more isolated ISM regions with SNIa progenitors are the locales for cosmic ray origin. The site issue, still unresolved, provides a natural progression to the LiBeB problem. This longstanding conundrum relates to the abundances of Li, Be and B in old halo stars, the principal spallation products of reactions of nucleosynthetic or ambient <sup>12</sup>C, <sup>14</sup>N and <sup>16</sup>O in collisions with hydrogen and helium of either ISM or ejecta origin (see, e.g. Korejwo et al. OG 3.2.22, for accelerator data on <sup>12</sup>C fragmentation/spallation cross-sections for various products in the GeV/nucleon range). Balmer-like line (Be II) observations indicate a linear correlation (e.g. Ramaty, Lingenfelter & Kozlovsky, OG 3.1.03; Fields & Olive OG 3.2.04) of the abundance of LiBeB with Fe metallicity (Fe/H) in these metal-poor stars (note that Fe/H is effectively an age parameter for these systems). Yet, theoretically (see the review in vroc98 ) LiBeB is expected to increase quadratically with Fe/H, since, for a constant supernova rate, LiBeB/H should scale as the integral over time of the supernova rate times the total number of antecedent supernovae in the galaxy. This apparent conflict becomes cleaner, observationally, by considering Be alone, since it provides no ambiguities; some of the <sup>7</sup>Li is probably a product of primordial nucleosynthesis, and much of the <sup>11</sup>B population may result from neutrino-induced spallation (on <sup>12</sup>C) in supernovae.
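Making the scaling argument explicit (a restatement of the reasoning above, with $`\mathrm{\Phi }_{\mathrm{CR}}`$ the roughly constant cosmic ray flux and a constant supernova rate, so that the number of antecedent supernovae, and hence Fe/H, grows linearly with time):

$$\frac{d}{dt}\left(\frac{\mathrm{Be}}{\mathrm{H}}\right)\propto \mathrm{\Phi }_{\mathrm{CR}}\times \frac{\mathrm{CNO}}{\mathrm{H}}\propto N_{\mathrm{SN}}(t)\propto t\quad \Rightarrow \quad \frac{\mathrm{Be}}{\mathrm{H}}\propto t^2\propto \left(\frac{\mathrm{Fe}}{\mathrm{H}}\right)^2,$$

in contrast to the observed linear trend.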
Presentations on explaining Be/H evolution at the Conference included papers by Ramaty, Lingenfelter & Kozlovsky (OG 3.1.03) and Parizot & Drury (OG 3.1.18, OG 3.2.51). A result common to these two groups is that, by separating the light and metallic spallation participants in space, a decoupling between the metallicity of stars and their age is effected. This is achieved if there is no significant mixing between metal-poor ISM that is accelerated at a supernova remnant’s forward shock and the enriched, high-metallicity ejecta accelerated at a remnant’s reverse shock (Parizot & Drury OG 3.1.18). The dominant contributions to Be production are then spawned by (i) low metallicity ISM ions accelerated by forward shocks colliding with metal-rich supernova ejecta, and (ii) enriched ejecta material accelerated at reverse shocks interacting with light elements from the surrounding ISM. In each case, the Be production is independent of the ISM metallicity, generating a Be/H halo star abundance proportional to Fe/H metallicity. For this reason, Ramaty, Lingenfelter & Kozlovsky (OG 3.1.03) argue that the Be/H evolution with Fe/H is a strong indication that fresh ejecta are crucial to cosmic ray origin. Reconciling the Be production with the ACE observations of <sup>59</sup>Ni should be a major objective of future studies. Supernova/cosmic ray energetics also play a constraining role in this discussion, with both Ramaty, et al. and Parizot & Drury observing that there is an underproduction of Be (by over an order of magnitude) in the early galaxy if most supernovae explode in the average ISM. This has motivated papers by Higdon, Ramaty & Lingenfelter (OG 3.1.04) and Parizot & Drury (OG 3.2.51) that describe how a superbubble/starburst locale for LiBeB generation can provide prepared metallicity-enhanced environs due to the OB stellar associations. The production rate can increase more than tenfold to match the observed abundances in this scenario because the spallation reactions involving enriched ambient CNO can tap the greater accelerating potential of forward shocks in SNRs.
A cautionary note for the LiBeB problem was sounded by Fields and Olive (OG 3.2.04). While historically Fe/H has been used as the marker of metallicity for discussing Be production, Fields and Olive argued that the O/H ratio is a far more appropriate indicator since oxygen is an actual participant in the spallation reactions that spawn Be. The consequences of such a shift in perspective are substantial. The evolution of O/H does not trace Fe/H linearly so that O and Fe are different indicators of metallicity. Accordingly, Fields and Olive observe that Be/H is more strongly dependent on O/H than Fe/H in halo (population II) star atmospheres, more closely resembling the quadratic dependence that was anticipated in incipient theoretical considerations of the LiBeB abundances. The implication of their work is that the so-called LiBeB problem is a “tempest in a teapot.” This is not entirely discouraging for theorists in their quest for nailing the origin of cosmic rays, since the data spread in the Be/H versus O/H diagram is considerable; observationally, it is more problematic to determine oxygen metallicity than Fe/H. Future refined observations from uniform/consistent stellar atmospheres should resolve this issue.
## Propagation
Studies of propagation have perhaps had the slowest evolution of the sub-fields covered here. This is essentially imposed by the pace at which new and discriminating experimental data relating to this complex problem are forthcoming. Properties of the interstellar medium of the galaxy remain enigmatic, presently prohibiting the elimination of any one of the handful of preferred propagation models. Foremost in this group is the “canonical” Leaky-Box approximation, the “tool of choice” for most members of the propagation community, due to its simplicity. More sophisticated and physically realistic models with various mutations are the halo diffusion picture, wind scenarios, turbulent diffusion model, and calculations invoking re-acceleration, each with their proponents (see berez90 for a review). There are a number of standard tests for the viability of each of these; we shall explore here the latest results separately for the cases of propagation of ions and electrons.
### Ions
A significant number of papers were presented, many producing very similar results. The leaky box model (LBM), where a “one-zone” scenario is envisaged with an escape length or rather grammage $`X_{\mathrm{lb}}`$ forming the principal model parameter, and the halo diffusion picture (HDM), where the galactic disk and halo represent two regions distinct in their source and diffusion properties, were the most common invocations (e.g. OG 3.1.16, 3.2.02, 3.2.03, 3.2.06, 3.2.07, 3.2.08, 3.2.09, 3.2.18, 3.2.32). While these two models dominate the discussion here, propagation in galactic winds (OG 3.1.16, 3.2.07, 3.2.13, 3.2.19, 3.2.32) and contributions from re-acceleration (OG 3.2.02, 3.2.07, 3.2.18, 3.2.32) were also considered.
In the LBM, the grammage parameter is often specified as a broken-power-law in rigidity $`R`$ (e.g. Ptuskin et al. OG 3.2.02), increasing as a moderate power of particle velocity $`\beta `$ at non-relativistic speeds and declining roughly as $`X_{\mathrm{lb}}\propto R^{-0.6}`$ for relativistic energies $`E`$. This form is chosen (i) to explain the observed steepening of the primary cosmic ray spectrum from the approximately $`E^{-2.1}`$ spectrum expected at sources, (ii) to match the observed secondary/primary ratios of stable species, and (iii) to accommodate spectral shapes observed in the transition region between the modulated and unmodulated ion spectrum. Coefficients of these proportionalities are of the order of a few g/cm<sup>2</sup> to match densities of the interstellar medium and establish scale-heights above the galactic plane of the order of a kpc or so. Physically, the decline in $`X_{\mathrm{lb}}`$ as a function of rigidity corresponds to the expectation of greater losses for more energetic particles. The cosmic ray production in the LBM is homogeneous in space, not being coupled to the galactic plane.
The halo diffusion model (e.g. ptuskin74 ) introduces more complexity, distinguishing between galactic disk and halo with different source densities and propagation characteristics in each region. Spatial uniformity can be assumed in each region (e.g. Ptuskin et al., OG 3.2.02, OG 3.2.32) or disk and halo can possess inhomogeneous distributions in altitude $`z`$ above the plane (e.g. Strong & Moskalenko, OG 3.2.18). The diffusive escape parameter is usually set to $`X_\mathrm{e}\propto 1/𝒟\propto R^{-1/3}`$ in accord with the rigidity dependence of the diffusion coefficient $`𝒟`$ for Kolmogorov turbulence. Essentially, free escape arises at the halo extremities in this scenario, and the selective confinement of matter near the plane renders the pathlength distribution for losses exponential as in the Leaky Box model. The vertical height of the disk is constrained by the diffusive lengthscale $`\sqrt{𝒟\tau }`$ for “interesting” radioactive isotopes of ballistic lifetime $`\tau `$ (i.e. $`\sim 10^6`$ years; discussed below). A distinct advantage of the HDM is that it can accommodate the observed low cosmic ray anisotropies that are almost constant out to $`10^{14}`$eV (e.g. kifune91 ; see also Hillas OG 3.2.10) more easily than the LBM, due to its weaker dependence of loss scale on rigidity.
Primary source spectra for species such as carbon and iron alone are insufficient to discriminate between Leaky Box and halo diffusion models (e.g. see Ptuskin et al. OG 3.2.32), being more dependent on solar modulation properties (such as the assumed force-field potential: e.g. Webber, OG 3.2.8, Strong & Moskalenko OG 3.2.18). Stable secondary to primary ratios are somewhat more sensitive to model characteristics since they probe energy loss rates in matter traversal, i.e. $`X_{\mathrm{lb}}`$ and $`X_\mathrm{e}`$, for different species involved in nuclear interactions with the interstellar medium. The most popular choices for these ratios, corresponding to spallation reactions involving the principal components of cosmic rays, are those of boron to carbon, B/C, and sub-iron group to iron nuclei, (Sc+Ti+V)/Fe. The spectrum of the spallation products traces that of the parent nuclei when they are created, with a subsequent steepening being induced by the energy-dependent propagation effects. Nevertheless, the increased data spread appearing in such ratios is sufficient to preclude unequivocal discrimination between models, so that the LBM and HDM are equally viable (e.g. OG 3.2.32) based on analysis of stable secondaries.
Hence considerable effort was expended in a number of papers that focused on radioactive isotopes. The abundances of suitable secondary radioactive nuclei provide clues to the confinement time of cosmic rays in the galaxy (e.g. Streitmatter & Stephens OG 3.2.03), and therefore offer observational diagnostics complementary to those engendered by matter traversal. Suitability is naturally governed by significant elemental abundances and lifetimes that approximate typical galactic disk diffusion timescales of 1 Myr. Therefore, excellent choices include <sup>10</sup>Be (beta decay, 2.3 Myr), <sup>26</sup>Al (inverse beta decay/ K-capture, 1.6 Myr) and <sup>36</sup>Cl (beta decay, 0.4 Myr); <sup>54</sup>Mn is also a possible option, though its $`\beta ^+`$ decay lifetime is still not precisely determined. While often-quoted Al/Mg and Cl/Ar fractions represent parent/daughter nuclei pairs, the ratio of choice for <sup>10</sup>Be decay is <sup>10</sup>Be/<sup>9</sup>Be, representing the relative abundance of surviving <sup>10</sup>Be to its “sister” spallation product <sup>9</sup>Be rather than its decay offspring <sup>10</sup>B. This alternative is afforded by the well-measured cross sections for spallation reactions in accelerators. Note that <sup>10</sup>Be is optimal for experimental purposes due to the lower mass resolution required to distinguish it from other isotopes. Of particular interest is the trans-relativistic regime of 1–10 GeV/nucleon, where time-dilation effects are sampled.
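To illustrate why such isotopes act as clocks, a back-of-envelope leaky-box estimate (not taken from the papers above; the adopted escape time is an assumed, illustrative value) gives the fraction of a radioactive secondary such as <sup>10</sup>Be surviving against decay as $`\gamma \tau /(\gamma \tau +\tau _{esc})`$, which rises with energy through time dilation:

```python
# Illustrative leaky-box estimate of the surviving fraction of a radioactive
# secondary such as 10Be (the escape time is an assumption, not from the text).
m_u     = 0.931      # GeV per nucleon (approximate nucleon rest-mass energy)
tau_d   = 2.3e6      # 10Be lifetime in years, as quoted above
tau_esc = 1.5e7      # assumed galactic escape time in years (illustrative only)

def surviving_fraction(E_kin):
    """Fraction surviving against decay at kinetic energy E_kin (GeV/nucleon)."""
    gamma = 1.0 + E_kin / m_u                  # Lorentz factor (time dilation)
    return gamma * tau_d / (gamma * tau_d + tau_esc)

for E in (0.1, 0.3, 1.0, 3.0, 10.0):
    print(f"E = {E:5.1f} GeV/n  ->  surviving fraction = {surviving_fraction(E):.2f}")
```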
Since the mean proximity of sources from the solar system differs for the Leaky Box and halo diffusion models, the fractional abundances of radioactive nuclides expected for the two scenarios are generally disparate. Various data model comparisons were presented by Ptuskin, Soutoul & Streitmatter (OG 3.2.02), Streitmatter & Stephens (OG 3.2.03), and Simon & Molnar (OG 3.2.06), sometimes expressed as relative abundances (the experimentalists’ preference), and sometimes as surviving fractions (perhaps the theorist’s choice), which incorporate model-dependent information. Variations in theoretical predictions were modest, and preference for either the LBM or HDM is indiscernible given that model parameters can be appropriately fine-tuned; the abundance ratio data from Voyager, Ulysses and HEAO-3 missions are typically accurate to only a factor of two. Yet the potential for advances in this field in the near future is significant. The recent ACE data from the CRIS experiment (e.g. Yanasek et al. OG 1.1.03, and Weidenbeck’s highlight talk, these proceedings) reduced experimental uncertainties in these ratios in the 0.1–0.3 GeV/nucleon range down to the 20%–40% level. Further gains are anticipated with ISOMAX (Hams et al., OG 3.1.33), which will extend the range of exploration up to a few GeV, so as to more completely probe the mildly-relativistic regime.
The possible influence of galactic winds and interstellar cosmic ray re-acceleration complicates the propagation problem. Winds away from the galactic disk (typically at $`\sim 20`$ km/sec) necessarily enhance loss rates and therefore can impose less stringent requirements on the energy dependence of the diffusion and lead to anisotropies in the diffusion tensor (Breitschwerdt, Dogiel & Völk, OG 3.2.19); these authors argue that such winds may explain the small ratio of radial gradients of diffuse gamma rays to cosmic rays. Ptuskin et al. (OG 3.2.32) indicate that wind and minimal re-acceleration models are both just as consistent with stable secondary/primary ratio data as the LBM and HDM. Re-acceleration models did not achieve the same exposure and topicality as in previous Cosmic Ray Conferences. Their basic properties are understood. Depletions of low energy cosmic rays due to in-transit acceleration effectively eliminate the need for a broken power-law for the variation of the escape length $`X_{\mathrm{lb}}`$ with rigidity. Re-acceleration alleviates the problem of weakly rigidity-dependent, low-level anisotropies by permitting a reduced dependence of the escape length on $`R`$. At the same time, re-acceleration has a profound influence on ions below 10 GeV/nucleon (Jones et al. OG 3.2.07) that have long residence times; this becomes an asset when trying to fit B/C and (sub-Fe)/Fe spectral flattenings in the low-energy modulation range.
In concluding the discussion of ion propagation, note that two formalism papers were contributed by Forman (OG 3.2.11) and Ragot (OG 3.2.45), which focused on quasi-linear theory aspects of particle diffusion in field turbulence (gyro-resonant and non-resonant, respectively), works that, while interesting for propagation specialists, are more salient to heliospheric issues in the SH sessions.
### Electrons
Considerations of electron propagation were largely confined to the work of one research group, Webber and his collaborators. Nothing extremely new was forthcoming, yet discussion of electrons provides an interesting forum for the interplay between cosmic ray physics and astrophysics. The observed cosmic ray (total) electron spectrum is steeper in the 3–100 GeV range than its ion counterpart mueller95 , suggesting either that ions and electrons possess distinct propagation characteristics, or that electron source spectra are steeper than ion ones. This latter alternative was promoted in several papers: Stephens (OG 3.2.14), Higbie et al. (OG 3.2.15), Rockstroh et al. (OG 3.2.16) and Peterson et al. (OG 3.2.17). Inferences in this direction are facilitated by broadening the dynamic range of cosmic ray energies sampled using data of astronomical origin. The diffuse radio synchrotron spectrum is very informative since it evades modulation effects, and can therefore probe lower electron energies, principally in the 0.2–3 GeV range. However, the “model-independence” of such information is marred at low energies by significant free-free absorption in the ISM (Peterson et al. OG 3.2.17). Matching normalizations of the radio-derived $`e^{-}`$ spectrum with the cosmic ray electron one measured at higher energies requires assuming a mean interstellar field of around 5$`\mu `$G. While the aforementioned papers advocated an $`E^{-2.4}`$ electron source spectrum, the data spread is sufficient to render an $`E^{-2.25}`$ spectrum not implausible for the particular diffusion model invoked by Rockstroh et al. and Peterson et al. Since deductions pertaining to the cosmic ray origin are contingent upon propagation and modulation assumptions, the flatter source spectra are not presently excluded.
Higbie et al. (OG 3.2.15) argued that modelling the diffuse gamma-ray emission with the same Monte Carlo propagation simulation again points towards a steeper $`e^{-}`$ source distribution: simultaneous fitting of the pion “decay bump” in the $`>50`$ MeV EGRET data and the relatively steep COMPTEL 1–30 MeV spectrum with a bremsstrahlung component Strong93 (both experiments were on board the Compton Gamma-Ray Observatory) provides the basis for this assertion. Porter & Protheroe (OG 3.2.38) arrive at a different conclusion when modelling diffuse gamma-ray emission, arguing in favour of flatter electron source spectra. These disparate inferences largely reflect differences in propagation models, and therefore indicate the limits that should be placed on such assertions at this stage. Stephens (OG 3.2.14) addressed positron propagation and claimed a small (10–15%) charge-sign dependence of modulation; while potentially interesting, data uncertainties limit this interpretation to merely a prediction for future experimental verification.
## Acceleration Theory and Astrophysics
The subject area of the theory of particle acceleration and astrophysical applications was the most diverse in terms of the material presented at the Conference. Hence, only principal focal points can be addressed in this brief exposition.
### Acceleration Theory
The discussions of cosmic ray propagation hinge on the widely-used assumption that the sources of cosmic rays produce quasi-power-law populations ($`E^{-\alpha }`$) with $`\alpha \approx 2.1`$–2.4. This is readily satisfied by test-particle acceleration at the strong shocks formed at supernova remnant shells as the expansion ploughs through the ISM. This feature has led to the almost universal acclaim that SNRs are the site of cosmic ray acceleration, at least up to the knee at $`10^{15}`$eV. Yet there are many subtleties, including those related to deviations from the test-particle approximation, how shock heating of the downstream gas is influenced by the fluid dynamics, questions of the efficiency of injection (particularly for electrons), and what are the differences between relativistic and non-relativistic shocks.
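For reference, the standard test-particle result underlying this statement (a textbook relation, not taken from the contributed papers) is that a non-relativistic shock of compression ratio $`r`$ yields, for relativistic test particles, a differential spectrum

$$N(E)\propto E^{-\sigma },\qquad \sigma =\frac{r+2}{r-1},$$

so that a strong unmodified shock with $`r=4`$ gives $`\sigma =2`$, consistent with the source indices quoted above once propagation steepening is included.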
The issue of validity of the test particle approximation is important for the cosmic ray problem. The beauty of diffusive acceleration was underscored by the natural explanation it provided for the power-law slope of the cosmic ray distribution over many decades in energy. Yet this attractive feature is contingent upon two criteria: (i) that the accelerated particles do not modify the dynamics of the shocked flow, i.e. act only as test particles to the problem, and (ii) that there is no particular energy scale for losses of particles. It is palpable that neither of these properties is satisfied in shocks in SNR shells, thereby eliminating the most aesthetic reason for considering shock acceleration as the principal means of energizing cosmic rays. Nevertheless such acceleration is virtually inevitable at the interface between supersonic and subsonic flows, and hence is widely accepted to be ubiquitous in astrophysical systems by theorists and experimentalists alike.
Non-linear shock acceleration effects and their implications featured prominently in the contributed papers, and are suitably discussed in the reviews of Drury83 ; JE91 . When the accelerated ions have sufficient pressure to modify the flow dynamics in the shock environs, they can no longer be considered as test particles. The cosmic ray ions act to slow down the flow upstream of the shock discontinuity, resulting in an increase of the overall compression ratio $`r`$ above the canonical test-particle value of $`r=4`$ if the system sustains significant losses of particles or energy. This strengthening of the shock adds to the non-thermal ion pressure, modifying the flow speed further, and thereby provides a feedback that defines the non-linearity of the acceleration process. Such non-linear effects are present in SNR shocks because they are inherently strong, have had sufficient time (at least in the Sedov phase) to accumulate significant pressure in the cosmic rays, and suffer losses on the largest spatial scales. Electrons seldom contribute to the dynamics (e.g. bergg99 ), unless they possess a peculiarly large abundance relative to the cosmic ray $`e/p`$ ratio mueller95 of 1–3% in the 1–10 GeV range.
A principal signature of these non-linearities in strong SNR shocks is the upward spectral curvature eich84 in the non-thermal ions, a consequence of higher energy ions generally having larger diffusive scales and thereby sampling greater effective compression ratios in the cosmic ray-modified flow. Concomitantly, the acceleration is enhanced with increasing mass to charge (A/Q) ratio, implying a relative profusion of higher metallicity species that was salient for the cosmic ray origin discussion above. Berezhko & Ksenofontov (OG 3.3.09) and Ellison et al. (OG 2.2.09) illustrate such predictions of non-linear acceleration theory and emphasize that spectral curvature is consistent with all-particle or individual species data given the significant experimental spread below the knee, an argument supported by Zatsepin & Sokolskaya (OG 3.1.02). This line of reasoning is obviously at odds with the common wisdom that the cosmic ray spectrum is a beautiful power-law. Merit can be found in both perspectives, which are not inherently incompatible: the spectral curvature predicted is sufficiently small (enhancements by a factor of a few over several decades in energy) that it essentially cannot be discriminated from exact power-laws as an appropriate model for the cosmic ray spectrum below the knee. In any case, since the cosmic ray measurements represent a convolution of source properties and propagation characteristics, such a distinction loses meaning. In this regard, gamma-ray signatures in the GeV to TeV band from isolated remnants will be more informative in seeking evidence of spectral curvature.
The critical point for discussion is that spectral cutoffs expected in SNRs (generally around 10–100 TeV; see bergg99 , Berezhko & Völk, OG 3.3.08, Yoshida & Yanagita, OG 3.3.11) could impose structure in the cosmic ray spectrum more severe than observed near the knee. This is a principal outstanding problem for cosmic ray studies; its resolution requires more detailed spectral and compositional information in the vicinity of the knee (the ACCESS project access99 should help provide this). The KASCADE air shower experiment provided some interesting results salient to this issue, namely deductions of proton and Fe spectra from muon data (Haungs et al. HE 2.2.02; Chilingarian et al. HE 2.2.04). Complementary inferences from gamma-ray upper limits (CASA-MIA results: Markoff et al. OG 3.3.18; HEGRA observations: Horns et al. OG 3.2.24) are currently not constraining.
Non-linear acceleration-induced spectral curvature obviously will impose more severe requirements on propagation models, both by requiring a stronger dependence of the escape length on rigidity and by increasing difficulties in minimizing anisotropies of the highest energy particles in the galaxy. Another non-linear feature is the reduction of the compression ratio of the viscous subshock (i.e. shock discontinuity) below $`r=4`$, thereby reducing the dissipational heating of the downstream plasma (Ellison & Berezhko OG 3.3.12). This property is pertinent to the interpretation of X-ray line emission from SNRs, computations of X-ray bremsstrahlung in SNR emission models and the deduced electron-to-proton ratio ebb99 ; the latter impacts the gamma-ray flux expected from remnants bergg99 . Several papers were devoted to such astrophysical signatures and are discussed below.
The injection issue was the subject of two papers, Gieseler, Jones & Kang (OG 3.3.20) and Sugiyama & Fujimoto (OG 3.3.21), though neither paper treated electron injection, a perennial concern for theorists. Gieseler et al. developed Kang & Jones’ diffusion-convection equation approach to modelling acceleration at non-linear shocks by incorporating a description, due to Malkov, of the interaction of thermal ions with self-generated magneto-hydrodynamic (MHD) waves. It is unclear what advantages this step has to offer over antecedent developments by Kang & Jones that parameterized injection efficiencies (e.g. see OG 3.3.32, which discussed an interesting use of an adaptive mesh technique to improve the dynamic range of lengthscales that can be probed). The injection formalism incorporated in OG 3.3.20 is based on quasi-linear theory, which has limited applicability to turbulence in the environs of strong, modified shocks. Sugiyama & Fujimoto simulated injection in such strong turbulence by computing ion motions in large amplitude MHD waves, using techniques employed in hybrid and full plasma simulations. Their test particle investigation of essentially coherent acceleration in time-dependent electric fields in the shock neighbourhood yielded expected results, which are usually generated by more complete plasma simulations (reviewed in JE91 ), namely that suprathermal ions are produced in significant numbers on timescales considerably larger than the ion gyroperiod. Such coherent effects are an integral part of the dissipational heating in the shock layers, and naturally provide injection that seeds diffusive acceleration at higher energies.
From a small pot pourri of papers treating diverse acceleration problems, I wish to highlight two contributions before proceeding to the astrophysically-oriented offerings. The first was the presentation of a simple analytic model of non-linear acceleration in plane-parallel shocks by Ellison & Berezhko (OG 3.3.12), specifying a complete (and continuous) particle distribution via a thermal component plus a three-piece broken power-law representing non-thermal ions. The power-law slopes, energies of connection between the various spectral portions, and the normalization coefficients are self-consistently determined in a modelling of the flow hydrodynamics; only the efficiency of injection from thermal energies need be specified as a parameter. The model possesses great potential for astrophysical applications, due to its facility, and agrees well with more complete predictions of Monte Carlo ebj96 and kinetic transport equation byk96 techniques.
The second interesting result was in the discussion by Drury et al. (OG 3.3.13, OG 3.3.16) of “pile-ups” in cosmic ray electron source distributions near the maximum (i.e. cutoff) energy due to significant synchrotron losses. This issue has had various preceding treatments, with the conclusion that only test-particle shocks with compression ratios $`r>4`$ could yield a build up of electrons near the cooling cutoff, i.e. an improbable occurrence. The new feature of Drury et al.’s work is that momentum-dependent diffusion scales are treated so that synchrotron cooling of electrons sufficiently remote downstream from the shock can result in losses from the system additional to those due to convection. The criterion for build-ups relaxes to $`r\gtrsim 3.5`$, generating an interesting regime of phase space where strong shocks potentially can yield these spectral bumps. Essentially, pile-ups arise when momentum losses in cooling outpace the spatial losses that are integral in determining the index of the canonical test-particle distribution. Such pile-up considerations could prove very relevant to the interpretation of non-thermal X-ray emission and TeV gamma-ray spectra from SNRs.
### Astrophysical Applications
Supernova remnants were the dominant subject of astrophysical applications of acceleration theory. While dynamical calculations of cosmic ray acceleration at SNR shocks and limited models of radio to gamma-ray emission from these particles have been around for a long time, this field has really burgeoned in the last half decade following the detection by the EGRET experiment on the Compton Gamma-Ray Observatory of a number of unidentified 100 MeV–10 GeV gamma-ray sources with SNR celestial associations espos96 and the subsequent campaigns prosch96 ; buck97 by atmospheric Čerenkov telescopes to search for TeV emission from various prime candidate remnants (see baring00 and Buckley’s Rapporteur paper in these proceedings for reviews of this field). The field now possesses confirmed detections in non-thermal X-rays and TeV gamma-rays in a few sources, an enviable position compared with the status 5 years ago. The models have rapidly become more sophisticated and complete in their radiation predictions. Two alternative techniques are at the forefront of this acceleration problem, both being represented at the Conference: (i) Berezhko et al.’s semi-analytic solution byk96 of the time-dependent spherical transport equation for ions, and (ii) Ellison, Baring and collaborators’ use of a Monte Carlo simulation of diffusive acceleration bergg99 ; ebj96 . These approaches each have their virtues and limitations. Berezhko et al.’s method handles all the time-dependent effects self-consistently, but requires a parametric specification of injection, whereas the Monte Carlo simulation, which automatically injects ions from the thermal populations, models steady-state parallel shocks and incorporates effects of time-dependence through a hybridization bergg99 involving Sedov evolution of shock parameters. Both methods must parameterize electron injection, an imposition due to current shortcomings in acceleration theory.
There is a remarkable convergence of results from these two complementary models, as is patently evident in the spectral comparison presented by Ellison & Berezhko (OG 3.3.27). While there are some fine-scale dissimilarities, this global agreement has led to a fairly robust set of predictions ebb99 for radio, X-ray and gamma-ray astronomy, embodied in the Conference papers of Berezhko and Völk (OG 3.3.08), Berezhko, Ksenofontov & Petukhov (OG 3.3.23) and Ellison et al. (OG 2.2.09). Principal features include the virtual constancy (and peaking) of the maximum particle energy and gamma-ray luminosity throughout the Sedov epoch (OG 3.3.08, OG 3.3.23), and prominent pion decay emission for high circumstellar densities in both the GeV and TeV wavebands; for ambient fields approaching 1 mG, synchrotron cooling is sufficient to render such hadronic emission dominant in the super-TeV range (Ellison et al. OG 2.2.09, and Berezhko & Völk OG 3.3.24, who also explore remnant properties for explosions in wind bubbles spawned by massive progenitors). Such pion decay signatures are potentially almost unambiguous evidence of the presence of cosmic rays in supernova remnants. The quest for such a proof of cosmic ray acceleration in SNRs is of primal importance to the cosmic ray community. Acquisition of this evidence seems imminent, given the impending ground-based and space-based gamma-ray experiments scheduled to come “on-line” in the next 5–6 years. Theory is currently well-placed to interpret the anticipated wealth of new information to be afforded by these programs.
There was a marked paucity of papers addressing relativistic shocks at the Conference. This was in spite of considerable recent interest in their acceleration properties by modellers of the topical gamma-ray burst (GRB) phenomenon, and the probable relevance to generation of ultra-high energy cosmic rays. Baring (OG 2.3.03) provided the principal offering at the Conference on acceleration predictions at relativistic shocks, highlighting the major needs for GRB theorists: quantifying the injection efficiency (particularly for electrons), and determining the spectral index (which is not uniquely specified in terms of the shock compression ratio) and the time and maximum energy of acceleration. None of these properties can be discerned easily, and there is a major need to redress such gaps in our knowledge. Baring explored spectral differences between large angle scattering and pitch angle diffusion in ultrarelativistic plane-parallel shocks (i.e. those with bulk Lorentz factor $`\mathrm{\Gamma }\gg 1`$), and confirmed the finding of Bednarz & Ostrowski bo98 that in the case of pitch angle diffusion, the power-law spectrum for accelerated particles approaches approximately $`E^{-2.2}`$ as the shock speed asymptotes to the speed of light. Ostrowski (OG 3.3.07) discussed the possibility of acceleration at shear layers bordering relativistic jets in active galaxies. As intuitively expected, he observed the acceleration to be rapid due to large kinematic boosts acquired when particles diffuse between the jet and surrounding medium. Yet no indication of the efficiency of injection was proffered, and it is unclear that this type of boundary layer acceleration can be very effective in the presence of shear turbulence that is naturally established in jet entrainment of the surrounding ambient material. It is also uncertain whether such kinematic boosts to particle energies in either of these extragalactic environs can enhance the sources’ ability to generate cosmic rays with $`E\gtrsim 10^{19}`$eV, an issue that should be the focus of future research.
## Ultra-High Energy Cosmic Rays
The study of Ultra-High Energy Cosmic Rays (UHECRs) bridges the interests of cosmic ray physicists and astrophysicists. While the perennial problem of what is the metallicity of $`>10^{19}`$eV cosmic rays (i.e. protons vs. Fe) remains, focus at this meeting was centered on the highest energy ones, namely those around and above the Greisen-Zatsepin-Kuzmin (GZK) cutoff at $`5\times 10^{19}`$eV Greisen66 ; ZatKuz66 . This subject was driven largely by the recent announcement (Takeda et al. Takeda98 ) that there is a significant excess of cosmic rays above the GZK cutoff, with 13 events now detected (mostly AGASA data) above $`10^{20}`$eV. Papers at the meeting can be categorized as those discussing arrival directions and those addressing spectral issues.
Stanev and Hillas (OG 3.3.04) provided a detailed statistical analysis of arrival directions for events with energies $`E>40`$EeV, exploring possible associations and anisotropies on various angular scales. Their conclusions were that there is no significant correlation between UHECR directions and those of extragalactic supernovae, and that there was only a marginal enhancement of UHECR flux near the supergalactic plane. Ion deflections in galactic and extragalactic magnetic fields clearly de-correlate directions of prospective sources and observed events significantly. Tkaczyk (OG 3.1.14) posited upper limits to the neutron content of UHECRs via analysis of their anisotropy, using the fact that neutrons are undeflected by these magnetic fields. Stanev and Hillas did indicate, however, that there was significant clustering on angular scales less than 5, primarily spawned by two UHECR triplets; pair groupings were not unusually numerous. The Auger Pryke98 and Owl Streit98 projects will obviously increase the database dramatically, and improve such statistical analyses immeasurably. Directional information was also a focus of Horns et al. (OG 3.2.24), who used data from the HEGRA scintillation array to search for high-energy gamma-ray associations with UHECR events, and concomitant anisotropies. One particular marginal association stood out, a $`\mathrm{\hspace{0.17em}4}\sigma `$ excess in the sky at gamma-ray energy of $`\mathrm{\hspace{0.17em}10}^{14}`$eV, coincident with the arrival direction of the 320 EeV Fly’s Eye cosmic ray. In a paper supporting this directional analysis, Horns (OG 3.2.37) simulated electromagnetic cascades initiated by UHECRs.
Two discussions relating to extragalactic source spatial distributions were offered by Ptuskin, Rogovaya & Zirakashvili (OG 3.2.23) and Medina-Tanco (OG 3.2.52). These two works focused on explaining the excess implied by the UHECR observations Takeda98 , with essentially the same premise: natural clustering of galaxies provides source densities that exceed, on small distance scales, the average density for a uniform, homogeneous spatial distribution. This property obviously weights the calculation of cosmic ray cooling by photo-pion production on the microwave background, and permits a population of UHECRs above the traditional GZK cutoff at $`5\times 10^{19}`$eV. Both groups effectively assumed that cosmic ray production rates trace galaxy luminosity to some extent, since the latter underpins astronomical detectability. Ptuskin et al. and Medina-Tanco reached the same conclusion: that the galaxy distributions can permit cosmic ray distributions commensurate with the observed spectrum, thereby resolving any purported observation/theory discrepancy. Their conclusion was arrived at by different analyses: Ptuskin et al. invoked a fractal distribution of galaxies as a mathematically-motivated description of clustering, while Medina-Tanco made use of the data collection of the CfA survey at redshifts $`z<0.05`$. Hence the bottom line here is that there appears to be no need to seek a galactic connection for the $`>10^{20}`$eV events.
Papers addressing the actual source of UHECRs were exceedingly sparse, with the only offerings being the galactic scenarios of Olinto, Epstein & Blasi (OG 3.3.03) and Blasi (OG 3.3.02). Olinto et al. envisage neutron stars acting as sources of ultra-high energy Fe, stripped off the stellar surfaces by intense electric fields induced by rotation. Key properties of their picture include a very flat source spectrum, modelling structure around and above the ankle in the cosmic ray spectrum, and of course, a heavy metallicity of the UHECR population. Conditions for minimal effects of energy degradation of accelerated Fe nuclei on the surrounding pre-supernova ejecta are achieved for fast rotators, i.e. millisecond pulsars. Their model has a number of attractive features, however its viability is contingent upon the ease with which iron can be stripped from the star; this issue is somewhat controversial in the pulsar community, with skeptics (in the majority) appealing to the large work function of Fe to argue their case. Blasi (OG 3.3.02) suggested an exotic origin: super-heavy dark matter in the galactic halo, comprising postulated quasi-stable particles that are relics of the early universe. These particles are purported to spawn neutral and charged pions in spontaneous decays so that electromagnetic signatures are generated, principally gamma-rays in the $`>100`$ MeV range appropriate for exploration by the proposed GLAST Gehrels99 experiment. This scenario suffers from the drawback that it is difficult to discriminate spectrally its predictions from those of more mainstream origins of diffuse emission. In a related paper, Medina-Tanco & Watson (OG 3.1.17) indicated that present statistical limitations on UHECR anisotropies preclude discrimination between various dark matter halo distributions. Due to the proximity of their sources, neither of these origin scenarios need to address so-called GZK-violations.
## Future Directions
To conclude, it is appropriate to identify a list of salient tasks for the cosmic ray community relating to the subjects discussed here. For origin/composition specialists, the question of how old the seed material is still remains, and a reconciliation of ACE data constraints with inferences from the LiBeB problem is needed. Data on actinide abundances should help probe matter mixing timescales. It is also important to determine whether O metallicity is a better indicator than Fe/H for the LiBeB problem. For the propagation community, extending the data range of unstable secondary to primary ratios to span the trans-relativistic regime, 1–10 GeV/nucleon, will help discriminate between propagation models; while ACE has made progress here, we await future flights of ISOMAX. Improving spectra and composition studies around the knee are clearly a major priority for the acceleration community, to discern how effective SNRs are at accelerating up to these energies. A related issue is the search for pion decay signatures in gamma-ray emission from remnants, which would provide the first unequivocal proof that SNRs are indeed the galactic sites of acceleration; the opportunity for this resolution seems imminent. On the theoretical side, three-dimensional plasma simulations are desperately needed to elucidate the electron injection problem, and considerable investment in the study of acceleration at relativistic shocks would advance the astrophysics of active galaxies and gamma-ray bursts. For the UHECR field, it is anticipated that the database increase due to Auger and Owl projects will provide a clearer picture of the spectral, anisotropy and clustering properties of such high energy particles, enabling discrimination between various postulates of their origin.
Acknowledgments: I thank my collaborators Don Ellison and Frank Jones, and also Luke Drury and Bob Streitmatter for many insightful discussions, and also for their critical reading of the manuscript. I also thank the Organizing Committee of the Conference for sponsorship during my stay in Salt Lake City.
MEASUREMENT OF THE GALACTIC X-RAY/$`\gamma `$-RAY BACKGROUND RADIATION: CONTRIBUTION OF DISCRETE SOURCES
## 1 INTRODUCTION
Since its discovery, the Galactic X-ray/$`\gamma `$-ray background, particularly from the ridge (i.e. the narrow region centered on the plane covering approximately $`\pm 60^0`$ in longitude), has been studied with every major X-ray and $`\gamma `$-ray observatory. The spectrum of the emission is reasonably well measured and understood above $`1`$ MeV (e.g., Kinzer, Purcell, & Kurfess 1999; Strong et al. 1996a; Bloemen et al. 1997; Hunter et al. 1997; Hunter, Kinzer, & Strong 1997). At energies above 100 MeV, the dominant emission process is the decay of $`\pi ^0`$ meson produced in the interaction of cosmic ray nucleons with the interstellar matter (e.g., Bertsch et al. 1993). Between $`1`$ and 70 MeV, electron bremsstrahlung and inverse Compton scattering appear to dominate over discrete sources (e.g., Sacher & Sch$`\ddot{\mathrm{o}}`$nfelder 1984; Kniffen & Fitchel 1981, Skibo 1993). However, in the hard X-ray/soft $`\gamma `$-ray band (3-500 keV) the shape of the spectrum and the origin of the emission remain uncertain. At soft $`\gamma `$-ray energies (below 1 MeV), multiple components are believed to contribute to the total emission. These include transient discrete sources, positron annihilation line and 3-photon positronium continuum radiation, and a soft $`\gamma `$-ray component dominant up to about 300 keV of unknown origin. This component, measured with the CGRO’s Oriented Scintillation Spectrometer Experiment (OSSE), can be roughly characterized by simple power law models of indices between 2.3 and 3.1 at different locations on the Galactic plane (e.g., Kinzer et al. 1999; Skibo et al. 1997). More recently, the soft $`\gamma `$-ray emission from the Galactic center was measured by the HIREGS balloon-borne germanium spectrometer and was characterized by a single power law of photon index $`1.8`$ plus the positronium component (Boggs et al. 1999). However, many of the soft $`\gamma `$-ray observations, particularly those from the central region of the Galaxy, are contaminated by bright and variable discrete sources. At hard X-ray energies ($`1035`$ keV), the overall spectrum of the Galactic plane background was characterized by a power law of photon index $`1.8`$ with RXTE (Valinia & Marshall 1998; hereafter VM98). Hard X-ray emission above 10 keV was also detected with Ginga (Yamasaki et al. 1997).
How the spectral shape of the background radiation extends from the hard X-ray to the soft $`\gamma `$-ray regime and how much of the emission is due to discrete sources remains to be determined and is the subject of this paper. Determining the exact nature of the spectrum in this band has significant implications for the energetics of the Interstellar Medium (ISM). For example, a power law spectrum extending from 10 keV to 1 MeV, if interpreted to be of diffuse nonthermal origin, has been proposed to result from nonthermal electron bremsstrahlung (e.g. Skibo et al. 1997). However this process is energetically very demanding since electron bremsstrahlung at these energies is highly inefficient. A power of $`10^{42}10^{43}\mathrm{erg}\mathrm{s}^1`$ is required, which approaches or even exceeds the power injected into the Galaxy via supernovae explosions (Skibo et al. 1997). Attempting to explain the nature of the emission in terms of diffuse thermal processes is equally unsatisfactory because plasma temperatures of 80-100 keV are implied. Since the gravitational potential of the Galaxy is only on the order of 0.5 keV, it is not clear how such a plasma would be generated and confined to the Galactic plane.
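The energetic difficulty can be quantified with round numbers; the supernova rate and kinetic energy per event used below are conventional fiducial values rather than quantities taken from the cited work.

```python
# Rough Galactic energy budget: mechanical power supplied by supernovae
# versus the 1e42-1e43 erg/s demanded by a nonthermal-bremsstrahlung origin
# of the hard X-ray/soft gamma-ray ridge emission.

YEAR = 3.156e7                     # s
E_SN = 1.0e51                      # kinetic energy per supernova, erg (canonical)
RATE = 1.0 / (30.0 * YEAR)         # ~one Galactic supernova per 30 yr (assumed)

P_SN = E_SN * RATE                 # total mechanical power, erg/s
print(f"supernova power input      : {P_SN:.1e} erg/s")   # ~1e42 erg/s
print("required for bremsstrahlung: 1e42 - 1e43 erg/s")
# Even with 100% conversion efficiency the supernova budget is at best
# marginal, which is the difficulty raised in the text.
```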
Unfortunately, measurement of the Galactic background radiation in the hard X-ray/soft $`\gamma `$-ray band and its interpretation is inherently difficult with current instruments because of the presence of numerous transient, hard discrete sources in the Galactic plane, and the fact that generally hard X-ray/$`\gamma `$-ray instruments either have large fields of view and no imaging capabilities or have imaging capability but no diffuse emission sensitivity. As a result, distinction of diffuse emission from point sources with current instruments has remained a difficult task. For this reason, simultaneous, multiple instrument observations are necessary. To date, coordinated observations of the Galactic center region with OSSE and the imaging instrument SIGMA has been performed (Purcell et al. 1996). However, SIGMA has a sensitivity of about 25 mcrab ($`2\sigma `$) for a typical 24 hour observation. As a result, weak sources escape detection in such a survey and the unresolved spectrum can still be significantly contaminated by discrete source contribution.
In this paper, we present contemporaneous observations of the Galactic background emission near the Scutum Arm (centered at $`l=33^{}`$) over the 3 keV to 1 MeV range with RXTE and OSSE. RXTE has a relatively small field of view of $`1^{}`$ FWHM with hard X-ray capability and discrete source sensitivity of $`1`$ mcrab in the 2-10 keV band, making the detection of hard, discrete sources in the field of view of OSSE possible. OSSE, with a field of view of $`11^{}.4\times 3^{}.8`$ FWHM, is sensitive to diffuse emission in the soft $`\gamma `$-ray band. The Scutum Arm was chosen because it exhibits bright and apparently diffuse emission (e.g. Kaneda 1997; VM98), and unlike the Galactic center region for example, there are few bright discrete sources in the field of view. This direction is approximately tangent to the 5-kpc arm of our Galaxy as seen from the Earth. The arm contains large numbers of young stars (Hayakawa et al. 1981), and an unusual concentration of high-mass X-ray binary transients (van Paradijs 1995). Our main goal is to constrain the shape of the spectrum and understand its origin in the $`10400`$ keV band. In §2 we present our observations and in §3 we present the results of our analysis. Finally in §4 we discuss their implications.
## 2 OBSERVATIONS
The simultaneous RXTE and OSSE observations were performed from January 28 through February 23 of 1998. The OSSE instrument (Johnson et al. 1993) consists of four Na(TI)/CsI(Na) phoswich detectors which cover the energy range 50 keV-10 MeV. It has an effective area of $`2000\mathrm{cm}^2`$ and average energy resolution of $`8.8\%`$ at 511 keV. The field of view is approximately rectangular with FWHM of $`11^{}.4\times 3^{}.8`$. During the 4 weeks of observation, OSSE (with its long collimator axis parallel to the Galactic plane) was continuously oriented toward Galactic coordinates $`(l,b)=(33^{},0^{})`$ except for alternating 2-minute intervals during which offset background measurements were made. The background was taken alternately at Galactic latitudes $`\pm 9^{}`$ ($`l=33^{}`$) during weeks 1 and 3 of the observation and $`\pm 12^{}`$ ($`l=33^{}`$) during weeks 2 and 4 of the observation.
The RXTE observations were planned such that during the 4 week observation, RXTE was scanning the full field of view of OSSE (i.e. the region $`44^{}<l<22^{}`$ and $`4^{}<b<4^{}`$) every day for either one or two hours (exposures alternated between one and two hour intervals). The goal of these observations were two-fold. One was to monitor discrete sources in the field of view of OSSE and to estimate their contribution to OSSE’s flux. The second goal was to measure the spectrum of the diffuse emission in the hard X-ray band and simultaneously model that with the spectrum measured with OSSE. The RXTE scans were performed with the Proportional Counter Array (PCA). The PCA (Jahoda et al. 1996) has a total collecting area of 6500 $`\mathrm{cm}^2`$, an energy range of 2-60 keV, and energy resolution of $`18\%`$ at 6 keV. The field of view of the collimator is approximately circular with FWHM of $`1^{}`$. The RXTE “diffuse” emission spectrum was obtained by scanning the field of view of OSSE excluding the regions that the scans went over known and detected discrete sources. The most recent PCA background estimator program pcabackest (version 2.0c; L7 model) provided by the RXTE Guest Observer Facility was used to estimate the PCA background.
## 3 ANALYSIS
Figure 1 shows the composite diffuse plus discrete-source emission spectrum as measured by OSSE in weeks 1 and 2 (filled circles - hereafter W1-2) and weeks 3 and 4 (open circles- hereafter W3-4) of the observations. The lowest energy data point for each observation has an approximately 20% systematic uncertainty and was therefore not used for modeling the data. In order to convert the flux through the OSSE collimator to the diffuse flux per radian of the Galaxy, a $`5^{}`$ FWHM Gaussian distribution in latitude and constant intensity in longitude for the spatial distribution of the emission was assumed (e.g., Kinzer et al. 1999; Purcell et al. 1996). In the 8-35 keV band, the FWHM of the distribution was derived to be $`4^{}.8_{1.0}^{+2.4}`$ with RXTE (VM98). The converted flux per radian is therefore a function of this latitude assumption. For example, assuming a FWHM of $`2^{}`$ would lower the flux per radian by $`30\%`$. Assuming a FWHM of $`8^{}`$ would increase the flux per radian by $`50\%`$ (see Figure 4 of Kinzer et al. 1999). We will discuss how this assumption affect the spectral fits in §3.2.
As seen from Figure 1, the spectra in the 40 keV to 1 MeV range from the two viewing periods are similar except in the $`40100`$ keV energy range where variable discrete sources have apparently altered the shape of the spectrum. The intensity in the $`60100`$ keV band dropped by about 32% between the two viewing periods. The difference spectrum of the two viewing periods (triangles in Figure 1) can be modeled by a power law of photon index $`4.3`$. In what follows, we first discuss the detection and estimated contribution of discrete sources to the total OSSE flux. We then discuss the modeling and characteristics of the measured RXTE/OSSE spectra.
### 3.1 Discrete Sources
Table 1 lists bright X-ray sources detected during the PCA scans that were within the field of view of OSSE. It also includes GRS 1915+105, which is near the edge of the field of view, since it is known to be extremely variable with intensities as high as a few Crabs (e.g., Muno et al. 1999). The first four sources are accretion-driven pulsars while the last two sources are microquasars. The spectra of these sources could not be well determined above 20 keV with our RXTE observations because the scans had insufficient exposure time. For each of these sources, we determined a spectral shape from other reported observations and estimated their 60 and 100 keV flux by normalizing their spectra with the average 2-10 keV flux during the PCA scans. We used public RXTE data for XTE J1858+034 (from observations on Feb. 20 and 24, 1998) and GS 1843+009 (from March 5, 1997). For both of these sources we model the photon distribution using the standard form for pulsars
$$A(E)=k(E/1\mathrm{keV})^{-\mathrm{\Gamma }}\mathrm{exp}((E_c-E)/E_f)$$
(1)
(e.g. White, Swank, & Holt 1983). For XTE J1858+034, we find $`\mathrm{\Gamma }1.3`$, $`E_c2.3`$ and $`E_f24.6`$. For GS 1843+009, we find $`\mathrm{\Gamma }0.6`$, $`E_c13.3`$ and $`E_f22.3`$. Koyama et al. (1990) found similar parameters ($`\mathrm{\Gamma }0.71`$, $`E_c18.3`$ and $`E_f25`$) during the 1988 observations with Ginga when the source intensity varied between 30 and 60 mCrabs. In the case of A1845-024 (also identified as GRO 1849-03), we used the BATSE spectrum of this source during outburst reported by Zhang et al. (1996) characterized by a power law of $`\mathrm{\Gamma }2.8`$. For XTE J1855-026, we used the RXTE spectrum reported by Corbet et al. (1998) characterized by the pulsar model (equation 1) and parameters $`\mathrm{\Gamma }1.23`$, $`E_c14.7`$ and $`E_f27`$. For SS433, we used an exponentially cutoff power law model of photon index $`1.5`$ and cutoff energy at $`20`$ keV (Band, private communication, 1999). During our observations, GRS1915+105 was in a a very soft spectral state with an average power law photon index of $`5.7`$ and was not detected with RXTE above 40 keV (Muno & Morgan 1999, private communication; Heindl 1999, private communication).
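A sketch of the renormalize-and-extrapolate procedure used for Table 1 is given below, with the spectral shape quoted above for XTE J1858+034; the 2-10 keV photon flux used to set the normalization is a placeholder rather than the value measured in the scans, and the exponential factor is applied only above $`E_c`$, as in the conventional form of the cutoff model.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature (kept explicit for portability)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]))

def pulsar_model(E, gamma, E_c, E_f):
    """Power law with high-energy cutoff, cf. eq. (1); E in keV, unit norm."""
    cutoff = np.where(E > E_c, np.exp((E_c - E) / E_f), 1.0)
    return E**(-gamma) * cutoff

def band_flux(f, E_lo, E_hi, n=4000):
    """Photon flux of the shape f integrated over [E_lo, E_hi] keV."""
    E = np.logspace(np.log10(E_lo), np.log10(E_hi), n)
    return trapz(f(E), E)

# Spectral shape quoted in the text for XTE J1858+034.
shape = lambda E: pulsar_model(E, gamma=1.3, E_c=2.3, E_f=24.6)

F_2_10 = 1.0e-2            # assumed 2-10 keV photon flux (placeholder, cm^-2 s^-1)
k = F_2_10 / band_flux(shape, 2.0, 10.0)          # renormalize to the PCA band

print(f"photon flux density at  60 keV: {k*shape(np.array([ 60.0]))[0]:.3e}")
print(f"photon flux density at 100 keV: {k*shape(np.array([100.0]))[0]:.3e}")
print(f"integrated 60-100 keV flux    : {k*band_flux(shape, 60.0, 100.0):.3e}")
```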
According to our estimate, the integrated 60-100 keV flux of the combined known bright sources listed in Table 1 (as would be seen through the OSSE’s collimator) decreased by about 41% (from $`(1.22\pm 0.12)\times 10^3`$ to $`(7.14\pm 0.66)\times 10^4`$ $`\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$) from the first viewing period (W1-2) to the second viewing period (W3-4) while the total OSSE measured flux for the entire field of view dropped by 32% (from $`(3.68\pm 0.15)\times 10^3`$ to $`(2.50\pm 0.11)\times 10^3`$ $`\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$). It appears that the unresolved portion of the emission (i.e. total OSSE flux minus the estimated contribution of discrete sources flux) has also decreased from the first to the second viewing period. We offer two plausible explanation for this. One is that if indeed, the residual flux is made of discrete sources, this implies that the intensity of the integrated unresolved sources as seen through the OSSE collimator has also decreased. There may be a tendency to believe that the integrated contribution of hundreds of discrete sources cannot change very much. However, the integrated contribution can easily be dominated by one or two sources. If these sources are highly variable as shown in Table 1, then the integrated flux can also vary substantially. Furthermore, RXTE’s total monitoring time amounts to only $`7`$% of the total observation time by OSSE. If these sources showed substantial variability on the order of a day, it is possible that a potential source outburst escaped detection by RXTE but its flux was continuously measured with OSSE. Another explanation is that the apparent decrease in the unresolved flux is due to inaccuracies in our estimates of the contribution of known sources to the emission. These estimates are unavoidably uncertain because they use spectral fits from other observations and the spectral fits are generally at energies below 50 keV. Consequently spectral variations with time or deviations from the assumed spectral model will lead to errors in the estimated fluxes. In particular, since the 2-10 keV fluxes are generally lower during the second viewing interval, luminosity dependent spectral changes will cause systematic differences in the estimated fluxes for the two observing intervals. For example, from Table 1 it appears that GS 1843+009 is the dominant contributor among discrete sources. During the first viewing period, the 2-10 keV flux of this source was a factor of 2.7 lower than that measured during its bright state observed on March 5, 1997, and the flux was a factor of $`5`$ lower during the second viewing period. We have assumed the same spectral shape for both observations. We are not aware of a comprehensive study documenting the relation between luminosity and spectral shape of pulsars, but there is some evidence that the spectrum of accretion-driven pulsars depends on luminosity. Reynolds, Parmar, & White (1993) found that the spectrum at low energies became harder and $`E_c`$ decreased as the luminosity of the transient pulsar EXO 2030+375 decreased during an outburst. As a result, the ratio of the extrapolated hard X-ray (50-100 keV) flux to the 2-10 keV flux decreased as the source luminosity decreased. The hardness ratio “HR”, defined as the ratio of the flux at 50 keV to the flux at 5 keV, decreased by $`2`$ as the luminosity decreased by $`25`$ from $`1.0\times 10^{38}\mathrm{ergs}\mathrm{s}^1`$. On the other hand, Koyama et al. 
(1990) did not find luminosity dependent spectral changes for GS 1843+009 while the source varied by a factor of $`2`$. If the spectrum of GS 1843+009 during the OSSE measurements is softer than the spectrum measured when it was more luminous, then its contributions to the OSSE measurements have been overestimated, particularly in the second viewing period when the source luminosity was a factor of 5 lower than the bright state observation of March 5, 1997. A factor of 2 decrease in the contribution of GS 1843+009 would increase the residual flux at 60 keV from $`5.8\times 10^5`$ to $`7.2\times 10^5\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$ for the second observing interval. This decreases the difference in the residual fluxes for the two viewing intervals (the residual flux at 60 keV for the first viewing period was determined to be $`10.1\times 10^5\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$ from Table 1). The fact that Koyama et al. (1990) did not find luminosity dependent spectral changes for GS 1843+009 makes this explanation less compelling. We also note that the HR for our spectral model for GS 1843+009 is $`3`$ times larger than that of the Koyama et al. model and is also larger than that of any of the pulsars in the review of White, Swank, and Holt (1983). This suggests that the values in Table 1 may, in fact, be overestimates of the contributions of GS 1843+009.
Because of these uncertainties, we do not subtract the contribution of these sources from the OSSE spectrum reported in the next section but instead, we model only the OSSE data during the second viewing period when the intensity of discrete sources was the least.
### 3.2 Spectral Characteristics
#### 3.2.1 OSSE data: 50-400 keV
We now focus on the spectral characteristics of the soft $`\gamma `$-ray Galactic background emission. As discussed by Kinzer et al. (1999), the gamma ray continuum between 50 keV and 10 MeV can be described by a composite of 3 independent components: (1) a soft $`\gamma `$-ray component dominant up to about 300 keV of uncertain origin; (2) a hard $`\gamma `$-ray component (hereafter high energy continuum or HE) which is the extrapolation of the HE component above 1 MeV and is likely due to the interaction of cosmic rays with the ISM dominating above 500 keV (e.g. Skibo 1993); (3) positron annihilation line and 3-photon positronium continuum radiation (hereafter PA; Ore & Powell 1949) which are strongly enhanced toward the Galactic center. Fits to the OSSE data accumulated during the entire 4 week observation were used to determine the best fit parameters for the HE and PA components since they are not expected to be variable with time (see Figure 1). The positronium continuum and narrow 511 keV annihilation line integral fluxes were $`(2.0\pm 0.7)\times 10^3`$ and $`(0.1\pm 0.3)\times 10^3\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{rad}^1`$, respectively. The HE continuum intensity was determined from fits to these data combined with the collected Galactic plane observations following Kinzer et al. (1999). The HE component extrapolated to energies below 1 MeV can be characterized as a power law function of photon index $`1.6`$ and normalization of $`3.6\times 10^2\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{MeV}^1\mathrm{rad}^1`$ at 100 keV.
Modeling the OSSE data alone from 50-400 keV (obtained from the second viewing period), we find that the best fit is achieved with an exponential cut off power law model
$$A(E)=k(E/1\mathrm{keV})^{-\mathrm{\Gamma }}\mathrm{exp}(-E/E_c)$$
(2)
plus the HE continuum included as a fixed model component ($`\chi ^2/\nu =3.95/9`$). (We have subtracted the PA components from the OSSE data points according to the results obtained from the 4 week combined observations). The best fit parameters for the power law are: $`\mathrm{\Gamma }=0.2_{2.5}^{+1.2}`$ and $`E_c=30.1_{5.4}^{+47.3}`$ and a total flux (including the HE continuum) of $`7.8\times 10^7\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{keV}^1\mathrm{deg}^2`$ at 100 keV averaged over the field of view of OSSE. All the quoted errors are for 90% confidence limit. Removing the HE component produces a worse fit ($`\chi ^2/\nu =5.0/10`$) and yields a higher photon index and cut off energy that are within the 90% confidence limits of the previous fit. Without the HE component, the $`50400`$ keV data can also be satisfactorily fit ($`\chi ^2/\nu =9.4/10`$) with a single power law ($`A(E)=k(E/1\mathrm{keV})^\mathrm{\Gamma }`$) of photon index $`\mathrm{\Gamma }=2.6\pm 0.2`$.
#### 3.2.2 RXTE/OSSE data: 3-400 keV
We proceed by modeling the RXTE/OSSE spectra simultaneously using the OSSE spectrum obtained in the second viewing period, but the PA component subtracted from OSSE data. For the simultaneous fit, the flux normalization for the two instruments must be handled consistently. For extended emission, the measured flux depends on the field of view of each instrument and the spatial distribution of the diffuse emission. Hence, an effective solid angle (i.e. the convolution of the distribution of the emission with the detector’s response function) should be calculated for each instrument. In the case of OSSE (fov of $`11^{}.4\times 3^{}.8`$ FWHM), convolving the soft $`\gamma `$-ray Galactic diffuse emission (assuming a Gaussian latitude distribution with $`5^{}`$ FWHM) and OSSE ’s triangular response function yields an effective solid angle of $`1.3\times 10^2`$ sr. RXTE ’s solid angle is $`3\times 10^4`$ sr. Hence, a composite data set was obtained by scaling down the OSSE data by a factor of $`2.3\times 10^2`$ to account for the difference in the solid angle of the two instruments. In fitting the combined data, the intensity model parameters for the two instruments has been set equal to each other. In all the models presented hereafter, the diffuse HE continuum is included as a fixed model component present at all energies. It contributes approximately 13% to the total emission in the 3-100 keV band. Generally, inclusion of this component tends to slightly improve the fit.
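A rough numerical check of this solid-angle bookkeeping, assuming idealized triangular collimator responses (the true OSSE and PCA responses differ in detail, so the quoted values are only reproduced to within a few tens of percent):

```python
import numpy as np

deg = np.pi / 180.0

def trapz(y, x):
    return 0.5 * np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]))

def tri(x, fwhm):
    """Idealized triangular collimator response: peak 1, zero at +/- fwhm (deg)."""
    return np.clip(1.0 - np.abs(x) / fwhm, 0.0, None)

# OSSE: 11.4 x 3.8 deg FWHM, long axis along the plane, 5 deg FWHM Gaussian ridge
sigma_b = 5.0 / 2.355
l = np.linspace(-11.4, 11.4, 4001)
b = np.linspace(-3.8, 3.8, 4001)
omega_osse = trapz(tri(l, 11.4), l * deg) * \
             trapz(tri(b, 3.8) * np.exp(-b**2 / (2 * sigma_b**2)), b * deg)
print(f"OSSE effective solid angle ~ {omega_osse:.2e} sr   (quoted: 1.3e-2)")

# PCA: ~1 deg FWHM, approximately conical (circularly symmetric) response
th = np.linspace(0.0, 1.0, 4001)
omega_pca = trapz(2 * np.pi * (th * deg) * tri(th, 1.0), th * deg)
print(f"PCA  solid angle           ~ {omega_pca:.2e} sr   (quoted: 3e-4)")
print(f"OSSE -> PCA scaling factor ~ {omega_pca / omega_osse:.2e}  (quoted: 2.3e-2)")
```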
Examining the $`10400`$ keV spectrum first, we find that the excess over the HE diffuse component is best described by an exponentially cut off power law model (equation 2) with the following parameters ($`\chi ^2/\nu =24.8/43`$): photon index $`\mathrm{\Gamma }=0.63\pm 0.25`$, energy cutoff $`E_c=41.4_{8.4}^{+13.0}`$ keV, and flux of $`6.0\times 10^7\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{keV}^1\mathrm{deg}^2`$ at 100 keV. The total flux (including the HE continuum flux) at 100 keV is $`7.7\times 10^7\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{keV}^1\mathrm{deg}^2`$. Notice that the best fit parameters are within the 90% confidence limits of those derived from fitting the OSSE data alone but that they are considerably more tightly constrained.
As we discussed earlier, the flux intensity measured with OSSE depends on the assumption of latitude distribution of the diffuse emission. So far, we have presented results assuming a Gaussian distribution of $`5^{}`$ FWHM in latitude. To explore how the results depend on this assumption, we have simultaneously fitted the RXTE data with the OSSE data scaled up and down by 50%, respectively, to allow for a different FWHM Gaussian distribution. Scaling up the OSSE data by 50% is equivalent to assuming a $`8^{}`$ FWHM Gaussian distribution while scaling it down by 50% is equivalent to $`0^{}.1`$ FWHM. Scaling the OSSE data up yields a lower photon index ($`\mathrm{\Gamma }=0.41_{0.17}^{+0.19}`$; $`E_c=41.7_{6.3}^{+9.5}`$ keV) while scaling it down will have the opposite effect ($`\mathrm{\Gamma }=1.23_{0.29}^{+0.33}`$; $`E_c=51.6_{14.5}^{+37.8}`$ keV). In both cases, the fit is worse and the fit parameters are marginally consistent with the 90% confidence limits of those parameters derived for a $`5^{}`$ FWHM Gaussian assumption.
Other models with high-energy cutoffs of different forms also provide good fits. For the standard, $`5^{}`$ FWHM Gaussian distribution, the standard pulsar model (equation 1) yields $`\mathrm{\Gamma }=0.8_{0.4}^{+0.6}`$, $`E_c=21.5_{21.5}^{+51.7}`$ keV, and $`E_f=45.3_{9.7}^{+20.3}`$ ($`\chi ^2/\nu =24.3/42`$). The best-fit ($`\chi ^2/\nu =25.5/42`$) Comptonized disk model (Sunyaev & Titarchuk 1980) has an input soft photon (Wien) temperature of $`<1`$ keV, a plasma electron temperature of 21 keV, and a plasma optical depth of 3. A broken power law model with photon indices of $`1.4`$ and $`3.3`$ and a break energy at $`75.2`$ keV also provides a good fit ($`\chi ^2/\nu =29.0/42`$), but a simple power law model (best fit photon index of $`1.7`$) does not ($`\chi ^2/\nu =97.3/44`$).
We extended the RXTE/OSSE spectral fit down to 3 keV by adding a Raymond-Smith thermal plasma component to the exponentially cutoff power law model (and the HE continuum fixed in the model as before) and including the effect of Galactic absorption. The broader-range fit shown in Figure 2 yields $`\chi ^2/\nu =54.0/60`$. The 3-10 keV spectrum is dominated (71%) by the thermal plasma component. The best model yields a hydrogen column density of $`N_H=3.1\pm 1.5\times 10^{22}\mathrm{cm}^2`$ (the interstellar value for the line of sight at the center of the OSSE field of view is $`2\times 10^{22}\mathrm{cm}^2`$; the values for the lines of sight to the FWHM corners of the field of view are as low as $`0.4\times 10^{22}\mathrm{cm}^2`$), and a thermal plasma component of solar abundances and temperature $`2.6_{0.3}^{+0.4}`$ keV with an unabsorbed flux of $`6.8\times 10^6\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{deg}^2`$ at 10 keV. The best fit parameters for the exponentially cutoff power law model in this extended fit were $`\mathrm{\Gamma }=0.52\pm 0.25`$ and energy cutoff $`E_c=39.4_{7.2}^{+11.2}`$ keV.
## 4 SUMMARY AND DISCUSSION
We have measured the spectrum of the Galactic X-ray/$`\gamma `$-ray background near the Scutum arm region. In addition to the extrapolated high energy continuum (due to the interaction of cosmic rays with the interstellar medium) and the positron annihilation components, we have measured a variable hard X-ray/soft $`\gamma `$-ray component dominating the 10-200 keV band. The shape of this component at its minimum observed intensity is best modeled by an exponentially cut off power law of photon index of $`0.6`$ and energy cut off of $`41`$ keV. We estimate that at 60 and 100 keV, known bright discrete sources in the field of view of OSSE contribute about 46% and 20% to the total (minimum) measured flux, respectively.
The nature of the unresolved emission still remains to be determined. One interpretation is that the emission is due to a combination of unknown discrete sources. In the course of PCA scans, we detected several low luminosity discrete sources of the order of a few mCrabs for which we do not have a positive identification with previously known sources. We were not able to determine the spectral shape of these sources above $`20`$ keV because of the short exposures. The integrated 2-20 keV flux from these weak sources is a substantial fraction of the total of the previously identified sources listed in Table 1 if GRS1915+105 is excluded. It is then plausible that these weaker sources also make a significant contribution to the flux detected with OSSE, and in this case a significant fraction of the 40-100 keV flux observed with OSSE would be due to discrete sources whose 2-10 keV flux is brighter than $`1`$ mCrab. The estimated 40-100 keV spectrum of the bright sources listed in Table 1 is softer than the total OSSE spectrum measured during the second viewing period (see §3.1). The spectrum of the remaining contributors would therefore have to be harder than that of the known sources reported in Table 1. While the spectrum during the second viewing period can be well fit with the standard pulsar model, the best-fit parameters are unusual. The HR of the model is higher than that of any pulsar in the review of White, Swank, & Holt (1983), higher than ever seen during the outburst of EXO 2030+375 (Reynolds, Parmar & White 1993), and higher than that of any of the pulsars in Table 1. While the HR of EXO 2030+375 decreased as its luminosity decreased, there is no clear dependence of the HR on luminosity for the pulsars reviewed by White, Swank, & Holt (1983). If the observed spectrum is dominated by unresolved pulsars, they must have much harder spectra than typically seen for X-ray pulsars. The spectra out to at least 40 keV is similar to that of the low state of black hole candidates (BHC) (e.g. Tanaka and Lewin 1995) since the unresolved spectrum can also be characterized with a broken power law model of photon indices of 1.4 and 3.3 and a break energy at 75.2 keV. Higher quality data are needed to determine if the form of the spectral break is similar to that of BHC.
An alternative interpretation for the origin of the unresolved emission is that it is truly of diffuse origin and is produced by mechanism(s) such as nonthermal electron bremsstrahlung or inverse Compton (IC) scattering of interstellar radiation off of cosmic-ray electrons (e.g. Skibo & Ramaty 1993). It is expected that the scale height of the emission due to IC scattering be broad because of the large scale height of the interstellar radiation field (optical, infrared, and the Cosmic microwave background). Indeed, the emission has been measured to be broad with RXTE (Valinia & Marshall 1998) and COMPTEL/CGRO (Strong et al. 1996b). Therefore, in the diffuse origin scenario, the hard spectral shape and the apparent broad extent of the emission would suggest the possibility that the IC scattering contribution may dominate over nonthermal bremsstrahlung processes. Unlike the nonthermal bremsstrahlung scenario, the IC scattering scenario has the added advantage that the total power required is well within that provided by Galactic supernovae.
Our discovery of variability in the 40-100 keV flux from the Galactic Ridge near Scutum shows that a substantial part of this emission is due to discrete sources. Determining the exact contribution of discrete sources and diffuse mechanism(s) to the total Galactic plane emission requires instruments capable of sensitive imaging over the hard X-ray/soft $`\gamma `$-ray band. A sensitivity of at least $`10^610^7\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1\mathrm{keV}^1`$ at 100 keV and spatial resolution of a few arc minutes are required.
###### Acknowledgements.
We thank David Band, Mark Finger, William Heindl, James Kurfess, Ed Morgan, and Mike Muno for helpful discussions. We also thank Keith Jahoda for help with the scanning response matrix for RXTE.
Stripe dynamics in presence of disorder and lattice potentials
C. Morais Smith<sup>a,b</sup>, N. Hasselmann<sup>a,c</sup>, Yu. A. Dimashko<sup>a</sup>, and A. H. Castro Neto<sup>c</sup>
$`^a`$I Institut für Theoretische Physik, Universität Hamburg, D-20355 Hamburg, Germany
$`^b`$Institut de Physique Théorique, Université de Fribourg, Pérolles, CH-1700 Fribourg, Switzerland
$`^c`$Dept. of Physics, University of California, Riverside, CA, 92521, USA
Abstract. We study the influence of disorder and lattice pinning on the dynamics of a charged stripe. Starting from a phenomenological model of a discrete quantum string, we determine the phase diagram for this system. Three regimes are identified, the free phase, the flat phase pinned by the lattice, and the disorder pinned phase. In the absence of disorder, the system can be mapped onto a 1D array of Josephson junctions (JJ). The results are compared with measurements on nickelates and cuprates and a good qualitative agreement is found between our results and the experimental data.
1. INTRODUCTION
During the last years, there has been a great deal of interest concerning the existence and dynamics of the striped phase in nickelate and cuprate materials. Theoretical , as well as experimental investigations have revealed the presence of charge and spin order.
In the present paper, we study within a phenomenological model the transversal dynamics of a stripe pinned by an underlying periodic lattice and by a random pinning potential provided by impurities. A phase diagram is determined as a function of the hopping parameter $`t`$, the stripe stiffness $`J`$ and the doping concentration $`\nu `$. At large values of $`t/J`$ and $`\nu `$, the stripe is free and the system can be described by a quantum membrane. At low doping $`\nu `$, the role played by the impurities is always relevant and the stripe is in a disorder pinned phase. When the effect of disordered impurities is negligible and $`t/J1`$, the stripe is pinned by the underlying lattice. In this case, by performing a dual transformation in the initial phenomenological Hamiltonian, we can map the problem onto a 1D array of JJ and show that the quantum depinning occurs at $`(t/J)_c=2/\pi ^2`$. Finally, we discuss our results and compare them to recent experimental measurements on nickelates and cuprates.
2. THE MODEL
Let us consider a single vertical stripe with one hole/site on a 2D lattice in a disorder potential. The holes are allowed to hop only in the transversal direction. The phenomenological Hamiltonian describing this system is
$$\widehat{H}=\sum _n\left[-2t\mathrm{cos}\left(\frac{\widehat{p}_na}{\hbar }\right)+\frac{J}{2a^2}\left(\widehat{u}_{n+1}-\widehat{u}_n\right)^2+V_n(\widehat{u}_n)\right].$$
(1)
Here, $`t`$ is the hopping parameter, $`a`$ is the lattice constant, $`\widehat{u}_n`$ is the displacement of the n-th hole from the equilibrium (vertical) configuration, $`\widehat{p}_n`$ is its conjugate transversal momentum, $`J`$ is the stripe stiffness, and $`V_n(\widehat{u}_n)`$ is an uncorrelated disorder potential, $`<V_n(u)V_n^{}(u^{})>_D=D\delta (uu^{})\delta _{n,n^{}}`$, where $`<\mathrm{}>_D`$ denotes the Gaussian average over the disorder ensemble and $`D`$ is the inverse of the impurity scattering time.
A dimensional estimate can provide us with the main features of the phase diagram for this system. Let us start with the case of no impurity potential $`V_n(\widehat{u}_n)=0`$. At large values of the hopping constant $`t\gg J`$, we can expand the cos-term, $`-2t\mathrm{cos}(\widehat{p}_na/\hbar )\approx \mathrm{const}.+t(\widehat{p}_na/\hbar )^2`$. The dynamics is then governed by the competition between the kinetic term $`t(k_na)^2`$ that tries to free the holes, and the elastic one, $`(J/2a^2)(\widehat{u}_{n+1}-\widehat{u}_n)^2`$, that tends to keep the holes together. When the confinement is given by the lattice pinning potential, a typical displacement $`\widehat{u}_{n+1}-\widehat{u}_n\sim a`$ and the wave vector $`k_n\sim 1/a`$. Hence, a transition from the flat phase, with the stripe pinned by the underlying lattice, to a free phase should occur at $`t/J\sim 1`$.
Indeed, by performing a dual transformation in the Hamiltonian (1) to new variables referring to segments of the string, i.e., to a pair of neighbour holes
$$\widehat{u}_n-\widehat{u}_{n-1}=\widehat{\pi }_n,\qquad \widehat{p}_n=\widehat{\phi }_{n+1}-\widehat{\phi }_n.$$
(2)
and taking the limit of a large system, we obtain
$$\widehat{H}=-2t\sum _n\mathrm{cos}\left[\frac{(\phi _{n+1}-\phi _n)a}{\hbar }\right]-\frac{J}{2a^2}\sum _n\left(\frac{\partial }{\partial \phi _n}\right)^2,$$
(3)
which is the Hamiltonian describing a Josephson junction chain (JJC). The solution of this problem at $`T=0`$ was found by Bradley and Doniach . Depending on the ratio $`t/J`$, the chain is either insulating (small $`t/J`$) or superconducting (large $`t/J`$). At the critical value $`(t/J)_c=2/\pi ^2`$, the JJC undergoes a Kosterlitz-Thouless like superconductor/insulator transition. This transition is known to represent the unbinding of vortex/antivortex pairs in the equivalent XY model.
For the case of the stripe, this corresponds to a depinning transition from the flat to the free phase, as we discussed above. The excitations of the stripe are kinks (K) and antikinks (AK) and the transition in this case corresponds to the unbinding of K/AK pairs. By exploiting the relation of the stripe Hamiltonian to the sine-Gordon theory, we calculated the infrared excitation spectrum of the quantum string and showed that the insulating gap $`\mathrm{\Delta }J`$ present at $`t=0`$ vanishes at $`(t/J)_c=2/\pi ^2`$ . Hence, at large values of $`t/J`$ there is a proliferation of kinks and antikinks and the striped structure disappears.
Let us now consider the other limit of strong pinning by impurities. In this case, the potential provided by the lattice is irrelevant and the typical displacement is of the order of the separation between stripes $`1/k_n\sim \widehat{u}_{n+1}-\widehat{u}_n\sim L`$, see Fig. 1. By comparing now the kinetic $`t(a/L)^2`$ and the elastic $`J(L/a)^2`$ terms, we observe that a transition should occur at $`t/J\sim (L/a)^4`$.
Indeed, by deriving the renormalization group (RG) differential equations to lowest nonvanishing order in the lattice and disorder parameters, one finds a set of flow equations , indicating that the transition from the disorder pinned to the free phase occurs at $`(t/J)_c=(18/\pi ^2)(L/a)^4`$. The phase diagram of the striped phase is shown in Fig. 2, with $`\nu =a/L`$.
Figure 1: Striped phase. Figure 2: Phase diagram.
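The two critical lines of the phase diagram can be evaluated directly from the expressions above; the sketch below does only this, and does not attempt to decide which of the two pinned phases prevails when $`t/J`$ lies below both boundaries, since that depends on the disorder strength.

```python
import numpy as np

def tj_lattice():
    """Flat -> free depinning threshold from the Josephson-junction mapping."""
    return 2.0 / np.pi**2

def tj_disorder(nu):
    """Disorder -> free threshold from the lowest-order RG flow; nu = a/L."""
    return (18.0 / np.pi**2) / nu**4

for nu in (0.05, 0.1, 0.2, 0.5, 1.0):           # illustrative doping levels
    print(f"nu = {nu:4.2f}:  (t/J)_c lattice = {tj_lattice():.3f},"
          f"   (t/J)_c disorder = {tj_disorder(nu):.3e}")
# The stripe fluctuates freely only once t/J exceeds both thresholds;
# at low doping the nu^-4 disorder boundary is by far the more restrictive one.
```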
3. DISCUSSIONS
Next, we discuss the different phases displayed in Fig. 2 in the light of experimental results obtained for the cuprates and nickelates. If neither disorder nor lattice potential are relevant, the stripe is in the freely fluctuating Gaussian phase and the dynamics can be described by a quantum membrane. This is the case for doped La<sub>2</sub>CuO<sub>4</sub>, that seems to be insensitive to disorder. The incommensurate (IC) spin fluctuations in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> near optimal doping are strinkingly similar to those found in La<sub>2</sub>CuO<sub>4+δ</sub> , although the oxygen doping produces annealed and the Sr doping produces quenched disorder. However, although weak disorder is unimportant near optimal doping, we expect from Fig. 2 a critical doping $`x_c\nu _c`$ below which disorder becomes relevant. Hence, for $`\nu <\nu _c`$ the stripes will be pinned, implying a broadening of the IC spin fluctuations. Eventually, the IC peaks will overlap to produce a single broad peak centered at the commensurate antiferromagnetic position. This effect is actually observed in the spin glass phase $`(x<0.05)`$ in neutron scattering experiments , indicating the pinning of the stripes by disorder. For disorder pinned stripes, a depinning transition under strong magnetic fields has been predicted . For a doping $`x10^2`$ we estimate $`E10^4`$ V/cm.
Concerning the pinning provided by the underlying lattice, the very weak Bragg peaks that are observable in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> only at $`x=1/8`$ indicate that the striped phase is nearly always in a floating phase IC with the lattice. However, by doping this material with Nd, one produces a tilt in the oxygen octahedra in the CuO planes, increasing hence the pinning by the lattice. Indeed, La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> shows strong lattice commensurability effects. The static stripe order is most pronounced at the commensurate $`x1/8`$ , but has also been observed at other compositions . In sum, the Nd doping pushes the roughening transition to higher values of $`t/J`$ and drives the cuprate materials into the flat phase.
Let us now consider the nickelates. Weak disorder is clearly relevant in these materials, since the width of the IC peaks is always much narrower in the oxygen doped samples \[12-14\] than in the Sr doped ones \[15-17\]. In addition, both, Sr and oxygen doped nickelates show strong commensuration effects , indicating that the striped phase couples strongly to the lattice. Therefore, we conclude that for the nickelates both, pinning by the lattice and by impurities are relevant.
In conclusion, we studied the competition between disorder and lattice effects on the transverse stripe fluctuations for nickelates and cuprates and identified the different phases in a $`t/J`$ versus $`\nu `$ phase diagram.
Acknowledgement
The authors gratefully acknowledge financial support by DAAD-CAPES Project No. 415-probral/schü.
References
1. H. Eskes et al., Phys. Rev. B 58, 13265 (1998) and references therein.
2. J. M. Tranquada et al., Nature 375, 561 (1995); Phys. Rev. B 54, 7489 (1996).
3. J. M. Tranquada et al., Phys. Rev. Lett. 78, 338 (1997).
4. J. Dimashko et al., to appear in Phys. Rev. B, July (1999).
5. R. M. Bradley and S. Doniach, Phys. Rev. B 30, 1138 (1984).
6. N. Hasselmann et al., Phys. Rev. Lett. 82, 2135 (1999).
7. G. Aeppli et al., Science 278, 1432 (1997); T. E. Mason et al., Phys. Rev. Lett. 68, 1414 (1992).
8. B. O. Wells et al., Science 277, 1067 (1997).
9. K. Yamada et al., Phys. Rev. B 57, 6165 (1998).
10. C. Morais Smith et al., Phys. Rev. B 58, 453 (1998).
11. T. Suzuki et al., Phys. Rev. B 57, R3229 (1998).
12. P. Wochner et al., Phys. Rev. B 57, 1066 (1998).
13. K. Nakajima et al., J. Phys. Soc. Jpn. 66, 809 (1997).
14. J. M. Tranquada et al., Phys. Rev. Lett. 73, 1003 (1994); ibid., 79, 2133 (1997).
15. G. Blumberg et al., Phys. Rev. Lett. 80, 564 (1998).
16. S. H. Lee and S.-W. Cheong, Phys. Rev. Lett. 79, 2514 (1997).
17. J. M. Tranquada et al., Phys. Rev. B 54, 12 318 (1996).
Coexistence of Global and Local Time Provides Two Ages for the Universe
## I The Coordinate System
In space are observers about which we know the following experimentally: Each observer observes three locally flat spatial coordinates $`(x,y,z)`$ and a local rate of aging, $`dt`$. Each clock displays a local time, $`t`$, that is the integral of $`dt`$ as experienced by that clock. The time shown on each clock, also known as local time, is the amount of system evolution experienced by the local observer, and the rate of system evolution varies from clock to clock as known from relativity.
Special relativity transmutes the time shown on clocks into a physical dimension. The local $`t`$ is merged with local spatial coordinates $`(x,y,z)`$ to create an $`(x,y,z,t)`$ space-time, and the principle of relativity is used to determine the covariance of space and time in coordinate transformations. This results in the Lorentz metric. It also results in the absence of global time and, as a consequence, creates the inability to embed the universe in a global $`(W,X,Y,Z,T)`$ Euclidean hyperspace.Gravitation (1)
The approach here does not merge space and time; it leaves local time, $`t`$, as simply the integral of $`dt`$, which we know it to be. $`dt`$ is a scalar function of location corresponding to the rate of system evolution at each location. That is, $`dt`$ is the local rate of aging.
A new global spatial coordinate, $`W`$, separate from time, is added to three global $`(X,Y,Z)`$ Euclidean coordinates to create a global $`(W,X,Y,Z)`$ Euclidean hyperspace. The four coordinates obey Euclidean translation and rotation invariances. The space may also referenced in four-dimensional spherical or other coordinates without loss of generality, as in Section 3. Whereas in Kaluza-Klein theories one of the spatial dimensions is compactedKK (2), in this theory each spatial coordinate $`(W,X,Y,Z)`$ has infinite extent. The metric is:
$$ds^2=dW^2+dX^2+dY^2+dZ^2.$$
(1)
Time is not a part of the metric. All points in the space are considered physically real and take on only real values. The observed universe is embedded in the Euclidean space as a quasi-three dimensional surface $`F_0(W,X,Y,Z)<\delta `$. The curvature of the observable universe is assumed to be smooth (differentiable) with its thickness, $`\delta `$, small relative to the curvature of the surface. This is simply a generalization of the observation that space is everywhere locally observed as flat.
One global time, $`T`$, is defined such that all events can be uniquely referenced by $`(W,X,Y,Z,T)`$. $`T`$ is not, however, a spatial coordinate. That is, events that happen at time $`T`$ at location $`(W,X,Y,Z)`$, happen and are gone. They do not continue to reside at a point $`(W,X,Y,Z,T)`$ to which we can travel. The global time, $`T`$, defines simultaneity. That is, two events that happen at the same time $`T`$ are simultaneous. The fact that different observers may see them as happening at different local times and in a different sequence becomes, in this formulation, an artifact of the observation process. The relationship between the rate of local time and the rate of global time is given by $`dt/dT`$; and it becomes, in this formulation a property of the physical system rather than a property of the coordinate system. (See Section 2.)
The observable universe, $`F_0`$, has motion in accord with kinematic rules. It is one of an infinite set of locally parallel observable universes {$`F_i,\mathrm{}<i<\mathrm{}`$} that fill the Euclidean space. We observe only the one sub-space $`F_0`$ for physical reasons determined in Section 3.2. Other observers may observe other $`F_i`$. Local coordinates $`(x,y,z)`$, defined within each $`F_i`$, are curved to follow the shape of $`F_i`$ and uniquely reference each location in $`F_i`$. Where $`F_i`$ is locally flat, then the local $`(x,y,z)`$ can be overlaid on the global $`(X,Y,Z)`$ making them equivalent within the locally flat region.
## II Reconstruction of Special Relativity as Observables
The principle of relativity is used together with the observed kinematics on particles (time dilation, particle anti-particle pair formation) to determine aspects of the kinematics of the observable universe as embedded in the $`(W,X,Y,Z)`$ hyperspace. As used here, the principle of relativity requires that no point or direction in space be distinguishable from any other by any local measurement.
### II.1 Observing Particles
The observation of particles implies the existence of hyper-particles in the global space. Hyper-particles are quasi-one-dimensional entities of various types such that, at the intersection of a hyper-particle and the observable universe, we observe a quasi-zero-dimensional particle corresponding to a known particle type. The hyper-particles are not assumed stationary; they are not like “world-lines” defined in other formulationsGravitation (1). Rather, the hyper-particles move and obey kinematics rules to be derived later in the discussion. Figure 1 shows a snapshot in time of hyper-particles intersecting an observable universe to create observable particles at the intersection.
The principle of relativity requires that hyper-particles be everywhere perpendicular to the local direction of the observed universe. If hyper-particles were not everywhere perpendicular to the local observed universe, then a local measurement could distinguish one direction in the observable universe from all others, in violation of relativity. Thus, in Figure 1, the observable universe is shown deforming near each hyper-particle to be locally perpendicular to it.
The principle of relativity requires that an observer be able to reside at any point in $`(W,X,Y,Z)`$ space. At that point in space, the observer must, further, reside in some $`F_i`$. Otherwise some locations in space would be distinctly different from others, in violation of relativity. Thus, the global space is filled with observable universes. This implies that each observable universe has a four dimensional volume. Thus, each observable universe is described as a three dimensional surface plus some thickness, $`ϵ`$ (related to $`\delta `$), along a local $`w`$ coordinate locally perpendicular to that surface, as shown in Figure 2a. Then an infinite set of parallel universes fill the $`(W,X,Y,Z)`$ space as shown in Figure 2b.
The principle of relativity requires that the thickness of observable universes increase at intersections with hyper-particles not perpendicular to the overall direction of the observable universe, as in Figure 2. If the thickness did not increase, then either (1) local gaps would form between adjacent observable universes, creating locations not within an observable universe, in violation of relativity, or (2) the shapes of $`F_{i1}`$, $`F_i`$, and $`F_{i+1}`$ would have to be different from each other in ways that make them distinguishable, in violation of relativity. By inspection of Figure 2a, a thickness of $`ϵ/cos\theta `$ for $`F_i`$ provides accordance with relativity. Here $`\theta `$ is the angle of deformation caused by the hyper-particle.
### II.2 Observing Time Dilation
By relativity, the rate of local time at any location must vary inversely as the local thickness of the observable universe. Otherwise a local measurement of the thickness of the universe using, for instance, the time it takes for light to cross the local thickness of that observable universe, would distinguish one point in space from another by local measurement, which would violate relativity. Thus,
$$ϵ\chi /c=constant,$$
(2)
where $`\chi \equiv dt/dT`$ is the local ratio of the rate of local time to the rate of global time, $`dT`$ is a global constant, $`c`$ is the local speed of light, and $`ϵ`$ is the local thickness of the observable universe at any particular location in space. In Equation (2), $`c`$ and $`ϵ`$ are both measured in global coordinates. From Equation (2), it follows that the rate of local time is invariant under Euclidean transformations. This is because the rate of local time depends on the scalar quantities, $`c`$ and $`ϵ`$, that are invariant under Euclidean spatial transformation. It also follows that the observable universe must stretch along the $`x`$, $`y`$, and $`z`$ coordinates by a factor $`1/\chi `$ so that the observed speed of light remains everywhere the same.
By Equation (2), the local rate of time in the region near the hyper-particle in Figure 2a, relative to the local rate of time away from the hyper-particle, is given by
$$dt^{}=dtcos\theta .$$
(3)
The local slowing of the rate of time is known, in special relativity, as “time dilation”. Specifically, time dilation in Lorentz coordinates gives a change in the local rate of time as $`dt^{}=dt/\gamma `$, where $`\gamma =(1-v^2/c^2)^{-1/2}`$. Comparing time dilation for special relativity with Equation (3) gives
$$dt^{}/dt=1/\gamma =cos\theta ,$$
(4)
which implies that the observed velocity for any particle relates to the angle of that hyper-particle as
$$v=\pm csin\theta .$$
(5)
Thus, the larger the angle, $`\theta `$, of the hyper-particle, the faster the observed speed of the observed particle. Equation (5) includes an ambiguity regarding whether, for a given angle, the observed particle travels towards positive or towards negative $`x`$.
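As an illustrative numerical cross-check of Equations (3)-(5) (not part of the original derivation), the minimal Python sketch below confirms that the angle $`\theta =\mathrm{arcsin}(v/c)`$ assigned to a hyper-particle reproduces the special-relativistic time-dilation factor $`1/\gamma `$; the function names are ours.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Standard special-relativistic Lorentz factor."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def hyper_particle_angle(v):
    """Angle of the hyper-particle implied by Eq. (5): v = c*sin(theta)."""
    return math.asin(v / C)

# Check that cos(theta) equals 1/gamma (Eq. (4)) over a range of speeds.
for frac in (0.1, 0.5, 0.9, 0.99):
    v = frac * C
    theta = hyper_particle_angle(v)
    assert abs(math.cos(theta) - 1.0 / gamma(v)) < 1e-12
    print(f"v = {frac:.2f} c  ->  theta = {math.degrees(theta):6.2f} deg, "
          f"dt'/dt = {math.cos(theta):.4f}")
```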
### II.3 Observing Finite Lifetime
An observed particle may have finite lifetime due to a number of causes. One cause can be that a particle is observed to come into existence in any particular observable universe when a hyper-particle first intersects that particular observable universe, and is destroyed when the hyper-particle no longer intersects that observable universe. The lifetime, $`\delta t`$, of such a particle is given by
$$\delta t=L/V_T,$$
(6)
where $`L`$ is the length of the hyper-particle and $`V_T`$ is its velocity through the observable universe (measured in global coordinates). By relativity, $`\delta t`$, as measured by a clock at the particle, must be the same for all particles. Otherwise one point in space would be distinguishable from another by local measurement. Similarly, by relativity, $`V_T`$ must be the same for all hyper-particles. Otherwise a local measurement of the time it takes a particular point on a hyper-particle to pass through the observable universe would not give the same result everywhere. \[Even though the thickness of the observable universe varies as $`ϵ/cos\theta `$, the local rate of time varies also as $`t/cos\theta `$. Hence the velocity of the hyper-particle through the observable universe is proportional to $`ϵ/t`$, which is independent of $`\theta `$.\] Finally, since the clock rate varies as per Equation (2), the length of the hyper-particle must be $`L=\gamma L_0`$ in order to satisfy Equation (6) for all velocities of the particle, where $`L_0`$ is the length of the hyper-particle for that particle when at rest. Thus, the length of the hyper-particle increases in proportion to $`\gamma `$, which causes the observed lifetime of the observed particle to increase in proportion to $`\gamma `$, in accord with relativity.
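The $`L=\gamma L_0`$ lifetime dilation can be illustrated with a familiar laboratory example (the muon, which is not discussed in the text); the rest-frame lifetime value below is quoted for illustration only.

```python
import math

C = 299_792_458.0         # m/s
TAU_MUON_REST = 2.197e-6  # s, rest-frame muon lifetime (illustrative value)

def dilated_lifetime(tau_rest, v):
    """Observed lifetime tau = gamma * tau_rest, i.e. L = gamma * L_0 in the text."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * tau_rest

v = 0.994 * C
print(f"lifetime at v = 0.994 c: {dilated_lifetime(TAU_MUON_REST, v):.3e} s")
# ~2.0e-5 s, roughly 9x the rest lifetime, as expected for gamma ~ 9.1
```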
### II.4 Observing Particle Anti-Particle Pair Formation
The observance of particle anti-particle pair formation provides one determination for $`V_T`$. It also provides one resolution for the “$`\pm `$” direction ambiguity for the observed velocity, $`v`$, in Equation (5). Referring to Figure 1, hyper-particles 2 and 3 touch at a point below the observed universe, $`F_0`$. If the hyper-particles are moving towards positive $`w`$, then, when that point goes through $`F_0`$, the observed effect will be of two particles colliding and annihilating — as particle and anti-particle. If they are moving towards negative $`w`$, then they were seen already to come into existence via pair formation, and are seen as moving away from each other.
The event of particle anti-particle pair formation will be seen in each observable universe, $`F_i`$, if hyper-particles 2 and 3 stay exactly touching each other (as in Figure 1) while flowing through {$`F_i`$}. That is achieved if the hyper-particle velocity along $`x`$ is zero. Thus, the hyper-particle motion is given by:
$$V_W=V_Tcos\theta ,and$$
(7)
$$V_X=V_Tsin\theta +v=V_Tsin\theta \pm csin\theta =0.$$
(8)
Equation (8) is true only if
$$V_T=\pm c\text{ and }v=-V_Tsin\theta .$$
(9)
Thus, the hyper-particle passes through the observable universe at speed $`c`$ in one direction or another. The “$`\pm `$” ambiguity of Equation (5) is now seen to imply that hyper-particles may, in general, flow in either direction through the observed universe.
### II.5 Lorentz Metric
While the metric for the space is strictly Euclidean as per Equation (1), the Lorentz metric also appears in the theory. However, the Lorentz metric is not the metric of the space, it is rather an observable of the relationship between events in the space.
Figure 3 overlays a snapshot of a hyper-particle segment at two instants of time, $`T`$ and $`T^{}`$. The segment is at $`\overline{AD}`$ at global time, $`T`$, and at $`\overline{BC}`$ at a global time, $`T^{}`$. If $`V_T=-c`$, then the hyper-particle moves from $`\overline{AD}`$ to $`\overline{BC}`$ over the time interval (T,T’) and T’>T. If $`V_T=+c`$, then the hyper-particle moves from $`\overline{BC}`$ to $`\overline{AD}`$ over the time interval (T’,T) and T>T’.
Moving in accord with Equations (7) through (9) the hyper-particle motion creates a set of events in the $`(W,X,Y,Z)`$ universe. $`A`$ is the event of observing a particle at location $`(W,X,Y,Z)`$ at global time $`T`$. $`B`$ is that same event observed at the same 3D location in a different observable universe at location $`(W^{},X,Y,Z)`$ at global time $`T^{}`$. Event $`C`$ is the event of observing the same particle at location $`(W,X^{},Y,Z)`$ at time $`T^{}`$ that is observed in event $`A`$ at location $`(W,X,Y,Z)`$ at $`T`$. Included in the figure are two different observable universes: (1) the observable universe containing event $`A`$, in the shape it has at time $`T`$; and (2) the observable universe containing event $`B`$, in the shape it has at time $`T^{}`$.
Setting $`dT=T^{}T`$ in Figure 3, the distance $`\overline{AC}`$ is the distance which the observed particle travels in time $`dT`$. That distance is $`vdT`$. The distance $`\overline{BC}`$ is equal to the amount of hyper-particle that has passed through the upper observable universe in time $`dT`$. That distance is $`V_TdT=cdT`$. The event $`A`$ moves to event $`B`$ at rate $`V_W`$ as given by Equation (7). Thus, sides of triangle $`\overline{ABC}`$ have lengths whose squares are:
$$\overline{BC}^2=c^2dT^2,$$
(10)
$$\overline{AC}^2=v^2dT^2=c^2dT^2sin^2\theta ,and$$
(11)
$`\overline{AB}^2=V_W^2dT^2=c^2dT^2cos^2\theta ={\displaystyle \frac{c^2dT^2}{\gamma ^2}}=dW^2.`$ (12)
The Euclidean metric provides that the lengths of the sides of triangle ABC, being a right triangle, obey the Pythagorean theorem:
$$\overline{BC}^2=\overline{AC}^2+\overline{AB}^2.$$
(13)
To confirm that the Euclidean metric is obeyed, Equations (10)-(12) are substituted into Equation (13). This gives:
$`c^2dT^2=c^2dT^2sin^2\theta +c^2dT^2cos^2\theta `$
$`=c^2dT^2(sin^2\theta +cos^2\theta )=c^2dT^2,`$ (14)
which confirms self-consistency of the derivation. Meanwhile, the Lorentz metric gives the equation:
$$ds^2=-c^2d\tau ^2=-c^2dt^2+dx^2+dy^2+dz^2,$$
(15)
where $`d\tau `$ is the proper time, $`d\tau =dt/\gamma `$. For a particle in motion, Equation (15) becomes
$$-c^2dt^2/\gamma ^2=-c^2dt^2+v^2dt^2.$$
(16)
Comparing Equation (16) to the right triangle $`\overline{ABC}`$ in Figure 3, it is clear that Equation (16) is Equation (13) rearranged as:
$$\overline{AB}^2=\overline{BC}^2-\overline{AC}^2.$$
(17)
Thus, in this theory the Lorentz metric is the Euclidean metric rearranged to provide a statement specifying the rate at which events move along the $`W`$ coordinate as a function of the velocity of particles. The Lorentz metric includes time in its calculation by combining $`t`$, $`T`$, and $`W`$ into one coordinate by defining fixed relationships among these independent variables. In particular it assumes that local time is global time $`(dt\equiv dT)`$, and it defines the relationship between $`t`$ and $`W`$ as:
$$d\tau ^2\equiv dW^2/c^2.$$
(18)
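A quick numerical check of this identification (again, not in the original): with the triangle sides of Figure 3, the Euclidean relation $`\overline{AB}^2=\overline{BC}^2-\overline{AC}^2`$ reproduces the Lorentz proper time through Equation (18). The sketch assumes the sign reconstruction of Equation (17) used above.

```python
import math

c = 1.0          # work in units where c = 1
dT = 1.0         # one global time step

for v in (0.0, 0.3, 0.6, 0.9):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    BC = c * dT                      # hyper-particle length through F_i, Eq. (10)
    AC = v * dT                      # distance the observed particle moves, Eq. (11)
    AB = math.sqrt(BC**2 - AC**2)    # Pythagoras rearranged, Eqs. (13) and (17)
    dtau = dT / gamma                # Lorentz proper time
    assert abs(AB - c * dtau) < 1e-12   # AB = dW = c*dtau, Eq. (18)
    print(f"v = {v:.1f}: AB = {AB:.4f}, c*dtau = {c * dtau:.4f}")
```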
## III New Insights
This section uses the new coordinate system to infer effects beyond the three dimensions that we observe. Correlations across neighboring observable universes are explored. Also, the observation of only three of four spatial dimensions is explored. Curvature is added to $`F_i`$ to explore the expansion and closure of the universe. The principle of relativity is applied across time to determine $`dt/dT`$ as a function of space and time. From that, two ages of the universe are computed and compared, one from $`t`$ and the other from $`T`$.
### III.1 Flow of Past, Present, Future
For a flat section of the observable universe, if the vast preponderance of hyper-particles have non-relativistic observed velocity ($`\gamma \approx 1`$) and travel towards $`-W`$ ($`V_W\approx -c`$), then, from Equation (18), any observable universe in the $`W>0`$ portion of space has a set of events (particle locations) that will later recur nearly identically at the observable universe at $`W=0`$. For example, the spatial configuration at $`W=D`$ ($`D>0`$) at global time $`T`$ will recur at $`W=0`$ at time $`T^{}=T+D/c`$. Thus, defining the observable universe at $`W=0`$ as the "present", then the "future" is at $`W>0`$. Further, the "past" is at $`W<0`$. Configurations flow from $`(W>0)`$ to $`(W=0)`$ to $`(W<0)`$. The farther one proceeds along the $`+W`$ direction, the farther into the "future" one goes, and the farther one proceeds along $`-W`$, the farther into the "past" one goes; however the correlation decreases because the difference in flow rates along $`W`$ (related to differences in observed velocity) becomes increasingly important.
While $`W`$ is correlated with a time coordinate, it is not a time coordinate. Traveling in $`W`$, if possible, is not traveling in time. It is traveling to a place whose spatial configuration is correlated with a spatial configuration that existed elsewhere at another time, and the correlation is reduced, according to Equation (18), by the presence of relativistic objects whose events travel through W at reduced speed.
### III.2 Slicing Hyperspace Into Parallel Universes
The Lorentz formulation of relativity provides that for light $`d\tau =0`$, which defines a light-cone. Equation (18) provides a translation of that into the Euclidean formulation as $`dW=0`$. This implies that hyper-particles of light do not travel in the direction perpendicular to $`F_i`$. Thus, photons, and all light-like particles, spend their lifetime entirely in the one observable universe in which they were generated. Thus, each observable universe is physically a sub-space containing all light-like particles that might ever interact with one another.
Equation (5), for light-like particles, gives $`cos\theta =0`$. Thus, their hyper-particles are everywhere locally parallel to the local observable universe. This implies, further, that their entire hyper-particle is observed. Thus, the particle is the hyper-particle, which implies that the derivation that led to Equations (7)-(9) is not valid for light-like particles. Instead, the X component of the hyper-particle speed must match the observed speed, which implies
$$V_X=v=\pm c.$$
(19)
Separately, for the W component of the hyper-particle motion, dW=0, implies
$$V_W=0,$$
(20)
i.e., photons have no velocity perpendicular to F. Thus, Figure 3 and the Lorentz metric are valid for all particles, including light-like particles.
### III.3 The Nature of the Observer
Our observation of only the spatial dimensions $`(x,y,z)`$ while residing in a Euclidean $`(W,X,Y,Z)`$ space is data that provides new understanding of our nature as observers. In the construction, $`F_0`$ was defined as an observable universe, such that an observer in $`F_0`$ observes only that one observable universe. Equation (20) indicates that light-like hyper-particles in any $`F_i`$ are trapped in that $`F_i`$ and observe only that which is in that observable universe. That is, light-like particles, are limited to observing $`(x,y,z)`$. All other hyper-particles experience many observable universes. Thus, some key aspect of us as observers, e.g. the mechanism of memory and/or observation, though not necessarily the physical body, must be fundamentally light-like.
Each observer sees past, present, and future unfold in one observable universe. The spatially correlated configurations on other observable universes are not something that we typically see. Those configurations are simply correlated configurations, and by the time they come to pass on the observable universe where a particular observer sees them, those configurations no longer exist on any other observable universe.
For each observer, the calculation of Lorentz distances is correct over a region of space in which the path of light is perpendicular to the hyper-particle of the observer. For example, in Figure 1, an observer at Particle 1 can correctly assume a flat observable universe in calculating the Lorentz distances between it and Particles 2 and 3. Observers at Particles 2 and 3 cannot, however, correctly assume that their flat observable universe extends to either of the other particles. In trying to do so, observers at Particles 2 and 3 obtain the wrong answers. This resolves the twin paradox (1) in the Euclidean formulation.
The observer at Particle 1 in Figure 1 is special because it is at rest with respect to the larger observable universe. It also has the fastest rate of aging. The shape of the observable universe can be mapped by measuring the velocity at which the clock rate is maximal at various locations. This does not imply a global rest frame, since there is no mechanism that fixes the motion of any observable universe relative to the Euclidean frame. Also, it does not violate relativity since the local observer at Particle 1 cannot determine that it is special by any local measurement.
### III.4 Kinetic Energy Is Mass Energy
Energy conservation is required within each observable universe for compliance with both Newtonian and relativistic mechanics. Since hyper-particles flow through the observable universe with a fixed speed $`V_T=c`$, by Equation (9), hence, if hyper-particles have mass energy per unit length, then that mass energy density is constant over the length of the hyper-particle. Otherwise the quantity of mass energy entering the sub-space would be different from the quantity leaving the sub-space, and energy would not be conserved in the sub-space.
The mass energy of the observed particle is the product of the mass energy per unit length, $`\rho _mc^2`$, times the length of hyper-particle within the observed universe, $`ϵ^{}=ϵ/cos\theta `$. Defining $`m=\rho _mϵ`$ as the rest mass of the particle, then the mass energy of a particle at any velocity is
$$E=\gamma mc^2.$$
(21)
The mass energy in Equation (21) corresponds with the combined value for the kinetic energy plus mass energy in special relativity. Hence, in the Euclidean formulation, the kinetic energy of the particle is a contribution to the mass energy. Observed particles have no kinetic energy apart from their mass energy.
As the velocity of an observed particle changes, the length of the corresponding hyper-particle within the observable universe changes. Yet, since $`V_T`$ is a constant, hence the rate of hyper-particle entering and leaving the observable universe remains unchanged. Hence, the energy required to generate increased length comes from inside the observable universe, perhaps exchanged between hyper-particles within the observable universe by forces within that observable universe.
Special relativity provides momentum as $`\overline{p}=\gamma m\overline{v}`$. Since each of the terms on the right of this equation is defined in the new formulation in accord with special relativity, the mathematical quantity $`\overline{p}`$ can be defined in Euclidean $`(W,X,Y,Z)`$ space as an observable three-vector locally within any observable universe. Since the energy, $`E`$, is also in accord with special relativity, it follows that, mathematically, the equations for conservation of this momentum are identical to the Lorentz formulation.
In this formulation the interpretation of momentum changes slightly because the mass of the particle is physically a function of its velocity. That is, $`\gamma m`$ is physically the mass of the hyper-particle. The interpretation appears to apply even for photons traveling at the speed of light. For photons, the entire hyper-particle is observed. Thus, their energy is the mass of the entire hyper-particle and the momentum is that mass times the velocity, $`c`$.
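As a small numerical aside (not part of the original argument), the sketch below shows that the "kinetic" part of Equation (21), $`\gamma mc^2-mc^2`$, reduces to the Newtonian $`\frac{1}{2}mv^2`$ at low speed; the mass and speed are arbitrary test values.

```python
import math

C = 299_792_458.0  # m/s

def total_energy(m, v):
    """Eq. (21): E = gamma * m * c^2 (kinetic energy folded into mass energy)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m * C**2

m = 1.0e-3           # 1 gram, an arbitrary test mass
v = 3.0e5            # 300 km/s, about 0.001 c
excess = total_energy(m, v) - m * C**2
newtonian = 0.5 * m * v**2
print(f"relativistic excess: {excess:.6e} J, (1/2) m v^2: {newtonian:.6e} J")
# The two agree to within about one part in a million at this speed.
```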
### III.5 Concentric Spherical Universes
The ubiquitous redshift of light from distant astronomic phenomena may be interpreted as a result of the expansion of the universe (3). The Hubble Constant, $`H`$, indicates, in particular, a linear relation (at least approximately) between the distance to an object and the size of the red shift. One approach to modeling such expansion is to give the observable universe a curvature of radius, $`R`$, with an expansion rate $`dR/dT`$. If the relation between distance and red shift is linear, then $`R`$ is constant, otherwise $`R`$ may be a function of position.
With the addition of this curvature, within the Euclidean formulation, each observable universe becomes a hollow sphere (if R is constant) whose surface area and volume is finite and which grows over time. The thickness, $`ϵ`$, also must increase over time in proportion, by relativity.
Figure 4 shows a slice through a set of concentric spherical observable universes. It highlights three select observable universes of a continuous set. At every location there is a local $`w`$ coordinate that corresponds by Euclidean transformation (spatial rotation and translation) to a global radial coordinate, $`R`$. From a global perspective, the observable universes are seen to expand outward towards $`+R`$, while locally hyper-particles flow towards $`w`$ at the local speed of light. Figures 1 through 3, may be viewed as close-ups of a very small and therefore nearly flat sections of the curved observable universes shown in Figure 4.
Extending the principle of relativity across observable universes, it is possible to determine many properties of the universe expansion. Of course, since the corpus of experimental results all apply to one observable universe, there is no direct measurement that indicates that relativity applies across observable universes. However, that aside, relativity implies the following constraints in order that an observer cannot distinguish observable universes:
1. $`c\propto R`$ so that the circumference of each observable universe is the same number of wavelengths for all $`R`$;
2. $`ϵ\propto R`$ so that the ratio of the circumference to the thickness is the same for each observable universe;
3. The rate of aging is independent of $`R`$, by Equation (2) combined with Constraints 1 and 2 above.
### III.6 Two Ages for Our Universe
An observable universe has two ages (4):
* Time in current seconds since the start of the universe as computed from the Hubble Constant, $`H`$
* Evolutionary age computed as the integral of $`dt`$ from $`T`$=0 to the present time.
The value of $`H`$ is determined by measurement of the expansion-induced relative velocity between particles some distance apart in the observable universe. That velocity is observed not with a ruler, but rather as a change in the time for light to travel between two particles. For Hyper-particles 1 and 2 in Figure 4, the observed velocity of expansion, in global time units, is
$$V_{exp}=c\frac{d(\mathrm{\Phi }R/c)}{dT}.$$
(22)
where $`\mathrm{\Phi }`$ is a fixed angle between the particles as shown in Figure 4. This may be rewritten as
$$V_{exp}=c\left(\frac{\partial (\mathrm{\Phi }R/c)}{\partial R}\frac{dR}{dT}+\frac{\partial (\mathrm{\Phi }R/c)}{\partial T}\right).$$
(23)
Since $`c\propto R`$, $`\mathrm{\Phi }R/c`$ is independent of $`R`$, so the first term is zero. Thus,
$$V_{exp}=-\frac{\mathrm{\Phi }R}{c}\frac{\partial c}{\partial T}.$$
(24)
By relativity, to prevent the distinction of observable universes, $`V_{exp}`$ must be proportional to $`c`$, so that their ratio cannot be used to determine location in space. Using this in Equation (24) gives a first order non-linear differential equation:
$$c\propto -\frac{\mathrm{\Phi }R}{c}\frac{\partial c}{\partial T},$$
(25)
which has the solution
$$c\propto R/T.$$
(26)
Equation (26) can be recast in terms of the current speed of light as
$$c=c_0(R/R_0)(T_0/T),$$
(27)
where $`T_0`$ is the present global time, $`c_0`$ is the present speed of light at $`F_0`$, and $`R_0`$ is the present radius of $`F_0`$. Thus, $`c`$ at any location decreases over time. Also, substituting Equation (27) into Equation (22), gives
$$V_{exp}=\frac{\mathrm{\Phi }R}{T}.$$
(28)
Since $`H`$ is the ratio of the velocity between particles and the distance between particles, hence
$$H(T)=\frac{V_{exp}}{\mathrm{\Phi }R}=\frac{1}{T},$$
(29)
which, finally gives the age of the universe in global seconds as
$$T=1/H.$$
(30)
Meanwhile, the rate of local time, obtained by using Equation (27) in Equation (2) is
$$\frac{dt}{dT}=\frac{T_0}{T},$$
(31)
which integrates to give the evolutionary age of the universe as
$`{\displaystyle \int _0^{T_0}}𝑑t={\displaystyle \int _0^{T_0}}𝑑T{\displaystyle \frac{T_0}{T}}=T_0log\left({\displaystyle \frac{T}{T_0}}\right)_0^{T_0}`$
$`=-T_0log(0)=\mathrm{\infty }.`$ (32)
Thus, the total aging since the beginning of global time is predicted to be infinite everywhere in the universe even though the age of the universe, in global time units, is finite. This prediction is consistent with data that has generally shown, until recently reinterpreted to reduce the discrepancy (5), that the age of the universe, as obtained from observations of astronomic evolution, is greater than that computed from the Hubble Constant.
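For concreteness (the Hubble-constant value here is assumed, not taken from the text), the sketch below evaluates the global age $`T=1/H`$ of Equation (30) for $`H=70`$ km/s/Mpc, and shows how the local-time integral of Equation (32) grows only logarithmically as the lower cutoff is reduced, hence diverges as the cutoff goes to zero.

```python
import numpy as np

# Hubble time T = 1/H (Eq. (30)) for an assumed H = 70 km/s/Mpc.
H = 70.0                      # km/s/Mpc, illustrative value only
MPC_KM = 3.0857e19            # kilometres per megaparsec
SEC_PER_GYR = 3.156e16
T_hubble = MPC_KM / H         # seconds
print(f"T = 1/H ~ {T_hubble / SEC_PER_GYR:.1f} Gyr")

# Evolutionary age: integral of dt = (T0/T) dT from a small cutoff T_min to T0
# (Eq. (32)); it grows only logarithmically as the cutoff is lowered.
T0 = T_hubble
for T_min in (1e-3 * T0, 1e-6 * T0, 1e-9 * T0):
    age = T0 * np.log(T0 / T_min)
    print(f"cutoff T_min = {T_min / T0:.0e} T0 -> local age = {age / SEC_PER_GYR:.0f} Gyr")
```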
### III.7 Refraction Closing Our Universe Without Gravity or Dark Matter
The spherical shape for each observable universe, $`F_i`$, means that its extent in each local dimension $`x,y,z`$ is finite and wraps around, i.e., if one goes far enough along $`+x`$ one ends up at $`-x`$. The volume for each observable universe is finite. Meanwhile the volume for the global $`(W,X,Y,Z)`$ universe is infinite and there are an infinite number of observable universes.
The spherical shape of $`F_i`$ is consistent with refraction-induced curvature. Applying Huygens’ principle to the $`R`$-dependent speed of light as given in Equation (27), a wavefront of light starting at Hyper-particle 1 (in Figure 4), for example, propagates an equal number of wavelengths at each radius. Because $`c\propto R`$, the wavefront proceeds as a straight line rotating around $`R=0`$. Thus, by refraction each point on the wavefront travels in a circle.
This refraction does not appear to be a gravitational lensing effect (6). There are two reasons. First, gravity is a force associated with mass. In particular, gravity is a function of the distance from the mass (measured along a curve within $`F_i`$). Thus, while the three-dimensional deformation of $`F_i`$ near each hyper-particle may be gravity-induced refraction (see Section IV), the $`R`$-dependent gradient of the speed of light has the wrong spatial dependence to be related to the distance from mass.
Second, since gravity is a force, hence, if it is present, then there should be a tell-tale deceleration of $`F_i`$. In particular, the gravitational force on the observable universe, $`F_0`$ at $`R_0`$ would be proportional to the integral of the mass from $`R=0`$ to $`R=R_0`$. Instead, by relativity, in order that observable universes cannot be distinguished by the ratio of the expansion rate to the speed of light, and by Equation (26), hence the rate of radial motion at any point in the universe at any time must be:
$$V_R\propto c\propto R/T.$$
(33)
Equation (33) can be manipulated to compute the time dependence of the radius, $`R_i(T)`$, of any particular observable universe, $`F_i`$. First, representing $`V_R`$ as a differential gives:
$$\frac{dR}{dT}\propto R/T,$$
(34)
which can be rearranged as:
$$\frac{dR}{R}\propto \frac{dT}{T}.$$
(35)
Both sides are integrated to give
$$(R_i(T)-R_i(0))\propto T,$$
(36)
where $`R_i(0)`$ is the radius of $`F_i`$ at $`T=0`$.
Equation (36) shows that the rate of expansion for any particular observable universe, in global time units, is constant - there is no deceleration. Hence, either the gravitational force is cancelled by another force, or it does not exist on this grand scale or in the $`R`$ direction. In any case, the refraction that closes each observable universe exists by some cause other than gravity.
### III.8 A Continuous, Accelerating Universe
Equations (26) and (28) show singularities occurring at $`T=0`$, similar to those implied by the "Big Bang" theories. However, Equation (32) describing the evolutionary age of the universe applies for all times $`T`$. Thus, even when the universe was one global second old, the evolutionary age was already infinite.
In the early universe, clocks ran faster so the rate of system evolution was higher. The general relationship between the global and local rates of time can be derived by changing the integration limits on Equation (32). This gives:
$$T=T_0e^{(t-T_0)/T_0}.$$
(37)
With this normalization, $`t`$ and $`T`$ have the same values and derivatives at time $`T_0`$. When $`t=0`$, $`T=T_0e^{-1}`$, and when $`t=-\infty `$, $`T=0`$. When the universe was one global second old, the local clock rate on our observable universe, $`F_0`$, would have been approximately $`5\times 10^{17}`$ times its current rate (using 16 Billion years as its current age).
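A one-line check of the quoted clock-rate factor, using the same 16-billion-year age assumed in the text:

```python
SECONDS_PER_YEAR = 3.156e7
T0 = 16e9 * SECONDS_PER_YEAR   # 16 billion years in seconds, as assumed in the text
T = 1.0                        # "one global second old"
print(f"dt/dT = T0/T ~ {T0 / T:.1e}")   # ~5e17, matching the quoted value
```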
The rapid clock rate of the early universe may have been undetectable by local measurement. For example, the circumnavigation time in local seconds, $`\mathrm{\Delta }t_c`$, for light to complete a trip around a particular observable universe was the same when the universe was one second old as it is now. The circumnavigation time is obtained by combining Equations (26), (31), and (36). This gives:
$$\mathrm{\Delta }t_c=\frac{T_0}{T}\mathrm{\Delta }T_c=\frac{T_0}{T}\frac{2\pi R_i(T)}{c_i(T)}\propto \frac{R_0(T_0)}{c_0(T_0)},$$
(38)
where $`c_i(T)`$ is the speed of light vs. time on observable universe $`F_i`$, and $`\mathrm{\Delta }T_c`$ is the circumnavigation time in global seconds. Thus, the observed circumnavigation time is the same on all observable universes at all times, even when the universe was one global second old.
Some of the predicted phenomena of the "Big Bang" theories may still apply. For instance, if $`R_i(0)=0`$, then all of the observable universes were collapsed to a point at $`T=0`$. Conversely, if $`R_i(0)>0`$, then the observable universes did not collapse to a point at $`T=0`$.
Our interpretation of the expansion rate of the universe is also affected by local time. Equation (36) shows a constant rate of expansion with respect to global time. However, substituting Equation (37) into Equation (36) gives the expansion as a function of local time:
$$(R_i(T)-R_i(0))\propto T_0e^{(t-T_0)/T_0}.$$
(39)
Thus, the universe is expanding exponentially with respect to local time, i.e., the expansion is accelerating (7, 8, 9).
### III.9 A New Lagrangian
The motion of $`F_i`$, even absent hyper-particles, indicates that space has substance as a continuum, in accord with other models (10, 11). A Lagrangian density, $`L`$, is defined here to describe that continuum as an ideal cosmological fluid:
$$L(X_\mu )=\frac{\varphi }{2}V_\mu V_\mu -\frac{k}{2}\frac{\partial \varphi }{\partial X_\mu }\frac{\partial \varphi }{\partial X_\mu }+\lambda \left(\frac{\partial \varphi }{\partial T}+\frac{\partial (\varphi V_\mu )}{\partial X_\mu }\right).$$
(40)
Here $`X_\mu `$ represents $`\{W,X,Y,Z\}`$, $`\varphi (W,X,Y,Z,T)`$ is the time-dependent density of the cosmological fluid, $`V_\mu (W,X,Y,Z,T)`$ is the time-dependent velocity of the fluid, $`k`$ is a real constant of currently unknown value, and $`\lambda (W,X,Y,Z,T)`$ is a Lagrange multiplier (12). The first term of the Lagrangian density is the kinetic energy density; the second term is a harmonic potential energy density; and the third term is continuity as a holonomic constraint. Because the fluid is an ’ideal’ fluid, there are no terms in the Lagrangian density for internal properties of the fluid such as temperature or entropy. The holonomic constraint requires that fluid cannot simply disappear in one place and reappear in another; it requires that transport of fluid occur via its motion from location to location.
The Lagrangian is the four dimensional integral of the Lagrangian density over the spatial coordinates $`(W,X,Y,Z)`$. The Lagrangian gives rise to two Lagrange equations, a vector equation from the variation of the Lagrangian with respect to $`V_\mu `$ and a scalar equation from its variation with respect to $`\varphi `$. The vector Lagrange equation, for any Lagrangian density that is a function of $`V_\mu `$ and its first derivatives but not higher order derivatives, is given by:
$$\frac{\partial L}{\partial V_\mu }-\frac{\partial }{\partial T}\frac{\partial L}{\partial \left(\frac{\partial V_\mu }{\partial T}\right)}-\frac{\partial }{\partial X_\mu }\frac{\partial L}{\partial \left(\frac{\partial V_\mu }{\partial X_\mu }\right)}=0.$$
(41)
Substituting in $`L`$ from Equation (40), this vector equation reduces to:
$$V_\mu =\partial \lambda /\partial X_\mu .$$
(42)
This result is general to all solutions of this Lagrangian and reduces the theory to a bi-scalar theory, where $`\varphi `$ and $`\lambda `$ are the two scalar fields.
The scalar Lagrange equation, for any Lagrangian density that is a function of $`\varphi `$ and its first derivatives but not higher order derivatives, is given by:
$$\frac{\partial L}{\partial \varphi }-\frac{\partial }{\partial T}\frac{\partial L}{\partial \left(\frac{\partial \varphi }{\partial T}\right)}-\frac{\partial }{\partial X_\mu }\frac{\partial L}{\partial \left(\frac{\partial \varphi }{\partial X_\mu }\right)}=0.$$
(43)
Substituting $`L`$ from Equation (40) and reducing by use of Equation (42) gives
$$k\frac{\partial ^2\varphi }{\partial X_\mu ^2}-\frac{1}{2}\frac{\partial \lambda }{\partial X_\mu }\frac{\partial \lambda }{\partial X_\mu }-\frac{\partial \lambda }{\partial T}=0,$$
(44)
which must be solved together with the continuity equation. The continuity equation, after substitution from Equation (42), is
$$\frac{\partial \varphi }{\partial T}+\frac{\partial }{\partial X_\mu }\left(\varphi \frac{\partial \lambda }{\partial X_\mu }\right)=0.$$
(45)
A closed form solution to the Lagrange equation that satisfies continuity, is:
$$\varphi =\frac{K}{T^N}\text{ and }\lambda =\frac{1}{2}\frac{R^2}{T},$$
(46)
where $`N`$ is the number of spatial dimensions of the universe \[which is at least four to include $`(W,X,Y,Z)`$\], and where $`R=(X_\mu X_\mu )^{1/2}`$. The value of $`K`$ is undetermined. At $`T=0`$ both scalar fields are infinite exposing the same "Big Bang" type singularity as seen earlier.
The dimensional dependence of $`\varphi `$ in Equation (46) concurs with the analysis in Section 3.5. In particular, it shows the volume of each observable universe expanding as $`T^4`$ for a $`(W,X,Y,Z)`$ space, which corresponds to each observable universe expanding both due to the increasing radius and due to its increasing thickness, $`ϵ`$, as predicted by relativity arguments in the earlier section.
The radial velocity of each point in the universe is computed by differentiating $`\lambda `$ as per Equation (42). The result is
$$V_R=R/T,$$
(47)
which is consistent with Equation (33) and determines that $`R_i(0)=0`$. Thus, for this solution, the universe was collapsed to a point at $`T=0`$ which would be in accord with "Big Bang" theories.
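A symbolic verification of this closed-form solution (not in the original, and relying on the placement of the partial derivatives in Equations (44)-(45) as reconstructed above) can be carried out with sympy; it confirms that Equations (44) and (45) are satisfied and that the radial velocity is $`R/T`$.

```python
import sympy as sp

N = 4
K, k, T = sp.symbols('K k T', positive=True)
X = sp.symbols('X1:%d' % (N + 1), real=True)       # X1..X4 stand for (W,X,Y,Z)
R2 = sum(x**2 for x in X)

phi = K / T**N
lam = R2 / (2 * T)                                  # candidate solution, Eq. (46)

V = [sp.diff(lam, x) for x in X]                    # Eq. (42): V_mu = d(lambda)/dX_mu

# Eq. (44): k*Laplacian(phi) - (1/2)*|grad lambda|^2 - d(lambda)/dT = 0
eq44 = k * sum(sp.diff(phi, x, 2) for x in X) \
       - sp.Rational(1, 2) * sum(v**2 for v in V) - sp.diff(lam, T)
# Eq. (45): d(phi)/dT + div(phi * grad lambda) = 0
eq45 = sp.diff(phi, T) + sum(sp.diff(phi * v, x) for x, v in zip(X, V))

print(sp.simplify(eq44), sp.simplify(eq45))         # both should print 0
print(sp.simplify(sum(x * v for x, v in zip(X, V)) / sp.sqrt(R2)))  # radial velocity = R/T
```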
An alternate closed form solution for this Lagrangian is:
$`\varphi ={\displaystyle \frac{(N+1)}{(2k)(N+2)(4+2N)^2}}{\displaystyle \frac{R^4}{T^2}},and`$ (48)
$`\lambda ={\displaystyle \frac{1}{4+2N}}{\displaystyle \frac{R^2}{T}}.`$ (49)
This solution provides $`R_i(0)>0`$. Thus if the actual universe obeys this solution, then the universe was not collapsed to a point at $`T=0`$, contrary to "Big Bang" theories.
Both of the solutions show closure of the universe even before hyper-particles or mass are introduced.
## IV Future Directions
A key future goal is to incorporate hyper-particles into the Lagrangian as perturbations on $`\varphi `$ and $`\lambda `$, and to solve thereby for the structure of the fundamental particles. That further may lead to a derivation of the forces between particles as interactions among those perturbations and to a determination of the fundamental constants and any scaling of those constants over time and space. It may further lead to an exploration of a four-dimensional Euclidean momentum with possible conservation laws and, separately to an exploration of the relation between the presented theory and standard string theories.
Mapping general relativity into the Euclidean formulation may be achievable by an investigation of the relationship between gravity and the deformations of $`F_i`$ near hyper-particles. The possibility of non-gravitationally induced refraction on astronomic scales may imply that some phenomena currently ascribed to dark matter (13) may actually be unrelated to gravity and due rather to the new phenomenon shown here to cause the refraction that closes the universe.
Other unknowns to be investigated include the possibility of freeing oneself from a particular observable universe to directly observe others, and the possibility of additional Euclidean dimensions as might be inferred from experimental observations. Additionally, a goal is to determine the thickness of the observable universe (a value for $`ϵ`$). Also the ratio between the expansion rate of the universe and the speed of light is of fundamental interest.
## V Conclusions
The separation of Lorentz time into three parameters – global time, a global spatial coordinate, and a local rate of aging that is a function of position and time, – has enabled successful formulation of relativity in a global $`(W,X,Y,Z,T)`$ Euclidean space. The new formulation agrees with the main predictions of special relativity, provides re-interpretation of some relativistic phenomena, and provides new predictions. Some of the predictions are supported by existing experimental evidence. Others may be tested by future experiments. The predictions are derived by the application of the principle of relativity via geometry and supported by a new Lagrangian describing the universe as an ideal cosmological fluid. Sample predictions include:
* The age of the universe as measured by the Hubble constant is finite, while that measured by system evolution is infinite.
* The universe is closed by refraction without gravity or dark matter, implying a new phenomenon at astronomic scales.
* The speed of light varies throughout the universe.
* Parallel universes exist and are correlated as "past", "present", "future".
# An exit-time approach to ϵ-entropy
## Abstract
An efficient approach to the calculation of the $`ϵ`$-entropy is proposed. The method is based on the idea of looking at the information content of a string of data, by analyzing the signal only at the instants when the fluctuations are larger than a certain threshold $`ϵ`$, i.e., by looking at the exit-time statistics. The practical and theoretical advantages of our method with respect to the usual one are shown by the examples of a deterministic map and a self-affine stochastic process.
PACS: 05.45.-a, 05.45.Tp
The problem of quantifying the degree of complexity of an evolving system is ubiquitous in natural science (for a nice review see ). The typical questions may range from the aim to distinguish between stochastic or chaotic systems to the more pragmatic goal of determining the degree of complexity (read predictability) at varying the resolution in phase-space and time .
From the pioneering works of Shannon and Kolmogorov , a proper mathematical tool, the Kolmogorov-Sinai ($`KS`$) entropy, $`h_{KS}`$, has been developed to address the above question quantitatively and unambiguously (in principle). The main idea is very natural: we must look at the information contained in the time evolution as a probe of the underlying dynamics. This is realized by studying the symbolic dynamics obtained by assigning different symbols to different cells of a finite partition of the phase space. The probability distribution of allowed sequences (words) is determined by the dynamical evolution. The average information-gain is defined by comparing sequences of length $`m`$ and $`m+1`$, in the limit of large $`m`$. Letting the length of the words, $`m`$, to infinity and going to infinitely refined partition, one obtains the $`KS`$-entropy, which is a measure of the degree of complexity of the trajectory. The $`KS`$-entropy determines also the rate of information transmission necessary to unambiguously reconstruct the signal.
Unfortunately, only in simple, low dimensional, dynamical systems such a procedure can be properly carried out with conventional methods . The reason is that for high dimensional systems the computational resources are not sufficient to cope with the very high resolution and extremely long time series required. Moreover, in many systems, like in turbulence, the existence of non-trivial fluctuations on different time and spatial scales cannot be captured by the $`KS`$-entropy. This calls for a more general tool to quantify the degree of predictability which depends on the analyzed range of scales and frequencies. This was the aim leading Shannon and Kolmogorov to introduce the so-called $`ϵ`$-entropy, later generalized to the $`(ϵ,\tau )`$-entropy . Conceptually it corresponds to the rate of information transmission necessary to reconstruct a signal with a finite accuracy $`ϵ`$, and with a sampling time interval $`\tau `$. The naive $`(ϵ,\tau )`$-computation is usually performed by looking at the Shannon entropy of the coarse grained dynamics on an $`(ϵ,\tau )`$-grid in the phase-space and time. This method suffers of so many computational drawbacks that it is almost useless for many realistic time-series . Another attempt in this direction is the introduction of the Finite Size Lyapunov Exponent .
The aim of this letter is to introduce an alternative approach for the determination of the $`(ϵ,\tau )`$-entropy, based on the analysis of exit-times. In a few words, the idea consists in looking at the information-content of a string of data, without analyzing the signal at any fixed time, $`\tau `$, but only when the fluctuations are larger than some fixed threshold, $`ϵ`$. This simple observation allows for a remarkable improvement of the computational possibility to measure the $`(ϵ,\tau )`$-entropy as will be discussed in detail later.
We believe that the approach presented hereafter is unavoidable in all those cases when either the high-dimensionality of the underlying phase space, or the necessity to disentangle non-trivial correlations at different analyzed time scales, leads to the failure of the standard methods.
Let us just briefly recall the conventional way to calculate the $`(ϵ,\tau )`$-entropy for the case of a time-continuous signal $`x(t)\mathrm{I}R`$, recorded during a (long) time interval $`T`$. One defines an $`ϵ`$-grid on the phase-space and a $`\tau `$-grid on time. If the motion is bounded, the trajectory visits only a finite number of cells; therefore to each subsequence of length $`n\tau `$ from $`x(t)`$ one can associate a word of length $`n`$, out of a finite alphabet: $`W_t^n(ϵ,\tau )=(S_t,S_{t+\tau },\mathrm{},S_{t+(n1)\tau })`$, where $`S_t`$ labels the cell containing $`x(t)`$. From the probability distribution of the above words one calculates the block entropies $`H_n(ϵ,\tau )`$:
$$H_n(ϵ,\tau )=\underset{\{W^n(ϵ,\tau )\}}{}P(W^n(ϵ,\tau ))\mathrm{ln}P(W^n(ϵ,\tau )).$$
(1)
The $`(ϵ,\tau )`$-entropy per unit time, $`h(ϵ,\tau )`$, is finally defined as:
$`h_n(ϵ,\tau )`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}[H_{n+1}(ϵ,\tau )-H_n(ϵ,\tau )],`$ (2)
$`h(ϵ,\tau )`$ $`=`$ $`\underset{n\rightarrow \infty }{lim}h_n(ϵ,\tau )={\displaystyle \frac{1}{\tau }}\underset{n\rightarrow \infty }{lim}{\displaystyle \frac{1}{n}}H_n(ϵ,\tau ),`$ (3)
where for practical reasons the dependence on the details of the partition is ignored, while the rigorous definition is given in terms of the infimum over all possible partitions with elements of diameter smaller than $`ϵ`$ . Notice that the above defined $`(ϵ,\tau )`$-entropy is nothing but the Shannon-entropy of the sequence of symbols $`(S_t,S_{t+\tau },\mathrm{})`$ associated with the given signal. The Kolmogorov-Sinai entropy, $`h_{KS}`$, is obtained by taking the limit $`(ϵ,\tau )0`$:
$$h_{KS}=\underset{\tau \rightarrow 0}{lim}\underset{ϵ\rightarrow 0}{lim}h(ϵ,\tau ).$$
(4)
In the case of discrete-time systems, one can define $`h(ϵ)\equiv h(ϵ,\tau =1)`$, and $`h_{KS}=lim_{ϵ\rightarrow 0}h(ϵ)`$. In continuous-time evolutions, whose realizations are continuous functions of time, the $`\tau `$ dependence disappears from $`h(ϵ,\tau )`$, so one can still define an $`ϵ`$-entropy per unit time $`h(ϵ)`$. In particular, also for a purely deterministic flow one can put $`h(ϵ)=h(ϵ,\tau =1)`$.
Let us recall that for a genuine deterministic chaotic system one has $`0<h_{KS}<\infty `$ ($`h_{KS}=0`$ for a regular motion), while for a continuous random process $`h_{KS}=\infty `$. Therefore, in order to distinguish between a purely deterministic system and a stochastic system it is necessary to perform the limit $`ϵ\rightarrow 0`$ in (4). Obviously, from a physical and/or numerical point of view this is impossible. Nevertheless, by looking at the behavior of the $`(ϵ,\tau )`$-entropy at varying $`ϵ`$ one can have some qualitative and quantitative insights on the chaotic or stochastic nature of the process. For most of the usual stochastic processes one can explicitly give an estimate of the entropy scaling behavior when $`ϵ\rightarrow 0`$. For instance, in the case of a stationary Gaussian process one has
$$h(ϵ)\equiv \underset{\tau \rightarrow 0}{lim}h(ϵ,\tau )\sim \frac{1}{ϵ^2}.$$
(5)
Let us now introduce the main point of this letter by discussing in detail the difficulties that may arise in measuring the $`ϵ`$-entropy for the following non-trivial example of a chaotic-diffusive map ,
$$x_{t+1}=x_t+p\mathrm{sin}2\pi x_t.$$
(6)
When $`p>0.7326\mathrm{\dots }`$, this map produces a diffusive behavior on large scales, so one expects
$$h(ϵ)\approx \lambda \text{ for }ϵ<1;h(ϵ)\approx \frac{D}{ϵ^2}\text{ for }ϵ>1,$$
(7)
where $`\lambda `$ is the Lyapunov exponent and $`D`$ is the diffusion coefficient. The numerical computation of $`h(ϵ)`$, using the standard codification, is highly non-trivial already in this simple system. This can be seen by looking at Fig. 1 where the behavior (7) in the diffusive region is roughly obtained by considering the envelope of $`h_n(ϵ,\tau )`$ evaluated for different values of $`\tau `$; while looking at any single (small) value of $`\tau `$ (one would like to put $`\tau =1`$) one obtains a rather inconclusive result. This is due to the fact that one has to consider very large block lengths $`n`$ when computing $`h(ϵ,\tau )`$, in order to obtain a good convergence for $`H_n(ϵ,\tau )-H_{n-1}(ϵ,\tau )`$ in (3). Indeed, in the diffusive regime, a simple dimensional argument shows that the characteristic time of the system is $`T_ϵ\sim ϵ^2/D`$. If we consider, for example, $`ϵ=10`$ and typical values of the diffusion coefficient $`D\approx 10^{-1}`$, the characteristic time, $`T_ϵ`$, is much larger than the elementary sampling time $`\tau =1`$.
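As a practical illustration (our own, with an assumed $`p=0.8`$ inside the diffusive regime $`p>0.7326\mathrm{\dots }`$), the following Python sketch iterates the map of Equation (6) and estimates the diffusion coefficient $`D`$ from an ensemble of trajectories.

```python
import numpy as np

def iterate_map(x0, p, n_steps):
    """Iterate x_{t+1} = x_t + p*sin(2*pi*x_t) (Eq. (6)) for an array of initial points."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x + p * np.sin(2.0 * np.pi * x)
    return x

rng = np.random.default_rng(0)
p, n_steps, n_traj = 0.8, 10_000, 2_000      # p = 0.8 is an assumed value in the diffusive regime
x0 = rng.uniform(0.0, 1.0, n_traj)
xT = iterate_map(x0, p, n_steps)

# Diffusion coefficient from <(x_T - x_0)^2> ~ 2 D T
D = np.mean((xT - x0) ** 2) / (2.0 * n_steps)
print(f"estimated D ~ {D:.3f}")              # compare with the D ~ 1e-1 order of magnitude quoted above
```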
Our approach to calculate $`h(ϵ)`$ differs from the usual one in the procedure to construct the coding sequence of the signal at a given level of accuracy. Specifically, we use a different way to sample the time, i.e., instead of using a constant time interval, $`\tau `$, we sample according to the exit-time, $`t(ϵ)`$, on an alternating grid of cell size $`ϵ`$. We consider the original continuous-time record $`x(t)`$ and a reference starting time $`t=t_0`$; the subsequent exit-time, $`t_1`$, is then defined as the first time necessary to have an absolute variation equal to $`ϵ/2`$ in $`x(t)`$, i.e., $`|x(t_0+t_1)-x(t_0)|\ge ϵ/2`$. This is the time the signal takes to exit the cell of size $`ϵ`$. Then we restart from $`t_1`$ to look for the next exit-time $`t_2`$, i.e., the first time such that $`|x(t_0+t_1+t_2)-x(t_0+t_1)|\ge ϵ/2`$ and so on. Let us notice that, with this definition, the coarse-graining grid is not fixed, but it is always centered in the last exit position. In this way we obtain a sequence of exit-times, $`\{t_i(ϵ)\}`$. To distinguish the direction of the exit (up or down out of a cell), we introduce the label $`k_i=\pm 1`$, depending on whether the signal is exiting above or below. Doing so, the trajectory is univocally coded with the required accuracy by the sequence $`((t_1,k_1),(t_2,k_2),\mathrm{\dots },(t_M,k_M))`$, where $`M`$ is the total number of exit-time events observed during the total time $`T`$. Correspondingly, an “exit-time word” of length $`n`$ is a sequence of couples of symbols $`\mathrm{\Omega }_i^n(ϵ)=((t_i,k_i),(t_{i+1},k_{i+1}),\mathrm{\dots },(t_{i+n-1},k_{i+n-1}))`$. From these words one first calculates the block entropies, $`H_n^\mathrm{\Omega }(ϵ)`$, and then the exit-time $`ϵ`$-entropies: $`h^\mathrm{\Omega }(ϵ)\equiv lim_{n\rightarrow \infty }[H_{n+1}^\mathrm{\Omega }(ϵ)-H_n^\mathrm{\Omega }(ϵ)]`$. Let us notice that $`h^\mathrm{\Omega }(ϵ)`$ is an $`ϵ`$-entropy per exit and that $`M=T/\langle t(ϵ)\rangle `$. The exit-time coding is a faithful reconstruction with accuracy $`ϵ`$ of the original signal. Therefore, the total entropy, $`Mh^\mathrm{\Omega }(ϵ)`$, of the exit-time sequence, $`\mathrm{\Omega }^M(ϵ)`$, is equal to the total entropy, $`Th(ϵ)=N\tau h(ϵ)`$, of the standard codification sequence, $`W^N(ϵ)`$. Namely, for the $`ϵ`$-entropy per unit time, we obtain:
$$h(ϵ)=Mh^\mathrm{\Omega }(ϵ)/T=\frac{h^\mathrm{\Omega }(ϵ)}{\langle t(ϵ)\rangle }.$$
(8)
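A minimal Python sketch of the exit-time coding described above (function and variable names are ours, not from the paper): given a sampled signal and a threshold $`ϵ`$, it returns the exit times $`t_i`$ and exit directions $`k_i=\pm 1`$.

```python
import numpy as np

def exit_time_coding(x, eps, dt=1.0):
    """Return exit times t_i and exit directions k_i = +/-1 for a signal x sampled every dt.

    Following the construction in the text: starting from the last exit position,
    record the first time the signal changes by at least eps/2, and whether it
    exited upward (+1) or downward (-1); then recentre the grid there.
    """
    times, labels = [], []
    ref = x[0]          # last exit position (grid recentred here)
    last_i = 0
    for i in range(1, len(x)):
        delta = x[i] - ref
        if abs(delta) >= eps / 2.0:
            times.append((i - last_i) * dt)
            labels.append(1 if delta > 0 else -1)
            ref, last_i = x[i], i
    return np.array(times), np.array(labels)

# Example: exit times of a Brownian path at threshold eps = 0.5
rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(0.0, 0.05, 100_000))
t, k = exit_time_coding(signal, eps=0.5)
print(len(t), t.mean())   # number of exits M and mean exit time <t(eps)>
```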
Now we are left with the determination of $`h^\mathrm{\Omega }(ϵ)`$. This implies a discretization, $`\tau _e`$, of the exit-times. The exit-time entropy $`h^\mathrm{\Omega }(ϵ)`$ becomes an exit-time $`(ϵ,\tau _e)`$-entropy, $`h^\mathrm{\Omega }(ϵ,\tau _e)`$, obtained from the sequence $`\{\eta _i,k_i\}`$, where $`\eta _i`$ identifies the exit-time cell containing $`t_i`$. Equation (8) becomes now
$$h(ϵ)=\underset{\tau _e\rightarrow 0}{lim}h^\mathrm{\Omega }(ϵ,\tau _e)/\langle t(ϵ)\rangle \approx h^\mathrm{\Omega }(ϵ,\tau _e)/\langle t(ϵ)\rangle ,$$
(9)
the latter relation being valid for small enough $`\tau _e`$. However, in all practical situations there exist a minimum $`\tau _e`$ given by the highest acquisition frequency, i.e., the limit $`\tau _e0`$ cannot be reached. The discretization interval $`\tau _e`$ can be thought as the equivalent to the $`\tau `$ entering in the usual $`(ϵ,\tau )`$-entropy, so that
$$h(ϵ,\tau _e)=h^\mathrm{\Omega }(ϵ,\tau _e)/\langle t(ϵ)\rangle .$$
(10)
At this point it is important to stress that in most of the cases the leading $`ϵ`$ contribution to $`h(ϵ)`$ in (9) is given by the mean exit-time $`t(ϵ)`$ and not by $`h^\mathrm{\Omega }(ϵ,\tau _e)`$. Anyhow, the computation of $`h^\mathrm{\Omega }(ϵ,\tau _e)`$ is compulsory in order to recover a zero entropy for regular (e.g. periodic) signals. It is easy to obtain the following bounds for $`h^\mathrm{\Omega }(ϵ,\tau _e)=h^\mathrm{\Omega }(\{\eta _i,k_i\})`$ :
$$h^\mathrm{\Omega }(\{k_i\})\le h^\mathrm{\Omega }(\{\eta _i,k_i\})$$
$$h^\mathrm{\Omega }(\{\eta _i,k_i\})\le h^\mathrm{\Omega }(\{\eta _i\})+h^\mathrm{\Omega }(\{k_i\})\le h^\mathrm{\Omega }(\{k_i\})+H_1^\mathrm{\Omega }(\{\eta _i\})$$
where $`h^\mathrm{\Omega }(\{k_i\})`$ is the Shannon entropy of the sequence $`\{k_i\}`$ and $`H_1^\mathrm{\Omega }(\{\eta _i\})`$ is the one-symbol entropy of the $`\{\eta _i\}`$. Therefore we have
$$\frac{h^\mathrm{\Omega }(\{k_i\})}{\langle t(ϵ)\rangle }\le h(ϵ)\le \frac{h^\mathrm{\Omega }(\{k_i\})+c(ϵ)+\mathrm{ln}(\langle t(ϵ)\rangle /\tau _e)}{\langle t(ϵ)\rangle }$$
(11)
where $`c(ϵ)=-\int p(z)\mathrm{ln}p(z)dz`$, and $`p(z)`$ is the probability distribution function of the rescaled exit-time $`z(ϵ)=t(ϵ)/\langle t(ϵ)\rangle `$.
We shall see in the following that the above bounds are rather good, and typically $`t(ϵ)`$ shows the same scaling behavior as $`h(ϵ)`$. One could wonder why the exit-time approach is better than the usual one. The reason is simple (and somehow deep): in the exit-time approach it is not necessary to use a very large block size since, at fixed $`ϵ`$, the typical time at that scale is automatically given by $`t(ϵ)`$. This fact is particularly clear in the case of Brownian motion. In such a case $`t(ϵ)ϵ^2/D`$, where $`D`$ is the diffusion coefficient. As previously discussed, the computation of the $`h(ϵ)`$ with the standard methods implies the use of very large block sizes, of order $`ϵ^2/D`$.
With our method the $`\langle t(ϵ)\rangle `$ captures the correct scaling behavior and the exit-time entropy introduces, at worst, a sub-leading logarithmic contribution $`h^\mathrm{\Omega }(ϵ,\tau _e)\sim \mathrm{ln}(\langle t(ϵ)\rangle /\tau _e)`$. This is because $`c(ϵ)`$ is $`O(1)`$ and independent of $`ϵ`$ for a self-affine signal and the $`h^\mathrm{\Omega }(\{k_i\})\le \mathrm{ln}2`$ term is small compared with $`\mathrm{ln}(\langle t(ϵ)\rangle /\tau _e)`$ (for not too small $`ϵ`$), so that, neglecting the logarithmic corrections, $`h(ϵ)\sim 1/\langle t(ϵ)\rangle \sim Dϵ^{-2}`$.
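The $`\langle t(ϵ)\rangle \sim ϵ^2/D`$ behavior invoked here is easy to check numerically; the sketch below (ours, with an arbitrary step size) measures the mean exit time of a discrete Brownian path at several thresholds.

```python
import numpy as np

def mean_exit_time(x, eps, dt=1.0):
    """Mean time for the signal to first move by eps/2 from the last exit position."""
    ref, last_i, times = x[0], 0, []
    for i in range(1, len(x)):
        if abs(x[i] - ref) >= eps / 2.0:
            times.append((i - last_i) * dt)
            ref, last_i = x[i], i
    return np.mean(times)

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0.0, 0.1, 1_000_000))   # Brownian path, D = 0.1^2/2 = 5e-3
for eps in (0.5, 1.0, 2.0, 4.0):
    print(f"eps = {eps:>3}: <t(eps)> = {mean_exit_time(x, eps):8.1f}")
# Doubling eps should roughly quadruple <t(eps)>, i.e. <t(eps)> ~ eps^2 / D.
```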
Let us now briefly comment the limit $`ϵ0`$ for a discrete-time system (e.g. maps). In this limit the exit-time approach coincides with the usual one: we have just to observe that practically the exit-times always coincide with the minimum sampling time and to consider the possibility to have jumps over more than one cell, i.e., the $`k_i`$ symbols may take values $`\pm 1,\pm 2,\mathrm{}`$.
As another example, we present the calculation of $`ϵ`$-entropy for a self-affine stochastic signal with Hölder-exponent $`\xi =1/3`$, i.e., $`|x(t)-x(t+\mathrm{\Delta }t)|\sim (\mathrm{\Delta }t)^{1/3}`$. Such a signal can be seen as a stochastic surrogate of a turbulent signal (ignoring intermittency) and can be constructed in different ways (see Ref. and references therein). A simple dimensional estimate, which is rigorous for Gaussian processes, tells us that the leading contribution to the $`ϵ`$-entropy scaling is given by $`h(ϵ)\sim ϵ^{-3}`$. To generate the self-affine signal we use a recently proposed algorithm, where $`x(t)`$ is obtained by using many Langevin processes. In Fig. 3 we show the bounds (11) for the $`(ϵ,\tau _e)`$-entropy calculated via the exit-time approach. We observe an extended region of well-defined scaling, which is the same for $`1/\langle t(ϵ)\rangle \sim ϵ^{-3}`$. The usual approach (not shown) gives a poor estimate for the scaling as the envelope of $`h(ϵ,\tau )`$ computed for various $`\tau `$ (see for example Figs. 15-18 in ), as in the case of Fig. 1.
In conclusion, we have introduced an efficient method to calculate the $`ϵ`$-entropy based on the analysis of the exit-time statistics. It is able to disentangle in a more proper way the leading contributions to $`h(ϵ)`$ at the scale $`ϵ`$, compared with the standard way based on a coarse-grained dynamics on a fixed $`(ϵ,\tau )`$ grid. We have presented the application to two examples, a chaotic diffusive map and a stochastic self-affine signal. More applications to chaotic systems, stochastic multi-affine processes, and experimental turbulent signals will be reported elsewhere.
The entropy-approach to evaluate the degree of complexity of a time series allows also to attack much more sophisticated problems often encountered in dynamical system theory. We just mention, e.g., the problem to disentangle correlations between time series obtained by measuring different observable of the same system. This might be addressed by using the conditional-entropy.
We acknowledge useful discussions with G. Boffetta and A. Celani. This work has been partially supported by INFM (PRA-TURBO) and by the European Network Intermittency in Turbulent Systems (contract number FMRX-CT98-0175). M.A. is supported by the European Network Intermittency in Turbulent Systems. |
## 1 Preamble
First, a historical perspective:
What would a conference on ‘cosmology’ have been like in earlier decades? In the first half of the century the agenda would have been almost solely theoretical. The standard models date from the 1920s and 1930s; the Hubble expansion was first claimed in 1929, but not until the 1950s was there any prospect of discriminating among the various models, Indeed, there was even then little quantitative data on how closely any isotropic homogeneous model fitted the actual universe.
A cosmology meeting in the 1950s would have focussed on the question: is there evidence for evolution, or is the universe in a steady state? Key protagonists on the theoretical side would have included Hoyle and Bondi. Ryle would have been arguing that counts of radio sources – objects so powerful that many lay beyond the range of optical telescopes – already offered evidence for cosmic evolution; and Sandage would have advocated the potential of the Mount Palomar 200 inch telescope for extending the Hubble diagram far enough to probe the deceleration. Intimations from radio counts that the universe was indeed evolving, were strengthened after 1963 by the redshift data on quasars.
The modern era of physical cosmology of course began in 1965, when the discovery of the microwave background brought the early ‘fireball phase’ into the realm of empirical science, and the basic physics of the ‘hot big bang’ was worked out. (The far earlier contributions by Gamow’s associates, Alpher and Herman, continued, however, to be under-appreciated). There was also substantial theoretical work on anisotropic models, etc. Throughout the 1970s this evidence for the ‘hot big bang’ firmed up, as did the data on light elements, and their interpretation as primordial relics.
Theoretical advances in the 1980s gave momentum to the study of the ultra-early universe, and fostered the ‘particle physics connection’: the sociological mix of cosmologists changed. There was intense discussion of inflationary models, non-baryonic matter, and so forth.
Here we are in the late 1990s with a still larger and more diverse community. The pace of cosmology has never been faster. We’re witnessing a crescendo of discoveries that promises to continue into the next millennium. This is because of a confluence of developments:
1. $`\underset{¯}{\text{The microwave background fluctuations}}`$ : these are now being probed with enough sensitivity to provide crucial tests of inflation and discrimination among different models.
2. $`\underset{¯}{\text{The high-redshift universe}}`$: the Hubble Space Telescope (HST) has fulfilled its potential; two Keck Telescopes have come on line, along with the first VLT telescope, Subaru, and the first Gemini telescope. These have opened up the study of ‘ordinary’ galaxies right back to large redshifts, and to epochs when they were newly formed. In the coming year, three new X-ray telescopes will offer higher resolution and higher sensitivity for the study of distant galaxies and clusters.
3. $`\underset{¯}{\text{Large scale clustering and dynamics}}`$: big surveys currently in progress are leading to far larger and more reliable samples, from which we will be able to infer the quantitative details of how galaxies of different types are clustered, and how their motions deviate from Hubble flow. Simultaneously with this progress, there have been dramatic advances in computer simulations. These now incorporate realistic gas dynamics as well as gravity.
4. $`\underset{¯}{\text{Developments in fundamental physics}}`$ offer important new speculative insights, which will certainly figure prominently in our discussions of the ultra-early universe.
It is something of a coincidence – of technology and funding – that the impetus on all these fronts has been more or less concurrent.
Max Planck claimed that theories are never abandoned until their proponents are all dead: that’s too cynical, even in cosmology! Some debates have been settled; some earlier issues are no longer controversial; some of us change our minds (quite frequently, sometimes). And as the consensus advances, new questions which couldn’t even have been posed in earlier decades are now being debated. This conference’s agenda is therefore a ‘snapshot’ of evolving knowledge, opinion and speculation.
Consider the following set of assertions – a typical utterance of the r-m-s cosmologist whom you might encounter on a Cambridge street. Our universe is expanding from a hot big bang in which the light elements were synthesised. There was a period of inflation, which led to a ‘flat’ universe today. Structure was ‘seeded’ by gaussian irregularities, which are the relics of quantum fluctuations, and the large-scale dynamics is dominated by ‘cold’ dark matter, but $`\mathrm{\Lambda }`$ (or quintessence) is dynamically dominant.
I’ve written it like nine lines of ‘free verse’ to highlight how some claims are now quite firm, but others are fragile and tentative. Line one is now a quite ancient belief: it would have rated 99 percent confidence for several decades. Line two represents more recent history, but would now be believed almost equally strongly. So, probably, would line three, owing to the improved observations of abundances, together with refinements in the theory of cosmic nucleosynthesis. The concept of inflation is now 20 years old; most cosmologists suspect that it is was indeed a crucial formative process in the ultra-early universe, and this conference testifies to the intense and sophisticated theorising that it still stimulates.
Lower down in my list of statements, the confidence level drops below 50 percent. The ‘stock’ in some items – for instance CDM models, which have had several ‘deaths’ and as many ‘resurrections’ – is volatile, fluctuating year by year! The most spectacular ‘growth stock’ now (but for how long?) is the cosmological constant, lambda.
## 2 The Cosmological Numbers
Traditionally, cosmology was the quest for a few numbers. The first were H, q, and $`\mathrm{\Lambda }`$. Since 1965 we’ve had another: the baryon/photon ratio. This is believed to result from a small favouritism for matter over antimatter in the early universe – something that was addressed in the context of ‘grand unified theories’ in the 1970s. (Indeed, baryon non-conservation seems a prerequisite for any plausible inflationary model. Our entire observable universe, containing at least $`10^{79}`$ baryons, could not have inflated from something microscopic if baryon number were strictly conserved.)
In the 1980s non-baryonic matter became almost a natural expectation, and $`\mathrm{\Omega }_b/\mathrm{\Omega }_{\mathrm{CDM}}`$ is another fundamental number.
Another especially important dimensionless number, $`\mathrm{Q}`$, tells us how smooth the universe is. It’s measured by
— The Sachs-Wolfe fluctuations in the microwave background
— the gravitational binding energy of clusters as a fraction of their rest mass
— or by the square of the typical scale of mass clustering as a fraction of the Hubble scale.
It’s of course oversimplified to represent this by a single number Q, but insofar as one can, its value is pinned down to be $`10^{-5}`$. (Detailed discussions introduce further numbers: the ratio of scalar and tensor amplitudes, and quantities such as the ‘tilt’, which measure the deviation from a pure scale-independent Harrison-Zeldovich spectrum.)
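As a rough numerical illustration of why these different measures all point to the same small number, the following Python sketch evaluates Q both from the binding energy of a rich cluster and from the CMB temperature fluctuations. The cluster mass and radius and the value of $`\mathrm{\Delta }T/T`$ used here are assumed fiducial numbers, not values quoted in this talk.

```python
# Back-of-the-envelope estimate of the fluctuation amplitude Q (cgs units throughout).
G     = 6.674e-8          # gravitational constant
c     = 2.998e10          # speed of light
M_sun = 1.989e33
pc    = 3.086e18

# (a) gravitational binding energy of a rich cluster as a fraction of its rest mass
M_cluster = 1e15 * M_sun          # assumed cluster mass
R_cluster = 3e6 * pc              # assumed radius, roughly 3 Mpc
Q_cluster = G * M_cluster / (R_cluster * c**2)

# (b) Sachs-Wolfe fluctuations: Delta T / T is roughly Q/3 on large angular scales
dT_over_T = 1e-5                  # assumed order of magnitude
Q_cmb = 3.0 * dT_over_T

print(f"Q from cluster binding energy : {Q_cluster:.1e}")
print(f"Q from CMB fluctuations       : {Q_cmb:.1e}")
# Both routes give Q of order 1e-5, as stated in the text.
```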
What’s crucial is that Q is small. Numbers like $`\mathrm{\Omega }`$ and H are only well-defined insofar as the universe possesses ‘broad brush’ homogeneity – so that our observational horizon encompasses many independent patches each big enough to be a fair sample. This wouldn’t be so, and the simple Friedmann models wouldn’t be useful approximations, if Q weren’t much less than unity. Q’s smallness is necessary if the universe is to look homogeneous. But it isn’t, strictly speaking, a sufficient condition – a luminous tracer that didn’t weigh much could be correlated on much larger scales without perturbing the metric. Simple fractal models for the luminous matter are nonetheless, as Lahav will discuss, strongly constrained by other observations such as the isotropy of the X-ray background, and of the radio sources detected in deep surveys.
## 3 How confident can we be of our models?
If our universe has indeed expanded, Friedmann-style, from an exceedingly high density, then after the first $`10^{-12}`$ seconds energies are within the range of accelerators. After the first millisecond – after the quark-hadron transition – conditions are so firmly within the realm of laboratory tests that there are no crucial uncertainties in the microphysics (though we should maybe leave our minds at least ajar to the possibility that the constants may still be time-dependent). And everything’s still fairly uniform – perturbations are still in the linear regime.
It’s easy to make quantitative predictions that pertain to this intermediate era, stretching from a millisecond to a million years. And we’ve now got high-quality data to confront them with. The marvellous COBE ‘black body’ pins down the microwave background spectrum to a part in 10,000. The ‘hot big bang’ has lived dangerously for thirty years: it could have been shot down by (for instance) the discovery of a nebula with zero helium, or of a stable neutrino with keV mass; but nothing like this has happened. The debate (concordance or crisis?) now focuses on 1 per cent effects in helium spectroscopy, and on traces of deuterium at very high redshifts. The case for extrapolating back to a millisecond is now compelling and battle-tested. Insofar as there’s a ‘standard model’ in cosmology, this is now surely part of it.
When the primordial plasma recombined, after half a million years, the black body radiation shifted into the infrared, and the universe entered, literally, a dark age. This lasted until the first stars lit it up again. The basic microphysics remains, of course, straightforward. But once non-linearities develop and bound systems form, gravity, gas dynamics, and the physics in every volume of Landau and Lifshitz, combine to unfold the complexities we see around us and are part of.
Gravity is crucial in two ways. It first amplifies ‘linear’ density contrasts in an expanding universe; it then provides a negative specific heat so that dissipative bound systems heat up further as they radiate. There’s no thermodynamic paradox in evolving from an almost structureless fireball to the present cosmos, with huge temperature differences between the 3 degrees of the night sky, and the blazing surfaces of stars.
It is feasible to calculate all the key cosmic processes that occurred between (say) a millisecond and a few million years: the basic physics is ‘standard’ and (according at least to the favoured models) everything is linear. The later universe, after the dark age is over, is difficult for the same reason that all environmental sciences are difficult.
The whole evolving tapestry is, however, the outcome of initial conditions (and fundamental numbers) imprinted in the first microsecond – during the era of inflation and baryogenesis, and perhaps even on the Planck scale . This is the intellectual habitat of specialists in quantum gravity, superstrings, unified theories, and the rest.
So cosmology is a sort of hybrid science. It’s a ‘fundamental’ science, just as particle physics is. But it’s also the grandest of the environmental sciences. This distinction is useful, because it signals to us what levels of explanation we can reasonably expect. The first million years is described by a few parameters: these numbers (plus of course the basic physical laws) determine all that comes later. It’s a realistic goal to pin down their values. But the cosmic environment of galaxies and clusters is now messy and complex – the observational data are very rich, but we can aspire only to an approximate, statistical, or even qualitative ‘scenario’, rather like in geology and paleontology.
The relativist Werner Israel likened this dichotomy to the contrast between chess and mudwrestling. The participants in this meeting would seem to him, perhaps, an ill-assorted mix of extreme refinement and extreme brutishness (just in intellectual style, of course!).
## 4 Complexities of Structure, and Dark Matter
### 4.1 Prehistory of Ideas on Structure Formation
Since we are meeting in the Isaac Newton Institute, it’s fitting to recall that Newton himself envisaged structures forming via ‘gravitational instability’. In an often-quoted letter to Richard Bentley, the Master of Trinity College, he wrote:
“If all the matter of the universe were evenly scattered throughout all the heavens, and every particle had an innate gravity towards all the rest, and … if the matter were evenly dispersed throughout an infinite space, it could never convene into one mass, but some of it would convene into one mass and some into another, so as to make an infinite number of great masses, scattered at great distances from one another throughout all that infinite space. And thus might the sun and fixed stars be formed. … supposing the matter to be of a lucent nature.”
(It would of course be wishful thinking to interpret his last remark as a premonition of dark matter!)
### 4.2 The role of simulations
Our view of cosmic evolution is, like Darwinism, a compelling general scheme. As with Darwinism, how the whole process got started is still a mystery. But cosmology is simpler because, once the starting point is given, the gross features are predictable. The whole course of evolution isn’t, as in biology, sensitive to ‘accidents’. All large patches that start off the same way, end up statistically similar.
That’s why simulations of structure formation are so important. These have achieved higher resolution, and incorporate gas dynamics and radiative effects as well as gravity. They show how density contrasts grow from small-amplitude beginnings; these lead, eventually, to bound gas clouds and to internal dissipation.
Things are then more problematical. We’re baffled by the details of star formation now, even in the Orion Nebula. What chance is there, then, of understanding the first generation of stars, and the associated feedback effects? In CDM-type models, the very first stars form at redshifts of 10-20 when the background radiation provides a heat bath of up to 50 degrees, and there are no heavy elements. There may be no magnetic fields, and this also may affect the initial mass function. We also need to know the efficiency of star formation, and how it depends on the depth of the potential wells in the first structures.
Because these problems are too daunting to simulate ab initio, we depend on parameter-fitting guided by observations. And the spectacular recent progress from 10-metre class ground based telescopes and the HST has been extraordinarily important here.
### 4.3 Observing high redshifts
We’re used to quasars at very high redshifts. But quasars are rare and atypical – we’d really like to know the history of matter in general. One of the most important advances in recent years has been the detection of many hundreds of galaxies at redshifts up to (and even beyond) 5. Absorption due to the hundreds of clouds along the line of sight to quasars probes the history of cosmic gas in exquisite detail, just as a core in the Greenland ice-sheet probes the history of Earth’s climate.
Quasar activity reaches a peak at around $`z=2.5`$. The rate of star formation may peak at somewhat smaller redshifts (even though the very first starlight appeared much earlier). But for at least the last half of its history, our universe has been getting dimmer. Gas gets incorporated in galaxies and ‘used up’ in stars – galaxies mature, black holes in their centres undergo fewer mergers and are starved of fuel, so AGN activity diminishes.
That, at least, is the scenario that most cosmologists accept. To fill in the details will need better simulations. But, even more, it will need better observations. I don’t think there is much hope of ‘predicting’ or modelling the huge dynamic range and intricate feedback processes involved in star formation. A decade from now, when the Next Generation Space Telescope (NGST) flies, we may know the main cosmological parameters, and have exact simulations of how the dark matter clusters. But reliable knowledge of how stars form, when the intergalactic gas is reheated, and how bright the first ‘pregalaxies’ are will still depend on observations. The aim is to get a consistent model that matches not only all we know about galaxies at the present cosmic epoch, but also the increasingly detailed snapshots of what they looked like, and how they were clustered, at all earlier times.
But don’t be too gloomy about the messiness of the ‘recent’ universe. There are some ‘cleaner’ tests. Simulations can reliably predict the present clustering and large-scale distribution of non-dissipative dark matter. This can be observationally probed by weak lensing, large scale streaming, and so forth, and checked for consistency with the CMB fluctuations, which probe the linear precursors of these structures.
### 4.4 Dark matter: what, and how much?
The nature of the dark matter – how much there is and what it is – still eludes us. It’s embarrassing that 90 percent of the universe remains unaccounted for.
This key question may yield to a three-pronged attack:
1. $`\underset{¯}{\text{Direct detection}}`$. Astronomical searches are underway for ‘machos’ in the Galactic Halo; and several groups are developing cryogenic detectors for supersymmetric particles and axions.
2. $`\underset{¯}{\text{Progress in particle physics}}`$. Important recent measurements suggest that neutrinos have non-zero masses; this result has crucially important implications for physics beyond the standard model; however the inferred masses seem too low to be cosmologically important. If theorists could pin down the properties of supersymmetric particles, the number of particles that survive from the big bang could be calculated just as we now calculate the helium and deuterium made in the first three minutes. Optimists may hope for progress on still more exotic options.
3. $`\underset{¯}{\text{Simulations of galaxy formation and large-scale structure}}`$. When and how galaxies form, the way they are clustered, and the density profiles within individual systems, depend on what their gravitationally-dominant constituent is, and are now severely constraining the options.
## 5 Steps Beyond the Simplest Universe: Open Models, $`𝚲`$, etc.
### 5.1 The case for $`\mathrm{\Omega }<1`$
Everyone agrees that the ‘simplest’ universe would be a flat Einstein-de Sitter model. But we shall hear several claims during the present meeting that this model is now hard to reconcile with the data. Several lines of evidence suggest that gravitating CDM contributes substantially less than $`\mathrm{\Omega }_{\mathrm{CDM}}=1`$. The main lines of evidence are
(i) The baryon fraction in clusters is 0.15-0.2. On the other hand, the baryon contribution to omega is now pinned down by deuterium measurements to be around $`\mathrm{\Omega }_bh^2=0.015`$, where $`h`$ is the Hubble constant in units of 100 km/sec/Mpc. If clusters are a fair sample of the universe, then this is incompatible with a dark matter density high enough to make $`\mathrm{\Omega }=1`$ (a quick numerical check is sketched after this list).
(ii) The presence of clusters of galaxies with $`z=1`$ is hard to reconcile with the rapid recent growth of structure that would be expected if $`\mathrm{\Omega }_{\mathrm{CDM}}`$ were unity.
(iii) The Supernova Hubble diagram (even though the case for actual acceleration may not be compelling) seems hard to reconcile with the large deceleration implied by an Einstein-de Sitter model.
(iv) The inferred ages of the oldest stars are only barely consistent with an Einstein-de Sitter model, for the favoured choices of Hubble constant.
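The arithmetic behind argument (i) is short enough to spell out. The sketch below combines the deuterium-based baryon density quoted above with the observed cluster baryon fraction; the Hubble parameter h = 0.65 is an assumption chosen purely for illustration.

```python
# Minimal sketch of argument (i): if clusters are a fair baryon sample,
# Omega_m is the baryon density divided by the cluster baryon fraction.
h = 0.65                              # assumed Hubble parameter
Omega_b = 0.015 / h**2                # from deuterium, as quoted in the text

for f_b in (0.15, 0.20):              # observed cluster baryon fraction
    Omega_m = Omega_b / f_b
    print(f"f_b = {f_b:.2f}  ->  Omega_m ~ {Omega_m:.2f}")
# Both cases give Omega_m well below 1, i.e. incompatible with Einstein-de Sitter.
```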
### 5.2 Open universe, or vacuum energy?
The two currently-favoured options seem to be:
(A) an open model, or else
(B) a flat model where vacuum energy (or some negative-pressure component that didn’t participate in clustering) makes up the balance.
If the universe is a more complicated place than some people hoped, which of these options is the more palatable? Opinions here may differ: How ‘contrived’ are the open-inflation models? Is it even more contrived that the vacuum-energy should have the specific small value that leads it to start dominating just at the present epoch?
Either of these models involves a specific large number. In case (A) this is the ratio of the Robertson Walker curvature scale to the Planck scale; in (B) it is the ratio of vacuum energy to some other (much higher) energy density. At present, (A) seems to accord less well than (B) with the data. In particular, the angular scale of the ‘doppler peaks’ in the CMB angular fluctuations seems to favour a flat universe; and the supernova Hubble diagram indicates an actual acceleration, rather than merely a slight deceleration (as would be expected in the open model).
We will certainly hear a great deal about the mounting evidence for $`\mathrm{\Lambda }`$ (or one of its time-dependent generalisations): the claimed best fit to all current data suggests a non-zero energy in the vacuum. However we should be mindful of the current large scatter in all CMB measurements relevant to the doppler peak, and the various uncertainties (especially those that depend on composition, etc.) in the supernovae from which a cosmic acceleration has been inferred. I think the jury is still out. However, CMB experiments are developing fast, and the high-$`z`$ supernova sample is expanding fast too; so within two years we should know whether there is a vacuum energy, or whether systematic intrinsic differences between high-$`z`$ and low-$`z`$ supernovae are large enough to render the claims spurious. (On the same timescale we should learn whether the Universe actually is flat).
### 5.3 The History of $`\mathrm{\Lambda }`$
I wouldn’t venture bets on the final status of $`\mathrm{\Lambda }`$. It is nonetheless interesting to recall its history. $`\mathrm{\Lambda }`$ was of course introduced by Einstein in 1917 to permit a static unbounded universe. After 1929, the cosmic expansion rendered Einstein’s motivation irrelevant. However, by that time de Sitter had already proposed his expanding $`\mathrm{\Lambda }`$-dominated model. In the 1930s, Eddington and Lemaitre proposed that the universe had expanded (under the action of the $`\mathrm{\Lambda }`$-induced repulsion) from an initial Einstein state. $`\mathrm{\Lambda }`$ fell from favour after the 1930s: relativists disliked it as a ‘field’ acting on everything but acted on by nothing. A brief resurgence in the late 1960s was triggered by a (now discredited) claim for a pile-up in the redshifts of quasars at a value of $`z`$ slightly below 2. (The CMB had already convinced most people that the universe emerged from a dense state, rather than from an Einstein static model, but it could have gone through a coasting or loitering phase where the expansion almost halted. A large range of affine distance would then correspond to a small range of redshifts, thereby accounting for a ‘pile up’ at a particular redshift. It was also noted that this model offered more opportunity for small-amplitude perturbations to grow.)
The ‘modern’ interest in $`\mathrm{\Lambda }`$ stems from its interpretation as a vacuum energy. This leads to the reverse problem: Why is $`\mathrm{\Lambda }`$ $`120`$ powers of 10 smaller than its ‘natural’ value, even though the effective vacuum density must have been very high in order to drive inflation? The interest has of course been hugely boosted recently, through the claims that the Hubble diagram for Type 1A supernovae indicates an acceleration.
(If $`\mathrm{\Lambda }`$ is fully resurrected, it will be a great ‘coup’ for de Sitter. His model, dating from the 1920s, not only describes the dynamics throughout the huge number of ‘e-foldings’ during inflation, but also describes future aeons of our cosmos with increasing accuracy. Only for the 50–odd decades of logarithmic time between the end of inflation and the present does it need modification!).
## 6 Inflation and the Very Early Universe
Numbers like $`\mathrm{\Omega },\left(\mathrm{\Omega }_b/\mathrm{\Omega }_{\mathrm{CDM}}\right),\mathrm{\Lambda }`$ and Q are determined by physics as surely as the He and D abundances – it’s just that the conditions at the ultra-early eras when these numbers were fixed are far beyond anything we can experiment on, so the relevant physics is itself still conjectural.
The inflation concept is the most important single idea. It suggests why the universe is so large and uniform – indeed, it suggests why it is expanding. It was compellingly attractive when first proposed, and most cosmologists (with a few eminent exceptions like Roger Penrose) would bet that it is, in some form, part of the grand cosmic scheme. The details are still unsettled. Indeed, cynics may feel that, since the early 1980s, there’ve been so many transmogrifications of inflation – old, new, chaotic, eternal, and open – that its predictive power is much eroded. (But here again extreme cynicism is unfair.)
We’ll be hearing some discussion of whether inflationary models can ‘naturally’ account for the fluctuation amplitude $`\mathrm{Q}=10^{-5}`$; and, more controversially, whether it’s plausible to have a non-flat universe, or a present-day vacuum energy in the permissible range. It’s important to be clear about the methodology and scientific status of such discussion. I comment with great diffidence, because I’m not an expert here.
This strand of cosmology may still have unsure foundations, but it isn’t just metaphysics: one can test particular variants of inflation. For instance, definite assumptions about the physics of the inflationary era have calculable consequences for the fluctuations –whether they’re gaussian, the ratio of scalar and tensor modes, and so forth – which can be probed by observing large scale structure and, even better, by microwave background observations. Cosmologists observe, stretched across the sky, giant proto-structures that are the outcome of quantum fluctuations imprinted when the temperature was $`10^{15}`$ GeV or above. Measurements with the MAP and Planck/Surveyor spacecraft will surely tell us things about ‘grand unified’ physics that can’t be directly inferred from ordinary-energy experiments.
## 7 The Agenda 10 Years From Now: a Bifurcated Community?
### 7.1 The next five years
The current pace of advance is such that within five years we’ll surely have made substantial further progress. We will not only agree that the value of H is known to 10 percent – we’ll agree what that value is.
We’ll know the key parameters (from high-$`z`$ supernovae, from the CMB, from high-$`z`$ observations, and from improved statistics on large scale clustering and streaming). I’d even bet (though maybe I’m being a bit rash here) that we’ll know what the dominant dark matter is.
### 7.2 Ten years ahead?
If we were to reconvene 10 years from now, what would be the ‘hot topics’ on the agenda? The key numbers specifying our universe – its geometry, fluctuations and content – may by then have been pinned down. I’ve heard people claim that cosmology will thereafter be less interesting – that the most important issues will be settled, leaving only the secondary drudgery of clearing up some details. I’d like to spend a moment trying to counter that view.
It may turn out, of course, that the new data don’t fit at all into the parameter-space that these numbers are derived from. (I was tempted to describe this view as ‘pessimistic’ but of course some people may prefer to live in a more complicated and challenging universe!). But maybe everything will fit the framework, and we will pin down the contributions to $`\mathrm{\Omega }`$ from baryons, CDM, and the vacuum, along with the amplitude and tilt of the fluctuations, and so forth. If that happens, it will signal a great triumph for cosmology – we will know the ‘measure of our universe’ just as, over the last few centuries, we’ve learnt the size and shape of our Earth and Sun.
Our focus will then be redirected towards new challenges, as great as the earlier ones. But the character and ‘sociology’ of our subject will change: it will bifurcate into two sub-disciplines. This bifurcation would be analogous to what actually happened in the field of general relativity 20-30 years ago. The ‘heroic age’ of general relativity – leading to the rigorous understanding of gravitational waves, black holes, and singularities – occurred in the 1960s and early 1970s. Thereafter, the number of active researchers in ‘classical’ relativity declined (except maybe in computational aspects of the subject): most of the leading researchers shifted either towards astrophysically-motivated problems, or towards quantum gravity and ‘fundamental’ physics.
What will be the foci of the two divergent branches of ‘post classical’ cosmology we’ll be pursuing a decade from now? One will be ‘environmental cosmology’ – understanding the evolution of structure, stars and galaxies. The other will focus on the fundamental physics of the ultra-early universe (pre-inflation, m-branes, multiverses, etc). A few words about each of these:
### 7.3 Environmental cosmology: long range prospects
One continuing challenge will be to explore the emergence of structure. This is a tractable problem until the first star (or other collapsed system) forms. But the huge dynamic range and uncertain feedback thereafter renders the phenomena too complex for any feasible simulation.
To illustrate the uncertainty, consider a basic question such as when the intergalactic gas was first photoionized.
There have been many detailed models, but essentially this requires one photon for each baryon (somewhat more, in fact, to compensate for recombinations). A hot (O or B) star produces, over its lifetime, $`10^4`$–$`10^5`$ photons for each of its constituent baryons; if a black hole forms via efficient accretion of baryons, the corresponding number is several times $`10^6`$. Thus, only a small amount of material need collapse into such objects in order to provide enough to ionize all the remaining baryons. But the key questions, of course, are how efficiently O-B stars or black holes can form. This depends on the so-called ‘initial mass function’ (IMF), which determines how much mass goes into high mass stars (or black holes) compared with the amount going concurrently into lower-mass stars. The challenge of calculating the IMF – involving gas-dynamical and radiative transfer calculations over an enormous dynamic range – may not have been met even ten years from now. But even if we assume that it has the same form as now, there is the issue of feedback: do the first stars provide a heat input (via radiation, stellar winds and supernovae) that inhibits later ones from forming? More specifically, we can imagine two options: either (a) all the gas that falls into gravitationally bound clumps of CDM turns into stars; or (b) one percent turns into stars, whose winds and supernovae provide enough momentum and energy to expel the other 99 percent. In the first case, the 3-sigma peaks would suffice; in case (b) more typical (1.5 sigma) peaks would be needed, or else larger and deeper potential wells more able to retain the gas.
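A minimal version of this photon-budget argument can be written down explicitly. In the sketch below the photon yields per baryon are the figures quoted above, while the mean number of recombinations per baryon is an assumed illustrative value.

```python
# Rough photon budget for reionization.
N_gamma_star = 1e4     # ionizing photons per baryon cycled through hot (O/B) stars
N_gamma_bh   = 3e6     # per baryon accreted efficiently onto a black hole
N_rec        = 3       # assumed mean number of recombinations per baryon

for label, n_gamma in (("massive stars", N_gamma_star), ("black-hole accretion", N_gamma_bh)):
    f_collapse = (1.0 + N_rec) / n_gamma
    print(f"{label:22s}: fraction of baryons that must collapse ~ {f_collapse:.1e}")
# Only a few times 1e-4 of the baryons need to pass through massive stars
# (and far fewer through black holes) to supply one ionizing photon per remaining baryon.
```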
Even if the clustering of the CDM under gravity could be exactly modelled, along with the gas dynamics, then as soon as the first stars form we face major uncertainties that will still be a challenge to the petaflop simulations being carried out a decade from now.
### 7.4 Probing the Planck era and ‘beyond’
The second challenge would be to firm up the physics of the ultra-early universe. Perhaps the most ‘modest’ expectation would be a better understanding of the candidate dark matter particles: if the masses and cross-sections of supersymmetric particles were known, it should be possible to predict how many survive, and their contribution to $`\mathrm{\Omega }`$, with the same confidence as that with which we can compute primordial nucleosynthesis. Associated with such progress, we might expect a better understanding of how the baryon-antibaryon asymmetry arose, and the consequence for $`\mathrm{\Omega }_b`$.
A somewhat more ambitious goal would be to pin down the physics of inflation. Knowing parameters like Q, the tilt, and the scalar/tensor ratio will narrow down the range of options. The hope must be to make this physics as well established as the physics that prevails after the first millisecond.
One question that interests me specially is whether there are multiple big bangs, and which features of our actual universe are contingent rather than necessary. Could the others have different values of Q, or different Robertson-Walker curvature? Furthermore, will the ‘final theory’ determine uniquely what we call the fundamental constants of physics – particle masses and coupling constants? Are these ‘constants’ uniquely specified by some equation that we can eventually write down? Or are they in some sense accidental features of a phase transition as our universe cooled – secondary manifestations of some still deeper laws governing a whole ensemble of universes?
This might seem arcane stuff, disjoint from ‘traditional’ cosmology – or even from serious science. But my prejudice is to be open-minded about ensembles of universes and suchlike. This makes a real difference to how I weigh the evidence and place my bets on rival models.
Rocky Kolb’s highly readable history ‘Blind Watchers of the Sky’ reminds us of some fascinating debates that occurred 400 years ago. Kepler was upset to find that planetary orbits were elliptical. Circles were more beautiful – and simpler, with one parameter not two. But Newton later explained all orbits in terms of a universal law with just one parameter, G. Had Kepler still been alive then, he’d surely have been joyfully reconciled to ellipses.
The parallel’s obvious. The Einstein-de Sitter model seems to have fewer free parameters than any other. Models with low $`\mathrm{\Omega }`$, non-zero $`\mathrm{\Lambda }`$, two kinds of dark matter, and the rest may seem ugly. But maybe this is our limited vision. Just as Earth follows an orbit that is no more special than it needed to be to make it habitable, so we may realise that our universe is just one of the anthropically-allowed members of a grander ensemble. So maybe we should go easy with Occam’s razor and be wary of arguments that $`\mathrm{\Omega }=1`$ and $`\mathrm{\Lambda }=0`$ are a priori more natural and less ad hoc.
There’s fortunately no time to sink further into these murky waters, so I’ll briefly conclude.
A recent cosmology book (not written by anyone at this conference) was praised, in the publisher’s blurb, for ‘its thorough coverage of the inflammatory universe’. That was a misprint, of course. But maybe enough sparks will fly here in the next few days to make it seem a not inapt description.
The organisers have chosen a set of fascinating open questions. I suspect they’ll still seem open at the end of this meeting, but we’ll look forward to learning the balance of current opinion, and what bets people are prepared to place on the various options. |
no-problem/9912/astro-ph9912338.html | ar5iv | text | # A High Peculiarity Rate for Type Ia SNe
## INTRODUCTION
Type Ia supernovae (SNe Ia) are not perfectly homogeneous. There are peculiar ones: SN 1991T-like (overluminous), SN 1986G-like (subluminous), and SN 1991bg-like (very subluminous) objects. Figure 1 shows a comparison of the spectra of peculiar SNe Ia with that of a relatively normal SN Ia, SN 1994D.
The peculiarity rate, however, is not well established. An estimate (less than 10%) by Branch et al. (1993) is limited by the small number of peculiar SNe Ia known at that time.
In the past 3 years, a number of peculiar SNe have been discovered in the course of several successful nearby SN surveys, and we try to update the peculiarity rate here.
## THE SN Ia SAMPLE
We have compiled a sample of 90 SNe Ia from 1997 to 1999 (up to SN 1999da). The only criterion we used to select SNe Ia in the sample is that the redshift of the SN must be smaller than 0.1. This excludes high-redshift SNe Ia and ensures that there is no contamination from possible evolution and/or observational biases between nearby and high-redshift SNe Ia.
The SNe Ia in our sample are subclassified as normal or as one of the peculiar SNe Ia: SN 1991T, which had prominent Fe III absorption lines and weak Si II lines prior to and near maximum brightness; SN 1991bg, which had an enhanced Si II 5700Å absorption, and a broad absorption trough extending from about 4150 to 4400Å due to Ti II lines; SN 1986G, which also had Ti absorption but less prominent than in SN 1991bg. Classification is done based on information in the International Astronomical Union Circulars (IAUC) and our SN spectrum and photometry database.
## THE OBSERVED PECULIARITY RATE
We have divided our SN Ia sample into several subsamples and report the observed peculiarity rates as follows.
In the total sample, all 90 SNe are considered. 17 (18.9%) SNe are peculiar, among which 11 (12.2%) SNe are SN 1991T-like and 6 (6.7%) SNe are SN 1991bg/1986G-like.
In the near-maximum sample, only the SNe Ia that were spectroscopically classified no later than a week after maximum are considered. 61 SNe are in the sample, among which 17 (27.9%) are peculiar. 11 (18.0%) SNe are SN 1991T-like and 6 (9.8%) SNe are SN 1991bg/1986G-like.
In the Lick-Beijing (LB) sample, only the SNe that were discovered in the sample galaxies of the Lick Observatory Supernova Search (LOSS) and the Beijing Astronomical Observatory Supernova Survey (BAOSS) are considered. 35 SNe are in the sample, among which 13 (37.1%) are peculiar. 7 (20.0%) SNe are SN 1991T-like and 6 (17.1%) SNe are SN 1991bg/1986G-like.
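For orientation, the subsample rates quoted above can be reproduced, together with simple binomial uncertainties, in a few lines of Python; the uncertainties are added here purely for illustration and are not quoted in the text.

```python
from math import sqrt

# (sample size, peculiar, 91T-like, 91bg/86G-like), as listed above
samples = {
    "total":        (90, 17, 11, 6),
    "near-maximum": (61, 17, 11, 6),
    "Lick-Beijing": (35, 13,  7, 6),
}

for name, (n, pec, t91, bg86) in samples.items():
    p = pec / n
    err = sqrt(p * (1.0 - p) / n)        # simple binomial 1-sigma error
    print(f"{name:13s}: peculiar {p:5.1%} +/- {err:4.1%} "
          f"(91T-like {t91/n:5.1%}, 91bg/86G-like {bg86/n:5.1%})")
```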
## OBSERVATIONAL BIASES
There are various observational biases that make the observed peculiarity rate deviate from its intrinsic value.
1. The maximum-only bias – caused by the fact that SN 1991T-like objects can best be spectroscopically identified prior to or near maximum brightness. It is thus unknown whether a SN discovered well after maximum is normal or SN 1991T-like. Classifying them as normal causes the maximum-only bias that underestimates the peculiarity rate.
2. The Malmquist bias – caused by the difference in luminosity among SNe Ia.
3. The light-curve shape bias – caused by the difference in light-curve shape among SNe Ia.
## MONTE CARLO SIMULATIONS
We have done Monte Carlo simulations to study the effects of the observational biases. Simulations are done for magnitude-limited SN surveys, with the baseline as the only input parameter. Simulations are also done for distance-limited SN surveys, with the baseline and the limiting magnitude of the survey as parameters.
All observational biases are well accounted for in the Monte Carlo simulations. We also studied the role of extinction of SN 1991T-like objects in determining the rate of those objects. There are speculations that SN 1991T-like objects are more likely to occur in dusty, star-forming regions and thus may suffer more extinction than the normal and SN 1991bg/1986G-like objects.
An example of the results of the Monte Carlo simulations is shown in Figure 2.
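The flavour of these simulations can be conveyed with a toy model. The sketch below simulates a magnitude-limited survey containing two SN Ia populations; the absolute magnitudes, intrinsic peculiarity fraction, survey depth and maximum distance are all assumed illustrative values, not the parameters of the actual simulations.

```python
import math, random

M_normal, M_91bg = -19.2, -16.8     # assumed peak absolute magnitudes
f_91bg_intrinsic = 0.20             # assumed intrinsic fraction of subluminous events
m_lim = 18.5                        # assumed survey limiting magnitude
n_events = 200_000

detected_total = detected_91bg = 0
for _ in range(n_events):
    d = 500.0 * random.random() ** (1.0 / 3.0)   # distance in Mpc, uniform in volume
    mu = 5.0 * math.log10(d * 1e6) - 5.0         # distance modulus
    peculiar = random.random() < f_91bg_intrinsic
    m = (M_91bg if peculiar else M_normal) + mu
    if m <= m_lim:
        detected_total += 1
        detected_91bg += peculiar

print("intrinsic 91bg-like fraction:", f_91bg_intrinsic)
print("observed  91bg-like fraction:", detected_91bg / detected_total)
# The observed fraction falls far below the intrinsic one because the subluminous
# events are detectable only within a much smaller volume (the Malmquist bias).
```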
## THE INTRINSIC PECULIARITY RATE
Our simulations indicate that all SNe Ia in the LB sample should have been discovered because the two surveys are distance-limited with small baselines and deep limiting magnitudes. In other words, the peculiarity rate and luminosity function in the LB sample should be intrinsic.
Our simulations also indicate that a high peculiarity rate (more than 30%) and a flat luminosity function for SN Ia (e.g., the rates of SN 1991bg-like and SN 1991T-like objects are comparable) are consistent with the observed peculiarity rates in all three (total, near-maximum, and LB) samples.
These results have important implications for studies of high-redshift SNe Ia and of SN Ia progenitor systems.
1. The high-redshift results: The high peculiarity rate for nearby SNe Ia ($`>`$30%) and the small apparent (but preliminary) peculiarity rate for the several dozen high-redshift SNe Ia studied thus far may indicate a systematic difference between them. To reconcile the absence of SN 1991T-like objects found at high redshifts with the peculiarity rate found at low redshifts, an extra extinction of more than $`1`$ mag for the SN 1991T-like objects is needed, which is not supported by the observations. However, the existing spectral studies of high-redshift SNe Ia are not very detailed.
2. The progenitor systems of SNe Ia: The high peculiarity rate, together with other evidence, favors the existence of multiple progenitor systems for SNe Ia (e.g., single-degenerate systems and double-degenerate systems).
Our supernova research at UC Berkeley is supported by NSF grant AST-9417213 and NASA grant GO-7434. |
no-problem/9912/astro-ph9912154.html | ar5iv | text | # Radiative transfer in 3D
## 1. Introduction
Interpretation of molecular line observations requires understanding of the radiative transfer processes involved. The observed intensity is an integral over the line-of-sight and depends on various physical parameters that vary along this line. The problem is usually a non-invertible one. Finite angular resolution means that conditions may vary over the beam and further complications arise e.g. in the form of incomplete beam filling.
One may approach the problem from a different perspective by constructing models of the density, velocity and temperature structure of the object. Models can be tested by using radiative transfer methods that predict the observed spectra. Eventually one can find a model that fits the observations. However, there is usually a number of possible solutions.
The models can be restricted by other arguments e.g. by requiring physical self-consistency, although e.g. in massive star forming regions the situation is often too complicated for this (e.g. Juvela 1998). For interstellar clouds at larger scales the situation seems to be better. There the dynamical processes are less violent and the laws of magneto-hydrodynamics can be used to simulate typical distribution and motion of gas. Conversely, with radiative transfer methods the basic assumptions of the MHD calculations can be checked against observations.
## 2. Radiative transfer with the Monte Carlo method
For radiative transfer calculations the model cloud is discretized into cells e.g. according to a three-dimensional Cartesian grid. Each cell is assigned density, temperature, velocity and intrinsic linewidth. The effect of the radiation field is simulated and the results are used to update estimates of the level populations in each cell. Final solution is obtained by iterating these two steps.
There are several methods for improving the computational efficiency of Monte Carlo. These include the use of a reference field (Bernes 1979; Choi et al. 1995), importance sampling and the use of quasi random numbers (Juvela 1997). Furthermore, if one uses the same set of random numbers on each iteration one can eliminate random noise from computed level populations. This enables the use of similar acceleration methods as used with lambda iteration e.g. Ng-acceleration (Ng 1974).
The Monte Carlo method has been considered unsuitable for use with high optical depths, $`\tau >>1.0`$. Hartstein & Liseau (1998) showed, however, that with the core saturation method calculations are possible even with optical depths of several thousands. The idea is to consider only photons in the line wings. In the optically thick case photons in the line centre are emitted and absorbed locally and do not contribute to the transfer of energy.
We have used two simulation methods in the radiative transfer calculations. Method B in Juvela (1997) is based on ray-tracing. A photon package is started at the edge of the cloud and as the package moves a distance $`s`$ through a cell with optical depth $`\tau `$ the number of emitted photons escaping the cell,
$$n(\nu )\propto n_uA_{ul}\frac{1-e^{-\tau }}{\tau }\varphi (\nu ),$$
(1)
is added to the package while the rest of the emitted photons are absorbed in the cell (Juvela 1997). In the formula $`n_u`$ is the population of the upper energy level and $`A_{ul}`$ is the Einstein coefficient of spontaneous emission. We can use this information in a way similar to the core saturation method to eliminate photons that are absorbed in the same cell from which they were emitted. The equilibrium equations must also be modified, which can be done in several ways. We store in each cell the fraction of discarded photons and use those numbers to correct the equilibrium equations. This means an increase in the memory consumption by $``$30%. Conceptually the method is identical to the accelerated lambda iteration (Olson, Auer & Buchler 1986).
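The role of the factor $`(1-e^{-\tau })/\tau `$ in Eq. (1) is easy to see numerically. The short Python sketch below evaluates it across a Doppler line profile for an assumed line-centre optical depth of $`10^3`$ (an illustrative value only): photons emitted near line centre are reabsorbed within the same cell and can be discarded, which is exactly the idea exploited by the core saturation scheme.

```python
import math

def escape_fraction(tau):
    """Fraction of photons emitted along a path of optical depth tau that escape it."""
    return 1.0 if tau < 1e-8 else (1.0 - math.exp(-tau)) / tau

tau_centre = 1e3                        # assumed line-centre optical depth of the cell
for x in (0.0, 1.0, 2.0, 3.0, 4.0):     # frequency offset in Doppler widths
    phi = math.exp(-x * x)              # Doppler profile (unnormalised)
    tau = tau_centre * phi
    print(f"x = {x:.0f}  tau = {tau:9.2e}  escape fraction = {escape_fraction(tau):.3f}")
# Within ~2 Doppler widths of line centre virtually all photons are absorbed locally;
# only line-wing photons carry energy between cells.
```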
In Monte Carlo simulation the knowledge of the cell geometry is required only for finding the next cell boundary along the track of the photon package and for calculating the cell volumes. This makes it easy to design the computational geometry according to the problem at hand. So far we have implemented the following: (1) spherically symmetric clouds divided into shells, (2) cylinder symmetric clouds divided by cylinders and orthogonal planes, (3) 3D Cartesian grid with cubic cells, (4) 3D grid with embedded spherical clumps divided by spheres, longitudes and latitudes and (5) 3D Cartesian grid with hierarchical subdivision according to chosen criteria. The last option is promising for the MHD simulations where better resolution is usually needed only in some small sub-volume. The reduced memory requirements are, however, not complemented by equal savings in the run times although importance sampling can be used to concentrate on the regions with higher discretization.
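For the regular Cartesian grids listed above, the only non-trivial geometric operation is the distance from the current position of a photon package to the next cell boundary along its direction of flight. A minimal sketch of that step is given below; the cell size and the example ray are assumed values.

```python
import math

def distance_to_next_boundary(pos, direction, cell_size=1.0):
    """Distance along `direction` from `pos` to the nearest face of the current cubic cell."""
    s_min = float("inf")
    for x, d in zip(pos, direction):
        if d == 0.0:
            continue                                  # ray parallel to this pair of faces
        if d > 0.0:
            next_face = (math.floor(x / cell_size) + 1) * cell_size
        else:
            next_face = math.ceil(x / cell_size - 1.0) * cell_size
        s_min = min(s_min, (next_face - x) / d)
    return s_min

print(distance_to_next_boundary((0.3, 0.7, 0.5), (1.0, 0.0, 0.0)))   # -> 0.7
```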
## 3. MHD models of interstellar cloud
Padoan & Nordlund (1999) have shown that super-alfvénic random flows provide a good model for the dynamics and structure of molecular clouds. The column density maps made from MHD simulations are reminiscent of the filamentary structures observed in interstellar clouds and the results give a good starting point for the radiative transfer modelling of these clouds. With a snapshot of the MHD simulations (i.e. density and velocity fields) the Monte Carlo method can be used to solve the radiative transfer problem and the computed spectra can be compared with observations.
The original MHD calculations were performed on 128<sup>3</sup> cell grid. For radiative transfer calculations the data are resampled on a smaller grid of typically 90<sup>3</sup> or 64<sup>3</sup> cells. This can be done without significant loss of accuracy since individual cells are usually optically thin. However, proper sampling of the velocity field sets its own requirements on the discretization. Models represent clouds with linear size $``$10 pc and average gas densities a few times 100 cm<sup>-3</sup>. In the densest knots the density exceeds 10<sup>4</sup> cm<sup>-3</sup>.
The results of the radiative transfer calculations are presented as synthetic molecular line maps that can be compared directly with observations (Padoan et al. 1998). CO spectra show typically several peaks corresponding to different high density sheets of gas along the line of sight. The average spectrum is almost Gaussian.
Padoan et al. (1999a) have compared the computed spectra in detail with CO observations made of the Perseus molecular cloud complex. The statistical properties of the computed spectra are almost identical to the properties of the spectra observed e.g. in L 1448. This demonstrates that the main features of such clouds are correctly described by the MHD models.
The model spectra can be analyzed using the LTE approximation in order to estimate the validity of the LTE assumption in the interpretation of observations. Even in isothermal models one can find a wide range of excitation conditions and LTE analysis tends to underestimate the true column densities. In the case of our models the discrepancy can be as high as a factor of $`\sim `$5, at the lowest values of the CO column density (Padoan et al. 1999b).
## 4. Thermal equilibrium of interstellar clouds
The previous model calculations can be extended to the study of thermal balance in interstellar clouds. The MHD calculations provide the heating rates due to ambipolar diffusion and total heating rates are obtained by adding other known mechanisms (cosmic ray heating etc.). These are balanced by cooling rates which in the case of molecular clouds are mainly due to line emission.
The cooling rates at any point in the cloud will depend on the local escape probability of the emitted photons but also on the photon flux from the surrounding regions. Since velocity and density distributions are inhomogeneous the calculation of the cooling rates requires proper solution of the full three-dimensional radiative transfer problem.
In the case of Monte Carlo method cooling rates are a by-product of the radiative transfer simulation and are obtained by simply counting the net flow of photons from each cell. For practical reasons each molecule is simulated separately and the kinetic temperatures, $`T_{\mathrm{kin}}`$, can be updated only after all the main cooling species have been simulated. Since the line emission depends in turn on T<sub>kin</sub> the final solution is obtained with iteration. Since <sup>12</sup>CO is usually the dominant coolant it is not necessary to update the cooling rate estimates from other species on every iteration.
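Schematically, the iteration described above reduces to a root-finding problem for each cell: adjust the kinetic temperature until the net line cooling, obtained by counting photons, balances the heating rate. In the sketch below, simulate_cooling is a hypothetical placeholder for a full Monte Carlo transfer step, and the bracketing temperatures and the toy cooling law in the usage comment are assumed values.

```python
def solve_cell_temperature(heating_rate, simulate_cooling, t_lo=5.0, t_hi=100.0, tol=0.1):
    """Bisection on the kinetic temperature until line cooling balances heating."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        cooling = simulate_cooling(t_mid)   # net photon flow out of the cell at T = t_mid
        if cooling > heating_rate:          # the cell radiates more than it is heated,
            t_hi = t_mid                    # so the equilibrium temperature is lower
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)

# Example with a toy cooling law:
# solve_cell_temperature(1e-25, lambda T: 3e-27 * T**1.5)  ->  about 10 K
```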
The ambipolar diffusion heating can exceed heating by cosmic rays (Padoan, Zweibel & Nordlund 1999c). Computed cloud temperatures are therefore sensitive to initial assumptions and can be used to place limits on the possible range of the input parameters, especially the strength of the magnetic fields.
## References
Bernes C. 1979, A&A, 73, 67
Choi M., Evans II N.J., Gregersen E.M., Wang Y. 1995, ApJ, 448, 742
Hartstein D., Liseau R. 1998, A&A, 332, 702
Juvela M. 1997, A&A, 322, 943
Juvela M. 1998, A&A, 329, 659
Ng K.C. 1974, J.Chem.Phys. 61, 2680
Olson G.L., Auer L.H., Buchler J.R. 1986, J.Quant.Spectrosc.Radiat.Transfer 35, 431
Padoan P., Bally J., Billawala Y., Juvela M., Nordlund Å. 1999a, ApJ, in press
Padoan P., Juvela M., Bally J., Nordlund Å. 1998, ApJ, 504, 300
Padoan P., Juvela M., Bally J., Nordlund Å. 1999b, ApJ, in press
Padoan P., Nordlund Å. 1999, ApJ, in press
Padoan P., Zweibel E., Nordlund, Å. 1999c, ApJ, submitted |
no-problem/9912/astro-ph9912133.html | ar5iv | text | # Untitled Document
Spectral analysis of four multi mode pulsating sdB stars
Ulrich Heber<sup>1</sup>, I. Neill Reid<sup>2</sup>, Klaus Werner<sup>3</sup>
$`^{\text{ }1}`$ Dr. Remeis-Sternwarte Bamberg, Astronomisches Institut
der Universität Erlangen-Nürnberg, D-96049 Bamberg, Germany
$`^{\text{ }2}`$ Palomar Observatory,
Pasadena, USA
$`^{\text{ }3}`$ Institut für Astronomie und Astrophysik, Waldhäuser Straße 64,
D-72076 Tübingen, Germany
Received October 27, 1999.
Abstract. Four members of the new class of pulsating sdB stars are analysed from Keck HIRES spectra using NLTE and LTE model atmospheres. Atmospheric parameters (T<sub>eff</sub>, log g, log(He/H)), metal abundances and rotational velocities are determined. Balmer line fitting is found to be consistent with the helium ionization equilibrium for PG 1605+072 but not so for PG 1219$`+`$534 indicating that systematic errors in the model atmosphere analysis of the latter have been underestimated previously. All stars are found to be helium deficient probably due to diffusion. The metals are also depleted with the notable exception of iron which is solar to within error limits in all four stars, confirming predictions from diffusion calculations of Charpinet et al. (1997). While three of them are slow rotators (v sin i $`<`$ 10 km/s), PG 1605$`+`$072 displays considerable rotation (v sin i = 39 km/s, P$`<`$8.7h) and is predicted to evolve into an unusually fast rotating white dwarf. This nicely confirms a prediction by Kawaler (1999) who deduced a rotation velocity of 130 km/s from the power spectrum of the pulsations, which implies a low inclination angle of the rotation axis.
Key words: Stars: atmospheres – Stars: abundances – Stars: subdwarfs – Stars: rotation – Stars: individual: PG1605$`+`$072, Feige 48, KPD2109$`+`$4401, PG1219$`+`$534
1. INTRODUCTION
It is now well established that the hot subluminous B stars can be identified with models of the extreme Horizontal Branch (EHB) stars (Heber, 1986, Saffer et al., 1994).
Recently, several sdB stars have been found to be pulsating (termed EC14026 stars after the prototype, see O’Donoghue et al. 1999 for a review), defining a new instability strip in the HR-diagram. The study of these pulsators offers the possibility of applying the tools of asteroseismology to investigate the structure of sdB stars. The existence of pulsating sdB stars was predicted by Charpinet et al. (1996), who uncovered an efficient driving mechanism due to an opacity bump associated with iron ionization in EHB models. However, in order to drive the pulsations, iron needed to be enhanced in the appropriate subphotospheric layers, possibly due to diffusion. Subsequently, Charpinet et al. (1997) confirmed this assumption by detailed diffusion calculations. Even more encouraging was the agreement of the observed and predicted instability strip.
Thirteen pulsating sdB stars are well-studied photometrically (O’Donoghue et al. 1999). A precise knowledge of effective temperature, gravity, element abundances and rotation is a prerequisite for the asteroseismological investigation.
We selected four EC14026 stars for a detailed quantitative spectral analysis: PG 1605$`+`$072 was chosen because it has the lowest gravity and, therefore, has probably already evolved beyond the extreme horizontal branch phase. It also displays the richest frequency spectrum amongst the EC 14026 stars ($`>`$50 periods have been identified, Kilkenny et al. 1999). Recently, Kawaler (1999) predicted from his modelling of the pulsations that PG 1605$`+`$072 should be rotating. PG 1219$`+`$534 was chosen because it has the shortest pulsation periods and has a helium abundance larger than most other sdB stars (O’Donoghue et al., 1999). For Feige 48 and KPD 2109$`+`$4401 only 4 and 5 frequencies, respectively, have been found so far. Feige 48 is also the coolest of all EC 14026 stars known.
2. OBSERVATIONS
High resolution optical spectra of the four pulsating sdB stars were obtained with the HIRES echelle spectrograph (Vogt et al. 1994) on the Keck I telescope on July 20, 1998 using the blue cross disperser to cover the full wavelength region between 3700Å and 5200Å at a resolution of 0.09Å.
The spectra are integrated over one pulsation cycle or more since the exposure times (600–900s) were long compared to the pulsational periods.
The standard data reduction as described by Zuckerman & Reid (1998) resulted in spectral orders that have a somewhat wavy continuum. In order to remove the waviness we used the spectrum of H1504$`+`$65 (a very hot pre-white dwarf devoid of hydrogen and helium) which was observed in the same night. Its spectrum has only a few weak lines of highly ionized metals in the blue (3600–4480Å) where the strong Balmer lines are found in the sdB stars. Therefore we normalized individual spectral orders 1 to 20 (3600–4480Å) of the sdB stars by dividing by the smoothed spectrum of H1504$`+`$65. The remaining orders were normalized by fitting the continuum with spline functions (interpolated for orders 26 and 27 which contain H$`\beta `$). Judged from the match of line profiles in the overlapping parts of neighboring orders this procedure worked extremely well. Atmospheric parameters determined from individual Balmer lines are found to be consistent with each other except for H$`\beta `$. Therefore, we excluded H$`\beta `$ from the fit procedure. Moreover, the resulting T<sub>eff</sub> and log g are also in excellent agreement with those from the fit of a low resolution spectrum of PG 1605$`+`$072 obtained at the ESO NTT. Details on the analysis of PG 1605$`+`$072 can be found in Heber et al. (1999a).
3. ATMOSPHERIC PARAMETERS
The simultaneous fitting of Balmer and He line profiles by a grid of synthetic spectra (see Saffer et al. 1994) has become the standard technique to determine the atmospheric parameters of sdB stars. The Balmer lines (H$`\gamma `$ to H 12), He I (4471Å, 4026Å, 4922Å, 4713Å, 5016Å, 5048Å) and He II 4686Å lines are fitted to derive all three parameters simultaneously.
The analysis is based on grids of metal line blanketed LTE model atmospheres for solar metallicity and Kurucz’ ATLAS6 Opacity Distribution Functions (see Heber et al. 1999b). Synthetic spectra are calculated with Lemke’s LINFOR program (see Moehler et al. 1998). In addition, a grid of H-He line blanketed, metal free NLTE model atmospheres (Napiwotzki 1997), calculated with the ALI code of Werner & Dreizler (1999), is used.
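Conceptually, the fit procedure of Saffer et al. (1994) amounts to a $`\chi ^2`$ minimisation over the model grid, carried out simultaneously for all Balmer and He line profiles. The Python fragment below is only a schematic outline; the observed_lines objects (with wavelengths, flux and error attributes) and the interpolation routine synth_profile are hypothetical names, not part of any of the codes cited above.

```python
import numpy as np

def chi2(params, observed_lines, synth_profile):
    """Summed chi^2 of all observed line profiles against an interpolated synthetic model."""
    teff, logg, loghe = params
    total = 0.0
    for line in observed_lines:                 # H-gamma ... H12, He I and He II lines
        model = synth_profile(line.wavelengths, teff, logg, loghe)
        total += np.sum(((line.flux - model) / line.error) ** 2)
    return total

# The adopted parameters minimise chi2 over the (Teff, log g, He/H) grid, e.g.
# best = min(grid_points, key=lambda p: chi2(p, observed_lines, synth_profile))
```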
PG1605$`+`$072
The results (T<sub>eff</sub>=31 900K, log g=5.29, log(He/H)=-2.54) are in agreement with those from low resolution spectra analysed with similar models (Koen et al. 1998) as well as from our own low resolution spectrum for PG 1605$`+`$072.
Fig. 1. Balmer and He line profile fits for PG 1605+072 of the HIRES spectrum from NLTE model atmospheres.
Four species are represented by two stages of ionization (He I and He II, C II and C III, N II and N III, Si III and Si IV). Since these line ratios are very temperature sensitive at the temperatures in question, we can alternatively derive T<sub>eff</sub> and abundances by matching these ionization equilibria. Gravity is derived subsequently from the Balmer lines by keeping T<sub>eff</sub> and log(He/H) fixed. These two steps are iterated until consistency is reached. C II is represented by the 4267Å line only, which is known to yield carbon abundances that are notoriously too low. Indeed the carbon ionization equilibrium cannot be matched at any reasonable T<sub>eff</sub>. The ionization equilibria of He, N and Si require T<sub>eff</sub> to be higher than that from the Saffer procedure, i.e. 33 200K (He), 33 900K (N) and 32 800K (Si).
Since this difference could be caused by NLTE effects, we repeated the procedure for T<sub>eff</sub> and log(He/H) using NLTE models. Alternatively, applying Saffer’s procedure with the NLTE model grid (see Fig. 1) yields T<sub>eff</sub> almost identical to that obtained with the LTE grid. Evaluating the He ionization equilibrium in NLTE, indeed, results in T<sub>eff</sub> being consistent with that from Saffer’s procedure. We therefore conclude that the higher T<sub>eff</sub> derived above from the ionization equilibrium in LTE is due to NLTE effects.
However, a systematic difference in log g persists, the LTE values being higher by 0.06 – 0.08 dex than the NLTE results. Since its origin is obscure, we finally adopted the averaged atmospheric parameters: T<sub>eff</sub>=32 300$`\pm `$300K, log g=5.25$`\pm `$0.05, log(He/H)=-2.53$`\pm 0.1`$. Helium is deficient by a factor of 30 as is typical for sdB stars.
Feige 48
Since no He II line can be detected, the helium ionization equilibrium can not be evaluated. The procedure of Saffer et al. (1994) results in T<sub>eff</sub>=29 400K, log g=5.51, log(He/H)=-2.90. Feige 48 has the lowest helium abundance among our programme stars.
KPD 2109$`+`$4401
The helium ionization equilibrium and the Saffer et al. procedure give (averaged) parameters T<sub>eff</sub>=31 800K, log g=5.79, log(He/H)$`=`$2.22 for KPD 2109$`+`$4401.
PG 1219$`+`$534
Unlike for PG 1605$`+`$072, the helium ionization equilibrium and the Saffer et al. procedure give discrepant results: T<sub>eff</sub>=33 200K, log g=5.93, log(He/H)=-1.60 (Saffer et al. procedure, Fig. 2) and T<sub>eff</sub>=35 200K, log g=6.03, log(He/H)=-1.41 (He ionization equilibrium, Fig. 3). At the lower T<sub>eff</sub> the Balmer lines are well matched throughout the entire profile, whereas for He II 4686Å there is a significant mismatch (see Fig. 2). At the higher T<sub>eff</sub> He II 4686Å is well reproduced, but the Balmer line cores are not reproduced at all (see Fig. 3). Despite its high gravity PG 1219$`+`$534 has an unusually high helium abundance, i.e. it is deficient by a factor of 2 to 5 only. The line cores of He I 4026Å and 4471Å cannot be reproduced by either model. We conclude that our models do not describe the outermost layers of the atmosphere correctly where the cores of the Balmer and He I lines are formed. We point out that PG 1219+534 has the highest helium abundance and the shortest pulsation periods, which might affect the outermost layers.
Fig. 2. Balmer and He line profile fits for PG 1219$`+`$534 of the HIRES spectrum. Note the mismatch of the He II 4686Å line profile and the cores of He I 4026Å and 4471Å.
Fig. 3. He line profile fits for PG 1219$`+`$534 of the HIRES spectrum to determine T<sub>eff</sub> and log(He/H) simultaneously, log g is adjusted to match the Balmer line wings. Note the mismatch of the cores of the Balmer lines and of He I 4026Å and 4471Å.
4. ABUNDANCES
Weak metal lines are present in the spectra of all programme stars. However, the number of detectable lines differs considerably. The largest number of metal lines is present in Feige 48 (C, N, O, Ne, Mg, Si, Al, S and Fe) and PG 1605$`+`$072 (which lacks Al and S). In PG 1219$`+`$534 only N, S and Fe are detectable.
Fig. 4. Abundances of PG 1605$`+`$072 relative to the sun. Upper limits are denoted by arrows.
The metal lines are sufficiently isolated to derive abundances from their equivalent widths except for the crowded region from 4635Å to 4660Å in PG 1605$`+`$072 which we analyse by detailed spectrum synthesis. Results are plotted in Figure 4. Upper limits are shown when no line of a species was detectable. Although several O lines are available in the spectra of PG 1605$`+`$072 and Feige 48, it was impossible to determine the microturbulent velocity in the usual way, i.e. by minimizing the slope in a plot of the O abundances versus equivalent widths, due to the lack of sufficiently strong lines. We adopted 5$`\pm `$5km/s which translates into small systematic abundance uncertainties of $`\pm `$0.05dex for most ions. The analysis is done in LTE. A temperature uncertainty of $`\mathrm{\Delta }`$T<sub>eff</sub>=1000 K translates into abundance uncertainties of less than 0.1 dex. Hence systematic errors are smaller for most ions than the statistical errors.
Like helium the metals are deficient with the notable exception of iron, which is solar to within the error limits. The high gravity stars (KPD 2109$`+`$4401 and PG 1219$`+`$534) have considerably lower O and Si abundances than the stars of somewhat lower gravity which point to the (selective) action of diffusion. It is, however, puzzling that iron is solar irrespective of the stellar gravity. UV spectroscopy is required to determine more precise iron abundances. We point out that a solar surface abundance is in perfect agreement with the diffusion calculations of Charpinet et al. (1997).
5. ROTATION VELOCITIES
The spectral lines of PG 1605$`+`$072 are considerably broadened, which we attribute to stellar rotation; by fitting the strongest metal lines we derive v sin i = 39 km/s. In Figure 5 we compare a section of the spectrum of PG 1605$`+`$072 to that of Feige 48, which (like PG 1219$`+`$534 and KPD 2109$`+`$4401) is very sharp-lined (v sin i $`<`$ 8–10 km/s).
Assuming a mass of 0.5$`\mathrm{M}_{\odot }`$, the radius R=0.28$`\mathrm{R}_{\odot }`$ for PG 1605$`+`$072 follows from the gravity. Since sin$`i`$ cannot be constrained, the corresponding rotation period of PG 1605$`+`$072 must be smaller than 8.7 h. PG 1605+072 displays the most complex power spectrum, with more than 50 frequencies identifiable (Kilkenny et al. 1999), 39 being bona fide normal pulsation frequencies.
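As a quick cross-check of this arithmetic, the radius follows from $`R=\sqrt{GM/g}`$ and the upper limit on the rotation period from $`P<2\pi R/(v\mathrm{sin}i)`$. The short Python sketch below is only an illustration: it assumes that the surface gravity log g = 5.25 adopted above refers to PG 1605$`+`$072, and uses the mass of 0.5$`\mathrm{M}_{\odot }`$ and v sin i = 39 km/s quoted in the text.

```python
import math

# Assumed inputs (cgs units). log g = 5.25 is the value adopted above;
# M = 0.5 M_sun and v sin i = 39 km/s are quoted in the text.
G     = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33        # solar mass [g]
R_SUN = 6.957e10        # solar radius [cm]

M     = 0.5 * M_SUN
g     = 10.0**5.25      # surface gravity [cm s^-2]
vsini = 39.0e5          # projected rotation velocity [cm s^-1]

R = math.sqrt(G * M / g)            # from g = G M / R^2
P_max = 2.0 * math.pi * R / vsini   # upper limit on the period, since sin(i) <= 1

print(f"R     = {R / R_SUN:.2f} R_sun")   # ~0.28 R_sun
print(f"P_max = {P_max / 3600.0:.1f} h")  # close to the 8.7 h quoted above
```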
Usually rotation becomes manifest in the power spectrum by the characteristic splitting into equidistantly spaced multiplet components, as is observed e.g. for the pre-white dwarf PG 1159$`-`$035 (rotation period: 1.4$`d`$, Winget et al. 1991). Such multiplets, however, have not been identified for PG 1605+072. Fast rotation introduces higher-order terms that result in unequally spaced multiplet components. Recently, Kawaler (1999) was able to identify the five main peaks by considering mode trapping and rotational splitting. He predicted that PG 1605$`+`$072 should be rapidly rotating (130 km/s). The measured v sin i = 39 km/s hence is a nice confirmation of Kawaler’s prediction. Taken at face value, a low inclination angle of 17 degrees results.
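The quoted inclination follows directly from the measured v sin i and Kawaler's predicted rotation velocity; the following few lines simply restate that arithmetic.

```python
import math

v_sini = 39.0    # measured projected rotation velocity [km/s]
v_rot  = 130.0   # rotation velocity predicted by Kawaler (1999) [km/s]

i_deg = math.degrees(math.asin(v_sini / v_rot))
print(f"inclination i = {i_deg:.0f} degrees")   # ~17 degrees, as stated in the text
```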
Rotation is interesting also from the point of view of stellar evolution. PG 1605$`+`$072 is probably already in a post-EHB phase of evolution (Kilkenny et al. 1999) and will evolve directly into a white dwarf, i.e. will shrink from its present radius of 0.28$`\mathrm{R}_{\odot }`$ to about 0.01$`\mathrm{R}_{\odot }`$. Hence PG 1605$`+`$072 will end its life as an unusually fast-rotating white dwarf if no loss of angular momentum occurs. Isolated white dwarfs, however, are known to be mostly very slow rotators (e.g. Heber et al. 1997, Koester et al. 1998).
Fig. 5. Fit of a section of the metal line spectrum of PG 1605$`+`$072 (bottom, v sin i = 39 km/s) compared to that of Feige 48 (top, no rotation).
ACKNOWLEDGEMENT.
U.H. gratefully acknowledges financial support by NATO ARW funds.
REFERENCES
Charpinet S., Fontaine G., Brassard P., Dorman B. 1996, ApJ 471, L103
Charpinet S., Fontaine G., Brassard P. et al. 1997, ApJ 483, L23
Heber U. 1986, A&A 155, 33
Heber U., Napiwotzki R., Reid I.N. 1997, A&A 323, 819
Heber U., Reid I.N., Werner K. 1999a, A&A 348,L25
Heber U., Edelmann H., Lemke M., Napiwotzki R., Engels D., 1999b, PASPC 169, 551
Kawaler S. 1999, PASPC 169, 158
Kilkenny D., et al. 1999, MNRAS 303, 525
Koen C., O’Donoghue D., Kilkenny D., Stobie R.S. 1998, MNRAS 296, 317
Koester D., Dreizler S., Weidemann V., Allard N.F. 1998, A&A 338, 612
Moehler S., Heber U., Lemke M., Napiwotzki R. 1998, A&A 339, 537
Napiwotzki R. 1997, A&A 322, 256
O’Donoghue D., Koen C., Kilkenny D., Stobie R.S., Lynas-Gray A.E. 1999, PASPC 169, 149
Saffer R.A., Bergeron P., Koester D., Liebert J. 1994, ApJ 432, 351
Vogt S.S., et al. 1994, SPIE 2198, 362
Werner K. 1991, A&A 251, 147
Werner K, Dreizler S. 1999, Journal of Computational and Applied Mathematics, Elsevier, 109, 65
Winget D.E., Nather R.E., Clemens J.C., et al. 1991, ApJ 378, 326
Zuckerman B., Reid I.N. 1998, ApJ 505, L143 |
no-problem/9912/astro-ph9912156.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Finding the origin of the heavy elements has been an attractive and longstanding issue in astrophysics. Among various nucleosynthesis processes, the rapid neutron capture process (r-process) is believed to be essential to create many of the heavy elements (Burbidge et al. 1957). Since r-process nucleosynthesis requires a sufficient neutron source to create heavy elements up to $`A\sim 200`$, how and where in the Universe the conditions for the r-process are realized has been a central issue. The recent observations of r-process elements in metal-poor stars (Sneden et al. 1996) further strengthen the motivation to identify the r-process site(s) in the Universe.
Among various proposed sites, type II supernovae have received the most attention as the most plausible one, and many studies have been made to find out where exactly the r-process occurs during the supernova explosion (Hillebrandt 1978; Cowan, Thielemann & Truran 1991). The neutrino-driven wind from a proto-neutron star, just born in the supernova explosion, has been suggested as a promising site (Meyer et al. 1992; Woosley et al. 1994). The surface material of the proto-neutron star is heated by the supernova neutrinos and a portion of the material is ejected as a hot bubble having a high entropy per baryon ($`\sim 400k_B`$). Due to this high entropy, the neutron-to-seed ratio after the charged-particle reactions freeze out ($`\alpha `$-rich freeze-out) becomes high enough to lead to r-process nucleosynthesis.
The dynamics of the neutrino-driven wind and the resulting nucleosynthesis have been studied since then (Qian & Woosley 1996 (hereafter QW); Hoffman, Woosley & Qian 1997; Otsuki 1999; Otsuki et al. 1999). Qian and Woosley have investigated the dynamics of the neutrino-driven wind both by analytic treatments and by numerical simulations. They have shown that the entropy per baryon in the wind turns out to be too low by a factor of 2–3 for the r-process. Witti et al. (1994) and Takahashi et al. (1994) have performed numerical simulations of the neutrino-driven wind and the r-process adopting an initial configuration provided by Wilson. The entropy per baryon in their simulation again falls short of what the r-process requires. They had to introduce artificially an extra factor to scale down the densities along the trajectory to get high enough entropies for a successful r-process. These studies, which are mostly done non-relativistically, have cast doubt on the high-entropy-bubble scenario for the r-process.
Cardall and Fuller (1997) have further studied the general relativistic dynamics of the neutrino-driven wind with an analytic treatment. They have shown that the general relativistic effects increase the entropy and shorten the expansion time scale, and are therefore favorable for the r-process. Furthermore, Otsuki et al. (1999) have shown with extensive parametric studies that the r-process is actually possible in a neutrino-driven wind with a short expansion time scale even if the entropy is not as high as $`400k_B`$. They have demonstrated by network calculations that the r-process up to $`A\sim 200`$ can take place for massive and compact neutron stars having high neutrino luminosities. These general relativistic studies revive interest in the neutrino-driven wind as an r-process site and suggest that detailed studies with general relativity are necessary for realistic models of proto-neutron stars and supernova neutrinos.
General relativistic studies have so far been done only analytically, for stationary neutrino-driven winds with given mass and radius of the proto-neutron star. To remove these restrictions, we have performed general relativistic hydrodynamical simulations of the neutrino-driven wind. We have adopted the general relativistic, implicit, Lagrangian hydro code, which was developed for supernova simulations (Yamada 1997), and have implemented the necessary neutrino processes. We aim to perform simulations of the time evolution of the neutrino-driven wind adopting the results of proto-neutron star cooling simulations (Suzuki 1994; Sumiyoshi, Suzuki & Toki 1995). The final goal is to find out whether the r-process takes place in a realistic situation under general relativistic hydrodynamics and to answer whether the high-entropy-bubble scenario, the rapid-expansion scenario, or some other route is realized to create the r-process elements.
In the current paper, we report the first step of this line of research. We adopt the realistic profiles of the proto-neutron stars, but we take rather simple assumptions about the neutrino luminosities and spectra in order to compare with the previous analytic studies. We remark that the relativistic EOS table, which has been completed for supernova simulations (Shen et al. 1998; Shen et al. 1998), enabled us to construct the initial configurations of the wind consistently with the proto-neutron star cooling simulations using the same physical EOS table. We examine systematically the dependence on the mass of proto-neutron stars and the neutrino luminosities.
The paper is arranged as follows. In section 2, after the brief description of the neutrino-driven wind, we give explanations of our numerical treatment. In section 3, we show the numerical results of simulations for the realistic profiles of the proto-neutron stars ($`\mathrm{\S }`$ 3.1) and the simplified models with given mass and radius ($`\mathrm{\S }`$ 3.2). In section 4, we discuss the r-process condition derived from the numerical simulations ($`\mathrm{\S }`$ 4.1) and describe the r-process network calculation using the trajectory of our simulation ($`\mathrm{\S }`$ 4.2). We demonstrate the case of successful r-process nucleosynthesis from our results. The summary will be given in section 5.
## 2 Hydrodynamical simulation
The neutrino-driven wind is the mass ejection from the surface of the proto-neutron star due to neutrino heating (Duncan, Shapiro & Wasserman 1986). Since the proto-neutron star, which is formed in the collapse-driven supernova explosion, emits a vast number of neutrinos, a small portion of the surface material can be heated up and ejected, gaining enough energy to escape the gravitational potential of the compact object. The ejected matter expands and then cools down to the temperature regime relevant for nucleosynthesis.
The main interest here is the thermodynamical history of ejecta determined by hydrodynamics during the time of the nucleosynthesis. If it is favorable to make the neutron-to-seed ratio high enough during the charged particle reactions, the r-process may take place afterwards. The key quantities are the electron fraction, $`Y_e`$, the entropy per baryon, $`S`$, and the expansion time scale, $`\tau _{dyn}`$. A low electron fraction, a high entropy per baryon and a short expansion time scale are known to be favorable for the r-process (Meyer & Brown 1997). We investigate those conditions by performing the hydrodynamical simulations for the surface layers of proto-neutron stars.
### 2.1 Numerical code
We employ the implicit numerical code for general relativistic and spherically symmetric hydrodynamics (Yamada 1997). General relativity is essential here since it is known to influence the properties of the neutrino-driven wind and may lead to better conditions for the r-process (Cardall & Fuller, 1997; Otsuki et al. 1999). The implicit time differencing is advantageous for following the hydrodynamics over times long compared with the sound crossing time of the numerical mesh. The hydro code uses a Lagrangian mesh, which is suitable to follow the thermal history for the nucleosynthesis. We employ baryon mass meshes with equal spacing. The grid size ranges typically from $`10^{-8}M_{\odot }`$ to $`10^{-6}M_{\odot }`$, depending mainly on the luminosity, so as to have enough resolution.
The heating and cooling processes due to neutrinos are added on top of the hydro code. The optically thin-limit is assumed for neutrinos since the surface region of interest has low densities and is transparent to neutrinos. Although the Boltzmann solver of neutrino transfer is already implemented in the numerical code (Yamada, Janka & Suzuki 1999), we do not solve the Boltzmann equation, but instead we set the neutrino distribution function at each Lagrangian mesh point as described below.
We treat the following neutrino reactions as sources of heating and cooling,
$`\nu _e+n\rightleftharpoons e^{-}+p,`$ (1)
$`\overline{\nu }_e+p\rightleftharpoons e^{+}+n,`$ (2)
$`\nu _i+e^{-}\rightarrow \nu _i+e^{-},`$ (3)
$`\nu _i+e^{+}\rightarrow \nu _i+e^{+},`$ (4)
$`\overline{\nu }_i+e^{-}\rightarrow \overline{\nu }_i+e^{-},`$ (5)
$`\overline{\nu }_i+e^{+}\rightarrow \overline{\nu }_i+e^{+},`$ (6)
$`\nu _i+\overline{\nu }_i\rightleftharpoons e^{-}+e^{+},`$ (7)
where the index i of $`\nu _i`$ stands for neutrino flavors ($`i=e,\mu ,\tau `$). The evaluation of the reaction rates (1) and (2) follows the standard procedure as used in supernova calculations (Yamada, Janka & Suzuki 1999). The heating and cooling rates are calculated by the energy integrals using the distribution functions of neutrinos and electrons in the Boltzmann solver. The Pauli blocking effects for electrons and positrons, the neutron-proton mass difference are thus properly taken into account. This is in contrast to the approximate treatment in QW where the radiation dominated situation is assumed. The other heating and cooling rates due to the pair process and the electron scattering are evaluated by Eqs. (12), (13) and (14) of QW. The evolution of the electron fraction, $`Y_e`$, is solved together with hydrodynamics using the collision terms for the reactions (1) and (2) of the Boltzmann solver (Yamada 1997).
As for the equation of state (EOS) of dense matter, we adopt the table of the relativistic EOS, which was recently derived for supernova simulations (Shen et al. 1998, Shen et al. 1998) in the relativistic nuclear many-body framework. It reproduces the nuclear matter saturation and the properties of stable and unstable nuclei in the nuclear chart (Sugahara & Toki 1994; Sumiyoshi, Kuwabara & Toki 1995). The table covers a wide range of density ($`10^{5.1}`$–$`10^{15.4}`$ g/cm<sup>3</sup>), electron fraction (0.0–0.56), and temperature (0–100 MeV), which is required for supernova simulations. The electron/positron and photon contributions as non-interacting particles are added to the nuclear contribution of the EOS. The arbitrary degeneracy of electrons and the disappearance of positrons at low temperatures are properly treated.
We extend the EOS table toward lower densities below $`10^5`$ g/cm<sup>3</sup> in the current study. Since this low density regime appears in the simulations at temperature below 0.5 MeV, we assume the mixture of neutrons, protons and $`\alpha `$-particles in nuclear statistical equilibrium. This is a good approximation at the time of $`\alpha `$-rich freeze-out because only a slight amount of nuclei is synthesized. To determine the composition precisely in this temperature regime, one has to solve the nuclear reaction network with the hydrodynamics at the same time. This is a formidable task beyond the current scope of study. We note, however, that the evolution of the electron fraction is properly coupled to hydrodynamics.
As for the neutrino spectra, we make rather simple assumptions in the current study. The neutrino distribution, $`f_{\nu _i}(R_{\nu _i})`$, at the neutrinosphere is assumed to be monochromatic in energy and isotropic in angle; denoting the neutrino energy by $`E`$ and the average energy by $`E_{\nu _i}`$, we write
$$f_{\nu _i}(R_{\nu _i})=f_{\nu _{i0}}\delta (E-E_{\nu _i}).$$
(8)
The coefficient, $`f_{\nu _{i0}}`$, is determined by the neutrino luminosity, $`L_{\nu _i}`$, and the radius of the neutrinosphere, $`R_{\nu _i}`$, as
$$f_{\nu _{i0}}=\frac{2\pi L_{\nu _i}}{E_{\nu _i}^3R_{\nu _i}^2},$$
(9)
where the average energy, $`E_{\nu _i}`$, $`L_{\nu _i}`$ and $`R_{\nu _i}`$ are given as model parameters. The number density of neutrinos, $`n_{\nu _i}`$, at radius, $`r`$, is given by
$`n_{\nu _i}`$ $`=`$ $`{\displaystyle \frac{1-x}{4\pi ^2}}E_{\nu _i}^2f_{\nu _{i0}}`$ (10)
$`=`$ $`{\displaystyle \frac{1-x}{2\pi }}{\displaystyle \frac{L_{\nu _i}}{E_{\nu _i}R_{\nu _i}^2}},`$ (11)
where $`x`$ is defined as
$$x=\left(1-\frac{R_{\nu _i}^2}{r^2}\right)^{\frac{1}{2}},$$
(12)
to take into account the solid angle subtended by the neutrinosphere (QW). Using this expression of the number density, we set the neutrino distribution at radius, $`r`$, as
$$f_{\nu _i}(r)=f_{\nu _{ir}}\delta (E-E_{\nu _i}),$$
(13)
where the coefficient, $`f_{\nu _{ir}}`$, is given by
$$f_{\nu _{ir}}=\frac{2\pi (1-x)L_{\nu _i}}{E_{\nu _i}^3R_{\nu _i}^2}.$$
(14)
The positions of the neutrinospheres are assumed to be common to all neutrino species in the current study. The general relativistic effects such as the red-shift and the ray bending are not taken into account here. The above approximation for neutrino spectra is just intended to compare with the previous analytic studies. More realistic neutrino spectra will be incorporated in the forthcoming paper.
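To make the normalization of Eqs. (8)–(14) concrete, the sketch below evaluates the neutrino number density of Eq. (11) outside the neutrinosphere. Since the equations above are written in natural units, the speed of light is reinserted explicitly here. The luminosity, average energy and neutrinosphere radius are taken from the reference model of Sect. 3.1; the sample radius r = 2 R<sub>ν</sub> is an arbitrary illustrative choice.

```python
import math

c   = 2.998e10      # speed of light [cm/s]; Eqs. (8)-(14) above set c = 1
MeV = 1.602e-6      # erg per MeV

L_nu = 1.0e51       # neutrino luminosity per species [erg/s] (Sect. 3.1)
E_nu = 10.0 * MeV   # average nu_e energy [erg], Eq. (15)
R_nu = 16.4e5       # neutrinosphere radius [cm] (Sect. 3.1)
r    = 2.0 * R_nu   # sample radius; arbitrary illustrative choice

x    = math.sqrt(1.0 - (R_nu / r) ** 2)                               # Eq. (12)
n_nu = (1.0 - x) / (2.0 * math.pi) * L_nu / (E_nu * c * R_nu ** 2)    # Eq. (11), with c restored

print(f"x = {x:.3f},  n_nu = {n_nu:.2e} cm^-3")
# Far from the neutrinosphere, (1 - x) -> R_nu^2 / (2 r^2), so
# n_nu -> L_nu / (4 pi r^2 c E_nu), i.e. the free-streaming number flux divided by c.
```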
As for the inner boundary condition, the radius, the gravitational mass and the baryon mass are given. Thermodynamical quantities are set to be the same as those at the innermost mesh point. We adopt the reflecting condition for the velocity. As for the outer boundary, we impose a constant pressure. The thermodynamical quantities are again assumed to be the same as those at the neighboring mesh point inside.
### 2.2 Initial models
The initial models are constructed based on the hydrostatic configuration of the proto-neutron stars. We take a surface layer of the proto-neutron star and remap it to the hydro code as an initial configuration. We construct initial models in two ways. One is based on the numerical results of the proto-neutron star cooling (Sumiyoshi, Suzuki & Toki 1995) as realistic models. In the second case, we choose the neutron star mass and radius at the inner boundary arbitrarily.
As for the former case, the numerical simulations have been performed by solving the quasi-static evolution of the proto-neutron star with neutrino transport using the multi-energy-group flux-limited diffusion scheme (Suzuki 1994). In these simulations, we have used the relativistic EOS table, which is the same as the one used in the current study. By so doing, the mapping of thermodynamical quantities to the hydro code is done consistently. We choose the cases of the baryon mass of $`1.62M_{\odot }`$ and $`2.00M_{\odot }`$. Snapshots of density, electron fraction and temperature at a certain time of the proto-neutron star cooling simulations are picked up to be used as initial inputs for the neutrino-driven wind simulations. We remark that the proto-neutron star cooling simulations start from the initial profile provided by Mayle and Wilson, which corresponds to the supernova core at 0.4 seconds after the core bounce. We cut out the surface layer containing a small baryon mass (typically from $`10^{-6}M_{\odot }`$ to $`10^{-4}M_{\odot }`$ depending on the models) from the proto-neutron star and map it onto the mesh of the hydro code. As for the latter case, we choose the mass and radius of the proto-neutron star arbitrarily as inputs, as in QW. We solve the Oppenheimer-Volkoff equation to obtain the structure of a thin layer above the given radius. We assume the temperature and the electron fraction to be constant in the whole layer and take $`T=3`$ MeV and $`Y_e=0.25`$ in the current study. For both cases, we have confirmed that the initial configuration settles into a static state within a short time after remapping, as long as we switch off the neutrino reactions.
We take as a reference the average energy of neutrinos as
$`E_{\nu _e}`$ $`=`$ $`10\mathrm{MeV},`$ (15)
$`E_{\overline{\nu }_e}`$ $`=`$ $`20\mathrm{MeV},`$ (16)
$`E_{\nu _\mu }`$ $`=`$ $`30\mathrm{MeV}.`$ (17)
Similar values are chosen in the analytic studies (QW, Otsuki et al. 1999). The average energies of $`\mu `$, $`\tau `$ neutrinos and anti-neutrinos are assumed to be identical. We give the neutrino luminosities as inputs and assume that they are constant during the simulations. We also assume that the luminosities are common to all flavors. These simple settings for neutrinos are meant to allow comparison with previous studies and to clarify the dependence on the luminosity and average energy of neutrinos and on the mass and radius of proto-neutron stars. In the forthcoming paper, we will take more realistic neutrino spectra from the proto-neutron star simulations, taking into account the energy distribution, flavor dependence and time dependence.
## 3 Numerical results
### 3.1 Models based on proto-neutron star cooling simulations
We start with a model based on the proto-neutron star cooling simulations with the baryon mass of $`1.62M_{\odot }`$. As an initial configuration, we employ the output at $`t=3`$ sec of the proto-neutron star cooling simulation. We set the neutrino luminosities $`L_{\nu _i}=1\times 10^{51}`$ ergs/s for each species. The total luminosity, therefore, amounts to $`L_\nu ^{tot}=6\times 10^{51}`$ ergs/s. The outer boundary pressure is set to be $`p_{out}=10^{22}`$ dyn/cm<sup>2</sup>. We take the radius of the neutrinosphere as $`R_\nu =16.4`$ km.
We display in Fig. 1 the trajectories of mass elements during the simulation. The positions in radius are shown as a function of time. The trajectories of every 5 mass shells are presented here. The surface layers are heated by neutrinos and escape from the surface of the proto-neutron star one by one, forming the neutrino-driven wind. Figure 2 depicts the temperatures of the mass elements as a function of time. The temperature once becomes as high as 3 MeV due to the neutrino-heating and cools down to 0.1 MeV during the expansion. The pressure of the material balances with the imposed outer pressure when the temperature of the material becomes around 0.1 MeV, which roughly corresponds to this pressure. Figure 3 shows the densities of the mass elements as a function of time. The density of the material in the wind decreases due to expansion and stays around $`10^4`$ g/cm<sup>3</sup>. The entropy per baryon, $`S`$, becomes high during the evolution and reaches up to $`87`$ (in units of $`k_B`$ hereafter) at $`T=0.5`$ MeV in this model. After that, the entropy remains roughly constant because the heating and cooling processes become negligible.
We define the expansion time scale, $`\tau _{dyn}`$, as the e-folding time of the temperature at $`T=0.5`$ MeV during the expansion. It is found to be 0.15 seconds in this model. This definition accords with those used in the previous papers by Hoffman et al. (1997) and Otsuki et al. (1999) to discuss the $`\alpha `$-rich freeze-out and the r-process. The mass loss rate, $`\dot{M}`$, which is the amount of the ejected mass divided by the mass loss time, is found to be $`1.5\times 10^{-5}M_{\odot }`$/s.
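The expansion time scale defined above can be read off from any Lagrangian trajectory as the local e-folding time of the temperature, τ = T/|dT/dt|, evaluated where the element passes T = 0.5 MeV (for a purely exponential decline this equals the time for T to drop from 0.5 MeV to 0.5/e MeV). A minimal sketch of this extraction is given below; the demonstration trajectory is a placeholder, not actual simulation output.

```python
import numpy as np

def expansion_time_scale(t, T, T_ref=0.5):
    """e-folding time of the temperature, tau = T / |dT/dt|, evaluated at T = T_ref [MeV].
    t : times along a Lagrangian trajectory [s]; T : temperatures [MeV], decreasing."""
    t = np.asarray(t, dtype=float)
    T = np.asarray(T, dtype=float)
    tau = T / np.abs(np.gradient(T, t))
    # T is decreasing, so reverse the arrays before interpolating onto T_ref
    return float(np.interp(T_ref, T[::-1], tau[::-1]))

# Placeholder trajectory: T = 3 MeV * exp(-t / 0.15 s), for which the
# routine returns ~0.15 s by construction.
t_demo = np.linspace(0.0, 1.0, 400)
T_demo = 3.0 * np.exp(-t_demo / 0.15)
print(f"tau_dyn = {expansion_time_scale(t_demo, T_demo):.3f} s")
```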
Figure 4 displays the time evolution of the electron fraction for one of the mass elements. The electron fraction is small at the beginning since the mass element was originally in the surface layer of the proto-neutron star. It increases due to neutrino reactions on nucleons during the expansion in the wind and reaches 0.46 at $`T=0.5`$ MeV. The equilibrium value of the electron fraction is roughly determined by the neutrino spectra. We will discuss this point in section 4.1.
To explore their dependences, we have performed simulations with different neutrino luminosities and simulations with a different proto-neutron star mass. For the latter case, we employ the profile of a more massive proto-neutron star at 3 seconds in the cooling simulation. Table 1 summarizes the model parameters and some results of the numerical simulations.
Figure 5 shows the dependence of the key quantities on the neutrino luminosity. For comparison, we also plot the values obtained with the analytic treatment (Otsuki et al. 1999), in which the neutron star mass and radius are assumed to be $`1.4M_{\odot }`$ and 10 km, respectively. The dependence on the neutrino luminosity is found to be qualitatively similar between the numerical simulations and the analytic treatment despite the difference in the initial profiles. We see, however, quantitative differences in the mass loss rate and the entropy per baryon. The mass loss rate in the current numerical study turns out to be larger than that in the analytic study, while the entropy per baryon tends to be lower.
### 3.2 Models with given mass and radius
To compare with the analytic treatment in detail, we have performed the numerical simulations for the proto-neutron star with the mass and radius given by hand. This is meant to check the numerical results by the comparison with the analytic treatment for simple cases as well as to understand the results obtained in the previous section better.
Table 2 summarizes the model parameters and some results of the numerical simulations. We choose $`1.4M_{\odot }`$ and $`2.0M_{\odot }`$ for the mass and 10 km for the radius to compare with the results by Otsuki et al. (1999), where the general relativistic analytic study has been worked out for the same parameters.
Figure 6 shows the key quantities of the neutrino-driven wind as a function of neutrino luminosity both for the numerical simulations and the analytic treatment. The general trend of the luminosity dependence is common. As for the mass loss rate and the entropy per baryon, the results of numerical simulations accord quantitatively well with those of analytic treatment. On the other hand, the expansion time scale is found systematically shorter in the numerical simulations than in the analytic study. The difference becomes large in some cases, which is preferable for r-process.
We have found that this difference in the expansion time scale arises mainly from our proper treatment of the equation of state. The analytic studies done so far used the equation of state of radiation-dominated matter,
$$p=\frac{11\pi ^2}{180}T^4,$$
(18)
even for temperatures below 0.5 MeV. It overestimates the pressure of electrons/positrons in the low-temperature regime. As a result, the pressure at large radii, where the temperature is low, is larger in the analytic treatment than in our simulations at the same temperature. The larger pressure results in a longer expansion time scale since it decelerates the wind.
We have investigated this issue in detail for model c01. In the numerical simulations, we impose a constant pressure ($`1\times 10^{22}`$ dyn/cm<sup>2</sup> for model c01) at the outer boundary, while in the analytic treatment the temperature is assumed to be 0.1 MeV at the radius $`10^4`$ km. Although the temperatures at the radius $`10^3`$ km are 0.12 MeV in both cases, the pressures there are found to be $`2.2\times 10^{22}`$ dyn/cm<sup>2</sup> in the analytic treatment and $`1.1\times 10^{22}`$ dyn/cm<sup>2</sup> in the simulation. This discrepancy comes from the different treatments of the equation of state, and the smaller pressure results in a shorter expansion time in the simulation. To confirm this interpretation, we recalculate the analytic model with a lower temperature (0.09 MeV), which gives the same pressure at the radius $`10^3`$ km as model c01. In this case, the profiles of the pressure become closer to each other, although the pressure in the analytic treatment is still higher inside the radius $`10^3`$ km. The expansion time scale becomes shorter ($`0.10`$ sec) than in the original analytic model ($`0.16`$ sec) and closer to the value ($`0.05`$ sec) in the simulation. Thus it is clear that the proper treatment of the equation of state is crucial for obtaining the correct expansion time scale. We stress again that the shorter expansion time scale is preferable for the r-process.
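For orientation, Eq. (18) can be evaluated directly; the only subtlety is converting the natural-unit result (MeV<sup>4</sup>) into dyn/cm<sup>2</sup>. The sketch below does this for temperatures near 0.1 MeV and recovers the $`10^{22}`$ dyn/cm<sup>2</sup> scale quoted above (exact agreement with the quoted $`2.2\times 10^{22}`$ dyn/cm<sup>2</sup> is not expected, since the temperatures in the text are rounded).

```python
import math

HBAR_C     = 197.327   # MeV fm
MEV_TO_ERG = 1.602e-6
FM_TO_CM   = 1.0e-13

def p_radiation(T_MeV):
    """Radiation-dominated pressure of Eq. (18), p = (11 pi^2 / 180) T^4, in dyn/cm^2."""
    p_nat = 11.0 * math.pi ** 2 / 180.0 * T_MeV ** 4   # natural units [MeV^4]
    conv = MEV_TO_ERG / (HBAR_C * FM_TO_CM) ** 3       # 1 MeV^4 expressed in erg/cm^3 = dyn/cm^2
    return p_nat * conv

for T in (0.10, 0.12):
    print(f"T = {T:.2f} MeV :  p = {p_radiation(T):.2e} dyn/cm^2")
# ~1.3e22 and ~2.6e22 dyn/cm^2, i.e. the 10^22 dyn/cm^2 scale discussed above.
```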
We have also examined the dependence on the pressure imposed at the outer boundary. When we take a smaller pressure ($`1\times 10^{20}`$ dyn/cm<sup>2</sup>) the expansion time scale becomes shorter (0.026 sec) than the value (0.05 sec) in the standard case ($`1\times 10^{22}`$ dyn/cm<sup>2</sup>), while the other quantities such as the mass loss rate and the entropy per baryon change little. The temperature becomes as low as 0.04 MeV due to the rapid expansion, which roughly corresponds to the pressure imposed at the outer boundary. The radius and the density become larger and lower, correspondingly.
Here we comment on the effect of the general relativity. As pointed out by Cardall & Fuller (1997) and Otsuki et al. (1999), we found that the entropy per baryon is higher and the expansion time scale is shorter than the corresponding results in the Newtonian treatment by Qian & Woosley (1996).
## 4 R-process nucleosynthesis
### 4.1 Conditions for r-process
Based on the hydrodynamical simulations of the neutrino-driven wind, we discuss here key quantities for the r-process nucleosynthesis, that is, the entropy per baryon, expansion time scale and electron fraction at the time of the nucleosynthesis. Higher entropy per baryon, shorter expansion time scale and lower electron fraction (and their combinations) are favorable for the r-process nucleosynthesis (Hoffman et al. 1997; Meyer and Brown 1997).
Figure 7 summarizes all the results of the numerical simulations listed in Tables 1 and 2 in the plane of the expansion time scale and the entropy per baryon. The analytic models are also shown in the same figure. It is found that the results for the models based on the proto-neutron star cooling simulations are similar to the analytic model with $`1.4M_{\odot }`$. Among them, the more massive case with $`2.00M_{\odot }`$ is preferable for the r-process, having a higher entropy per baryon. As for the models with given mass and radius, they have higher entropies and shorter expansion time scales (upper left direction in the figure) than the models based on the proto-neutron star cooling simulations. This is mainly because the former have a smaller radius and a deeper gravitational potential than the latter (for details, see Qian & Woosley 1996). We can see here again the general trend that the more massive models are better for the r-process. Using an analytic model with general relativity, Otsuki et al. (1999) pointed out that the r-process might be possible for massive neutron stars with a short expansion time scale. Our numerical simulations show even shorter expansion time scales, giving a higher possibility of a successful r-process. Indeed, in the analytic study by Otsuki et al. (1999), the r-process is possible only for high-luminosity cases, while the current study relaxes this constraint.
Another key quantity is the electron fraction. If the electron fraction is small enough, the r-process is possible even with low entropy per baryon and with long expansion time scale. The electron fraction in the neutrino-driven wind is governed by the neutrino captures on nucleons, thereby determined by the relative strength of the luminosities and energy spectra of the electron-type neutrinos and anti-neutrinos. In the model with a higher average energy for the electron-type anti-neutrinos, $`E_{\overline{\nu }_e}=30`$ MeV, the electron fraction at $`T=0.5`$ MeV turns out to be 0.38 which is lower than 0.46 in model p06 with $`E_{\overline{\nu }_e}=20`$ MeV.
The electron fraction in the neutrino-driven wind can be estimated by the average energies of neutrinos and anti-neutrinos as
$$Y_e^{eq}=\frac{E_{\nu _e}+2\mathrm{\Delta }}{E_{\nu _e}+E_{\overline{\nu }_e}},$$
(19)
where $`\mathrm{\Delta }`$ is the neutron-proton mass difference (Qian and Woosley 1996). Here we assume that the charge-changing neutrino capture reactions are in equilibrium and that the luminosity of neutrinos are the same as that of anti-neutrinos.
The electron fractions obtained in the numerical simulations roughly accord with the above estimation. We found that the neutrino capture reactions come to equilibrium before the temperature falls down to 0.5 MeV. When the temperature goes below 0.5 MeV, the mass fraction of $`\alpha `$-particle becomes large and that of proton becomes negligible. In this situation, the electron fraction remains almost constant with only a slight increase due to the anti-neutrino capture on neutrons. The neutrino capture on protons has already frozen out due to the lack of free protons.
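Plugging the adopted average energies of Eqs. (15)–(16) into Eq. (19) makes the comparison explicit; Δ = 1.293 MeV is the neutron–proton mass difference. The few lines below simply carry out this arithmetic.

```python
E_nu_e    = 10.0    # average nu_e energy [MeV], Eq. (15)
E_nubar_e = 20.0    # average anti-nu_e energy [MeV], Eq. (16)
DELTA     = 1.293   # neutron-proton mass difference [MeV]

Y_e_eq = (E_nu_e + 2.0 * DELTA) / (E_nu_e + E_nubar_e)   # Eq. (19)
print(f"Y_e^eq = {Y_e_eq:.2f}")   # ~0.42, to be compared with the simulated 0.44-0.46
```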
### 4.2 R-process network calculation
We have performed a nucleosynthesis calculation using a result of our hydrodynamical simulations for neutrino-driven winds in order to demonstrate that the r-process nucleosynthesis is indeed possible for the model.
We employ model b09 (the mass of $`2.0M_{\odot }`$ and the radius of 10 km with $`L_\nu ^{tot}=6\times 10^{52}`$ ergs/s), which has the shortest expansion time scale among our models. We pick up one of the mass trajectories in the hydrodynamical result and use it as input to the r-process calculation. We start the r-process calculation at the time when the temperature becomes $`T_9\equiv T/(10^9\mathrm{K})=9`$ in the trajectory. Figure 8 displays the temperature and density trajectories taken from the hydrodynamical simulation. The origin of time in the figure is shifted so that the temperature becomes $`T_9=9`$ there. In the network calculation, matter is assumed to be initially composed of neutrons and protons with the electron fraction $`Y_e=0.44`$, the result of the hydrodynamical simulation.
The nuclear reaction network employed in the code covers neutron-rich nuclei from the $`\beta `$ stability line to the neutron drip line for $`Z=10`$–$`100`$. It also includes light nuclei, which are required to synthesize the seed elements for the r-process. Neutron capture, its inverse reaction, $`\beta `$ decay and $`\beta `$-delayed neutron emission are incorporated for $`Z\ge 10`$. On top of the reactions for the r-process, charged-particle reactions are included to follow the $`\alpha `$-rich freeze-out. The ($`\alpha `$, n) reactions up to $`Z=36`$ and ($`\alpha `$, $`\gamma `$) reactions up to <sup>28</sup>Si are included. The $`\alpha `$ reactions that produce <sup>9</sup>Be, <sup>12</sup>C and beyond are also included. The details of the reaction network code will be described elsewhere (Terasawa et al. 1999).
The neutron-to-seed ratio at $`T_9=2.5`$ is found to be 120 in the r-process network calculation. The electron fraction at the same temperature is 0.42. The rapid expansion during the $`\alpha `$-rich freeze-out ($`T\sim 0.5`$ MeV) leads to only a small amount of seed elements, resulting in the high neutron-to-seed ratio. With this high neutron-to-seed ratio, the r-process elements up to $`A\sim 200`$ are subsequently produced. Figure 9 shows the yields of our r-process network calculation. It is remarkable that the 3rd peak at $`A=195`$ as well as the 2nd peak at $`A=130`$ is reproduced appropriately. It is emphasized that this result is a consequence of the short expansion time scale and accords with the result shown by Otsuki et al. (1999). The expansion time scale obtained in the current simulation, being shorter than in the analytic treatment, further enhances the 3rd peak height through the higher neutron-to-seed ratio.
## 5 Summary
We study numerically the general relativistic hydrodynamics of the neutrino-driven wind blown from the proto-neutron stars and examine the r-process nucleosynthesis there. We focus on the entropy per baryon, electron fraction, expansion time scale and mass loss rate as key quantities for the r-process. By employing the results of the proto-neutron star cooling simulations as inputs for the neutrino-driven wind simulations, we investigate those quantities in realistic situations. We also make comparison with the results of analytic treatment by assuming masses and radii of proto-neutron stars as in the analytic study. We explore the dependence on the profile of the proto-neutron star as well as neutrino luminosity.
We find that the entropy per baryon and the expansion time scale are neither high nor short enough for the r-process in the models based on proto-neutron star cooling simulations. This is mainly because the radius of the proto-neutron star is rather large due to the thermal pressure. The expansion time scale has a strong dependence on the neutrino luminosity and it becomes as short as 10 msec for high neutrino luminosities, which is close to the value required in the rapid expansion scenario by Otsuki et al. (1999).
On the other hand, the hydrodynamical simulations for models with given mass and radius show that larger masses and smaller radii are found more favorable for the r-process. In the case of massive and compact neutron stars with high neutrino luminosities, the entropy per baryon is high enough and the expansion time scale is short enough. We demonstrate by the network calculation a successful r-process nucleosynthesis for a model of our hydrodynamical simulations. The 2nd and 3rd abundance peaks of r-process elements and their relative height are well reproduced.
We find that the expansion time scales obtained in the simulations are systematically shorter than the values in the analytic treatment. This is because the pressure at low temperature is overestimated in the analytic treatment, and therefore the expansion time scale becomes longer than what one expects with the proper treatment of the equation of state as in our study. Since the expansion time scale determines the neutron-to-seed ratio crucially, even a small decrease in the time scale may increase the neutron-to-seed ratio substantially making ensuing r-process more promising.
We notice here that the dynamics of the neutrino-driven wind is sensitive to the outer boundary condition. A decrease of the outer boundary pressure results in a shorter expansion time scale. On the other hand, the temperature should remain around 0.1 MeV for about 1 second so that the r-process can occur. If the outer boundary pressure is too small, the temperature decreases too rapidly for the r-process to proceed even though the expansion time scale is short enough for a high neutron-to-seed ratio. Most of the previous studies have so far adopted $`T=0.1`$ MeV at $`10^4`$ km as a boundary condition; however, we have to bear in mind this additional uncertainty.
The fact that the results of the hydrodynamical simulations based on the proto-neutron star cooling simulations turn out not to be suitable for the r-process does not exclude the possibility of the r-process in the neutrino-driven wind. We have assumed that the neutrino luminosity is constant in time in the current study. This is too simple an assumption. In reality, the neutrino luminosity decreases on a time scale of 10 seconds, and one should not forget the time dependence of the luminosity. In the simulation of Woosley et al. (1994), the r-process takes place in a late stage with a high entropy, when the neutrino luminosity is low and the outer material is already expanding due to the high neutrino luminosity in the earlier stage. Numerical simulations with time-dependent luminosities as well as realistic neutrino energy spectra are in progress and will be reported elsewhere.
## Acknowledgment
We are grateful to S. Wanajo, T. Kajino and I. Tanihata for encouraging comments and fruitful discussions on the r-process nucleosynthesis. We would like to thank K. Oyamatsu, H. Shen and H. Toki for their advice on the usage of the relativistic EOS table. K. S. would like to express special thanks to H. Shen for providing the numerical code for the EOS with $`\alpha `$-particles. The numerical simulations have been performed on the supercomputer VPP700E/128 at RIKEN and VPP500/80 at KEK (KEK Supercomputer Projects No.98-35 and No.99-52). This work is partially supported by the Grants-in-Aid for the Center-of-Excellence (COE) Research of the Ministry of Education, Science, Sports and Culture of Japan to RESCEU (No.07CE2002).
## References
Burbidge, E.M., Burbidge, G.R., Fowler, W.A. and Hoyle, F. 1957, Rev. Mod. Phys. 29, 547
Cardall, C.Y. and Fuller, G.M. 1997, ApJ 486, L111
Cowan, J.J., Thielemann, F.-K. and Truran, J.W. 1991, Phys. Rep. 208, 267
Duncan, R.C., Shapiro, S.L. and Wasserman I. 1986, ApJ 309, 141
Hillebrandt, W. 1978, Space Sci. Rev. 21, 639
Hoffman, R.D., Woosley, S.E. and Qian, Y.-Z. 1997, ApJ 482, 951
Käppeler, F., Beer, H. and Wisshak, K. 1989, Rep. Prog. Phys. 52, 945
Meyer, B.S., Mathews, G.J., Howard, W.M., Woosley, S.E. and Hoffman, R.D. 1992, ApJ 399, 656
Meyer, B.S. and Brown, J.S. 1997, ApJS 112, 199
Otsuki, K. 1999, Ph.D Thesis, Osaka University
Otsuki, K., Tagoshi, H., Kajino, T. and Wanajo, S. 2000, ApJ 531, in press
Qian, Y.-Z. and Woosley, S.E. 1996, ApJ 471, 331 (QW)
Shen, H., Toki, H., Oyamatsu, K. and Sumiyoshi, K. 1998, Nucl. Phys. A637, 435
Shen, H., Toki, H., Oyamatsu, K. and Sumiyoshi, K. 1998, Prog. Theor. Phys. 100, 1013
Sneden, C., McWilliam, A., Preston, G.W., Cowan, J.J., Burris, D.I. and Armosky, B.J. 1996, ApJ 467, 819
Sumiyoshi, K., Kuwabara, H. and Toki, H. 1995, Nucl. Phys. A581, 725
Sumiyoshi, K., Suzuki, H. and Toki, H. 1995, A&A 303, 475
Sugahara, Y. and Toki, H. 1994, Nucl. Phys. A579, 557
Suzuki, H. 1994, Physics and Astrophysics of Neutrinos, edited by Fukugita, M. and Suzuki, A., (Springer-Verlag, Tokyo, 1994), p763
Takahashi, K., Witti, J. and Janka, H.-Th. 1994, A&A 286, 857
Terasawa, M., Sumiyoshi, K., Tanihata, I. and Kajino, T., in preparation
Witti, J., Janka, H.-Th. and Takahashi, K. 1994, A&A 286, 841
Woosley, S.E., Wilson, J.R., Mathews, G.J., Hoffman, R.D. and Meyer, B.S. 1994, ApJ 433, 229
Yamada, S. 1997, ApJ 475, 720
Yamada, S., Janka, H.-Th. and Suzuki, H. 1999, A&A 344, 533
## Table caption
Summary of models based on proto-neutron star cooling simulations. $`M_B`$, $`M_G`$ and $`R`$ are the baryon mass, gravitational mass and radius of the proto-neutron star. For the definition of the other entries, see the text.
Summary of models with given mass and radius. $`M_G`$ and $`R`$ are gravitational mass and radius of the proto-neutron star. For the definition of the other entries, see the text.
Table 1
| Model | $`M_B`$ \[$`M_{\odot }`$\] | $`M_G`$ \[$`M_{\odot }`$\] | $`R`$ \[km\] | $`L_\nu ^{tot}`$ \[ergs/s\] | $`S`$ \[$`k_B`$\] | $`\tau _{dyn}`$ \[sec\] | $`\dot{M}`$ \[$`M_{\odot }`$/s\] | $`Y_e`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| p07 | 1.62 | 1.55 | 17.7 | $`3.6\times 10^{51}`$ | 100 | $`3.2\times 10^{-1}`$ | $`6.1\times 10^{-6}`$ | 0.47 |
| p06 | 1.62 | 1.55 | 17.7 | $`6.0\times 10^{51}`$ | 87 | $`1.5\times 10^{-1}`$ | $`1.5\times 10^{-5}`$ | 0.46 |
| p08 | 1.62 | 1.55 | 17.7 | $`1.8\times 10^{52}`$ | 76 | $`5.3\times 10^{-2}`$ | $`1.0\times 10^{-4}`$ | 0.45 |
| p09 | 1.62 | 1.55 | 17.7 | $`6.0\times 10^{52}`$ | 65 | $`1.6\times 10^{-2}`$ | $`8.6\times 10^{-4}`$ | 0.44 |
| r06 | 2.00 | 1.88 | 16.9 | $`3.6\times 10^{51}`$ | 128 | $`2.9\times 10^{-1}`$ | $`3.4\times 10^{-6}`$ | 0.46 |
| r03 | 2.00 | 1.88 | 16.9 | $`6.0\times 10^{51}`$ | 119 | $`1.5\times 10^{-1}`$ | $`8.3\times 10^{-6}`$ | 0.46 |
| r12 | 2.00 | 1.88 | 16.9 | $`1.8\times 10^{52}`$ | 105 | $`5.0\times 10^{-2}`$ | $`5.6\times 10^{-5}`$ | 0.45 |
| r01 | 2.00 | 1.88 | 16.9 | $`6.0\times 10^{52}`$ | 84 | $`1.5\times 10^{-2}`$ | $`4.7\times 10^{-4}`$ | 0.44 |
Table 2
| Model | $`M_G`$ \[$`M_{\odot }`$\] | $`R`$ \[km\] | $`L_\nu ^{tot}`$ \[ergs/s\] | $`S`$ \[$`k_B`$\] | $`\tau _{dyn}`$ \[sec\] | $`\dot{M}`$ \[$`M_{\odot }`$/s\] | $`Y_e`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| c02 | 1.40 | 10.0 | $`3.6\times 10^{51}`$ | 142 | $`9.7\times 10^{-2}`$ | $`2.7\times 10^{-6}`$ | 0.46 |
| c01 | 1.40 | 10.0 | $`6.0\times 10^{51}`$ | 131 | $`5.0\times 10^{-2}`$ | $`6.5\times 10^{-6}`$ | 0.45 |
| c08 | 1.40 | 10.0 | $`1.8\times 10^{52}`$ | 121 | $`1.2\times 10^{-2}`$ | $`4.4\times 10^{-5}`$ | 0.44 |
| c04 | 1.40 | 10.0 | $`6.0\times 10^{52}`$ | 95 | $`6.2\times 10^{-3}`$ | $`4.2\times 10^{-4}`$ | 0.44 |
| b17 | 2.00 | 10.0 | $`3.6\times 10^{51}`$ | 263 | $`1.0\times 10^{-1}`$ | $`8.8\times 10^{-7}`$ | 0.46 |
| b10 | 2.00 | 10.0 | $`6.0\times 10^{51}`$ | 239 | $`5.3\times 10^{-2}`$ | $`2.2\times 10^{-6}`$ | 0.45 |
| b18 | 2.00 | 10.0 | $`1.8\times 10^{52}`$ | 196 | $`9.6\times 10^{-3}`$ | $`1.6\times 10^{-5}`$ | 0.44 |
| b09 | 2.00 | 10.0 | $`6.0\times 10^{52}`$ | 165 | $`5.1\times 10^{-3}`$ | $`1.3\times 10^{-4}`$ | 0.44 |
## Figure captions
The trajectories of the mass elements in the neutrino-driven wind for model c01 as a function of time. Every 5 mass shells are shown here.
The temperatures of mass elements for the same model as Fig. 1.
The densities of mass elements for the same model as Fig. 1.
The electron fraction of a mass element for the same model as Fig. 1.
The luminosity dependence of the mass loss rate, entropy per baryon and expansion time scale for the models based on the proto-neutron star cooling simulations (symbols) and for the case of the analytic treatment (solid line). The solid circles are models with the baryon mass of $`1.62M_{}`$ and the solid squares with the baryon mass of $`2.00M_{}`$. The result of the analytic treatment is for the case with the gravitational mass of $`1.4M_{}`$ and the radius of 10 km.
The luminosity dependence of the mass loss rate, entropy per baryon and expansion time scale for the models with given mass and radius (symbols) and for the analytic treatment (solid line). The open circles are models with the gravitational mass of $`1.4M_{}`$ and the open squares with the gravitational mass of $`2.0M_{}`$. The solid and dashed lines show the results of the analytic treatment with the gravitational mass of $`1.4M_{}`$ and $`2.0M_{}`$, respectively. The radii are taken as 10 km in all cases.
The expansion time scale ($`\tau _{dyn}`$) and entropy per baryon ($`S`$) at the temperature $`T=0.5`$ MeV. The solid circle and solid square symbols are for the models based on the proto-neutron star cooling simulations with the baryon mass of $`1.62M_{}`$ and $`2.00M_{}`$, respectively. The open circle and open square symbols are for the models with given mass and radius (the gravitational mass of $`1.4M_{}`$ and $`2.0M_{}`$, respectively). The solid and dashed lines show the results of the analytic treatment with the gravitational mass of $`1.4M_{}`$ and $`2.0M_{}`$, respectively. The radii are taken as 10 km in all cases.
The time evolution of the temperature and density of a mass element taken from the model with the shortest expansion time scale (the gravitational mass of $`2.0M_{}`$, the radius of 10 km, and the total neutrino luminosity of $`6\times 10^{52}`$ ergs/s). The time when the temperature becomes $`T_9=9`$ is defined to be 0.
The abundance of the r-process elements obtained by the network calculation using the trajectory shown in Fig. 8. The scaled r-process abundances from Käppeler, Beer and Wisshak (1989) are shown by dots for comparison. |
no-problem/9912/hep-ph9912466.html | ar5iv | text | # Cosmological implications of Supersymmetric CP violating phases
## Abstract
We show that large SUSY phases have no significant effect on the relic density of the lightest supersymmetric particle (LSP). However, they are very significant for the detection rates. We emphasise that the phase of the trilinear coupling increase the direct and indirect detection rates.
In supersymmetric (SUSY) models there are many new CP violating phases beyond the phase $`\delta _{CKM}`$ of the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix. They arise mainly from the soft SUSY breaking parameters which are in general complex. These phases give large one loop contributions to the electric dipole moments (EDM) of the neutron and electron which exceed the current limits. Hence, SUSY phases are generally quite constrained, they have to be of order $`10^3`$ for SUSY particle masses of order 100 GeV. However, it was suggested that there are internal cancellations among various contribution to the EDM (including the chromoelectric and purely gluonic operator contributions) whereby allowing for large CP phases . We have shown that in the effective supergravity derived from string theory, such cancellation is accidental and it only occurs at few points in the parameter space . Recently, it was argued that the non universal gaugino masses and their relative phases are crucial for having sufficient cancellations among the contributions to EDMs .
In such a case, one expects that these large phases have important impact on the lightest supersymmetric particle (LSP) relic density and its detection rates. In Ref. the effect of SUSY phases on the LSP mass, purity, relic density, elastic cross section and detection rates has been considered within models with universal, hence real, gaugino masses. It was shown that the phases have no significant effect on the LSP relic abundance but a substantial impact on the detection rates. Here, we study the effect of gaugino phases, particularly, we consider D-brane model recently proposed which is able to allow large value of phases while the EDM of the neutron and electron are less than the experimental limit as shown in Ref. . It turns out that the LSP of this model could be bino or wino like depending on the ratio between $`M_1`$ and $`M_2`$. In the region where the EDMs are smaller than the limit, the mass of the LSP is very close to the lightest chargino, hence the co-annhilation between them becomes very important and it greatly reduces the relic density . The phases have no important effect on the LSP relic aboundance as in the case of Ref. . However, their effect on the detection rates is very significant and is larger than what is found in the case of real gaugino masses .
The possibility of non-universal gaugino masses and phases at the tree level is natural in the type I string theory . The soft SUSY breaking terms in this class of models depend on the embedding of the standard model (SM) gauge group in the D-brane sector. In case of the SM gauge group is not associated with a single set of branes the gaugino masses are non universal. We assume that the gauge group $`SU(3)_C\times U(1)_Y`$ is associated with one set of five branes (say $`5_1`$) and $`SU(2)_L`$ is associated with a second set $`5_2`$ . The soft SUSY breaking terms take the following form .
$`M_1`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{cos}\theta \mathrm{\Theta }_1e^{i\alpha _1}=M_3=A,`$ (1)
$`M_2`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{cos}\theta \mathrm{\Theta }_2e^{i\alpha _2},`$ (2)
where $`A`$ is the trilinear coupling. The soft scalar mass squareds are gives by
$`m_Q^2`$ $`=`$ $`m_L^2=m_{H_1}^2=m_{H_2}^2=m_{3/2}(13/2\mathrm{sin}^2\theta ),`$ (3)
$`m_D^2`$ $`=`$ $`m_U^2=m_E^2=m_{3/2}(13\mathrm{cos}^2\theta ),`$ (4)
and $`\mathrm{\Theta }_1^2+\mathrm{\Theta }_2^2=0`$. In this case, by using the appropriate field redefinitions and the $`R`$-rotation we end up with four physical phases, which can not be rotated away. These phases can be chosen to be: the phase of $`M_1`$ ($`\varphi _1`$), the phase of $`M_3`$ ($`\varphi _3`$), the phase of $`A`$ ($`\varphi _A`$) and the phase of $`\mu `$ ($`\varphi _\mu `$). The phase of $`B`$ is fixed by the condition that $`B\mu `$ is real.
The effect of these phases on the EDM of the electron and the neutron, taking into account the cancellation mechanism between the different contributions, has been examined in Ref. <sup>1</sup><sup>1</sup>1See also Ref. . It was shown that large values of these phases can be accommodated and the electron and neutron EDM satisfy the experimental constraint. It is worthy noticed that the EDM impose a constraint on the ratio $`M_1/M_2`$. In fact, to have an overlap between the electron and neutron EDM allowed regions, $`M_2`$ should be less than $`M_1`$, and as explained in Ref. , a precise overlap between these two regions occurs at $`\mathrm{\Theta }_1=0.85`$. Such constraint has an important impact on the LSP. In this case, we have $`M_2`$ is the lightest gaugino at GUT scale. However, at the electroweak (EW) scale, it turns out that the lightest neutralino is a bino like. Furthermore, the LSP mass is close to the lightest chargino mass which is equal to the mass of the next lightest neutralino $`(\stackrel{~}{\chi }_2^0)`$. Therefore, the co-annhilation between the bino and the chargino as well as the next to lightest neutralino are very important and have to be included in the calculation of the relic density.
We study the effect of the SUSY CP violating phases and the co-annihilation on the relic density and also on the upper bound of the LSP mass. Since the LSP is bino like, the annihilation is predominantly, as usual, into leptons by the exchange of the right slepton. Without co-annihilation, the constraint on the relic density $`0.025<\mathrm{\Omega }_{LSP}h^2<0.22`$ impose sever constraint on the LSP mass, namely $`m_\chi <150`$ GeV, and the SUSY phases have no any significant effect in relaxing such sever constraint as found in Ref. . Including the co-annihilation of $`\chi `$ with $`\chi _1^+`$ and $`\stackrel{~}{\chi }_2^0`$ is very important to reduce the LSP relic density to an acceptable level.
Given that the LSP is almost pure bino, the co-annihilation processes are predominantly into fermions. However, since the coupling of $`\stackrel{~}{\chi }_2^0f\stackrel{~}{f}`$ is proportional to $`Z_{2j}`$, it is smaller than the coupling of $`\stackrel{~}{\chi }_1^+f\stackrel{~}{f^{}}`$. We found that the dominant contribution is due to the co-annhiliation channel $`\stackrel{~}{\chi }_1^+\chi f\overline{f}`$. We also include $`\stackrel{~}{\chi }_1^+\chi W^+\gamma `$ channel, estimated to contribute with a few cent. Then, we can calculate the relic aboundance using the standard procedure . In Fig.1 we show the values of the LSP relic abundance $`\mathrm{\Omega }_\chi h^2`$, estimated with including the co-annihilations, corresponding to the LSP mass.
This figure shows that the co-annihilation processes play a very significant role in reducing the values of $`\mathrm{\Omega }_\chi h^2`$; even so, we obtain an upper bound on the mass of the LSP from the lower bound of the relic density, $`\mathrm{\Omega }_\chi h^2>0.025`$, which leads to $`m_\chi <400`$ GeV. Here also the effect of the SUSY phases is insignificant, and the same upper bound on the LSP mass is obtained for vanishing and non-vanishing phases.
As shown in Ref. , the SUSY phases is found to have a significant effect on the direct detection rate ($`R`$) and indirect detection rate ($`\mathrm{\Gamma }`$). The phase of $`\varphi _A`$ increases the values of $`R`$ and $`\mathrm{\Gamma }`$ . Furthermore, the enhancement of the ratios of the rates with non vanishing $`\varphi _A`$ to the rates in the absence of this phase are even large than what is found in Ref. , since as we explained, here $`\varphi _A`$ has larger values at EW scale due to the gluino contribution through the renormalization. This work is supported by a Ministerio de Educacion y Cultura research grant. |
no-problem/9912/cond-mat9912423.html | ar5iv | text | # Two-species percolation and Scaling theory of the metal-insulator transition in two dimensions
## I Background and introduction of the model
The surprising experimental observation of a metal-insulator transition in two dimensions , in contradiction with the predictions of single-parameter scaling theory for noninteracting electrons , has been a subject of extensive investigation in recent years. Theories ranging from attributing the effect to scattering by impurities to those suggesting “a new form of matter” have been proposed. Some theories, based on the treatment of disorder and electron-electron interactions by Finkelstein , have been put forward , while other approaches considered spin-orbit scattering or percolation of electron-hole liquid . Altshuler et al. gave several arguments why this transition is not due to a non-Fermi liquid behavior, including the fact that the exponential increase of the conductance with temperature persists to high densities where the conductance is almost two orders of magnitude larger than the critical conductance, and the fact that the Hall resistance is rather insensitive to temperature, and does not display any critical behavior. Some experimental results supporting the conclusion that the transition is not driven by interactions that were mentioned in included the fact that such a transition was observed also in high-density electron gas upon the introduction of artificial disorder and the fact that increasing the density in a parallel electron gas increases the conductance , even though the interactions are screened by the parallel gas. More recently, the compressibility on the metallic side of the transition was measured and was shown to be accurately described by Hartree-Fock approximation, again indicating a normal Fermi-liquid behavior. Several other recent experiments have demonstrated weak-localization behavior on the metallic side with very little effect of electron-electron interactions.
Recently I proposed a simple non-interacting electron model, combining local quantum tunneling and global classical percolation, to explain several features of the experimental observations. At low electron or hole densities the potential fluctuations due to the disorder cannot be screened and they define density puddles (density separation into puddles in gated GaAs was indeed observed experimentally by Eytan et al. , using near-field spectroscopy). These puddles are connected via saddle points, or quantum point contacts (QPCs). It is now established that even at low temperatures and for open puddles (or quantum dots), the dephasing time may be shorter than the escape time from the puddle . Thus it is assumed that between tunneling events through the QPCs dephasing takes place, and the conductance of the system is determined by classically adding these quantum resistors. (A related model was introduced by Shimshoni et al. to successfully describe transport in the quantum Hall regime.) Each saddle point is characterized by its critical energy $`ϵ_c`$, such that the transmission through it is given by $`T(ϵ)=\mathrm{\Theta }(ϵ-ϵ_c)`$. (I assume that the energy scale over which the transmission changes from zero to unity is smaller than the other relevant energy scales, to avoid additional parameters.) Then the conductance through a QPC is given by the Landauer formula,
$`G(\mu ,T)`$ $`=`$ $`{\displaystyle \frac{2e^2}{h}}{\displaystyle \int dϵ\left(-\frac{\partial f_{FD}(ϵ)}{\partial ϵ}\right)T(ϵ)}`$ (1)
$`=`$ $`{\displaystyle \frac{2e^2}{h}}{\displaystyle \frac{1}{1+\mathrm{exp}[(ϵ_c-\mu )/kT]}},`$ (2)
where $`\mu `$ is the chemical potential, and $`f_{FD}`$ is the Fermi-Dirac distribution function.
The system is now composed of classical resistors, where the resistance of each one of them is given by (2), with random QPC energies.
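For concreteness, a minimal numerical sketch of the single-contact conductance (2) follows; the energies and temperatures are illustrative assumptions rather than fitted values, and the Boltzmann constant is absorbed into $`T`$.

```python
# Conductance of one saddle-point contact with step transmission, Eq. (2),
# in units of 2e^2/h; k_B is absorbed into the temperature T.
import numpy as np

def qpc_conductance(mu, T, eps_c):
    return 1.0 / (1.0 + np.exp((eps_c - mu) / T))

# A "metallic" contact (eps_c < mu) drops towards half its zero-T value as T grows,
# while an "insulating" one (eps_c > mu) is thermally activated.
mu = 1.0
for T in (0.01, 0.1, 1.0):
    print(T, qpc_conductance(mu, T, eps_c=0.8), qpc_conductance(mu, T, eps_c=1.2))
```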
## II Two-species scaling theory
In I presented numerical calculations to be compared to the experimental data. Here I present a different approach, based on the scaling theory of a two-species percolation network. At low temperatures the resistors can be divided into two groups: the conducting ones ($`ϵ_c<\mu `$), whose conductance is about $`2e^2/h`$, and the insulating ones ($`ϵ_c>\mu `$), whose conductance is nearly zero. Thus the distribution of the conductances will be a two-peak distribution, where the weight of each peak is determined mainly by the density (or chemical potential) and the position of the conducting peak is determined mainly by the temperature. Since most properties of such a percolating network are insensitive to details of this distribution, I replace it by a two-delta-function distribution, namely I replace the network by a network comprising two types of conductors:
An effective conductor $`\sigma _m`$ describing the metallic QPCs, whose conductance is given by (2) with an appropriately averaged $`ϵ_c`$:
$$\sigma _m=\frac{2e^2}{h}\frac{1}{1+\mathrm{exp}[-A/kT]}$$
(3)
($`A`$, which depends on the distribution of the potential fluctuations, is taken as unity in the following, i.e. it defines the temperature scale), and an effective conductor $`\sigma _i`$ describing the contribution of the insulating phase. The conductance of the insulating QPCs is dominated by activation, $`\sigma _i=\sigma _a\mathrm{exp}[-A_1/T]`$. (Indeed, experimental investigations reported that “two different contributions to the conductivity (or two conducting systems) may exist, one with a metallic temperature behavior and another one with a standard, insulating, weak-localization behavior” .)
The scaling form of the two-dimensional conductance of such a two-phase mixture, $`\sigma `$, near the percolation threshold is well known ,
$$\sigma =\sqrt{\sigma _m\sigma _i}f\left[(n-n_c)^t\sqrt{\sigma _m/\sigma _i}\right]$$
(4)
with $`t\approx 1.3`$, the conductance critical exponent for two-dimensional percolation, and
$$f(x)\approx \{\begin{array}{cc}x\hfill & x\to \infty \hfill \\ -1/x\hfill & x\to -\infty \hfill \end{array},$$
(5)
so that in the case $`\sigma _i\to 0`$ (a regular random resistor network), $`\sigma (n)\propto \sigma _m(n-n_c)^t`$, while in the case $`\sigma _m\to \infty `$ (a mixture of an insulator and a superconductor), $`\sigma (n)\propto \sigma _i(n_c-n)^{-t}`$. (In the above I used the notation $`x^t\equiv sign(x)|x|^t`$.) The exact form of $`f(x)`$ is not very important, and in the following I have chosen $`f(x)=\mathrm{log}(B+\mathrm{exp}[x])/\mathrm{log}(B+\mathrm{exp}[-x])`$, with $`B=2`$.
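A small numerical sketch of the two-species scaling form (3)-(5) is given below; the activation prefactor $`\sigma _a`$, the constants $`A`$, $`A_1`$, and the densities are illustrative choices, not values extracted from the data.

```python
# Two-species scaling of the conductance, Eqs. (3)-(5); sigma_m in units of 2e^2/h.
import numpy as np

def f(x, B=2.0):
    # f(x) = log(B + e^x)/log(B + e^-x), written with logaddexp to avoid overflow
    return np.logaddexp(np.log(B), x) / np.logaddexp(np.log(B), -x)

def sigma(n, T, n_c=1.0, t=1.3, A=1.0, A1=1.0, sigma_a=1.0):
    sigma_m = 1.0 / (1.0 + np.exp(-A / T))      # metallic species, Eq. (3)
    sigma_i = sigma_a * np.exp(-A1 / T)         # activated insulating species
    x = np.sign(n - n_c) * np.abs(n - n_c) ** t * np.sqrt(sigma_m / sigma_i)
    return np.sqrt(sigma_m * sigma_i) * f(x)

# Resistance (in units of h/2e^2) on both sides of the transition:
for n in (0.9, 1.1):
    print(n, [round(1.0 / sigma(n, T), 3) for T in (0.05, 0.2, 1.0)])
```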
## III “Zero” temperature
At low enough temperatures, such that $`T\ll A,A_1`$, the conductors have either zero conductance or a conductance equal to $`2e^2/h`$. If the dephasing time is still finite at these temperatures, one has a random-resistor network, which exhibits a second-order percolation transition. In Fig. 1 I fit the lowest-temperature experimental data of to the expected critical dependence. Clearly, the agreement with the classical percolation prediction is excellent. Such an agreement with the classical percolation critical behavior may serve as an additional experimental indication that the dephasing is still finite at the lowest available experimental temperatures.
Fig. 1. Comparison of the lowest temperature data of (two sets of data, triangles and squares, 330 mK, density given by the lower axis) and of (circles, 57 mK, density given by the upper axis), and of the n-type data (diamonds) to the prediction of percolation theory (solid line). Inset: logarithmic derivative of the data, which gives a line whose slope is the inverse of the critical exponent. The percolation prediction $`(t\approx 1.3)`$ is given by the solid line. For comparison a $`t=1`$ slope is also shown (broken line).
In the inset I plot the experimental data for $`1/(d\mathrm{log}\sigma /d\mathrm{log}n)`$, which, if indeed $`\sigma \propto (n-n_c)^t`$, is given by $`(n-n_c)/t`$. The data indeed fall on a straight line, with a slope given by $`1/t\approx 1/1.3`$. For comparison a straight line with a slope of unity is also depicted, in order to demonstrate that a critical exponent of unity cannot fit the data.
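The exponent extraction can be illustrated on synthetic data; in this sketch the logarithmic derivative is taken with respect to $`n`$ itself (so that its inverse is exactly $`(n-n_c)/t`$), and all numbers are placeholders.

```python
# Recovering t from the inverse logarithmic derivative of sigma(n) ~ (n - n_c)^t.
import numpy as np

n_c, t_true = 1.0, 1.3
n = np.linspace(1.05, 1.6, 40)
sigma = (n - n_c) ** t_true                        # idealized critical behavior

inv_logder = 1.0 / np.gradient(np.log(sigma), n)   # ~ (n - n_c)/t, a straight line
slope = np.polyfit(n, inv_logder, 1)[0]
print("recovered exponent t =", 1.0 / slope)       # close to 1.3
```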
## IV Temperature dependence of the resistance
As temperature increases, the Fermi-Dirac distribution is broadened. Consequently the conductance of the transparent quantum point contacts ($`ϵ_c<\mu `$) decreases exponentially (towards half its value), while that of the insulating ones increases. Thus we expect to see rather dramatic effects as a function of temperature. This is indeed depicted in Fig. 2. In (a) we plot the prediction of the model and in (b) the experimental data . As temperature is lowered, systems with slightly different resistance at high temperatures will diverge exponentially with decreasing temperatures. The resistance of systems on the metallic side ($`n>n_c`$) will saturate at zero temperature, while that of insulating samples will diverge, in agreement with the general shape of the experimental curves. Note that there is an upward turn even on the metallic side of the transition. In fact, close to the transition, on the metallic side, as temperature decreases, the conductance of the insulating part decreases significantly, and its contribution to the total conductance is dramatically reduced. Since the critical percolation cluster is very ramified (in fact of fractal dimension), the contribution of the insulating part of the system dominates at high temperatures, and the increase of its resistance with decreasing temperature leads to an increase of the resistance as temperature is lowered, even on the metallic side. At low enough temperatures, however, when the resistance of the insulating part of the network becomes high enough, its contribution to the total conductance becomes negligible. Then the total resistance is dominated by the percolating conducting network, and thus the overall resistance will decrease with decreasing temperature. This leads to a nonmonotonic temperature dependence on the metallic side of the critical point, which can be clearly observed both in the model (Fig. 2c) and in the data (Fig. 2d).
The fact that only deep in the metallic regime, the overall resistance increases with increasing temperature, suggests that the density at which the resistance is approximately temperature independent is not the true critical point, but rather deeper on the metallic side. This is clearly seen in Fig. 3b, where one can see a point where all the low-temperature curves nearly cross, well inside the metallic regime. The above discussion suggests that one should be cautious in associating the critical point with the experimentally observed “temperature-independent” point (Fig. 3a), as done routinely in the experiment interpretations.
Fig. 2. Temperature dependence of the resistance for systems of different densities, as obtained by the model (a) and compared to the experimental data of Ref. (b). The critical line is denoted by the bold curve. All curves below the critical line saturate at zero temperature, while above it the resistance diverges. For systems close to the transition on the metallic side, the resistance is a nonmonotonic function of temperature, as seen both in the model (c) and in the data (d).
Fig. 3. Comparison of the density dependence of the conductance between the data of (a) and the model (b), for several temperatures. The density at which the theoretical curves seem to cross each other, is well above the true critical point.
Lastly, the resistance in the metallic regime is given by some geometrical factor times the inverse of $`\sigma _m`$ (Eq. 3), which naturally gives the exponential temperature dependence observed experimentally. The high-temperature resistance of the critical density network is naturally around $`h/e^2`$, the only resistance scale in this model.
## V Parallel magnetic fields
The effect of a parallel magnetic field on the overall conductance is determined by the way it affects the individual point contacts. The effect of a parallel field on transport through a single QPC has been studied in detail . These experimental and theoretical studies demonstrated that the threshold density where the QPC opens up increases parabolically with the in-plane magnetic field. This effect was attributed to coupling of the in-plane motion to the strong confinement in the vertical direction, leading to an increase in the confining energy. Writing, for simplicity, the three dimensional Hamiltonian that describes free motion in two dimensions and a harmonic confining potential in the third ($`z`$) direction, with a magnetic field pointing in the $`x`$-direction
$$\mathcal{H}=\frac{p_x^2}{2m}+\frac{(p_y+eBz)^2}{2m}+\frac{p_z^2}{2m}+\frac{1}{2}m\omega _0^2z^2,$$
(6)
it is straightforward to see that the bottom of the 2d band shifts from $`\mathrm{}\omega _0/2`$ to $`\mathrm{}\sqrt{\omega _0^2+\omega _c^2}/2`$, with $`\omega _ceB/mc`$, leading to a corresponding decrease in the kinetic energy of all electrons. Thus the effective critical Fermi energy, or density, becomes larger. In other words, for a given density or chemical potential, if the system at zero field is on the metallic side, i.e. if the Fermi momentum is above the critical momentum (or kinetic energy) allowing percolation through the system, a parallel field will lower that energy towards the critical energy, eventually crossing the critical point and leading to an insulating behavior. Fig. 4 depicts the experimental data of , and the corresponding predictions of the model. As expected, as magnetic field increases, the system gradually crosses over from a metallic to an insulating behavior.
The above discussion allows a quantitative prediction of the effective critical energy in a parallel field ,
$$ϵ_c(H)=ϵ_c(H=0)+\hbar \left(\sqrt{\omega _0^2+\omega _c^2}-\omega _0\right)/2.$$
(7)
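A short sketch of Eq. (7) follows; the vertical confinement energy and the effective-mass ratio are placeholder values for illustration only (the fitted confinement energy is quoted further below).

```python
# Shift of the critical energy with parallel field, Eq. (7), in meV.
import numpy as np

def delta_eps_c(B_tesla, hbar_omega0_meV=0.8, m_ratio=0.067):
    # hbar*omega_c = 0.1158 meV * B[T] / (m*/m_e); m_ratio is a placeholder
    hbar_omega_c = 0.1158 * B_tesla / m_ratio
    return 0.5 * (np.sqrt(hbar_omega0_meV**2 + hbar_omega_c**2) - hbar_omega0_meV)

for B in (0.0, 1.0, 3.0, 6.0):
    print(B, "T ->", round(delta_eps_c(B), 3), "meV")
```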
At zero temperature, when the resistance on the insulating side is infinite, we expect the resistance on the metallic side $`\mu >ϵ_c(H=0)`$ to diverge with increasing field,
$$R(H)\propto (\mu -ϵ_c(H))^{-t}.$$
(8)
For a finite temperature this divergence is cut off by the finite resistance of the insulator, and the magnetic field dependence changes as one crosses into the insulating side. This behavior is clearly seen in the experimental data of (very similar data were also reported by Mertes et al. ). Fig. 5 depicts the experimental data for the magnetic field dependence of the resistance, compared to what is expected from the model. The difference in behavior between the metallic and the insulating regimes is clear. On the metallic side we see that the resistance increases rapidly as the magnetic field brings the critical point closer to the chemical potential, and then an abrupt change of behavior occurs as the system enters the insulating regime. If the system was on the insulating side to begin with, then the magnetic field dependence of the resistance is similar to what is usually seen in systems where transport is via variable-range hopping . In these systems the positive magnetoresistance is due to spin polarization . The magnetic field dependence on the insulating side depicted in Fig. 5 is assumed here to be due to that process. In this regime, however, the magnetic field at which there is a marked change in behavior is spin-related and should depend only weakly on density. A recent experimental investigation of the magnetoresistance on the insulating side indeed supports this mechanism.
Fig. 5. Comparison of the experimentally measured resistance, as a function of parallel magnetic field to the model predictions. On the metallic side, the magnetic field shift the critical point towards the chemical potential, leading to a divergence in the resistance, which is cut off by finite temperature.
As was mentioned above, the critical point in the density – magnetic field plane shifts towards higher densities with increasing magnetic field. Yoon et al. have also measured the dependence of the critical field on density, which can be deduced from the inversion of Eq.(7),
$$H_c=m^{*}c/e\sqrt{\mathrm{\Delta }(H)^2+2\hbar \omega _0\mathrm{\Delta }(H)},$$
(9)
where $`\mathrm{\Delta }(H)\equiv ϵ_c(H)-ϵ_c(H=0).`$ The comparison of the prediction of this simple equation to the data is depicted in Fig. 6 for two samples cut from the same wafer. The fitting parameters are the zero-field critical point, which can be read directly from the data, the gate capacitance - the rate at which the Fermi energy changes with density - and the confining energy in the perpendicular direction, $`\hbar \omega _0`$. It is encouraging to note that while the critical energy, which is determined by the disorder realization, and the gate capacitance, which is determined by the geometry, are different for the two samples, both sets of data can be fitted by the same value of the perpendicular confining energy, which ought to be the same for the two samples, and turns out to be $`\hbar \omega _0\approx 0.8`$ meV, leading to an extension of the wavefunction in the perpendicular direction of the order of 11 nm, similar to the value used by the authors of Ref. to fit their experimental data.
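A sketch of the corresponding critical-field curve, obtained by inverting Eq. (7) directly (equivalent to Eq. (9)), is given below; the zero-field critical density, the rate $`d\mu /dn`$ and the effective-mass ratio are placeholder parameters, while $`\hbar \omega _0=0.8`$ meV is taken from the fit quoted above.

```python
# Critical parallel field vs density: solve eps_c(H_c) = mu(n) using Eq. (7).
import numpy as np

def critical_field(n, n_c0=1.0, dmu_dn=0.5, hbar_omega0=0.8, m_ratio=0.067):
    """H_c in tesla; energies in meV; n in the same (arbitrary) units as n_c0."""
    delta = dmu_dn * (n - n_c0)                        # mu(n) - eps_c(H=0), metallic side
    hbar_omega_c = 2.0 * np.sqrt(delta**2 + hbar_omega0 * delta)   # inverted Eq. (7)
    return hbar_omega_c * m_ratio / 0.1158             # hbar*omega_c[meV] = 0.1158*B[T]/m_ratio

for n in (1.1, 1.3, 1.6):
    print(n, round(critical_field(n), 2), "T")
```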
Fig. 6. Comparison of the measured density dependence of the critical magnetic field (circles) to the prediction of the theory (Eq.9, solid lines). The two data sets are two different samples cut from the same wafer, and can be fitted using the same value of the perpendicular confining energy.
Interestingly, it seems that the effects of parallel fields can be understood without employing the electron spin. As a parallel field will also reduce the conductance of some of the point contacts from $`2e^2/h`$ to $`e^2/h`$, the Zeeman effect will also increase the system’s resistance. It should be noted that while the coupling of the in-plane motion to the confining potential in the perpendicular direction was also considered by Das Sarma and Hwang , the magnetoresistance predicted here, in contrast with Ref. , should not exhibit any anisotropy. The reason is that the direction of transport through the quantum point contacts is expected to be random, with no preferred direction.
## VI Perpendicular magnetic fields
While the longitudinal resistance depends exponentially on temperature, the weak field Hall resistance is practically independent of temperature . Such an observation might be hard to account for in theories that argue for new non-Fermi liquid-like behavior, but it is trivial in the present model - the critical exponent for the Hall coefficient in a two dimensional percolation problem is exactly zero , and thus the Hall coefficient should display no critical behavior at the critical point. This prediction was indeed confirmed in classical percolation experiments , and is very similar to that observed by Pudalov et al. .
Fig. 7. The critical density - the number of electrons in the puddle, so that the topmost energy will allow transport through the point contact - as a function of magnetic field, in the presence of a finite disorder. The continuous curve is an averaged fit through the (necessarily integer) data points. Top-right inset: the corresponding experimental data . Bottom-left inset: results of effective medium theory .
The situation in larger perpendicular magnetic fields is more interesting, as quantum Hall (QH) states are formed. Transport through a single QPC in perpendicular field and the crossover between the zero field limit and the QH limit have been studied in detail . As expected, one finds that the critical energy oscillates with magnetic field due to the depopulation of Landau levels. In the present case, the oscillations are smoothed out by the disorder and by the averaging over many QPCs. Thus only the strongest oscillation, near $`\nu =1`$, may survive, leading to a single dip in the critical density vs. magnetic field plot, as was observed experimentally. In order to allow for the averaging procedure, one has to take the full conductance distribution into account, which is beyond the two-species scaling theory. For completeness I report here results of numerical calculations and effective medium theory . In the numerical calculation I studied the energy levels of one puddle of electrons, which I modeled by a circular disk, in the presence of disorder . In Fig. 7 I plot the “critical density” – the number of electrons that need to occupy the puddle, so that the energy of the highest-energy electron will be enough to traverse the QPC , equivalent in the bulk system to the critical density – as a function of magnetic field. Indeed a dip near $`\nu =1`$ is clearly seen, with all other oscillations smoothed out by the disorder. This curve has a strong resemblance to the experimental data (top-right inset). The results of the effective medium theory are depicted in the bottom-left inset, again demonstrating a dip near $`\nu =1`$. In addition, it is expected that as the magnetic field is lowered below the $`\nu =1`$ minimum, more than one channel will traverse some QPCs, leading to an increase in the critical conductance, as indeed reported experimentally.
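The counting behind Fig. 7 can be illustrated in the clean limit with the Fock-Darwin spectrum of a parabolic dot; disorder (which in the full calculation smooths out all oscillations except the one near $`\nu =1`$) and the Zeeman splitting are ignored here, and the threshold and confinement values are illustrative assumptions.

```python
# Clean-limit sketch: number of electrons needed in a parabolic puddle before the
# topmost occupied level reaches the QPC threshold, as a function of magnetic field.
import numpy as np

def fock_darwin_levels(hbar_omega0, hbar_omega_c, n_max=30, l_max=60):
    big_omega = np.sqrt(hbar_omega0**2 + 0.25 * hbar_omega_c**2)
    levels = [(2 * n + abs(l) + 1) * big_omega - 0.5 * l * hbar_omega_c
              for n in range(n_max) for l in range(-l_max, l_max + 1)]
    return np.sort(np.repeat(levels, 2))                # spin degeneracy 2, no Zeeman

def critical_number(threshold, hbar_omega0=1.0, hbar_omega_c=0.0):
    e = fock_darwin_levels(hbar_omega0, hbar_omega_c)
    return int(np.searchsorted(e, threshold)) + 1       # first electron at/above threshold

for wc in np.linspace(0.0, 6.0, 7):                     # hbar*omega_c in units of hbar*omega_0
    print(round(wc, 1), critical_number(threshold=12.0, hbar_omega_c=wc))
```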
## VII Reduction factors larger than 2
In the degenerate electron limit ($`T\ll E_F`$), the biggest reduction factor in the resistance on the metallic side with decreasing temperature is a factor of $`2`$. On the other hand, reduction factors close to an order of magnitude and even larger have been observed experimentally, especially in silicon-based samples . Here I show that when the temperature becomes of the same order of magnitude as $`E_F`$, the reduction factor in the model can become arbitrarily large .
The electron density is given by
$$n=\int _0^{\infty }dϵ\frac{\rho _0}{1+\mathrm{exp}[(ϵ-\mu )/T]}=\rho _0T\mathrm{log}\left[1+\mathrm{exp}(\mu /T)\right]$$
(10)
where $`\rho _0`$, assumed constant, is the electronic density of states (per energy and per volume), and $`\mu `$, the chemical potential, is measured relative to the bottom of the band. Inverting the above equation, the chemical potential for a given density, is
$$\mu =T\mathrm{log}\left[\mathrm{exp}(n/\rho _0T)-1\right]\equiv T\mathrm{log}\left[\mathrm{exp}(E_F/T)-1\right]$$
(11)
where the Fermi energy is defined as the $`T\to 0`$ limit of the chemical potential. These textbook expressions demonstrate that while the Fermi energy varies linearly with the density, the chemical potential may be more sensitive to density variations in the nondegenerate limit $`T\gg E_F`$. Moreover, the chemical potential is now temperature dependent, and decreases with increasing temperature (see inset in Fig. 8). Substituting the above expression in the expression for the conductance through a single point contact (2) demonstrates that the conductance can decrease arbitrarily with increasing temperature (or, equivalently, that the resistance can decrease by an arbitrary factor with decreasing temperature). In Fig. 8 the temperature dependence of the conductance is plotted for two values of the Fermi energy, $`E_F`$. The curve for $`E_F\gg T`$ shows the expected behavior for the degenerate electron gas - the conductance decreases and saturates at a value smaller than the zero-temperature conductance by a factor of 2. On the other hand, in the nondegenerate regime, $`E_F\ll T`$, the conductance decreases by a much larger factor.
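The following sketch evaluates Eq. (11) together with the contact conductance (2); the Fermi energies and the threshold are arbitrary illustrative choices (in units where $`k_B=1`$), meant only to contrast the degenerate and nondegenerate regimes.

```python
# Chemical potential mu(T) = T*log[exp(E_F/T) - 1], Eq. (11), and the resulting
# reduction of the contact conductance between low and high temperature.
import numpy as np

def mu_of_T(E_F, T):
    # log(e^x - 1) = x + log(1 - e^-x), written to avoid overflow for E_F >> T
    return E_F + T * np.log1p(-np.exp(-E_F / T))

def conductance(E_F, T, eps_c):
    return 1.0 / (1.0 + np.exp((eps_c - mu_of_T(E_F, T)) / T))   # units of 2e^2/h

for E_F, eps_c in ((10.0, 9.8), (0.25, 0.05)):   # degenerate vs nondegenerate choice
    g_cold = conductance(E_F, 0.02, eps_c)
    g_hot = conductance(E_F, 2.0, eps_c)
    print(E_F, "reduction factor:", round(g_cold / g_hot, 2))    # ~2 vs >>2
```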
Fig. 8. The temperature dependence of the conductance in the degenerate ($`kT\ll E_F`$) and in the nondegenerate regime ($`kT\gg E_F`$). While in the degenerate regime the conductance can decrease by at most a factor of two, it can decrease arbitrarily in the nondegenerate regime, due to the temperature dependence of the chemical potential (inset).
## VIII Conclusions
All the above results and discussion demonstrated that many of the experimental observations can be explained in the context of the simple semi-classical, noninteracting model introduced here. This is not to say that interactions and other effects are irrelevant. For example, the formation of the electron puddles may be dominated by interaction effects (see, e.g., Ref.), and dephasing is certainly dominated, at these low temperatures, by electron-electron interactions. Other effects, including the energy dependence of the transmission coefficient and the possibility of more than one channel through the QPCs, the role of interband-scattering and temperature-dependent impurities may also be important to understand quantitative aspects of the data. Nevertheless, the fact that several important aspects of the experimental data can be explained in the context of a simple model is quite encouraging.
Some predictions made in , where the model was presented, were already confirmed. The mechanism for the quenching of the metallic phase by a parallel magnetic field was suggested there, and, as discussed above, agrees very well with recent experiments. Moreover, it was suggested that local measurements will be able to explore the percolative nature of the insulating phase. Indeed, Ilani et al. have used local probes to measure the change of the local chemical potential with density. While on the metallic side the signals from all probes were identical, and were accurately described by Hartree-Fock theory, these probes gave different signals on the insulating side, which the authors interpreted as a signature of a percolative phase. (An indirect experimental verification of the percolation process in the QH regime was already reported in ). As the metallic puddles can be thought of as quantum dots, one can use the abundant information about such structures , to gain additional understanding of the characteristics of the puddles and the phase separation. Such local probes can give a ”smoking gun” verification of the picture presented here by looking for the periodic oscillations of the local chemical potential on the insulating side, due to depopulation of the Landau levels, as was observed in quantum dots .
To conclude, a semi-classical model, combining local quantum transport and global classical percolation, was employed to explain the observed metal-insulator transition. The model attributes the transition to the finite dephasing length at these temperatures. As temperature is lowered even further, and the dephasing length becomes larger than the puddle size, quantum localization effects should kick in. Such weak-localization corrections on the metallic side were indeed observed experimentally, confirming the expectation that, if the dephasing length indeed diverges at zero temperature, these systems will eventually become Anderson insulators.
I thank many of my colleagues for fruitful discussions: A. Auerbach, Y. Gefen, Y. Hanein, P. McEuen, D. Shahar, E. Shimshoni, U. Sivan, A. Stern, N. S. Wingreen, A. Yacoby and Y. Yaish. In particular, I would like to thank Y. Hanein & D. Shahar and U. Sivan & Y. Yaish for making their data available to me. This work was supported by THE ISRAEL SCIENCE FOUNDATION - Centers of Excellence Program, and by the German Ministry of Science. |
no-problem/9912/hep-ex9912022.html | ar5iv | text | # References
Fermilab-PUB-99/360-E
CDF/PUB/JET/PUBLIC/4760
December 10, 1999
A Measurement of the Differential Dijet Mass Cross Section in $`p\overline{p}`$ Collisions at $`\sqrt{s}=1.8`$ TeV
T. Affolder,<sup>21</sup> H. Akimoto,<sup>43</sup> A. Akopian,<sup>36</sup> M. G. Albrow,<sup>10</sup> P. Amaral,<sup>7</sup> S. R. Amendolia,<sup>32</sup> D. Amidei,<sup>24</sup> K. Anikeev,<sup>22</sup> J. Antos,<sup>1</sup> G. Apollinari,<sup>36</sup> T. Arisawa,<sup>43</sup> T. Asakawa,<sup>41</sup> W. Ashmanskas,<sup>7</sup> M. Atac,<sup>10</sup> F. Azfar,<sup>29</sup> P. Azzi-Bacchetta,<sup>30</sup> N. Bacchetta,<sup>30</sup> M. W. Bailey,<sup>26</sup> S. Bailey,<sup>14</sup> P. de Barbaro,<sup>35</sup> A. Barbaro-Galtieri,<sup>21</sup> V. E. Barnes,<sup>34</sup> B. A. Barnett,<sup>17</sup> M. Barone,<sup>12</sup> G. Bauer,<sup>22</sup> F. Bedeschi,<sup>32</sup> S. Belforte,<sup>40</sup> G. Bellettini,<sup>32</sup> J. Bellinger,<sup>44</sup> D. Benjamin,<sup>9</sup> J. Bensinger,<sup>4</sup> A. Beretvas,<sup>10</sup> J. P. Berge,<sup>10</sup> J. Berryhill,<sup>7</sup> S. Bertolucci,<sup>12</sup> B. Bevensee,<sup>31</sup> A. Bhatti,<sup>36</sup> C. Bigongiari,<sup>32</sup> M. Binkley,<sup>10</sup> D. Bisello,<sup>30</sup> R. E. Blair,<sup>2</sup> C. Blocker,<sup>4</sup> K. Bloom,<sup>24</sup> B. Blumenfeld,<sup>17</sup> S. R. Blusk,<sup>35</sup> A. Bocci,<sup>32</sup> A. Bodek,<sup>35</sup> W. Bokhari,<sup>31</sup> G. Bolla,<sup>34</sup> Y. Bonushkin,<sup>5</sup> D. Bortoletto,<sup>34</sup> J. Boudreau,<sup>33</sup> A. Brandl,<sup>26</sup> S. van den Brink,<sup>17</sup> C. Bromberg,<sup>25</sup> M. Brozovic,<sup>9</sup> N. Bruner,<sup>26</sup> E. Buckley-Geer,<sup>10</sup> J. Budagov,<sup>8</sup> H. S. Budd,<sup>35</sup> K. Burkett,<sup>14</sup> G. Busetto,<sup>30</sup> A. Byon-Wagner,<sup>10</sup> K. L. Byrum,<sup>2</sup> M. Campbell,<sup>24</sup> A. Caner,<sup>32</sup> W. Carithers,<sup>21</sup> J. Carlson,<sup>24</sup> D. Carlsmith,<sup>44</sup> J. Cassada,<sup>35</sup> A. Castro,<sup>30</sup> D. Cauz,<sup>40</sup> A. Cerri,<sup>32</sup> A. W. Chan,<sup>1</sup> P. S. Chang,<sup>1</sup> P. T. Chang,<sup>1</sup> J. Chapman,<sup>24</sup> C. Chen,<sup>31</sup> Y. C. Chen,<sup>1</sup> M. -T. Cheng,<sup>1</sup> M. Chertok,<sup>38</sup> G. Chiarelli,<sup>32</sup> I. Chirikov-Zorin,<sup>8</sup> G. Chlachidze,<sup>8</sup> F. Chlebana,<sup>10</sup> L. Christofek,<sup>16</sup> M. L. Chu,<sup>1</sup> S. Cihangir,<sup>10</sup> C. I. Ciobanu,<sup>27</sup> A. G. Clark,<sup>13</sup> A. Connolly,<sup>21</sup> J. Conway,<sup>37</sup> J. Cooper,<sup>10</sup> M. Cordelli,<sup>12</sup> D. Costanzo,<sup>32</sup> J. Cranshaw,<sup>39</sup> D. Cronin-Hennessy,<sup>9</sup> R. Cropp,<sup>23</sup> R. Culbertson,<sup>7</sup> D. Dagenhart,<sup>42</sup> F. DeJongh,<sup>10</sup> S. Dell’Agnello,<sup>12</sup> M. Dell’Orso,<sup>32</sup> R. Demina,<sup>10</sup> L. Demortier,<sup>36</sup> M. Deninno,<sup>3</sup> P. F. Derwent,<sup>10</sup> T. Devlin,<sup>37</sup> J. R. Dittmann,<sup>10</sup> S. Donati,<sup>32</sup> J. Done,<sup>38</sup> T. Dorigo,<sup>14</sup> N. Eddy,<sup>16</sup> K. Einsweiler,<sup>21</sup> J. E. Elias,<sup>10</sup> E. Engels, Jr.,<sup>33</sup> W. Erdmann,<sup>10</sup> D. Errede,<sup>16</sup> S. Errede,<sup>16</sup> Q. Fan,<sup>35</sup> R. G. Feild,<sup>45</sup> C. Ferretti,<sup>32</sup> I. Fiori,<sup>3</sup> B. Flaugher,<sup>10</sup> G. W. Foster,<sup>10</sup> M. Franklin,<sup>14</sup> J. Freeman,<sup>10</sup> J. Friedman,<sup>22</sup> Y. Fukui,<sup>20</sup> S. Galeotti,<sup>32</sup> M. Gallinaro,<sup>36</sup> T. Gao,<sup>31</sup> M. Garcia-Sciveres,<sup>21</sup> A. F. Garfinkel,<sup>34</sup> P. Gatti,<sup>30</sup> C. Gay,<sup>45</sup> S. Geer,<sup>10</sup> D. W. Gerdes,<sup>24</sup> P. 
Giannetti,<sup>32</sup> P. Giromini,<sup>12</sup> V. Glagolev,<sup>8</sup> M. Gold,<sup>26</sup> J. Goldstein,<sup>10</sup> A. Gordon,<sup>14</sup> A. T. Goshaw,<sup>9</sup> Y. Gotra,<sup>33</sup> K. Goulianos,<sup>36</sup> H. Grassmann,<sup>40</sup> C. Green,<sup>34</sup> L. Groer,<sup>37</sup> C. Grosso-Pilcher,<sup>7</sup> M. Guenther,<sup>34</sup> G. Guillian,<sup>24</sup> J. Guimaraes da Costa,<sup>24</sup> R. S. Guo,<sup>1</sup> C. Haber,<sup>21</sup> E. Hafen,<sup>22</sup> S. R. Hahn,<sup>10</sup> C. Hall,<sup>14</sup> T. Handa,<sup>15</sup> R. Handler,<sup>44</sup> W. Hao,<sup>39</sup> F. Happacher,<sup>12</sup> K. Hara,<sup>41</sup> A. D. Hardman,<sup>34</sup> R. M. Harris,<sup>10</sup> F. Hartmann,<sup>18</sup> K. Hatakeyama,<sup>36</sup> J. Hauser,<sup>5</sup> J. Heinrich,<sup>31</sup> A. Heiss,<sup>18</sup> M. Herndon,<sup>17</sup> B. Hinrichsen,<sup>23</sup> K. D. Hoffman,<sup>34</sup> C. Holck,<sup>31</sup> R. Hollebeek,<sup>31</sup> L. Holloway,<sup>16</sup> R. Hughes,<sup>27</sup> J. Huston,<sup>25</sup> J. Huth,<sup>14</sup> H. Ikeda,<sup>41</sup> J. Incandela,<sup>10</sup> G. Introzzi,<sup>32</sup> J. Iwai,<sup>43</sup> Y. Iwata,<sup>15</sup> E. James,<sup>24</sup> H. Jensen,<sup>10</sup> M. Jones,<sup>31</sup> U. Joshi,<sup>10</sup> H. Kambara,<sup>13</sup> T. Kamon,<sup>38</sup> T. Kaneko,<sup>41</sup> K. Karr,<sup>42</sup> H. Kasha,<sup>45</sup> Y. Kato,<sup>28</sup> T. A. Keaffaber,<sup>34</sup> K. Kelley,<sup>22</sup> M. Kelly,<sup>24</sup> R. D. Kennedy,<sup>10</sup> R. Kephart,<sup>10</sup> D. Khazins,<sup>9</sup> T. Kikuchi,<sup>41</sup> M. Kirk,<sup>4</sup> B. J. Kim,<sup>19</sup> H. S. Kim,<sup>16</sup> M. J. Kim,<sup>19</sup> S. H. Kim,<sup>41</sup> Y. K. Kim,<sup>21</sup> L. Kirsch,<sup>4</sup> S. Klimenko,<sup>11</sup> P. Koehn,<sup>27</sup> A. Köngeter,<sup>18</sup> K. Kondo,<sup>43</sup> J. Konigsberg,<sup>11</sup> K. Kordas,<sup>23</sup> A. Korn,<sup>22</sup> A. Korytov,<sup>11</sup> E. Kovacs,<sup>2</sup> J. Kroll,<sup>31</sup> M. Kruse,<sup>35</sup> S. E. Kuhlmann,<sup>2</sup> K. Kurino,<sup>15</sup> T. Kuwabara,<sup>41</sup> A. T. Laasanen,<sup>34</sup> N. Lai,<sup>7</sup> S. Lami,<sup>36</sup> S. Lammel,<sup>10</sup> J. I. Lamoureux,<sup>4</sup> M. Lancaster,<sup>21</sup> G. Latino,<sup>32</sup> T. LeCompte,<sup>2</sup> A. M. Lee IV,<sup>9</sup> S. Leone,<sup>32</sup> J. D. Lewis,<sup>10</sup> M. Lindgren,<sup>5</sup> T. M. Liss,<sup>16</sup> J. B. Liu,<sup>35</sup> Y. C. Liu,<sup>1</sup> N. Lockyer,<sup>31</sup> J. Loken,<sup>29</sup> M. Loreti,<sup>30</sup> D. Lucchesi,<sup>30</sup> P. Lukens,<sup>10</sup> S. Lusin,<sup>44</sup> L. Lyons,<sup>29</sup> J. Lys,<sup>21</sup> R. Madrak,<sup>14</sup> K. Maeshima,<sup>10</sup> P. Maksimovic,<sup>14</sup> L. Malferrari,<sup>3</sup> M. Mangano,<sup>32</sup> M. Mariotti,<sup>30</sup> G. Martignon,<sup>30</sup> A. Martin,<sup>45</sup> J. A. J. Matthews,<sup>26</sup> J. Mayer,<sup>23</sup> P. Mazzanti,<sup>3</sup> K. S. McFarland,<sup>35</sup> P. McIntyre,<sup>38</sup> E. McKigney,<sup>31</sup> M. Menguzzato,<sup>30</sup> A. Menzione,<sup>32</sup> C. Mesropian,<sup>36</sup> T. Miao,<sup>10</sup> R. Miller,<sup>25</sup> J. S. Miller,<sup>24</sup> H. Minato,<sup>41</sup> S. Miscetti,<sup>12</sup> M. Mishina,<sup>20</sup> G. Mitselmakher,<sup>11</sup> N. Moggi,<sup>3</sup> E. Moore,<sup>26</sup> R. Moore,<sup>24</sup> Y. Morita,<sup>20</sup> A. Mukherjee,<sup>10</sup> T. Muller,<sup>18</sup> A. Munar,<sup>32</sup> P. Murat,<sup>32</sup> S. Murgia,<sup>25</sup> M. Musy,<sup>40</sup> J. Nachtman,<sup>5</sup> S. 
Nahn,<sup>45</sup> H. Nakada,<sup>41</sup> T. Nakaya,<sup>7</sup> I. Nakano,<sup>15</sup> C. Nelson,<sup>10</sup> D. Neuberger,<sup>18</sup> C. Newman-Holmes,<sup>10</sup> C.-Y. P. Ngan,<sup>22</sup> P. Nicolaidi,<sup>40</sup> H. Niu,<sup>4</sup> L. Nodulman,<sup>2</sup> A. Nomerotski,<sup>11</sup> S. H. Oh,<sup>9</sup> T. Ohmoto,<sup>15</sup> T. Ohsugi,<sup>15</sup> R. Oishi,<sup>41</sup> T. Okusawa,<sup>28</sup> J. Olsen,<sup>44</sup> C. Pagliarone,<sup>32</sup> F. Palmonari,<sup>32</sup> R. Paoletti,<sup>32</sup> V. Papadimitriou,<sup>39</sup> S. P. Pappas,<sup>45</sup> D. Partos,<sup>4</sup> J. Patrick,<sup>10</sup> G. Pauletta,<sup>40</sup> M. Paulini,<sup>21</sup> C. Paus,<sup>22</sup> L. Pescara,<sup>30</sup> T. J. Phillips,<sup>9</sup> G. Piacentino,<sup>32</sup> K. T. Pitts,<sup>10</sup> R. Plunkett,<sup>10</sup> A. Pompos,<sup>34</sup> L. Pondrom,<sup>44</sup> G. Pope,<sup>33</sup> M. Popovic,<sup>23</sup> F. Prokoshin,<sup>8</sup> J. Proudfoot,<sup>2</sup> F. Ptohos,<sup>12</sup> G. Punzi,<sup>32</sup> K. Ragan,<sup>23</sup> A. Rakitine,<sup>22</sup> D. Reher,<sup>21</sup> A. Reichold,<sup>29</sup> W. Riegler,<sup>14</sup> A. Ribon,<sup>30</sup> F. Rimondi,<sup>3</sup> L. Ristori,<sup>32</sup> W. J. Robertson,<sup>9</sup> A. Robinson,<sup>23</sup> T. Rodrigo,<sup>6</sup> S. Rolli,<sup>42</sup> L. Rosenson,<sup>22</sup> R. Roser,<sup>10</sup> R. Rossin,<sup>30</sup> W. K. Sakumoto,<sup>35</sup> D. Saltzberg,<sup>5</sup> A. Sansoni,<sup>12</sup> L. Santi,<sup>40</sup> H. Sato,<sup>41</sup> P. Savard,<sup>23</sup> P. Schlabach,<sup>10</sup> E. E. Schmidt,<sup>10</sup> M. P. Schmidt,<sup>45</sup> M. Schmitt,<sup>14</sup> L. Scodellaro,<sup>30</sup> A. Scott,<sup>5</sup> A. Scribano,<sup>32</sup> S. Segler,<sup>10</sup> S. Seidel,<sup>26</sup> Y. Seiya,<sup>41</sup> A. Semenov,<sup>8</sup> F. Semeria,<sup>3</sup> T. Shah,<sup>22</sup> M. D. Shapiro,<sup>21</sup> P. F. Shepard,<sup>33</sup> T. Shibayama,<sup>41</sup> M. Shimojima,<sup>41</sup> M. Shochet,<sup>7</sup> J. Siegrist,<sup>21</sup> G. Signorelli,<sup>32</sup> A. Sill,<sup>39</sup> P. Sinervo,<sup>23</sup> P. Singh,<sup>16</sup> A. J. Slaughter,<sup>45</sup> K. Sliwa,<sup>42</sup> C. Smith,<sup>17</sup> F. D. Snider,<sup>10</sup> A. Solodsky,<sup>36</sup> J. Spalding,<sup>10</sup> T. Speer,<sup>13</sup> P. Sphicas,<sup>22</sup> F. Spinella,<sup>32</sup> M. Spiropulu,<sup>14</sup> L. Spiegel,<sup>10</sup> L. Stanco,<sup>30</sup> J. Steele,<sup>44</sup> A. Stefanini,<sup>32</sup> J. Strologas,<sup>16</sup> F. Strumia, <sup>13</sup> D. Stuart,<sup>10</sup> K. Sumorok,<sup>22</sup> T. Suzuki,<sup>41</sup> T. Takano,<sup>28</sup> R. Takashima,<sup>15</sup> K. Takikawa,<sup>41</sup> P. Tamburello,<sup>9</sup> M. Tanaka,<sup>41</sup> B. Tannenbaum,<sup>5</sup> W. Taylor,<sup>23</sup> M. Tecchio,<sup>24</sup> P. K. Teng,<sup>1</sup> K. Terashi,<sup>41</sup> S. Tether,<sup>22</sup> D. Theriot,<sup>10</sup> R. Thurman-Keup,<sup>2</sup> P. Tipton,<sup>35</sup> S. Tkaczyk,<sup>10</sup> K. Tollefson,<sup>35</sup> A. Tollestrup,<sup>10</sup> H. Toyoda,<sup>28</sup> W. Trischuk,<sup>23</sup> J. F. de Troconiz,<sup>14</sup> J. Tseng,<sup>22</sup> N. Turini,<sup>32</sup> F. Ukegawa,<sup>41</sup> J. Valls,<sup>37</sup> S. Vejcik III,<sup>10</sup> G. Velev,<sup>32</sup> R. Vidal,<sup>10</sup> R. Vilar,<sup>6</sup> I. Volobouev,<sup>21</sup> D. Vucinic,<sup>22</sup> R. G. Wagner,<sup>2</sup> R. L. Wagner,<sup>10</sup> J. Wahl,<sup>7</sup> N. B. Wallace,<sup>37</sup> A. M. Walsh,<sup>37</sup> C. Wang,<sup>9</sup> C. H. Wang,<sup>1</sup> M. J. 
Wang,<sup>1</sup> T. Watanabe,<sup>41</sup> D. Waters,<sup>29</sup> T. Watts,<sup>37</sup> R. Webb,<sup>38</sup> H. Wenzel,<sup>18</sup> W. C. Wester III,<sup>10</sup> A. B. Wicklund,<sup>2</sup> E. Wicklund,<sup>10</sup> H. H. Williams,<sup>31</sup> P. Wilson,<sup>10</sup> B. L. Winer,<sup>27</sup> D. Winn,<sup>24</sup> S. Wolbers,<sup>10</sup> D. Wolinski,<sup>24</sup> J. Wolinski,<sup>25</sup> S. Wolinski,<sup>24</sup> S. Worm,<sup>26</sup> X. Wu,<sup>13</sup> J. Wyss,<sup>32</sup> A. Yagil,<sup>10</sup> W. Yao,<sup>21</sup> G. P. Yeh,<sup>10</sup> P. Yeh,<sup>1</sup> J. Yoh,<sup>10</sup> C. Yosef,<sup>25</sup> T. Yoshida,<sup>28</sup> I. Yu,<sup>19</sup> S. Yu,<sup>31</sup> A. Zanetti,<sup>40</sup> F. Zetti,<sup>21</sup> and S. Zucchelli<sup>3</sup>
(CDF Collaboration)
<sup>1</sup> Institute of Physics, Academia Sinica, Taipei, Taiwan 11529, Republic of China
<sup>2</sup> Argonne National Laboratory, Argonne, Illinois 60439
<sup>3</sup> Istituto Nazionale di Fisica Nucleare, University of Bologna, I-40127 Bologna, Italy
<sup>4</sup> Brandeis University, Waltham, Massachusetts 02254
<sup>5</sup> University of California at Los Angeles, Los Angeles, California 90024
<sup>6</sup> Instituto de Fisica de Cantabria, University of Cantabria, 39005 Santander, Spain
<sup>7</sup> Enrico Fermi Institute, University of Chicago, Chicago, Illinois 60637
<sup>8</sup> Joint Institute for Nuclear Research, RU-141980 Dubna, Russia
<sup>9</sup> Duke University, Durham, North Carolina 27708
<sup>10</sup> Fermi National Accelerator Laboratory, Batavia, Illinois 60510
<sup>11</sup> University of Florida, Gainesville, Florida 32611
<sup>12</sup> Laboratori Nazionali di Frascati, Istituto Nazionale di Fisica Nucleare, I-00044 Frascati, Italy
<sup>13</sup> University of Geneva, CH-1211 Geneva 4, Switzerland
<sup>14</sup> Harvard University, Cambridge, Massachusetts 02138
<sup>15</sup> Hiroshima University, Higashi-Hiroshima 724, Japan
<sup>16</sup> University of Illinois, Urbana, Illinois 61801
<sup>17</sup> The Johns Hopkins University, Baltimore, Maryland 21218
<sup>18</sup> Institut für Experimentelle Kernphysik, Universität Karlsruhe, 76128 Karlsruhe, Germany
<sup>19</sup> Korean Hadron Collider Laboratory: Kyungpook National University, Taegu 702-701; Seoul National University, Seoul 151-742; and SungKyunKwan University, Suwon 440-746; Korea
<sup>20</sup> High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305, Japan
<sup>21</sup> Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, California 94720
<sup>22</sup> Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
<sup>23</sup> Institute of Particle Physics: McGill University, Montreal H3A 2T8; and University of Toronto, Toronto M5S 1A7; Canada
<sup>24</sup> University of Michigan, Ann Arbor, Michigan 48109
<sup>25</sup> Michigan State University, East Lansing, Michigan 48824
<sup>26</sup> University of New Mexico, Albuquerque, New Mexico 87131
<sup>27</sup> The Ohio State University, Columbus, Ohio 43210
<sup>28</sup> Osaka City University, Osaka 588, Japan
<sup>29</sup> University of Oxford, Oxford OX1 3RH, United Kingdom
<sup>30</sup> Universita di Padova, Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova, Italy
<sup>31</sup> University of Pennsylvania, Philadelphia, Pennsylvania 19104
<sup>32</sup> Istituto Nazionale di Fisica Nucleare, University and Scuola Normale Superiore of Pisa, I-56100 Pisa, Italy
<sup>33</sup> University of Pittsburgh, Pittsburgh, Pennsylvania 15260
<sup>34</sup> Purdue University, West Lafayette, Indiana 47907
<sup>35</sup> University of Rochester, Rochester, New York 14627
<sup>36</sup> Rockefeller University, New York, New York 10021
<sup>37</sup> Rutgers University, Piscataway, New Jersey 08855
<sup>38</sup> Texas A&M University, College Station, Texas 77843
<sup>39</sup> Texas Tech University, Lubbock, Texas 79409
<sup>40</sup> Istituto Nazionale di Fisica Nucleare, University of Trieste/ Udine, Italy
<sup>41</sup> University of Tsukuba, Tsukuba, Ibaraki 305, Japan
<sup>42</sup> Tufts University, Medford, Massachusetts 02155
<sup>43</sup> Waseda University, Tokyo 169, Japan
<sup>44</sup> University of Wisconsin, Madison, Wisconsin 53706
<sup>45</sup> Yale University, New Haven, Connecticut 06520
Abstract
We present a measurement of the cross section for production of two or more jets as a function of dijet mass, based on an integrated luminosity of 86 $`\mathrm{pb}^{-1}`$ collected with the Collider Detector at Fermilab. Our dijet mass spectrum is described within errors by next-to-leading order QCD predictions using CTEQ4HJ parton distributions, and is in good agreement with a similar measurement from the DØ experiment.
PACS numbers: 13.85.Rm, 12.38.Qk,
Hard collisions between protons and antiprotons predominantly produce dijet events, which are events containing at least two high energy jets. A measurement of the dijet mass differential cross section provides a fundamental test of Quantum Chromodynamics (QCD) and a constraint on the parton distributions of the proton. We previously reported measurements of the inclusive jet transverse energy ($`E_T`$) spectrum and the cross section for events with large total $`E_T`$ . Both measurements indicated an excess of events at high $`E_T`$ compared to the predictions of QCD. This letter presents our most recent measurement of the dijet mass spectrum and compares it with the predictions of next-to-leading order QCD and the measurement of D$`\mathrm{}`$ . This measurement, with an integrated luminosity of $`86`$ pb<sup>-1</sup>, is significantly more sensitive to events at high dijet mass than our previous measurements of the dijet mass spectrum with integrated luminosities of $`4.2`$ pb<sup>-1</sup> and $`26`$ nb<sup>-1</sup>. We recently used this data sample combined with 20 pb<sup>-1</sup> of older data to measure dijet angular distributions and to search the dijet mass spectrum for new particles decaying to dijets .
A detailed description of the Collider Detector at Fermilab (CDF) can be found elsewhere . We use a coordinate system with the $`z`$ axis along the proton beam, transverse coordinate perpendicular to the beam, azimuthal angle $`\varphi `$, polar angle $`\theta `$, and pseudorapidity $`\eta =\mathrm{ln}\mathrm{tan}(\theta /2)`$. Jets are reconstructed as localized energy depositions in the CDF calorimeters, which are arranged in a projective tower geometry. The jet energy, $`E`$, is defined as the scalar sum of the calorimeter tower energies inside a cone of radius $`R=\sqrt{(\mathrm{\Delta }\eta )^2+(\mathrm{\Delta }\varphi )^2}=0.7`$, centered on the jet direction. Jets that share towers are combined if the total $`E_T`$ of the shared towers is greater than 75% of the $`E_T`$ of either jet; otherwise the towers are assigned to the nearest jet. The jet momentum, $`\stackrel{}{P}`$, is the vector sum: $`\stackrel{}{P}=E_i\widehat{u}_i`$, with $`\widehat{u}_i`$ being the unit vector pointing from the interaction point to the energy deposition $`E_i`$ inside the cone. The quantities $`E`$ and $`\stackrel{}{P}`$ are corrected for calorimeter non-linearities, energy lost in uninstrumented regions of the detector, and energy gained from the underlying event and additional $`p\overline{p}`$ interactions. We do not correct for energy lost outside the clustering cone, since a similar loss is present in the O($`\alpha _s^3`$) QCD calculation in which an extra gluon can be radiated outside the jet clustering cone. The jet energy corrections increase the measured jet energies on average by 20% (16%) for 100 GeV (400 GeV) jets. Full details of jet reconstruction and jet energy corrections at CDF can be found elsewhere .
We define the dijet system as the two jets with the highest transverse momentum in an event (leading jets) and define the dijet mass as $`M=\sqrt{(E_1+E_2)^2-(\vec{P}_1+\vec{P}_2)^2}`$. Our data sample was obtained using four triggers that required at least one jet with uncorrected cluster transverse energies of 20, 50, 70 and 100 GeV, respectively. After correcting the jet energies these trigger samples were used to measure the dijet mass spectrum above 180, 217, 292, and 388 GeV/c<sup>2</sup>, respectively, where the trigger efficiencies were greater than 97%. The four data samples corresponded to integrated luminosities of $`0.091`$, $`2.2`$, $`11`$, and $`86`$ pb<sup>-1</sup> respectively. We selected events with two or more jets and required that the two leading jets have pseudorapidities of $`|\eta _1|<2`$ and $`|\eta _2|<2`$ and satisfy $`|\mathrm{cos}\theta ^{*}|=|\mathrm{tanh}[(\eta _1-\eta _2)/2]|<2/3`$, where $`\theta ^{*}`$ is the scattering angle in the dijet center-of-mass frame. The $`\mathrm{cos}\theta ^{*}`$ requirement ensures full acceptance as a function of the dijet mass. The $`z`$ position of the event vertex was required to be within 60 cm of the center of the detector; this cut removed 6% of the events. Backgrounds from cosmic rays, beam halo, and detector noise were removed by requiring $`\overline{)}E_T/\sqrt{E_T}<6`$ GeV<sup>1/2</sup> and $`E<2`$ TeV, where $`\overline{)}E_T`$ is the missing transverse energy , $`E_T`$ is the total transverse energy (scalar sum), and $`E`$ is the total energy in the event. These cuts selected 60,998 events.
The dijet mass resolution was determined using the PYTHIA Monte Carlo program and a CDF detector simulation. The true jet is defined from the true $`E_T`$ of particles emanating from the hard scattering, using the same jet algorithm as described above, but applied to towers of true $`E_T`$. The true $`E_T`$ of a tower is the $`E_T`$ of the generated particles that enter the tower. The simulated jet uses the $`E_T`$ of simulated calorimeter towers and the jet energy corrections for the CDF detector simulation. The $`E_T`$ of the simulated jets is corrected to equal the $`E_T`$ of the corresponding true jet on average. The dijet mass resolution function, $`\rho (M,m)`$, is then defined as the distribution of simulated dijet masses, $`M`$, for each value of true dijet mass, $`m`$. The dijet mass resolution was determined for six values of $`m`$ between 50 and 1000 GeV/c<sup>2</sup> and then a single smooth parameterization was used to interpolate between these values. The dijet mass resolution is approximately 10% for dijet masses above 150 GeV/c<sup>2</sup>.
The steeply falling dijet mass spectrum is distorted by the dijet mass resolution. We correct for this distortion with an unsmearing procedure. Define the smeared spectrum, $`S(M)`$, as the convolution of the true spectrum, $`T(m)`$, and the dijet mass resolution: $`S(M)=\int T(m)\rho (M,m)dm`$. We parameterize the true dijet mass spectrum with $`T(m)=A(1-m/\sqrt{s}+Cm^2/s)^N/m^P`$ where $`\sqrt{s}=1800`$ GeV. We then fit the smeared spectrum to our data to find the value of the four parameters $`A=6.67\times 10^{17}`$ pb/(GeV/c<sup>2</sup>), $`C=2.95`$, $`N=6.98`$, and $`P=6.70`$. The fit has a $`\chi ^2`$ of 20.5 for 14 degrees of freedom. The unsmearing correction factors, $`K_i`$, are then defined as the ratio of the smeared to true spectrum, $`K_i=\int _iS(M)dM/\int _iT(m)dm`$, where the integration is over mass bin $`i`$. The value of $`K_i`$ smoothly decreases from $`1.07`$ at $`M=188`$ GeV/c<sup>2</sup>, to $`1.03`$ at $`M=540`$ GeV/c<sup>2</sup>, and then smoothly increases to $`1.12`$ at $`M=968`$ GeV/c<sup>2</sup>. The corrected cross section as a function of dijet mass is given by
$$d\sigma /dM=n_i/(K_i\mathcal{L}ϵ_i\mathrm{\Delta }M),$$
(1)
where for each mass bin $`i`$, $`n_i`$ is the number of events, $`\mathcal{L}`$ is the integrated luminosity, $`ϵ_i`$ is the efficiency of the trigger and $`z`$-vertex selections, and $`\mathrm{\Delta }M`$ is the width of the mass bin.
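A toy version of the unsmearing factors described above can be sketched as follows; the true-spectrum parameters are those quoted in the text, but the resolution is simplified to a Gaussian of constant 10% relative width, so the resulting numbers are illustrative rather than the CDF values.

```python
# Toy unsmearing factors K_i = int_i S(M) dM / int_i T(m) dm, with S the convolution
# of the parameterized true spectrum with a Gaussian 10% dijet mass resolution.
import numpy as np

ROOT_S = 1800.0
A, C, N, P = 6.67e17, 2.95, 6.98, 6.70               # fit parameters quoted in the text

def true_spectrum(m):
    return A * (1.0 - m / ROOT_S + C * m**2 / ROOT_S**2) ** N / m**P

def smeared_spectrum(M, m):
    width = 0.10 * m                                  # simplified Gaussian resolution
    rho = np.exp(-0.5 * ((M - m) / width) ** 2) / (np.sqrt(2.0 * np.pi) * width)
    return np.trapz(true_spectrum(m) * rho, m)

m_grid = np.linspace(50.0, 1500.0, 4000)
for lo, hi in [(180, 217), (217, 292), (292, 388), (388, 500)]:   # illustrative bins
    M_grid = np.linspace(lo, hi, 60)
    S = np.trapz([smeared_spectrum(M, m_grid) for M in M_grid], M_grid)
    T = np.trapz(true_spectrum(M_grid), M_grid)
    print(lo, hi, round(S / T, 3))                    # toy K_i for this bin
```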
In Table I we list 12 independent sources of systematic uncertainty in the dijet mass cross section. They are the uncertainties in calorimeter calibration (cal), jet fragmentation (frag), underlying event (uevt), calorimeter stability over time (stab), relative jet energy scale as a function of pseudorapidity (rel), detector simulation (sim), the unsmearing procedure (unsm), the tails of the resolution function (tails), absolute luminosity of the jet 100 trigger (lum), and the relative luminosities of the jet 20, 50 and 70 triggers (J20, J50, and J70). The first four systematic uncertainties are equivalent to a combined uncertainty in the determination of the dijet mass variable which decreases from 2.7% at $`M=188`$ GeV/c<sup>2</sup> to 2.3% at $`M=968`$ GeV/c<sup>2</sup>. The uncertainty in detector simulation results from a $`0.5`$% uncertainty in the equality of the true dijet mass and the simulated dijet mass after all jet corrections are applied, independent from the first four systematic uncertainties mentioned above. To check that our unsmearing procedure is internally consistent, we applied the unsmearing procedure to a simulated dijet mass spectrum. The resulting $`K_i`$ were in agreement with the ratios of the simulated spectrum to true spectrum for each mass bin. Due to limited Monte Carlo statistics, the systematic uncertainty on the consistency of the unsmearing procedure was 4%. The uncertainty in the dijet mass resolution due to non-Gaussian tails was estimated by repeating the unsmearing procedure with a Gaussian resolution. The systematic uncertainties on the luminosity for the jet 20, 50 and 70 triggers came from the statistical uncertainty in matching the cross section of each trigger with the next higher threshold trigger (jet 70 was required to match jet 100 in the first bin of the jet 100 sample, jet 50 was required to match jet 70, etc.) Each of the independent systematic uncertainties in Table I are completely correlated as a function of dijet mass.
In Table II we present the fully corrected inclusive dijet mass spectrum for $`p\overline{p}`$ 2 jets + X, where X can be anything, including additional jets. We tabulate the differential cross section versus the mean dijet mass in bins of width approximately equal to the dijet mass resolution. Figure 1 shows the fractional difference between our data and O($`\alpha _s^3`$) QCD predictions from the parton level event generator JETRAD . Here the renormalization scale is $`\mu =0.5E_T^{max}`$, where $`E_T^{max}`$ is the maximum jet $`E_T`$ in the generated event. In the JETRAD calculation, two partons are combined if they are within $`R_{sep}=1.3R`$, which corresponds to the separation of jets in the data. Predictions are shown for various choices of parton distribution functions: CTEQ4M and MRST are standard sets and CTEQ4HJ adjusts the gluon distribution to give a better fit to the CDF inclusive jet $`E_T`$ spectrum at high $`E_T`$. Figure 1 shows that the CTEQ4HJ prediction models the shape and normalization of our dijet data better than CTEQ4M. The CTEQ4M prediction changes by less than 5% when the renormalization scale is changed to $`\mu =E_T^{max}`$, but it decreases between 7% and 17% for $`\mu =2E_T^{max}`$, and it decreases between 25% and 30% for $`\mu =0.25E_T^{max}`$. In Fig. 2 we compare the fractional difference between our data and QCD with that of the D$`\mathrm{}`$ experiment. The D$`\mathrm{}`$ measurement and the JETRAD prediction obtained by D$`\mathrm{}`$ required that each jet be in region $`|\eta |<1.0`$. Figure 2 shows that our data and the D$`\mathrm{}`$ data are in good agreement.
The covariance matrix for the dijet mass differential cross section is defined as $`V_{ij}=\delta _{ij}\sigma _i^2(stat)+\mathrm{\Sigma }_{k=1}^{12}\sigma _i(sys_k)\sigma _j(sys_k)`$. Here $`\delta _{ij}=1(0)`$ for $`i=j(ij)`$, $`\sigma _i(stat)`$ is the statistical uncertainty in mass bin $`i`$, and the sum is over each of the 12 systematic uncertainties $`\sigma _i(sys_k)`$ listed in Table I. Since the theory always predicts a smaller cross section than the data, the positive percent systematic uncertainty given in Table I was multiplied by the theoretical cross section to determine the $`\sigma _i(sys_k)`$. From the inverse of the covariance matrix, $`(V^1)_{ij}`$, and the difference between the data and the theory in each bin, $`\mathrm{\Delta }_i`$, we perform a $`\chi ^2`$ comparison between the data and the theory. Table III presents values for $`\chi ^2=\mathrm{\Sigma }_{i,j}\mathrm{\Delta }_i(V^1)_{ij}\mathrm{\Delta }_j`$ and the corresponding probability for a standard $`\chi ^2`$ distribution with 18 degrees of freedom (14 degrees of freedom for the row labeled Fit). Our data is in agreement within errors with the QCD prediction using CTEQ4HJ parton distributions, which has an enhanced gluon distribution at high $`E_T`$. Our data excludes CTEQ4M parton distributions, which have a standard gluon distribution. The $`\chi ^2`$ comparison shows that our data cannot exclude with high confidence QCD predictions using MRST parton distributions, even though the normalization of that prediction is well beneath that of our data. This is because of the presence of correlated systematic uncertainties that are large compared with the statistical uncertainties. Such correlated uncertainties can accommodate certain significant deviations in both normalization and shape between the data and the theory with a relatively small penalty in $`\chi ^2`$. Any theoretical prediction whose deviation from the data matches the shape of a correlated uncertainty will give a reasonable $`\chi ^2`$ provided that the normalization difference between the data and the prediction is no more than a few standard deviations.
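The chi-squared construction with fully correlated systematic uncertainties can be written compactly as in the sketch below; the bin contents, statistical errors and fractional systematics are placeholders, not the measured values of Tables I-II.

```python
# chi^2 = Delta^T V^-1 Delta with V_ij = delta_ij*stat_i^2 + sum_k s_i(k)*s_j(k),
# each systematic k being fully correlated across the mass bins.
import numpy as np

n_bins, n_sys = 18, 12
rng = np.random.default_rng(0)

theory = 100.0 * np.exp(-0.3 * np.arange(n_bins))           # toy prediction
data = theory * (1.0 + 0.05 * rng.standard_normal(n_bins))  # toy measurement
stat = 0.03 * data                                           # toy statistical errors
sys_frac = np.full((n_sys, n_bins), 0.02)                    # toy fractional systematics

sys_abs = sys_frac * theory          # per the text, systematics multiply the theory
V = np.diag(stat**2) + sys_abs.T @ sys_abs
delta = data - theory
chi2 = delta @ np.linalg.solve(V, delta)
print("chi2 =", round(chi2, 1), "for", n_bins, "bins")
```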
In conclusion, we have measured the cross section for production of two or more jets in the kinematic region $`|\eta |<2`$ and $`|\mathrm{cos}\theta ^{*}|<2/3`$ as a function of dijet invariant mass. The data at the highest values of dijet mass are above the QCD predictions using standard parton distributions, similar to the excess at high $`E_T`$ observed in previous measurements of the inclusive jet $`E_T`$ spectrum and the total $`E_T`$ spectrum . The CDF data are described within errors by next-to-leading order QCD predictions using CTEQ4HJ parton distributions, and are in good agreement with a similar measurement from the DØ experiment.
We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Science and Culture of Japan; the Natural Sciences and Engineering Research Council of Canada; the National Science Council of the Republic of China; the Swiss National Science Foundation; the A. P. Sloan Foundation; the Bundesministerium fuer Bildung und Forschung, Germany; and the Korea Science and Engineering Foundation.
Table II: For each bin we list the average dijet mass, the differential cross section, and the statistical and total systematic uncertainty on the cross section.
Table III: $`\chi ^2`$ and corresponding probability for theoretical predictions for the dijet mass spectrum with various choices of parton distribution functions and renormalization scales $`\mu =DE_T^{max}`$. The row labeled Fit is the parameterization used in the unsmearing (see text). |
no-problem/9912/physics9912017.html | ar5iv | text | # An elementary quantum mechanics calculation for the Casimir effect in one dimension
## 1 Introduction
A well known fact in quantum mechanics is that, even though the classical system admits a zero minimal energy, this does not generally hold for its quantum counterpart. The typical example is the $`\frac{1}{2}\mathrm{}\omega `$ value for the non-relativistic harmonic linear oscillator, where $`\mathrm{}`$ is the Planck constant and $`\omega `$ its proper frequency. More generally, if the system behaves as a collection of such oscillators, the minimal (or zero point) energy is
$`E_0={\displaystyle \frac{\hbar }{2}}{\displaystyle \sum _n}\omega _n,`$ (1)
where the sum extends over all proper frequencies $`\omega _n`$. As often pointed out in quantum field theory textbooks<sup>1,2</sup>, non-interacting quantized fields can be pictured this way, in the limit of an infinite spatial density of oscillators. In particular, for the scalar field the analogy with a set of coupled oscillators can be constructed in a precise manner<sup>1</sup>, as we shall also sketch below. We shall use here the oscillator model to obtain the Casimir effect for the massless field, in the case of one spatial dimension. The calculation is a simple exercise in non-relativistic quantum mechanics.
What is usually referred to as the Casimir effect<sup>3</sup> is the attractive force between two conducting, parallel, uncharged plates in vacuum. The phenomenon counts as direct evidence for the zero point energy of the quantized electromagnetic field: assuming the plates are perfect conductors, the energy to area ratio reads<sup>1</sup> ($`c`$ is the speed of light and $`L`$ is the plate separation)
$`{\displaystyle \frac{E_0}{A}}=-{\displaystyle \frac{\pi ^2\hbar c}{720L^3}},`$ (2)
from which the attractive force can be readily derived. Qualitatively, the $`L`$ dependence of $`E_0`$ is naturally understood as originating in the $`L`$ dependence of the proper frequencies of the field between the plates.
Actually, by summing over frequencies as in eq. (1) one obtains a divergent energy. This is a common situation in quantum field theory, remedied by what is called renormalization: one basically subtracts a divergent quantity to render the result finite, with the justification that only energy $`differences`$ are relevant (assuming gravitational phenomena can be neglected, see e.g. Ref. 4). Unfortunately, the computational methods used to handle the infinities in this operation (i.e. regularization methods; an example follows in the next paragraph) present themselves, rather generally, as a piece of technicality with no intuitive support; for the unaccustomed reader, they might very well leave the impression that the result is just a mathematical artifact. The oscillator analogy provides a context in which to do the calculations within a physically transparent picture, with no extra mathematical input required.
## 2 Quantum field theory calculation
We briefly review first the field theoretical approach. Consider the uncharged massless scalar field in one dimension, $`-\infty <x<\infty `$,
$`\left(\frac{1}{c^2}\frac{\partial ^2}{\partial t^2}-\frac{\partial ^2}{\partial x^2}\right)\phi (x,t)=0,`$ (3)
subjected to the conditions
$`\phi (0,t)=\phi (L,t)=0`$ (4)
for some positive $`L`$. We are interested in the zero point energy as a function of $`L`$. We shall focus on the field in the “box” $`0<x<L`$. It is intuitively clear that the result for the exterior regions follows by making $`L\to \infty `$. Note that by eqs. (4) the field in the box is causally disconnected from that in the exterior regions, thus paralleling the situation for the electromagnetic field in the previous section.
Eqs. (3) and (4) define the proper frequencies as
$`\omega _n=\frac{n\pi c}{L},n=1,2,\dots ,`$ (5)
obviously making $`E_0`$ a divergent quantity. A convenient way [5] to deal with this is by introducing the damping factors
$`\omega _n\to \omega _n\mathrm{exp}(-\lambda \omega _n/c),\lambda >0,`$ (6)
and to consider $`E_0=E_0(L,\lambda )`$ in the limit $`\lambda \to 0`$. Performing the sum one obtains
$`E_0(L,\lambda )=\frac{\pi \hbar c}{8L}\left(\text{cth}^2\frac{\pi \lambda }{2L}-1\right).`$ (7)
Using the expansion
$`\text{cth}z={\displaystyle \frac{1}{z}}+{\displaystyle \frac{z}{3}}+𝒪(z^3),`$ (8)
one finds
$`E_0(L,\lambda )=\frac{\hbar c}{2\pi \lambda ^2}L-\frac{\pi \hbar c}{24L}+𝒪\left(\frac{\lambda }{L}\right).`$ (9)
Now, it is immediate to see that the $`\lambda ^{-2}`$ term can be assigned to an infinite energy density corresponding to the case $`L\to \infty `$. The simple but essential observation is that, when considering also the energy of the exterior regions, the divergences add to an $`L`$-independent quantity, which makes them mechanically irrelevant. Renormalization amounts to ignoring them. Thus one can set
$`E_0(L)=-\frac{\pi }{24}\frac{\hbar c}{L},`$ (10)
which stands as the analogous result of eq. (2).
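As an illustrative numerical cross-check of eqs. (6)–(10), the regulated mode sum can be evaluated directly and compared with the closed form (7) and its small-$`\lambda `$ behaviour. The short Python sketch below assumes units with $`\hbar =c=1`$ and $`L=1`$; it is only a verification aid, not part of the derivation.

```python
import numpy as np

hbar = c = 1.0   # natural units for the check
L = 1.0

def E0_sum(L, lam, nmax=200000):
    """Regulated zero-point sum (hbar/2) * sum_n w_n exp(-lam*w_n/c), with w_n = n*pi*c/L."""
    n = np.arange(1, nmax + 1)
    w = n * np.pi * c / L
    return 0.5 * hbar * np.sum(w * np.exp(-lam * w / c))

def E0_closed(L, lam):
    """Closed form of eq. (7): (pi*hbar*c/8L) * (coth^2(pi*lam/2L) - 1)."""
    z = np.pi * lam / (2.0 * L)
    return np.pi * hbar * c / (8.0 * L) * (1.0 / np.tanh(z) ** 2 - 1.0)

for lam in (0.1, 0.03, 0.01):
    direct = E0_sum(L, lam)
    closed = E0_closed(L, lam)
    # subtract the divergent lambda^-2 piece of eq. (9); the remainder should
    # approach the renormalized value -pi*hbar*c/(24 L) of eq. (10) as lam -> 0
    subtracted = closed - hbar * c * L / (2.0 * np.pi * lam ** 2)
    print(f"lam={lam:5.2f}  sum={direct:12.4f}  closed={closed:12.4f}  "
          f"subtracted={subtracted:+.6f}  target={-np.pi/24:+.6f}")
```

For decreasing $`\lambda `$ the subtracted value indeed approaches $`-\pi /24\approx -0.1309`$, as expected from eq. (10).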
## 3 Quantum mechanics calculation
Consider the one dimensional system of an infinite number of coupled oscillators described by the Hamiltonian (all notations are conventional)
$`H=\sum _k\frac{p_k^2}{2m}+\sum _k\frac{k}{2}(x_{k+1}-x_k)^2.`$ (11)
$`x_k`$ measures the displacement of the $`k`$th oscillator from its equilibrium position, the equilibrium positions being supposed equally spaced from the neighboring ones by a distance $`a`$. The canonical commutation relations ensure that the Heisenberg operators
$`x_k(t)=e^{\frac{i}{\hbar }Ht}x_ke^{-\frac{i}{\hbar }Ht}`$ (12)
obey the classical equation
$`m\frac{d^2x_k(t)}{dt^2}-k(x_{k+1}(t)+x_{k-1}(t)-2x_k(t))=0.`$ (13)
Let us consider the parameters $`m`$ and $`k`$ scaled such that
$`a^{-2}\frac{m}{k}=\frac{1}{c^2}.`$ (14)
As familiar from wave propagation theory in elastic media, eq. (13) becomes the d’Alembert equation (3) with the correspondence
$`x_k(t)\to \phi (ka,t),`$ (15)
and letting $`a\to 0`$. The $`x_k`$, $`p_m`$ commutation relations can also be shown to translate into the equal-time commutation relations of the field variables required by canonical quantization [1]. One can thus identify the quantum field with the continuum limit of the quantum mechanical system.
Our interest lies in the oscillator analogy when taking into account conditions (4). It is transparent from eq. (15) that they formally amount to set in $`H`$
$`x_0=x_N=0,p_0=p_N=0,`$ (16)
with $`N`$ some natural number. In other words, the 0th and the $`N`$th oscillator are supposed fixed. As in the preceding section, we shall calculate the zero point energy of the oscillators in the “box” $`1\le k\le N-1`$.
The first step is to decouple the oscillators by diagonalizing the quadratic form in the coordinates in eq. (11). Equivalently, one needs the eigenvalues $`\lambda _n`$ of the $`(N-1)`$-dimensional square matrix $`V_{km}`$ with elements
$`V_{k,k}=2,V_{k,k+1}=V_{k,k-1}=-1,`$ (17)
and zero otherwise. One easily checks that they are
$`\lambda _n=4\mathrm{sin}^2\frac{n\pi }{2N},n=1,2,\dots ,N-1,`$ (18)
with $`\lambda _n`$ corresponding to the (unnormalized) eigenvectors $`x_{n,k}=\mathrm{sin}\frac{nk\pi }{N}`$. It follows
$`E_0(N,a)=\frac{\hbar c}{a}\sum _{n=1}^{N-1}\mathrm{sin}\frac{n\pi }{2N}.`$ (19)
To make connection with the continuous picture, we assign to the system the length
$`L=aN`$ (20)
measuring the distance between the fixed oscillators, and eliminate $`N`$ in favour of $`a`$ and $`L`$ in eq. (19). After summing the series one obtains
$`E_0(L,a)=\frac{\hbar c}{2a}\left(\text{ctg}\frac{\pi a}{4L}-1\right).`$ (21)
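Both the spectrum (18) and the sums (19) and (21) are easy to verify numerically by diagonalizing the matrix $`V`$ directly. The following Python sketch is purely illustrative (units $`\hbar =c=1`$, with $`L=Na=1`$):

```python
import numpy as np

hbar = c = 1.0
N = 50
L = 1.0
a = L / N

# potential matrix of eq. (17): 2 on the diagonal, -1 on the first off-diagonals
V = 2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
lam_num = np.linalg.eigvalsh(V)                                   # ascending order
lam_exact = np.sort(4.0 * np.sin(np.arange(1, N) * np.pi / (2.0 * N)) ** 2)
print("max eigenvalue error:", np.max(np.abs(lam_num - lam_exact)))

# zero-point energy: E0 = (hbar/2) * sum_n omega_n, with omega_n = (c/a)*sqrt(lam_n)
E0_modes = 0.5 * hbar * (c / a) * np.sum(np.sqrt(lam_num))        # eq. (19)
E0_closed = hbar * c / (2.0 * a) * (1.0 / np.tan(np.pi * a / (4.0 * L)) - 1.0)  # eq. (21)
print("mode sum:", E0_modes, "  closed form:", E0_closed)
print("continuum Casimir term -pi*hbar*c/(24 L):", -np.pi * hbar * c / (24.0 * L))
```

The two evaluations of $`E_0`$ agree to machine precision, confirming the resummation in eq. (21).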
With an expansion similar to eq. (8)
$`\text{ctg}z=\frac{1}{z}-\frac{z}{3}+𝒪(z^3),`$ (22)
it follows for $`a\ll L`$
$`E_0(L,a)=\left(\frac{2\hbar cL}{\pi a^2}-\frac{\hbar c}{2a}\right)-\frac{\pi }{24}\frac{\hbar c}{L}+𝒪\left(\frac{a}{L}\right).`$ (23)
The result is essentially the same as that in eq. (9). The $`a`$-independent term reproduces the renormalized value (10). An identical comment applies to the terms that diverge as $`a\to 0`$. Note that the $`L\to \infty `$ energy density can be equally obtained by making $`N\to \infty `$ in eq. (19) and evaluating the sum as an integral. Physically put, this corresponds to an infinite crystal with vibration modes characterized by a continuous quasimomentum in the Brillouin zone
$`0\le k<\frac{\pi }{a},`$ (24)
and dispersion relation
$`\omega (k)={\displaystyle \frac{2c}{a}}\mathrm{sin}{\displaystyle \frac{ka}{2}}.`$ (25)
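Explicitly (a short check of the statement above): with one mode per interval $`\pi /L`$ in $`k`$, the $`N\to \infty `$ energy per unit length is
$`\frac{E_0}{L}=\frac{1}{\pi }\int _0^{\pi /a}\frac{\hbar \omega (k)}{2}\,dk=\frac{\hbar c}{\pi a}\int _0^{\pi /a}\mathrm{sin}\frac{ka}{2}\,dk=\frac{2\hbar c}{\pi a^2},`$
which reproduces the leading, $`a`$-dependent divergence in eq. (23).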
Note also that the second term, with no counterpart in eq. (9), can be absorbed into the first one with an irrelevant readjustment of the box length, $`L\to L-\frac{\pi a}{4}`$.
## 4 Quantum field vs oscillator model: quantitative comparison and a speculation
Let us define for $`a>0`$ the subtracted energy $`E_0^S(L,a)`$ as the difference between $`E_0(L,a)`$ and the parenthesis in eq. (23), so that
$`\underset{a\to 0}{lim}E_0^S(L,a)=E_0(L).`$ (26)
One may ask when the oscillator model provides a good approximation for the quantum field, in the sense that
$`\frac{E_0^S(L,a)}{E_0(L)}=3\left\{\left(\frac{4L}{\pi a}\right)^2-\left(\frac{4L}{\pi a}\right)\text{ctg}\left(\frac{\pi a}{4L}\right)\right\}`$ (27)
is close to unity. Note that by eq. (20) the expression above is a function of $`N`$ only. The corresponding dependence is plotted in Fig. 1. One sees, quite surprisingly, that around twenty oscillators already suffice to assure a relative difference smaller than $`10^{-4}`$. More precisely, the deviation of the ratio from unity falls off asymptotically as
$`{\displaystyle \frac{\pi ^2}{240}}{\displaystyle \frac{1}{N^2}}.`$ (28)
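The rate of convergence is easily checked; the following Python snippet (an illustration, not part of the argument) evaluates the ratio (27) and compares its deviation from unity with eq. (28).

```python
import numpy as np

def ratio(N):
    """E0^S(L,a)/E0(L) for N oscillators (a = L/N); eq. (27)."""
    x = 4.0 * N / np.pi            # 4L/(pi a)
    z = np.pi / (4.0 * N)          # pi a/(4L)
    return 3.0 * (x ** 2 - x / np.tan(z))

for N in (5, 10, 20, 50, 100):
    dev = ratio(N) - 1.0
    print(f"N={N:4d}  ratio={ratio(N):.8f}  deviation={dev:.3e}  "
          f"pi^2/(240 N^2)={np.pi**2 / (240.0 * N * N):.3e}")
```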
We end with a bit of speculation. Suppose there exists some privileged scale $`l`$ (say, the Planck scale) which imposes a universal bound on length measurements, and consider the oscillator system with the spacing given by $`l`$. The indeterminacy in $`L`$ will cause an indeterminacy in the energy (we assume $`L\gg l`$)
$`\frac{\mathrm{\Delta }E_0^S}{E_0^S}\sim \frac{\mathrm{\Delta }E_0}{E_0}\sim \frac{l}{L}.`$ (29)
On the other hand, the asymptotic expression (28) implies
$`\frac{E_0^S-E_0}{E_0}\sim \left(\frac{l}{L}\right)^2.`$ (30)
We are thus led to the conclusion that, as far as Casimir effect measurements are concerned, one could not distinguish between the “real” quantum field and its oscillator model.
no-problem/9912/astro-ph9912159.html | ar5iv | text | # VLBI observations of two single dMe stars: spatial resolution and astrometry
## 1 Introduction
Hot coronae are the enigmatic link between cool, convective stars and their environment. While most of this hot plasma ($`>10^6`$ K) is contained by magnetic fields, some may escape due to overpressure as a hot stellar wind along open field lines. The extent of intense coronal emissions in X-rays or radio waves thus yields a lower limit to the size of closed magnetic loops, i.e. the size of the stellar magnetosphere.
Radio emission from solar active regions at 3 cm wavelength originates from altitudes above the photosphere between 5 and $`10\times 10^8`$ cm, or 0.006 - 0.015 $`R_{\odot }`$ (e.g. Aschwanden et al. 1995). It is generally attributed to thermal gyroresonance emission. The size of the radio image of the Sun increases at longer wavelengths. In soft X-rays and EUV line emissions, coronal loops reach $`3\times 10^{10}`$ cm (0.5 $`R_{\odot }`$) (e.g. Sturrock et al. 1996). Magnetic loops in excess of $`1.4\times 10^{11}`$ cm (2 $`R_{\odot }`$) have been observed in metric U bursts (Labrum & Stewart 1970). Coronagraph observations in white light, however, do not regularly see loops of that size, suggesting that they are short-lived. In general, large magnetic loops are confined to low latitudes and, consequently, the radio image of the Sun is more extended in the equatorial direction.
Very Long Baseline Interferometry (VLBI) has made it possible to study the size and shape of main-sequence stellar coronae. At 3.6 cm, Benz et al. (1998) have resolved the dMe star UV Cet B at a higher and variable flux level into two components separated by $`4.4\times 10^{10}`$ cm (4.4 $`R_{\ast }`$). One of the components was spatially resolved with a size of about the stellar photosphere. The orientation of the two sources was found to lie along the probable axis of rotation, strongly suggestive of coronal enhancements extending at least $`2.1\times 10^{10}`$ cm (2.1 $`R_{\ast }`$) above the poles. Thus, UV Cet B differs considerably from the Sun in size and shape of the radio corona.
More VLBI observations of dMe stars have been reported at 18 cm wavelength. For YZ CMi, Benz & Alef (1991) found an upper limit of $`8.7\times 10^{10}`$ cm, suggesting an extent of a circular corona above the photosphere of less than $`1.8\times 10^{10}`$ cm (0.74 $`R_{\ast }`$). Two size measurements of AD Leo by Benz et al. (1995) suggest a coronal extent of less than $`2\times 10^{10}`$ cm (1.0 $`R_{\ast }`$) and $`5.4\times 10^{10}`$ cm (2.6 $`R_{\ast }`$) above the photosphere, assuming a circular, concentric shape. A small total size of less than $`4.9\times 10^{10}`$ cm for EQ Peg (Benz et al. 1995) refers to an observation during a totally polarized flare, and an extremely large size of $`2.2\times 10^{11}`$ cm was reported for the dMe close binary YY Gem (Alef et al. 1997).
The radio coronae of active, rapidly rotating dMe stars have been modeled to determine the emission process. It is generally agreed (Güdel (1994); White et al. (1989)) that weakly polarized radio emission is produced by the gyrosynchrotron mechanism of a population of mildly relativistic electrons. This mechanism, however, cannot account for polarizations exceeding about 50% which may originate from a coherent process. Circular polarizations of up to 80% have been reported for YZ CMi at 20 cm and 18 cm (Lang & Wilson (1986); Benz & Alef 1991).
Here we report on the results of VLBA experiments of YZ CMi and AD Leo at 3.6 cm. These are well known, nearby, young radio stars close to zero-age main sequence (ZAMS). Some of their general properties are listed in Table 1.
## 2 Observations and data reduction
The single dMe stars YZ CMi and AD Leo were observed on 1997 April 18/19 and 1996 December 12, respectively, with the VLBA and the phased-up VLA as a joint VLBI system yielding an angular resolution of better than one milliarcsecond (mas).
The VLA was also available in its normal interferometer mode and was used for total flux measurements at very small baseline length which allowed us to monitor the changes in total flux density and polarisation of both stars through the observations. 3C286 was used for the flux calibration of both stars.
The VLBI observations used a bandwidth of 8 MHz in both left and right circular polarization at 8.41 GHz (3.6 cm) and two bit sampling, giving a data rate of 128 Mbit/s. In order to reliably image such weak sources ($`<3.0`$ mJy) the phase referencing technique was used (Beasley & Conway (1995)) in which we switched between the target and a bright calibrator (quasar) in cycles of two to three minutes. For our targets YZ CMi and AD Leo these calibrators were 0736+017 and 1022+194 respectively (see Table 2), at 2.1 and 1.4 degrees separation from the target.
The amplitude calibration was performed with AIPS, followed by editing in DIFMAP (in order to flag more precisely single, bad visibilities of the VLBA data) and by a continued analysis in AIPS (including all the mapping). We first made hybrid maps of our calibrators using closure phase methods in order to determine their structure. We then determined the atmospheric contributions to the phase of the calibrator data. With these solutions it was possible to phase-correct the stellar data. We then made preliminary wide maps in order to find the stars and phase-rotated the data in order to bring the target stars to the image phase center. Finally we deconvolved the stellar images using CLEAN.
As a check of the reliability of the phase calibration, our observations also included a second bright calibrator for each star (0743-006 for YZ CMi, and 1013+208 for AD Leo, see Table 2). The maps of these secondary calibrators, made using the phase solutions found towards the primary calibrators, showed a dynamic range of about 20:1. Since the separation between the two calibrators is approximately the same as that between the star and each calibrator, we can conclude that our stellar images also have a 20:1 dynamic range and hence are in practice noise limited rather than dynamic range limited. We also produced self-calibration maps and datasets of the secondary calibrators. The plots of the fringe amplitude against u,v-distance show the same fall off as the ones produced from the phase referenced datasets. This confirms the value of the dynamic range of the images of the targets.
The source centroid positions of the phase referenced maps of the secondary calibrators were within 0.5 and 0.2 mas of the correlated positions (a-priori positions) for 0743-006 and 1013+208 respectively (see Table 2). This is consistent with the claimed accuracy of those correlated positions (0.5 mas, Johnston et al. 1995). This test gives us confidence in the astrometric accuracy of our stellar positions (see Sect. 3.4).
## 3 Results
### 3.1 Total flux monitoring
YZ CMi was found at a surprisingly high, slowly decreasing flux density (average 2.9 mJy) during the whole observation (see Fig. 1, left plot). As previously noted (see Sect. 1), the emission of YZ CMi is often considerably polarized. Throughout this observation the polarization was predominantly left circular (about 60 % circular polarisation during the quiescent emission, and 90 % during the flares). Two strong flares appear in the lightcurve obtained with the VLA. Their flux values reach up to 6.0 mJy. The detection of YZ CMi with both instruments was clear, as the rms noise in the VLA image was 0.063 mJy/beam, and in our high sensitivity VLBI maps made using only VLBA-VLA baselines the noise was 0.13 mJy/beam. Given these map noise values the peak intensity of the star was 47 $`\sigma `$ in the VLA image, and 16 $`\sigma `$ in the VLBA image.
The mean flux density value for AD Leo was at a more typical level of 0.7 mJy (see Fig. 1, right plot). Similar fluxes at 18 cm have previously been reported (e.g. Jackson et al. (1989); Benz et al. 1995). A weak flare appears in the VLA lightcurve (2.1 mJy). The star was detected by both instruments despite its low flux. The rms noise for the VLA map was 0.035 mJy/beam, while for the map made with only the VLA-VLBA baselines it was 0.18 mJy/beam. Peak map values were 15.6 $`\sigma `$ for the VLA and 5 $`\sigma `$ for the VLBA maps respectively. Tracing the lightcurve of AD Leo from the interferometric VLA data was a delicate task because of two strong sources in the field of view.
### 3.2 Size and shape of YZ CMi
The phase referenced VLBA map of YZ CMi is shown in Fig. 2. Evidence for spatial resolution was found in this map. Further evidence for extended emission can be most clearly seen by examining the fringe amplitude versus baseline length obtained using the VLA-VLBA baselines (see Fig. 3, left plot). We obtained this plot by first coherently averaging the data on each baseline-scan (75 minutes) in order to increase the signal to noise ratio significantly above unity. We then binned the amplitude over baseline length, finding the mean amplitude in each bin by incoherent averaging. The error bars on this average were determined from the internal scatter of the data.
The fall off noticeable in Fig. 3 left plot clearly indicates a resolved source. What is more, the phase values on VLBI baselines to the phased VLA over the whole observation show no significant variation from zero. There is therefore no evidence for anything other than a single centro-symmetric component. We searched also for other evidence for non-symmetrical structure looking at closure phases, but the SNR of these were too low.
Given the close to quadratic fall off of the fringe amplitude with $`u,v`$-distance shown in Fig. 3 (left plot), it is impossible to distinguish between gaussian, sphere, disk or ring-like models (Pearson 1995). We therefore fitted one-component gaussian models to the YZ CMi data. The numerical values for the fits are summarized in Table 1. The dimensions of sphere, disk and ring which would show similar fits are expected to be respectively 1.8, 1.6 and 1.1 times the gaussian FWHP values (Pearson 1995). Two ways of fitting were followed: fitting in AIPS using the task UVFIT and our own model fitting to the data. The first fitted an elliptical gaussian and obtained for the whole data set a major axis of FWHP of 1.4 $`\pm `$0.3 mas and a minor axis of 0.5 $`\pm `$0.25 mas (1$`\sigma `$ errors). There is therefore no strong evidence for ellipticity, and our subsequent fitting of the data outside of AIPS assumed only a circular gaussian. With such a model it was possible to fit most of the data within 1 $`\sigma `$. The best fitting FWHP size of the corona was found to be 0.98 $`\pm `$0.2 mas, which corresponds to 1.7 $`\pm `$0.3 stellar diameters.
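For readers wishing to reproduce this kind of size estimate: the visibility amplitude of a circular gaussian of total flux $`S`$ and FWHP $`\theta `$ is $`V(q)=S\mathrm{exp}(-\pi ^2\theta ^2q^2/4\mathrm{ln}2)`$, with $`q`$ the baseline length in wavelengths (cf. Pearson 1995). A schematic Python fit of this model to binned amplitudes is sketched below; the data arrays are placeholders with roughly representative magnitudes, not the actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

MAS = np.pi / (180.0 * 3600.0 * 1000.0)      # one milliarcsecond in radians

def gauss_visibility(q_mega_lambda, flux_mjy, fwhm_mas):
    """Amplitude (mJy) of a circular Gaussian versus baseline length (10^6 wavelengths)."""
    q = q_mega_lambda * 1.0e6
    theta = fwhm_mas * MAS
    return flux_mjy * np.exp(-(np.pi * theta * q) ** 2 / (4.0 * np.log(2.0)))

# placeholder binned data: baseline (10^6 lambda), amplitude and error (mJy)
q_data = np.array([20.0, 60.0, 100.0, 140.0, 180.0])
amp_data = np.array([2.81, 2.17, 1.30, 0.60, 0.21])
amp_err = np.array([0.15, 0.15, 0.15, 0.15, 0.20])

popt, pcov = curve_fit(gauss_visibility, q_data, amp_data,
                       p0=[2.9, 1.0], sigma=amp_err, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"total flux = {popt[0]:.2f} +/- {perr[0]:.2f} mJy")
print(f"FWHP size  = {popt[1]:.2f} +/- {perr[1]:.2f} mas")
```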
Since the second VLBI scan corresponds precisely to one of the two strong flares (see Fig. 1, left plot), it was interesting to study it more closely. Fig. 3 (right plot) shows the amplitude versus u,v-distance. Of this scan we selected the subscan on the target with the highest flux density value and coherently averaged the data over it (3 minutes), obtaining one point per VLA-VLBA baseline. The solid line corresponds to the same gaussian model fitted to the whole data set (see Fig. 3, left plot) except for the total flux density which was increased to 5.1 mJy. We find that this scaled model fits the data within the errors and therefore there is no evidence for a change in source size during the flare. Independent gaussian fits to the data are also consistent with this conclusion (see Table 1).
We should add that the contributions of the proper motion and of the changing parallax of the star during the ten hours of observation are -0.32 mas and -0.16 mas in $`\alpha `$ and $`\delta `$, respectively. These values are small enough that they do not contribute significantly to the measured spatial extent.
### 3.3 AD Leo
An image of AD Leo is shown in Fig. 2. It appears slightly elongated. This image is probably affected by the star’s high proper motion (Table 1), since the extension is exactly along the expected direction: the star appears blurred because of its motion during the synthesis observation.
As shown in the flux curve of AD Leo (Fig. 1, right plot), the total flux varied by a factor of 3 during the observations. Adding this fact to the high proper motion of the star, it is not surprising that attempts to fit a single gaussian to the $`u,v`$-data did not give consistent results. The weakness and variability of the star make the estimation of errors on the size difficult. However, from the image (see Fig. 2), we can note that the apparent FWHP perpendicular to the motion corresponds to the beam FWHP in this direction. The intrinsic FWHP size of the emitting region is therefore likely to be less than half the beam FWHP or about 1 mas, which equals the estimated optical diameter of the star (see Table 1). This might indicate a very compact corona or an emitting spot on the surface of the star. A very conservative upper limit on the size of the corona in the former case would be to assume the emitting region to be an optically thin sphere instead of a gaussian (Pearson 1995), in which case the diameter is less than 1.8 times the photosphere diameter, and it therefore has an extent above the photosphere of less than 0.8 $`R_{\ast }`$.
### 3.4 Astrometry
The relatively large signal-to-noise ratio for YZ CMi and the phase referencing to a calibrator with good ($`<`$0.5 mas) positional accuracy in the radio frame (Johnston et al. 1995) allowed us to determine a precise position for this star. This position was compared with the position given by the Hipparcos catalogue (ESA 1997). Correcting for proper motion and parallax, we found a discrepancy of 20.9 mas in $`\alpha `$ and 30.4 mas in $`\delta `$, thus a total deviation of 36.9 mas. The proper motion of the star is given in the Hipparcos catalogue as -344.9 $`\pm `$2.6 mas/yr in $`\alpha `$, and -450.8 $`\pm `$1.75 mas/yr in $`\delta `$. Considering the time interval between the two measurements of 6 years, the difference is 2 $`\sigma `$ and thus within the accuracy of the proper motion error bars. Combining the VLBA and Hipparcos positions (courtesy of F. Arenou), an improved proper motion of -348.6 $`\pm `$0.6 mas/yr in $`\alpha `$, and -446.6 $`\pm `$0.3 mas/yr in $`\delta `$ can be derived.
The position of AD Leo obtained with the VLBA was compared with those available in the Gliese and Tycho catalogues. The latter showed a deviation with the VLBA position of 176.3 mas and 100.0 mas in $`\alpha `$ and $`\delta `$, respectively. They are within one standard deviation of the Tycho catalogue accuracy.
## 4 Discussion and conclusions
VLBA observations have spatially resolved YZ CMi and the data could be fitted with a circular gaussian of a FWHP of 0.98 $`\pm `$0.2 mas. The radio corona extent is $`1.77\times 10^{10}\pm 8.8\times 10^9`$ cm above the photosphere (the photospheric radius is assumed to be $`2.6\times 10^{10}`$ cm, Pettersen 1980). For AD Leo, which is closer and has a larger photosphere, but was observed at a much weaker flux level, the corona was not resolved and we set a robust upper limit of $`2.8\times 10^{10}`$ cm above the photosphere (see Sect. 3.3).
### 4.1 The brightness temperature
For YZ CMi (see Table 1) we obtain a mean brightness temperature $`T_b=7.3\times 10^7`$ K, while for AD Leo we set an upper limit of $`T_b=4.93\times 10^7`$ K. These mean $`T_b`$ values are smaller than previously reported for YZ CMi by Benz & Alef (1991) and AD Leo at 18 cm (Jackson et al. 1989). The lower values at 3.6 cm are still consistent with a non-thermal spectrum from gyrosynchrotron emission but do not formally exclude thermal processes. However, the significant circular polarisation found during the observations strongly argues for a gyrosynchrotron emission mechanism.
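The brightness temperatures quoted here follow from the Rayleigh-Jeans relation $`T_b=Sc^2/(2k\nu ^2\mathrm{\Omega })`$, with $`\mathrm{\Omega }`$ the source solid angle. A short, illustrative Python estimate is given below; the exact number depends on the assumed source geometry (a uniform disk of diameter equal to the fitted FWHP roughly reproduces the mean YZ CMi value quoted above, while a gaussian gives a somewhat lower value).

```python
import numpy as np

K_B = 1.380649e-23                       # J/K
C = 2.99792458e8                         # m/s
MAS = np.pi / (180.0 * 3600.0 * 1000.0)  # one milliarcsecond in radians

def t_brightness(flux_mjy, size_mas, freq_ghz, geometry="disk"):
    """Brightness temperature (K) for a source of angular size size_mas."""
    S = flux_mjy * 1.0e-29               # W m^-2 Hz^-1
    nu = freq_ghz * 1.0e9
    theta = size_mas * MAS
    if geometry == "disk":               # uniform disk of diameter theta
        omega = np.pi * theta ** 2 / 4.0
    else:                                # circular Gaussian of FWHM theta
        omega = np.pi * theta ** 2 / (4.0 * np.log(2.0))
    return S * C ** 2 / (2.0 * K_B * nu ** 2 * omega)

# YZ CMi: mean flux 2.9 mJy, FWHP 0.98 mas, observed at 8.41 GHz
print("disk:    ", f"{t_brightness(2.9, 0.98, 8.41, 'disk'):.2e} K")
print("gaussian:", f"{t_brightness(2.9, 0.98, 8.41, 'gaussian'):.2e} K")
```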
The derived extent of the coronae above the photosphere of the dMe stars is compared to the Sun in Table 3. The solar value refers to stereoscopic measurements of the thermal gyroresonance emission of active regions. The average value at 10-14 GHz reported by Aschwanden et al. (1995) has been used.
The experience from the solar radio emission makes it clear that the observed radio size is only a lower limit of the size of the stellar corona. Nevertheless, these results indicate that the observed active dMe stars have much larger active coronae than the Sun. It might be that such dMe stars possess systems of closed loops reaching heights in excess of a stellar diameter. One way to realize such extended coronae is by large distances between footpoints, as possibly seen in the case of UV Cet B (Benz et al. 1998). This indicates either that active regions are very large, or that active loops preferentially connect different active regions.
###### Acknowledgements.
We thank F. Arenou for the combined evaluation of VLBA and Hipparcos astrometry data. The Very Large Baseline Array and the Very Large Array are operated by Associated Universities, Inc. under contract with the US National Science Foundation. The work at ETH Zurich is financially supported by the Swiss National Science Foundation (grant No. 20-046656.96 ). |
no-problem/9912/hep-ex9912019.html | ar5iv | text | # 1 Introduction
## 1 Introduction
The high luminosity delivered by LEP after the doubling of the $`e^+e^{-}`$ collision energy means that LEP2 is now providing substantial samples of W bosons with which to make tests of the Standard Model complementary to those of LEP1. This, together with collision energies in excess of 200 GeV, is ensuring that the three central physics goals of the LEP2 programme are properly explored: the precise measurement of the W mass; the measurement of vector-boson self-interactions; and the search for new particles. This review discusses the first two of these objectives; the third is addressed elsewhere in these proceedings.
After a hesitant start of LEP above the W pair production threshold in 1996, subsequent years have seen increasingly large data samples accumulated by the experiments (Table 1). A total of around 480 pb⁻¹ has now been recorded by each experiment at LEP2. This integrated exposure can be expected to pass 650 pb⁻¹ by the time data-taking is completed in the second half of 2000. During 1999, the collision energy has reached, and surpassed, its design value of $`\sqrt{s}=200`$ GeV (at the time of the conference the LEP collision energy had just reached 200 GeV; the figures quoted in the text refer to the achievements at the time of writing, November 1999). As illustrated in Figure 1, the machine performance has been better than ever in 1999, in spite of the increased load on the RF system imposed by the higher energies.
This report describes various Standard Model tests made with the first parts of the LEP2 data. Because analyses are in varying stages of completion, essentially all results quoted are preliminary. In addition to the measurements made with W pairs, studies of fermion-pair production, QED tests, and Z pair production are reviewed.
## 2 Fermion-Pair Production
Although two to three orders of magnitude less than at LEP1 (Figure 3b), the cross-section for fermion-pair production at LEP2 is still high compared to many other processes. The presence of the Z resonance at lower centre-of-mass energies strongly affects the characteristics of events at higher energies, because initial-state photon radiation leads to so-called “radiative return” events where the fermion-pair system has an invariant mass ($`\sqrt{s^{\prime }}`$) close to the Z. As a result two typical populations of fermion-pair events are observed, as illustrated in Figure 3a: the radiative return events with $`\sqrt{s^{\prime }}\approx m_Z`$, and non-radiative events with $`\sqrt{s^{\prime }}\approx \sqrt{s}`$. The latter events are of more interest, as they probe the full centre-of-mass energy scale.
The cross-sections for fermion-pair production have been measured in the hadronic channel (q$`\overline{\mathrm{q}}`$ production), and for $`\mu ^+\mu ^{-}`$, $`\tau ^+\tau ^{-}`$ and $`e^+e^{-}`$ (the latter dominated by t-channel Bhabha scattering). The Standard Model expectation describes the data well, as shown in Figure 4a for the combined LEP cross-sections.
For muon and tau pair production, the easily identifiable lepton charge is further employed to measure the forward-backward asymmetry of the non-radiative events. As shown in Figure 3, the asymmetry for non-radiative events is large at these energies, in contrast to that at LEP1. Again the Standard Model expectation describes the data well – an expanded view of the higher energy LEP-averaged asymmetries is shown in Figure 4b.
In addition to the results presented, measurements have further been made of heavy quark pair production at LEP2 energies . They too are well described by the Standard Model expectation. Constraints on a wide range of new physics scenarios have been placed with the fermion-pair data, ranging from four-fermion contact interactions to electroweak scale quantum gravity . Discussion of these topics is beyond the scope of this report.
## 3 QED Tests
A few electroweak processes at LEP2 do not have any significant contribution from massive vector boson exchange, and so may be employed to test the adequacy of quantum electrodynamics, QED, at the highest LEP energies.
Tests have been made with the process $`e^+e^{-}\to \gamma \gamma (\gamma )`$. Possible deviations from the QED expectation are parameterised in terms of an effective cut-off parameter $`\mathrm{\Lambda }_\pm `$. Typical limits obtained by each experiment are $`\mathrm{\Lambda }_\pm \gtrsim 290`$ GeV at 95% CL.
## 4 W Pair Production and Decays
At LEP2, three diagrams contribute to doubly-resonant W pair production, as shown in Figure 5. The neutrino exchange diagram dominates close to the W pair threshold, and in the Standard Model the main effect of the other two diagrams at LEP energies is a negative interference. This is illustrated in Figure 6, where the expected cross-section is shown with the full Standard Model structure, and if one or both of the diagrams with triple vector boson couplings is omitted. The effect of the triple gauge coupling is discussed further in section 6.
The typical selection efficiencies and purities for W pair events are given in Table 2. The main backgrounds arise from other four-fermion processes, and non-radiative $`\mathrm{q}\overline{\mathrm{q}}`$ events in the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ and $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ decay channels, or lepton-pair production and multiperipheral interactions in the $`\ell \nu \ell \nu `$ channel. The cross-sections measured are shown in Figure 6 for the various LEP2 centre-of-mass energies. The peak cross-section, of around 17 pb, is more than three orders of magnitude less than the Z cross-section at LEP1: consequently even with the high luminosities collected at LEP2 the W pair events number a few thousand per experiment. Nonetheless, precision electroweak measurements can be made.
The branching ratios for W decays via the electron, muon, tau and hadronic modes have been measured by all four experiments. The LEP average results are given in Table 3. The results for the individual leptonic channels are consistent with lepton universality, and the average leptonic branching ratio is also consistent with the Standard Model expectation. The precision of the measurement of B(W$`\to \ell \nu `$) from LEP is now better than that from $`\mathrm{p}\overline{\mathrm{p}}`$ colliders.
The leptonic W branching ratio can be re-interpreted in terms of the CKM matrix element $`V_{cs}`$ without need for a CKM unitarity constraint, using the relatively well-known values of other CKM matrix elements involving light quarks . These indirect constraints currently lead to a value of $`|V_{cs}|=0.997\pm 0.020`$, much more precise than the value derived from D decays of 1.04 $`\pm `$ 0.16 .
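Schematically, the extraction uses the standard tree-level relation (including the leading QCD correction) between the leptonic branching ratio and the CKM elements accessible in W decays,
$`\frac{1}{B(\mathrm{W}\to \ell \nu )}=3\left[1+\left(1+\frac{\alpha _s(m_W^2)}{\pi }\right)\underset{i=u,c;\,j=d,s,b}{\sum }|V_{ij}|^2\right],`$
so that, with the other five matrix elements fixed to their measured values, the LEP leptonic branching ratio translates directly into $`|V_{cs}|^2`$.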
## 5 Measurement of the W Mass and Width
At centre-of-mass energies above the W pair threshold, the technique for measuring the W mass lies in the reconstruction of the directions and energies of the four primary W decay products. These may be either four quarks, approximated by four jet directions and energies, for the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ channel; or two quarks/jets and a charged lepton for the $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ channel, deducing the neutrino direction and energy from the missing momentum in the event. Decays to $`\ell \nu \ell \nu `$ are of limited use because at least two neutrinos are undetected. The W decay products are paired up to give reconstructed W mass estimates. A substantial improvement is made in the mass resolution for both $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ and $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ channels by applying a kinematic fit, constraining the total energy and momentum in the event to be that of the known colliding electron-positron system, making a small correction for possible unobserved initial-state radiation.
Typical reconstructed mass distributions from the kinematic fit are shown in Figure 7, for the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ and three $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ channels. Clear W mass peaks are observed, in some cases with very low backgrounds. The W mass is extracted from the measured W masses in each data event using a Monte Carlo technique, the details of which differ from one experiment to another. The Monte Carlo techniques have in common that they use full detector simulations to correct for the effects of finite detector acceptance and resolution, as well as initial-state radiation, and in most cases a reweighting technique is used to model different true W mass values. The results obtained from the fits are given in Table 4, separately for the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ and $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ channels (in the case of ALEPH, an analysis employing also purely leptonic W decays is included).
For all of these fits, the W width is taken to have its expected Standard Model dependence on the W mass. The results are consistent with each other, and also with the W mass extracted from the measurement of the W pair threshold cross-section at 161 GeV. The overall LEP average W mass measurement obtained is thus :
$$m_W=80.350\pm 0.056\text{ GeV (LEP)}$$
(1)
The LEP W mass measurement is slightly more precise than that from $`\mathrm{p}\overline{\mathrm{p}}`$ colliders, of $`80.448\pm 0.062`$ GeV . The two measurements have similar precision but use very different techniques, and so are essentially uncorrelated. A substantial improvement is obtained by averaging them :
$$m_W=80.394\pm 0.042\text{ GeV (World Average)}$$
(2)
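Since the LEP and $`\mathrm{p}\overline{\mathrm{p}}`$ numbers are essentially uncorrelated, the world average is simply their inverse-variance weighted mean; the following few lines of Python, included purely as an illustration, reproduce eq. (2) from eq. (1) and the Tevatron value.

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its uncertainty (uncorrelated inputs)."""
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

m_w, err = weighted_average([80.350, 80.448], [0.056, 0.062])   # LEP, p-pbar (GeV)
print(f"world average m_W = {m_w:.3f} +/- {err:.3f} GeV")        # ~80.394 +/- 0.042
```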
The width of the W mass distributions shown in Figure 7 has components from the true W width and from detector resolution. In many events the mass resolution is comparable to, or better than, the true width. It is consequently possible to measure directly both the W mass and width, and in practice the two results are little correlated. This has been done by three experiments , the combined result currently being:
$$\mathrm{\Gamma }_W=2.12\pm 0.20\text{ GeV.}$$
(3)
With the full LEP2 statistics the precision should improve on the current measurement from CDF, of 2.055$`\pm `$0.125 GeV .
With the current uncertainties, the measurement of the W mass starts to provide an interesting further test of the Standard Model relative to other precision electroweak measurements. This is illustrated in Figure 8: the predicted W and top masses extracted from fits to lower energy electroweak data are consistent with the direct measurements from LEP and the Tevatron , and the precision of the measurements is similar to that of the prediction. From the overlaid curves showing the Standard Model expectation as a function of the Higgs boson mass, it is further evident that both the precise lower energy measurements, and the direct W and top mass measurements taken together, separately favour a Standard Model Higgs boson in the relatively low mass region.
At the time of writing, the LEP W mass analyses are in a stage of detailed review and improvement, as careful systematic studies are needed to match the statistical precision: for this reason the W mass results of all experiments are preliminary. It is not possible to predict with certainty the final W mass error from LEP2, but it is interesting to consider the main error sources, in order to extrapolate to the full data sample. If the present LEP average W mass result is broken down into statistical and systematic components, they are respectively approximately 36 MeV and 43 MeV. This does not mean simply that a systematic limit is being reached, because the analysis is performed in two channels, $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ and $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$, of approximately equal statistical weight but with different systematic error behaviour. In the $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ case the main systematic errors arise from detector calibration uncertainties and Monte Carlo statistics: these are amenable to reduction with more statistics, and are uncorrelated between the different LEP experiments. In neither case do they give a large contribution to the combined LEP error. In the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ channel, on the other hand, the systematic errors are dominated by contributions from final-state interaction effects such as colour-reconnection and Bose-Einstein correlations, and the modelling of backgrounds and fragmentation uncertainties. These error sources are largely correlated between different experiments, and will be relatively difficult to reduce. Consequently, with the full LEP2 data sample the $`\mathrm{q}\overline{\mathrm{q}}\ell \nu `$ channel analysis should remain statistics limited, but the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ channel will probably be limited by systematics at some level below the currently estimated error. A realistic expectation for the overall combined W mass error from LEP2 then lies in the region of 30 MeV.
## 6 Gauge Boson Self-Interactions
As mentioned in Section 4, W pair production at LEP2 probes the triple gauge boson vertices WW$`\gamma `$ and WWZ. The form and strength of these vertex couplings is unambiguously predicted by the gauge structure of the Standard Model. Possible new physics beyond the Standard Model may additionally bring in extra effective interactions between three gauge bosons without affecting other sectors, mandating a test of this aspect of the theory.
The general Lagrangian for the WWV (V=$`\gamma `$ or Z) interaction contains 14 parameters. Constraints of C, P and CP invariance and U(1)$`_{\mathrm{em}}`$ gauge invariance reduce these to five parameters, and constraints from low energy measurements reduce these further to three, conventionally taken to be $`\kappa _\gamma `$, $`g_1^Z`$ and $`\lambda _\gamma `$, which are respectively 1, 1 and 0 in the Standard Model. The LEP experimental analyses are performed in terms of these three variables, or equivalently deviations from their respective Standard Model values.
The triple gauge couplings affect the characteristics of W pair production in three ways: the total cross-section changes, as shown in Figure 6, by an amount which increases rapidly with $`\sqrt{s}`$; the production angular distribution of the W is modified, as shown in Figure 9a; finally, the helicity mixture of the Ws produced at a given $`\mathrm{cos}\theta `$ is affected. This last effect can be measured experimentally by using the W decay as a polarisation analyser. Typical W decay angle distributions under different coupling hypotheses are shown in Figure 9b.
Values of triple gauge coupling parameters are extracted from the W pair data using the W production and decay angles . This typically employs so-called optimal observables constructed from these angles, which has the advantage of not requiring analysis of a five-dimensional differential distribution. The preliminary results obtained, averaging over all experiments and including also the less sensitive single-W and $`\nu \nu \gamma `$ constraints , are :
$`\kappa _\gamma `$ $`=`$ $`1.04\pm 0.08`$ (4)
$`g_1^Z`$ $`=`$ $`0.99\pm 0.03`$ (5)
$`\lambda _\gamma `$ $`=`$ $`0.04\pm 0.04`$ (6)
strikingly well described by the Standard Model predictions of unity, unity and zero, respectively. The results are quoted for the case where only one of the anomalous parameters differs from the Standard Model at a time: fits have also been performed allowing up to three parameters to vary at once: consistent results are obtained.
An alternative perspective on the W-pair production process is provided by analyses which directly measure the relative rates of production of transversely and longitudinally polarised W bosons. A recent analysis along these lines by L3 gives the fraction of longitudinal W polarisation as (24.4$`\pm `$4.8$`\pm `$3)% at 189 GeV. Overall, longitudinal W polarisation is established at the five standard deviation level. This contrasts with W production from the $`\mathrm{q}\overline{\mathrm{q}}^{\prime }\to \mathrm{W}`$ process at $`\mathrm{p}\overline{\mathrm{p}}`$ colliders, where the W is transversely polarised.
Recently a study has been carried out by OPAL of the quartic gauge couplings between WW$`\gamma \gamma `$ and WWZ$`\gamma `$. These couplings are non-zero in the Standard Model, but the effect on the data is tiny for the LEP2 sample. However, constraints have been placed on possible large anomalous values of these parameters using WW$`\gamma `$ production with photon energies above 10 GeV, and also the $`\nu \overline{\nu }\gamma \gamma `$ final-state, where there is sensitivity from the W fusion diagrams. This analysis places the first, albeit weak, direct limits on quartic gauge couplings.
## 7 Z Pair Production
Since 1997, LEP has been running at centre-of-mass energies around and above the Z pair production threshold. Unlike the W case, Z pair production involves no triple gauge coupling diagrams in the Standard Model, but instead simply those with $`t`$ and $`u`$ channel electron exchange. Production is suppressed by factors of $`(1-4\mathrm{sin}^2\theta _W)`$ at two eeZ vertices, so that the Z pair cross-section is significantly lower than that for W pairs. The measured cross-section is shown in Figure 10, compared to the Standard Model prediction. A particular interest of this process is that it forms an irreducible background to potential Higgs boson production if the Higgs mass were around the Z mass, in cases where one Z decays to b quarks. Figure 10 indicates that ZZ production is well understood.
## 8 Summary
With the excellent performance of the LEP machine at high energy in the last couple of years, electroweak physics at LEP2 now truly merits the epithet “precise”. The core measurements of the LEP2 programme, the W mass and the vector boson self-couplings, have been made with precision better, in some cases substantially so, than elsewhere. Tests of the Standard Model with other processes serve to confirm the superb description it provides of the data.
Finalisation of the current analyses, and inclusion of the 1999 and 2000 data samples, will provide significant further improvements in precision, although requiring care and attention to the encroaching systematic difficulties. By the time of the next Lepton-Photon meeting, the full fruits of this labour should be harvested.
Much credit is due to the LEP electroweak working group for the preparation of most of the averages and figures presented. The work of this team makes the job of a rapporteur simpler and more enjoyable. For their help during the preparation of this talk, I wish to thank: R. Bailey, R. Clare, J. Ellison, M. Grünewald, R. Hemingway, M. Hildreth, E. Lançon, M. Lancaster, C. Matteuzzi, D. Miller, K. Mönig, D. Plane, A. Straessner, D. Strom, M. Thomson, H. Voss, P. Ward and P. Wells.
Discussion
Howie Haber (UCSC): Could you comment on the maximum energy achievable at LEP?
Charlton: Increasing the energy beyond 200 GeV will be difficult, and it is unclear how the performance will evolve. An absolute maximum is 205-206 GeV, but 202-203 GeV may be more realistic.
Tom Ferbel (University of Rochester): You mentioned the observation of longitudinal W polarization at LEP2. There is also dominant longitudinal W production in top quark decay, and this has been reported by the Tevatron experiments.
Charlton: Yes, that is correct. The significance observed in that case, however, is at a much lower level than that from LEP.
Michael Peskin (SLAC): You quoted a large systematic error for the W mass measurement in WW$`\to \mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$—50-90 MeV—due to color reconnection and Bose-Einstein effects. Models of color reconnection predict observable manifestations in the final state, and since these are not observed, the constraints should put bounds on these errors. Could you comment on this?
Charlton: Some models of colour-reconnection effects have been excluded by LEP data; however, others will be hard to test even with the full LEP2 statistics. The errors I quoted also have large Monte Carlo statistical components. A final error on the $`\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$ channel from this source of around 30 MeV may be achievable. |
no-problem/9912/astro-ph9912155.html | ar5iv | text | # Monte Carlo Simulations of Globular Cluster Evolution - II. Mass Spectra, Stellar Evolution and Lifetimes in the Galaxy.
## 1 Introduction
The development of numerical methods for simulating the dynamical evolution of dense star clusters in phase space started in the 1970’s with Monte Carlo techniques (Henon 1971a,b; Spitzer 1987, and references therein), and several groups applied these techniques to address problems related to the evolution of globular clusters. A method based on the direct numerical integration of the Fokker-Planck equation in phase space was later developed by Cohn (1979, 1980). The Fokker-Planck (hereafter F-P) methods have since been greatly improved, and they have been extended to more realistic simulations that take into account (approximately) the presence of a mass spectrum and tidal boundaries (Takahashi 1995, 1996, 1997; Takahashi & Portegies Zwart 1998, 1999), binary interactions (Gao et al. 1991; Drukier et al. 1999), gravitational shock heating by the galactic disk and bulge (Gnedin, Lee, & Ostriker 1999), and mass loss due to stellar evolution (see Meylan & Heggie 1997 for a recent review). Direct $`N`$-body simulations can also be used to study globular cluster dynamics (see Aarseth 1999 for a recent review), but, until recently, they have been limited to rather unrealistic systems containing very low numbers of stars. The GRAPE family of special-purpose computers now makes it possible to perform direct $`N`$-body integrations for clusters containing up to $`N\simeq 32,000`$ single stars, although the computing time for such large simulations remains considerable (see Makino et al. 1997, and references therein). This is the second of a series of papers in which we study globular cluster dynamics using a Monte Carlo technique similar to the original Henon (1971b) method. Parallel supercomputers now make it possible for the first time to perform Monte Carlo simulations for the dynamical evolution of dense stellar systems containing up to $`N\simeq 10^5`$–$`10^6`$ stars in less than 1 day of computing time.
The evolution of globular clusters in the Galactic environment has been studied using a variety of theoretical and numerical techniques. The first comprehensive study of cluster lifetimes was conducted by Chernoff & Weinberg (1990, hereafter CW) using F-P simulations. They included the effects of a power-law mass spectrum, a tidal cut-off radius imposed by the tidal field of the Galaxy, and mass loss due to stellar evolution. Their results were surprising, and far reaching, since they showed for the first time that the majority of clusters with a wide range of initial conditions would be disrupted in $`10^{10}`$yr, and would not survive until core collapse. CW carried out their calculations using a 1-D F-P method, in which the stellar distribution function in phase space is assumed to depend on the orbital energy only. However, more recently, similar calculations undertaken using direct $`N`$-body simulations gave cluster lifetimes up to an order of magnitude longer compared to those computed by CW (Fukushige & Heggie 1995; Portegies Zwart et al. 1998). The discrepancy appears to be caused by an overestimated mass loss rate in the 1-D F-P formulation (Takahashi & Portegies Zwart 1998), which does not properly account for the velocity anisotropy in the cluster. To overcome this problem, new 2-D versions of the F-P method (in which the distribution function depends on both energy and angular momentum) have been employed (Takahashi 1995, 1996, 1997; Drukier et al. 1999).
The 2-D F-P models provide cluster lifetimes in significantly better agreement with direct $`N`$-body integrations (Takahashi & Portegies Zwart 1998). However, the 2-D F-P models still exhibit a slightly higher mass loss rate compared to $`N`$-body simulations. This may result from the representation of the system in terms of a continuous distribution function in the F-P formulation, which effectively models the behavior of the cluster in the $`N\mathrm{}`$ limit. To test this possibility, Takahashi & Portegies Zwart (1998) introduced an additional free parameter $`\nu _{\mathrm{esc}}`$ in their F-P models, attempting to take into account the finite ratio of the crossing time to the relaxation time (see also Lee & Ostriker 1987; Ross et al. 1997). They used this free parameter to lower the overall mass loss rate in their F-P models and obtained better agreement with $`N`$-body simulations (performed with up to $`N=32,768`$). Takahashi & Portegies Zwart (2000, hereafter TPZ) show that, after calibration, a single value of $`\nu _{\mathrm{esc}}`$ gives consistent agreement with $`N`$-body simulations for a broad range of initial conditions.
The first paper in this series presented details about our new parallel Monte Carlo code as well as the results of a series of initial test calculations (Joshi, Rasio & Portegies Zwart 2000, hereafter Paper I). We found excellent agreement between the results of our test calculations and those of direct $`N`$-body and 1-D Fokker-Planck simulations for a variety of single-component clusters (i.e., containing equal-mass stars). However, we found that, for tidally truncated clusters, the mass loss rate in our models was significantly lower, and the core-collapse times significantly longer, than in corresponding 1-D F-P calculations. We noted that, for a single case (a $`W_0=3`$ King model), our results were in good agreement with those of 2-D F-P calculations by Takahashi (1999).
In this paper, we extend our Monte Carlo calculations to multi-component clusters (described by a continuous, power-law stellar mass function), and we study the evolution of globular clusters with a broad range of initial conditions. Our calculations include an improved treatment of mass loss through the tidal boundary, as well as mass loss due to stellar evolution. Our new method treats the mass loss through the tidal boundary more carefully in part by making the timestep smaller, especially in situations where the tidal mass loss can lead to an instability resulting in rapid disruption of the cluster. We also account for the shrinking of the tidal boundary in each timestep by iteratively removing stars with apocenter distances greater than the tidal boundary, and recomputing the tidal radius using the new (lower) mass of the cluster. We compare our new results with those of CW and TPZ. We also go beyond these previous studies and explore several other issues relating to the pre-collapse evolution of globular clusters. We study in detail the importance of the velocity anisotropy in determining the stellar escape rate. We also compare the orbital properties of escaping stars in disrupting and collapsing clusters. Finally, we consider the effects of an eccentric orbit in the Galaxy, allowing for the possibility that a cluster may not fill its Roche lobe at all points in its orbit.
As in most previous studies, the calculations presented in this paper are for clusters containing single stars only. The dynamical effects of hard primordial binaries on the overall cluster evolution are not significant during most of the pre-collapse phase, although a large primordial binary fraction could accelerate the evolution to core collapse since binaries are on average more massive than single stars. Energy generation through binary – single star and binary – binary interactions becomes significant only when the cluster approaches core collapse and interaction rates in the core increase substantially (Hut, McMillan & Romani 1992; Gao et al. 1991; McMillan & Hut 1994). Formation of hard “three-body” binaries can also be neglected until the cluster reaches a deep core-collapse phase. During the pre-collapse evolution, hard binaries behave approximately like single more massive stars, while soft binaries (which have a larger interaction cross section) may be disrupted. Since we do not include the effects of energy generation by primordial binaries in our calculations, the (well-defined) core-collapse times presented here may be re-interpreted as corresponding approximately to the onset of the “binary-burning” phase, during which a similar cluster containing binaries would be supported in quasi-equilibrium by energy-generating interactions with hard binaries in its core (Spitzer & Mathieu 1980; Goodman & Hut 1989; McMillan, Hut & Makino 1990, Gao et al. 1991). Our calculations of disruption times (for clusters that disrupt in the tidal field of the Galaxy before reaching core collapse) are largely independent of the cluster binary content, since the central densities and core interaction rates in these clusters always remain very low.
Our paper is organized as follows. In §2, we describe the treatment of tidal stripping and mass loss due to stellar evolution in our Monte Carlo models, along with a discussion of the initial conditions for our simulations. In §3, we present the results of our simulations and comparisons with F-P calculations. In §4, we summarize our results.
## 2 Monte Carlo Method
Our code, described in detail in Paper I, is based on the orbit-averaged Monte Carlo method first developed by Henon (1971a,b). Although in Paper I we only presented results of test calculations performed for single-component clusters, the method is completely general, and the implementation of an arbitrary mass spectrum is straightforward. This section describes additional features of our code that were not included in Paper I: an improved treatment of mass loss through the tidal boundary (§2.1), and a simple implementation of stellar evolution (§2.2). The construction of initial multi-component King models for our study of cluster lifetimes is described in §2.3. The highly simplified treatments of tidal effects and stellar evolution adopted here are for consistency with previous studies, since our intent in this paper is still mainly to establish the accuracy of our code by presenting detailed comparisons with the results of other methods. In future work, however, we intend to implement more sophisticated and up-to-date treatments of these effects.
### 2.1 Tidal Stripping of Stars
In an isolated cluster, the mass loss rate (up to core collapse) is relatively small, since escaping stars must acquire positive energies mostly through rare, strong interactions in the dense cluster core (see the discussion in Paper I, §3.1). In contrast, for a tidally truncated cluster, the mass loss is dominated by diffusion across the tidal boundary (also referred to as “tidal stripping”). In our Monte Carlo simulations, a star is assumed to be tidally stripped from the cluster (and lost instantaneously) if the apocenter of its orbit in the cluster is outside the tidal radius. This is in contrast to the energy-based escape criterion that is used in 1-D F-P models, where a star is considered lost if its energy is greater than the energy at the tidal radius, regardless of its angular momentum. As noted in Paper I, the 2-D treatment is crucial in order to avoid overestimating the escape rate, since stars with high angular momentum, i.e., on more circular orbits, are less likely to be tidally stripped from the cluster than those (with the same energy) on more radial orbits.
A subtle, yet important aspect of the mass loss across the tidal boundary, is the possibility of the tidal stripping process becoming unstable if the tidal boundary moves inward too quickly. As the total mass of the cluster decreases through the escape of stars, the tidal radius of the cluster shrinks. This causes even more stars to escape, and the tidal boundary shrinks further. If at any time during the evolution of the cluster the density gradient at the tidal radius is too large, this can lead to an unstable situation, in which the tidal radius continues to shrink on the dynamical timescale, causing the cluster to disrupt. The development of this instability characterizes the final evolution of all clusters with a low initial central concentration that disrupt in the Galactic tidal field before reaching core collapse.
We test for this instability at each timestep in our simulations, by iteratively removing escaping stars and recomputing the tidal radius with the appropriately lowered cluster mass. For stable models, this iteration converges quickly, giving a finite escape rate. Even before the development of the instability, this iterative procedure must be used for an accurate determination of the mass loss rate. When the mass loss rate due to tidal stripping is high, we also impose a timestep small enough that no more than 1% of the total mass is lost in a single timestep. This is to ensure that the potential is updated frequently enough to take the mass loss into account. This improved treatment of tidal stripping was not used in our calculations for Paper I. However, all the results presented in Paper I were for clusters with equal-mass stars, with no stellar evolution. Under those conditions, all models reach core collapse, with no disruptions. The issue of unstable mass loss is not significant in those cases, and hence the results of Paper I are unaffected.
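To make the escape criterion and the iteration explicit, the following Python sketch illustrates the procedure; it is our own schematic illustration rather than an excerpt from the simulation code, and it uses synthetic apocenter data together with the $`r_t\propto M^{1/3}`$ scaling appropriate for a cluster on a circular Galactic orbit.

```python
import numpy as np

def strip_tidal(r_apo, m_star, M0, r_tide, n_iter_max=100):
    """Iteratively remove stars whose orbital apocenter lies outside the tidal
    radius, recomputing r_t from the reduced cluster mass after each pass,
    until the escaper set converges (stable case) or the mass loss runs away
    (disruption)."""
    bound = np.ones(r_apo.size, dtype=bool)
    M = M0
    for _ in range(n_iter_max):
        r_t = r_tide(M)
        new_escapers = bound & (r_apo > r_t)
        if not new_escapers.any():
            return bound, M, r_t                 # converged: finite mass-loss step
        bound &= ~new_escapers
        M -= m_star[new_escapers].sum()
        if M < 0.02 * M0:                        # runaway: treat as disruption
            break
    return bound, max(M, 0.0), r_tide(max(M, 0.0))

# toy example in code units: r_t scales as M^(1/3) for a circular Galactic orbit
rng = np.random.default_rng(1)
r_apo = rng.exponential(1.0, size=10**5)
m_star = np.full(r_apo.size, 1.0 / r_apo.size)
bound, M, r_t = strip_tidal(r_apo, m_star, 1.0, lambda M: 3.0 * M ** (1.0 / 3.0))
print(f"bound mass fraction {M:.3f}, tidal radius {r_t:.3f}")
```

For a stable model the loop exits after a few passes with a small escaper set; a model in the unstable regime instead keeps losing mass until the 2% floor is reached.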
### 2.2 Stellar Evolution
Our simplified treatment follows those adopted by CW and TPZ. We assume that a star evolves instantaneously to become a compact remnant at the end of its main-sequence lifetime. Indeed, since the evolution of our cluster models takes place on the relaxation timescale (i.e., the timestep is a fraction of the relaxation time $`t_r\sim 10^9`$ yr), while the dominant mass loss phase during late stages of stellar evolution takes place on a much shorter timescale ($`10^6`$yr), the mass loss can be considered instantaneous. We neglect mass losses in stellar winds for main-sequence stars. We assume that the main-sequence lifetime and remnant mass are functions of the initial stellar mass only. Table 1 shows the main-sequence lifetimes of stars with initial masses up to $`15M_{}`$, and the corresponding remnant masses. In order to facilitate comparison with F-P calculations (CW and TPZ), we use the same lifetimes and remnant masses as CW. For stars of mass $`m<4M_{}`$, the remnants are white dwarfs of mass $`0.58M_{}+0.22(m-M_{})`$, while for $`m>8M_{}`$, the remnants are neutron stars of mass $`1.4M_{}`$. Stars with intermediate masses are completely destroyed (Iben & Renzini 1983). The lowest initial mass considered by CW was $`0.83M_{}`$. For lower mass stars, in order to maintain consistency with TPZ, we extrapolate the lifetimes assuming a simple $`m^{-3.5}`$ scaling (Drukier 1995). We interpolate the values given in Table 1 using a cubic spline to obtain lifetimes for stars with intermediate masses, up to $`15M_{}`$. In our initial models (see §2.3), we assign masses to stars according to a continuous power-law distribution. This provides a natural spread in their lifetimes, and avoids having large numbers of stars undergoing identical stellar evolution. In contrast, in F-P calculations, the mass function is approximated by $`20`$ discrete logarithmically spaced mass bins over the entire range of masses. The mass in each bin is then reduced linearly in time from its initial mass to its final (remnant) mass, over a time interval equal to the maximum difference in main-sequence lifetimes spanned by the stars in that mass bin (see TPZ for further details). This has the effect of averaging the effective mass loss rate over the masses in each bin.
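The prescription can be summarised in a few lines of Python. The lifetime table below is a placeholder (Table 1 itself is not reproduced in the text), so only the remnant-mass rule, the cubic-spline interpolation, and the $`m^{-3.5}`$ extrapolation below $`0.83M_{}`$ are meant to mirror the treatment described above; interpolating in log space is a choice of ours that the text does not specify.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative main-sequence lifetimes (Myr) for a few initial masses (Msun).
# These numbers are placeholders standing in for Table 1; they are not the
# values actually used in the simulations.
m_tab = np.array([0.83, 1.0, 2.0, 4.0, 8.0, 15.0])
t_tab = np.array([2.0e4, 1.0e4, 1.1e3, 1.6e2, 3.5e1, 1.2e1])   # Myr

_spline = CubicSpline(np.log(m_tab), np.log(t_tab))   # spline in log space (our choice)

def ms_lifetime_myr(m):
    """Main-sequence lifetime: spline inside the tabulated range,
    t proportional to m^-3.5 below the lowest tabulated mass (0.83 Msun)."""
    m = np.atleast_1d(np.asarray(m, dtype=float))
    t = np.exp(_spline(np.log(np.clip(m, m_tab[0], m_tab[-1]))))
    low = m < m_tab[0]
    t[low] = t_tab[0] * (m[low] / m_tab[0]) ** (-3.5)
    return t

def remnant_mass(m):
    """Instantaneous remnant mass: a white dwarf of 0.58 + 0.22(m - 1) Msun
    below 4 Msun, complete destruction for 4-8 Msun, a 1.4 Msun neutron star
    above 8 Msun."""
    m = np.asarray(m, dtype=float)
    return np.where(m < 4.0, 0.58 + 0.22 * (m - 1.0),
                    np.where(m > 8.0, 1.4, 0.0))

print(ms_lifetime_myr([0.4, 1.0, 10.0]), remnant_mass([0.6, 5.0, 12.0]))
```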
We assume that all stars in the cluster were formed in the same star formation epoch, and hence all stars have the same age throughout the simulation. During each timestep, all the stars that have evolved beyond their main-sequence lifetimes are labelled as remnants, and their masses are changed accordingly. In the initial stages of evolution ($`t\lesssim 10^8`$ yr), when the mass loss rate due to stellar evolution is highest, care is taken to make the timestep small enough so that no more than 1% of the total mass is lost in a single timestep. This is to ensure that the system remains very close to virial equilibrium through this phase.
### 2.3 Initial Models
The initial condition for each simulation is a King model with a power-law mass spectrum. In order to facilitate comparison with the F-P calculations of CW and TPZ, we select the same set of initial King models for our simulations, with values of the dimensionless central potential $`W_0=`$1, 3, and 7. Most of our calculations were performed with $`N=10^5`$ stars, with a few calculations repeated with $`N=3\times 10^5`$ stars and showing no significant differences in the evolution. We construct the initial model by first generating a single-component King model with the selected $`W_0`$. We then assign masses to the stars according to a power-law mass function
$$f(m)\propto m^{-\alpha },$$
(1)
with $`m`$ between $`0.4M_{}`$ and $`15M_{}`$. We consider three different values for the power-law index $`\alpha =`$1.5, 2.5, and 3.5, assuming no initial mass segregation. Although this method of generating a multi-component initial King model is convenient and widely used to create initial conditions for numerical work (including $`N`$-body , F-P, and Monte Carlo simulations), the resulting initial model is not in strict virial equilibrium since the masses are assigned independently of the positions and velocities of stars. However, we find that the initial clusters relax to virial equilibrium within just a few timesteps in our simulations. Virial equilibrium is then maintained to high accuracy during the entire calculation, with the virial ratio $`2T/|W|=1`$ to within $`<1`$%.
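For concreteness, masses distributed as in equation (1) can be drawn by inverting the cumulative distribution; the short Python sketch below (our own illustration, not the production code) does this and recovers a mean stellar mass close to $`1M_{}`$ for $`\alpha =2.5`$, consistent with the mean quoted later in this section.

```python
import numpy as np

def sample_imf(n, alpha, m_min=0.4, m_max=15.0, rng=None):
    """Draw n stellar masses from f(m) proportional to m^-alpha on
    [m_min, m_max] by inverting the cumulative distribution (alpha != 1)."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

masses = sample_imf(10**5, alpha=2.5, rng=np.random.default_rng(0))
print(f"mean stellar mass: {masses.mean():.2f} Msun")   # close to 1 Msun for alpha = 2.5
```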
In addition to selecting the dimensionless model parameters $`W_0`$, $`N`$, and $`\alpha `$ (which specify the initial dynamical state of the system), we must also relate the dynamical timescale with the stellar evolution timescale for the system. The basic unit of time in our models is scaled to the relaxation time. Since the stellar evolution timescale is not directly related to the dynamical timescale, the lifetimes of stars (in years) cannot be computed directly from our code units. Hence, in order to compute the mass loss due to stellar evolution, we must additionally relate the two timescales by converting the evolution time to physical units. To maintain consistency with F-P calculations, we use the same prescription as CW. We assume a value for the initial relaxation time of the system, which is defined as follows:
$$t_\mathrm{r}=2.57F[\mathrm{Myr}],$$
(2)
where
$$F\equiv \frac{M_0}{M_{}}\frac{R_\mathrm{g}}{\mathrm{kpc}}\frac{220\mathrm{km}\mathrm{s}^{-1}}{v_\mathrm{g}}\frac{1}{\mathrm{ln}N}.$$
(3)
Here $`M_0`$ is the total initial mass of the cluster, $`R_\mathrm{g}`$ is its distance to the Galactic center (assuming a circular orbit), $`v_\mathrm{g}`$ is the circular speed of the cluster, and $`N`$ is the total number of stars. (This expression for the relaxation time is derived from CW’s eqs. , , and with $`m=M_{}`$, $`r=r_\mathrm{t}`$, and $`c_1=1`$.) Following CW, a group of models with the same value of $`F`$ (constant relaxation time) at the beginning of the simulation is referred to as a “Family.” Our survey covers CW’s Families 1, 2, 3 and 4. For each value of $`W_0`$ and $`\alpha `$, we consider four different models, one from each Family.
To convert from our code units, or “virial units” (see Paper I, §2.8 for details), to physical units, we proceed as follows. For a given Family (i.e., a specified value of $`F`$), cluster mass $`M_0`$, and $`N`$, we compute the distance to the Galactic center $`R_g`$ using equation (3). The circular velocity of $`220\mathrm{km}\mathrm{s}^{-1}`$ for the cluster (combined with $`R_g`$) then provides an inferred value for the mass of the Galaxy $`M_g`$ contained within the cluster orbit. Using $`M_0`$, $`M_g`$, and $`R_g`$, we compute the tidal radius for the cluster, as $`r_t=R_g(M_0/3M_g)^{1/3}`$, in physical units (pc). The ratio of the tidal radius to the virial radius (i.e., $`r_t`$ in code units) for a King model depends only on $`W_0`$, and hence is known for the initial model. This gives the virial radius in pc. The unit of mass is simply the total initial cluster mass $`M_0`$. Having expressed the units of distance and mass in physical units, the unit of evolution time (which is proportional to the relaxation time) can easily be converted to physical units (yr) using equation (31) from Paper I.
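The chain of conversions can be followed numerically. The Python sketch below is our own illustration of the procedure; the input values (the choice of $`F`$, the cluster mass, and the King-model ratio $`r_t/r_{vir}`$) are assumed, illustrative numbers rather than entries from Table 2.

```python
import numpy as np

G   = 4.302e-3   # gravitational constant in pc (km/s)^2 / Msun
V_G = 220.0      # circular speed of the Galaxy, km/s

def to_physical_units(F, M0_msun, N, rt_over_rvir):
    """Follow the conversion chain described in the text: eq. (3) (with
    v_g = 220 km/s) fixes R_g, the circular speed fixes the Galactic mass
    inside the orbit, r_t = R_g (M0/3M_g)^(1/3) gives the tidal radius in pc,
    and the King-model ratio r_t/r_vir (set by W_0, supplied by the caller)
    gives the virial radius, i.e. the length unit, in pc."""
    Rg_pc = 1.0e3 * F * np.log(N) / M0_msun        # eq. (3) solved for R_g
    Mg_msun = V_G**2 * Rg_pc / G                   # Galactic mass interior to the orbit
    rt_pc = Rg_pc * (M0_msun / (3.0 * Mg_msun)) ** (1.0 / 3.0)
    rvir_pc = rt_pc / rt_over_rvir
    t_r_myr = 2.57 * F                             # eq. (2)
    return Rg_pc, rt_pc, rvir_pc, t_r_myr

# purely illustrative inputs (not taken from Table 2): F, M0, N, r_t/r_vir
print(to_physical_units(F=1.0e5, M0_msun=3.0e5, N=3 * 10**5, rt_over_rvir=6.0))
```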
Table 2 shows the value of $`F`$ for the four selected Families. For reference, we also give the relaxation time at the half-mass radius $`t_{\mathrm{rh}}`$ for the models with $`W_0=3`$ and $`\alpha =2.5`$ (mean stellar mass $`\overline{m}\approx 1M_{}`$), which we compute using the standard expression (see, e.g., Spitzer 1987),
$$t_{\mathrm{rh}}=0.138\frac{N^{1/2}r_\mathrm{h}^{3/2}}{\overline{m}^{1/2}G^{1/2}\mathrm{ln}N},$$
(4)
where $`r_\mathrm{h}`$ is the half-mass radius of the cluster.
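Equation (4) is straightforward to evaluate; the snippet below (our own illustration, with assumed input values) gives $`t_{\mathrm{rh}}`$ of a few times $`10^8`$ yr for an $`N=10^5`$ cluster with a 3 pc half-mass radius.

```python
import numpy as np

G = 4.302e-3   # gravitational constant in pc (km/s)^2 / Msun

def t_rh_myr(N, r_h_pc, mbar_msun=1.0):
    """Half-mass relaxation time of eq. (4), converted to Myr
    (1 pc/(km/s) = 0.978 Myr)."""
    t = 0.138 * np.sqrt(N) * r_h_pc**1.5 / (np.sqrt(mbar_msun * G) * np.log(N))
    return 0.978 * t

# e.g. an N = 1e5 cluster with a 3 pc half-mass radius (illustrative numbers)
print(f"t_rh ~ {t_rh_myr(1e5, 3.0):.0f} Myr")
```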
## 3 Results
In Paper I we presented our first results for the evolution of single-component clusters up to core collapse. We computed core-collapse times for the entire sequence of King models ($`W_0=1`$–$`12`$), including the effects of a tidal boundary. Here we extend our study to clusters with a power-law mass spectrum, and mass loss due to stellar evolution.
### 3.1 Qualitative Effects of Tidal Mass Loss and Stellar Evolution
We begin by briefly reviewing the evolution of single-component, tidally truncated systems. In Figure 1, we show the core-collapse times for King models with $`W_0=1`$–$`12`$ (Paper I). The core-collapse times for tidally truncated models are compared with equivalent isolated models. Although the isolated models also begin as King models with a finite tidal radius, the tidal boundary is not enforced during their evolution, allowing the cluster to expand freely. The most notable result is that the maximum core-collapse time for the tidally truncated clusters occurs at $`W_0\approx 5`$, compared to $`W_0=1`$ for isolated clusters. This is because the low $`W_0`$ King models have a less centrally concentrated density profile, and hence a higher density at the tidal radius compared to the high $`W_0`$ models. This leads to higher mass loss through the tidal boundary, which reduces the mass of the cluster and shortens the core-collapse time. This effect is further complicated by the introduction of a non-trivial mass spectrum, and mass loss due to stellar evolution in the cluster.
In Figure 2, we show a comparison of the mass loss rate due to the tidal boundary, a power-law mass spectrum, and stellar evolution. We consider the evolution of a $`W_0=3`$ King model, in four different environments. All models considered in this comparison belong to Family 1 (cf. §2.3). We first compare an isolated, single-component model (without an enforced tidal boundary), and a tidally truncated model (as in Fig. 1). Clearly, the presence of the tidal boundary is responsible for almost all the mass loss from the cluster, and it slightly reduces the core-collapse time. Introducing a power-law mass spectrum further reduces the core-collapse time, since mass segregation increases the core density, and accelerates the development of the gravothermal instability. The shorter core-collapse time reduces the total mass loss through the tidal boundary by leaving less time for evaporation. This results in a higher final mass compared to the single-component system, even though the mass loss rate is higher. Finally, allowing mass loss through stellar evolution causes even faster overall mass loss, which eventually disrupts the system. The introduction of a Salpeter-like power-law initial mass function ($`\alpha =2.5`$) is sufficient to cause this cluster to disrupt before core collapse.
The presence of a tidal boundary causes stars on radial orbits in the outer regions of the cluster to be preferentially removed. This produces a significant anisotropy in the outer regions as the cluster evolves. As noted in Paper I, a proper treatment of this anisotropy is essential in computing the mass loss rate. A star in an orbit with low angular momentum has a larger apocenter distance compared to a star (with the same energy) in a high angular momentum orbit. Hence stars in low angular momentum (i.e., radial) orbits are preferentially lost through the tidal boundary, causing an anisotropy to develop in the cluster. In 1-D F-P models, this is not taken into account, and therefore 1-D F-P models predict a much larger mass loss compared to 2-D models. In Figure 3, we show the anisotropy parameter $`\beta =1-\sigma _t^2/\sigma _r^2`$, for a $`W_0=3`$ King model ($`\alpha =2.5`$, Family 1), at two different times during its evolution. Here, $`\sigma _t`$ and $`\sigma _r`$ are the 1-D tangential and radial velocity dispersions, respectively. The initial King model is isotropic. At later times, the anisotropy in the outer region grows steadily as the tidal radius moves inwards.
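The anisotropy profile is easy to measure from the particle data. The following Python sketch (our own illustration) computes $`\beta `$ in radial shells exactly as defined above; for an isotropic test sample it returns values consistent with zero.

```python
import numpy as np

def anisotropy_profile(pos, vel, r_edges):
    """beta = 1 - sigma_t^2 / sigma_r^2 in radial shells, with sigma_t the 1-D
    tangential dispersion (half the total tangential variance) and sigma_r the
    radial dispersion; beta = 0 for an isotropic velocity distribution."""
    r = np.linalg.norm(pos, axis=1)
    rhat = pos / r[:, None]
    v_r = np.einsum("ij,ij->i", vel, rhat)              # radial velocity component
    v_t2 = np.einsum("ij,ij->i", vel, vel) - v_r**2     # squared tangential speed
    beta = np.empty(len(r_edges) - 1)
    for i in range(beta.size):
        sel = (r >= r_edges[i]) & (r < r_edges[i + 1])
        beta[i] = 1.0 - 0.5 * v_t2[sel].mean() / np.var(v_r[sel])
    return beta

# isotropic test sample: every shell should return beta close to zero
rng = np.random.default_rng(2)
pos, vel = rng.normal(size=(10**5, 3)), rng.normal(size=(10**5, 3))
print(anisotropy_profile(pos, vel, np.array([0.0, 1.0, 2.0, 4.0])))
```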
Another important consequence of stellar evolution and mass segregation is the gradual flattening of the stellar mass function as the cluster evolves. In Figure 4, we show the main-sequence mass spectrum in the core and at the half-mass radius of a $`W_0=7`$ King model ($`\alpha =2.5`$, Family 2), at two different times during its evolution. Since the heavier stars concentrate in the core, and have lower mean velocities, the mass loss across the tidal boundary occurs preferentially for the lighter stars. This leads to a gradual flattening of the overall mass function of the cluster. However, this picture is somewhat complicated by stellar evolution, which continuously depletes high-mass stars from the cluster. The remaining heavier stars gradually accumulate in the inner regions as the cluster evolves. Therefore the flattening of the mass function becomes particularly evident in the cluster core.
### 3.2 Cluster Lifetimes: Comparison with Fokker-Planck results
We now present our survey of cluster lifetimes, and we compare our results with equivalent 1-D and 2-D F-P results. For each combination of $`W_0`$ and $`\alpha `$, we perform four different simulations (Families $`1`$–$`4`$), corresponding to different initial relaxation times (cf. Table 2). We follow the evolution until core collapse, or disruption, whichever occurs first. We also stop the computation if the total bound mass decreases below 2% of the initial mass, and consider the cluster to be disrupted in such cases. We compare our results with those of two different F-P studies: the 1-D F-P calculations of Chernoff & Weinberg (1990, CW), and the more recent 2-D calculations of Takahashi & Portegies Zwart (2000, TPZ).
#### 3.2.1 Comparison with 1-D Fokker-Planck models
Table 3 compares our Monte Carlo (MC) models with the 1-D F-P calculations conducted by CW. Following the same notation as CW, the final core collapse of a cluster is denoted by ‘C’, and disruption is denoted by ‘D’. The final mass of the cluster (in units of the initial mass) and the lifetime in units of $`10^9`$ yr (time to disruption or core collapse) are also given. The evolution of clusters that reach core collapse is not followed beyond the core-collapse phase. The core-collapse time is taken as the time when the innermost Lagrange radius (radius containing 0.3% of the total mass of the cluster) becomes smaller than 0.001 (in virial units). For disrupting clusters, CW provide a value for the final mass, which corresponds to the point at which the tidal mass loss becomes unstable and the cluster disrupts on the dynamical timescale. However, we find that the point at which the instability develops depends sensitively on the method used for computing the tidal mass loss and requires the potential to be updated on a very short timescale. In this regime, since the system evolves (and disrupts) on the dynamical timescale, the orbit-averaged approximation used to solve the Fokker-Planck equation also breaks down. This is true for both Monte Carlo and F-P simulations. The only way to determine the point of instability reliably is to follow the evolution on the dynamical timescale using direct $`N`$-body integrations. Hence, for disrupting models, we quote the final mass as zero, and only provide the disruption time (which can be determined very accurately).
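The core-collapse criterion used above reduces to tracking an innermost Lagrange radius; a minimal implementation, written by us purely for illustration and run here on toy data, is:

```python
import numpy as np

def lagrange_radius(r, m, frac=0.003):
    """Radius enclosing a mass fraction `frac` of the cluster (the innermost
    Lagrange radius used for the core-collapse criterion)."""
    order = np.argsort(r)
    cum = np.cumsum(m[order])
    idx = np.searchsorted(cum, frac * cum[-1])
    return r[order][min(idx, r.size - 1)]

def core_collapsed(r, m, threshold=1.0e-3):
    """True once the 0.3% Lagrange radius drops below 0.001 virial units."""
    return lagrange_radius(r, m) < threshold

# toy cluster: exponential radial distribution in virial units, equal masses
rng = np.random.default_rng(3)
r = rng.exponential(1.0, size=10**5)
m = np.ones_like(r)
print(lagrange_radius(r, m), core_collapsed(r, m))
```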
We find that all our Monte Carlo models disrupt later than those of CW. However, for models that undergo core collapse, the core-collapse times are shorter in some cases compared to CW, because the lower mass loss rate in our Monte Carlo models causes core collapse to take place earlier. The discrepancy in the disruption times sometimes exceeds an order of magnitude (e.g., $`W_0=1`$, $`\alpha =2.5`$). On the other hand, the discrepancy in the lifetimes of the clusters with $`\alpha =1.5`$, $`W_0=1`$ & 3 is only about a factor of two. These models disrupt very quickly and a proper treatment of anisotropy does not extend their lifetimes very much, since the combination of a flat initial mass function and a shallow initial potential leads to rapid disruption.
Out of 36 models, we find that half (18) of our Monte Carlo models reach core collapse before disruption, compared to fewer than 30% (10) of models in the CW survey. The longer lifetimes of our models allow more of the clusters to reach core collapse in our simulations. All the clusters that experience core collapse according to CW also experience core collapse in our calculations. Since the main difference between our models and those of CW comes from the different mass loss rates, we predictably find that our results match more closely those of CW in all cases where the overall mass loss up to core collapse is relatively small. For example, the more concentrated clusters ($`W_0=7`$) with steep mass functions ($`\alpha =`$2.5 and 3.5) show very similar behavior, with the discrepancy in final mass and core-collapse time being less than a factor of two. However, we cannot expect complete agreement even in these cases, since the effects of anisotropy cannot be completely ignored.
The overall disagreement between our Monte Carlo models and 1-D F-P models is very significant. This was also evident in some of the results presented in Paper I, where we compared core-collapse times for tidally truncated single-component King models, with 1-D F-P calculations by Quinlan (1996). This discrepancy has also been noted by Takahashi & Portegies Zwart (1998), and Portegies Zwart et al. (1998). The improved 2-D F-P code developed by Takahashi (1995, 1996, 1997) is now able to properly account for the anisotropy, allowing for a more meaningful comparison with other 2-D calculations, including our own.
#### 3.2.2 Comparison with 2-D Fokker-Planck models
Comparisons of the mass loss evolution are shown in Figures 5, 6, and 7, where the solid lines show our Monte Carlo models, and the dashed lines show the 2-D F-P models from TPZ.
In Figure 5, we show the evolution of $`W_0=1`$ King models. The very low initial central density of these models makes them very sensitive to the tidal boundary, leading to very rapid mass loss. As a result, almost all the $`W_0=1`$ models disrupt without ever reaching core collapse. In addition, these models demonstrate the largest variation in lifetimes depending on their initial mass spectrum. For a relatively flat mass function ($`\alpha =1.5`$), the disruption time is $`2\times 10^7`$yr. The large fraction of massive stars in these models, combined with the shallow initial central potential, leads to very rapid mass loss and complete disruption. For a more realistic, Salpeter-like initial mass function ($`\alpha =2.5`$), the $`W_0=1`$ models have a longer lifetime, but still disrupt in $`10^9`$yr. The $`\alpha =3.5`$ models have very few massive stars, and hence behave almost like models without stellar evolution. We see that it is only with such a steep mass function, that the $`W_0=1`$ models can survive until the present epoch ($`10^{10}`$yr). We also find that the Family 1 and 2 models can reach core collapse despite having lost most of their mass, while Family 3 and 4 models are disrupted.
We see very good agreement throughout the evolution between our Monte Carlo models and the 2-D F-P models. In all cases, the qualitative behaviors indicated by the two methods are identical, even though the Monte Carlo models consistently have somewhat longer lifetimes than the F-P models. The average discrepancy in the disruption times for all models is approximately a factor of two. The discrepancy in disruption times is due to a slightly lower mass loss rate in our models, which allows the clusters to live longer. Since the F-P calculations correspond to the $`N\mathrm{}`$ limit, they tend to overestimate the overall mass loss rate (we discuss this issue in more detail in the next section). This tendency has been pointed out by Takahashi & Portegies Zwart (1998), who compared the results of 2-D F-P simulations with those of direct $`N`$-body simulations with up to $`N=32,768`$. They have attempted to account for the finiteness of the system in their F-P models by introducing an additional parameter in their calculations to modify the mass loss rate. The comparison shown in Figures 5, 6 and 7 is for the unmodified $`N\mathrm{}`$ F-P models.
We find complete agreement with TPZ in distinguishing models that reach core collapse from those that disrupt. The only case in which there is some ambiguity is the $`W_0=1,\alpha =3.5`$, Family 2 model, which clearly collapses in our calculations, while TPZ indicate nearly complete disruption. This is obviously a borderline case, in which the cluster reaches core collapse just prior to disruption in our calculation. Since the cluster has lost almost all its mass at core collapse, the distinction between core collapse and disruption is largely irrelevant. It is important to note, however, that we find the boundary between collapsing and disrupting models at almost exactly the same location in parameter space ($`W_0`$, $`\alpha `$, and relaxation time) as TPZ. This agreement is as significant as, if not more significant than, the comparison of final masses and disruption times.
In Figure 6, we show the comparison of $`W_0=3`$ King models. Again, the overall agreement is very good, except for the slightly later disruption times for the Monte Carlo models. The most notable difference from the $`W_0=1`$ models is that the $`W_0=3`$ models clearly reach core collapse prior to disruption for $`\alpha =3.5`$. The core-collapse times for the $`\alpha =3.5`$ models are very long ($`3\times 10^{10}`$–$`3\times 10^{11}`$ yr), with only $`20\%`$ of the initial mass remaining bound at core collapse. Here also we find perfect agreement between the qualitative behaviors of the F-P and Monte Carlo models.
In Figure 7, we show the evolution of the $`W_0=7`$ King models. In the presence of a tidal boundary, the $`W_0\approx 5`$ King models have the distinction of having the longest core-collapse times (see Fig. 1). This is because they begin with a sufficiently high initial core density, and do not expand very much before core collapse. Hence, the mass loss through the tidal boundary is minimal. King models with a lower $`W_0`$ lose more mass through the tidal boundary, and evolve more quickly toward core collapse or disruption, while models with higher $`W_0`$ have very high initial core densities, leading to short core-collapse times. All our $`W_0=7`$ models reach core collapse. Even the models with a very flat mass function ($`\alpha =1.5`$) achieve core collapse, although the final bound mass in that case is very small. We again see very good overall agreement between the Monte Carlo and F-P models, except for the slightly higher mass loss rate predicted by the F-P calculations. In the next section, we discuss the possible reasons for this small discrepancy in the mass loss rate between the Monte Carlo and F-P models.
#### 3.2.3 Comparison with finite Fokker-Planck models
We first highlight some of the general issues relating to mass loss in the systems we have considered. In Figure 8, we show the relative rates of mass loss due to stellar evolution and tidal stripping, for $`W_0=`$1, 3, and 7 King models, with different mass spectra ($`\alpha =`$ 1.5, 2.5, and 3.5). We see that stellar evolution is most significant in the early phases, while tidal mass loss dominates the evolution in the later phases. The relative importance of stellar evolution depends on the fraction of massive stars in the cluster, which dominate the mass loss early in the evolution. Hence, the $`\alpha =1.5`$ models suffer the greatest mass loss due to stellar evolution, accounting for up to 50% of the total mass loss in some cases (e.g., $`W_0=7,\alpha =1.5`$). All models shown belong to Family 2. It is important to note the large variation in the timescales, and in the relative importance of stellar evolution versus tidal mass loss across all models.
Through comparisons with $`N`$-body simulations, Takahashi & Portegies Zwart (1998) have argued that assuming $`N\mathrm{}`$ leads to an overestimate of the mass loss rate due to tidal stripping of stars. To compensate for this, they introduce a free parameter $`\nu _{\mathrm{esc}}`$ in their calculations, to account for the finite time (of the order of the crossing time) it takes for an escaping star to leave the cluster. They calibrate this parameter through comparisons with $`N`$-body simulations (for $`N=1,024`$–$`32,768`$). Since for low $`N`$, the $`N`$-body models are too noisy, and the F-P models are insensitive to $`\nu _{\mathrm{esc}}`$ for large $`N`$, TPZ find that the calibration is most suitably done using $`N\approx 16,000`$ (for further details, see the discussion by TPZ). They show that a single value of this parameter gives good agreement with $`N`$-body simulations for a wide range of initial conditions. Using this prescription, TPZ provide results of their calculations for finite clusters with $`N=3\times 10^5`$ in addition to their $`N\mathrm{}`$ results. They find that their finite models, as expected, have lower mass loss rates, and consequently longer lifetimes compared to their infinite models.
In Table 4, we compare the results of our Monte Carlo calculations with $`N=3\times 10^5`$ stars with the finite and infinite F-P models of TPZ. We consider four cases: $`W_0=`$ 1 and 3, Families 1 and 4, $`\alpha =2.5`$. All finite TPZ models have longer lifetimes than their infinite models. However, there is practically no difference between their finite and infinite models for core-collapsing clusters. Hence we focus our attention only on the disrupting models. We see that in all four cases, the longer lifetimes of the finite models are in better agreement with our Monte Carlo results, although the agreement is still not perfect. The largest difference between the finite and infinite F-P models is for the $`W_0=1`$ models, in which case the Monte Carlo results lie between the finite and infinite F-P results. For $`W_0=3`$ models, the Monte Carlo disruption times are still slightly longer than those of the finite F-P models, although the agreement is better.
Both Monte Carlo and F-P methods are based on the orbit-averaged Fokker-Planck approximation, which treats all interactions in the weak scattering limit, i.e., it does not take into account the effect of strong encounters. Both methods compute the *cumulative* effect of distant encounters in one timestep (which is a fraction of the relaxation time). In this approximation, events on the crossing timescale (such as the escape of stars) are treated as being instantaneous. Since the relaxation time is proportional to $`N/\mathrm{ln}N`$ times the crossing time, this is equivalent to assuming $`N\mathrm{}`$ in the F-P models. However, in our Monte Carlo models, there is *always* a finite $`N`$, since we maintain a discrete representation of the cluster at all times and follow the same phase space parameters as in an $`N`$-body simulation. Thus, although both methods make the same assumption about the relation between the crossing time and relaxation time, for all other aspects of the evolution, the Monte Carlo models remain finite. This automatically allows most aspects of cluster evolution, including the escape of stars, stellar evolution, and computation of the potential, to be handled on a discrete, star-by-star basis. On the other hand, the F-P models use a few coarsely binned individual mass components represented by continuous distribution functions (consistent with $`N\mathrm{}`$) to model all processes. In this sense, the Monte Carlo models can be regarded as being intermediate between direct $`N`$-body simulations and F-P models.
The importance of using the correct value of $`N`$ in dynamical calculations for realistic cluster models has also been demonstrated through $`N`$-body simulations, which show that the evolution of finite clusters scales with $`N`$ in a rather complex way (see Portegies Zwart et al. 1998 and the “Collaborative Experiment” by Heggie et al. 1999). Hence, despite correcting for the crossing time, it is not surprising that the finite F-P models are still slightly different from the Monte Carlo models. It is also possible that the calibration of the escape parameter obtained by TPZ may not be applicable to large $`N`$ clusters, since it was based on comparisons with smaller $`N`$-body simulations. It is reassuring to note, however, that the Monte Carlo models, without introducing any new free parameters, have consistently lower mass loss rates compared to the infinite F-P models, and show better agreement with the finite F-P models.
### 3.3 Velocity and Pericenter Distribution of Escaping Stars
A major advantage of the Monte Carlo method is that it allows the evolution of specific subsets of stars, or even individual stars, to be followed in detail. We use this capability to examine, for the first time in a cluster simulation with realistic $`N`$, the properties of stars that escape from the cluster through tidal stripping. We also examine the differences between the properties of escaping stars in clusters that reach core collapse, and those that disrupt. In Figure 9, we show the distribution of escaping stars for two different models ($`W_0=3`$ and 7, Family 1, $`\alpha =2.5`$). In each case, we show a 2-D distribution of the pericenter distance and the velocity at infinity for all the escaping stars. The velocity at infinity is computed as $`v_{\mathrm{}}=\sqrt{2(E-\varphi _t)}`$, where $`E`$ is the energy per unit mass of the star, and $`\varphi _t`$ is the potential at the tidal radius. We see that the distribution of pericenter distances is very broad, indicating that escape takes place from within the entire cluster, and not just near the tidal boundary. We see that the distribution of pericenters is slightly more centrally peaked in the $`W_0=7`$ model than in the $`W_0=3`$ case. Note that the sizes of the cores are very different for the two clusters. The $`W_0=7`$ cluster initially has a core radius of 0.2 (in virial units), which gets smaller as the cluster evolves, while the $`W_0=3`$ cluster has an initial core radius of 0.5, which does not change significantly as the cluster evolves and disrupts. The main difference between the clusters, however, is in the velocity distribution of escaping stars. In the disrupting cluster ($`W_0=3`$), the escaping stars have a wide range of escape energies at all pericenter distances, whereas in the collapsing cluster ($`W_0=7`$), a large fraction of the stars escape with close to the minimum energy. Only the escapers from within the central region have a significant range of escape energies.
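The quantities plotted in Figure 9 are cheap to extract from the escaper list. The sketch below (our own illustration, run on synthetic escaper data) computes $`v_{\mathrm{}}`$ exactly as defined above and bins the joint pericenter–velocity distribution.

```python
import numpy as np

def v_infinity(E, phi_t):
    """Velocity at infinity of an escaping star, v_inf = sqrt(2 (E - phi_t)),
    with E the orbital energy per unit mass and phi_t the potential at the
    tidal radius (E >= phi_t for a tidally stripped star)."""
    return np.sqrt(2.0 * np.clip(E - phi_t, 0.0, None))

# synthetic escaper sample in virial units: energies just above phi_t
rng = np.random.default_rng(4)
phi_t = -0.3
E = phi_t + rng.exponential(0.02, size=1000)
r_peri = rng.uniform(0.0, 2.0, size=1000)
hist, v_edges, r_edges = np.histogram2d(v_infinity(E, phi_t), r_peri, bins=20)
print(hist.sum(), v_edges[0], v_edges[-1])
```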
The very narrow distribution of escape energies for the collapsing cluster suggests that the mechanism for escape in collapsing and disrupting clusters may be qualitatively different. It also suggests that the single escape parameter used by TPZ to correct for the tidal mass loss rate in their finite F-P calculations may be insufficient in correcting for both types of escaping stars. This might also account for the fact that TPZ find almost no change in the mass loss rate after introducing their $`\nu _{\mathrm{esc}}`$ parameter in core-collapsing models, while disrupting models show a significant difference.
### 3.4 Effects of Non-circular Orbits on Cluster Lifetimes
In all the calculations presented above (as in most previous numerical studies of globular cluster evolution), we assumed that the cluster remained in a circular orbit at a fixed distance from the center of the Galaxy. We also assumed that the cluster was born filling its Roche lobe in the tidal field of the Galaxy. Both of these assumptions are almost certainly unrealistic for the majority of clusters. However, one could argue that even for a cluster on an eccentric orbit, one might still be able to model the evolution using an appropriately *averaged* value of the tidal radius over the orbit of the cluster. Here we briefly explore the effect of an eccentric orbit, by comparing the evolution of one of our Monte Carlo models ($`W_0=3`$, $`\alpha =2.5`$, Family 2) on a Roche-lobe filling circular orbit, and on an eccentric orbit. We assume that the *pericenter* distance of the eccentric orbit is equal to the radius of the circular orbit. This is to ensure that the cluster fills its Roche lobe at the same location, and the same value of $`R_g`$ is used to compute $`F`$ in the models being compared (see eq. ). If we alternatively selected the orbit such that the cluster fills its Roche lobe at apocenter, instead of pericenter, the outcome would be obvious: the mass loss at pericenter would be considerably higher, leading to much more rapid disruption of the cluster compared to the circular orbit.
In Figure 10, we show the evolution of the selected model for three different orbits. The leftmost line shows the evolution for the circular orbit. The rightmost line shows the evolution for an eccentric *Keplerian* orbit with a typical eccentricity of 0.6 (see, e.g., Odenkirchen et al. 1997). The Keplerian orbit assumes that the inferred mass of the Galaxy interior to the circular orbit is held fixed for the eccentric orbit as well. The intermediate line shows the evolution for an orbit in a more realistic potential for the Galaxy, which is still spherically symmetric, but with a constant circular velocity of $`220\mathrm{km}\mathrm{s}^1`$ in the region of the cluster orbit (Binney & Tremaine 1987). The orbit is chosen so that it has the same pericenter and apocenter distance as the Keplerian orbit. However, since the orbital velocity is higher, it has a shorter period compared to the Keplerian orbit. In each of the two eccentric orbits, we see that the cluster lifetime is extended slightly (by a factor of $`2`$). Most of the mass loss takes place during the short time that the cluster spends near its pericenter, where it fills its Roche lobe. The Keplerian orbit gives the longest lifetime, since the cluster spends most of its time near its apocenter, where it does not fill its Roche lobe.
This comparison suggests that the lifetime of a cluster can vary by at most a factor of a few, depending on the shape of its orbit. However, such corrections should be taken into account in building accurate numerical models of real clusters. In addition, other effects that we have neglected here, such as tidal shocking during Galactic disk crossings, may affect cluster lifetimes more significantly (see §4).
## 4 Summary
We have calculated lifetimes of globular clusters in the Galactic environment using 2-D Monte Carlo simulations with $`N=10^5`$–$`3\times 10^5`$ King models, including the effects of a mass spectrum, mass loss in the Galactic tidal field, and stellar evolution. We have studied the evolution of King models with $`W_0=1`$, 3, and 7, and with power-law mass functions $`m^{-\alpha }`$, with $`\alpha =`$1.5, 2.5, and 3.5, up to core collapse, or disruption, whichever occurs first. In our broad survey of cluster lifetimes, we find very good overall agreement between our Monte Carlo models and the 2-D F-P models of TPZ for all 36 models studied. This is very reassuring, since it is impossible to verify such results using direct $`N`$-body integrations for a realistic number of stars. The Monte Carlo method has been shown to be a robust alternative for studying the evolution of multi-component clusters. It is particularly well suited to studying finite, but large-$`N`$ systems, including many different processes, such as tidal stripping and stellar evolution, which operate on different timescales. We find that our Monte Carlo models are in better agreement with the finite-$`N`$ F-P models of TPZ, compared to their standard F-P ($`N\mathrm{}`$) models, although our models still appear to have a slightly lower overall mass loss rate.
Even though our simulations are becoming more sophisticated and realistic with the inclusion of many new important processes, there still remain substantial difficulties in relating our results directly to observed clusters. We ignore several potentially important effects in these calculations, including the tidal shock heating of the cluster following passages through the Galactic disk, and the presence of primordial binaries, which can support the core against collapse. In recent studies using 1-D F-P calculations, it has been shown that shock heating and shock-induced relaxation of clusters caused by repeated close passages near the bulge and through the disk of the Galaxy can sometimes be as important as two-body relaxation for their overall dynamical evolution (Gnedin, Lee, & Ostriker 1999). In addition, the initial mass function of clusters is poorly constrained observationally, and our simple power laws may not be realistic. In our study, we assume that clusters begin their lives filling their Roche lobes. But, as we have shown, a cluster on an eccentric orbit may spend most of its time further away in the Galaxy, where it might not fill its Roche lobe. This can lead to somewhat longer lifetimes.
The broad survey of cluster lifetimes presented here, and the similar effort by TPZ, lay the foundations for more detailed calculations, which may one day allow us to conduct reliable population synthesis studies to understand in detail the history, and predict the future evolution, of the Galactic globular cluster system.
We are very grateful to Simon Portegies Zwart for insightful comments and helpful discussions. We are also grateful to Koji Takahashi for kindly providing valuable data and answering numerous questions. This work was supported by NSF Grant AST-9618116 and NASA ATP Grant NAG5-8460. C.P.N. acknowledges partial support from the UROP program at MIT. F.A.R. was supported in part by an Alfred P. Sloan Research Fellowship. This work was also supported by the National Computational Science Alliance under Grant AST980014N and utilized the SGI/Cray Origin2000 supercomputer at Boston University. |
no-problem/9912/hep-ph9912312.html | ar5iv | text | # Untitled Document
M/C-TH 99-16
DAMTP-1999-167
EXCLUSIVE VECTOR PHOTOPRODUCTION:
CONFIRMATION OF REGGE THEORY
A Donnachie
Department of Physics, Manchester University
P V Landshoff
DAMTP, Cambridge University
email addresses: ad@a3.ph.man.ac.uk, pvl@damtp.cam.ac.uk
Abstract Recent small-$`t`$ ZEUS data for exclusive $`\rho `$ photoproduction are in excellent agreement with exchange of the classical soft pomeron with slope $`\alpha ^{}=0.25`$ GeV$`^{-2}`$. Adding in a flavour-blind hard-pomeron contribution, whose magnitude is calculated from the data for exclusive $`J/\psi `$ photoproduction, gives a good fit also to the ZEUS data for $`\rho `$ photoproduction at larger values of $`t`$, and to $`\varphi `$ photoproduction.
The ZEUS collaboration has recently suggested$`^{\text{[1]}}`$ that their data for exclusive $`\rho `$ photoproduction at HERA, when combined with lower-energy data$`^{\text{[2]}}`$, lead to a slope $`\alpha ^{}`$ for the trajectory of the soft pomeron that differs significantly from the classical value $`\alpha ^{}=0.25`$ GeV$`^{-2}`$. A main message of this paper is to disagree with this conclusion; we show that in fact the classical value is confirmed by the data.
The slope $`\alpha ^{}`$ should be determined from the data at small $`t`$, but the ZEUS measurements extend also to rather large $`t`$. At HERA energy, soft-pomeron exchange dominates the differential cross-section out to values of $`|t|`$ of about 0.4 GeV$`^2`$. Beyond that, some new contribution is needed. For exclusive $`J/\psi `$ photoproduction, a new contribution is needed even at very small $`t`$. We have shown recently$`^{\text{[3]}}`$ that introducing a “hard pomeron” gives an excellent description of data not only for $`J/\psi `$ photoproduction, but also for the charm structure function $`F_2^c`$ and the small-$`x`$ behaviour of the total structure function $`F_2`$. A second message in the present paper is that the introduction of the same hard pomeron also provides a good description of the large-$`t`$ $`\rho `$ photoproduction data. Applying the model to $`\varphi `$ photoproduction gives a satisfactory description of these data too.
Figure 1: Data for the forward differential cross section for exclusive $`\rho ^0`$ photoproduction. The solid curve corresponds to the exchange of the soft pomeron together with $`f`$ and $`a_2`$, while the dashed curve includes also a hard-pomeron contribution.
Consider first small-$`t`$ $`\rho `$ photoproduction. In order to extract the soft-pomeron slope $`\alpha ^{}`$, it is necessary to consider data from HERA together with measurements at much lower energy. We have shown previously$`^{\text{[4]}}`$ that the description of the lower-energy data needs a significant contribution from $`f`$ and $`a_2`$ exchange; this is missing from the ZEUS analysis$`^{\text{[1]}}`$. Further, as is apparent from the data shown in figure 1, the relative normalisation of the lower energy experiments is somewhat erratic and it is not correct to use just one or two energies for comparison. It is necessary to perform a global fit to average out the discrepancies. In our previous analysis$`^{\text{[4]}}`$, we first assumed that the contribution from soft-pomeron and $`f,a_2`$ exchanges to the $`\rho ^0p`$ total cross section is the same as in the average of the $`\pi ^+p`$ and $`\pi ^{-}p`$ cross sections$`^{\text{[5]}}`$. We then used $`\rho `$-dominance, with a factor of 0.84 to allow for finite-width corrections to $`\rho e^+e^{-}`$ decay$`^{\text{[6]}}`$, to calculate the forward differential cross section for $`\gamma p\to \rho p`$. Figure 1 (solid curve) shows the resulting cross section at $`t=0`$ as a function of energy.
In order to extend this away from the forward direction, we need the two Regge trajectories
$$\alpha _{P_1}(t)=1.08+\alpha _{P_1}^{}t\qquad \alpha _{P_1}^{}=0.25$$
$$\alpha _R(t)=0.55+\alpha _R^{}t\qquad \alpha _R^{}=0.93$$
$`(1)`$
We need also to decide the mass scale $`s_0`$ by which we must divide $`s`$ before we raise it to the Regge power. There is no theory that determines this. We adopt the dual-model prescription$`^{\text{[7]}}`$ that, for a trajectory of slope $`\alpha ^{}`$, one should take $`s_0=1/\alpha ^{}`$. It is well-established$`^{\text{[8]}}`$ that the trajectories couple to the proton through the Dirac electric form factor
$$F(t)=\frac{4m^2-2.79t}{4m^2-t}\left(\frac{1}{1-t/0.71}\right)^2$$
$`(2)`$
but their coupling $`G_\rho (t)`$ to the $`\gamma \rho `$ vertex is unknown. We find that a good description of the data is provided by the choice
$$G_\rho (t)=\frac{1}{1-t/0.71}$$
$`(3)`$
Putting these things together, we have for the soft part of the amplitude for $`\gamma p\to \rho p`$
$$T_{\text{SOFT}}(s,t)=iF(t)G_\rho (t)\left[A_{P_1}(\alpha _{P_1}^{}s)^{\alpha _{P_1}(t)-1}e^{-{\scriptscriptstyle \frac{1}{2}}i\pi (\alpha _{P_1}(t)-1)}+A_R(\alpha _R^{}s)^{\alpha _R(t)-1}e^{-{\scriptscriptstyle \frac{1}{2}}i\pi (\alpha _R(t)-1)}\right]$$
$`(4a)`$
with
$$A_{P_1}=6.0\qquad A_R=15.9$$
$`(4b)`$
The amplitude is normalised such that $`d\sigma /dt=|T(s,t)|^2`$ in $`\mu b`$ GeV$`^{-2}`$.
Figure 2: Data$`^{\text{[2]}}`$ for $`\gamma p\to \rho p`$ at $`\sqrt{s}=6.86`$ and 10.4 GeV, with Regge fit.
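As a numerical cross-check of eqs. (1)–(4) and the normalisation quoted above, the soft contribution can be evaluated directly. The Python sketch below is our own transcription of these formulas (it is not taken from the original analysis); at $`\sqrt{s}=94`$ GeV and $`t=0`$ it returns a forward cross section of order $`10^2`$ $`\mu b`$ GeV$`^{-2}`$, i.e. the value traced by the solid curve in figure 1 at HERA energies.

```python
import numpy as np

M_P = 0.938   # proton mass in GeV (standard value, assumed)

def F1(t):
    """Dirac electric form factor of the proton, eq. (2); t in GeV^2 (t <= 0)."""
    return (4 * M_P**2 - 2.79 * t) / (4 * M_P**2 - t) / (1 - t / 0.71) ** 2

def T_soft(s, t):
    """Soft-pomeron plus (f, a2) amplitude for gamma p -> rho p, eq. (4a),
    with the trajectories of eq. (1) and the couplings of eq. (4b);
    normalised so that dsigma/dt = |T|^2 in microbarn GeV^-2."""
    G_rho = 1.0 / (1 - t / 0.71)                     # eq. (3)
    amp = 0j
    for A, intercept, slope in ((6.0, 1.08, 0.25), (15.9, 0.55, 0.93)):
        alpha = intercept + slope * t
        amp += A * (slope * s) ** (alpha - 1) * np.exp(-0.5j * np.pi * (alpha - 1))
    return 1j * F1(t) * G_rho * amp

def dsigma_dt(s, t):
    return abs(T_soft(s, t)) ** 2                    # microbarn / GeV^2

# forward cross section at sqrt(s) = 94 GeV (cf. the solid curve in figure 1)
print(f"{dsigma_dt(94.0**2, 0.0):.0f} microbarn/GeV^2")
```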
Figure 3: ZEUS data$`^{\text{[1]}}`$ for $`\gamma p\to \rho p`$. The lower-$`t`$ data are at $`\sqrt{s}=`$71.7 GeV and the higher-$`t`$ at 94 GeV. The solid line is the Regge fit with soft exchanges only; the dashed line includes the hard pomeron (the fits change very little between the two energies).
Figure 2 shows the differential cross section at $`\sqrt{s}=`$6.86 and 10.4 GeV, together with CERN Omega data$`^{\text{[2]}}`$. The data are not normalised. The solid line in figure 3 shows the same fit at $`\sqrt{s}=`$94 GeV, together with ZEUS data$`^{\text{[1]}}`$ (which are normalised). The success of the fit at small $`t`$ is evidence that the classical value 0.25 of the soft-pomeron slope $`\alpha _{P_1}^{}`$ is correct. The dashed lines in figures 1 and 3 include an additional contribution which we now discuss.
What we have said so far should be uncontroversial. The remainder of this paper concerns the hard pomeron and so may be less generally accepted, though it adds to the already-strong body of evidence in support of the concept. We first introduced$`^{\text{[9]}}`$ a hard pomeron, with intercept $`\alpha _{P_0}`$ a little greater than 1.4, to explain the data for the proton structure function $`F_2(x,Q^2)`$ at small $`x`$. We then observed$`^{\text{[3]}}`$ that the ZEUS data$`^{\text{[10]}}`$ for the charm component $`F_2^c(x,Q^2)`$ of $`F_2(x,Q^2)`$ seem to confirm its existence, and tentatively deduced the slope of the trajectory from the H1 data$`^{\text{[11]}}`$ for the differential cross section for the process $`\gamma p\to J/\psi p`$:
$$\alpha _{P_0}(t)=1.44+\alpha _{P_0}^{}t\qquad \alpha _{P_0}^{}=0.1$$
$`(5)`$
We found also that the coupling $`G_{J/\psi }(t)`$ to the $`\gamma J/\psi `$ vertex is rather flatter in $`t`$ than the coupling $`G_\rho (t)`$ to the $`\gamma \rho `$ vertex that we specify in (3), and we took it to be constant.
Figure 4: $`\gamma p\to J/\psi p`$: H1 data$`^{\text{[11]}}`$ at three $`t`$ values, and ZEUS data$`^{\text{[1]}}`$ at $`\sqrt{s}`$=94 GeV. The fits include the hard and soft pomeron contributions.
So the hard-pomeron contribution to the amplitude for $`\gamma p\to J/\psi p`$ is taken as
$$iF(t)\left[A_{P_0}(\alpha _{P_0}^{}s)^{\alpha _{P_0}(t)-1}e^{-{\scriptscriptstyle \frac{1}{2}}i\pi (\alpha _{P_0}(t)-1)}\right]$$
$`(6)`$
This differs from what we used in reference [3] in that we divide $`s`$ by the mass scale $`s_0=1/\alpha _{P_0}^{}`$ before raising it to the Regge power. We assume Zweig’s rule, so that the $`f,a_2`$ trajectory decouples and $`A_R=0`$, although a contribution from the soft pomeron is retained. Figure 4 shows the comparison with the H1 data$`^{\text{[11]}}`$ at three values of $`t`$, and the ZEUS data$`^{\text{[1]}}`$ at $`\sqrt{s}`$=94 GeV, taking
$$A_{P_0}=0.016\qquad A_{P_1}=0.17$$
$`(7)`$
Our fits to the data for $`F_2`$ at small $`x`$ and for $`F_2^c`$ suggest that the coupling of the pomeron to quarks is flavour-blind. So, in order to relate the strengths of the hard-pomeron couplings in the processes $`\gamma p\to J/\psi p`$ and $`\gamma p\to \rho p`$, we need just to include wave-function effects. Although the hard pomeron couples to photon-induced reactions, its coupling to purely hadronic processes is extremely small$`^{\text{[9]}}`$. So it seems reasonable to assume that it is the pointlike component of the photon that is largely responsible, rather than the hadron-like component. This in turn implies that in $`\gamma p\to Vp`$, the strength of the hard-pomeron coupling depends on the magnitude of the $`V`$ wave function at the origin and the relevant quark charges, and that therefore it is proportional to $`\sqrt{\mathrm{\Gamma }_{Ve^+e^{-}}/m_V}`$. This implies that for $`\gamma p\to \rho p`$ we should use
$$A_{P_0}=0.036$$
$`(8)`$
Adding such a hard-pomeron term to the amplitude gives the dashed curve in figure 3.
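The proportionality to $`\sqrt{\mathrm{\Gamma }_{Ve^+e^{-}}/m_V}`$ can be checked numerically. In the sketch below the leptonic widths and masses are standard particle-data values inserted by us for illustration (they are not quoted in the paper); with them, scaling the $`J/\psi `$ coupling of eq. (7) reproduces the couplings used in eqs. (8) and (9) to within a few per cent.

```python
import numpy as np

# Leptonic widths (keV) and masses (GeV). Standard particle-data values,
# inserted by us purely to illustrate the scaling argument.
GAMMA_EE = {"rho": 7.0, "phi": 1.3, "jpsi": 5.3}
MASS     = {"rho": 0.770, "phi": 1.019, "jpsi": 3.097}

def hard_coupling(meson, A_jpsi=0.016):
    """Scale the J/psi hard-pomeron coupling (eq. 7) to another vector meson,
    assuming A_V proportional to sqrt(Gamma_ee / m_V) as argued in the text."""
    ratio = (GAMMA_EE[meson] / MASS[meson]) / (GAMMA_EE["jpsi"] / MASS["jpsi"])
    return A_jpsi * np.sqrt(ratio)

print(hard_coupling("rho"), hard_coupling("phi"))   # ~0.037 and ~0.014, cf. eqs. (8), (9)
```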
It might have been thought that a Regge cut, for example from two-pomeron exchange, could have been used to explain the $`\rho `$ data at larger $`|t|`$, as the cut has a weaker $`t`$-dependence than the pole. However, the cut has the opposite sign to the pole, so, far from enhancing the cross section at large $`|t|`$, it actually reduces it.
Finally, we apply the model to $`\varphi `$ photoproduction. As before, we can use the flavour-blind nature of the coupling of the hard pomeron to quarks to specify its contribution uniquely. This gives
$$A_{P_0}=0.014$$
$`(9)`$
Just as for the $`\rho `$, there are two unknowns in the soft pomeron contribution to $`\varphi `$ photoproduction: the magnitude of the coupling of the soft pomeron to strange quarks and the mass scale in the $`\varphi `$ form factor. We know that Vector Meson Dominance is not a good approximation for the $`\varphi `$ and that wave function effects are important$`^{\text{[4]}}`$, so the normalisation can only be specified by the data. Naively, the mass in the form factor is simply that of the $`\varphi `$, but higher-mass $`s\overline{s}`$ states must contribute, making the effective mass somewhat larger. In analogy with the $`\rho `$ case, we use
$$G_\varphi =\frac{1}{1-t/1.5}$$
$`(10)`$
choosing 1.5 instead of 0.71 on the grounds that $`m_\varphi ^2\approx 2m_\rho ^2`$. As for the $`J/\psi `$, we can neglect any contribution from $`f,a_2`$ exchange. Fitting the soft pomeron coupling to the data yields
$$A_{P_1}=1.49$$
$`(11)`$
and the results are shown in figure 5.
Figure 5: $`\gamma p\to \varphi p`$: Data for the total cross section$`^{\text{[12]}}`$ and the differential cross section$`^{\text{[1]}}`$$`^{\text{[13]}}`$ at $`\sqrt{s}`$=94 GeV. The dashed lines show the soft pomeron contributions and the solid lines include also the hard pomeron.
This research is supported in part by the EU Programme “Training and Mobility of Researchers”, Networks “Hadronic Physics with High Energy Electromagnetic Probes” (contract FMRX-CT96-0008) and “Quantum Chromodynamics and the Deep Structure of Elementary Particles” (contract FMRX-CT98-0194), and by PPARC.
References
[1] ZEUS collaboration: J Breitweg et al, Europ Phys Jour C1 (1998) 81 and hep-ex/9910038
[2] D Aston et al, Nuclear Physics B209 (1982) 56
[3] A Donnachie and P V Landshoff, hep-ph/9910262
[4] A Donnachie and P V Landshoff, Physics Letters B348 (1995) 213
[5] A Donnachie and P V Landshoff, Physics Letters B296 (1992) 227
[6] G Gounaris and J J Sakurai, Physical Review Letters 21 (1968) 244; F M Renard, Nuclear Physics B15 (1970) 267
[7] G Veneziano, Nuovo Cimento 57A (1968) 190
[8] A Donnachie and P V Landshoff, Nuclear Physics B231 (1983) 189
[9] A Donnachie and P V Landshoff, Physics Letters B437 (1998) 408
[10] ZEUS collaboration: A Breitweg et al, hep-ex/9908012
[11] H1 Collaboration, submitted to the International Europhysics Conference on High Energy Physics HEP99, Tampere, Finland, July 1999
[12] J Busenitz et al, Physical Review D40 (1989) 1
[13] ZEUS collaboration: M Derrick et al, Physics Letters B377 (1996) 259
no-problem/9912/quant-ph9912106.html | ar5iv | text | # Atom Chips
## Abstract
Atoms can be trapped and guided using nano-fabricated wires on surfaces, achieving the scales required by quantum information proposals. These Atom Chips form the basis for robust and widespread applications of cold atoms ranging from atom optics to fundamental questions in mesoscopic physics, and possibly quantum information systems.
In mesoscopic quantum electronics, electrons move inside semiconductor structures and are manipulated using potentials where at least one dimension is comparable to the de Broglie wavelength of the electrons. Similar potentials can be created for neutral atoms moving microns above surfaces, using nano-fabricated charged and current-carrying structures. Surfaces carrying such structures form Atom Chips which, for coherent matter wave optics, may form the basis for a variety of novel applications and research tools, similar to what integrated circuits are for electronics.
In this work we make use of the magnetic interaction $`V_{mag}=-\stackrel{}{\mu }\cdot \stackrel{}{B}`$, based on the coupling of the atomic magnetic moment $`\stackrel{}{\mu }`$ to the magnetic field $`\stackrel{}{B}`$, to trap and manipulate atoms close to the surface of an Atom Chip. The trapping potentials are created by superposing a homogeneous magnetic bias field with the field generated by thin current-carrying wires. The trap depth is given by the homogeneous field, the gradients and curvatures by the magnetic fields from the wire.
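For orientation, the simplest such trap, a single thin wire plus a homogeneous bias field, can be estimated analytically. The Python sketch below is our own back-of-the-envelope estimate assuming an idealised, infinitely thin wire; with the wire current and bias field quoted later in the text ($`200mA`$ and $`40G`$) it places the trap roughly $`10\mu m`$ above the wire with a gradient of order $`40kG/cm`$, the same order as the measured gradients (the finite width of the real $`10\mu m`$ wire reduces the achievable gradient).

```python
import numpy as np

MU0 = 4e-7 * np.pi     # vacuum permeability, T m / A

def side_guide(current_amp, bias_tesla):
    """Trap position and transverse gradient for the simplest wire trap: an
    idealised, infinitely thin wire of current I plus a homogeneous bias field
    B parallel to the surface.  The wire field mu0*I/(2*pi*r) cancels the bias
    at r0, leaving a two-dimensional quadrupole of gradient B/r0."""
    r0 = MU0 * current_amp / (2 * np.pi * bias_tesla)   # height of the field zero
    gradient = bias_tesla / r0                          # T/m at the minimum
    return r0, gradient

# parameters quoted later in the text: 200 mA in the 10 micron wire, 40 G bias
r0, grad = side_guide(0.2, 40e-4)
print(f"trap height ~ {r0 * 1e6:.0f} micron, gradient ~ {grad * 0.1:.0f} kG/cm")  # 1 T/m = 0.1 kG/cm
```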
We have previously reported on the manipulation of neutral atoms using thin (down to below $`1\mu m`$) charged wires and current-carrying wires (down to $`25\mu m`$) to form guides, beam splitters, and Z- or U-shaped 3-dimensional traps. These structures were free-standing.
The next step was to turn to surface-mounted wires, which was recently achieved for large structures. However, the full potential of surface-mounted atom optics lies in the robust miniaturization down to the mesoscopic scale. Such a move is primarily motivated by the theoretically required scale needed to achieve entanglement with neutral atoms through controlled collisions or cavity QED, entanglement being the basic building block for quantum information devices.
Here we present such a nanofabricated device with which the required ground state size of less than $`100nm`$ was achieved. This is a first step towards our vision, the realization of a fully integrated Atom Chip. We start by describing the chip and the experimental setup, followed by a presentation of the results. Finally, we discuss potential applications and future perspectives.
The chip we have used in this work is made of a $`2.5\mu m`$ gold layer placed on a $`600\mu m`$ thick GaAs substrate. The gold layer is patterned using nano-fabrication technology. The scale limit of the process used is well below $`100nm`$.
In figure 1a we present the main elements of the chip design used in the work described here. Each of the large U-shaped wires, together with a bias field, creates a quadrupole field, which may be used to form a Magneto-Optical-Trap (MOT) on the chip as well as a magnetic trap. Both U-shaped wires together may be used to form a strong magnetic trap in order to ’load’ atoms into the smaller structures, or as an on-board (i.e. without need for external coils) bias field, for guides and traps created by the thin wire running between them. The thin wires are $`10\mu m`$ wide and depending on the contact used, may form a U-shaped or a Z-shaped magnetic trap or a magnetic guide. The chip wires are all defined by boundaries of $`10\mu m`$ wide etchings in which the conductive gold has been removed. This leaves the chip as a gold mirror (with $`10\mu m`$ etchings) and it can be used to reflect the laser beams for the MOT during the cooling and collecting of atoms. Figure 1b presents the mounted chip before it is introduced into the vacuum chamber. In addition, a U-shaped $`1mm`$ thick wire, capable of carrying up to $`20A`$ of current, has been put underneath the chip in order to assist with the loading of the chip. Its location and shape are identical to those of one of the $`200\mu m`$ U-shaped wires and it differs only in the amount of current it can carry.
The chip assembly (Fig. 1b) is then mounted inside a vacuum chamber used for atom trapping experiments, with optical access for the laser beams and the observation cameras and with the possibility of applying the desired bias fields (Fig. 2). For a more detailed description of the apparatus and the atom trapping procedure, see .
The experimental procedure for loading cold atoms into the small traps on the chip is the following:
In the first step typically $`10^8`$ $`{}_{}{}^{7}Li`$ atoms are loaded from an effusive atomic beam into a MOT . Because the atoms have to be collected a few millimeters away from a surface we use a ’reflection’ MOT . Thereby, the 6 laser beams needed for the MOT are formed from 4 beams by reflecting two of them off the chip surface (Fig. 2). Hence atoms above the chip actually encounter six light beams. To assure a correct magnetic field configuration needed for the formation of a MOT, one of the reflected light beams has to be in the axis of the MOT coils. Figure 3a shows a top view of the chip and the reflection MOT sitting above the U-shaped wires.
The large external quadrupole coils are then switched off while the current in the U-shaped wire underneath the chip is switched on (up to $`16A`$), together with an external bias field ($`8G`$). This forms a nearly identical, but spatially smaller, quadrupole field as compared to the fields of the large coils. The atoms are thus transferred to a secondary MOT which by construction is always well aligned with the chip (Fig. 3b). By changing the bias field, the MOT can be shifted close to the chip surface (typically, $`2mm`$). The laser power and detuning are changed to further cool the atoms, giving us a sample with a temperature below $`200\mu K`$.
In the next step, the laser beams are switched off and the quadrupole field serves as a magnetic trap in which the low field seeking atoms are attracted to the minimum of the field. Without the difficulties of near surface shadows hindering the MOT, the magnetic trap can now be lowered further towards the surface of the chip (Fig. 3c). This is simply done by increasing the bias field (up to $`19G`$). Atoms are now close enough so that they can be trapped by the chip fields. The loading of the chip has begun.
Next, 2A are sent through each of the two $`200\mu m`$ U-shaped wires on the chip and the current in the U-shaped wire located underneath the chip is ramped down to zero. This procedure brings the atoms even closer to the chip, compresses the trap considerably, and transfers the atoms to a magnetic trap formed by the currents in the chip. The distances of the atoms from the surface are now typically a few hundred microns (Fig. 3d).
Finally, the $`10\mu m`$ wire trap is loaded in much the same way. It first receives a current of $`300mA`$. Then the current in both the U-shaped wires is ramped down to zero (Fig. 4). Atoms are now typically a few tens of microns above the surface (Fig. 3e).
These guides and traps can be further compressed by ramping up the bias magnetic field. In this process we typically achieve gradients of $`>25kG/cm`$. By applying a bias field of $`40G`$ and a current of $`200mA`$ in the $`10\mu m`$ wire we achieve trap parameters with a transverse ground state size below 100 nm and frequencies of above $`100kHz`$ (as required by the quantum computation proposals ).
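These figures are consistent with a simple thin-wire estimate; for $`I=200mA`$ and $`B_{bias}=40G`$, and neglecting the finite $`10\mu m`$ width of the conductor,
$$r_0=\frac{\mu _0I}{2\pi B_{bias}}\approx 10\mu m,\qquad B^{\prime }\approx \frac{B_{bias}}{r_0}\approx 40kG/cm,$$
of the same order as the measured values quoted above.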
By running the current through a longer $`10\mu m`$ section of the thin wire, we turn the magnetic trap into a guide, and atoms could be observed expanding along it (Fig. 3f).
In an additional experiment we used the thick wires on the chip to create an on-chip bias field for the trapping. In the experiment this is done by sending current through the two U-traps in the opposite direction with respect to the current in the $`10\mu m`$ wire, which creates a magnetic field parallel to the chip surface. Hence, we demonstrate trapping of atoms on a self-contained chip.
In these small traps, the atom gas can be compressed to the point where direct visual observation is difficult. In such a case, we observe those atoms after guiding or trapping, by ’pulling’ them up from the surface into a less compressed wire trap (by increasing the wire current or decreasing the bias field).
During the transfer from the large magnetic trap to the small 10 $`\mu m`$ trap the density of the atomic cloud is increased by up to a factor of 350. As the trap is compressed, the temperature of the atoms rises, and if in the course of this compression the trapping potential is not deep enough, atoms are lost. In our case, the trap depth is uniquely determined by the bias field used, which leads to depths $`E=m_Fg_F\mu _B|B|`$ ranging from $`6MHz`$ ($`0.25mK`$), for the $`8G`$ bias field and $`|m_F|=1`$, to $`70MHz`$ ($`3mK`$), for the $`50G`$ bias field and $`|m_F|=2`$. This adiabatic heating and the finite trap depth limited the transfer efficiency for atoms from the large magnetic quadrupole into the smallest chip trap to $`<`$50 %.
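These depths follow from the Zeeman formula with the standard <sup>7</sup>Li ground-state values, $`|g_F|=1/2`$ and $`\mu _B/h\approx 1.4MHz/G`$; for example,
$$E/h=|m_Fg_F|\frac{\mu _B}{h}|B|\approx \frac{1}{2}\times 1.4MHz/G\times 8G\approx 6MHz,$$
and correspondingly $`1\times 1.4MHz/G\times 50G=70MHz`$, i.e. about $`3mK`$ when converted with $`k_B`$.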
Since we use a trapped atomic sample consisting of 3 different spin states ($`|F=2,m_F=2\rangle `$, $`|F=2,m_F=1\rangle `$, and $`|F=1,m_F=-1\rangle `$), the large compression also increases the rate for inelastic two body spin flip collisions dramatically. For our Li sample this rate is similar to the elastic collision rate and is therefore a good estimate of the achievable collision rates in a polarized sample. From measured decay curves we estimate the collision rate to be of the order of 20 $`s^{-1}`$ for atoms in a typical small chip trap. This estimate of the scattering rate in the small chip traps is supported by the observation that the atoms expand very fast into the wire guide, indicating that energy gained from the transverse compression of the trap is transformed efficiently into longitudinal velocity at a very high rate.
The above shows that the concept of an Atom Chip clearly works. We have demonstrated that a wide variety of magnetic potentials may be realized with simple wires on surfaces. Wires together with a bias field can produce quadrupole fields for a MOT, 3D minima for trapping, and 2D minima for guiding. Furthermore it is very easy to manipulate the center of the trap and its width. We have shown that loading such an atom trap $`\mu m`$ above the surface does not present a major problem and trap parameters with a transverse ground state size below $`100nm`$ and frequencies of above $`100kHz`$ have been achieved. In addition we could trap atoms exclusively with the chip fields, creating the required bias fields ’on board’. Last but not least, it has been shown that standard nano-fabrication techniques and materials may be utilized to build these Atom Chips. The wires on the surface can stand sufficiently high current densities ($`>10^6A/cm^2`$) in vacuum and at room temperature. Together with the scaling laws of these traps , this will allow us to use much thinner wires and reach traps with ground state sizes of $`10nm`$ and trap frequencies in the MHz range.
We conclude with a long term outlook. In this work we have successfully realized a step which is but one of many still needed. A final integrated Atom Chip, should have a reliable source of cold atoms, for example a BEC , with an efficient loading mechanism, single mode guides for coherent transportation of atoms, nano-scale traps, movable potentials allowing controlled collisions for the creation of entanglement between atoms, extremely high resolution light fields for the manipulation of individual atoms, and internal state sensitive detection to read out the result of the processes that have occurred (e.g. the quantum computation). All of these, including the bias fields and probably even the light sources, could be on-board a self-contained chip. This would involve sophisticated 3D nano-fabrication and the integration of a diversity of electronic and optical elements, as well as extensive research into fundamental issues such as decoherence near a surface. Such a robust and easy to use device, would make possible advances in many different fields of quantum optics: from applications in atom optics such as clocks and sensors to implementations of quantum information processing and communication .
We would like to thank A. Chenet, A. Kasper and A. Mitterer for help in the experiments. Atom chips used in the preparation of this work and in the actual experiments were fabricated at the Institut für Festkörperelektronik, Technische Universität Wien, Austria, and the Sub-micron center, Weizmann Inst. of Science, Israel. We thank E. Gornik, C. Unterrainer and I. Bar-Joseph of these institutions for their assistance. Last but not least, we gratefully acknowledge P. Zoller and T. Calarco who are responsible for the theoretical vision. This work was supported by the Austrian Science Foundation (FWF), project S065-05 and SFB F15-07, the Jubiläums Fonds der Österreichischen Nationalbank, project 6400, and by the European Union, contract Nr. TMRX-CT96-0002. B.H. acknowledges financial support from the Svenska Institutet. |
no-problem/9912/hep-th9912140.html | ar5iv | text | # Untitled Document
SPIN-1999/31 DFTT-99-66 ITFA-99-41
hep-th/9912140
Thermal effects in perturbative noncommutative gauge theories
G. Arcioni$`^{\mathrm{a},\mathrm{c},}`$<sup>1</sup> E-mail: Arcioni@to.infn.it, G.Arcioni@phys.uu.nl and M.A. Vázquez-Mozo$`^{\mathrm{b},\mathrm{c},}`$<sup>2</sup> E-mail: vazquez@wins.uva.nl, M.Vazquez-Mozo@phys.uu.nl
$`^\mathrm{a}`$ Dipartimento di Fisica Teorica, Università di Torino,
Via P. Giuria 1, I-10125 Torino, Italy
$`^\mathrm{b}`$ Instituut voor Theoretische Fysica, Universiteit van Amsterdam,
Valckenierstraat 65, 1018 XE Amsterdam, The Netherlands
$`^\mathrm{c}`$ Spinoza Instituut, Universiteit Utrecht,
Leuvenlaan 4, 3584 CE Utrecht, The Netherlands
The thermodynamics of gauge theories on the noncommutative plane is studied in perturbation theory. For $`U(1)`$ noncommutative Yang-Mills we compute the first quantum correction to the ideal gas free energy density and study its behavior in the low and high temperature regimes. Since the noncommutativity scale effectively cuts off interactions at large distances, the theory is regular in the infrared. In the case of $`U(N)`$ noncommutative Yang-Mills we evaluate the two-loop free energy density and find that it depends on the noncommutativity parameter through the contribution of non-planar diagrams.
12/99
1. Introduction
Noncommutative geometry \[1\] has been a recurrent issue in physics in the last decades. After several attempts to incorporate the mathematical formalism in string field theory and even the standard model \[3\]\[4\] and the quantum Hall effect \[5\], it has recently re-emerged in the context of string/M-theory in the presence of constant background fields \[6\]\[7\]\[8\]. Since the low-energy limit of these configurations is described in terms of a supersymmetric gauge theory living in a noncommutative space, the study of these types of nonlocal field theories has received renewed attention lately. Now, however, because of their stringy connections, new tools are available to study the physics of noncommutative field theories. As an example, the extension of the AdS/CFT correspondence to backgrounds with constant vacuum values of the (Neveu-Schwarz)<sup>2</sup> tensor field \[9\]\[10\] makes it feasible to study noncommutative field theories also in the strong coupling regime \[11\]\[12\]\[13\]\[14\]\[15\]. On the perturbative side, several aspects of noncommutative gauge theories have been recently addressed in \[16\]\[17\]\[18\].
The physical idea behind the application of noncommutative geometry is that of the quantization of space-time itself by introducing noncommuting space-time coordinates
$$[x^\mu ,x^\nu ]=2i\theta ^{\mu \nu },\qquad \mu ,\nu =0,\dots ,d-1.$$
Roughly speaking, in ordinary quantum theory the canonical commutation relations lead to a quantization of the phase space that results in a smeared symplectic geometry at short distances due to the uncertainty principle. Following a similar line of reasoning, one is led to think that the commutation relations (1.1) will smear the space-time picture at distances shorter than $`\sqrt{\theta }`$, thus imposing a natural cutoff for the description of Nature in terms of a local quantum field theory.
In the formalism of noncommutative geometry, the geometrical features of the noncommutative manifold are reconstructed by considering the deformation of the $`C^{\ast }`$-algebra of continuous complex functions defined on it and vanishing at infinity, using the Weyl product
$$(f\star g)(x)=f(x)e^{i\overleftarrow{\partial }_i\theta ^{ij}\overrightarrow{\partial }_j}g(x).$$
Consequently, quantum field theories on noncommutative spaces can be formulated by writing the ordinary action and replacing the commutative product with the $`\star `$-product defined by (1.1); because of its non-polynomial character, the resulting field theories will be non-local. It is actually this non-locality that is argued to smear physics at short distances.
The study of systems at finite temperature usually provides good insights into their physical behavior. In this note we will be concerned with the thermodynamics of Yang-Mills theories on noncommutative spaces (NCYM) of the type $`\mathcal{M}_\theta \times 𝐑_t`$ where $`\mathcal{M}_\theta `$ is some $`(d-1)`$-dimensional noncommutative space, typically $`𝐑_\theta ^{d-1}`$, characterized by a deformation matrix $`\theta ^{ij}`$ ($`i,j=1,\dots ,d-1`$, $`d>2`$). In this setup, we can compute the thermodynamical potentials using the imaginary time formalism by compactifying the Euclidean time to length $`\beta =T^{-1}`$. The corresponding Feynman rules are thus obtained from the Euclidean Feynman rules of zero temperature noncommutative gauge theories by quantizing the time components of the momenta in units of $`2\pi T`$. One technical payoff of restricting noncommutativity to the spatial sections is that the resulting non-polynomial functions of the momenta in the Feynman integrals do not involve the discrete Euclidean momentum. Thus, the Matsubara sums that appear in the computation of the free energy are of the same kind that one encounters in ordinary commutative quantum field theories.
In the following we will focus our attention on gauge theories on $`𝐑_\theta ^2\times 𝐑_t`$ and $`𝐑_\theta ^3\times 𝐑_t`$. Actually, in the four-dimensional case we can always find a rigid orthogonal coordinate transformation $`\stackrel{~}{x}=Ax`$ that takes a generic antisymmetric matrix $`\theta _{ij}`$ to its block off-diagonal form
$$A\left(\begin{array}{ccc}0& \theta _{12}& \theta _{13}\\ -\theta _{12}& 0& \theta _{23}\\ -\theta _{13}& -\theta _{23}& 0\end{array}\right)A^T=\left(\begin{array}{ccc}0& \theta & 0\\ -\theta & 0& 0\\ 0& 0& 0\end{array}\right)$$
with $`\theta ^2=\theta _{12}^2+\theta _{13}^2+\theta _{23}^2`$. Thus, when expressed in the appropriate system of coordinates, we see that $`𝐑_\theta ^3\times 𝐑_t`$ is actually equivalent to $`𝐑_\theta ^2\times 𝐑\times 𝐑_t`$.
The present paper is organized as follows: in Sec. 2 we study the loop corrections to the thermodynamics of $`U(1)`$ pure NCYM theories at finite temperature. Sec. 3 will be devoted to the study of the non-abelian $`U(N)`$ case (and its supersymmetric extensions), where we will find that all the dependence of the two-loop free energy density on $`\theta `$ comes from the contribution of the $`U(1)`$ part of $`U(N)`$ to non-planar diagrams. Finally, in Sec. 4 we will summarize our conclusions.
2. Thermodynamics of $`U(1)`$ NCYM
Noncommutative pure $`U(1)`$ Yang-Mills theory is especially interesting, since in this case the interaction appears solely as the result of noncommutativity. The action of a $`U(1)`$ gauge field on $`𝐑_\theta ^{d-1}\times 𝐑_t`$ can be written as
$$S_{U(1)}=-\frac{1}{4}\int d^dx\,F_{\mu \nu }\star F^{\mu \nu }$$
where the star product is defined by (1.1) and the field strength is given in terms of the Moyal bracket $`\{f,g\}_{\mathrm{MB}}=f\star g-g\star f`$ as
$$F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu +ig\{A_\mu ,A_\nu \}_{\mathrm{MB}}.$$
The Feynman rules for this theory are easily written in momentum space, as shown in references \[19\]\[20\]. The resulting diagrammatic expansion is qualitatively similar to ordinary non-abelian Yang-Mills theories, except for the momentum dependence of the vertices through the function $`\mathrm{sin}(\theta _{ij}p^iq^j)`$. As we will see later, this extra dependence on the momenta with respect to the ordinary non-abelian Yang-Mills theory has important consequences on the infrared behavior of the noncommutative theory.
At one-loop level, the free energy density is determined by the quadratic part of the action and as a consequence it is independent of the noncommutativity of the base space. Thus, the result is identical to that of pure QED<sub>d</sub>, namely
$$\mathcal{F}(T)_{1-\mathrm{loop}}=-(d-2)\frac{\mathrm{\Gamma }(d/2)}{\pi ^{\frac{d}{2}}}\zeta (d)T^d.$$
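For $`d=4`$, for instance, this reduces to the familiar photon gas value,
$$\mathcal{F}(T)_{1-\mathrm{loop}}=-2\frac{\mathrm{\Gamma }(2)}{\pi ^2}\zeta (4)T^4=-\frac{\pi ^2}{45}T^4,$$
i.e. two transverse polarizations, each contributing $`-\pi ^2T^4/90`$.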
Corrections to the ideal gas contributions can be computed in perturbation theory using the Feynman rules given in ref. \[20\] (see also \[21\]). The first term correcting eq. (2.1) comes from two-loop diagrams. In our case, the final result can be cast in the form
$$\mathcal{F}(T)_{2-\mathrm{loop}}=g^2(d-2)^2\int \frac{d^{d-1}p}{(2\pi )^{d-1}}\int \frac{d^{d-1}q}{(2\pi )^{d-1}}\frac{n_b(p)n_b(q)}{\omega _p\omega _q}\mathrm{sin}^2\theta (p,q)$$
where $`\omega _p=p`$, $`\theta (p,q)=\theta _{ij}p^iq^j`$ and
$$n_b(p)=\frac{1}{e^{p/T}-1}$$
is the Bose-Einstein distribution function. Ultraviolet divergences in the zero temperature part of the diagrams are taken care of by inserting the corresponding one-loop counterterms at $`T=0`$ \[20\]\[22\].
A first thing to be noticed is that, in spite of the similarities between the diagrammatic expansion of the free energy of noncommutative $`U(1)`$ gauge theory and that of pure YM<sub>d</sub>, here the infrared behavior is much softer due to the presence of the factor $`\mathrm{sin}^2\theta (p,q)`$, which vanishes as $`𝒪(p^2)`$ when $`p\to 0`$. Thus, the two-loop correction (2.1) is well defined for all $`d\ge 3`$.
Let us first analyze the three dimensional case ($`d=3`$). Here, the antisymmetric matrix $`\theta _{ij}`$ can be written as $`\theta _{ij}=\theta ϵ_{ij}`$ and the integration over angular variables in (2.1) can be easily performed with the result
$$\mathcal{F}(T)_{2-\mathrm{loop}}=\frac{g^2T^2}{8\pi ^2}\int _0^{\infty }du\int _0^{\infty }dv\frac{1-J_0(2\theta T^2uv)}{(e^u-1)(e^v-1)}.$$
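The Bessel function arises from the angular average of the vertex factor: with $`\theta (p,q)=\theta pq\mathrm{sin}\varphi `$ one uses
$$\frac{1}{2\pi }\int _0^{2\pi }d\varphi \,\mathrm{sin}^2(x\mathrm{sin}\varphi )=\frac{1}{2}\left[1-J_0(2x)\right],\qquad x=\theta pq,$$
and the rescaling $`u=p/T`$, $`v=q/T`$ then leads directly to the expression above.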
We notice that this integral is both infrared ($`u,v\to 0`$) and ultraviolet ($`u,v\to \infty `$) convergent. Let us study first the case when the temperature is much smaller than the energy scale $`\frac{1}{\sqrt{\theta }}`$ associated with noncommutativity effects. If $`T\sqrt{\theta }\ll 1`$, we can expand the Bessel function in power series and integrate term by term. The result is an asymptotic series valid for small $`T\sqrt{\theta }`$ whose first term is
$$\mathcal{F}(T)_{2-\mathrm{loop}}\simeq \frac{\zeta (3)^2}{2\pi ^2}g^2\theta ^2T^6+\dots $$
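The coefficient can be checked using the small-argument expansion $`1-J_0(z)\simeq z^2/4`$ and the elementary integral $`\int _0^{\infty }du\,u^2/(e^u-1)=2\zeta (3)`$:
$$\frac{g^2T^2}{8\pi ^2}\,\theta ^2T^4\left[\int _0^{\infty }du\frac{u^2}{e^u-1}\right]^2=\frac{g^2\theta ^2T^6}{8\pi ^2}\left[2\zeta (3)\right]^2=\frac{\zeta (3)^2}{2\pi ^2}g^2\theta ^2T^6.$$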
The asymptotic character of the series is easily understood by realizing that by expanding the Bessel function in power series and truncating the series we fall short in reproducing the integrand in the ultraviolet region, but this is precisely the region of the integral that is effectively cutoff at low temperatures.
The evaluation of (2.1) in the opposite region $`T\gg 1/\sqrt{\theta }`$ is more complicated due to the peculiar infrared structure of the theory at hand. We know that, asymptotically, the Bessel function oscillates very fast for large values of the argument. If we introduce a cutoff $`\mathrm{\Lambda }_\theta `$ (that in principle will depend on the value of $`T\sqrt{\theta }`$) to isolate the infrared sector of the integral, we have that
$$\int _{\mathrm{\Lambda }_\theta }^{\infty }du\int _{\mathrm{\Lambda }_\theta }^{\infty }dv\frac{1-J_0(2\theta T^2uv)}{(e^u-1)(e^v-1)}\simeq \int _{\mathrm{\Lambda }_\theta }^{\infty }\frac{du}{e^u-1}\int _{\mathrm{\Lambda }_\theta }^{\infty }\frac{dv}{e^v-1}$$
since for large values of the argument the rapidly oscillating Bessel function will be averaged to zero. Thus, when $`T\sqrt{\theta }\gg 1`$ we can write
$$\mathcal{F}(T)_{2-\mathrm{loop}}\simeq \frac{1}{3}\mathcal{F}(T,\mathrm{\Lambda }_\theta )_{2-\mathrm{loop}}^{SU(2)}+f(T,\mathrm{\Lambda }_\theta )_{\mathrm{IR}}$$
where $`\mathcal{F}(T,\mathrm{\Lambda }_\theta )_{2-\mathrm{loop}}^{SU(2)}`$ is the two-loop free energy density of ordinary pure YM<sub>3</sub> with gauge group $`SU(2)`$ and infrared momentum cutoff $`\mathrm{\Lambda }_\theta T`$, and $`f(T,\mathrm{\Lambda }_\theta )_{\mathrm{IR}}`$ is some contribution containing the information from the infrared sector of the theory.
We can interpret the decomposition (2.1) in the sense that, in the large-$`T\sqrt{\theta }`$ limit, the ultraviolet sector of the $`U(1)`$ noncommutative gauge theory is well described in terms of an ordinary Yang-Mills theory with $`C_2(G)=2`$ \[22\]\[19\]\[20\]. The numerical prefactor $`\frac{1}{3}`$ just reflects the fact that the $`U(1)`$ noncommutative gauge theory only has one propagating “photon” while the $`SU(2)`$ Yang-Mills theory has three propagating gluons. Thus, the contribution to the free energy from the ultraviolet part of the theory is given by the free energy per vector boson of an $`SU(2)`$ commutative gauge theory. On the other hand, we see that the theory in the infrared is radically different from ordinary Yang-Mills which is infrared divergent at two loops in three dimensions. This feature is very much reminiscent of the Morita equivalence \[23\] between $`U(1)`$ Yang-Mills theory on the noncommutative torus and a $`U(N)`$ gauge theory on a commutative one in the presence of a magnetic flux, where the infrared sector of the theory is regularized by the presence of the background twisted gauge field \[24\]\[25\].
Let us focus our attention now on the four-dimensional case. From the discussion in the Introduction, we know that we can choose coordinates $`x,y,z`$ such that noncommutativity is restricted to the $`xy`$-plane, $`[x,y]=2i\theta `$, $`[x,z]=[y,z]=0`$. Since the interacting character of the theory is entirely due to the noncommutativity of the base space, we find that our theory will be free whenever the momenta are orthogonal to the $`xy`$-plane. Thus, in order to study the integral (2.1) it is convenient to use cylindrical coordinates where the $`z`$-coordinate coincides with the “central” direction. Again we can integrate over angular variables to find
$$\begin{array}{cc}\hfill \mathcal{F}(T)_{2-\mathrm{loop}}& =\frac{g^2T^4}{8\pi ^4}\int _{-\infty }^{\infty }du_z\int _{-\infty }^{\infty }dv_z\hfill \\ \hfill \times & \int _0^{\infty }\frac{udu}{\sqrt{u^2+u_z^2}}\int _0^{\infty }\frac{vdv}{\sqrt{v^2+v_z^2}}\frac{1-J_0(2T^2\theta uv)}{(e^{\sqrt{u^2+u_z^2}}-1)(e^{\sqrt{v^2+v_z^2}}-1)}\hfill \end{array}$$
As in three dimensions, when $`T\sqrt{\theta }`$ is small we can expand the Bessel function to get an asymptotic series in powers of $`T\sqrt{\theta }`$,
$$\mathcal{F}(T)_{2-\mathrm{loop}}\simeq 2g^2\left(\frac{\pi ^2}{45}\right)^2\theta ^2T^8+\dots $$
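Again the coefficient can be checked with $`1-J_0(z)\simeq z^2/4`$; each momentum integral then factorizes into
$$\int _{-\infty }^{\infty }du_z\int _0^{\infty }\frac{u^3\,du}{\sqrt{u^2+u_z^2}\left(e^{\sqrt{u^2+u_z^2}}-1\right)}=\int _0^\pi \mathrm{sin}^3\psi \,d\psi \int _0^{\infty }\frac{r^3\,dr}{e^r-1}=\frac{4}{3}\cdot \frac{\pi ^4}{15}=\frac{4\pi ^4}{45},$$
so that $`\frac{g^2T^8\theta ^2}{8\pi ^4}\left(4\pi ^4/45\right)^2=2g^2(\pi ^2/45)^2\theta ^2T^8`$.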
When $`T\sqrt{\theta }\gg 1`$ it is difficult to estimate the value of the integral (2.1). Since the function resulting from integration of the angular variables is the same as in the three-dimensional case, we can argue along similar lines that, again, the free energy can be decomposed as in (2.1) into an $`SU(2)`$-like ordinary Yang-Mills piece and a contribution that takes care of the infrared sector of the theory. Although ordinary Yang-Mills in four dimensions is infrared finite at two loops and each term in (2.1) is finite in the limit $`\mathrm{\Lambda }_\theta \to 0`$ (when $`T\sqrt{\theta }\to \infty `$), the infrared part $`f(T,\mathrm{\Lambda }_\theta )_{\mathrm{IR}}`$ always gives a non-trivial contribution, even in this limit. This is due to the fact that in the infrared region the function multiplying $`J_0(2T^2\theta uv)`$ in the integrand is unbounded and rapidly varying close to $`u,v=0`$. Thus, when $`T\sqrt{\theta }`$ is very large the highly oscillatory Bessel function is not able to average it to zero since the function multiplying it is not approximately constant in a single period.
In principle, one could compute higher order corrections to the thermodynamic potential (2.1). In ordinary YM<sub>4</sub> at finite temperature the self-interaction of gluons introduces infrared divergences at three loops that do not cancel order by order in perturbation theory and have to be taken care of by resumming the so-called ring diagrams. The result is a mild breakdown of the perturbative expansion that now is no longer a series in $`g^2`$ alone but also contains terms of order $`g^3`$ \[26\], $`g^4\mathrm{log}g^2`$ \[27\] and $`g^5`$ \[28\]\[29\] (for similar results in QED see \[30\]). The situation is worsened by extra divergences due to the self-interaction of the transverse gluons that invalidates perturbation theory at $`𝒪(g^6)`$.
We can study what happens with $`U(1)`$ Yang-Mills theory on $`𝐑_\theta ^2\times 𝐑\times 𝐑_t`$ for higher-loop contributions to the free energy. A first problem to be solved would be whether the noncommutative $`U(1)`$ theory itself is renormalizable at $`T=0`$ beyond one loop. Let us however assume that the ultraviolet divergences in the zero temperature sector can be handled by some cutoff and concentrate our attention on the (ultraviolet finite) temperature dependent contributions. At three loops, infrared divergences are associated with the existence of a non-vanishing thermal mass at one loop. In the case at hand, however, if we compute the static limit of the one-loop self-energy of the gluon we find that it vanishes quadratically with the external spatial momentum,
$$\mathrm{\Pi }_{00}(q_0=0,\vec{q}\to 0)=2\mathrm{\Pi }_\mu ^\mu (q_0=0,\vec{q}\to 0)\simeq \frac{16\pi ^2}{45}g^2\theta ^2T^4q_{xy}^2+𝒪(q_{xy}^3)$$
where $`q_{xy}`$ is the modulus of the projection of the external momentum $`\stackrel{}{q}`$ on the $`xy`$-plane. The first consequence of this fact, is that three-loop diagrams are free of infrared divergences and therefore the next correction to the free energy is of order $`g^4`$.
The soft behavior of $`U(1)`$ NCYM at low momenta (or large distances) follows from the fact that in this limit the theory becomes free. In physical terms, this is because the theory should reduce itself to its commutative version (a free $`U(1)`$ pure gauge theory) at length scales much bigger than the typical scale of noncommutative effects, i.e. $`\sqrt{\theta }`$. In this sense, this scale plays the role of an infrared cutoff for the noncommutativity-induced interactions, which in the ultraviolet resemble those of a non-abelian gauge theory. Thus, the absence of infrared divergences and non-analytic terms in $`g^2`$ in the lowest orders in perturbation theory seems to be a generic feature of the whole perturbative expansion which would be a series in integer powers of $`g^2`$.
Before closing this Section, let us make some remarks about the possible supersymmetric extensions of $`U(1)`$ NCYM. In the commutative case, we can construct a trivial supersymmetric theory by adding to the pure YM<sub>4</sub> theory the action of a free massless Majorana or Weyl spinor. This theory can be deformed into an interacting supersymmetric theory by switching on the noncommutativity of the base space<sup>3</sup> Extended supersymmetric theories can be obtained by considering a similar action in dimension six or ten and performing dimensional reduction to four dimensions.
$$S=\int d^4x\left[-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+i\overline{\psi }\gamma ^\mu D_\mu \psi \right],$$
where the covariant derivative is defined by $`D_\mu \psi =\partial _\mu \psi -i(A_\mu \star \psi -\psi \star A_\mu )`$. To compute the two-loop free energy density now we have to add to the result for pure NCYM the contribution coming from the fermion loop with the result
$$\mathcal{F}(T,\theta )_{2-\mathrm{loop}}=4g^2\int \frac{d^3p}{(2\pi )^3}\int \frac{d^3q}{(2\pi )^3}\frac{1}{\omega _p\omega _q}[n_b(p)n_b(q)+2n_b(p)n_f(q)+n_f(p)n_f(q)]\mathrm{sin}^2\theta (p,q)$$
with $`n_f(p)=1/(e^{p/T}+1)`$ the Fermi-Dirac distribution function. The structure of expression (2.1) is similar to the two-loop free energy of ordinary SYM \[31\]. Here again the theory at low momenta becomes trivial (a free gauge field plus an “adjoint” $`U(1)`$ fermion), rendering the theory infrared finite, while in the ultraviolet it resembles $`𝒩=1`$ SYM<sub>4</sub> with $`C_2(G)=2`$. The three-dimensional case can be worked out along similar lines, starting with the action (2.1) in three dimensions (with $`\psi `$ a Majorana spinor). As in the non-supersymmetric case, the result is infrared finite.
3. Non-abelian NCYM
Let us consider now non-abelian four-dimensional noncommutative gauge theories on $`𝐑_\theta ^2\times 𝐑\times 𝐑_t`$. The non-abelian generalization of the action (2.1) is easily written as (group generators are normalized according to $`\mathrm{Tr}T^aT^b=\frac{1}{2}\delta ^{ab}`$)
$$S=-\frac{1}{2}\int d^4x\,\mathrm{Tr}[F_{\mu \nu }F^{\mu \nu }]$$
where $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu -ig(A_\mu A_\nu -A_\nu A_\mu )`$. Now since our fields are matrix-valued functions, the $`\star `$-product is defined by tensoring the Weyl product (1.1) with the ordinary product of matrices. A first consequence of this is that in order for the gauge fields to form a closed algebra we should restrict the gauge group to $`U(N)`$ \[8\], since we have to demand the group generators to form a closed algebra under ordinary matrix multiplication<sup>4</sup> From the point of view of string theory, one can in principle allow for other groups such as $`SO(N)`$ and $`USp(2N)`$ by introducing orientifold planes. Naively, the introduction of orientifolds will project out the (Neveu-Schwarz)<sup>2</sup> antisymmetric tensor field, and so it looks like one can only obtain commutative space-times. However, the orientation projection is compatible with having quantized background values of the (Neveu-Schwarz)<sup>2</sup> field \[32\] that will lead to a deformed product of functions on the manifold. This deformed product, however, is not necessarily non-commutative. We thank J. de Boer for a discussion on this point. From a purely quantum field theoretical point of view, one can try to construct noncommutative gauge theories using Moyal brackets with gauge groups different from $`U(N)`$. At this level the obstruction to considering these other groups arises in the form of inconsistencies of the resulting non-local quantum field theory. For $`SU(N)`$, for example, the gauge variation of the vector field $`A_\mu `$, $`\delta A_\mu =\partial _\mu \lambda +i(\lambda \star A_\mu -A_\mu \star \lambda )`$, gets an extra piece proportional to the identity matrix
$$\delta A_\mu =\frac{i}{2N}(\lambda ^a\star A_\mu ^a-A_\mu ^a\star \lambda ^a)\mathrm{𝟏}+\dots $$
where the $`\star `$-product on the right hand side corresponds to that of ordinary functions. Thus, ordinary gauge transformations do not keep the vector field in the adjoint representation of $`SU(N)`$. Notice, however, that this extra piece scales as $`𝒪(1/N)`$ and therefore disappears when $`N\to \infty `$, where the theory effectively reduces to the $`U(N)`$ one, as well as in the commutative limit, when we recover ordinary $`SU(N)`$ YM.
In the ideal gas approximation, the free energy is easily computed by adding the contribution of the different degrees of freedom. Since the noncommutativity parameter $`\theta `$ appears only in the interactions terms, the result is identical to the corresponding ordinary Yang-Mills theory
$$\mathcal{F}(T)_{1-\mathrm{loop}}=-\mathrm{dim}G\frac{\pi ^2}{45}T^4$$
where $`\mathrm{dim}G`$ is the dimension of the gauge group, $`N^2`$ for $`U(N)`$.
In order to compute loop corrections to the free gas approximation, we need the Feynman rules for the non-abelian noncommutative Yang-Mills theory. Now, in contrast with the $`U(1)`$ case, there are interactions surviving the commutative $`\theta 0`$ limit, so the structure of the vertices will be more involved. In order to make the computation more transparent, instead of using the “trigonometric basis” for the gauge group generators \[33\], we will write Feynman rules using the structure constants $`f^{abc}`$ and Gell-Mann tensor $`d^{abc}`$ of the gauge group, defined in terms of the generators $`T^a`$ by the identities
$$\begin{array}{cc}\hfill f^{abc}& =-2i\mathrm{Tr}[T^a,T^b]T^c\hfill \\ \hfill d^{abc}& =2\mathrm{Tr}\{T^a,T^b\}T^c\hfill \end{array}$$
where by $`\{,\}`$ we represent the anticommutator of the two generators.
The Feynman rules for the noncommutative $`U(N)`$ Yang-Mills can be easily obtained by writing the action (3.1) in momentum space. It turns out that the only change with respect to the Feynman rules of $`U(N)`$ ordinary Yang-Mills is the replacement on each vertex of the structure constants according to
$$f^{a_1a_2a_3}\to f^{a_1a_2a_3}\mathrm{cos}\theta (p_1,p_2)+d^{a_1a_2a_3}\mathrm{sin}\theta (p_1,p_2)$$
where $`a_i`$ is the color index associated with the particle with (incoming) momentum $`p_i`$. With this only change, the first quantum correction to (3.1) can be computed to give
$$\mathcal{F}(T,\theta )_{2-\mathrm{loop}}=g^2\int \frac{d^3p}{(2\pi )^3}\frac{n_b(p)}{\omega _p}\int \frac{d^3q}{(2\pi )^3}\frac{n_b(q)}{\omega _q}\left[f^{abc}f^{abc}\mathrm{cos}^2\theta (p,q)+d^{abc}d^{abc}\mathrm{sin}^2\theta (p,q)\right].$$
However, for $`U(N)`$ ($`N>1`$) it turns out that<sup>5</sup> For $`N=1`$ we recover the results of the previous Section by setting $`f^{aaa}=0`$ and $`d^{aaa}=2`$.
$$f^{abc}f^{abc}=N(N^2-1),\qquad d^{abc}d^{abc}=N(N^2+1)$$
and therefore the two-loop free energy density is
$$\begin{array}{cc}\hfill \mathcal{F}(T,\theta )_{2-\mathrm{loop}}=& g^2N(N^2-1)\left[\int \frac{d^3p}{(2\pi )^3}\frac{n_b(p)}{\omega _p}\right]^2+2g^2N\int \frac{d^3p}{(2\pi )^3}\frac{n_b(p)}{\omega _p}\int \frac{d^3q}{(2\pi )^3}\frac{n_b(q)}{\omega _q}\mathrm{sin}^2\theta (p,q)\hfill \\ \hfill =& \frac{N^2-1}{144}g^2NT^4+2g^2N\int \frac{d^3p}{(2\pi )^3}\frac{n_b(p)}{\omega _p}\int \frac{d^3q}{(2\pi )^3}\frac{n_b(q)}{\omega _q}\mathrm{sin}^2\theta (p,q)\hfill \end{array}$$
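The numerical factor in the second line follows from the elementary thermal integral
$$\int \frac{d^3p}{(2\pi )^3}\frac{n_b(p)}{\omega _p}=\frac{1}{2\pi ^2}\int _0^{\infty }\frac{p\,dp}{e^{p/T}-1}=\frac{T^2}{12},$$
whose square, multiplied by $`g^2N(N^2-1)`$, reproduces the $`SU(N)`$ term $`\frac{N^2-1}{144}g^2NT^4`$.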
The first ($`\theta `$-independent) part of $`\mathcal{F}(T,\theta )_{2-\mathrm{loop}}`$ is just the result for the two-loop free energy of ordinary pure $`SU(N)`$ Yang-Mills theories in four dimensions \[26\]. On the other hand, the $`\theta `$-dependence in the free energy density is subleading in the large-$`N`$ limit and comes from the contribution of the $`U(1)`$ part of $`U(N)`$ to the two-loop non-planar diagrams. Incidentally, in the large-$`T\sqrt{\theta }`$ limit the contribution of the $`U(1)`$ part of $`U(N)`$ in the ultraviolet combines with the contribution of the $`SU(N)`$ part in the same region in such a way that the final result is of order $`𝒪(N^2)`$ for finite $`N`$. This is further evidence of the planar character of noncommutative Yang-Mills in the large-$`\theta `$ regime at fixed temperature (or at large temperatures and fixed $`\theta `$) \[34\]\[12\]\[15\]\[16\].
In evaluating the free energy density at three loops, we are faced with the customary infrared divergences coming from the “commutative” $`SU(N)`$ part, since the $`U(1)`$ part decouples at large distances ($`p\ll 1/\sqrt{\theta }`$) and will not induce infrared divergences. As usual, these infrared singularities can be handled by resumming $`SU(N)`$ propagators in loops. Since the infrared structure of the theory is similar to commutative $`SU(N)`$ Yang-Mills, thermal perturbation theory is expected to break down at order $`𝒪[(g^2N)^3]`$. As in the previous Section, for small $`T\sqrt{\theta }`$ we can obtain an asymptotic expansion in powers of $`T\sqrt{\theta }`$. In the opposite limit we find that the $`U(1)`$ gauge boson effectively interacts like a non-abelian Yang-Mills field in the ultraviolet.
This result extends to supersymmetric theories straightforwardly \[35\]\[19\]. The Lagrangian of extended NCSYM theories can be written using the trick of dimensional reduction proposed in \[31\]. We start with a $`𝒟_{\mathrm{max}}`$-dimensional “maximal” $`𝒩=1`$ theory on $`\mathcal{M}_\theta \times 𝐑_t\times (𝐒^1)_R^{𝒟_{\mathrm{max}}-4}`$ described by
$$\mathcal{L}=-\frac{1}{2}\mathrm{Tr}F_{\mu \nu }F^{\mu \nu }+i\mathrm{Tr}\overline{\mathrm{\Psi }}\mathrm{\Gamma }^AD_A\mathrm{\Psi }.$$
As in \[31\] we take the limit $`R\to 0`$ and retain only the zero modes of the fields in the internal coordinates, which in loop computations amounts to restricting momenta to four dimensions in the Feynman integrals. Since all the internal coordinates are commutative, the matrix $`\theta _{ij}`$ is completely oblivious of the fact that we are performing dimensional reduction. Thus, we get $`𝒩=1`$, $`𝒩=2`$ and $`𝒩=4`$ NCSYM<sub>4</sub> by taking $`𝒟_{\mathrm{max}}=4,6,10`$ respectively.
Again, Feynman rules for $`U(N)`$ NCSYM are retrieved by the replacement (3.1) on the rules for ordinary SYM. Repeating the analysis above, we find that all dependence on $`\theta `$ comes from the contribution of $`U(1)`$ fields in non-planar diagrams, namely
$$\begin{array}{cc}\hfill \mathcal{F}(T,\theta )_{2-\mathrm{loop}}^{U(N)}& =\mathcal{F}(T,\theta =0)_{2-\mathrm{loop}}^{SU(N)}+\frac{1}{2}(𝒟_{\mathrm{max}}-2)^2\hfill \\ \hfill \times & g^2N\int \frac{d^3p}{(2\pi )^3}\int \frac{d^3q}{(2\pi )^3}\frac{1}{\omega _q\omega _p}\left[n_b(p)n_b(q)+2n_b(p)n_f(q)+n_f(p)n_f(q)\right]\mathrm{sin}^2\theta (p,q)\hfill \end{array}$$
where $`\mathcal{F}(T,\theta =0)_{2-\mathrm{loop}}^{SU(N)}`$ is the two-loop free energy density of $`SU(N)`$ ordinary SYM \[36\]\[31\]\[37\]\[38\].
4. Conclusions
In the present paper we have studied several aspects of the thermodynamics of perturbative gauge theories on noncommutative spaces. The case of $`U(1)`$ NCYM is the most interesting example from a dynamical point of view. We have computed the quantum corrections to the one loop result and found that the theory is free of infrared divergences at three loops: since the free theory is restored at low momenta, the one-loop two-point function of the gauge field vanishes quadratically with the external momentum in the static limit. Although we did not explicitly compute corrections beyond order $`𝒪(g^2)`$, the general structure of the theory seems to indicate that there is no onset of infrared divergences at any order.
For large values of $`T\sqrt{\theta }`$ we have seen that the theory behaves in the ultraviolet as an ordinary non-abelian Yang-Mills theory with $`C_2(G)=2`$. This is in perfect accordance with the result of references \[19\]\[20\]\[22\], in the sense that the ultraviolet divergences of the theory are obtained by averaging the factors $`\mathrm{sin}^2\theta (p,q)`$ in the amplitudes. On the other hand, the theory is completely different from an ordinary non-abelian gauge theory in the infrared. This is due to the fact that at large distances noncommutativity effects are negligible and a free theory is restored.
In the case of $`U(N)`$ NCYM, we have found that the two-loop free energy density only depends on $`\theta `$ through terms of order $`𝒪(1)`$ in the large-$`N`$ limit that correspond to the contribution of the $`U(1)`$ part of $`U(N)`$ in non-planar loop diagrams. Since these $`U(1)`$ contributions are regular in the infrared, we find that the infrared structure of the theory is determined by the $`SU(N)`$ part and thus it is identical to that of ordinary YM theories. In particular, we will have infrared singularities at three loops that can be resummed to give contributions of order $`𝒪[(g^2N)^{3/2}]`$ and $`𝒪[(g^2N)^2\mathrm{log}g^2N]`$, as well as $`𝒪[(g^2N)^{5/2}]`$. Thermal perturbation theory will break down at four loops due to the self-interaction of magnetic $`SU(N)`$ gluons.
Here we have restricted ourselves to gauge theories on the noncommutative plane. It would be interesting to better understand the case of $`U(1)`$ Yang-Mills theories on the noncommutative torus, especially in the case of rational $`\theta `$. The computation of quantum corrections in this case is straightforward, and essentially amounts to replacing in our expressions momentum integrals by discrete sums and rescaling $`\theta _{ij}\to \pi \theta _{ij}`$. By using Morita equivalence, one should be able to relate the free energy of these theories with that of ordinary gauge theories in the presence of twisted background fields.
5. Acknowledgements
We are pleased to thank José Barbón, Jan de Boer, César Gómez and Niels Obers for enlightening discussions. We also heartily thank Shiraz Minwalla for pointing out to us a mistake in the first version of this paper concerning Section 3. We are indebted to The Lorentz Center (Leiden University) and the organizers of the Workshop on Noncommutative Gauge Theories where part of this work was done. G.A. would also like to thank the Spinoza Institute and especially Gerard ’t Hooft for kind hospitality. The work of M.A.V.-M. has been supported by the FOM (Fundamenteel Onderzoek der Materie) Foundation and by University of the Basque Country Grants UPV 063.310-EB187/98 and UPV 172.310-G02/99, and Spanish Science Ministry Grant AEN99-0315.
References
relax A. Connes, Noncommutative Geometry, Academic Press 1994. relax E. Witten, Nucl. Phys. B268 (1986) 253. relax A. Connes and J. Lott, Nucl. Phys. Proc. Suppl. 18 (1990) 29. relax A. Connes, Non-commutative geometry and physics, in: ”Gravitation and Quantizations”, Proceedings of the 1992 Les Houches Summer School. Eds. B. Julia and J. Zinn-Justin. Elsevier 1995. relax J. Bellisard, A. van Elst and H. Schulz-Baldes, The non-commutative geometry of the quantum Hall effect, cond-mat/9411052. relax A. Connes, M. Douglas and A. Schwarz, JHEP 02 (1998) 003 . (hep-th/9711162) relax M.R. Douglas and C.M. Hull, JHEP 02 (1998) 008. (hep-th/9711165) relax N. Seiberg and E. Witten, JHEP 09 (1999) 032. (hep-th/9908142) relax A. Hashimoto and N. Itzhaki, Phys. Lett. B465 (1999) 142. (hep-th/9907166) relax J.M. Maldacena and J.G. Russo, JHEP 09 (1999) 25 . (hep-th/9908134) relax M. Alishahiha, Y. Oz and M.M. Seikh-Jabbari, Supergravity and large N noncommutative field theories, JHEP 11 (1999) 007. (hep-th/9909215) relax J.L.F. Barbón and E. Rabinovici, On 1/N corrections to the entropy of noncommutative gauge theories, Preprint RI-15-99. (hep-th/9910019) relax R.-G. Cai and N. Ohta, On the thermodynamics of large-N noncommutative super Yang-Mills theories, Preprint OU-HET-329 (hep-th/9910092). relax A. Hashimoto and N. Itzhaki, On the hierarchy between non-commutative and ordinary supersymmetric Yang-Mills, Preprint NSF-ITP-99-133, (hep-th/9911057). relax T. Harmark and N.A. Obers, Phase structure of noncommutative field theories and spinning brane bound states, Preprint NBI-HE-99-47 (hep-th/9911169). relax S. Minwalla, M. Van Raamsdonk and N. Seiberg, Noncommutative perturbative dynamics, Preprint PUPT-1905, (hep-th/9912072). relax I.Ya. Aref’eva, D.M. Belov and A.S. Koshelev, Two-loop diagrams in noncommutative $`\varphi _4^4`$ theory, Preprint SMI-15-99 (hep-th/9912075). relax M. Hayakawa, Perturbative analysis on infrared aspects of noncommutative QED on $`𝐑^4`$, (hep-th/9912094). relax M.M. Seikh-Jabbari, JHEP 06 (1999) 015. (hep-th/9903107) relax T. Krajewski and R. Wulkenhaar, Perturbative quantum gauge fields on the noncommutative torus, Preprint CPT-99/P.3794. (hep-th/9903187) relax T. Krajewski, Géométrie non commutative et interactions fondamaentales, Ph.D. Thesis. (math-ph/9903047) relax C.P. Martín and D. Sánchez-Ruiz, Phys. Rev. Lett. 83 (1999) 476. (hep-th/9903077) relax B. Pioline and A. Schwarz, JHEP 08 (1999) 021. (hep-th/9908019) relax G. ’t Hooft, Nucl. Phys. B153 (1979) 141; Commun. Math. Phys. 81 (1981) 267. relax P. van Baal, Commun. Math. Phys. 85 (1982) 529; J. Troost, Contant field strenghts on $`T^{2n}`$, Preprint VUB-TENA-99-04. (hep-th/9909187) relax J.I. Kapusta, Nucl. Phys. B148 (1979) 461. relax T. Toimela, Phys. Lett. B124 (1983) 407. relax P. Arnold and C. Zhai, Phys. Rev. D50 (1994) 7603 (hep-ph/9408276); Phys. Rev. D51 (1995) 1906 (hep-ph/9410360); C. Zhai and B. Kastening, Phys. Rev. D52 (1995) 7232 (hep-ph/9507380). relax E. Braaten and A. Nieto, Phys. Rev. D53 (1996) 3421. (hep-th/9510408) relax R. Parwani and C. Corianò, Nucl. Phys. B434 (1995) 56 (hep-ph/9409069); Phys. Rev. Lett. 73 (1994) 2398 (hep-ph/9405343). relax M.A. Vázquez-Mozo, Phys. Rev. D60 (1999) 106010. (hep-th/9905030) relax M. Bianchi, G. Pradisi and A. Sagnotti, Nucl. Phys. B376 (1991) 365; A. Sen and S. Sethi, Nucl. Phys. B499 (1997) 45 (hep-th/9703157); M. Bianchi, Nucl. Phys. B528 (1998) 73 (hep-th/9711201); E. Witten, JHEP 02 (1998) 006 (hep-th/9712028). relax D.B. Fairlie, P. 
Fletcher and C.K. Zachos, J. Math. Phys. 31 (1990) 1088. relax D. Bigatti and L. Susskind, Magnetic fields, branes and nocommutative geometry, Preprint SU-ITP-99-39. (hep-th/9908056) relax H. García-Compean, Nucl. Phys. B541 (1999) 651. (hep-th/9804188) relax A. Fotopoulos and T.R. Taylor, Phys. Rev. D59 (1999) 061701. (hep-th/9811224) relax C. Kim and S.-J. Rey, Thermodynamics of large-N super Yang-Mills and the AdS/CFT correspondence, Preprint SNUST-99-005 (hep-th/9905205). relax A. Nieto and M.H.G. Tytgat, Effective field theory approach to N=4 supersymmetric Yang-Mills at finite temperature, Preprint CERN-TH-99-153 (hep-th/9906147). |
no-problem/9912/nucl-th9912006.html | ar5iv | text | # Quadrupole pairing interaction and signature inversion
## I Introduction
The invariance of the intrinsic Hamiltonian with respect to the signature symmetry gives rise to the occurrence of two rotational bands in odd and odd-odd nuclei that differ in spin by $`1\hbar `$ . These bands are built upon the same intrinsic configuration but differ in their signature symmetry eigenvalue, $`r`$ . Often the signature exponent $`\alpha `$ \[$`I\equiv \alpha `$ mod 2\] is used to label these bands where $`\alpha `$ is related to $`r`$ via $`r=e^{-i\pi \alpha }`$.
In general, signature partner bands are not equivalent energetically. The energy difference is called signature splitting when measured in the rotating frame of reference as a function of frequency. The origin of the splitting is essentially due to the mixing of the $`\mathrm{\Omega }=1/2`$ state into the wave function. Since this component has a non-zero diagonal matrix element of the cranking operator $`\omega \widehat{j}_x`$ of opposite sign for each signature, one expects one signature to gain (or lose) energy with increasing frequency with respect to the other. For high-$`j`$ one-quasiparticle (1-qp) unique-parity configurations, the favored signature is obtained by the simple rule, $`\alpha _f=\frac{1}{2}(-1)^{j-1/2}`$ . However, in unique-parity 2-qp configurations in odd-odd nuclei, it may occur that the $`\alpha _f=\frac{1}{2}(-1)^{j_\nu -1/2}+\frac{1}{2}(-1)^{j_\pi -1/2}`$ band becomes energetically unfavored. Similar effects have been observed in some odd-A nuclei in 3-qp configurations . This phenomenon is called signature inversion .
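Applied to the configurations discussed below, the rule gives, for instance,
$$\alpha _f(\pi h_{11/2}\nu h_{11/2})=\frac{1}{2}(-1)^5+\frac{1}{2}(-1)^5=-1,\qquad \alpha _f(\pi h_{11/2}\nu i_{13/2})=\frac{1}{2}(-1)^5+\frac{1}{2}(-1)^6=0,$$
so that, in the absence of inversion, the odd-spin members should be energetically favored in the former case and the even-spin members in the latter.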
Signature inversion has been observed, for example, in odd-odd nuclei for the $`\pi h_{11/2}\nu h_{11/2}`$ configurations in the A$`\sim `$130 region and $`\pi h_{11/2}\nu i_{13/2}`$ configurations in the A$`\sim `$160 region. Different theoretical attempts have been presented to interpret the phenomenon. Triaxiality of the nuclear shape was suggested in Refs. . The general condition for signature inversion to take place was positive triaxial deformation, $`\gamma >0`$, within the Lund convention combined with a specific position of the Fermi surface, see also . One should point out, however, that (i) the $`\gamma `$-values are almost impossible to measure directly in experiment; (ii) the values of the $`\gamma `$ parameter necessary to account for the experimental data are not always supported by potential energy calculations, which in many cases predict almost axial shapes . Triaxiality alone seems, therefore, not always sufficient, calling for an alternative or supplementary mechanism.
Hamamoto and Matsuzaki analyzed the long-range, residual proton-neutron ($`pn`$) $`\chi Q_pQ_n`$ interaction. They concluded that the self-consistent value of the strength parameter $`\chi `$ is far too weak to account for the empirical data. Semmes and Ragnarsson considered a residual $`pn`$ contact force with spin-spin interaction, $`V_{pn}\propto \delta (𝒓_p-𝒓_n)(u_0+u_1\vec{\sigma }_n\cdot \vec{\sigma }_p)`$, in the framework of the particle-rotor model. Remarkably, using one set of parameters $`u_0`$, $`u_1`$, it appeared possible to reproduce very accurately some $`\pi h_{11/2}\nu h_{11/2}`$ bands in the A$`\sim `$130 region and $`\pi h_{11/2}\nu i_{13/2}`$ bands in rare-earth nuclei, without invoking triaxiality . However, to account for the $`\pi h_{9/2}\nu i_{13/2}`$ bands in rare-earth nuclei, a substantial reduction of the strength parameters $`u_0`$ and $`u_1`$ was necessary . Signature inversion is also obtained in the projected shell-model at axially symmetric shapes, resulting from the crossing of different bands with opposite signature dependence.
Another probe of the mechanism behind signature inversion is provided by the electromagnetic transition amplitudes. The ratios of the BM1/BE2 transition rates are very sensitive to the deformation parameters ($`\gamma `$ and $`\beta `$) and to whether the decay goes from favored states to unfavored or vice versa. In the present paper, however, we restrict ourselves entirely to the systematics of Routhians and spins. Calculations of BM1/BE2 ratios are beyond the scope of this work.
In Ref. we have published the results of deformation and pairing self-consistent Total Routhian Surface (TRS) calculations for <sup>120</sup>Cs. It has been demonstrated that the inclusion of quadrupole pairing correlations ($`QQ`$-pairing), number projection and a proper treatment of blocking into the TRS model made it possible to reproduce signature inversion without invoking the $`pn`$ interaction. In this paper, we demonstrate that our extended TRS model is able to reproduce rather accurately and systematically the signature inversion in odd-odd <sup>120-124</sup>Cs, <sup>124-128</sup>La, <sup>154-158</sup>Tb, <sup>156-160</sup>Ho and <sup>158-162</sup>Tm nuclei and that the mechanism causing signature inversion is triaxiality combined with the contribution of the $`(\lambda \mu )=(22)`$ component of the $`QQ`$-pairing force to the single-particle potential.
In the next section, we present a brief outline of our model. A general discussion on the role of the $`QQ`$-pairing force with respect to signature inversion is given in Sec. III. The discussion of empirical spin assignments and the results of self-consistent TRS calculations are presented in Sec. IV for the odd-odd <sup>120-124</sup>Cs, <sup>124-128</sup>La, <sup>154-158</sup>Tb, <sup>156-160</sup>Ho and <sup>158-162</sup>Tm nuclei. A summary is given in the last section.
## II The model
The TRS method is a macroscopic-microscopic approach in a uniformly-rotating body-fixed frame of reference . The total Routhian of a nucleus is calculated on a grid in deformation space, using the Strutinsky shell correction method . The model employs the deformed Woods-Saxon potential of Ref. and liquid-drop model of Ref. . The pairing energy is calculated using a separable interaction of seniority and doubly-stretched quadrupole type:
$$\overline{v}_{\alpha \beta \gamma \delta }^{(\lambda \mu )}=-G_{\lambda \mu }g_{\alpha \overline{\beta }}^{(\lambda \mu )}g_{\gamma \overline{\delta }}^{(\lambda \mu )},$$
(1)
where
$$g_{\alpha \overline{\beta }}^{(\lambda \mu )}=\{\begin{array}{cc}\delta _{\alpha \overline{\beta }},\hfill & \lambda =0,\mu =0\hfill \\ \langle \alpha |\widehat{Q}_\mu ^{\prime \prime }|\overline{\beta }\rangle ,\hfill & \lambda =2,\mu =0,1,2.\hfill \end{array}$$
(2)
The above expression employs the good-signature basis ; $`\overline{\alpha }=\widehat{T}\alpha `$ stands for the time-reversed state and $`\overline{|r=\pm i\rangle }=\pm |r=\mp i\rangle `$. To avoid the sudden collapse of pairing correlations, we use the Lipkin-Nogami approximate number projection . It is important to stress that the model is free of adjustable strength parameters. The monopole pairing strength, $`G_{00}`$, is determined by the average gap method of Ref. and the $`QQ`$-pairing strengths, $`G_{2\mu }`$, are calculated to restore the Galilean invariance broken by the seniority pairing force, see the prescription in Ref. . For example, in the A$`\sim `$130 mass region, where the 56(66) lowest proton (neutron) single-particle levels are used for the pairing calculation, the pairing strength parameters are equal to $`G_{00}\approx 165(145)`$ keV and $`G_{2\mu }\approx 4.4(3.3)`$ keV/fm<sup>4</sup> for protons (neutrons). In the A$`\sim `$160 mass region, single-particle levels between 10-70(15-90) were used for the pairing calculations for protons (neutrons) and typical values of the strength parameters are $`G_{00}\approx 140(105)`$ keV and $`G_{2\mu }\approx 3.0(1.6)`$ keV/fm<sup>4</sup> for protons (neutrons). Let us recall that the use of the double-stretched generators for the $`QQ`$-pairing force results in, to a large extent, isotropic and shape independent values of $`G_{2\mu }`$ .
The resulting cranked-Lipkin-Nogami (CLN) equation takes the form of the well known Hartree-Fock-Bogolyubov-like (HFB) equation. In the TRS model, the CLN equation is solved self-consistently at each frequency and each grid point in deformation space which includes quadrupole $`\beta _2,\gamma `$ and hexadecapole $`\beta _4`$ shapes (pairing self-consistency). Finally, the equilibrium deformations are calculated by minimizing the total Routhian with respect to the shape parameters (shape self-consistency). For further details concerning the formalism, we refer the reader to Ref. .
## III The influence of the quadrupole pairing force on signature inversion
In our previous works , it has been shown that $`QQ`$-pairing, in particular its time-odd $`\mathrm{\Delta }_{21}`$ component, plays an important role in the alignment of quasi-particles. It strongly influences the moment of inertia and can partly account for the twinning of the spectra of odd and even super-deformed nuclei in the A$`\sim `$190 mass region. The two-body pairing interaction also enters the single-particle channel via the $`\mathrm{\Gamma }`$ potential:
$$\mathrm{\Gamma }_{\alpha \beta }^{(\lambda \mu )}=-G_{\lambda \mu }\sum _{\gamma \delta >0}g_{\alpha \overline{\gamma }}^{(\lambda \mu )}g_{\beta \overline{\delta }}^{(\lambda \mu )}\rho _{\delta \gamma },$$
(3)
where $`\rho `$ is the density matrix. The single-particle contribution coming from the separable pairing force is usually small and in many applications simply disregarded. However, in the Lipkin-Nogami approach, it has to be taken into account for the consistency of the method. Interestingly, in spite of its weakness, the $`\mathrm{\Gamma }^{(22)}`$ field plays a rather important role for the signature inversion as will be demonstrated in the following.
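As a simple illustration of how Eq. (3) acts (phase conventions for the time-reversed states are suppressed), in the monopole channel the form factors reduce to Kronecker deltas and one obtains
$$\mathrm{\Gamma }_{\alpha \beta }^{(00)}=-G_{00}\rho _{\overline{\beta }\overline{\alpha }},$$
i.e. a contribution that simply follows the density matrix, whereas for $`\lambda =2`$ each orbital enters weighted by its doubly-stretched quadrupole matrix elements, which makes the $`\mathrm{\Gamma }^{(2\mu )}`$ fields much more sensitive to the particular blocked configuration.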
Let us consider an odd-odd nucleus with an unpaired proton and neutron occupying the high-$`j`$, unique-parity orbits. The favored signature is expected to be $`\alpha _f=\frac{1}{2}(-1)^{j_\pi -1/2}+\frac{1}{2}(-1)^{j_\nu -1/2}`$. However, when one of the particles is low in the shell and the other occupies orbits around the middle or above the middle of the shell, signature inversion can occur. In the A$`\sim `$130 region, e.g., two signatures, $`\alpha _{f(uf)}=(\alpha _\pi =-1/2)\otimes (\alpha _\nu =\mp 1/2)=-1(0)`$, of the $`\pi h_{11/2}\nu h_{11/2}`$ configuration are seen in experiment. For the protons, only the signature-favored $`\alpha _\pi =-1/2`$ orbit is observed since the signature splitting of $`\pi h_{11/2}`$ low-$`\mathrm{\Omega }`$ levels is typically several hundred keV. For instance, in <sup>120</sup>Cs, it is calculated to be more than 0.7 MeV at $`\hbar \omega =0.2`$ MeV. The role of this low-$`\mathrm{\Omega }`$, high-$`j`$ orbit is to induce a deformation close to axial symmetry with a slight tendency towards positive $`\gamma `$ values . Otherwise it acts as a spectator and the signature splitting is entirely due to the neutrons.
Fig. 1 shows the influence of each $`Q_{2\mu }`$ ($`\mu =0,1,2`$) component of the $`QQ`$-pairing interaction on the signature splitting $`\mathrm{\Delta }e^{\prime }\equiv e^{(uf,\alpha =0)}-e^{(f,\alpha =-1)}`$, between the unfavored $`e^{(uf,\alpha =0)}`$ and favored $`e^{(f,\alpha =-1)}`$ Routhians, as a function of the quadrupole pairing strength $`G_{2\mu }`$ relative to the value $`G_{2\mu }^{sc}`$ determined according to Ref. . The calculations were performed for <sup>120</sup>Cs at fixed frequency and axial shape in order to address the effects of the $`QQ`$-pairing force alone. To find the origin of the signature inversion, we performed calculations with and without the mean-field contributions $`\mathrm{\Gamma }^{(2\mu )}`$. The figure nicely demonstrates the role of the $`\mathrm{\Gamma }^{(22)}`$ potential: it is the only component of ($`\lambda \mu `$) that creates signature inversion of the order of a few tens of keV already at axial shape.
To gain a better understanding of the role played by the mean-field potential $`\mathrm{\Gamma }^{(2\mu )}`$ as a function of neutron number, Fig. 2 shows the contribution to the signature splitting stemming from the $`\mathrm{\Gamma }`$-potential $`\mathrm{\Delta }e_\mathrm{\Gamma }=e_\mathrm{\Gamma }^{(uf,\alpha =0)}-e_\mathrm{\Gamma }^{(f,\alpha =-1)}`$, where:
$$e_\mathrm{\Gamma }^{(\alpha )}=\frac{1}{2}𝑻𝒓(\mathrm{\Gamma }^{(\alpha )}\rho ^{(\alpha )})$$
(4)
as a function of $`N`$ in the A$``$130 mass region (superscript $`(\alpha )`$ refers to the signature of the blocked $`h_{11/2}`$ quasi-particles). Neutron number N=59 corresponds to the occupation of the $`\mathrm{\Omega }=1/2`$ orbit, while at N=75 the $`\mathrm{\Omega }=9/2`$ orbit of the $`h_{11/2}`$ subshell becomes occupied. Note again, that axial, fixed deformation was chosen to address the role of the $`QQ`$-pairing force effects alone.
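In the same schematic spirit, the quantity of Eq. (4) and the splitting $`\mathrm{\Delta }e_\mathrm{\Gamma }`$ are two lines of code once the blocked density matrices and fields of each signature are available (again placeholder matrices, not the actual TRS output):

```python
import numpy as np

def e_gamma(Gamma, rho):
    """Eq. (4): e_Gamma = 1/2 Tr(Gamma rho) for one blocked signature."""
    return 0.5 * np.trace(Gamma @ rho)

def delta_e_gamma(Gamma_uf, rho_uf, Gamma_f, rho_f):
    """Delta e_Gamma = e_Gamma(unfavored) - e_Gamma(favored)."""
    return e_gamma(Gamma_uf, rho_uf) - e_gamma(Gamma_f, rho_f)
```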
Obviously, the contribution $`\mathrm{\Delta }e_\mathrm{\Gamma }`$ varies with the position of the Fermi energy. There is a clear tendency for all components of the pairing force to favor the unfavored signature. At the bottom of the $`\nu h_{11/2}`$ subshell (corresponding to 59$``$N$``$61) the $`\mathrm{\Gamma }^{(20)}`$, $`\mathrm{\Gamma }^{(21)}`$ and even $`\mathrm{\Gamma }^{(00)}`$ show trends (negative contributions) towards the anomalous signature splitting. In these cases, however, the signature splitting caused by the Coriolis force is considerably larger, implying that signature inversion cannot occur. Around the middle of the shell (corresponding to 63$``$N$``$69) where the signature splitting induced by the Coriolis force is rather small or even absent at low rotational frequencies, the $`\mathrm{\Gamma }^{(22)}`$ potential favors the inversion and may even compete with the Coriolis force. Note, that this corresponds to the neutron numbers where indeed the signature inversion has been observed experimentally, see e.g. Ref. and references therein.
The signature splitting, $`\mathrm{\Delta }e^{}`$, as a function of the neutron shell filling with and without $`QQ`$-pairing is shown in Fig. 3. The calculations were done at fixed triaxial shape of $`\gamma =15^{}`$ and at $`\mathrm{}\omega =0.2`$ MeV. Both sets of the calculations result in signature inversion for neutron numbers $`63`$. However, a clear increase in the anomalous splitting of the order $``$40 keV is caused by the $`QQ`$-pairing force. The size of the signature inversion as a function of the triaxiality parameter $`\gamma `$ for the case of <sup>120</sup>Cs can be studied in Fig. 4. Note, that the contribution due to the $`QQ`$-pairing is almost independent of $`\gamma `$ due to double stretching . In a previous work on <sup>120</sup>Cs, the $`\gamma `$ deformation obtained from the earlier TRS calculations (including only seniority pairing treated non-self-consistently and no number projection) could not account for the observed signature splitting. In the extended TRS calculations, the combined effects of triaxiality and $`QQ`$-pairing force reproduce the data in this nucleus very well .
For the A$``$130 and A$``$160 nuclei, the anomalous signature splitting happens at low rotational frequencies where the Coriolis mixing is weak. However, with increasing rotational frequency, the Coriolis force dominates always and restores signature splitting to normal order. The critical frequency, $`\mathrm{}\omega _c`$, at which this takes place is another characteristic quantity of signature inversion. Fig. 5 shows a significant increase in $`\mathrm{}\omega _c`$ due to the presence of $`QQ`$-pairing. Finally, Fig. 6 shows the effect of the $`QQ`$-pairing force on the total angular momentum $`I_x`$ as a function of the shell filling. The calculations were performed at $`\mathrm{}\omega =0.2`$ MeV. In general, $`QQ`$-pairing slightly increases (decreases) $`I_x`$ for the unfavored (favored) signature, respectively. Moreover, the effect is somewhat larger for the unfavored signature branch. Although globally the effect is rather modest, it clearly reduces the normal signature splitting or enhances anomalous signature splitting, depending on neutron number.
Let us make the following few remarks summarizing the discussion of this section. It seems rather well documented that signature inversion (in axially deformed nuclei) in our calculation is due to the $`\mathrm{\Gamma }^{(22)}`$ potential. The effect is not accidentally related to the particular choice of pairing strength parameters. Indeed, a change of $`G_{00}`$ and/or $`G_{2\mu }`$ (see Fig. 1) by $`\pm `$10% only weakly affects the calculated value of $`\mathrm{\Delta }e^{}`$. Our calculations also do not show that the effect is related to number-projection. Analogous calculations performed within the BCS approximation led us to similar conclusions. However, it is of crucial importance to perform rigorous blocking for each signature separately. Indeed, approximating the odd-N system by a one quasiparticle state created on top of the odd-N vacuum does not result in any inversion. Blocking of signature partners at non-zero rotational frequency (where signature splitting due to time-reversal symmetry breaking sets in) leads to, in general, small differences in the density matrices $`\rho ^{(\alpha _f)}`$ and $`\rho ^{(\alpha _{uf})}`$ which in turn result in differences between $`\mathrm{\Gamma }_{\alpha _f}^{(\lambda \mu )}`$ and $`\mathrm{\Gamma }_{\alpha _{uf}}^{(\lambda \mu )}`$ potentials for favored and unfavored signature bands, respectively. However, we were not able to recognize a simple mechanism causing the small differences in the density matrices to add up “coherently” and eventually form such a regular pattern as depicted in Figs. 1-6.
## IV The TRS results: Comparison with experiment for the A$``$130 and A$``$160 nuclei
In this section, we present the results from the pairing and deformation self-consistent TRS model described in Sec. II for the odd-odd <sup>120-124</sup>Cs, <sup>124-128</sup>La, <sup>154-158</sup>Tb, <sup>156-160</sup>Ho and <sup>158-162</sup>Tm nuclei. In order to compare theory and experiment, the correct spin assignments of the bands are crucial. Indeed, by changing the spin values by one unit, the signature splitting becomes inverted and a totally different pattern emerges. Particularly in odd-odd nuclei, due to the complexity of the low-spin spectra, spins are not always directly determined in experiment. This is the case for many of the rotational bands associated with the $`\pi h_{11/2}\nu h_{11/2}`$ configuration in the $`A130`$ region and $`\pi h_{11/2}\nu i_{13/2}`$ configurations in the $`A160`$ region. Therefore, we start our discussion by revisiting current spin assignments in these bands.
As mentioned above, the spin assignments of many rotational bands associated with the $`\pi h_{11/2}\nu h_{11/2}`$ configuration in the $`A130`$ region and with the $`\pi h_{11/2}\nu i_{13/2}`$ in the $`A160`$ region are tentative and based mainly on systematics. However, if the underlying assumptions or ”the first guesses” are false, incorrect assignment may spread over many nuclei. Indeed, the spin assignments in these nuclei are very controversial. In <sup>128</sup>La, for example, the recent experiment firmly established the experimental spins via in-beam and $`\beta `$-decay measurements. The previous spin assignment of Ref. was lower by $`3\mathrm{}`$!
Our calculation is in good agreement with the experimental data in <sup>128</sup>La based on the recent spin assignment , see Fig. 8. Also for <sup>120</sup>Cs , very good agreement between calculations and experiment is achieved, see Fig. 8 and . Therefore, we choose these two nuclei as the reference nuclei to verify spin assignments in the $`\pi h_{11/2}\nu h_{11/2}`$ bands in the Cs and La isotopes. We further assume in our analysis that nuclear moments of inertia $`𝒥^{(1)}`$$`I_x/\omega (I)`$ smoothly decrease with increasing neutron number. This assumption is supported by comparison with neighboring even-even nuclei and by deformation systematics, see also Fig. 10. Based on the systematic analysis and at the same time compared with our cranking calculations, new spin values are suggested as shown in Table I.
The experimental $`𝒥^{(1)}`$ moments of inertia for the Cs and La isotopes corresponding to our new assignments are shown in Fig. 7. They decrease smoothly with increasing Fermi energy. The sensitivity of the method is shown for the case of <sup>130,132</sup>La. As is seen in Fig. 7, changing the spins by $`\pm 1\mathrm{}`$ introduces rather sharp kinks in the sequence of the moments of inertia in these nuclei, imposing rather strong restrictions on the relative spin values between the different isotopes. This analysis cannot, of course, replace experimental verification, but at present it is the most reliable approach in the absence of accurate data.
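The smoothness test can be stated in a few lines; the numbers below are invented placeholders standing in for the measured alignments at a common rotational frequency, and serve only to show how a single misassigned band produces a kink:

```python
import numpy as np

hbar_omega = 0.3                                        # MeV, common reference frequency
i_x = np.array([12.0, 11.4, 10.9, 10.5, 10.2])          # illustrative aligned spins (hbar)
j1 = i_x / hbar_omega                                   # J^(1) ~ I_x/omega, smooth trend

i_x_shifted = i_x.copy()
i_x_shifted[2] += 1.0                                   # one isotope assigned 1 hbar too high
j1_shifted = i_x_shifted / hbar_omega

print(np.round(j1, 1))          # smooth sequence along the chain
print(np.round(j1_shifted, 1))  # sharp kink at the third isotope
```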
We have also investigated the spin assignments for the $`\pi h_{11/2}\nu i_{13/2}`$ bands of <sup>154,156</sup>Tb, <sup>156-160</sup>Ho and <sup>158-162</sup>Tm. For this mass region, our calculations agree well with the bulk part of the data as well as the recent analysis. Systematics of the empirical moments of inertia ($`𝒥^{(1)}`$ is expected to increase with neutron number in this mass region) and spins suggest that the experimental assignments of <sup>158</sup>Ho and <sup>156</sup>Tb are too low. Such assignments can in general not account for the initial alignment that one expects from low-$`\mathrm{\Omega }`$, high-$`j`$ orbits. With the guidance of the available spin assignments and our cranking calculations, we suggest the spin values presented in Table I. The comparison between experimental (with our spin assignments) and theoretical values of $`I_x`$ versus $`\mathrm{}\omega `$ are shown in Figs. 8 and 9 for the A$``$130 and A$``$160 mass regions, respectively.
In Refs. , the spin assignments for the A$``$130 and A$``$160 nuclei were investigated by energy systematics. Our assignments are in general consistent with their results. However, Liu et al give three different sets of assignments for the Cs isotopes depending on the choice of the reference nucleus. One of the sets coincides with our result. Systematic changes in the assigned spin values (e.g. by 2$`\mathrm{}`$) do not change the (energy) systematics . We have chosen <sup>128</sup>La and the lighter isotope <sup>120</sup>Cs, guided by theoretical results, see Fig. 8. This assignment is also consistent with the measurements for <sup>130</sup>Cs. The nice agreement between the calculation and experiment (in particular for <sup>128</sup>La) gives some confidence in the present method, see Fig. 8. In particular, the low-spin part agrees rather well with the experimental data. The increase in angular momentum with frequency for the favored signature is in general too strong when compared to experiment.
Fig. 10 depicts the equilibrium deformation parameters $`\beta _2`$ and $`\gamma `$ obtained from the TRS calculations. For the $`A130`$ nuclei, in general, the $`\beta _2`$ and $`\gamma `$ deformations decrease with increasing neutron number. However, the nuclear shapes of some $`A130`$ nuclei are usually rather soft, which may lead to some inaccuracy in determining the equilibrium deformations and may also affect other observables, in particular the $`I_x`$ values. The calculated deformations of the two signatures are rather close, but one should be aware that this may not always be the case, see e.g. <sup>122</sup>Cs (Fig. 10).
For the $`A160`$ isotopes, the $`\beta _2`$ ($`\gamma `$) deformation value is increasing (decreasing) with neutron number, $`N`$. This is consistent with the increase of the moments of inertia with $`N`$. The $`\gamma `$ deformations of the $`A160`$ nuclei are rather small. With increasing $`N`$, the $`\gamma `$ deformations are predicted to change from small positive to small negative values. For most cases, the calculations yield a decrease of the deformation with increasing rotational frequency.
The quality of the present calculations is nicely demonstrated in Fig. 11 and Fig. 12, where we compare experimental and calculated Routhians, $`e^{}`$. Especially for the Cs and La isotopes, the agreement is in general within 50 keV. For the case of the heavier Rare Earth nuclei, at relatively high frequency $`\mathrm{}\omega 0.25`$ MeV, the calculated Routhians do not agree well with the experimental data. The deviations between experiment and theory can be linked to the fact that the unblocked neutron $`i_{13/2}`$ crossings occur too early in the calculations for some cases.
We also compare the difference of the Routhians $`\mathrm{\Delta }e^{}=e_f^{}-e_u^{}`$ obtained in the TRS-calculations and experiment, Fig. 13 and Fig. 14. <sup>*</sup><sup>*</sup>*Note that here the sign of $`\mathrm{\Delta }e^{}`$ is opposite to the one chosen in previous figures. In order to calculate the experimental values of $`\mathrm{\Delta }e^{}(\mathrm{}\omega )`$ we performed a linear interpolation of the unfavored Routhians $`e_u^{}`$ at the frequency values of the favored signature. Again, the agreement for the A=120-130 region is quite good. On the other hand, for the Rare Earth nuclei, the inversion is calculated to continue to somewhat higher frequencies and also to be more pronounced than in experiment. However, given the rather modest values of the inversion, in general $`<30`$ keV, one can be quite content with the results. Above the frequency of $`\mathrm{}\omega =0.5`$ MeV (not seen in Fig. 14), the inversion has disappeared for all cases considered here. The TRS-calculations clearly show that the mean-field model can account for the signature inversion phenomenon, once shape-polarization and pairing effects are treated self-consistently. The anomalous signature splitting is obtained already at almost axial shapes, due to the contribution of the $`(\lambda \mu )=(22)`$ $`QQ`$-pairing interaction to the single-particle potential. The overall agreement between theory and experiment appears satisfactory.
## V Summary
We have demonstrated that shape and pairing self-consistent mean-field calculations can account for the rotational band structure of the unique parity high-$`j`$ orbits in the odd-odd A$``$130 and A$``$160 nuclei. In particular, the signature inversion phenomenon is reproduced. The agreement between theory and the data is satisfactory provided that some of the empirical spin assignments are revised. New spins are suggested based on a systematic analysis of the $`𝒥^{(1)}`$ moments of inertia which rather firmly establishes relative spins along each isotopic chain considered here. The absolute spin values are finally determined partly guided (see Sec. IV) by theoretical values.
The experimental signature-splittings are quite well reproduced although the TRS calculations yield rather modest values of the triaxiality parameter $`\gamma `$ ($`\gamma <5^{}`$) for most cases. This can be understood in terms of the additional enhancement of the anomalous signature splitting caused by the mean-field contribution of the $`(\lambda \mu )=(22)`$ component of the quadrupole pairing interaction, see Sec. III.
Some deficiencies of the model are clearly visible particularly at high rotational frequency and for $`\gamma `$-soft nuclei. Other effects, such as the coupling between rotation and $`\gamma `$-vibration, may need to be taken into account. Also more elaborate forces, in particular accounting for the valence neutron-proton interaction , might further improve the agreement to the data. For a full understanding of this intriguing phenomenon, the calculations of electromagnetic transitions rates are important. This, however, is beyond the scope of our work.
This work was supported by the Swedish Institute (SI), Swedish Natural Science Research Council (NFR), the Göran Gustafsson Foundation and the Polish Committee for Scientific Research (KBN) under Contract No. 2 P03B 040 14.
### Introduction
Gravitinos are created in the early Universe by thermal collisions , and some time ago it was pointed out that they will also be created non-thermally, starting from the vacuum fluctuation that exists well before horizon exit during inflation. To see whether creation from the vacuum is significant, one needs equations which describe the evolution of the gravitino mode functions. Such equations have recently been presented , for the case that only one chiral superfield is relevant. Using them, the authors find that the number density of gravitinos created just after inflation is of order $`10^{-2}M^3`$, where $`M`$ is the mass of the inflaton after inflation, and they have noted that these gravitinos may be more abundant after reheating than those created from thermal collisions. A similar result has been obtained for hybrid inflation, which involves at least two fields, assuming that in this context the gravitino becomes the goldstino of global supersymmetry (see also ).
In this note, we argue that this is unlikely to be the end of the story . Rather, creation is likely to continue, maintaining about the same number density, until either the ‘intermediate epoch’ when the Hubble parameter falls below the gravitino mass, or the reheat epoch if that occurs earlier. We begin by verifying that such late-time creation indeed occurs if only a single chiral superfield is relevant, using the description of the helicity $`1/2`$ gravitino provided recently in . Then we consider the case where other fields are relevant: a different field to break supersymmetry in the vacuum, a third field to allow hybrid inflation, and the fields corresponding to particles created just after inflation by preheating. In all these cases, we argue that late-time creation will continue until the intermediate epoch, unless it is terminated by preheating. Finally, we consider the cosmological consequences of late-time creation, which are rather dramatic in the most popular models of inflation.
### Describing the gravitino in the early Universe
To calculate the abundance of gravitinos created from the vacuum, one needs equations describing the evolution of the mode functions for the helicity $`1/2`$ and $`3/2`$ gravitino states, as seen by a comoving observer in the expanding Universe.
For the helicity $`3/2`$ state, one can safely proceed by using the Rarita-Schwinger equation with appropriate constraints, evaluated in Robertson-Walker spacetime, and with time-dependent mass $`m_{3/2}(t)`$ given by the usual $`N=1`$ supergravity formula. (We denote the vacuum value of $`m_{3/2}(t)`$ by $`m_{3/2}`$ without an argument.) A suitably-defined mode function satisfies the spin $`1/2`$ equation, with the same effective mass $`m_{3/2}(t)`$ that appears in the Rarita-Schwinger equation . As $`m_{3/2}(t)`$ can hardly be bigger than the Hubble parameter $`H`$ if it is to vary non-adiabatically,<sup>1</sup><sup>1</sup>1The $`N=1`$ supergravity expression for $`H^2`$ contains a term $`m_{3/2}^2(t)`$, which can hardly vary rapidly if it is canceled to high accuracy by the rest of the expression. Of course, a precise cancellation does occur at the present time, for some unknown reason (the cosmological constant problem). the abundance of helicity $`3/2`$ gravitinos created from the vacuum will be no bigger than that of modulini created from the same mechanism, which is negligible compared with the abundance of gravitinos from thermal collisions.
For the helicity $`1/2`$ state, the same procedure may in general require modification, because the would-be goldstino (existing in the limit of global supersymmetry) may be a time-dependent mixture of the spin $`1/2`$ fields. To avoid this problem, the case where only a single chiral supermultiplet is relevant has been studied . It is found that a suitably-defined mode function again satisfies the spin $`1/2`$ equation, but with a different effective mass $`\stackrel{~}{m}(t)`$.
In a general model of inflation, the idealization of a single chiral supermultiplet will not be adequate. In hybrid inflation models one needs, in addition to the slowly rolling inflaton field, one or more additional fields to provide the constant part of the potential. Even if the inflaton field is the only one relevant for inflation, a second field is generally needed to break supersymmetry in the vacuum. Even so, the possibility that the inflaton field is the only relevant one is not excluded. Also, the field responsible for supersymmetry breaking may at some stage oscillate and dominate the energy density of the Universe, even if it is not responsible for inflation. Let us proceed on the assumption that only a single field is involved in gravitino creation, so that the equations presented in can be used.
### Gravitino creation from the oscillation of the supersymmetry-breaking field
The effective mass $`\stackrel{~}{m}(t)`$ of the helicity $`1/2`$ mode function is given by <sup>2</sup><sup>2</sup>2In Eq. (3.23) of , the signs of the second and third terms should be reversed , after which that equation becomes identical with Eq. (1) upon making the identification $`a^{-1}\mathrm{\Omega }_\mathrm{L}=\stackrel{~}{m}(t)`$. The sign of the effective mass is not physically significant, and in particular it will not affect the gravitino abundance. (Equivalently, the sign of the last term of Eq. (12) is not physically significant, since it corresponds to replacing a mode function by its complex conjugate.) We have checked that Eq. (1) is equivalent to the more complicated expressions given in . In both and , it is assumed that the field is canonically normalized, but the results given there are valid also for arbitrary normalization . In fact, for a single real scalar field, one can always transform at least locally from arbitrary normalization to canonical normalization. In the supergravity model, $`\varphi `$ is the real part of a complex field, and the corresponding transformation of the complex field is at least locally holomorphic, leading to an equivalent supergravity theory.
$$\stackrel{~}{m}(t)=m_{3/2}(t)\frac{3}{2}m_{3/2}(t)(1+A_1)\frac{3}{2}HA_2\mu ,$$
(1)
where
$`A_1`$ $``$ $`{\displaystyle \frac{P-3M_\mathrm{P}^2m_{3/2}^2(t)}{\rho +3M_\mathrm{P}^2m_{3/2}^2(t)}}`$ (2)
$`A_2`$ $``$ $`{\displaystyle \frac{2}{3}}{\displaystyle \frac{3M_\mathrm{P}\dot{m}_{3/2}(t)}{\rho +3M_\mathrm{P}^2m_{3/2}^2(t)}}`$ (3)
$`A`$ $``$ $`A_1+iA_2=e^{i\chi }`$ (4)
$`\mu `$ $``$ $`{\displaystyle \frac{1}{2}}\dot{\chi }.`$ (5)
In this expression, $`M_\mathrm{P}=2.4\times 10^{18}\text{GeV}`$ is the Planck scale.
The energy density $`\rho `$ and the pressure $`P`$ are supposed to be dominated by a real scalar field $`\varphi `$. Taking its kinetic term to be canonical,
$`\rho `$ $`=`$ $`V+{\displaystyle \frac{1}{2}}\dot{\varphi }^2`$ (6)
$`P`$ $`=`$ $`-V+{\displaystyle \frac{1}{2}}\dot{\varphi }^2,`$ (7)
where $`V(\varphi )`$ is the potential. The continuity equation is $`\dot{\rho }=-3H(\rho +P)`$, equivalent to the field equation
$$\ddot{\varphi }+3H\dot{\varphi }+V^{}=0$$
(8)
where $`H`$ is the Hubble parameter, related to the energy density by $`3M_\mathrm{P}^2H^2=\rho `$.
For future reference, let us note that the equations for $`A_1`$, $`A_2`$ and $`\mu `$ may be written in terms of $`H`$, $`wP/\rho `$, and $`m_{3/2}(t)`$, with the time-derivative of this last quantity eliminated using Eq. (3). One finds
$`A_1`$ $``$ $`{\displaystyle \frac{wH^2-m_{3/2}^2(t)}{H^2+m_{3/2}^2(t)}}`$ (9)
$`A_2`$ $``$ $`{\displaystyle \frac{\left[1-w^2+2\left(1+w\right)m_{3/2}^2(t)/H^2\right]^{\frac{1}{2}}}{1+m_{3/2}^2(t)/H^2}}`$ (10)
$`\mu `$ $``$ $`{\displaystyle \frac{3}{2}}m_{3/2}(t)(1+A_1){\displaystyle \frac{1}{2}}{\displaystyle \frac{\dot{w}3H(1+w)(wA_1)}{A_2(1+m_{3/2}^2(t)/H^2)}}.`$ (11)
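Since $`A=A_1+iA_2`$ has unit modulus, Eqs. (9) and (10) can be checked numerically for arbitrary trial values of $`w`$ and $`m_{3/2}(t)/H`$; the following is our own consistency check, not a result from the papers cited:

```python
import numpy as np

def A1(w, r):
    """Eq. (9), with r = m_{3/2}(t)/H."""
    return (w - r**2) / (1.0 + r**2)

def A2(w, r):
    """Eq. (10)."""
    return np.sqrt(1.0 - w**2 + 2.0 * (1.0 + w) * r**2) / (1.0 + r**2)

for w in (-0.9, 0.0, 0.5, 1.0):
    for r in (0.0, 0.3, 2.0):
        print(w, r, A1(w, r)**2 + A2(w, r)**2)   # -> 1.0 in every case
```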
For momentum $`k/a`$, a suitably defined helicity $`1/2`$ mode function satisfies the equation
$$u^{\prime \prime }+\left(k^2+(a\stackrel{~}{m})^2+i(a\stackrel{~}{m})^{}\right)u=0,$$
(12)
where the prime denotes differentiation with respect to conformal time $`\text{d}\eta =\text{d}t/a`$, and $`a`$ is the scale factor of the Universe such that $`H=\dot{a}/a`$. To calculate the gravitino abundance, one starts at early times with the negative-frequency solution $`ue^{i\omega \eta }`$, corresponding to the vacuum. At late times there is a linear combination of positive and negative frequency modes, and the occupation number is the coefficient of the positive frequency mode. Significant production occurs with momentum $`k/a`$ if there is appreciable violation of a weak adiabaticity condition
$$|\overline{(a\stackrel{~}{m})^{}}|\omega ^2k^2+(a\stackrel{~}{m})^2,$$
(13)
where the average is over a conformal time interval $`\omega ^1`$. In practice $`k_{\mathrm{max}}`$, the biggest $`k`$ for which significant creation occurs, is simply the biggest value achieved by $`a\stackrel{~}{m}`$, within the regime where $`\stackrel{~}{m}`$ varies non-adiabatically ($`|\dot{\stackrel{~}{m}}|\text{ }>\stackrel{~}{m}^2`$).
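Given numerical background solutions, $`k_{\mathrm{max}}`$ can be located directly from this criterion; the sketch below assumes the arrays of conformal time, scale factor and effective mass are already available (they are placeholders here), and simply returns the largest value of $`|a\stackrel{~}{m}|`$ attained while $`\stackrel{~}{m}`$ varies non-adiabatically:

```python
import numpy as np

def k_max_estimate(eta, a, m_tilde, factor=1.0):
    """Rough k_max: max |a*m_tilde| where |dm/dt| >~ factor * m^2.

    eta, a, m_tilde : arrays of conformal time, scale factor, effective mass
    """
    am = a * m_tilde
    m_dot = np.gradient(m_tilde, eta) / a        # dm/dt = (dm/deta)/a
    nonadiabatic = np.abs(m_dot) >= factor * m_tilde**2
    return np.max(np.abs(am[nonadiabatic])) if nonadiabatic.any() else 0.0
```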
Let us follow the evolution of $`\stackrel{~}{m}`$, to estimate $`k_{\mathrm{max}}`$. During inflation, $`\stackrel{~}{m}\simeq m_{3/2}(t)`$ is much less than $`H`$ in magnitude. After inflation, $`\varphi `$ oscillates about its vacuum value, which we set equal to zero, so that $`\varphi \simeq \varphi _0(t)\mathrm{sin}Mt`$ where $`M\gg H`$ is the inflaton mass. We now have $`\stackrel{~}{m}\mu \frac{1}{2}\dot{\chi }`$, with
$`\mathrm{cos}\chi `$ $``$ $`{\displaystyle \frac{\frac{1}{2}\dot{\varphi }^2-V-3M_\mathrm{P}^2m_{3/2}^2(t)}{\frac{1}{2}\dot{\varphi }^2+V+3M_\mathrm{P}^2m_{3/2}^2(t)}}`$ (14)
$``$ $`{\displaystyle \frac{\mathrm{cos}2Mt-m_{3/2}^2(t)/H^2}{1+m_{3/2}^2(t)/H^2}}.`$ (15)
When $`\dot{\varphi }=0`$, $`\mathrm{cos}\chi =-1`$. The crucial point is that the maximum value of $`\mathrm{cos}\chi `$ is less than $`+1`$, because $`\mathrm{cos}\chi =1`$ would correspond to $`V=-3M_\mathrm{P}^2m_{3/2}^2(t)<0`$. It follows that $`\chi `$ oscillates in some range $`ϵ<\chi <2\pi -ϵ`$. With the reasonable assumption $`m_{3/2}^2(t)m_{3/2}^2`$, $`ϵ\sim m_{3/2}/H\ll 1`$. While $`\chi `$ is rising, $`\stackrel{~}{m}\simeq -M`$, and while it is falling, $`\stackrel{~}{m}\simeq +M`$. Because of the sign switches, the evolution of the mode function is non-adiabatic, and gravitino creation continues, with ever-increasing comoving momentum $`k\sim aM`$.
When $`H`$ becomes of order $`m_{3/2}`$, the sudden switches in sign of $`\stackrel{~}{m}`$ give way to a complicated variation, still on the timescale $`M^{-1}`$ and with typical value of order $`M`$. (If desired, this variation is conveniently calculated from Eqs. (9)–(11), with $`w=\mathrm{cos}2Mt`$.) Creation finally ceases only when $`H`$ falls below $`m_{3/2}`$, and according to Eqs. (1) and (11) $`\stackrel{~}{m}`$ falls smoothly to the true value $`m_{3/2}`$, recovering the flat spacetime description of the gravitino.
We studied this example primarily to provide an existence proof that late-time gravitino creation occurs in at least one case. This is the case that the oscillating field responsible for the energy density of the Universe is the same as the field breaking supersymmetry in the vacuum. Before moving on, we note that this case may be realized in Nature if supersymmetry breaking is gravity mediated, though the mass $`M`$ would then probably be too small for late-time creation to be significant. Indeed, superstring theory suggests the existence of several fields (moduli) with only gravitational strength interactions, and mass very roughly of order $`m_{3/2}`$. (Kähler stabilization of the moduli can give masses several orders of magnitude bigger, for instance a dilaton mass of order $`10^6\text{GeV}`$ is found in .) At least some of the moduli may well be displaced from the vacuum at the end of inflation (and one of them might be the inflaton). Any displaced modulus at first tracks the changing vacuum value, but it starts to oscillate at the intermediate epoch , and then dominates the energy density. Finally, one of the moduli is usually supposed to be responsible for supersymmetry breaking in the gravity-mediated case. The case that we have studied will be realized if this modulus is also the inflaton, and is the only one that oscillates. As far as the oscillation regime is concerned, the case is also realized if the inflaton is different from the supersymmetry breaking modulus, but decays early so that the latter is the only oscillating field at the intermediate epoch.
### Late-time gravitino creation in the general case
When additional fields and/or particles are involved, equations describing the evolution of the gravitino field are not yet available. However, late-time creation seems likely to occur quite generally up to the intermediate era, unless reheating occurs first.
Consider first models where the slowly-rolling inflaton field is the only one relevant for inflation, but something else breaks supersymmetry in the vacuum while ensuring that the potential vanishes there. In the model where the inflaton field did both jobs, late-time creation occurred because supergravity corrections to global supersymmetry become very important every time the potential dips to zero, causing the mode function to vary non-adiabatically. One should not expect that the introduction of something else to break supersymmetry would restore the adiabaticity.
Next, consider hybrid inflation models, where two fields are oscillating after inflation, with presumably a third breaking supersymmetry in the vacuum. In supersymmetric models, the two oscillating fields typically have the same mass $`M`$ after inflation. In general they are not oscillating in phase (though see for a case where they are), which means that they cannot be replaced by a single field. Just after inflation, gravitino creation in this situation may be estimated in the global supersymmetry limit, as described in for the case of $`F`$-term models. The number density will again be of order $`M^3`$, since the inflaton mass climbs to that value in a time of order $`M^{-1}`$. Again, there is no reason to expect gravitino production to stop, since supergravity corrections will again become important every time the potential dips to zero.
In both of these situations, a definite calculation will clearly become possible when the supergravity formalism of is extended to include two or more chiral superfields. It is not so clear how to calculate things when particles as opposed to homogeneous fields become important, but still it is fairly clear what will happen. Consider first the case that preheating converts most of the oscillating field energy into marginally relativistic particles. The energy of the latter will be somewhat reduced by redshifting even if it does not decay promptly into radiation, and one expects that after a few Hubble times oscillating fields again account for a non-negligible proportion of the energy density. Then late-time gravitino creation will presumably continue until the intermediate era, unless reheating happens first. After reheating, practically all of the energy density is in radiation, and one expects creation to stop, because nothing is varying rapidly. We emphasize that, although these expectations look quite reasonable, it is at the moment totally unclear how to describe the gravitino, when supersymmetry breaking comes mainly from the particle gas in the early Universe.
### Cosmological significance of late-time gravitino production
After production stops, the occupation number will be of order 1 below $`k=k_{\mathrm{max}}`$, giving number density
$`n`$ $``$ $`{\displaystyle \frac{2}{4\pi ^2}}a^{-3}{\displaystyle \int _0^{k_{\mathrm{max}}}}k^2𝑑k`$ (16)
$``$ $`10^{-2}(k_{\mathrm{max}}/a)^3.`$ (17)
The number density when creation stops is $`n\sim 10^{-2}p^3`$, where $`p=k_{\mathrm{max}}/a`$ is the maximum momentum of the created gravitinos.
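The prefactor in Eq. (17) is just the phase-space factor of Eq. (16); a one-line check:

```python
import numpy as np
# occupation ~ 1 up to momentum p gives n = (2/(4 pi^2)) * p^3/3 = p^3/(6 pi^2)
print(1.0 / (6.0 * np.pi**2))   # -> 0.0169, i.e. of order 1e-2
```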
We focus on the extreme case, where gravitino creation ends only at the intermediate era. This corresponds to energy density of order $`M_\mathrm{S}^4`$, where $`M_\mathrm{S}\equiv \sqrt{M_\mathrm{P}m_{3/2}}`$ is the scale of supersymmetry breaking. In gravity-mediated models of supersymmetry breaking, $`m_{3/2}\sim 100\text{GeV}`$ and $`M_\mathrm{S}\sim 10^{10}\text{GeV}`$. In typical gauge-mediated models, $`m_{3/2}\sim 100\text{keV}`$ and $`M_\mathrm{S}\sim 10^7\text{GeV}`$.
At the intermediate era, the gravitino number density is of order $`10^{-2}M^3`$. The relative abundance at nucleosynthesis is therefore
$`{\displaystyle \frac{n}{s}}`$ $``$ $`10^{-2}{\displaystyle \frac{\gamma T_\mathrm{R}M^3}{M_\mathrm{S}^4}}`$ (18)
$``$ $`10^{-2}{\displaystyle \frac{\gamma T_\mathrm{R}M^3}{M_\mathrm{P}^2m_{3/2}^2}}.`$ (19)
Here, $`s`$ is the entropy density at nucleosynthesis, and $`\gamma ^{-1}`$ is the increase in entropy per comoving volume (if any), between reheating at temperature $`T_\mathrm{R}`$ and nucleosynthesis. If the entropy increase comes only from a late-decaying particle, there can be only a modest increase corresponding to $`\gamma \simeq T_{\mathrm{FR}}/T_{\mathrm{EQ}}`$, where EQ is the epoch when the particle first dominates the energy density, and FR is the final reheat epoch when it decays. This gives $`\gamma T\text{ }>T_{\mathrm{FR}}\text{ }>10\text{MeV}`$, where the bound comes from nucleosynthesis. However, $`N`$ $`e`$-folds of thermal inflation may also occur. This would reduce $`\gamma `$ by an additional factor $`e^{3N}`$. One bout of thermal inflation typically gives $`N\sim 10`$ and a total $`\gamma `$ perhaps of order $`10^{-15}`$. Two bouts might give $`N\sim 20`$ and $`\gamma `$ down to $`\sim 10^{-30}`$ .
The cosmological significance of the gravitino depends on its true mass $`m_{3/2}`$. With gravity-mediated supersymmetry breaking, one expects $`m_{3/2}100\text{GeV}`$ to $`1\text{TeV}`$, and observation then requires
$$n/s\text{ }<10^{-13}.$$
(20)
The abundance of gravitinos from thermal collisions is then
$$n/s\simeq 10^{-13}(\gamma T_\mathrm{R}/10^9\text{GeV}),$$
(21)
leading to the bound $`\gamma T_\mathrm{R}\text{ }<10^9\text{GeV}`$.<sup>3</sup><sup>3</sup>3We refer here to the thermal creation at the initial reheating. If thermal inflation subsequently occurs, the most significant thermal creation occurs at the final reheating, and $`\gamma T_\mathrm{R}`$ in Eq. (21) is to be replaced by the final reheat temperature. Using instead Eq. (19), we find
$$10^{13}\frac{n}{s}\simeq \left(\frac{M}{10^7\text{GeV}}\right)^3\left(\frac{\gamma T_\mathrm{R}}{10^9\text{GeV}}\right)\left(\frac{100\text{GeV}}{m_{3/2}}\right)^2\text{ }<1.$$
(22)
Gravitinos created from the vacuum are more abundant than those from thermal collisions, if $`M\text{ }>10^7\text{GeV}`$. Without thermal inflation, we need $`\gamma T_\mathrm{R}\text{ }>10\text{MeV}`$ and therefore $`M\text{ }<10^{11}\text{GeV}`$. In the worst case $`T_\mathrm{R}M_\mathrm{S}`$, even a single bout of thermal inflation allows only $`M\text{ }<10^{11}\text{GeV}`$. With two bouts, or with one bout and low $`T_\mathrm{R}`$, more or less any $`M`$ up to $`M_\mathrm{P}`$ might be accommodated.
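The numbers in this paragraph follow directly from Eq. (19); a short sketch of the arithmetic for the gravity-mediated case (all input values as quoted in the text):

```python
M_P = 2.4e18            # GeV
m32 = 100.0             # GeV, gravity-mediated gravitino mass
gamma_TR = 1e-2         # GeV, i.e. gamma*T_R ~ 10 MeV, the nucleosynthesis floor

def n_over_s(M):
    """Eq. (19): n/s ~ 1e-2 * gamma*T_R * M^3 / (M_P^2 * m_{3/2}^2)."""
    return 1e-2 * gamma_TR * M**3 / (M_P**2 * m32**2)

# largest inflaton mass compatible with n/s <~ 1e-13
M_max = (1e-13 * M_P**2 * m32**2 / (1e-2 * gamma_TR)) ** (1.0 / 3.0)
print(f"{n_over_s(1e11):.1e}")  # ~2e-12 for M = 1e11 GeV, already above the 1e-13 bound
print(f"{M_max:.1e}")           # ~4e10 GeV, i.e. M <~ 1e11 GeV at the order-of-magnitude level
```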
With gauge-mediated supersymmetry breaking, one expects very roughly $`1\text{keV}\text{ }<m_{3/2}\text{ }<100\text{GeV}`$, with the upper decades disfavoured. Unless $`m_{3/2}\text{ }>100\text{MeV}`$, this leads to a stable gravitino with present density
$$\mathrm{\Omega }_{3/2}\simeq 10^5\left(\frac{m_{3/2}}{100\text{keV}}\right)\frac{n}{s}\text{ }<1$$
(23)
If creation from thermal collisions dominates, then very roughly
$$\mathrm{\Omega }_{3/2}\left(\frac{100\text{keV}}{m_{3/2}}\right)\left(\frac{\gamma T_\mathrm{R}}{10^4\text{GeV}}\right).$$
(24)
Using instead Eq. (19),
$$\mathrm{\Omega }_{3/2}\left(\frac{M}{10^8\text{GeV}}\right)^3\left(\frac{100\text{keV}}{m_{3/2}}\right)\left(\frac{\gamma T_\mathrm{R}}{10^4\text{GeV}}\right)\text{ }<1.$$
(25)
In this case, creation from the vacuum is more efficient if $`M\text{ }>10^8\text{GeV}`$, and we need thermal inflation if $`M\text{ }>10^{10}\text{GeV}`$. In the favored case $`m_{3/2}\sim 100\text{keV}`$, corresponding to $`M_\mathrm{S}\sim 10^7\text{GeV}`$, the constraints on $`M`$ are similar to those in the gravity-mediated case.
So much for the case that reheating happens after the intermediate epoch corresponding to $`T_\mathrm{R}\text{ }<M_\mathrm{S}`$. By way of contrast, consider the opposite extreme of instant reheating, $`T_\mathrm{R}V^{1/4}`$. Then,
$$\frac{n}{s}\simeq 10^{-2}\gamma \left(\frac{M}{V^{\frac{1}{4}}}\right)^3.$$
(26)
Either $`M/V^{1/4}`$ should be sufficiently small, or thermal inflation is again needed.
### Conclusion
Let us end by considering the implications of late-time gravitino creation for models of the early Universe, and in particular of inflation. Supersymmetric models of inflation are reviewed for example in . In some of them, notably those using only flat directions, the mass of the field(s) oscillating after inflation is small, and late-time creation cannot be significant. However, in most models, normalized to give the correct prediction for large scale structure, the inflaton mass is bigger than $`10^{10}\text{GeV}`$. Using the popular names, examples include chaotic inflation, the most popular hybrid inflation models ($`D`$ term, and the usual $`F`$ term model) and most new inflation potentials. The popular hybrid inflation models require $`M\sim V^{1/4}\sim 10^{15}\text{GeV}`$ or so.
In all of these cases, the possibility of late-time gravitino creation drastically changes one's view about the reheat temperature. Starting the discussion with instant reheating, Eq. (26) represents a significant constraint. The constraint becomes stronger as the reheat temperature is lowered, because gravitino creation persists to a later epoch so that $`n/s`$ increases. Only when the reheat temperature falls below $`M_\mathrm{S}`$, does the constraint, now represented by Eqs. (22) and (25), start to become weaker. In many cases, the constraint cannot be met simply by lowering the reheat temperature. Instead, the entropy dilution factor $`\gamma `$ has to be small, often so small that thermal inflation is required.
To avoid these constraints, one might turn to models using only flat directions, leading to $`M`$ perhaps of order $`m_{3/2}`$, and giving negligible gravitino creation. Such models include the modular inflation mentioned earlier (so far without a concrete realization in the context of string theory) and certain hybrid inflation models.
The next step will be to check that late-time gravitino production actually occurs in the models mentioned, using equations that describe the gravitino in the presence of several fields. After that would come the more challenging task of describing the gravitino when a particle gas is the dominant source of supersymmetry breaking.
## Acknowledgments
I am indebted to Andrei Linde, Renata Kallosh and Toni Riotto for useful discussions.
Hyperluminous Infrared Galaxies
## 1 Introduction: Have we detected primeval galaxies ?
This paper is directed towards the question: Have we already detected primeval galaxies?
* high redshift ($`z>1`$)
* undergoing a major episode of star formation
(to form $`10^{11}M_{}`$ in $`10^9`$ yrs, we need a star formation rate $`\dot{M}_{}\sim 10^2M_{}yr^{-1}`$)
* high gas fraction, say $`M_{gas}10^{11}M_{}`$
* evidence of interactions, mergers, dynamical youth.
First efforts to find such galaxies centred on spectroscopic searches for Ly$`\alpha `$-emitting galaxies (see eg the review by Djorgovski and Thompson 1992). While examples of such galaxies are now being found, these early surveys suggested that either star formation must be a more protracted process, occurring in smaller bursts (as expected in many bottom-up scenarios), or that dust extinction must play a large role.
Steidel et al (1996) have shown that star-forming galaxies at z $`>`$ 3 can be found through deep ground-based photometry in the U, G and R bands. The high redshift galaxies manifest themselves as U-band ’dropouts’ as the Lyman limit absorption is redshifted into the U band. Over 500 spectroscopically confirmed high redshift galaxies have now been found by this technique. Many have weak or non-existent Lyman $`\alpha `$ emission, which accounts for the lack of success of the spectroscopic surveys. The role of dust in these galaxies has been discussed by Pettini et al (1997, 1998a,b), Meurer et al (1997, 1998), Calzetti (1998). Pettini et al (1997, 1998a,b), Dickinson (1998) and Steidel et al (1998). Even at redshift 3 it appears that dust extinction may be appreciable. However star formation rates in these galaxies are not exceptional, typically 1-30 $`M_{}/yr`$.
The first evidence for galaxies with very high rates of star formation came from infrared surveys. Starburst galaxies had been first identified by Weedman (1975) from their ultraviolet excesses and characteristic emission-line spectra. Prior to the launch of IRAS, balloon and airborne measurements had demonstrated that $`L_{fir}>L_{opt}`$ for several starburst galaxies (see review by Sanders and Mirabel 1996). Joseph et al (1984) proposed that interactions and mergers might play a role in triggering starbursts. One of the major discoveries of the IRAS mission was the existence of ultraluminous infrared galaxies, galaxies with $`L_{fir}>10^{12}h_{50}^{-2}L_{}(h_{50}=H_o/50)`$. The peculiar Seyfert 2 galaxy Arp 220 was recognised as having an exceptional far infrared luminosity early in the mission (Soifer et al 1984).
The conversion from far infrared luminosity to star formation rate has been discussed by many authors (eg Scoville and Young 1983, Thronson and Telesco 1986, Rowan-Robinson et al 1997). Rowan-Robinson (1999) has given an updated estimate of how the star-formation rate can be derived from the far infrared luminosity, finding
$`\dot{M}_{,all}/[L_{60}/L_{}]`$ = 2.2 $`\varphi /ϵ`$ x$`10^{-10}`$
where $`\varphi `$ takes account of the uncertainty in the IMF (= 1, for a standard Salpeter function) and $`ϵ`$ is the fraction of uv light absorbed by dust, estimated to be 2$`/`$3 for starburst galaxies (Calzetti 1998). We see that the star-formation rates in ultraluminous galaxies are $`>10^2M_{}yr^{-1}`$. However the time-scale of luminous starbursts may be in the range $`10^7`$–$`10^8`$ yrs (Goldader et al 1997), so the total mass of stars formed in the episode may typically be only 10$`\%`$ of the mass of a galaxy.
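With the fiducial choices $`\varphi `$ = 1 and $`ϵ`$ = 2/3, the conversion is a one-liner; for instance (a sketch of the arithmetic only):

```python
def sfr_from_L60(L60_solar, phi=1.0, eps=2.0 / 3.0):
    """Star formation rate in M_sun/yr from the 60 micron luminosity in L_sun."""
    return 2.2e-10 * (phi / eps) * L60_solar

print(sfr_from_L60(1e12))   # ultraluminous threshold: ~330 M_sun/yr
print(sfr_from_L60(1e13))   # hyperluminous regime:   ~3300 M_sun/yr
```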
In this paper I discuss an even more extreme class of infrared galaxy, hyperluminous infrared galaxies, which I define to be those with rest-frame infrared (1-1000 $`\mu `$m) luminosities, $`L_{bol,ir}`$, in excess of $`10^{13.22}h_{50}^{-2}L_{}`$ (=$`10^{13.0}h_{65}^{-2}L_{}`$). For a galaxy with an M82-like starburst spectrum this corresponds to $`L_{60}\simeq 10^{13}h_{50}^{-2}L_{}`$, since the bolometric correction at 60 $`\mu `$m is 1.63. Sanders and Mirabel (1996) have a slightly more stringent criterion, $`L_{bol,ir}>10^{13}h_{75}^{-2}L_{}`$, but in practice they use an estimate of $`L_{bol}`$ based on IRAS fluxes, which results in a demarcation almost identical to that adopted here. While the emission at rest-frame wavelengths 3-30 $`\mu `$m in these galaxies is often due to an AGN dust torus (see below), I argue that their emission at rest-frame wavelengths $`\ge 50\mu `$m is primarily due to extreme starbursts, implying star formation rates in excess of 1000 $`M_o/yr`$. These then are excellent candidates for being primeval galaxies, galaxies undergoing a very major episode of star formation.
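The equivalence of the two thresholds, and the corresponding limit on $`L_{60}`$, is simple bookkeeping (using the bolometric correction of 1.63 quoted above):

```python
import numpy as np

# luminosities scale as H_o^-2, so 10^13.22 h50^-2 L_sun expressed in h65 units is
log_L_h65 = 13.22 + 2.0 * np.log10(50.0 / 65.0)
print(round(log_L_h65, 2))      # -> 12.99, i.e. ~1e13 h65^-2 L_sun

# M82-like starburst: L_bol,ir / (nu L_nu at 60 um) ~ 1.63
log_L60 = 13.22 - np.log10(1.63)
print(round(log_L60, 2))        # -> 13.01, i.e. L_60 ~ 1e13 h50^-2 L_sun
```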
A preliminary version of these arguments was given by Rowan-Robinson (1996). Granato et al (1996) modelled the seds of 4 hyperluminous galaxies (F10214, H1413, P09104 and F15307) in terms of an AGN dust torus model. Hughes et al (1997) argued that hyperluminous galaxies cannot be thought of as primeval galaxies on the basis of their estimates of the gas mass and star formation rates in these galaxies. However I show below that their arguments are not compelling. Fabian et al (1994, 1998) argued from X-ray evidence that 09104+4109 and 15307+325 are obscured AGN and this interpretation is supported for these objects by the non-detection of CO and submm continuum radiation (Yun and Scoville 1999, Evans et al 1999). However the very severe upper limits on X-ray emission set by Wilman et al (1999) for several hyperluminous infrared galaxies led the latter to conclude that the objects might be powered by starbursts. McMahon et al (1999) interpret the submillimetre emission from high redshift quasars and other hyperluminous infrared galaxies as powered, at least in part, by the AGN rather than by star formation. On the other hand Frayer et al (1999a,b) favour a starburst interpretation for 14011+0252 and 02399-0136. I discuss these arguments further below and try to resolve the question of what fraction of the far infrared luminosity is powered by a starburst or AGN. The use of model spectral energy distributions derived from accurate radiative transfer codes is a significant advance on some previous work.
I assume throughout that $`H_o`$ = 50, $`\mathrm{\Omega }_o`$ = 1. With lower $`\mathrm{\Omega }_o`$, more galaxies, especially at higher redshifts, would satisfy the definition adopted.
## 2 Properties of ultraluminous infrared galaxies
Sanders et al (1988) discussed the properties of 10 IRAS ultraluminous galaxies with 60 $`\mu `$m fluxes $`>`$ 5 Jy and concluded that (a) all were interacting, merging or had peculiar morphologies, (b) all had AGN line spectra. On the other hand Leech et al (1989) found that only 2 of their sample of 6 ultraluminous IRAS galaxies had an AGN line spectrum. Leech et al (1994) found that 67 $`\%`$ of a much larger sample (42) of ultraluminous galaxies were interacting, merging or peculiar. Lawrence et al 1989 had found a much lower fraction amongst galaxies of high but less extreme infrared luminosity. The incidence of interacting, merging or peculiar galaxies by ir luminosity is summarised in Fig 1 of Rowan-Robinson (1991): the proportion of galaxies which are peculiar, interacting or merging increases steadily from 10-20$`\%`$ at low ir luminosities to $`>80\%`$ for ultraluminous ir galaxies. The situation on point (b) remains controversial, though, since Lawrence et al (1999) find only a fraction 21$`\%`$ of 81 ultraluminous galaxies in the QDOT sample to have AGN spectra (but on the basis only of low-resolution spectra). Veilleux et al (1995) find for a smaller sample (21 galaxies) that 33$`\%`$ of ultraluminous galaxies are Seyfert 1 or 2, with an additional 29$`\%`$ having liner spectra, which they also classify as AGN. Veilleux et al (1999a) find that 24$`\%`$ of 77 galaxies with $`10^{12}<L_{ir}<10^{12.3}`$ are Seyfert 1 or 2, increasing to 49$`\%`$ of 31 galaxies with $`10^{12.3}<L_{ir}<10^{12.8}`$ (for $`H_o=75`$). Veilleux et al (1999b) find, from near-ir imaging studies, no evidence that liners should be considered to be AGN. Sanders (1999) reports that most nearby ultraluminous ir galaxies contain an AGN at some level.
Rowan-Robinson and Crawford (1989) found that their standard starburst galaxy model gave an excellent fit to the far infrared spectrum of Mk 231, an archetypal ultraluminous ir galaxy. However their models for Arp 220 appeared to require a much higher optical depth in dust than the typical starburst galaxy. Condon et al (1991) showed that the radio properties of most ultraluminous ir galaxies were consistent with a starburst model and argued that many of these galaxies required an exceptionally high optical depth. This suggestion was confirmed by the detailed models of Rowan-Robinson and Efstathiou (1993) for the far infrared spectra of the Condon et al sample.
Quasars and Seyfert galaxies, on the other hand, tend to show a characteristic mid infrared continuum, broadly flat in $`\nu S_\nu `$ from 3-30 $`\mu `$m. This component was modelled by Rowan-Robinson and Crawford (1989) as dust in the narrow-line region of the AGN with a density distribution n(r) $`\alpha `$ $`r^{-1}`$ . More realistic models of this component based on a toroidal geometry are given by Pier and Krolik (1992), Granato and Danese (1994), Rowan-Robinson (1995), Efstathiou and Rowan-Robinson (1995). Rowan-Robinson (1995) suggested that most quasars contain both (far ir) starbursts and (mid ir) components due to (toroidal) dust in the narrow line region.
Rowan-Robinson and Crawford (1989) were able to fit the IRAS colours and spectral energy distributions of galaxies detected in all 4 IRAS bands with a mixture of 3 components, emission from interstellar dust (’cirrus’), a starburst and an AGN dust torus. Recently Xu et al (1998) have shown that the same 3-component approach can be used to fit the ISO-SWS spectra of a large sample of galaxies. To accomodate the Condon et al (1991) and Rowan-Robinson and Efstathiou (1993) evidence for higher optical depth starbursts, Ferrigno et al (1999) have extended the Rowan-Robinson and Crawford (1989) analysis to include a fourth component, an Arp220-like, high optical depth starburst, for galaxies with log $`L_{60}>`$ 12. Efstathiou et al (1999) have given improved radiative transfer models for starbursts as a function of the age of the starburst, for a range of initial dust optical depths.
Sanders et al (1988) proposed, on the basis of spectroscopic arguments for a sample of 10 objects, that all ultraluminous infrared galaxies contain an AGN and that the far infrared emission is powered by this. Sanders et al (1989) proposed a specific model, in the context of a discussion of the infrared emission from PG quasars, that the far infrared emission comes from the outer parts of a warped disk surrounding the AGN. This is a difficult hypothesis to disprove, because if an arbitrary density distribution of dust is allowed at large distances from the AGN, then any far infrared spectral energy distribution could in fact be generated. In this paper I consider whether the AGN dust torus model of Rowan-Robinson (1995) can be extended naturally to explain the far infrared and submillimetre emission from hyperluminous infrared galaxies, but conclude that in many cases this does not give a satisfactory explanation. I also place considerable weight on whether molecular gas is detected in the objects via CO lines.
Rigopoulou et al (1996) observed a sample of ultraluminous infrared galaxies from the IRAS 5 Jy sample at submillimetre wavelengths, with the JCMT, and at X-ray wavelengths, with ROSAT. They found that most of the far infrared and submillimetre spectra were fitted well with the starburst model of Rowan-Robinson and Efstathiou (1993). The ratios of bolometric luminosities at 1 keV and 60 $`\mu `$m lie in the range $`10^{-5}`$–$`10^{-4}`$ and are consistent with a starburst interpretation of the X-ray emission in almost all cases. Even more conclusively, Lutz et al (1996) and Genzel et al (1998) have used ISO-SWS spectroscopy to show that the majority of ultraluminous ir galaxies are powered by a starburst rather than an AGN.
## 3 Hyperluminous Infrared Galaxies
In 1988 Kleinmann et al identified P09104+4109 with a z = 0.44 galaxy, implying a total far infrared luminosity of 1.5x$`10^{13}`$, a factor 3 higher than any other ultraluminous galaxy seen to that date. In 1991, as part of a programme of systematic identification and spectroscopy of a sample of 3400 IRAS Faint Source Survey (FSS) sources, Rowan-Robinson et al discovered IRAS F10214+4724, an IRAS galaxy with z = 2.286 and a far infrared luminosity of 3x$`10^{14}h_{50}^{-2}L_{}`$ . This object appeared to presage an entirely new class of infrared galaxies. The detection of a huge mass of CO by Brown and Vanden Bout (1991), $`10^{11}h_{50}^{-2}M_{}`$, confirmed by the detection of a wealth of molecular lines (Solomon et al 1992), and of submillimetre emission at wavelengths 450-1250 $`\mu `$m (Rowan-Robinson et al 1991,1993, Downes et al 1992), implying a huge mass of dust, $`10^9h_{50}^{-2}M_{}`$, confirmed that this was an exceptional object. Early models suggested this might be a giant elliptical galaxy in the process of formation (Elbaz et al 92). Simultaneously with the growing evidence for an exceptional starburst in F10214, the Seyfert 2 nature of the emission line spectrum (Rowan-Robinson et al 1991, Elston et al 1994a) was supported by the evidence for very strong optical polarisation (Lawrence et al 93, Elston et al 94b). Subsequently it has become clear that F10214 is a gravitationally lensed system (Graham and Liu 1995, Broadhurst and Lehar 1995, Serjeant et al 1995, Eisenhardt et al 1996) with a magnification of about 10 at far infrared wavelengths, but not much greater than that (Green and Rowan-Robinson 1996). Even when the magnification of 10 is allowed for, F10214 is still an exceptionally luminous far ir source.
In 1992 Barvainis et al successfully detected submillimetre emission from the z=2.546 ’clover-leaf’ gravitationally lensed QSO, H1413+117, which suggested that H1413 is of similar luminosity to F10214. Subsequently (Barvainis et al 1995) they realized that the galaxy was an IRAS FSC source.
Here I want to place emphasis on the hyperluminous galaxies detected as a result of unbiased surveys at far infrared (and submillimetre) wavelengths. The program of follow-up of IRAS FSS sources which led to the discovery of F10214 (Rowan-Robinson et al 1991, Oliver et al 1996) has also resulted in the discovery of a further 6 galaxies or quasars with far ir luminosities $`>10^{13.22}h_{50}^{-2}L_{}`$ (McMahon et al 1999). Four galaxies from the PSCz survey (Saunders et al 1996) of IRAS galaxies brighter than 0.6 Jy at 60 $`\mu `$m fall into the hyperluminous category (a further two are blazars, 3C345 and 3C446, and these are not considered further here), and a further example has been found by Stanford et al (1999) in a survey based on a comparison of the IRAS FSS with the VLA FIRST radio survey. Two galaxies detected in submillimetre surveys at 850 $`\mu `$m with SCUBA also fall into the hyperluminous category (but one of these only because of the effect of gravitational lensing).
Cutri et al (1994) reported a search for IRAS FSS galaxies with ’warm’ 25/60 $`\mu `$m colours, which yielded the z = 0.93 Seyfert 2 galaxy, F15307+3252 (see also Hines et al 1995). Wilman et al (1999) have reported a further 2 hyperluminous galaxies from a more recent search for ’warm’ galaxies by Cutri et al (1999, in preparation). Dey and van Breugel (1995) reported a comparison of the Texas radio survey with the IRAS FSS catalogue, which resulted in 5 galaxies with far ir luminosities $`>10^{13}h_{50}^{-2}L_{}`$. However three of these are present only in the FSS Reject Catalogue and have not been confirmed as far infrared sources to date. The other two are discussed below. Four PG quasars from the list of Sanders et al (1989) (two of which are part of the study of Rowan-Robinson (1995)), fall into the hyperluminous category. Recently Irwin et al (1999) have found a z = 3.91 quasar which is associated with IRAS FSS source F08279+5255, the highest redshift IRAS object to date.
Finally, inspired by the success in finding highly redshifted submillimetre continuum and molecular line emission in F10214, several groups have studied an ad hoc selection of very high redshift quasars and radio-galaxies, with several notable successes (Andreani et al 1993, Dunlop et al 1995, Isaak et al 1995, McMahon et al 1995b, Ojik et al 1995, Ivison 1995, Omont et al 1997, Hughes et al 1997, McMahon et al 1999). Many of these detections imply far ir luminosities $`>10^{13.22}h_{50}^{-2}L_{}`$ , assuming that the far ir spectra are typical starbursts. In all there are now 39 hyperluminous infrared galaxies known, which are listed in Tables 1-3 according to whether they are (1) the result of unbiased 60 $`\mu `$m (or submm) surveys, (2) found from comparison of known quasar and radio-galaxy lists with 60 $`\mu `$m catalogues, (3) found through submillimetre observations of known high redshift AGN. Table 4 lists some luminous infrared galaxies which do not quite meet my criteria, but have far infrared luminosities $`>10^{13.0}L_{}`$. But to set these in perspective there are a further 20 PSCz galaxies which have 13.00 $`<log_{10}L_{ir}/L_{}<`$ 13.22 (for $`H_o`$ = 50).
From the surveys summarised in Table 1 we can estimate that the number of hyperluminous galaxies per sq deg brighter than 200 mJy at 60 $`\mu `$m is 0.0027-0.0043, which would imply that there are 100-200 hyperluminous IRAS galaxies over the whole sky, 25 of which are listed in Tables 1 and 2.
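The whole-sky figure is the quoted surface density times the full sky area of about 41253 square degrees:

```python
sky_deg2 = 41253.0
for density in (0.0027, 0.0043):          # per sq deg, S_60 > 200 mJy
    print(round(density * sky_deg2))      # -> 111 and 177, i.e. roughly 100-200 objects
```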
## 4 Models for Hyperluminous Infrared Galaxies
For a small number of these galaxies we have reasonably detailed continuum spectra from radio to uv wavelengths. Figures 1-17 show the infrared continua of these hyperluminous galaxies, with fits using radiative transfer models (specifically the standard M82-like starburst model and an Arp220-like high optical depth starburst model from Efstathiou et al (1999) and the standard AGN dust torus model of Rowan-Robinson (1995)). More than half of those shown have measurements at at least 9 independent wavelengths.
We now discuss the individual objects (and a few not plotted) in turn:
$`\mathrm{𝐅𝟏𝟎𝟐𝟏𝟒}+\mathrm{𝟒𝟕𝟐𝟒}`$
The continuum emission from F10214 was the subject of a detailed discussion by Rowan-Robinson et al (1993). Green and Rowan-Robinson (1995) have discussed starburst and AGN dust tori models for F10214. Fig 1 shows M82-like and Arp 220-like starburst models fitted to the submillimetre data for this galaxy. The former gives a good fit to the latest data. The 60 $`\mu `$m flux requires an AGN dust torus component. To accommodate the upper limits at 10 and 20 $`\mu `$m, it is necessary to modify the Rowan-Robinson (1995) AGN dust torus model so that the maximum temperature of the dust is 1000 K rather than 1600 K. I have also shown the effect of allowing the dust torus to extend a further factor 3.5 in radius. This still does not account for the amplitude of the submm emission. The implied extent of the narrow-line region for this extended AGN dust torus model, which we use for several other objects, would be 326 $`(L_{bol}/10^{13}L_{})^{1/2}`$ pc consistent with 60-600 $`(L_{bol}/10^{13}L_{})^{1/2}`$ pc quoted by Netzer (1990). Evidence for a strong starburst contribution to the ir emission from F10214 is given by Kroker et al (1996) and is supported by the high gas mass detected via CO lines (see section 6). Granato et al (1996) attempt to model the whole sed of F10214 with an AGN dust torus model, but still do not appear to be able to account for the 60 $`\mu `$m emission.
$`\mathrm{𝐅𝟎𝟎𝟐𝟑}+\mathrm{𝟏𝟎𝟐𝟒}`$
A starburst model fits the IRAS and ISO data (Verma et al 1999) well and there is a strong limit on any AGN dust torus contribution.
$`\mathrm{𝐒𝐌𝐌𝐉𝟎𝟐𝟑𝟗𝟗}\mathrm{𝟎𝟏𝟑𝟔}`$
A starburst model fits the submm data well and the ISO detection at 15 $`\mu `$m gives a very severe constraint on any AGN dust torus component. The starburst interpretation of the submm emission is supported by the gas mass estimated from CO detections (Frayer et al 1999b).
$`\mathrm{𝐅𝟏𝟒𝟐𝟏}+\mathrm{𝟑𝟖𝟒𝟓}`$
A starburst model is the most likely interpretation of the IRAS and ISO 60-180 $`\mu `$m data, but there are discrepancies. There is a strong limit on any AGN dust torus component.
$`\mathrm{𝐓𝐗𝟎𝟎𝟓𝟐}+\mathrm{𝟒𝟕𝟏𝟎}`$
There is little evidence for a starburst contribution to the sed of this galaxy. An extended AGN dust torus model fits the data reasonably well, apart from the ISO detection by Verma et al (1999) at 180 $`\mu `$m.
$`\mathrm{𝐅𝟎𝟖𝟐𝟕𝟗}+\mathrm{𝟓𝟐𝟓𝟓}`$
An M82-like starburst is a good fit to the submm data and an AGN dust torus model is a good fit to the 12-100 $`\mu `$m data. The high gas mass detected via CO lines (Downes et al 1998) supports a starburst interpretation, though the ratio of $`L_{sb}/M_{gas}`$ is on the high side (see section 6). The submm data can also be modelled by an extension of the outer radius of the AGN dust torus and in this case the starburst luminosities given in Table 2 will be upper limits.
$`\mathrm{𝐏𝟎𝟗𝟏𝟎𝟒}+\mathrm{𝟒𝟏𝟎𝟗}`$
The 100 $`\mu `$m upper limit places a limit on the starburst contribution and there is no detection of CO emission in this galaxy. The 12-60 $`\mu `$m data can be fitted by the extended AGN dust torus model.
$`\mathrm{𝐏𝐆𝟏𝟏𝟒𝟖}+\mathrm{𝟓𝟒𝟗}`$
A starburst is a natural explanation of the 100 $`\mu `$m excess compared to the AGN dust torus fit to the 25 and 60 $`\mu `$m data, but detection in the submm and in CO would be important for confirming this interpretation.
$`\mathrm{𝐏𝐆𝟏𝟐𝟎𝟔}+\mathrm{𝟒𝟓𝟗}`$
The IRAS 12-60 $`\mu `$m data and the ISO 12-200 $`\mu `$m data (Haas et al 1998) are well-fitted by the extended AGN dust torus model and there is no evidence for a starburst.
$`\mathrm{𝐇𝟏𝟒𝟏𝟑}+\mathrm{𝟏𝟏𝟕}`$
The submm data is well fitted by an M82-like starburst and the gas mass implied by the CO detections (Barvainis et al 1994) supports this interpretation. The extended AGN dust torus model discussed above does not account for the submm emission. However Granato et al (1996) model the whole sed of H1413 in terms of an AGN dust torus model.
$`\mathrm{𝟏𝟓𝟑𝟎𝟕}+\mathrm{𝟑𝟐𝟓}`$
A starburst model gives a natural explanation for the 60-180 $`\mu `$m excess compared to the AGN dust torus model required for the 6.7 and 14.3 $`\mu `$m emission (Verma et al 1999), but the non-detection of CO poses a problem for a starburst interpretation.
$`\mathrm{𝐏𝐆𝟏𝟔𝟑𝟒}+\mathrm{𝟕𝟎𝟔}`$
The IRAS 12-100 $`\mu `$m data and the ISO 150-200 $`\mu `$m data (Haas et al 1998) are well-fitted by the extended AGN dust torus model and an upper limit can be placed on any starburst component. The non-detection of CO is consistent with this upper limit.
$`\mathrm{𝟒}𝐂\mathrm{𝟎𝟔𝟒𝟕}+\mathrm{𝟒𝟏𝟑𝟒}`$
An M82-like starburst model gives a reasonable fit to the submm data (although the 1250 $`\mu `$m flux seems very weak). Observations in the mid-ir are needed to constrain the AGN dust torus. Since the AGN is not seen directly we have no constraints on its optical luminosity. We have used the non-detection of this source by IRAS (which we take to imply S(60) $`<`$ 250 mJy) to set a limit on $`L_{tor}`$. This limit is not strong enough to rule out an AGN dust torus interpretation of the submm data.
$`\mathrm{𝐁𝐑𝟏𝟐𝟎𝟐}\mathrm{𝟎𝟕𝟐𝟓}`$
The submm data are well-fitted by an M82-like starburst model. The QSO is seen directly, although with strong self-absorption in the lines (Storrie-Lombardi et al 1996), so we can probably not use the QSO bolometric luminosity to set a limit on $`L_{tor}`$. Wilkes et al (1998) have detected this galaxy at 7, 12 and 25 $`\mu `$m with ISO, which would imply that the QSO is undergoing very strong extinction. An AGN dust torus model is capable of accounting for the whole spectrum but the starburst interpretation of the submm emission is supported by the gas mass estimated from CO detections (Ohta et al 1996, Omont et al 1996).
$`\mathrm{𝐏𝐆𝟏𝟐𝟒𝟏}+\mathrm{𝟏𝟕𝟔}`$
The ISO data of Haas et al (1999) can be fitted with an AGN dust torus model and there is no evidence for a starburst component. The 1.3 mm flux is probably an extrapolation of the radio continuum.
$`\mathrm{𝐏𝐆𝟏𝟐𝟒𝟕}+\mathrm{𝟐𝟔𝟕}`$ The ISO data of Haas et al (1999) can be fitted with an AGN dust torus model and there is no evidence for a starburst component.
$`\mathrm{𝐏𝐆𝟏𝟐𝟓𝟒}+\mathrm{𝟎𝟒𝟕}`$ The ISO data of Haas et al (1999) can be fitted with an AGN dust torus model and there is only weak evidence for a starburst component.
$`\mathrm{𝐁𝐑𝐈𝟏𝟑𝟑𝟓}\mathrm{𝟎𝟒𝟏𝟕}`$
The submm data can be fitted with a starburst model and since the QSO is seen directly we can use its estimated bolometric luminosity to set a limit on $`L_{tor}`$, which makes it unlikely that the submm emission is from an AGN dust torus. The starburst interpretation is supported by the gas mass estimated from CO detections (Guilloteau et al 1997).
$`\mathrm{𝐏𝐂𝟐𝟎𝟒𝟕}+\mathrm{𝟎𝟏𝟐𝟑}`$
The limit on $`L_{tor}`$ from the bolometric luminosity of the QSO makes it unlikely that an AGN dust torus is responsible for the 350 $`\mu `$m emission. However a starburst model can not fit both the 350 and 1250 $`\mu `$m observed fluxes. Observations at other submm wavelengths may help to clarify the situation.
For the remaining objects we have only 60 $`\mu `$m or single submillimetre detections and for these we estimate their far infrared luminosity, and other properties, using the standard starburst model of Efstathiou et al (1999). Tables 1-4 give the luminosities inferred in the starburst ($`L_{sb}`$) (and fits of the A220 model in brackets) and AGN dust torus ($`L_{tor}`$) components, or limits on these, with an indication, from the row position of the estimate, of which wavelength the estimate is made at. In Fig 18 we show the far infrared luminosity derived for an assumed starburst component, versus redshift, for hyperluminous galaxies, with lines indicating observational constraints at 60, 800 and 1250 $`\mu `$m. Three of the sources with (uncorrected) total bolometric luminosities above $`10^{14}h_{50}^2L_o`$ are strongly gravitationally lensed. IRAS F10214+4724 was found to be lensed with a magnification which ranges from 100 at optical wavelengths to 10 at far infrared wavelengths (Eisenhardt et al 1996, Green and Rowan-Robinson 1996). The ’clover-leaf’ lensed system H1413+117 has been found to have a magnification of 10 (Yun et al 1997). Downes et al (1998) report a magnification of 14 for F08279+5255. Also, Ivison et al estimate a magnification of 2.5 for SMMJ02399-0136 and Frayer et al (1999a) quote a magnification of 2.75 $`\pm `$0.25 for SMMJ14011+0252.
These magnifications have to be corrected for in estimating luminosities (and dust and gas masses) and these corrections are indicated in Fig 18. It appears to be a reasonable assumption that if a starburst luminosity in excess of $`10^{14}L_{}`$ is measured then the source is likely to be lensed, so F14218 and TX1011 merit further more careful study.
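The correction itself is simply a division by the magnification; the sketch below collects the factors quoted above and applies them to an observed (lensed) luminosity or mass. It is illustrative only, and for F10214 the appropriate factor depends on the wavelength at which the quantity was measured.

```python
# Lens magnifications quoted in the text; intrinsic quantities are observed / mu.
magnifications = {
    "F10214+4724": 10.0,       # far-infrared value (about 100 at optical wavelengths)
    "H1413+117": 10.0,
    "F08279+5255": 14.0,
    "SMMJ02399-0136": 2.5,
    "SMMJ14011+0252": 2.75,
}

def delensed(observed, name):
    """Correct an observed luminosity (or dust/gas mass) for the lens magnification."""
    return observed / magnifications[name]

print(delensed(1e14, "H1413+117"))   # an apparent 1e14 L_sun starburst -> 1e13 L_sun intrinsic
```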
On the other hand there is strong evidence for a population of galaxies with far ir luminosities in the range 1-3x$`10^{13}h_{50}^2L_{}`$. I have argued that in most cases the rest-frame radiation longward of 50 $`\mu `$m comes from a starburst component. The luminosities are such as to require star formation rates in the range 3-10x$`10^3h_{50}^2M_{}`$/yr, which would in turn generate most of the heavy elements in a $`10^{11}M_{}`$ galaxy in $`10^7`$–$`10^8`$ yrs. Most of these galaxies can therefore be considered to be undergoing their most significant episode of star formation, ie to be in the process of ‘formation’.
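The enrichment timescale quoted above is just the stellar mass divided by the star formation rate; a one-line check of the arithmetic:

```python
# Time to form ~1e11 M_sun of stars (and hence most of the heavy elements)
# at the star formation rates inferred above.
M_stars = 1e11                    # M_sun
for sfr in (3e3, 1e4):            # M_sun/yr
    print(f"SFR = {sfr:.0e} M_sun/yr -> t = {M_stars / sfr:.1e} yr")
# 3.3e7 and 1.0e7 yr, consistent with the 1e7-1e8 yr quoted above
```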
## 5 The role of AGN
It appears to be significant that a large fraction of these objects are Seyferts, radio-galaxies or QSOs. For the galaxies in Tables 2 and 3, this is a selection effect in that these objects are deliberately selected to be, or to be biased towards, AGN. For the population of objects found from direct optical follow-up of IRAS samples or 850 $`\mu `$m surveys (Table 1), out of 12 objects, 5 are QSOs or Seyfert 1, 1 is Seyfert 2, and 6 are narrow-line objects. Thus in at least 50 $`\%`$ of cases, this phase of exceptionally high far ir luminosity is accompanied by AGN activity at optical and uv wavelengths. This proportion might increase if high resolution spectroscopy were available for all the galaxies. For comparison the proportion of ultraluminous galaxies which contain AGN has also been estimated as 49 $`\%`$ (Veilleux et al 1999). However despite the high proportion of ultraluminous and hyperluminous galaxies which contain AGN, this does not prove that an AGN is the source of the rest-frame far infrared radiation. The ISO-LWS mid-infrared spectroscopic programme of Genzel et al (1998), Lutz et al (1998), has shown that the far infrared radiation of most ultraluminous galaxies is powered by a starburst, despite the presence of an AGN in many cases. Wilman et al (1999) have shown that the X-ray emission from several hyperluminous galaxies is too weak for them to be powered by a typical AGN.
In the Sanders et al (1989) picture, the far infrared and submillimetre emission would simply come from the outer regions of a warped disk surrounding the AGN. Some weaknesses of this picture as an explanation of the far infrared emission from PG quasars have been highlighted by Rowan-Robinson (1995). A picture in which both a strong starburst and the AGN activity are triggered by the same interaction or merger event is far more likely to be capable of understanding all phenomena (cf Yamada 1994, Taniguchi et al 1999).
Where hyperluminous galaxies are detected at rest-frame wavelengths in the range 3-30 $`\mu `$m (and this can correspond to observed wavelengths up to 150 $`\mu `$m), the infrared spectrum is often found to correspond well to emission from a dust torus surrounding an AGN (eg Figs 1, 6-10, 12). This emission often contributes a substantial fraction of the total infrared (1-1000 $`\mu `$m) bolometric luminosity. For the 12 ir-selected objects of Table 1, the luminosity in the dust torus component exceeds that in the starburst for 5 of the galaxies (42$`\%`$). The advocacy of this paper for luminous starbursts relates only to the rest-frame emission at wavelengths $`\gtrsim 50\mu `$m. Figure 19 shows the correlation between the luminosity in the starburst component, $`L_{sb}`$, and the AGN dust torus component, $`L_{tor}`$, for hyperluminous infrared galaxies, PG quasars (Rowan-Robinson 1995), and IRAS galaxies detected in all 4 bands (Rowan-Robinson and Crawford 1989) (this extends Fig 8 of Rowan-Robinson 1995). The range of the ratio between these quantities, with 0.1 $`\lesssim L_{sb}/L_{tor}\lesssim `$ 10, is similar over a very wide range of infrared luminosity (5 orders of magnitude), showing that the proposed separation into these two components for hyperluminous ir galaxies is not at all implausible.
## 6 Dust and gas masses
The radiative transfer models can be used to derive dust masses and hence, via an assumed gas-to-dust ratio, gas masses. For the M82-like starburst model used here the appropriate conversion is $`M_{dust}=10^{-4.6}L_{sb}`$, in solar units (Green and Rowan-Robinson 1996). These estimates have been converted into estimates of gas mass assuming $`M_{gas}/M_{dust}`$ = 300 (tables 1-4, col 9, bracketed values). However these estimates will not assist in deciding the plausibility of the starburst models, because the radiative transfer models are automatically self-consistent models of massive star-forming molecular clouds. Estimates derived from $`\nu ^\beta B_\nu (T_d)`$ fits to the spectral energy distributions are even less physically illuminating.
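For reference, the two scalings just described can be chained together as follows (a sketch only; the example luminosity is arbitrary):

```python
import math

def dust_mass(L_sb):
    """M_dust in M_sun from the starburst luminosity in L_sun (M82-like starburst conversion)."""
    return 10**(-4.6) * L_sb

def gas_mass(L_sb, gas_to_dust=300.0):
    """Gas mass from the dust mass, assuming M_gas/M_dust = 300 as in the tables."""
    return gas_to_dust * dust_mass(L_sb)

L_sb = 10**13.2   # a representative hyperluminous starburst luminosity in L_sun
print(f"M_dust ~ 10^{math.log10(dust_mass(L_sb)):.1f} M_sun")   # ~10^8.6
print(f"M_gas  ~ 10^{math.log10(gas_mass(L_sb)):.1f} M_sun")    # ~10^11.1
```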
Far more valuable are the cases where direct estimates of gas mass can be derived from molecular line (generally CO) observations. Where available, these estimates have been given in Tables 1-4, col 9, taken from Frayer et al 1999a (and references therein), Barvainis et al 1998, Evans et al (1999), Yun and Scoville (1999). Figure 20 shows a comparison of the estimates of $`L_{sb}`$ derived here with estimates of $`M_{gas}`$ derived from CO observations. Also shown are results for ultraluminous ir galaxies (Solomon et al 1997) and for more typical IRAS galaxies (Sanders et al 1991) (after rationalisation of some objects in common).
The appropriate conversion factor from CO luminosity to gas mass is a matter of some controversy. For luminous infrared galaxies, Sanders et al (1991) used a characteristic value for molecular clouds in our Galaxy, 4.78 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$. Solomon et al (1997) found that such a value led to gas mass estimates for ultraluminous infrared galaxies a factor of 3 or more in excess of the dynamical masses and concluded that a value of 1.4 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ was more appropriate for these galaxies. Downes and Solomon (1998) studied several ultraluminous infrared galaxies in detail in CO 2-1 and 1-0 with the IRAM interferometer, deriving an even lower value of 0.8 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ on the basis of radiative transfer models for the CO lines. However their gas masses are on average only 1/6th of the (revised) inferred dynamical masses. In their detailed model for Arp 220, Scoville et al (1997) found a conversion factor 0.45 times the Galactic value, ie 2.15 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$. Combining this with an estimated ratio for T(3-2)/T(1-0) of 0.6, Frayer et al (1999a) justify a value of 4 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ for gas mass estimates of hyperluminous galaxies derived from CO 3-2 observations.
In Tables 1-4 and Fig 20, I have followed Frayer et al (1999a) in using a conversion factor of 4 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ for hyperluminous galaxies. For other galaxies in Fig 20 with luminosities $`>10^{11.5}L_{}`$, I have used a conversion factor of 2 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ (in line with Scoville et al 1997, but a factor of 2 or so higher than advocated by Downes and Solomon 1998); for galaxies with luminosities $`<10^{11.5}L_{}`$, I have used the standard Galactic value, 4.78 $`M_{}(\mathrm{K}\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$.
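The bookkeeping described in this paragraph amounts to a luminosity-dependent choice of conversion factor; a sketch of that rule (values in $`M_{}`$ (K km s<sup>-1</sup> pc<sup>2</sup>)<sup>-1</sup>, exactly as adopted above) is:

```python
def alpha_CO(L_ir):
    """CO-to-gas conversion factor, following the choices described in the text."""
    if L_ir > 1e13:            # hyperluminous galaxies (CO 3-2 based estimates)
        return 4.0
    elif L_ir > 10**11.5:      # ultraluminous / very luminous infrared galaxies
        return 2.0
    return 4.78                # standard Galactic value for less luminous galaxies

def gas_mass_from_CO(L_CO_prime, L_ir):
    """M_gas = alpha_CO * L'_CO, with L'_CO in K km s^-1 pc^2."""
    return alpha_CO(L_ir) * L_CO_prime

print(gas_mass_from_CO(3e10, 10**13.3))   # -> 1.2e11 M_sun for a hyperluminous source
```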
The range of ratios of $`L_{sb}/M_{gas}`$ for hyperluminous galaxies is consistent with that derived for ultraluminous starbursts. For cases where we have estimates of gas mass both from CO lines and from dust emission, the agreement is remarkably good (within a factor of 2). There is a tendency for the time-scale for gas-consumption, assuming a star formation rate given by eqn (1), to be shorter for the more luminous objects, in the range $`10^710^8`$ yrs (alternatively this could indicate a higher value for the low-mass cutoff in the IMF). The cases where a strong limit can be set on $`M_{gas}`$ are also, generally, those where the seds do not support the presence of a starburst component. After correction for the effects of gravitational lensing, gas masses ranging up to 1-3 x $`10^{11}M_{}`$ are seen in most hyperluminous galaxies, comparable with the total stellar mass of an $`L_{}`$ galaxy ($`10^{11.2}(M/4L)h_{50}^2`$). In fact 24/39 hyperluminous galaxies in Tables 1-3 have gas masses estimated either from CO or from dust emission $`>10^{11}M_{}`$ (after correction for effects of lensing, where known). Hughes et al (1997) argue that a star-forming galaxy can not be considered primeval unless it contains a total gas mass of $`10^{12}M_{}`$, but this seems to neglect the fact that 90 $`\%`$ of the mass of galaxies resides in the dark (probably non-baryonic) halo.
## 7 Conclusions
(1) About 50 $`\%`$ of hyperluminous infrared galaxies selected in unbiassed infrared surveys have AGN optical spectra. This is not in fact higher than the proportion seen in ultraluminous ir galaxies by Veilleux et al (1999). For about half of the galaxies in this sample, the AGN dust torus is the dominant contribution to the total ir (1-1000 $`\mu `$m) bolometric luminosity, while in half of cases a starburst seems to be the dominant contributor.
(2) There is a need for both an AGN dust torus and starburst components to understand most seds of hyperluminous ir galaxies (29/39). Measured gas masses support, in most cases, the starburst interpretation of rest-frame far-infrared and submm ($`\lambda _{em}\gtrsim 50\mu `$m) emission.
(3) There is a broad correlation between the luminosities of starburst and AGN dust torus components (Fig 19). This may imply that there is a physical link between the triggering of star formation and the feeding of a massive black hole. Taniguchi et al (1999) have argued that during a merger giving rise to a luminous starburst, a pre-existing black hole of $`10^7M_{}`$ may grow into a large one $`>10^8M_{}`$ and hence form a quasar. Alternatively they suggest that a large black hole might be formed out of star clusters with compact remnants.
(4) There is no evidence in most objects that an AGN powers a significant fraction of radiation at rest-frame wavelengths $`\gtrsim `$ 50 $`\mu `$m. For P09104 and PG1634, the non-detection of CO emission is consistent with the absence of evidence in the sed for a starburst component. In F08279, the mass of CO detected suggests a limit on the starburst luminosity which implies that the observed submm radiation may simply be the long-wavelength tail of its AGN dust torus emission. F15307 poses a problem: the sed can be understood as radiation from both an AGN dust torus and an Arp-220 like starburst, but the upper limit on the molecular mass from the non-detection of CO would then imply a very extreme ir-luminosity to gas-mass ratio.
(5) After correction for the effects of lensing, star-formation rates in excess of 2000 $`M_{}/yr`$ are inferred in many of these galaxies (for a Salpeter IMF). This would be sufficient to exhaust the observed reservoir of gas in $`10^8`$ yrs. These galaxies are undergoing extremely major episodes of star formation, but we can not yet establish whether this is their first major burst of star formation.
(6) Further submm continuum and molecular line observations can provide a strong test of the models for the seds proposed here.
The Variability of Seyfert 1.8 and 1.9 Galaxies at 1.6 microns
## 1. Introduction
The discovery that Seyfert 2 galaxies such as NGC 1068 can have reflected or polarized broad line emission has led to an approach coined ‘unification’ towards interpreting the differences between active galactic nuclei (AGNs) in terms of orientation angle (Antonucci (1993)). The dusty torus of this unification paradigm absorbs a significant fraction of the optical/UV/X-ray luminosity of an active galaxy and consequently reradiates this energy at infrared wavelengths. As a result of this extinction it is difficult to observe continuum radiation from Seyfert 2 galaxies at optical and UV wavelengths (e.g., Mulchaey et al. (1994)). An additional complication is that in a given aperture it may be difficult to identify the percentage of flux from a non-stellar nuclear source (e.g., Alonso-Herrero et al. (1996)). For example in Seyfert 2 galaxies much of the nuclear emission may originate from nuclear star formation (e.g., Maiolino et al. (1997); Gonzalez-Delgado & Perez (1993)).
The high sensitivity and resolution of near infrared imaging with the Hubble Space Telescope (HST) using NICMOS (the Near Infrared Camera and Multi-Object Spectrograph) allows us to probe galactic centers at wavelengths which experience reduced extinction compared to the optical, and with a beam area about 30 times smaller than is typically achieved with ground-based observations at these wavelengths. This enables us to separate the nuclear emission from that of the surrounding galaxy with unprecedented accuracy. Though a previous survey using WFPC2 at $`0.6\mathrm{\mu m}`$ did not detect unresolved nuclear continuum emission from Seyfert 2 galaxies (Malkan et al. (1998)), about 60% of the RSA and CfA samples (described below) of Seyfert 1.8-2.0 galaxies display prominent unresolved nuclear sources with diffraction rings in NICMOS images at $`1.6\mathrm{\mu m}`$ (McDonald et al. (2000)). Though we suspect that these unresolved continuum sources are most likely associated directly with an AGN, they could also be from unresolved star clusters, which are found in a number of normal galaxies (Carollo et al. (1997)).
Variability observed in the continuum (e.g., Fitch, Pacholczyk, & Weymann (1967)) is an intrinsic property of active galactic nuclei (AGNs) which demonstrates that the energy causing the emission must arise from a very small volume. This led early studies to suggest that accretion onto a massive black hole is responsible for the luminosity (Salpeter (1964); Zeldovich & Novikov (1964)). Long term multi-year monitoring programs have found that Seyfert 1 galaxies are variable in the near-infrared (Clavel, Wamsteker & Glass (1989); Lebofsky & Rieke (1980)), however these programs have only seen a few Seyfert 2 nuclei vary (e.g., Glass (1997); Lebofsky & Rieke (1980)). Evidence for variability in the unresolved sources seen in HST observations of Seyfert 2 galaxies would provide evidence that this nuclear emission is non-stellar and so arises from the vicinity of a massive black hole.
## 2. Observations
In this paper we present a study of variability in Seyfert galaxies. We have searched the HST archive for galaxies (Seyfert and normal) which were imaged twice by HST at $`1.6\mathrm{\mu m}`$ in the F160W filter with the NICMOS cameras. The Seyfert galaxies with unplanned duplicate observations either satisfy the Revised Shapley-Ames Catalog criterion (described by Maiolino & Rieke 1995) or are part of the CfA redshift survey (Huchra & Burg 1992). The Seyfert observations are discussed in Regan & Mulchaey (1999) and Martini & Pogge (1999) and the observations of the normal or non-Seyfert galaxies are described by Seigar et al. (2000) and Böker et al. (1999). The observations are listed in Table 1 and are grouped by the NICMOS cameras in which they were observed. Images were reduced with the nicred data reduction software (McLeod (1997)) using on orbit darks and flats. Each set of images in the F160W filter was then combined according to the position observed. The pixel sizes for the NICMOS cameras are $`0^{\prime \prime }.043`$, $`0^{\prime \prime }.076`$ and $`0^{\prime \prime }.204`$ for Cameras 1, 2 and 3 respectively.
At the center of these galaxies we expect contribution from an underlying stellar component in addition to that from an unresolved non-stellar component. To measure the flux from the unresolved component we must subtract a resolved stellar component. However this procedure is dependent upon assumptions made about the point spread function, the form of the stellar surface brightness profile fit to the image and the region over which we fit this profile. This procedure adds uncertainty in the measurement of the unresolved component. However, aperture photometry has proved quite robust with observations of flux calibration standard stars showing variation less than 1% over the lifetime of NICMOS (M. Rieke, private communication 1999). We therefore opt to use aperture photometry to measure flux variations, and then subsequently correct for contamination of the aperture by the background galaxy.
From each pair of images we measure fluxes in apertures of the same angular size. No background was subtracted since the level of background expected at $`1.6\mathrm{\mu m}`$ is negligible compared to the galaxy surface brightnesses. Apertures are listed in Table 1 and were chosen so that more than 75% of the flux of an unresolved source would be contained in the aperture. We chose apertures based on which two cameras were used to observe the object. We list in Table 1 the difference divided by the mean of the two flux measurements for each pair of images.
To determine whether the nuclear sources are variable we need to quantify the level of intrinsic scatter in our flux measurements. As a control sample we use the galaxies not identified as Seyfert galaxies and those containing Seyfert nuclei but lacking an unresolved nuclear component. Comparing Camera 2 and Camera 3 measurements for this control sample we find a mean difference of $`\mu =0.9\pm 0.7\%`$ with a variance of $`\sigma =2.0\%`$ in the measurements. Comparing measurements with two observations in Camera 2 for this control sample we find a mean difference of $`\mu =0.6\pm 0.6\%`$ with a variance of $`\sigma =1.4\%`$. Unfortunately our control sample only contains 2 galaxies with observations in Camera 1 and Camera 2 (MRK 266 and NGC 5929). To supplement this we also measured stars observed in both Camera 1 and 2 in the vicinity of the Galactic Center. Differences in fluxes measured in these 3 image pairs were less than 3%. The statistics of our control sample suggest that the intrinsic scatter of our measurements is smaller than a difference of 3% for all pairs of images. We therefore estimate that flux differences greater than 6% are statistically significant (at $`2\sigma `$ level) and likely to be caused by variability and not by scatter in the measurements. The galaxies in which we measure differences larger than this level are listed in Table 2.
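The test applied here reduces to comparing the fractional flux difference of each image pair against the control-sample scatter; a minimal sketch of that comparison (the 3% scatter and the 2-sigma, i.e. 6%, threshold are the values adopted above, and the example fluxes are hypothetical):

```python
def fractional_difference(f1, f2):
    """Difference divided by the mean of the two epoch fluxes, as listed in Table 1."""
    return abs(f1 - f2) / (0.5 * (f1 + f2))

def is_variable(f1, f2, scatter=0.03, n_sigma=2.0):
    """Flag an image pair as variable if its fractional difference exceeds
    n_sigma times the intrinsic measurement scatter."""
    return fractional_difference(f1, f2) > n_sigma * scatter

# two hypothetical epoch fluxes in the same aperture (arbitrary units)
print(fractional_difference(1.00, 0.88), is_variable(1.00, 0.88))   # ~0.13, True
```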
We did not find that the unresolved nuclear sources in NGC 404 or NGC 2903 were variable. As demonstrated with UV spectra by Maoz et al. (1998), it is possible that the unresolved component in NGC 404 is from a young star cluster. The same is probably true in NGC 2903 which also contains a compact nuclear source and has a nuclear HII region type spectrum. The scatter in our aperture measurements does not appear to be dependent on the surface brightness profile of the galaxy. No large differences were measured between image pairs for galaxies lacking an unresolved nuclear source.
To estimate the level of variability in the unresolved component we must measure the contribution within the aperture of this component. For each camera we measured a point spread function from stars in the images. We then fit the sum of an exponential bulge profile and the point spread function to the surface brightness profile. The error in this procedure we estimated from the scatter in the residuals and was about $`\pm 15\%`$ of the total unresolved flux measured. We used the flux from the unresolved component and the shape of the point spread function to estimate the contribution to the flux measured in the apertures listed in Table 1. The differences in the aperture flux measurements are lower limits for differences in the fluxes of the unresolved components (in the limit that the galaxy contributes no flux in these apertures). The mean unresolved fluxes from the two epochs and extent of variability of the unresolved components (the difference divided by the mean) are listed in Table 2.
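A sketch of the decomposition described in this paragraph: the radial surface-brightness profile is modelled as an exponential stellar component plus a scaled point-spread function, and the unresolved flux follows from the fitted PSF amplitude. The Gaussian stand-in for the PSF and all numbers below are illustrative assumptions, not the measured NICMOS PSF.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(r, I_bulge, r_scale, f_point, psf):
    """Exponential (bulge-like) profile plus a scaled PSF, evaluated at radii r (arcsec)."""
    return I_bulge * np.exp(-r / r_scale) + f_point * psf(r)

# stand-in PSF; in practice this is measured from stars in the same images
psf = lambda r: np.exp(-0.5 * (r / 0.08) ** 2)

r = np.linspace(0.05, 2.0, 40)
data = profile(r, 100.0, 0.5, 30.0, psf) + np.random.normal(0.0, 1.0, r.size)

popt, _ = curve_fit(lambda r, a, b, c: profile(r, a, b, c, psf), r, data, p0=(50.0, 1.0, 10.0))
print("fitted point-source amplitude:", popt[2])
```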
## 3. Discussion
In 9 out of 14 Seyfert 1.5-2.0 galaxies with unresolved components we find a variation greater than 10% in the flux of their unresolved continuum nuclear sources in 2 epochs of observations at $`1.6\mathrm{\mu m}`$. A control sample of Seyfert galaxies lacking unresolved sources and galaxies lacking Seyfert nuclei show less than 3% instrumental variation in equivalent aperture measurements. This suggests that the variability detected is statistically significant at the level of $`2\sigma `$. Since we see variations between 0.7-14 month timescales the unresolved sources are probably non-stellar and associated with the central pc of active galactic nuclei. The luminous Seyfert 1 galaxy in our sample, NGC 4151, shows a variation of 10% in its nuclear flux, similar to that seen in the other Seyfert galaxies.
From Table 2 we see that most of the variable sources are Seyfert 1.8 or 1.9 galaxies. NGC 1275, NGC 5033, and NGC 5273 are usually classified as Seyfert 1.9 galaxies though Ho, Filippenko & Sargent (1995) classify them as S1.5. There are two Seyfert 2 galaxies exhibiting variability: MRK 533 and NGC 5347. In MRK 533 a broad component in Pa$`\alpha `$ was detected by Ruiz, Rieke, & Schmidt (1994) and so this galaxy could be classified as a Seyfert 1.9. Seyfert 1.8 and 1.9 galaxies are more likely to display unresolved nuclear sources than Seyfert 2.0 galaxies (McDonald et al. (2000)). In the context of the unification model, reduced extinction towards the continuum emitting region at $`1.6\mathrm{\mu m}`$ would be expected in Seyfert galaxies which display faint broad line emission. However, this might also suggest that the sizes of the Broad Line Region and 1.6 micron continuum emission region are small compared to the material responsible for the bulk of the extinction.
Two major sources for AGN continuum variability are generally considered: 1) instabilities in an accretion disk (e.g., Shakura & Sunyaev (1973)) and 2) jet related processes (e.g., as discussed by Tsvetanov et al. 1998). The second case could be a possible explanation for variability in NGC 1275 since it is bright at radio wavelengths and is significantly polarized at optical wavelengths (as discussed by Angel & Stockman (1980)). However, the luminosity of the compact nucleus of this galaxy at 1.3 GHz is about 20 times lower than that we measure at 1.6 microns (using the flux from Taylor & Vermeulen). So the $`1.6\mathrm{\mu m}`$ flux is higher than what would be expected from synchrotron emission and could be from an additional thermal component (e.g., from hot dust). Better measurements showing the shape of the spectral energy distribution spanning the optical and near-infrared region (to see if two components are present) or a polarization measurement at $`1.6\mathrm{\mu m}`$ would help differentiate between a thermal or non-thermal origin for the near-infrared emission.
For the remainder of the Seyferts, their low radio power implies that jet related processes are not responsible for the variability. From observations of the Seyfert 1 galaxy Fairall 9, Clavel et al. (1989) observed large, 400 day, time delays between variations seen at 2 and $`3\mathrm{\mu m}`$ and those seen in the UV. Little or no time delay was seen at $`1.2\mathrm{\mu m}`$. This led them to suggest that the longer wavelength emission was associated with hot dust located outside the Broad Line Region (e.g., Lebofsky & Rieke (1980); Barvainis (1987); Netzer & Laor (1993)) and that the shorter wavelength emission was reprocessed near the UV emitting region.
For hot dust to cause the $`1.6\mathrm{\mu m}`$ emission, dust grain temperatures resulting from absorption of UV radiation must be quite high, nearly that expected for sublimation ($`T2000`$K). The grain temperature should reach this level at a radius $`r0.06\mathrm{pc}\left(\frac{L}{10^{44}\mathrm{erg}/\mathrm{s}}\right)^{1/2}`$ (following the estimate given in Barvainis (1987)). This radius would have a characteristic variability timescale of $`70`$ days or 2 months for a source of $`10^{44}`$ ergs/s. We can crudely estimate the bolometric luminosity of our sources from that at $`1.6\mathrm{\mu m}`$ (which are listed in Table 2) by assuming a ratio of $`10`$ between the $`1.6\mathrm{\mu m}`$ and mid-IR luminosity (e.g, Fadda et al. (1998) for the Seyfert 2s) and a ratio of $`10`$ between the mid-IR and bolometric luminosity (e.g., Spinoglio et al. (1995)). The timescales over which we see variations for the brighter sources such as NGC 1275, MRK 533 and UGC 12138 ($`L10^{44}`$ ergs/s) are consistent with the 2 month minimum estimated for emission from hot dust. The least luminous of our sources, NGC 4395 ($`L10^{41}`$ ergs/s), could have a variability timescale of only a few days for hot dust emitting at $`1.6\mathrm{\mu m}`$, again consistent with the timescale (a few weeks) over which we see a variation.
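The numbers in this paragraph follow from the scaling $`r0.06\mathrm{pc}(L/10^{44}\mathrm{erg}/\mathrm{s})^{1/2}`$ together with a light-crossing time; a short check of the arithmetic (the parsec-to-light-day conversion is an input of the sketch):

```python
# Hot-dust radius near sublimation and its light-crossing (variability) timescale.
PC_IN_LIGHT_DAYS = 1191.0   # one parsec expressed in light days (approximate)

def r_sub_pc(L_erg_s):
    return 0.06 * (L_erg_s / 1e44) ** 0.5

for L in (1e44, 1e41):
    r = r_sub_pc(L)
    print(f"L = {L:.0e} erg/s: r ~ {r:.3f} pc, t ~ {r * PC_IN_LIGHT_DAYS:.0f} days")
# ~70 days for 1e44 erg/s and ~2 days for 1e41 erg/s, as quoted above
```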
Emission from hot dust may not necessarily dominate at $`1.6\mathrm{\mu m}`$ since the emitting material would require a temperature near the sublimation point of graphites and silicates (Netzer & Laor (1993)). However transient super heating at larger radii could still cause emission from hot dust at this wavelength. While the timescales over which we see variability are comparable to those expected from hot dust near a sublimation radius, a long term study comparing flux variations between the near-infrared and X-ray emission would be needed to determine the exact nature of the $`1.6\mathrm{\mu m}`$ emission. This kind of study would also place strong constraints on disk and torus models for the infrared emission (e.g., Efstathiou & Rowan-Robinson (1995); Fadda et al. (1998)).
Most of the unresolved nuclear sources studied here exhibit variability. This suggests that most of the many unresolved continuum sources recently discovered in near-infrared surveys (McDonald et al. (2000); Alonso-Herrero et al. (1996)) (and not seen in previous optical surveys) are non-stellar and associated with the central pc of an AGN. The near-infrared continuum in low luminosity AGNs can now be studied in a set of objects comprising a larger range of luminosity and orientations. This should provide tests of the unification model for Seyfert 1 and 2 galaxies as well as the nature of accretion in these lower luminosity sources.
We thank the referee, Ski Antonucci, for comments which have improved this paper. We thank Brad Peterson and Chien Peng for helpful discussions on this work. Support for this work was provided by NASA through grant number GO-07869.01-96A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. We also acknowledge support from NASA project NAG-53359 and NAG-53042 and from JPL Contract No. 961633.
Radial fluctuations induced stabilization of the ordered state in two dimensional classical clusters.
## Abstract
Melting of two dimensional (2D) clusters of classical particles is studied using Brownian dynamics and Langevin molecular dynamics simulations. The particles are confined by a circular hard wall or a parabolic external potential and interact through a dipole or a screened Coulomb potential. We found that, with decreasing strength of the inter–particle interaction, clusters with short-range inter-particle interaction which are confined by a hard wall exhibit a re-entrant behavior in their orientational order.
The structural and dynamical properties of small classical two-dimensional (2D) clusters have been the subject of recent experimental studies and Monte-Carlo and molecular dynamics simulations. It was found earlier that the particles are arranged in shells and that melting of finite clusters is a two step process. With increasing temperature inter-shell motion develops and the system loses angular order. Subsequently, radial diffusion switches on and destroys the shell structure of the cluster. The spectrum of such clusters was obtained in earlier work for different numbers of particles. The derived minimal frequencies and the corresponding energy barriers showed that ‘non-close packed’ clusters are unstable against inter-shell rotation. High symmetry clusters (i.e. the so-called magic number clusters) have energy barriers for intershell motion which are several orders of magnitude larger than those for ‘non-close packed’ clusters.
Recently, Bubeck et al observed re-entrant melting in two dimensional (2D) colloidal clusters. The clusters consist of paramagnetic colloidal spheres which were confined in a circular hard wall vessel. The external magnetic field induces a magnetic moment $`\stackrel{}{M}`$ in the particles and they interact through a dipole potential $`V(\stackrel{}{r}_i,\stackrel{}{r}_j)=\mu _0M^2/4\pi r_{ij}^3`$, where $`\mu _0`$ is the magnetic permeability, and $`r_{ij}`$ the interparticle distance. The coupling parameter, which is the inter-particle interaction energy measured in units of the particle kinetic energy $`\mathrm{\Gamma }=V/k_BT`$, characterizes the order of the system. It decreases by lowering the external magnetic field. In that experiment it was found that with decreasing $`\mathrm{\Gamma }`$, first inter-shell rotation appears which destroys the angular order of the cluster. Further decreasing the parameter $`\mathrm{\Gamma }`$ the system unexpectedly regained angular order within a narrow range of $`\mathrm{\Gamma }`$ and then melts when $`\mathrm{\Gamma }`$ is further decreased. It was suggested that the observed re-entrant melting behavior is due to the increasing role of the radial particle fluctuations, which is very similar to an earlier investigation of laser induced melting of 2D colloidal crystals.
Earlier theoretical work on parabolic confined clusters did not find such a re-entrant behavior which suggests that the shape of the confinement potential may be very important. Another difference is that in the experimental system the particle motion is strongly damped because the colloidal particles move in water. In the present paper we investigate the mechanism for the re-entrant behavior and address the specific role played by the type of confinement (i.e. hard wall versus parabolic) and of the functional form of the inter-particle interaction (short range versus long range) on the melting of the clusters.
In our model the particles are confined by a circular hard wall potential ($`V_p=0`$ for $`r\le R`$ and $`V_p=\mathrm{\infty }`$ at $`r>R`$) or by a parabolic potential $`V_p=\alpha r^2`$. The particles interact through a dipole potential $`V(\stackrel{}{r}_i,\stackrel{}{r}_j)=q^2/|\stackrel{}{r}_i-\stackrel{}{r}_j|^3`$, where $`q^2=\mu _0M^2/4\pi `$, or through a screened Coulomb potential $`V(\stackrel{}{r}_i,\stackrel{}{r}_j)=(q^2/|\stackrel{}{r}_i-\stackrel{}{r}_j|)\mathrm{exp}(-\kappa |\stackrel{}{r}_i-\stackrel{}{r}_j|)`$, with $`q`$ the ‘particle charge’, $`\stackrel{}{r}_i`$ the coordinate of the $`i^{th}`$ particle, and $`1/\kappa `$ the screening length; $`\kappa =0`$ corresponds to a Coulomb cluster and we took $`\kappa =2/a_0`$ for the screened Coulomb cluster, where $`a_0`$ is the mean inter-particle distance. For a given type of inter-particle interaction and external confinement, only two parameters characterize the order of the system: the number of particles $`N`$ and the coupling parameter $`\mathrm{\Gamma }`$. We define the characteristic energy of inter–particle interaction for dipole clusters as $`E_0=q^2/a_0^3`$ and $`E_0=q^2/a_0`$ for screened Coulomb clusters, where $`a_0=2R/N^{1/2}`$ for the hard wall and $`a_0=q^{2/5}\alpha ^{-1/5}`$ for parabolic confinement. In the present calculation we define the coupling parameter as $`\mathrm{\Gamma }=q^2/a_0^3k_BT`$ for dipole clusters and $`\mathrm{\Gamma }=(q^2/a_0k_BT)\mathrm{exp}(-\kappa a_0)`$ for screened Coulomb clusters. In earlier work a different dimensionless parameter $`\mathrm{\Gamma }`$ was introduced, where $`V`$ was taken to be the sum over all pairs of particles; our coupling parameter $`\mathrm{\Gamma }`$ is a factor $`2.2447`$ smaller than that definition gives for $`N=29`$.
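For concreteness, the coupling parameters defined above translate into code as follows (a sketch only; $`k_BT`$ must be supplied in the same units as $`q^2/a_0^3`$ or $`q^2/a_0`$, and the example values are illustrative):

```python
import math

def a0_hard_wall(R, N):
    """Mean inter-particle distance for hard-wall confinement: a0 = 2R / sqrt(N)."""
    return 2.0 * R / math.sqrt(N)

def gamma_dipole(q2, a0, kT):
    """Coupling parameter for dipole clusters: q^2 / (a0^3 kT)."""
    return q2 / (a0**3 * kT)

def gamma_screened(q2, a0, kT, kappa):
    """Coupling parameter for screened Coulomb clusters: (q^2 / (a0 kT)) exp(-kappa a0)."""
    return (q2 / (a0 * kT)) * math.exp(-kappa * a0)

print(a0_hard_wall(36.0, 29))   # mean spacing in micron for N = 29 in the R = 36 micron vessel
```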
The ratio of the particle velocity relaxation time to the particle position relaxation time is very small due to the viscosity of water and therefore the motion of the particles is diffusive. In our simulations we will neglect hydrodynamic interactions. Following earlier work, we rewrite the stochastic Langevin equations of motion for the position of the particles as the equations of motion for Brownian particles
$$\frac{d\stackrel{}{r}_i}{dt}=-\frac{k_BT}{D_0m}\left(\sum _{j=1}^{N}\frac{dV(\stackrel{}{r}_i,\stackrel{}{r}_j)}{d\stackrel{}{r}}+\frac{dV_p(\stackrel{}{r}_i)}{d\stackrel{}{r}}\right)+\frac{\stackrel{}{F}_L}{m},$$
(1)
where $`D_0`$ is the self-diffusion coefficient, $`m`$ the particle mass, and $`\stackrel{}{F}_L`$ the randomly fluctuating force acting on the particles due to the surrounding medium. In the numerical solution of Eq. (1) we took a time step $`\mathrm{\Delta }t\simeq 10^{-4}/(nD_0)`$, where $`n=N/(\pi R^2)`$ is the particle density and $`\mathrm{\Delta }t`$ was varied within a range $`(0.02÷0.05)sec`$. The radius of the circular vessel $`R=36\mu m`$ and the self-diffusion coefficient $`D_0=0.35\mu m^2/s`$ are taken from the experiment. Following the experiment, we consider dipole clusters consisting of $`N=29,30,34`$ particles which have different types of packing. In the ordered state the systems of $`N=29,30`$ particles are arranged in a triangular ‘close packed’ structure having, respectively, the shell structure (3:9:17) and (3:9:18) and ground-state energy $`E=2.2447E_0`$ and $`E=2.2798E_0`$. The cluster with $`N=34`$ particles (4:11:19), with $`E=2.4198E_0`$, has a non-close packed structure.
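As an illustration of how trajectories obeying Eq. (1) are generated, the fragment below advances N dipole particles in a hard-wall disk with a forward-Euler Brownian-dynamics step. It uses the standard overdamped form, with drift $`(D_0/k_BT)`$ times the force and Gaussian displacements of variance $`2D_0\mathrm{\Delta }t`$ per coordinate; it is a minimal sketch under those assumptions, with illustrative parameter values, and is not the code used for the results quoted here.

```python
import numpy as np

def bd_step(pos, q2, R, D0, kT, dt, rng):
    """One Brownian-dynamics step for dipole particles (V = q^2/r^3) in a hard-wall disk of radius R."""
    d = pos[:, None, :] - pos[None, :, :]                   # pairwise separation vectors r_i - r_j
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                             # no self-interaction
    force = (3.0 * q2 * d / r[..., None]**5).sum(axis=1)    # F_i = sum_j 3 q^2 (r_i - r_j)/r^5
    pos = pos + (D0 / kT) * force * dt + rng.normal(0.0, np.sqrt(2.0 * D0 * dt), pos.shape)
    rad = np.linalg.norm(pos, axis=1)                       # hard wall: project escapers back to r = R
    outside = rad > R
    pos[outside] *= (R / rad[outside])[:, None]
    return pos

rng = np.random.default_rng(0)
pos = rng.uniform(-10.0, 10.0, (29, 2))                     # rough initial positions inside the vessel
for _ in range(1000):
    pos = bd_step(pos, q2=100.0, R=36.0, D0=0.35, kT=1.0, dt=0.02, rng=rng)
```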
We first find the ground-state configuration using the Monte-Carlo (MC) technique. Our results for the minimal energy configuration coincide exactly with the energies found in earlier work for Coulomb clusters and for dipole clusters with parabolic confinement. In the experiment the system was first equilibrated and after that the particle trajectories were recorded during $`30`$ minutes. In the simulation we equilibrated the system for about $`(5\times 10^5÷10^6)`$ MC steps after which we started with the statistical averaging over time of the different observables. To obtain reliable results with small statistical error we follow the particle trajectories typically during $`10^7`$ time steps.
We calculated the time dependence of the mean angular displacement of the particles in a specific shell $`\theta (t)=\sum _{i=1}^{N}(\theta _i(t)-\theta _i(0))/N`$, where $`\theta _i(t)`$ is the angular position of the $`i^{th}`$ particle and $`N`$ is the number of particles in the shell. In Fig. 1 the angular displacements of the particles in the first shell (the innermost) and the second shell are given as function of time for the cluster of $`N=29`$ particles. The angular motion in the third shell is very small because its motion is hindered by the hard wall. Notice that both the first (thick curve) and the second (thin curve) shells take part in the angular motion, but the former rotation is more prominent. The inter-shell motion has no preferential direction and with time it can be either a clockwise or a counter-clockwise rotation. With decreasing coupling parameter $`\mathrm{\Gamma }`$ inter-shell rotation becomes more pronounced.
In order to characterize the order of the system we calculate the angular diffusion of the particles over a $`30min\times 1000`$ time interval. The angular diffusion coefficient can be written as
$$D_\theta =(<\mathrm{\Delta }\theta (t)^2>-<\mathrm{\Delta }\theta (t)>^2)/t,$$
(2)
where $`<>`$ refers to a time averaging, and the mean relative angular displacement of the first shell ($`\theta ^1(t)`$) with respect to the second ($`\theta ^2(t)`$) is defined as $`\mathrm{\Delta }\theta (t)=\theta ^1(t)-\theta ^2(t).`$ The radial diffusion coefficient is
$$\mathrm{\Delta }R^2=\frac{1}{N}\sum _{i=1}^{N}(<r_i(t)^2>-<r_i(t)>^2)/a_0^2,$$
(3)
which is a measure of the radial order in the system.
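Given stored trajectories, Eqs. (2) and (3) are simple time averages; a minimal sketch (theta1 and theta2 are the mean shell angles per saved step, r has one row per step and one column per particle):

```python
import numpy as np

def angular_diffusion(theta1, theta2, t_total):
    """Eq. (2): variance of the relative shell rotation theta1 - theta2 divided by the elapsed time."""
    d = np.asarray(theta1) - np.asarray(theta2)
    return (np.mean(d**2) - np.mean(d)**2) / t_total

def radial_fluctuation(r, a0):
    """Eq. (3): particle-averaged variance of the radial positions, in units of a0^2."""
    r = np.asarray(r)
    return np.mean(np.mean(r**2, axis=0) - np.mean(r, axis=0)**2) / a0**2
```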
In Fig. 2 the angular and radial diffusion coefficients are shown for three different dipole clusters with hard wall confinement subjected to the same conditions as in the experiment . For the cluster with $`N=29`$ particles the angular diffusion (solid dots in Fig. 2(a)) monotonically increases with decreasing coupling parameter up to $`\mathrm{\Gamma }30`$ which is a manifestation of angular melting. In the interval $`\mathrm{\Gamma }=10÷30`$ the inter-shell diffusion remains practically constant, and with further decreasing $`\mathrm{\Gamma }`$ it is reduced to about a $`20\%`$ smaller value. In the latter region the radial diffusion coefficient starts to rise (open dots in Fig. 2(a)), but the cluster retains its shell structure. In the range $`3<\mathrm{\Gamma }<8`$ the cluster oscillates between the ground state (3:9:17) and the metastable state (4:8:17) which leads to a reduction of the angular fluctuations. Further decreasing the coupling to $`\mathrm{\Gamma }5`$ both $`D_\theta `$ and $`\mathrm{\Delta }R^2`$ rises quickly, indicating the onset of melting. A similar qualitative behavior was observed for the dipole cluster with $`N=30`$ particles (see Fig. 2(b)). In the non-close-packed cluster with $`N=34`$ intershell rotation occurs over all $`\mathrm{\Gamma }`$’s considered in the experiment (see Fig. 2(c)) and no clear regaining of angular order is found in the region $`3<\mathrm{\Gamma }<8`$.
In order to obtain further insight into this re-entrant behavior we investigated the conditions under which this novel effect can be observed. Therefore, we varied the following parameters: 1) the viscosity of the medium the particles are moving in; 2) the way of decreasing the coupling of the system (i.e. temperature change versus inter-particle interaction strength change); 3) the range of the inter-particle interaction; and 4) the form of the confinement potential.
To study the melting behavior of the cluster under condition of low viscosity with hard wall confinement we performed Langevin molecular dynamics (MD) simulations. The Langevin equations for the system of $`N`$ interacting particles are
$$\frac{d^2\stackrel{}{r}_i}{dt^2}=-\nu \frac{d\stackrel{}{r}_i}{dt}+\frac{\stackrel{}{F}_i}{m}+\frac{\stackrel{}{F}_L}{m},$$
(4)
where $`\nu =k_BT/D_0m`$ is the friction and $`\stackrel{}{F}_i`$ the force from the inter-particle interaction. As an example, we consider $`N=30`$ dipole particles moving in a medium with a viscosity which is $`10^4`$ times smaller than the one of water. Such a low viscosity corresponds to the situation of colloidal particles moving in a gas with a pressure of 1 Pa. In Fig. 3 the angular diffusion coefficient (solid circles) is plotted as function of $`\mathrm{\Gamma }`$. Note that now $`D_\theta `$ is about a factor $`10^4`$ larger compared to the previous case (see Fig. 2(b)). It is clear that changing viscosity does not destroy the re-entrant like behavior. In fact changing viscosity will not alter the statistical properties of the system but will only change the time scale for relaxation to equilibrium.
The coupling ($`\mathrm{\Gamma }`$) of the system can be varied in two different ways. One can decrease the inter-particle interaction as was done in the experiment or increase the temperature of the system in order to induce melting. The latter approach was used for the $`N=30`$ cluster and the angular diffusion constant is shown in Fig. 3 (open circles). The inter-particle interaction was chosen equal to the previous one at $`\mathrm{\Gamma }=300`$ for $`T=300K`$. By heating the system no re-entrant melting behavior is observed and $`D_\theta `$ increases monotonically with increasing temperature ($`T\propto 1/\mathrm{\Gamma }`$). Thus re-entrant behavior is only found when the coupling of the system is decreased by changing the inter–particle interaction strength at fixed T. The reason for this unexpected behavior is that by increasing temperature the self-diffusion constant, $`D_0=k_BT/\nu m`$, increases which, because $`D_\theta \propto D_0`$, implies that the solid dot results in Fig. 3 have to be multiplied with $`T\propto 1/\mathrm{\Gamma }`$, and the resulting curve (open dots) does not exhibit a local minimum. The numerical results for the dimensionless variables (e.g. $`D_\theta /D_0`$) depend only on $`\mathrm{\Gamma }`$ (and N, confinement and inter-particle interaction) but when converting them into physical variables $`T,\nu ,m`$ enter.
Next we investigated whether the type of inter-particle interaction influences the occurrence of re-entrant behavior. We consider a cluster with long range Coulomb interaction ($`N=37`$ (3:9:25) having the same internal structure as the $`N=30`$ dipole cluster) with hard wall confinement using Brownian dynamics (1). In Fig. 4(a) the angular and radial diffusion coefficients are shown as function of $`\mathrm{\Gamma }`$. Notice that the Coulomb cluster shows completely different melting behavior and $`D_\theta `$ increases monotonically with decreasing $`\mathrm{\Gamma }`$. Melting takes place at $`\mathrm{\Gamma }\approx 40`$. Next we consider short range inter-particle interaction and as an example we took the screened Coulomb cluster ($`\kappa =2/a_0`$) with $`N=30`$ (3:9:18) particles. In Fig. 4(b) the angular and radial diffusion are shown as function of the coupling. $`D_\theta `$ exhibits a clear re-entrant behavior which correlates with an increase of the radial diffusion. Thus we may conclude that only clusters with short range inter-particle interaction in a hard wall vessel show re-entrant behavior.
Last we study the effect of the shape of the confinement potential and consider the melting behavior of a cluster with short range inter-particle interaction in a parabolic well. We choose the $`N=25`$ dipole cluster (3:9:13). Fig. 5 shows the angular diffusion and the radius of the cluster as function of $`\mathrm{\Gamma }`$. $`D_\theta `$ clearly does not exhibit a local minimum, it rises uniformly with decreasing $`\mathrm{\Gamma }`$, and melting occurs for $`\mathrm{\Gamma }5`$, as in the case of hard wall confinement. The radius of the cluster $`R`$ and the mean inter-particle distance changes proportionally to $`\mathrm{\Gamma }^{1/2}`$.
In conclusion, we studied the melting transition of 2D clusters with dipole and screened Coulomb type of interaction confined by a hard wall or a parabolic external potential using Brownian dynamics simulations. Langevin molecular dynamics simulations were carried out in the case of small viscosity. We found that only clusters with short range inter-particle interaction and confined by a hard wall well exhibit angular freezing before melting, irrespective of the value of viscosity. But it is essential that the coupling of the system is reduced by decreasing the inter-particle interaction at constant temperature as was done in the experiment . In the other cases, either of Coulomb clusters or with parabolic confinement the system shows the usual two step melting behavior without any re-entrance.
We showed that re-entrant behavior is a consequence of the interplay between angular order and radial oscillations where an increase of the radial fluctuations is able to induce angular order in clusters with magic numbers. With decreasing $`\mathrm{\Gamma }`$, first angular motion sets in, because it is governed by the lowest energy barriers. Further decreasing $`\mathrm{\Gamma }`$ leads to an increase of the radial motion/fluctuations which hinder the angular motion. The latter prevents angular motion in case of hard wall confinement. But for parabolic confinement the average inter-particle distance (see Fig. 5) decreases which results in an equal scaling of the energy barriers for inter-shell and intra-shell motion. This contrasts with the hard wall confinement case where the inter-particle distances are unaltered and the energy barrier for inter-shell jumps decreases leading to an increase of the radial fluctuations and the inter-shell jumps. Thus anharmonic effects are essential for the occurrence of this re-entrant behavior. Anharmonicity is enhanced in systems with hard wall confinement and for short-range inter-particle interaction.
One of us (FMP) thanks C. Bechinger for fruitful discussions. This work is supported by the Flemish Science Foundation (FWO-Vl), IUAP-IV and the Russian Foundation for Fundamental Investigation 96–023–9134a. One of us (FMP) is a Researcher Director with FWO-Vl and I.V.S and V.A.S are supported by a DWTC–fellowship.
FIGURES
FIG. 1. The angular displacement of the particles on the first (thick curves) and second (thin curves) shell for a cluster with $`N=29`$ particles for different values of the coupling parameter: (a) $`\mathrm{\Gamma }=150`$, (b) $`\mathrm{\Gamma }=46`$, (c) $`\mathrm{\Gamma }=10`$, and (d) $`\mathrm{\Gamma }=3`$.
FIG. 2. The angular diffusion $`D_\theta `$ (solid circles) and the radial diffusion ($`\mathrm{\Delta }R^2`$) (open circles) coefficients as function of $`\mathrm{\Gamma }`$ for clusters with (a) $`N=29`$, (b) $`N=30`$, and (c) $`N=34`$ particles.
FIG. 3. The angular diffusion coefficients as a function of the strength of the inter-particle interaction at constant temperature (solid circles) and as function of temperature ($`\mathrm{\Gamma }1/T`$) for fixed inter-particle interaction strength (open circles) for a dipole cluster with $`N=30`$ in the case of small viscosity.
FIG. 4. The angular (solid circles) and radial diffusion (open circles) coefficients as function of $`\mathrm{\Gamma }`$ for: (a) the Coulomb cluster with $`N=37`$ particles, and (b) the screened Coulomb cluster with $`N=30`$ particles confined by a hard wall potential.
FIG. 5. The angular diffusion coefficient (solid circles) and radius of the cluster (open triangles) as function of $`\mathrm{\Gamma }`$ for a $`N=25`$ dipole cluster in a parabolic well.
The Reappearance of the Transient Low Mass X-ray Binary X1658-298
## 1 Introduction
X1658-298 is a soft X-ray transient discovered in 1976 by Lewin, Hoffmann, & Doty (1976). The detection of type I bursts indicates that the compact object in the system is a neutron star. Observations during a temporary brightening of the source in 1978 showed dips in the X-ray lightcurve. Detailed analysis of the combined 1976–1978 data set by Cominsky & Wood (1984, 1989) revealed that X1658-298 is one of the rare low mass X-ray binary systems (LMXBs) that exhibits eclipses of the central X-ray source by the mass donating star. The dipping activity lasts for about 25% of the 7.1 hour orbital cycle followed by an eclipse of $`\sim 15`$ min duration.
The optical counterpart of X1658-298 was identified during the 1978 X-ray outburst with a faint ($`V=18.3`$), blue star (V2134 Oph) by Doxsey et al. (1979). Spectroscopic observations show a typical LMXB spectrum, a blue continuum with emission lines of He II $`\lambda `$4686 and the C III/N III $`\lambda `$4640/4650 blend (Canizares, McClintock, & Grindlay 1979). X1658-298 entered an X-ray off–state in 1979 and the counterpart became undetectable with a magnitude limit of $`V>23`$ (Cominsky, Ossmann, & Lewin 1983).
Renewed X-ray activity from X1658-298 was detected by BeppoSAX on April 2–3 1999 (In't Zand et al. 1999), marking the first X-ray detection of the source since 1978. Follow-up observations were quickly scheduled with RXTE under a public Target of Opportunity program, and with optical telescopes at CTIO. In this paper we present the first optical light curve of X1658-298, and the results of our RXTE eclipse timing and spectral fitting analysis.
## 2 Observations
### 2.1 X–ray
X1658-298 was observed with the RXTE satellite for a series of four public observations between 1999 April 5–15, soon after the recommencement of X-ray activity. The X-ray data we present here were obtained using the RXTE Proportional Counter Array (PCA) instrument with the Standard 2 and E_125us_64M_0_1s configurations, with time resolutions of 16 sec and 125 $`\mu `$sec, respectively. The PCA consists of five Xe proportional counter units (PCUs), with a combined effective area of about 6500 cm<sup>2</sup> (Jahoda et al. 1996). For operational reasons, differing numbers of PCUs were utilized in each observation. In Table 1 we list the observation times and the PCUs on during each observation. Data extraction was performed using the RXTE standard analysis software, FTOOLS v4.2. The "skyvle/skyactiv" models generated by the RXTE PCA team were used for background subtraction, and found to be accurate to better than 1 cs<sup>-1</sup>. Light curves and spectra were analyzed in the 2–20 keV band. Barycentric corrections have been applied to all X-ray timings. 2% systematic errors were added to the spectral data before fitting, to represent the current uncertainties in response matrix generation.
### 2.2 Optical
CCD $`V`$ and $`I_C`$ band photometry of V2134 Oph was performed with the CTIO 1.5m and YALO telescope from UT 1999 April 29 to May 3. The image scale at the telescopes was 0.24″ pix<sup>-1</sup> and 0.30″ pix<sup>-1</sup>, respectively. The data were overscan corrected, bias corrected and flat-fielded in the standard manner using IRAF. Photometry was performed by point spread function fitting with DAOPHOT II (Stetson 1993). The instrumental magnitudes were transformed to the standard system through comparison with previously calibrated local standards (Wachter & Smale 1998). The intrinsic 1$`\sigma `$ error of the relative photometry is about $`\pm 0.02`$ mag as derived from the rms scatter in the lightcurve of comparison stars of similar brightness. The standardized magnitudes are accurate to about $`\pm 0.10`$ mag. Exposure times were 300 sec for the YALO data and 200 to 240 sec (around the times of eclipse) for the 1.5m data, depending on the observing conditions.
## 3 Results
### 3.1 X-ray
The X-ray observations were scheduled to occur centered on the expected times of eclipse, as extrapolated from the ephemeris of Cominsky & Wood (1989). As intended, one complete eclipse was observed per observation. We have determined the duration, mid-point and transition times for each eclipse by modeling each ingress and egress transition with a “step and ramp” model, consistent with the methodology adopted in studies of eclipses from the similar transient LMXB X0748-676 (e.g. Parmar et al. 1986, Corbet et al. 1994). The model assumes a linear transition into and out of eclipse, and has four free parameters per transition: the start and end time, and the count rates before and after the transition. From these we derive the ingress and egress durations, $`\mathrm{\Delta }T_{ing}`$ and $`\mathrm{\Delta }T_{egr}`$, the eclipse duration $`\mathrm{\Delta }T_{ecl}`$ (measured from the end of ingress to the beginning of egress), and the eclipse mid-points (midway between the end of ingress and the start of egress). Table 2 contains the measured values of these quantities for each eclipse. We find a spread of ingress/egress times of 6–13 sec, with mean values for $`\mathrm{\Delta }T_{ing}`$ and $`\mathrm{\Delta }T_{egr}`$ of 9.1$`\pm `$3.0 sec and 9.5$`\pm `$3.3 sec respectively, and a mean eclipse duration of 901.9$`\pm `$0.8 sec. The X-ray eclipse transitions of X1658-298 together with the model fits are shown in Figure 1.
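The transition fitting can be illustrated schematically. The sketch below is not the original analysis code; the function names, the use of scipy.optimize and the initial-guess handling are assumptions made purely to illustrate the four-parameter “step and ramp” model.

```python
# Illustrative sketch of the "step and ramp" eclipse-transition model
# (four free parameters per transition).  Names and fitting machinery
# are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def step_and_ramp(t, t_start, t_end, rate_before, rate_after):
    """Linear transition from rate_before to rate_after between t_start and t_end."""
    frac = np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)
    return rate_before + (rate_after - rate_before) * frac

def fit_transition(t, rate, guess):
    """Fit one ingress or egress; t, rate = barycentre-corrected PCA light curve."""
    popt, pcov = curve_fit(step_and_ramp, t, rate, p0=guess)
    t_start, t_end = popt[0], popt[1]
    return t_start, t_end, t_end - t_start   # transition duration Delta T

# The eclipse duration is then (egress start - ingress end), and the mid-point
# lies halfway between ingress end and egress start.
```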
These four eclipse centers occur an average of 407.4 sec earlier than predicted by the ephemeris of Cominsky & Wood (1989). We have combined our eclipse timings with the eclipse centers (corrected to TDB) from the HEAO A-1 and SAS 3 observations of Cominsky & Wood (1984, 1989), to produce the updated ephemeris presented in Table 3. A parabolic ephemeris is required to obtain a good fit to the eclipse timings; the $`\dot{P}`$/$`P`$ term implies that the orbital period of the system is decreasing on a timescale of 10<sup>7</sup> yr. This ephemeris was then used to phase our optical data in the following sections.
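For illustration, a quadratic (parabolic) ephemeris of the form $`T_n=T_0+Pn+\frac{1}{2}P\dot{P}n^2`$, with $`n`$ the cycle number, can be fit to the mid-eclipse times by weighted least squares. The sketch below is a generic reconstruction of such a fit with placeholder input arrays; it is not the code used for Table 3.

```python
# Weighted least-squares fit of a parabolic ephemeris to mid-eclipse times
# (e.g. in TDB days).  Input arrays are user-supplied placeholders.
import numpy as np

def fit_parabolic_ephemeris(cycle, t_mid, sigma):
    # Design matrix for T_0, P and the quadratic term 0.5*P*Pdot*n^2
    A = np.vstack([np.ones_like(cycle), cycle, 0.5 * cycle**2]).T
    W = np.diag(1.0 / sigma**2)
    coeff = np.linalg.solve(A.T @ W @ A, A.T @ W @ t_mid)
    T0, P, P_Pdot = coeff
    Pdot = P_Pdot / P
    return T0, P, Pdot      # Pdot/P < 0 corresponds to a decreasing orbital period
```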
We have also performed a spectral analysis of the RXTE PCA data. From each dataset, we extracted a spectrum of the persistent (non-eclipse, non-dip) emission, and an in-eclipse spectrum. The spectra of the persistent emission each contain ∼1300 sec of data, and the eclipse spectra ∼880–896 sec. In each case the persistent spectrum can be well fit using a power law plus high energy cutoff model, with power law index $`\alpha `$=2.1$`\pm `$0.1, cutoff 8.6$`\pm `$0.6 keV, and a hydrogen column density $`N_H`$ of (5.0$`\pm `$0.6)$`\times `$10<sup>22</sup> cm<sup>-2</sup>. A Comptonization model of the Sunyaev & Titarchuk type also provides reasonable fits to the data, with $`kT`$=3.9$`\pm `$0.2 keV, $`\tau `$=7.1$`\pm `$0.3, $`N_H`$=(6.1$`\pm `$0.5)$`\times `$10<sup>22</sup> cm<sup>-2</sup>. The reduced $`\chi ^2`$ values for both models are acceptable, in the range 1.0–1.2. Two component models (such as a powerlaw plus blackbody) will also fit the data, although an F-test does not justify the inclusion of the second component. The mean persistent 2–20 keV flux of the source throughout the observations is 1.05$`\times `$10<sup>-9</sup> erg cm<sup>-2</sup> s<sup>-1</sup>. The eclipse spectra can be consistently fit with a simple, steeper power law with $`\alpha `$=3.5$`\pm `$0.4 and $`N_H`$=(15$`\pm `$5)$`\times `$10<sup>22</sup> cm<sup>-2</sup>. Over the 2–20 keV range, the eclipse flux level is measured to be 1.9$`\pm `$0.7% of the persistent emission.
The durations we measure for the X-ray eclipse transitions are shorter than those determined from the previous activity cycle (mean $`\mathrm{\Delta }T_{ing}`$=41$`\pm `$13 sec, mean $`\mathrm{\Delta }T_{egr}`$=19$`\pm `$13 sec; Cominsky & Wood 1989). However, a broad spread of values for the eclipse transitions from a given source may be common; the similar source X0748-676 shows transition times from 1.5–40 sec (Parmar et al. 1991). Transition times are defined by the atmospheric scale height of the companion, which can be affected by flaring activity or the presence of an X-ray-induced evaporative wind or corona; a more detailed discussion of such effects in X1658-298 will be worthwhile once a larger sample of eclipses is obtained.
Period changes have been previously detected in six other LMXBs, and may provide valuable clues about the progression of binary evolution. For conservative mass transfer, the loss of angular momentum leads to an expected timescale for evolution of the orbital period ($`\tau `$=$`P_{orb}/\dot{P}_{orb}`$) of 10<sup>8-10</sup> yr. However, the timescales measured to date have been considerably shorter than this. The periods of X1822-371 and X2127+119 are increasing on timescales of $`\tau `$=2.9$`\times `$10<sup>6</sup> yr and $`\tau `$=1.1$`\times `$10<sup>6</sup> yr, respectively (Hellier et al. 1990; Homer & Charles 1998), while X1820-303 and Her X–1 show decreasing orbital periods with $`\tau `$=1.9$`\times `$10<sup>7</sup> yr (van der Klis et al. 1993 and references within) and $`\tau `$=7.6$`\times `$10<sup>7</sup> yr (Deeter et al. 1991). Cyg X–3 (possibly not an LMXB) shows an increasing orbital period, with $`\tau `$=7.3$`\times `$10<sup>6</sup> yr, with a possible second period derivative (van der Klis & Bonnet-Bidaud 1989; Kitamoto et al. 1992). Most complex of all, X0748-676 shows period changes that initially appeared to be a simple decrease (Parmar et al. 1991) but later proved impossible to reconcile with a simple constant period derivative. A sinusoidally-varying orbital period (Asai et al. 1992) provided an acceptable fit until the RXTE era, when an unusually large excursion from this pattern was detected that defies straightforward parameterization (Hertz, Wood, & Cominsky 1997). The variation observed in X1658-298 is of a similar magnitude to these cases, despite the fact that (presumably) mass transfer was not occurring during the interval 1978–1999. This may pose a difficulty in explaining the change using models based on angular momentum coupling, irradiation of the secondary, or magnetic cycling (e.g. Parmar et al. 1991; Richman, Applegate, & Patterson 1994; Hertz et al. 1997).
### 3.2 Optical
Apart from the discovery observations during the 1978 outburst and limited follow-up during the subsequent decay, optical data of V2134 Oph are sparse. The few more recent spectroscopic observations of V2134 Oph during the extended X-ray off–state (Cowley, Hutchings & Crampton 1988; Shahbaz et al. 1996; Navarro 1996) all imply a substantially brighter counterpart than the $`V>23`$ limit discussed in Cominsky et al. (1983). We conducted the first photometric study of V2134 Oph in 1997 April/May and found the source only $``$ 1 mag fainter than the brightness reported when the system is X-ray active (Wachter & Smale 1998). Our lightcurve surprisingly also did not show any modulation across the binary cycle. X-ray transients in quiescence generally display photometric variability due to ellipsoidal variations (see van Paradijs & McClintock 1995 for extensive references). A comparison between our quiescent and outburst frames from 1997 and 1999 (Figure 2) resolves this puzzling behavior: the star we observed (“A”) is in fact an unrelated close companion to the actual optical counterpart. The true position of V2134 Oph was measured from a large number of frames with excellent seeing conditions (0.7-0.8″) to be 0.8″ east and 1.0″ north of the star marked “A”. The presence of this unrelated companion cannot be discerned from the original finding chart of Doxsey et al. (1979). There is no evidence for intermittent brightening of the source in X-rays during the quiescent years, and consequently it is probable that the counterpart remained at the faint $`V>23`$ magnitudes reported by Cominsky et al. (1983). This magnitude is inaccessible for spectroscopic studies with the telescopes used in the observations referenced above and it is likely that none of these observations were of the true quiescent counterpart.
We subsequently reanalyzed our 1997 data in order to determine quiescent magnitudes of the true counterpart which is only very faintly visible in our pre-outburst data. After coadding the six $`I`$ frames (600 sec each) with the best seeing conditions, we obtain $`I=22.1\pm 0.3`$ for V2134 Oph in quiescence. Unfortunately, the quiescent counterpart is too faint for accurate photometry in our 1997 $`V`$ and $`R`$ band data. Filippenko et al. (1999) measured $`R=23.6\pm 0.4`$ for V2134 Oph in quiescence. Combining these two measurements and assuming a reddening of $`E_{B-V}`$=0.3 (van Paradijs 1995) results in $`(R-I)_0=1.2`$ which corresponds to a spectral type of M2. A star of this spectral type would not fill its Roche lobe in a 7.1 hour LMXB. As discussed in Wachter & Smale (1998), empirical period–mass relations for mass transfer systems imply a K0 star instead. Note, however, that the $`R`$ magnitudes of the comparison stars A and E (A and 1 in our nomenclature) in Filippenko et al. (1999) are systematically 0.6-0.7 mag fainter than those given in Wachter & Smale (1998). If the $`R`$ magnitude of V2134 Oph is similarly too faint, the resulting $`(R-I)_0`$ is consistent with the required early K spectral type within the errors. A K0 companion would have $`V=23.6`$, in accordance with the observed $`V>23`$ limit.
Our 1999 $`V`$ and $`I`$ band lightcurves of V2134 Oph during outburst are shown in Figure 3. Data obtained with the CTIO 1.5m telescope are indicated with filled circles for $`V`$ and triangles for $`I`$, YALO data with stars. The observations span almost a full orbital cycle on each night. Superposed on a gradual brightness variation with 0.7–0.8 mag amplitude, a distinct, narrow eclipse feature of about 0.2 mag is visible on each night. Strong nightly variations in the shape of the lightcurve are also evident. For the nights with simultaneous $`V`$ and $`I`$ band coverage we rebinned the data to the average time sampling interval using linear interpolation and calculated the $`(V-I)`$ color index. There is no evidence for any $`(V-I)`$ color variation across the orbit or for a change in color from night to night. We obtain an average of $`(V-I)=0.645\pm 0.054`$ from the combined color data of the three nights.
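A minimal sketch of the colour computation is given below; variable names are illustrative and the actual rebinning scheme may differ in detail, but the essential step is the linear interpolation of one band onto the time grid of the other.

```python
# Compute the mean (V-I) colour from two light curves sampled at different times.
import numpy as np

def colour_index(t_V, mag_V, t_I, mag_I):
    mag_I_on_V = np.interp(t_V, t_I, mag_I)     # linear interpolation in time
    VI = mag_V - mag_I_on_V
    return VI.mean(), VI.std(ddof=1) / np.sqrt(len(VI))   # mean colour and its error
```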
## 4 Discussion
Figure 4 shows the outburst data folded according to our updated X-ray ephemeris (Table 3). Following the usual convention, the time of the X-ray mid-eclipse is defined as phase 0. The deepest point of the optical lightcurves on each night was chosen as a reference point for the brightness of the system and the data (vertically) shifted accordingly. X-ray dips are observed for X1658-298 between the phases of 0.6–0.8. No analogous stable optical feature is evident in the folded lightcurves; however, our data sampling in that phase interval is fairly sparse. The folded $`V`$ band lightcurve clearly shows a distinct central drop in brightness within $`0.2`$ mag of the faintest observed magnitude which is also characterized by reduced scatter compared to other phases of the lightcurve. The presence of such a narrow central component is evident in the individual lightcurves of each separate night as well. We determined the optical eclipse center to occur at phase $`0.004\pm 0.003`$ by selecting the folded $`V`$ band data within 0.2 mag of the faintest magnitude and calculating the time on either side of which the area within the eclipse profile was equal. The average data sampling interval in this part of the folded $`V`$ band lightcurve is about 1 minute. A close-up of the central region is shown on the bottom left hand side of Figure 4 together with the fit to the 1999 Apr 5 X-ray eclipse (dotted line). Our data indicate that there is no significant offset between the time of mid-eclipse in the X-ray and optical and that the narrow component of the optical eclipse is of the same duration as the X-ray eclipse (we measure a FWHM of $`14\pm 2`$ minutes for this optical feature). This implies a distinct optical emission region associated with the X-ray emitting area.
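The equal-area centering can be sketched as follows; this is an illustrative reconstruction of the procedure described above, with the 0.2 mag selection window and the trapezoidal integration as assumptions rather than the original script.

```python
# Equal-area determination of the optical eclipse centre: points within 0.2 mag
# of the faintest magnitude define the eclipse profile, and the centre is the
# phase splitting the depth integral into two equal halves.
import numpy as np

def eclipse_centre(phase, mag, window=0.2):
    sel = mag > mag.max() - window                 # faintest points (larger mag = fainter)
    ph = phase[sel]
    depth = mag[sel] - (mag[sel].max() - window)   # depth relative to the selection level
    order = np.argsort(ph)
    ph, depth = ph[order], depth[order]
    cum = np.cumsum(0.5 * (depth[1:] + depth[:-1]) * np.diff(ph))   # trapezoidal area
    return np.interp(0.5 * cum[-1], cum, ph[1:])   # phase that halves the area
```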
The only other LMXBs known to exhibit X-ray eclipses are X0748-676, X2129+470, X1822-371, Her X-1 and X0921-630. For systems with inclinations $`75^{\circ }\lesssim i\lesssim 80^{\circ }`$ (X1658-298, X0748-676, Her X-1), both dips and total eclipses are observed in X-rays. In higher inclination systems ($`i\gtrsim 80^{\circ }`$; X1822-371, X2129+470), only partial X-ray eclipses are seen; the accretion disk is thought to block the direct line-of-sight to the central X-ray source and the observed X-ray flux is due to scattering in an extended accretion disk corona (ADC). The optical/UV eclipse in the ADC source X1822-371, one of the most extensively studied systems, is found to be much broader than the X-ray eclipse (Hellier & Mason, 1989; Puchnarewicz, Mason, & Cordova 1995) indicating an accretion disk radius of about twice the ADC radius. The optical lightcurve of X1822-371 varies very little from night to night and even over a timespan of years. Modeling shows that several emission components such as the X-ray heated face of the mass donor and the accretion disk rim contribute to produce the overall morphology of the optical lightcurve (Mason & Cordova 1982). It is therefore difficult to determine the time of ingress and egress of the optical eclipse for a given system solely from the shape of the lightcurve.
In contrast to X1822-371, our X1658-298 data clearly display a narrow optical feature of the same duration as the X-ray eclipse. However, the data do not reveal whether this feature merely represents the central core of a wider optical eclipse. Due to the highly variable shape of the lightcurve outside this narrow component, we cannot ascertain the presence or absence of wider ingress/egress signatures. The standard model calls for successively longer eclipse durations when moving from observations in X-rays to longer wavelengths to account for the eclipse of the cooler outer regions of an extended accretion disk (which would not be visible in X-rays). Our data clearly indicate an accretion disk structure characterized by enhanced optical emission coincident with the central X-ray emitting area. We would consequently predict equivalent optical structures in all systems in which the X-ray source is believed to be viewed directly. An optical feature on the timescale of the X-ray eclipse has been observed in Her X-1 (Kippenhahn, Schmidt, & Thomas 1980). For X0748-676, an inspection of the individual optical lightcurves displayed in Crampton et al. (1986) and van Paradijs, van der Klis, & Pedersen (1988) also reveals a narrow central eclipse component very similar to that of our X1658-298 data. However, in both cases the authors conclude that the optical eclipse is twice as wide as the X-ray eclipse, based on which part of the lightcurve looks like “clearly an eclipse” and/or consideration of an average lightcurve which does not exhibit any central structure. While it is difficult to tell with certainty from the published figures, it appears likely that a reexamination of these X0748-676 data would also show this narrow optical component to have the same duration as the X-ray eclipse.
We thank Alistair Walker for assigning us director’s discretionary time for the optical observations of this project. This research has made use of the Simbad database, operated at CDS, Strasbourg, France and of results provided by the ASM/RXTE team at MIT and NASA/GSFC. C. Bailyn is supported by NSF AST 97-30774. |
no-problem/9912/astro-ph9912242.html | ar5iv | text | # Electron-, Mu-, and Tau-Number Conservation in a Supernova Core
## I Introduction
The evidence for neutrino flavor oscillations from atmospheric and solar neutrinos as well as the LSND experiment is now so compelling that the debate in neutrino physics has fundamentally changed. The current experimental results point to a few very specific regions in the parameter space of mass differences and mixing angles. Therefore, the experimental task at hand is to verify the proposed solutions by independent means such as long-baseline oscillation experiments. From the astrophysical perspective, it is no longer an academic exercise to investigate systematically if neutrino mixings with these specific parameters lead to observable effects other than explaining the solar neutrino deficit.
One consequence of the apparent violation of $`e`$, $`\mu `$, and $`\tau `$ flavor lepton number is that the leptons of a perfectly thermalized system are no longer characterized by six independent chemical potentials, but only by one for the charged leptons and one for the neutrinos, i.e. $`\mu _e=\mu _\mu =\mu _\tau `$ and $`\mu _{\nu _e}=\mu _{\nu _\mu }=\mu _{\nu _\tau }`$. In practice one knows of only two types of environment where neutrinos achieve thermal equilibrium and where this effect could be important, the hot early universe and the inner cores of collapsed stars. In the standard picture of the early universe, the neutrino chemical potentials are very small so that flavor equilibration would not make much of a difference except for the case of oscillations between active and sterile neutrinos , a possibility that we will mostly ignore.
In a supernova (SN) core, on the other hand, the electron lepton number of the progenitor star’s iron core is trapped, leading to highly degenerate $`e`$ and $`\nu _e`$ distributions. If the trapped electron lepton number were shared among all flavors, the impact on the effective equation of state would be large. Equilibration on the infall time scale of a few hundred milliseconds could well be disastrous for the SN explosion mechanism. However, in a previous paper two of us found that even for maximal neutrino mixing, equilibrium between $`\nu _e`$ and the other flavors is not achieved on the time scale of a few seconds unless $`\delta m^2`$ exceeds about $`10^5\mathrm{eV}^2`$. The rate of flavor conversion is very slow because the mixing angle is strongly suppressed by neutrino refractive effects. Since the current indications for neutrino oscillations point to much smaller mass differences, the electron lepton number in a SN core is almost perfectly conserved except for diffusive or convective transfer to the stellar surface.
We presently study the corresponding conversion between $`\nu _\mu `$ and $`\nu _\tau `$, which has not yet been investigated. It is usually assumed that the chemical potentials for these flavors vanish in a SN core so that their distribution functions are equal, i.e. that flavor equilibrium exists from the start. However, the muon mass of $`106\mathrm{MeV}`$ is not very large, considering that typical temperatures can exceed $`T=30\mathrm{MeV}`$ and that the average thermal energy of a relativistic particle is around $`3T`$. Beta equilibrium by reactions of the form $`\nu _\mu +n\leftrightarrow p+\mu ^{-}`$ implies the condition $`\mathrm{\Delta }\mu \equiv \mu _n-\mu _p=\mu _\mu -\mu _{\nu _\mu }`$, which is familiar from the electron flavor. The exact value of $`\mathrm{\Delta }\mu `$ depends sensitively on the equation of state; a typical range is 50–100 MeV. Since initially the trapped $`\mu `$ lepton number is zero, and taking $`\mathrm{\Delta }\mu =50\mathrm{MeV}`$ and $`T=30\mathrm{MeV}`$ as an example, we find $`\mu _\mu \approx 32\mathrm{MeV}`$ and $`\mu _{\nu _\mu }\approx -18\mathrm{MeV}`$ so that there is a significant excess of $`\mu ^{-}`$ over $`\mu ^+`$, compensated by an equal excess of $`\overline{\nu }_\mu `$ over $`\nu _\mu `$. On the other hand, the large value $`m_\tau =1777\mathrm{MeV}`$ completely suppresses the presence of $`\tau `$ leptons. Together with the absence of trapped $`\tau `$ lepton number this implies $`\mu _{\nu _\tau }=0`$. The initial thermal distributions are quite different between the $`\mu `$ and $`\tau `$ flavors!
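The quoted example can be checked numerically by imposing the beta-equilibrium constraint together with zero net $`\mu `$ lepton number. The sketch below assumes non-interacting Fermi-Dirac gases (two spin states for muons, one for neutrinos) and simple trapezoidal integration; it is a consistency check, not part of the original calculation.

```python
# Check the example Delta_mu = 50 MeV, T = 30 MeV: impose mu_mu - mu_nu = Delta_mu
# together with zero net mu-lepton number.  All quantities in MeV.
import numpy as np
from scipy.optimize import brentq

T, m_mu, dmu = 30.0, 105.7, 50.0
p = np.linspace(1e-3, 1000.0, 4000)            # momentum grid

def net_density(mu, mass, g):
    """Particle minus antiparticle number density of a Fermi gas (MeV^3)."""
    E = np.sqrt(p**2 + mass**2)
    f = lambda x: 1.0 / (np.exp(x / T) + 1.0)
    return g / (2 * np.pi**2) * np.trapz(p**2 * (f(E - mu) - f(E + mu)), p)

def lepton_number(mu_nu):
    mu_mu = mu_nu + dmu
    return net_density(mu_mu, m_mu, g=2) + net_density(mu_nu, 0.0, g=1)

mu_nu = brentq(lepton_number, -60.0, 60.0)     # zero trapped mu-lepton number
print(mu_nu + dmu, mu_nu)                      # approximately +32 MeV and -18 MeV
```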
Apart from these initial differences, it was recently recognized that the opacities for $`\nu _\mu `$, $`\overline{\nu }_\mu `$, $`\nu _\tau `$ and $`\overline{\nu }_\tau `$ are all different from each other if nucleon recoil effects and muonic beta reactions are taken into account, leading to a transient build-up of $`\mu `$ and $`\tau `$ lepton number as neutrinos diffuse out of the star . Again, it is important to understand if the $`\mu `$ and $`\tau `$ lepton numbers are locally conserved on the diffusion time scale or if the condition $`\mu _{\nu _\mu }=\mu _{\nu _\tau }`$ is enforced by flavor-conversion processes.
The SuperKamiokande measurement of the atmospheric neutrino anomaly suggests maximal $`\nu _\mu `$-$`\nu _\tau `$-mixing and $`\delta m^2`$ roughly between $`10^{-3}`$ and $`10^{-2}\mathrm{eV}^2`$. We now study if these mixing parameters lead to chemical equilibrium between $`\nu _\mu `$ and $`\nu _\tau `$ on the diffusion time scale of a few seconds.
## II Flavor conversion
Chemical equilibrium between charged leptons $`\mathrm{}`$ and neutrinos $`\nu _{\mathrm{}}`$ of a given flavor is established by beta processes of the form $`\mathrm{}^{-}+p\leftrightarrow n+\nu _{\mathrm{}}`$. They are “infinitely fast” relative to all other time scales of interest. Chemical equilibrium between $`\mu `$\- and $`\tau `$-flavored leptons, on the other hand, is achieved by neutrino oscillations and by collisions which break the coherence of mixed neutrino states. The evolution of the $`\nu _\mu `$-$`\nu _\tau `$-system is best described in terms of the $`2\times 2`$ density matrices $`\rho _𝐩`$ for each momentum $`𝐩`$. In the flavor basis, the diagonal entries of $`\rho _𝐩`$ are the usual $`\nu _\mu `$ and $`\nu _\tau `$ occupation numbers, respectively, while the off-diagonal entries represent phases produced by oscillations. The evolution of $`\rho _𝐩`$, and of $`\overline{\rho }_𝐩`$ for anti-neutrinos, is governed by a generalized Boltzmann collision equation which simultaneously includes the effects of oscillations and collisions .
For our simple estimates, however, it will suffice to use a “pedestrian” method where the neutrino ensemble is represented by a single mode with momentum $`𝐩`$ or energy $`E\simeq |𝐩|`$. Without collisions, the flavor content of this mode would oscillate forever. Interactions with the medium, however, destroy the coherence between the flavor components, leading eventually to an equal, incoherent mixture. Maximally mixed neutrinos approach this state of flavor equilibrium with a rate
$$\mathrm{\Gamma }_{\mathrm{flavor}}=\omega _{\mathrm{vac}}^2\frac{D}{D^2+(\delta V)^2},$$
(1)
where
$`\omega _{\mathrm{vac}}\equiv {\displaystyle \frac{\delta m^2}{2E}}`$ $`=`$ $`1.7\times 10^{-11}\mathrm{eV}\mathrm{\Delta }_3E_{30}^{-1}`$ (2)
$`=`$ $`2.5\times 10^4\mathrm{s}^{-1}\mathrm{\Delta }_3E_{30}^{-1}`$ (3)
is the oscillation frequency in vacuum. Here, $`\mathrm{\Delta }_3\equiv \delta m^2/10^{-3}\mathrm{eV}^2`$ represents the lower end of the mass range implied by SuperKamiokande and $`E_{30}\equiv E/30\mathrm{MeV}`$ where 30 MeV is a typical temperature in a SN core. Further, $`\delta V`$ is the energy difference between $`\nu _\mu `$ and $`\nu _\tau `$ of equal momenta caused by the medium, i.e. $`\delta V`$ is the difference of the medium’s weak potential for our two neutrino flavors.
Finally, $`D`$ is the damping or decoherence rate, i.e. the rate by which interactions with the medium “measure” the flavor content of a mixed neutrino state. Typically $`D`$ is of the order of the neutrino collision rate, but the exact relationship between the two quantities is not trivial. For example, in a situation where one of the neutrino flavors scatters with a rate $`\mathrm{\Gamma }_{\mathrm{coll}}`$ while the other is sterile, one finds $`D=\mathrm{\Gamma }_{\mathrm{coll}}/2`$ . Equation (1) applies in the “strong damping limit” defined by $`D\gg \omega _{\mathrm{vac}}`$, a condition which is satisfied in our scenario.
Equation (1) is easily interpreted in two limiting cases. For $`\delta V=0`$, the flavor content of a given state oscillates as $`\frac{1}{2}[1+\mathrm{cos}(\omega _{\mathrm{vac}}t)]`$. The oscillations are interrupted by those collisions which “measure” the difference between $`\nu _\mu `$ and $`\nu _\tau `$ . If this “measurement rate” or “decoherence rate” $`D=\tau _D^{-1}`$ is much larger than $`\omega _{\mathrm{vac}}`$, the flavor oscillation is interrupted when $`\omega _{\mathrm{vac}}t\ll 1`$ so that the flavor content evolves as $`(\omega _{\mathrm{vac}}t/2)^2`$ until it is interrupted. This happens with a rate $`\tau _D^{-1}`$ so that the rate of flavor conversion must scale as $`(\omega _{\mathrm{vac}}\tau _D)^2\tau _D^{-1}`$ or
$$\mathrm{\Gamma }_{\mathrm{flavor}}=\frac{\omega _{\mathrm{vac}}^2}{D}.$$
(4)
In this case the flavor conversion rate decreases with increasing $`D`$, and vanishes for infinite $`D`$. This situation is known as the “Quantum Zeno Paradox” or “Watched Pot Effect” : The neutrino remains “frozen” in its flavor state because it is frequently “watched” or measured to be in this state by the interactions with the medium.
A more familiar limiting case obtains when $`|\delta V|\gg D`$ so that $`D^2`$ in the denominator of Eq. (1) can be neglected. Since $`D\gg \omega _{\mathrm{vac}}`$ by assumption, we also have $`|\delta V|\gg \omega _{\mathrm{vac}}`$, implying that the oscillation frequency is $`\delta V`$ instead of $`\omega _{\mathrm{vac}}`$ because the energy difference between $`\nu _\mu `$ and $`\nu _\tau `$ of equal momenta is now dominated by $`\delta V`$, not by $`\omega _{\mathrm{vac}}`$. Since $`|\delta V|\gg D`$, the collisions are rare relative to the oscillation period.<sup>*</sup><sup>*</sup>*Sometimes this situation is referred to as “weak damping.” However, we follow the convention of Ref. where “weak damping” means $`D\ll \omega _{\mathrm{vac}}`$. Averaging over an oscillation period, and if we begin with one flavor, the average probability for the appearance of the other is $`\frac{1}{2}\mathrm{sin}^2(2\mathrm{\Theta })`$ where $`\mathrm{\Theta }`$ is the in-medium mixing angle. If the oscillations are interrupted with a rate $`\mathrm{\Gamma }_{\mathrm{coll}}`$, we have
$$\mathrm{\Gamma }_{\mathrm{flavor}}=\mathrm{sin}^2(2\mathrm{\Theta })D$$
(5)
if $`D`$ is interpreted as $`\mathrm{\Gamma }_{\mathrm{coll}}/2`$. For maximally mixed neutrinos, the in-medium mixing angle is given by
$$\mathrm{tan}(2\mathrm{\Theta })=\frac{\omega _{\mathrm{vac}}}{\delta V}.$$
(6)
Since $`|\omega _{\mathrm{vac}}/\delta V|\ll 1`$ we have $`\mathrm{tan}(2\mathrm{\Theta })\approx \mathrm{sin}(2\mathrm{\Theta })`$ so that we recover Eq. (1). A large refractive energy difference between the flavors suppresses the conversion rate, an effect which evidently can be interpreted as a suppression of the mixing angle in the medium.
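For later reference, Eqs. (1) and (6) can be collected in a small numerical helper; units are arbitrary but must be consistent, and the snippet is merely a convenience for evaluating the limiting cases discussed above.

```python
# Flavor relaxation rate and in-medium mixing angle for maximal vacuum mixing
# in the strong-damping limit.
import numpy as np

def gamma_flavor(omega_vac, D, delta_V):
    """Eq. (1): relaxation rate towards flavor equilibrium."""
    return omega_vac**2 * D / (D**2 + delta_V**2)

def medium_mixing_angle(omega_vac, delta_V):
    """Eq. (6): Theta with tan(2*Theta) = omega_vac / delta_V."""
    return 0.5 * np.arctan2(omega_vac, delta_V)

# Limiting cases discussed in the text:
#   delta_V = 0     ->  gamma = omega_vac**2 / D         (quantum Zeno regime)
#   |delta_V| >> D  ->  gamma = sin(2*Theta)**2 * D      (suppressed mixing angle)
```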
## III Rate of Decoherence
In order to estimate the rate of flavor conversion we are thus left with the task of estimating the damping rate, $`D`$, and the refractive effect $`\delta V`$ for the conditions of a SN core. The largest possible conversion rate obtains for $`\delta V=0`$ so that we first consider this case. If $`\mathrm{\Gamma }_{\mathrm{flavor}}`$ were slow in this limit, a further discussion of refractive effects would be unnecessary.
We thus begin by comparing $`\mathrm{\Gamma }_{\mathrm{flavor}}=\omega _{\mathrm{vac}}^2D^{-1}`$ with the diffusion rate $`\tau _{\mathrm{diffusion}}^{-1}\approx 1\mathrm{s}^{-1}`$,
$$\mathrm{\Gamma }_{\mathrm{flavor}}\tau _{\mathrm{diffusion}}=\frac{6.4\times 10^8\mathrm{s}^{-1}}{D}\frac{\mathrm{\Delta }_3^2}{E_{30}^2},$$
(7)
where we have used Eq. (2). Evidently it is the low-energy neutrino modes with their fast oscillation frequency which are most effective for flavor conversion.
There are two conceptually different contributions to $`D`$. Oscillations are interrupted by scattering processes which are sensitive to the neutrino flavor such as $`\nu _\mu +e^{-}\to \nu _e+\mu ^{-}`$, a process which has no analogue for $`\nu _\tau `$ because of the large $`\tau `$ mass. In addition, a mixed neutrino can scatter to higher energies, for example by $`\nu +e^{-}\to e^{-}+\nu `$, without interrupting the oscillation process. However, after this “up-scattering,” the probability for processes which do distinguish between the flavors increases significantly since the cross sections for all neutrino processes increase with energy. Moreover, at large energies the fast beta process $`\nu _\mu +n\to p+\mu ^{-}`$ becomes kinematically allowed. Effectively, the up-scattering of a low-energy neutrino causes the oscillations to be interrupted, even if the up-scattering process itself is flavor-diagonal. Therefore, up to a numerical factor, the decoherence rate $`D`$ of low-energy neutrinos is identical with their scattering rate to higher energies.
The most important energy-changing process for neutrinos in a SN core is the scattering on the nuclear medium. Ignoring the subdominant vector-current interaction, the differential cross-section for neutral-current scattering of a neutrino of energy $`E_1`$ to energy $`E_2`$ is
$$\frac{d\sigma }{dE_2}=\frac{3C_A^2G_F^2}{\pi }E_2^2\frac{S(E_1-E_2)}{2\pi }.$$
(8)
Here, $`S(\omega )`$ is the dynamical structure function for the axial-vector current interaction in the long-wavelength limit . In a dilute medium $`S(\omega )=2\pi \delta (\omega )`$, leading to the usual neutral-current elastic scattering cross section. In a nuclear medium, nucleon-nucleon interactions cause $`S(\omega )`$ to be a broadly smeared-out function which obeys the detailed-balancing condition $`S(-\omega )=S(\omega )e^{-\omega /T}`$. Unless spin-spin correlations and degeneracy effects are important, the structure function has the norm $`\int S(\omega )𝑑\omega /2\pi =1`$.
In contrast with elastic neutrino scattering processes, the cross section Eq. (8) does not vanish for small neutrino energies. With $`E_1=0`$, the neutrino scattering rate on nucleons is
$$\mathrm{\Gamma }_{\mathrm{nuc}}(0)=\frac{3C_A^2G_F^2}{\pi }T^2n_B\int _0^{\mathrm{\infty }}𝑑x\,x^2\frac{TS(-Tx)}{2\pi }$$
(9)
where $`n_B`$ is the baryon (nucleon) number density. With $`C_A=1.26/2`$ for neutral-current processes, the coefficient before the integral is
$$0.98\times 10^8\mathrm{s}^{-1}\frac{\rho }{3\times 10^{14}\mathrm{g}\mathrm{cm}^{-3}}T_{30}^2,$$
(10)
where $`T_{30}\equiv T/30\mathrm{MeV}`$.
The dimensionless integral vanishes if $`S(\omega )`$ is very narrow, corresponding to the usual case of a vanishing elastic neutrino scattering cross section at $`E_1=0`$. The integral also vanishes when it is very broad because $`S(\omega )`$ decreases at least exponentially for large negative $`\omega `$. If $`S(\omega )`$ is normalized, and if it is a smoothly varying broad function, the maximum possible value for the integral expression is about $`0.25`$ which obtains when the width of $`S(\omega )`$ is of order the temperature, probably corresponding to realistic SN conditions. The rate gets reduced by nucleon degeneracy effects and spin-spin anticorrelations. Therefore, for low-energy neutrinos the upscattering rate is probably not larger than a few times $`10^7\mathrm{s}^{-1}`$. For low-energy neutrinos, this rate is larger than any other up-scattering process that we could identify such as neutrino-electron or neutrino-neutrino scattering.
Below the muon production threshold in beta processes, the most important reactions which distinguish between $`\nu _\mu `$ and $`\nu _\tau `$ appear to be $`\nu _\mu +e^{-}\to \mu ^{-}+\nu _e`$ and $`\overline{\nu }_\mu +\mu ^{-}\to e^{-}+\overline{\nu }_e`$. We find that for typical conditions in a SN core and for neutrino energies around $`T`$ the rate is at most a few times $`10^7\mathrm{s}^{-1}`$ and thus comparable to neutrino nucleon scattering.
Comparing these numbers with Eq. (2) reveals that $`D\gg \omega _{\mathrm{vac}}`$. Therefore, the strong damping limit indeed applies as had been assumed earlier.
The actual decoherence rate $`D`$ is smaller than our estimates of the corresponding collision rates. We have already mentioned that $`D`$ is half the collision rate in a situation where one flavor scatters while the other is sterile . Altogether, we believe that
$$D\lesssim 3\times 10^7\mathrm{s}^{-1}$$
(11)
is a realistic upper limit for typical SN conditions of $`\rho =3\times 10^{14}\mathrm{g}\mathrm{cm}^{-3}`$ and $`T=30\mathrm{MeV}`$, implying
$$\mathrm{\Gamma }_{\mathrm{flavor}}\tau _{\mathrm{diffusion}}\gtrsim 20\mathrm{\Delta }_3^2E_{30}^{-2}.$$
(12)
Averaging this expression over a thermal neutrino distribution, we note that $`\langle E^{-2}\rangle =0.38T^{-2}`$ so that we finally conclude that for typical SN conditions
$$\mathrm{\Gamma }_{\mathrm{flavor}}\tau _{\mathrm{diffusion}}\gtrsim 10\mathrm{\Delta }_3^2.$$
(13)
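The thermal average used in the last step is easily verified numerically; the snippet below assumes a Fermi-Dirac spectrum with vanishing chemical potential.

```python
# Check <E^-2> = 0.38 T^-2 for a thermal (Fermi-Dirac) neutrino spectrum,
# averaging over the number density E^2/(exp(E/T)+1); x = E/T.
import numpy as np

x = np.linspace(1e-4, 50.0, 200000)
fd = 1.0 / (np.exp(x) + 1.0)
avg = np.trapz(fd, x) / np.trapz(x**2 * fd, x)    # <E^-2> in units of T^-2
print(avg)                                        # approximately 0.38
```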
Near the upper end of the mass range suggested by SuperKamiokande we have $`\delta m^2=10^{-2}\mathrm{eV}^2`$ or $`\mathrm{\Delta }_3=10`$, implying that the flavor conversion rate increases by two orders of magnitude relative to our estimate.
In summary, it appears that the flavor conversion between $`\nu _\mu `$ and $`\nu _\tau `$ in a SN core would be much faster than the diffusion time scale of about one second if refractive effects could be ignored.
## IV Refractive Effects
Turning to neutrino refraction in the SN medium, we begin with the usual lowest-order weak potential,
$$\delta V^{(1)}=\sqrt{2}G_F\left(\mathrm{\Delta }n_\mu +\mathrm{\Delta }n_{\nu _\mu }-\mathrm{\Delta }n_\tau -\mathrm{\Delta }n_{\nu _\tau }\right),$$
(14)
where $`G_F`$ is the Fermi constant while $`\mathrm{\Delta }n_j`$ stands for the number density of particle $`j`$ minus the density of anti-particles. Since the total $`\mu `$ and $`\tau `$ lepton number trapped in a SN core is zero, we have initially $`\delta V^{(1)}=0`$. Therefore, the flavor conversion rate would seem to be initially fast until a significant weak potential difference has built up.
However, second-order effects can not be neglected. At one-loop level, the neutral-current interactions of $`\nu _\mu `$ and $`\nu _\tau `$ with the nucleons are not identical because of the different charged-lepton masses in the loop. The second-order energy difference in a normal medium was found to be
$`|\delta V^{(2)}|`$ $`=`$ $`{\displaystyle \frac{3G_F^2m_\tau ^2}{2\pi ^2}}n_B\left[\mathrm{ln}\left({\displaystyle \frac{m_W^2}{m_\tau ^2}}\right)-1+{\displaystyle \frac{Y_n}{3}}\right]`$ (15)
$`=`$ $`6.3\times 10^{-4}\mathrm{eV}{\displaystyle \frac{\rho }{3\times 10^{14}\mathrm{g}/\mathrm{cm}^3}}{\displaystyle \frac{6.61+Y_n/3}{7}}`$ (16)
where $`n_B`$ is the baryon density and $`Y_n`$ the neutron number fraction. With $`6.3\times 10^{-4}\mathrm{eV}=9.6\times 10^{11}\mathrm{s}^{-1}`$ we find $`|\delta V^{(2)}|\gg D\gg \omega _{\mathrm{vac}}`$ and that even initially the flavor conversion time scale (the inverse of $`\mathrm{\Gamma }_{\mathrm{flavor}}`$) far exceeds the one for diffusion. This must be the only example where radiative corrections to the neutrino refractive index are of direct practical relevance!
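As a consistency check, Eq. (15) can be evaluated numerically in natural units; the value of $`Y_n`$ below is chosen purely for illustration.

```python
# Order-of-magnitude check of Eq. (15) at nuclear density (MeV units,
# hbar*c = 197.33 MeV fm); Y_n = 0.6 is an illustrative assumption.
import numpy as np

GF = 1.166e-11                         # Fermi constant in MeV^-2
m_tau, m_W = 1777.0, 80.4e3
hbarc = 197.33
rho, m_N = 3.0e14, 1.675e-24           # g/cm^3 and nucleon mass in g
n_B = rho / m_N * 1e-39 * hbarc**3     # baryons per fm^3 converted to MeV^3
Yn = 0.6

dV2 = 1.5 * GF**2 * m_tau**2 / np.pi**2 * n_B * (np.log(m_W**2 / m_tau**2) - 1 + Yn / 3)
print(dV2 * 1e6, "eV")                 # roughly 6e-4 eV, as quoted in Eq. (16)
```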
We stress, however, that $`\delta V^{(2)}`$ is by no means exotically large. Its importance in the present context derives from the cancellation of $`\delta V^{(1)}`$ and from the smallness of $`\omega _{\mathrm{vac}}`$. For example, the in-medium mixing angle in the present case is
$$\mathrm{tan}(2\mathrm{\Theta })=2.6\times 10^{-8}\frac{3\times 10^{14}\mathrm{g}/\mathrm{cm}^3}{\rho }\frac{7}{6.61+Y_n/3},$$
(18)
i.e. second-order refractive effects suppress the mixing angle by about eight orders of magnitude! The real surprise is that $`\omega _{\mathrm{vac}}`$, despite its smallness, is large enough to cause flavor equilibrium if it were not for the refractive suppression of the mixing angle.
Another second-order contribution to $`\delta V`$ arises from the low-energy tail of the $`W^\pm `$ and $`Z^0`$ resonance in neutrino forward scattering on other neutrinos and charged leptons . This term is the dominant second-order correction in the early universe where the particle-antiparticle asymmetries are small, but in a SN core Eq. (15) is more important because of the huge baryon density.
Once diffusion processes begin to build up a net $`\mu `$ or $`\tau `$ lepton number in the SN core, the first-order refractive effect becomes important. Its magnitude is understood if we calculate the refractive energy shift caused by neutrinos with a chemical potential $`\mu _\nu \ll T`$ which we express as $`\eta _\nu =\mu _\nu /T`$. To lowest order we have $`\mathrm{\Delta }n_\nu =T^3\eta _\nu /6+𝒪(\eta _\nu ^2)`$ so that
$$\delta V_\nu ^{(1)}=0.074\mathrm{eV}T_{30}^3\eta _\nu .$$
(19)
Likewise, for muons we find $`\mathrm{\Delta }n_\mu =(T^3/\pi ^2)f(m_\mu /T)\eta _\mu `$ where the function $`f(m_\mu /T)`$ is 1.077 for $`m_\mu /T=3`$. Therefore, muons provide a similar energy shift. Altogether, if we use $`\eta `$ as some characteristic chemical potential for the muons and neutrinos, the first-order energy shift is given by Eq. (19), leading to an in-medium mixing angle of
$$\mathrm{tan}(2\mathrm{\Theta })=2.2\times 10^{-10}\frac{\mathrm{\Delta }_3}{\eta E_{30}T_{30}^3}.$$
(20)
Again, the mixing angle is hugely suppressed unless $`\eta `$ is finely tuned to zero. Considering that $`\mathrm{\Gamma }_{\mathrm{flavor}}=\mathrm{sin}^2(2\mathrm{\Theta })D`$ the suppression of the conversion rate is quadratic in the small in-medium mixing angle. For $`D=𝒪(10^7\mathrm{s}^{-1})`$ even a tiny value for $`\eta `$ is enough to suppress flavor conversion completely.
As $`\mu `$ and $`\tau `$ lepton number builds up it may happen that the first and second-order contributions to $`\delta V`$ cancel for suitable conditions, allowing briefly for a fast flavor conversion rate. However, the slightest deviation from this condition again suppresses the mixing angle and thus quenches any further flavor conversion. Any attempt to reach flavor equilibrium by neutrino oscillations is robustly self-quenching.
We have mostly studied typical core conditions in the SN. In regions below the neutrino sphere the density is about three orders of magnitude less than our benchmark value of nuclear density, and the temperature may be a factor of 5 smaller than our standard figure of 30 MeV. For such conditions, the presence of muons is strongly suppressed so that the decoherence rate $`D`$ is much smaller than what we have estimated for average core conditions. On the other hand, the in-medium mixing-angle Eq. (20) is still very small if we use $`T_{30}=0.2`$ and $`E_{30}=0.2`$ as characteristic values near the neutrino sphere. Therefore, our conclusion that flavor equilibrium cannot be achieved applies to conditions throughout the SN core.
## V Conclusions
The chemical equilibration between the $`\mu `$ and $`\tau `$ flavors in a SN core by neutrino oscillations would be fast on the neutrino diffusion time scale if the refractive energy shift $`\delta V`$ between $`\nu _\mu `$ and $`\nu _\tau `$ were small. The usual first-order contribution indeed vanishes due to the lack of trapped $`\mu `$\- and $`\tau `$-lepton number, but second-order contributions are large enough to suppress flavor conversion.
In the course of neutrino transport to the stellar surface, $`\mu `$\- and $`\tau `$-lepton number will build up due to the differences between the $`\nu _\mu `$, $`\overline{\nu }_\mu `$, $`\nu _\tau `$, $`\overline{\nu }_\tau `$ diffusion constants . It is conceivable that local conditions can be reached where the refractive energy shift $`\delta V`$ vanishes to all orders. While this cancellation would momentarily lead to a fast rate of flavor conversion, the resulting re-distribution of $`\mu `$ and $`\tau `$ lepton number quickly produces a $`\delta V`$ so large that the conversion rate is suppressed again. The equilibrium condition $`\mu _{\nu _\mu }=\mu _{\nu _\tau }`$ implies a $`\delta V`$ so large that it can never be reached, i.e. the flavor conversion process is inevitably self-quenching.
For the small neutrino mass differences indicated by the experiments, the lack of conversion between $`\nu _e`$ and the other flavors is much easier to understand because the large amount of trapped electron-lepton number causes $`\delta V^{(1)}`$ to be so large that the in-medium mixing angle is easily seen to be vastly suppressed .
The phenomenon of neutrino mixing and neutrino oscillations may have a variety of astrophysical consequences which need to be explored. In the past, most of the attention was focussed on the possibility of detecting evidence for neutrino oscillations in the astrophysical context. However, as neutrino oscillations become more and more experimentally established, the problem of unravelling the neutrino mass matrix may become a less pressing astrophysical preoccupation than the reverse question: given the experimentally measured neutrino parameters, is the effect of flavor violation important or ignorable in a given environment?
We think it is intriguing that the core of a SN is protected from the consequences of flavor-lepton number violation by the phenomenon of neutrino refraction and that second-order effects have to be included. While the significance of the weak-interaction potential for the resonant enhancement of oscillations in the spirit of the MSW effect has been widely acknowledged, the suppression of oscillations in a SN core is another consequence of neutrino refraction with real and important astrophysical ramifications.
## Acknowledgments
We thank Leo Stodolsky for reading the manuscript and several helpful comments. In Munich, this work was partly supported by the Deutsche Forschungsgemeinschaft under grant No. SFB 375. In Copenhagen, it was supported by a grant from the Carlsberg Foundation. |
no-problem/9912/cond-mat9912411.html | ar5iv | text | # Phase separation in disordered exclusion models
## 1 Introduction
The one-dimensional asymmetric simple exclusion process (ASEP) was introduced by Spitzer in 1970 as an example of an interacting stochastic process . In the probabilistic community it has been widely used for rigorous studies of the emergence of hydrodynamic behavior from stochastic microscopic dynamics . Already thirty years ago similar models were considered in the context of biopolymerization , while recent applications have focused on the problem of vehicular traffic flow . The interest of statistical physicists has been further fueled by the discovery of boundary-induced phase transitions as well as the relations to interface growth and directed polymers in random media . In short, the ASEP is a generic model of driven single file transport which combines utmost simplicity with a remarkable richness of behaviors.
Figure 1 illustrates the model. Particles occupy the sites of a one-dimensional lattice subject to the simple exclusion rule (at most one particle per site). In an infinitesimal time interval $`dt`$ particle $`i`$ at site $`x_i`$ attempts a jump to the right (left) with probability $`pdt`$ ($`qdt`$). The jump succeeds if the neighboring site is empty and is suppressed otherwise. In general the jump rates $`p`$ and $`q`$ may depend on both the particle label $`i`$ and the position $`x`$ on the lattice. In much of the paper I will restrict myself to the totally asymmetric case $`q=0`$.
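For readers who wish to experiment with the model, a minimal Monte Carlo sketch is given below; it uses a random-sequential update as a simple approximation to the continuous-time dynamics, and the system size, density and rate distribution are arbitrary illustrative choices.

```python
# Minimal sketch of the totally asymmetric model of Fig. 1 on a ring,
# with either particlewise or sitewise disordered rates.
import numpy as np

def tasep_sweep(pos, rates, L, rng):
    """One sweep: each particle is visited once in random order and hops
    forward with probability given by its rate if the target site is empty."""
    occupied = np.zeros(L, dtype=bool)
    occupied[pos] = True
    for i in rng.permutation(len(pos)):
        target = (pos[i] + 1) % L
        if not occupied[target] and rng.random() < rates[i]:
            occupied[pos[i]] = False
            occupied[target] = True
            pos[i] = target
    return pos

rng = np.random.default_rng(1)
L, N = 1000, 300
pos = np.sort(rng.choice(L, size=N, replace=False))
p_i = rng.uniform(0.5, 1.0, size=N)       # particlewise disorder; for the sitewise
for sweep in range(10000):                # case use p(pos[i]) instead of p_i
    pos = tasep_sweep(pos, p_i, L, rng)
```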
In the present article I want to address the effects that quenched disorder in the jump rates has on the behavior of the ASEP. Disorder effects can be quite dramatic in one-dimensional single file systems, as is evidenced by the everyday experience with platoons and traffic jams caused by slow vehicles, accidents or road construction on highways . Also in the context of driven transport on biomolecules a certain amount of disorder seems unavoidable .
It is natural to distinguish between particlewise disorder with $`p=p_i`$, $`q=q_i`$ independent of $`x`$, and sitewise disorder with $`p=p(x)`$, $`q=q(x)`$ independent of $`i`$. Both particlewise and sitewise disorder generically induce phase separation in the sense that, for global particle densities $`\rho `$ in a certain interval $`[\rho _c^{-},\rho _c^+]`$, the system breaks up into regions of density $`\rho _c^{-}`$ and $`\rho _c^+`$ separated by sharp density discontinuities (“shocks”). These shocks are typically associated with bottlenecks, i.e. slow particles or slow sites in the particlewise and sitewise cases, respectively. If the system is started from a homogeneous initial condition, the average size $`\xi `$ of phase separated regions grows as a power law
$$\xi (t)\sim t^{1/z},$$
(1)
defining a dynamic exponent $`z`$; an example of the time evolution in the particlewise case is shown in Figure 2. Two kinds of questions will therefore be asked in the following: First, how can the density interval $`[\rho _c^{-},\rho _c^+]`$ of phase separation be determined? Second, what is the value of the dynamic exponent, and how does it depend on the distribution of the disordered jump rates?
For particlewise disorder a number of exact analytic results are available which have been reviewed elsewhere . This case will therefore only be briefly summarized in Section 2. The more difficult problem of sitewise disorder has been studied numerically by Tripathy and Barma and others , but little is known analytically. In Section 3 some progress in this direction will be reported. Specifically, I derive a rigorous bound on the critical densities based on the results for the particlewise case, and obtain predictions for the coarsening behavior for various types of disorder distributions. The relation to directed polymers in random media is briefly discussed in Section 3.4, and some conclusions and open questions are formulated in Section 4.
## 2 Particlewise disorder
### 2.1 Steady state and critical density
For particlewise disorder the configurations of the system are most naturally described in terms of the headways $`u_i=x_{i+1}-x_i-1`$ in front of the particles. The key simplifying feature is that different headways become statistically independent in the steady state, with a geometric distribution
$$P_i(u)=(1-\alpha _i)\alpha _i^u$$
(2)
for the headway in front of particle $`i`$. In the totally asymmetric case the parameters $`\alpha _i`$ are determined by the jump rates $`p_i`$ through the simple relation
$$\alpha _i=v/p_i$$
(3)
where $`v`$ is the (common) mean speed of the particles in the steady state. Eq.(3) expresses the plausible fact that the headways in front of slow particles are larger than in front of fast ones. The geometric distribution (2) remains valid in the case of partial asymmetry, but then (3) is replaced by a more complicated relation . The steady state distribution for the totally asymmetric model with parallel update has a similar form .
In the following we consider the totally asymmetric case and take the $`p_i`$ to be independent random variables with a probability density $`f(p)`$ supported on the interval $`[c,1]`$, with a minimal speed $`c`$ bounded away from zero. Since particles cannot pass each other, it is clear that the steady state speed $`v`$ in an infinite system cannot exceed $`c`$. To compute it, one determines the mean headway in front of particle $`i`$ from (2) and performs the disorder average. In a system of density $`\rho `$ the resulting average headway must be $`(1-\rho )/\rho `$. This yields the implicit equation
$$\rho =\left[1+v\int _c^1\frac{dpf(p)}{p-v}\right]^{-1}$$
(4)
for the speed as a function of density. Two cases are to be distinguished. If the integral on the right hand side of (4) diverges in the limit $`v\to c`$, then $`v(\rho )<c`$ for all $`\rho >0`$. In this case the $`\alpha _i`$ in (2) are bounded away from unity for all $`i`$, the headway distributions are normalizable, and the system remains homogeneous. If, on the other hand, the integral remains finite in this limit, then the right hand side of (4) evaluated at $`v=c`$ defines a critical density $`\rho _c`$ such that $`v(\rho )\equiv c`$ in the entire interval $`[0,\rho _c]`$. For the slowest particles with $`p_i\approx c`$ this implies that the headway distributions (2) are no longer normalizable. Large gaps appear in front of these particles, and the faster particles form platoons behind them, a phenomenon familiar from vehicular traffic on country roads . The system phase separates into regions of density $`\rho _c^{-}=0`$ (the gaps) and regions of density $`\rho _c^+=\rho _c`$ (the platoons).
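The implicit equation (4) is easily inverted numerically for a given disorder distribution. The sketch below uses $`f(p)\propto (p-c)^n`$ with $`n=1`$ and $`c=0.5`$ purely as an illustration; for this choice the critical density is $`\rho _c=1/3`$.

```python
# Evaluate Eq. (4) numerically for f(p) proportional to (p-c)^n on [c,1].
import numpy as np

c, n = 0.5, 1.0
p = np.linspace(c + 1e-9, 1.0, 200000)
f = (n + 1) * (p - c)**n / (1.0 - c)**(n + 1)        # normalized density

def rho_of_v(v):
    return 1.0 / (1.0 + v * np.trapz(f / (p - v), p))   # Eq. (4)

rho_c = rho_of_v(c)        # the integral stays finite at v = c for n > 0
print(rho_c)               # approximately 1/3: densities below rho_c move at speed c
```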
It is evident from (4) that the condition for phase separation translates into a condition on the behavior of the disorder distribution $`f(p)`$ near $`p=c`$. Introducing an exponent $`n`$ through
$$f(p)\propto (p-c)^n,p\to c,$$
(5)
phase separation occurs iff $`n>0`$. At the critical point $`\rho =\rho _c`$ the disorder averaged headway distribution has a power law tail $`\propto u^{-(n+2)}`$ . Evans has emphasized the close analogy to Bose-Einstein condensation, where $`f(p)`$ plays the role of a density of states, and the slowest particle in the system corresponds to the quantum mechanical ground state.
### 2.2 Coarsening behavior
No exact results pertaining to the dynamics of phase separation are available, apart from the observation that the existence of a well-defined hydrodynamic limit implies that inhomogeneities are restricted to scales smaller than $`t`$, and therefore
$$\underset{t\to \mathrm{\infty }}{lim}\xi (t)/t=0.$$
(6)
Considerable evidence has however accumulated in favor of the idea that the coarsening behavior for particlewise disorder can be described in terms of a simpler, deterministic model, in which particles move ballistically on the real line with fixed random speeds and coalesce upon overtaking. Such a model was first introduced by Newell , and later a detailed kinetic theory was worked out by Ben-Naim, Krapivsky and Redner .
Within the deterministic model, the dynamic exponent $`z`$ can be determined through a simple extremal statistics argument. The key idea is that the particles heading the platoons at time $`t`$ are those with the smallest speeds among a group of the order of $`\xi (t)`$ particles. Elementary probability theory suffices to show that, for a probability density behaving as (5), these extremal speeds cluster in an interval of size $`\xi ^{-1/(n+1)}`$ above the minimal speed $`c`$. Therefore the speed difference $`\mathrm{\Delta }v`$ between two platoons is of the order $`\xi ^{-1/(n+1)}`$, and the faster platoon will merge with the slower one on a time scale $`t\sim \xi /\mathrm{\Delta }v\sim \xi ^{(n+2)/(n+1)}`$. Inverting this relation one obtains the coarsening law (1) with
$$z=\frac{n+2}{n+1}.$$
(7)
Numerical results supporting (7) have been reported for models with parallel update , in simulations of jam dissolution and in a simulation study of a system with open boundaries .
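The extremal statistics argument can also be checked directly within the deterministic coalescence picture. The following sketch (illustrative parameters; platoon leaders identified through their free-flight trajectories) measures the mean platoon size as a function of time, which for large times should grow roughly as $`t^{(n+1)/(n+2)}`$.

```python
# Ballistic-coalescence sketch behind Eq. (7): fixed random speeds with
# f(v) ~ (v-c)^n near the minimum c; a platoon moves at the speed of its
# slowest (leading) member, so a particle still leads a platoon at time t
# iff its free trajectory lies below every free trajectory ahead of it.
import numpy as np

rng = np.random.default_rng(0)
N, n, c = 200000, 1.0, 1.0
x = np.sort(rng.uniform(0.0, N, size=N))        # unit mean spacing on a line
v = c + rng.random(N)**(1.0 / (n + 1))          # speed density ~ (v-c)^n on [c, c+1]

def mean_platoon_size(t):
    y = x + v * t                                        # free-flight positions
    ahead_min = np.minimum.accumulate(y[::-1])[::-1]     # min of y over j >= i
    lead = np.empty(N, dtype=bool)
    lead[-1] = True
    lead[:-1] = y[:-1] < ahead_min[1:]
    return N / lead.sum()

for t in [1e2, 1e3, 1e4, 1e5]:
    print(t, mean_platoon_size(t))
```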
## 3 Sitewise disorder
### 3.1 Disorder types
We distinguish three cases which will turn out to represent different classes of coarsening behavior. For type I disorder the dynamics is totally asymmetric, $`q(x)\equiv 0`$, and the forward rates $`p(x)`$ are independent random variables in an interval $`[c,1]`$, with a minimal rate $`c>0`$. The simplest (and typical) example is that of binary rates, with probability density
$$f(p)=\varphi \delta (p-c)+(1-\varphi )\delta (p-1)$$
(8)
where $`\varphi \in (0,1)`$ denotes the fraction of slow sites. Type II disorder is similar to type I except that the support of the probability density $`f(p)`$ extends all the way to $`p=0`$, i.e. the minimal rate $`c=0`$. For type II disorder nontrivial dynamics occurs only for continuous $`f(p)`$. As in the models with particlewise disorder (eq.(5)), the important feature of $`f(p)`$ is the behavior near $`p=0`$, which can be characterized by an exponent $`n`$ through the relation
$$f(p)\propto p^n,p\to 0.$$
(9)
Finally, for type III disorder not only the strength, but also the direction of the bias is spatially random. A majority of sites has a bias to the right, say, with $`p(x)>q(x)`$, while a minority has $`q(x)>p(x)`$. If the one-dimensional lattice is viewed as a transport path in a higher-dimensional disordered structure, such as a percolation cluster, the stretches of minority sites can be interpreted as “backbends” where the path turns back against the direction of the driving field . Compared to the strong disorder effects induced by the backbends, the randomness in the strength of the bias is irrelevant. Therefore a representative example of type III disorder is a model where the strength of the bias is constant, and only its direction varies. This corresponds to setting $`q(x)=1-p(x)`$ and choosing the $`p(x)`$ from a binary distribution which is symmetric around $`p=1/2`$,
$$f(p)=(1-\varphi )\delta (p-b)+\varphi \delta (p-(1-b)).$$
(10)
Here $`b\in (1/2,1)`$ denotes the strength of the bias and $`\varphi \in (0,1/2)`$ the fraction of minority sites.
It is easy to see that for type II and III disorder the stationary particle current vanishes in the infinite system limit, due to the existence of arbitrarily large stretches of arbitrarily small jump rates (for type II) or arbitrarily long backbends (for type III). As a consequence phase separation occurs at any density $`\rho \in (0,1)`$, i.e. $`\rho _c^{-}=0`$ and $`\rho _c^+=1`$. For type I disorder the existence of a nontrivial current function $`J(\rho )>0`$ describing the large scale dynamics of density profiles has been rigorously established, and it has been shown that $`J(\rho )`$ is concave in the sense that $`J^{\prime \prime }(\rho )\le 0`$ . However, in contrast to the models with particlewise disorder the stationary state is not known, and therefore an explicit computation of $`J(\rho )`$ is not possible. In the next section some bounds on $`J(\rho )`$ will be derived and used to bound the critical densities for type I disorder. The coarsening dynamics for all three cases will be addressed in Section 3.3.
### 3.2 Bounds on the critical density for type I disorder
We first collect some obvious properties of $`J(\rho )`$. Due to particle-hole symmetry we have $`J(\rho )=J(1-\rho )`$. The current is bounded from below by the current $`c\rho (1-\rho )`$ of a pure system with all rates equal to the minimal rate $`c`$, and from above by the current $`\rho (1-\rho )`$ of the system with all rates equal to unity. A more precise upper bound is obtained by observing that in the infinite system there are arbitrarily large stretches with rates arbitrarily close to $`c`$. The maximum current that can be driven through such a stretch is $`c/4`$, the maximum value of $`c\rho (1-\rho )`$. We conclude that
$$c\rho (1-\rho )\le J(\rho )\le \mathrm{min}[c/4,\rho (1-\rho )].$$
(11)
Numerical simulations of site-disordered exclusion models and related growth models indicate that the upper bound $`c/4`$ is attained in a finite density interval around $`\rho =1/2`$, which coincides with the phase separation interval $`[\rho _c^{-},\rho _c^+]`$; by particle-hole symmetry $`\rho _c^{-}=1-\rho _c^+\equiv \rho _c`$. In the following our strategy will be to derive optimal lower and upper bounds $`J_<(\rho )`$ and $`J_>(\rho )`$ on the stationary current, which are then translated into lower and upper bounds $`\rho _c^<`$, $`\rho _c^>`$ on $`\rho _c`$ through the relation
$$J_>(\rho _c^<)=J_<(\rho _c^>)=c/4.$$
(12)
The lower current bound in (11) does not give rise to any nontrivial density bound, while the upper bound $`\rho (1-\rho )`$ yields
$$\rho _c\ge (1-\sqrt{1-c})/2.$$
(13)
For the case of binary disorder (eq.(8)) an improved lower bound on the current was derived by Tripathy and Barma by considering a finite ring of $`L`$ sites, $`N=\rho L`$ particles and $`N_s=\varphi L`$ slow sites. They start from the observation that the maximum current that can be driven through a stretch of slow sites is a decreasing function of the length of the stretch (we will return to this point below in Section 3.3). It is therefore plausible (though not rigorously established) that for given $`L`$, $`N`$ and $`N_s`$ the stationary current will be minimal in the fully segregated limit where all slow sites form a single large stretch. For $`L\to \mathrm{\infty }`$ the fully segregated system can be treated as two connected homogeneous systems with different densities, which are fixed through the constraints of equal currents and total particle number. This yields the upper density bound
$$\rho _c\le (1-(1-\varphi )\sqrt{1-c})/2.$$
(14)
In the dilute limit $`\varphi \to 0`$ the bounds (13) and (14) coincide, and give $`\rho _c=(1-\sqrt{1-c})/2`$ exactly. It should however be noted that this limit does not correspond to the case of a single defect site, since the maximal current that can be driven through a single defect is larger than $`c/4`$ (see also Section 3.3).
The lower bound (13) can be improved by comparing the disordered exclusion model to a zero range process (ZRP) with the same set of jump rates $`\{p(x)\}`$. In the ZRP an arbitrary number of particles is allowed on any site , and therefore any attempted jump succeeds. As a consequence the stationary state of the ZRP is a product measure, with the occupation numbers at different sites being independent, for any choice of jump rates depending on the position $`x`$ and on the number of particles at the site . Here we consider the case where the rate at which a particle is transferred from site $`x`$ to $`x+1`$ is equal to $`p(x)`$ independent of the number of particles at $`x`$, provided the latter is not zero. It is then obvious (and can be proved through waiting time considerations) that the particle current $`J_{\mathrm{ZRP}}(\rho )`$ of the ZRP provides an upper bound to the current $`J(\rho )`$ of the ASEP.
In fact the disordered ZRP is equivalent to the ASEP with particlewise disorder, with the ZRP occupation numbers representing the headways in the ASEP . The ZRP current is equal to the particle speed $`v`$ of the ASEP, which is given by (4) for any disorder distribution $`f(p)`$. The ZRP density is equal to the mean headway of the ASEP, and is therefore related to the ASEP density through $`\rho _{\mathrm{ZRP}}=1/\rho _{\mathrm{ASEP}}-1`$. Evaluating the integral in (4) for the binary distribution (8) yields
$$\rho _{\mathrm{ZRP}}=J_{\mathrm{ZRP}}\left(\frac{\varphi }{c-J_{\mathrm{ZRP}}}+\frac{1-\varphi }{1-J_{\mathrm{ZRP}}}\right),$$
(15)
and setting $`J_>=J_{\mathrm{ZRP}}`$ in (12) we obtain the density bound
$$\rho _c\ge \frac{\varphi }{3}+\frac{c(1-\varphi )}{4-c},$$
(16)
which improves (13) for small $`c`$. In particular, for $`c\to 0`$ we have $`\rho _c\ge \varphi /3`$, which proves, remarkably, that the homogeneous phase $`\rho <\rho _c`$ persists even when the slow sites become complete blockages. In Figure 3 the bounds (13), (14) and (16) are compared to numerical data.
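As a quick numerical illustration of how these bounds compare, the short Python sketch below (our own illustration, not part of the original analysis) evaluates the lower bounds (13) and (16) and the segregation upper bound (14) for a few values of the minimal rate $`c`$ at fixed slow-site fraction $`\varphi `$; the best available lower bound is the larger of the two.

```python
import numpy as np

def rho_c_lower_pure(c):
    """Lower bound (13): solve rho*(1 - rho) = c/4 for the pure-system upper bound on J."""
    return (1.0 - np.sqrt(1.0 - c)) / 2.0

def rho_c_lower_zrp(c, phi):
    """Lower bound (16), obtained by setting J_ZRP = c/4 in the ZRP relation (15)."""
    return phi / 3.0 + c * (1.0 - phi) / (4.0 - c)

def rho_c_upper_segregated(c, phi):
    """Upper bound (14) from the fully segregated ring argument of Tripathy and Barma."""
    return (1.0 - (1.0 - phi) * np.sqrt(1.0 - c)) / 2.0

phi = 0.3  # fraction of slow sites (illustrative value)
for c in (0.05, 0.2, 0.5, 0.8):
    lower = max(rho_c_lower_pure(c), rho_c_lower_zrp(c, phi))
    upper = rho_c_upper_segregated(c, phi)
    print(f"c = {c:4.2f}:  {lower:.3f} <= rho_c <= {upper:.3f}")
```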
### 3.3 Coarsening behavior
At least for type I disorder the existence of a hydrodynamic limit implies that the relation (6) carries over to the sitewise case. To obtain a finer estimate of the coarsening scale $`\xi (t)`$ we rely on extremal statistics arguments similar to those used in Section 2.2. A schematic phase separated density profile is shown in Figure 4. Two “antishocks” at positions $`x_1`$ and $`x_2`$, where the density jumps from $`\rho _c^+=1-\rho _c`$ to $`\rho _c^{-}=\rho _c`$, mark bottleneck regions of particularly slow rates, which support maximum currents $`j_1`$ and $`j_2`$. If the bottleneck in the downstream direction is slightly more restrictive, in the sense that $`\mathrm{\Delta }j=j_1-j_2>0`$, then the low density region between the bottlenecks will slowly fill in and disappear at a time
$$t\approx (1-2\rho _c)\xi /\mathrm{\Delta }j.$$
(17)
If the statistics of extremal bottlenecks is known, the typical current difference $`\mathrm{\Delta }j`$ can be estimated as a function of $`\xi `$ and (17) yields a prediction for the coarsening law $`\xi (t)`$. In the following this will be carried out for the different disorder types.
#### 3.3.1 Type I disorder
Consider first the conceptually simplest case of the binary disorder distribution (8). We expect long stretches of slow sites to constitute the most restrictive bottlenecks. For a quantitative analysis we would require the maximum current $`j_{\mathrm{max}}(c,\ell )`$ which can be driven through a stretch of $`\ell `$ slow sites with jump rates $`c`$ embedded in an infinite system of sites with jump rates 1. Already for $`\ell =1`$ the computation of $`j_{\mathrm{max}}(c,\ell )`$ is a difficult unsolved problem . However for large $`\ell `$ we can make progress by replacing the stretch by a finite system of $`\ell `$ sites with uniform jump rates $`c`$ and periodic or open boundary conditions, for which the maximum current is known . For both kinds of boundary conditions the current approaches the $`\ell \to \infty `$ limit $`c/4`$ from above, with a leading correction proportional to $`1/\ell `$. Thus we expect, for large $`\ell `$,
$$j_{\mathrm{max}}(c,\ell )\approx (c/4)(1+a/\ell )+\mathcal{O}(1/\ell ^2),$$
(18)
where $`a`$ is a positive constant of order unity.
Since the probability distribution of the lengths of slow stretches is
$$P(\ell )=(1-\varphi )\varphi ^{\ell },$$
(19)
the longest stretch in a region of size $`\xi `$ is of the order of
$$\ell _{\mathrm{max}}\approx \frac{\mathrm{ln}\xi }{\mathrm{ln}(1/\varphi )}.$$
(20)
Note that $`\ell _{\mathrm{max}}\ll \xi `$, which is consistent with the assumption of well-localized bottlenecks inherent in Figure 4. Using (18) we see that the currents supported by the longest stretches exceed $`c/4`$ by an amount of the order of $`c/\ell _{\mathrm{max}}`$, and therefore
$$\mathrm{\Delta }j\sim \frac{c\mathrm{ln}(1/\varphi )}{\mathrm{ln}\xi }.$$
(21)
Inserting this into (17) the leading order coarsening law is obtained as
$$\xi (t)\sim \frac{t/t_0}{\mathrm{ln}(t/t_0)}$$
(22)
with a characteristic time scale $`t_0\sim (1-2\rho _c)/[c\mathrm{ln}(1/\varphi )]`$. This argument was formulated earlier in the context of phase-disordered growth models, where numerical evidence in favor of the coarsening law (22) was also presented .
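To see how the logarithmic correction in (22) plays out numerically, here is a small Python sketch (our illustration only; the parameter values and the order-unity prefactor are arbitrary) that inverts the implicit relation $`\xi \mathrm{ln}\xi \approx t/t_0`$ following from (17) and (21):

```python
import math

def coarsening_scale(t, c, phi, rho_c, t0_prefactor=1.0):
    """Estimate the coarsening scale xi(t) for binary type I disorder by inverting
    xi * ln(xi) = t / t0, with t0 ~ (1 - 2*rho_c) / (c * ln(1/phi)).
    Order-unity constants are absorbed into t0_prefactor."""
    t0 = t0_prefactor * (1.0 - 2.0 * rho_c) / (c * math.log(1.0 / phi))
    target = t / t0
    lo, hi = 2.0, max(4.0, target)       # xi*ln(xi) is increasing for xi > 1
    while hi * math.log(hi) < target:    # make sure the bracket contains the root
        hi *= 2.0
    for _ in range(100):                 # simple bisection
        mid = 0.5 * (lo + hi)
        if mid * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for t in (1e2, 1e4, 1e6):
    xi = coarsening_scale(t, c=0.3, phi=0.3, rho_c=0.2)
    print(f"t = {t:.0e}:  xi ~ {xi:.1f},  xi/t = {xi/t:.3g}")
```

The ratio $`\xi /t`$ printed in the last column decreases slowly with $`t`$, which is the logarithmic correction to the naive $`z=1`$ scaling.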
For continuous disorder distributions $`f(p)`$ the identification of the relevant bottlenecks is a little more subtle. Consider a region of size $`\ell `$ where all rates satisfy $`p(x)\le c+ϵ`$. The maximum current through such a region can then be estimated as
$$j_{\mathrm{max}}\approx \frac{c+ϵ}{4}\left(1+\frac{a}{\ell }\right)\approx c/4+ϵ/4+ca/4\ell \equiv c/4+j.$$
(23)
If $`f(p)`$ behaves as in (5) for $`p\to c`$ the probability of the region is of the order of $`ϵ^{(n+1)\ell }`$. The probability distribution of $`j`$ can then be written as
$$P(j)\sim \int d\ell \int dϵ\,ϵ^{(n+1)\ell }\,\delta (j-(ϵ+ca/\ell )/4)$$
$$\sim \int dϵ\,\mathrm{exp}[-(n+1)ca\mathrm{ln}(1/ϵ)/(4j-ϵ)].$$
(24)
Evaluating the last integral at the saddle point yields
$$P(j)\sim \mathrm{exp}[-(n+1)ca\mathrm{ln}^2(1/j)/j].$$
(25)
The current scale $`\mathrm{\Delta }j`$ of the most restrictive among $`\xi `$ bottlenecks is obtained by setting $`P(\mathrm{\Delta }j)\sim 1/\xi `$, which gives
$$\mathrm{\Delta }j\sim \frac{[\mathrm{ln}(\mathrm{ln}\xi )]^2}{\mathrm{ln}\xi },$$
(26)
and the corresponding coarsening law reads, to leading order in $`t`$,
$$\xi (t)\sim \frac{t[\mathrm{ln}(\mathrm{ln}t)]^2}{\mathrm{ln}t}$$
(27)
which, for most purposes, is indistinguishable from (22).
#### 3.3.2 Type II and III disorder
For type II disorder with a continuous probability distribution $`f(p)`$, characterized by (9), the expression (23) for the maximum current supported by a slow stretch of length $`\ell `$ applies with $`c=0`$. The distribution of $`j_{\mathrm{max}}`$ then becomes
$$P(j_{\mathrm{max}})\sim \int d\ell \int dϵ\,ϵ^{(n+1)\ell }\,\delta (j_{\mathrm{max}}-ϵ(1+a/\ell )/4)$$
$$\sim \int d\ell \,\mathrm{exp}[-(n+1)\ell \mathrm{ln}((1+a/\ell )/4j_{\mathrm{max}})].$$
(28)
Now the maximum of the exponent evidently occurs at $`\ell =1`$, i.e. the dominant bottlenecks are individual slow sites. The distribution of the currents supported by the bottlenecks is then simply given by the jump rate distribution $`f(p)`$ itself, and the situation reduces to that analyzed in the case of particlewise disorder, Section 2.2. In particular, the coarsening exponent $`z`$ for type II sitewise disorder is also given by (7).
For type III disorder with distribution (10) the dominant bottlenecks are long backbends, i.e. stretches of minority sites at which the local bias is directed against the mean flow direction. The maximum current that can be driven through a backbend of length $`\ell `$ is exponentially small in $`\ell `$, and is given by
$$j_{\mathrm{max}}(\ell )\sim \mathrm{exp}[-(1/2)\ell \mathrm{ln}(b/(1-b))].$$
(29)
Combining this with the probability distribution (19) of backbend lengths it follows that $`j_{\mathrm{max}}`$ is distributed according to a power law,
$$P(j_{\mathrm{max}})\sim (j_{\mathrm{max}})^{2\theta ^{-1}-1}$$
(30)
where
$$\theta =\frac{\mathrm{ln}[b/(1-b)]}{\mathrm{ln}[1/\varphi ]}.$$
(31)
Since the largest backbend in a region of size $`\xi `$ is of length $`\ell \sim \mathrm{ln}\xi \ll \xi `$, we can employ a coarse grained picture in which the backbends are shrunk to individual sites with a jump rate distribution given by (30), thus effectively reducing the problem to type II disorder with the exponent $`n`$ in (9) given by $`n=2/\theta -1`$. The coarsening exponent for the disorder distribution (10) is then obtained from (7) as
$$z=1+\theta /2.$$
(32)
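The chain of substitutions above is easy to mis-track, so here is a tiny Python sketch (ours, purely for illustration) that maps a type III disorder distribution $`(b,\varphi )`$ onto the exponent $`\theta `$ of (31), the effective type II exponent $`n=2/\theta -1`$, and the coarsening exponent $`z=1+\theta /2`$ of (32):

```python
import math

def type_iii_exponents(b, phi):
    """Map type III parameters (b, phi) onto theta from (31), the effective
    power-law exponent n = 2/theta - 1, and the coarsening exponent z from (32)."""
    assert 0.5 < b < 1.0 and 0.0 < phi < 0.5
    theta = math.log(b / (1.0 - b)) / math.log(1.0 / phi)
    n_eff = 2.0 / theta - 1.0
    z = 1.0 + theta / 2.0
    return theta, n_eff, z

for b, phi in [(0.6, 0.1), (0.75, 0.25), (0.9, 0.1)]:
    theta, n_eff, z = type_iii_exponents(b, phi)
    print(f"b={b}, phi={phi}:  theta={theta:.3f}  n_eff={n_eff:.3f}  z={z:.3f}")
    # consistency check against the type II scaling E ~ L^{(n+2)/(n+1)} quoted in (35):
    assert abs(z - (n_eff + 2.0) / (n_eff + 1.0)) < 1e-12
```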
### 3.4 Relation to directed polymers
Using the waiting time approach the site disordered ASEP can be mapped to a zero temperature directed polymer (DP) with point and columnar disorder . In that context the coarsening law $`\xi (t)`$ describes the disorder-induced transverse wandering of the polymer, which can be estimated using variable range hopping arguments and the analogy to Lifshitz tails for one-dimensional disordered Schrödinger operators .
To see that the results derived for the DP are consistent with those obtained above, it is important to recall that the waiting time mapping transforms the time $`t`$ of the ASEP into the energy of the DP. For type I disorder the transverse wandering $`\delta x`$ of the DP was found to increase with its length $`L`$ as
$$\delta x\sim L/(\mathrm{ln}L)^2,$$
(33)
while the ground state energy behaves as $`E\sim L/\mathrm{ln}L`$ to leading order. Combining the two results and identifying $`E\sim t`$ the coarsening law (22) follows.
For type II disorder the power law (9) of the probability distribution at small $`p`$ translates into a power law tail
$$P(\tau )\sim \tau ^{-(2+n)},\qquad \tau \to \infty $$
(34)
in the distribution of waiting times or energies $`\tau =1/p`$. Directed polymers in the presence of columnar disorder with a power law distribution were considered in Ref., where it was shown that the wandering is typically ballistic, $`\delta x\sim L`$, while the ground state energy scales with length as
$$E\sim L^{(n+2)/(n+1)}$$
(35)
in agreement with (7). It is worthwhile to point out that in the DP context the scaling laws (33) and (35) were also confirmed numerically .
## 4 Summary and open questions
In this paper I have described some recent progress in our understanding of disorder effects in asymmetric simple exclusion models. A common feature of both particlewise and sitewise disordered systems is the appearance of phase separation in an interval of densities, which is macroscopically characterized by a linear portion in the current-density relation $`J(\rho )`$; in the particlewise case $`J(\rho )=c\rho `$ for $`\rho <\rho _c`$, while in the sitewise case $`J(\rho )=c/4`$ for $`\rho _c\le \rho \le 1-\rho _c`$. An interesting open question concerns the connection between phase separation and linearity of $`J(\rho )`$, which is reminiscent of the role that the convexity of thermodynamical potentials plays for the stability of equilibrium systems. While it is obvious that phase separation implies a linear segment in $`J(\rho )`$, the converse statement has, to my knowledge, not been established. To prove that it is false, it would be sufficient to find a (noisy!) exclusion type model with a homogeneous stationary state and a linear current-density relation (deterministic systems with linear $`J(\rho )`$ are well known ).
The dynamics of phase separation has been explored in the framework of scaling arguments, which can be formulated in a similar way both for particlewise and sitewise disorder. In the particlewise case the relevant bottlenecks which determine the positions of domain boundaries are always individual slow particles, while in the sitewise case with type I and III disorder the bottlenecks are formed collectively by many defects. For type I disorder this implies a certain universality of the coarsening law, in the sense that the exponent $`z`$ in (1) is $`z=1`$ independent of the underlying disorder distribution; the additional logarithmic corrections in (22,27) ensure the consistency with the rigorous result (6). This is somewhat analogous to the case of finite temperature directed polymers with columnar defects, where universal scaling laws arise from the thermal averaging over large spatial regions .
A numerical confirmation of the predictions for the coarsening dynamics in the case of sitewise disorder would be most welcome. For type II disorder this should be relatively straightforward; however, in the cases of type I and III disorder the behavior is dominated by exponentially rare regions, which may make it hard to reach asymptopia.
Acknowledgements. The results on particlewise disorder were obtained in collaboration with Pablo Ferrari and Timo Seppäläinen, while the material on sitewise disorder is joint work with Mustansir Barma and Goutam Tripathy. Support by DAAD within the PROBRAL programme and by DFG within SFB 237 is gratefully acknowledged.
## 1. Introduction
The VLA FIRST Bright Quasar survey (FBQS) aims to define a sample of quasars that bridges the gap between traditional radio-loud and radio-quiet objects. Given the limiting radio flux density of 1 mJy at 1400 MHz (Becker, White, & Helfand 1995; hereafter BWH (95)), many of the optically bright quasars discovered by the FIRST survey fall near the traditional division between radio-loud and radio-quiet objects. Previous radio-selected quasar surveys have not reached deep enough to probe this regime. In a pilot study for the FBQS, Gregg et al. (1996; hereafter Paper I ) developed criteria to create a candidate list based on matching the FIRST survey to the Cambridge Automated Plate Measuring Machine (APM) catalog of POSS-I objects (McMahon & Irwin (1992)). Applying these criteria to the catalog of FIRST sources from the initial 2682 square degrees surveyed in the north Galactic cap (White, Becker, Helfand & Gregg 1997; hereafter WBHG (97)), a candidate list of 1238 objects with extinction-corrected, recalibrated magnitudes brighter than 17.8 mag on the POSS-I $`E`$ plates has been assembled. We have now collected optical spectra for more than 90% of the candidates and have identified 467 new quasars in addition to the 169 that were previously known in this region.
Quasars were originally discovered through their radio emission, but only $`\sim 10`$% of them are radio-loud (we define radio-loud quasars as those with a radio-loudness parameter $`R^{*}`$ greater than 10; Stocke et al. (1992); see §2.5 for further discussion). There are now several large surveys for optically selected quasars that are under way or complete: e.g., the Palomar-Green survey (Green, Schmidt, & Liebert (1986)), the Large Bright Quasar Survey (Hewett, Foltz & Chaffee 1995; hereafter LBQS ), the Edinburgh Quasar Survey (Goldschmidt et al. (1992)), and the Hamburg/ESO survey (Wisotzki et al. (1996); Hagen, Engels & Reimers (1999)). The FBQS survey will produce a sample of radio-selected quasar spectra that is comparable in size and quality to the largest existing optical surveys. In fact, the catalog published in this paper already contains more $`z>0.2`$ quasars brighter than $`B=18`$ than does the LBQS (340 for the FBQS vs. 319 for the LBQS) because even though radio quasars are rarer, the FBQS covers a substantially larger sky area than the LBQS (2682 deg$`^2`$ vs. 454 deg$`^2`$ for the LBQS.)
In this paper we present this new radio-selected sample of quasars. We discuss the criteria used to define the FBQS sample, as well as a new technique that could be used to make even more efficient samples (§2). We describe the optical spectroscopy that was carried out (§3), present the results of the spectroscopy including spectroscopic classifications for all objects (§4), and briefly discuss the results (§5). Detailed analysis of the FBQS sample, including the spectral properties of the new quasars, will be deferred to other papers; here our focus is on defining the sample and on presenting the basic data upon which subsequent work will be based.
## 2. The Sample
The primary catalog for the FBQS is the VLA FIRST survey which, as of July 1999, covers $`\sim 6000`$ deg$`^2`$ (BWH (95), WBHG (97); see also the FIRST home page at http://sundog.stsci.edu). However, at this time the vast majority of optical spectroscopy has been restricted to objects drawn from the smaller area covered by the 1997Apr24 version of the FIRST northern Galactic cap catalog. For the purposes of this paper, we will restrict the discussion to candidates in the north Galactic cap between declinations of $`+22`$ and $`+43`$ degrees, with Right Ascensions ranging from approximately $`7`$ to $`17`$ hours; the area covered is 2682 square degrees. The spatial distribution of the candidates included in the paper is shown in Figure 1. The selection criteria for membership in the sample are:
* The radio and optical positions must coincide to better than 1.2 arcsec.
* The recalibrated, extinction-corrected optical magnitude on the POSS-I red plate $`E\le 17.8`$. (The $`O`$ magnitude roughly corresponds to $`B`$ and $`E`$ to $`R`$ or to Gunn $`r`$.)
* The optical morphology of the object must be stellar on at least one of the two POSS-I plates.
* The POSS-I color must be bluer than $`O-E=2`$.
We avoid as far as possible any restriction on the radio properties of sources in the sample, other than requiring them to be in the FIRST catalog, which has its own selection effects. These criteria were based on the analysis presented in Paper I and have been found to be quite liberal, excluding very few potential quasars. The criteria are discussed in detail below.
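For a concrete sense of how these cuts act on a matched radio–optical catalog, the short Python sketch below applies them to arrays of candidate properties. It is purely illustrative (the column names and the function are our own inventions; the thresholds and the extinction coefficients are those quoted in the text and in §2.2).

```python
import numpy as np

def fbqs_candidate_mask(sep_arcsec, E_mag, O_mag, ebv, stellar_O, stellar_E):
    """Apply the four FBQS selection cuts to arrays of matched FIRST/APM sources.
    Illustrative sketch only, not the pipeline actually used by the survey."""
    A_E = 2.7 * ebv                      # red-plate extinction, A(E) = 2.7 E(B-V)
    A_O = 4.4 * ebv                      # blue-plate extinction, A(O) = 4.4 E(B-V)
    E_corr = E_mag - A_E                 # extinction-corrected magnitudes
    O_corr = O_mag - A_O
    return ((sep_arcsec <= 1.2) &        # radio-optical positional coincidence
            (E_corr <= 17.8) &           # recalibrated, dereddened E magnitude cut
            (stellar_O | stellar_E) &    # stellar on at least one POSS-I plate
            ((O_corr - E_corr) <= 2.0))  # color cut, O - E <= 2
```

A boolean mask returned by such a function would simply be used to select the rows of the matched catalog that enter the candidate list.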
### 2.1. Radio-Optical Positional Coincidence
We require that the radio and optical sources be separated by no more than 1.2 arcsec. A discussion of the astrometric accuracy that allows for such a tight constraint is found in WBHG (97). The FIRST radio positions have been used to correct the APM positions for POSS-I plate distortions before comparing the radio and optical positions (McMahon et al. (1999)). In general, the required close agreement in position excludes quasars that do not have at least a weak core radio component, so the candidate list is biased against completely lobe-dominated radio sources. The distribution of radio-optical separations for confirmed quasars is strongly peaked and falls rapidly beyond 0.5 arcsec (see Fig. 2a). Only 5 (0.8%) of our 636 quasars have separations between 1.1 and 1.2 arcsec, even though that annulus contains 16% of the area searched. Nonetheless, this criterion certainly excludes some quasars that are detected by the FIRST survey. Figure 2(b) shows the fraction of optical candidates that are found to be quasars as a function of separation; it declines steadily from $`\sim 80`$% near 0 arcsec to $`\sim 20`$% at 1.2 arcsec. If we make the conservative assumption that the quasar fraction does not decline further, we estimate that there are $`<10`$ additional quasars with separations between 1.2 and 1.5 arcsec that are excluded from the sample by the separation criterion.
### 2.2. E Magnitude
The original candidate list was limited to optical counterparts that are brighter than 17.5 mag on the POSS-I $`E`$ plate as measured by the APM scans. In fact, the APM $`E`$ magnitude typically overestimates the actual brightness by $`0.3\pm 0.3`$ mag, so the original candidate list went slightly deeper than intended and was not uniform over the survey area. In an attempt to improve the photometric accuracy and uniformity of the sample, the APM magnitudes were subsequently recalibrated plate-by-plate (McMahon et al. (1999)) using magnitudes from the Minnesota Automated Plate Scanner POSS-I catalog (APS; Pennington et al. (1993); the APS databases are supported by the National Science Foundation, the National Aeronautics and Space Administration, and the University of Minnesota, and are available at http://aps.umn.edu/), which are more uniform than the APM magnitudes because they were calibrated on a plate-by-plate basis, and a new limit of 17.8 in $`E`$ was used to redefine the complete sample. We preferred not simply to substitute the APS catalog for the APM because the APS sky coverage is incomplete in the FIRST region and because the APS catalog excludes objects detected on only one of the POSS-I plates, which dramatically reduces the fraction of FIRST sources that have optical identifications since most radio source counterparts are near the plate limit and so appear on only one plate. The latter consideration is not important for the FBQS, but is a disadvantage for most other optical projects.
After the APS recalibration, we find that the APM magnitudes are accurate to better than 0.2 magnitudes rms in both O and E (McMahon et al. (1999)). This accuracy was determined by comparing the magnitudes of objects appearing in the narrow overlap regions at the edges of the POSS-I plates; since the photometric errors are probably worst at the plate edges, for most objects the errors may be even smaller.
As a result of the E magnitude adjustments, the sample was augmented by approximately 100 candidate quasars. Furthermore, there are instances of sources in the APS catalog that are not detected on one or both plates in the APM catalog (the opposite also occurs.) The most common reason for such missing sources is that close objects are blended into a single merged entry. We added 26 such APS-supplement objects to our candidate list; these objects are noted in the comments. Finally, to improve the uniformity of the sample with Galactic latitude, an extinction correction was computed for each candidate object using the $`E(BV)`$ map of Schlegel, Finkbeiner & Davis (1998) with $`A(E)=2.7E(BV)`$ and $`A(O)=4.4E(BV)`$ (estimated from the $`E`$ and $`O`$ bandpasses given in Minkowski & Abell (1963).) These corrections are usually quite small — the median values are $`A(E)=0.058`$ and $`A(O)=0.094`$ — but can be significant at the high and low RA edges of the survey (see Fig. 3). We keep all objects that satisfy $`E17.8`$ and the $`OE`$ color cut described below using the extinction-corrected magnitudes; this adds an additional $`90`$ candidates to the sample.
The extinction-corrected magnitude distribution of identified candidates is shown in Figure 4(a) for various classifications. (The definitions adopted for these object classifications are given in §4; briefly, a “quasar” is any object with broad emission lines.) The efficiency of finding quasars is a strong function of magnitude. Figure 4(b) shows the fraction of candidate objects that are quasars as a function of $`E`$ magnitude. Less than 5% of the objects brighter than $`E=14`$ mag are quasars (most such objects are brighter than $`E=13`$ and so do not appear in Fig. 4); this grows to 70% for candidates fainter than $`E=17`$ mag. Most of the brightest nonquasars are galaxies that the APM has misclassified as stellar.
The very steep increase in the number of quasars with increasing magnitude introduces a strong Malmquist bias into this sample (and other samples of bright quasars.) There is a large pool of quasars just below the magnitude threshold, so the sample is certain to include variable objects that happened to be unusually bright when the POSS-I plates were taken; it will also include quasars with measurement and/or calibration errors that made them look brighter than they really are. When analyzing properties of the sample, readers are cautioned to consider this bias. See our study of the variability of a subset of the FBQS quasars for further discussion of this issue (Helfand et al. 1999b ).
### 2.3. Optical Morphology
Since we are searching for quasars, we require all candidates to be classified by the APM as stellar on at least one POSS-I plate. The APM star/galaxy separator is not infallible, however. Approximately 14% of the quasars found in the FBQS are classified as nonstellar on one of the two POSS-I plates. Since we admit Seyfert galaxies into our quasar sample, some of these nonstellar classifications may be correct. Fully 75% of the objects classified as stellar on both plates turned out to be quasars. In comparison, only 25% of the objects classified as stellar on only one plate proved to be quasars.
Nonetheless, it is likely that some quasars are being lost to the sample because they are being misclassified as galaxies on both plates. A match between the APM catalog and the Véron-Cetty & Véron (1998) quasar catalog found that 10% of the Véron quasars brighter than $`17.8`$ in $`E`$ were classified as galaxies on both POSS-I plates by the APM (Paper I ). However, $`40`$% of such objects are identified as Seyfert galaxies in the Véron catalog, so that the APM galaxy classification is likely to be correct. The FBQS sample is certainly not complete for Seyfert galaxies due to the restriction to point-like optical sources. This morphology criterion similarly makes the FBQS sample somewhat incomplete at low redshifts ($`z0.2`$), where the quasar host galaxy may be bright enough to skew the APM classification.
Requiring a point-like optical morphology may also discriminate against another interesting class of quasar: gravitationally lensed (or binary) objects. Gravitational lenses with two roughly equally bright images separated by $`2`$ to $`10`$ arcseconds are likely to be recorded as a single non-stellar object in the APM catalog; they are also likely to have less accurate positions in both the radio and the optical due to their extent. Such wide lenses are probably best identified using higher-resolution images than can be obtained from photographic Schmidt surveys. However, most gravitationally lensed quasars have smaller separations than this and so will be included in our sample despite not being true point sources. In an imaging search for lenses among the FBQS sample (Schechter et al. (1998)), we have so far found two lensed objects; both have component separations of $`1`$ arcsec or less and are unresolved by the APM, which classifies each as stellar on both plates. We have also identified one binary quasar (Brotherton et al. (1999)) with a separation of 2.3 arcsec, which is similarly classified as stellar on both plates, although it is slightly too faint to be in the FBQS sample.
### 2.4. Optical Color
Lastly, we excluded FIRST sources with very red optical counterparts ($`O-E>2`$) since in the pilot study no quasars were this red. At the risk of missing a few intrinsically red or high-redshift quasars, this cut substantially reduces the number of candidates to observe. A histogram of the colors of all the quasars identified to date (Fig. 5a) shows very few close to the color limit and a very low discovery efficiency near $`O-E=2`$ (Fig. 5b). There are only 7 confirmed quasars with dereddened $`O-E\ge 1.8`$. Although we cannot exclude a large population of red quasars, there are very few quasars near our color boundary, whereas the non-quasar contamination of the candidate list has a strong dependence on color. Figure 5(a) also indicates the color distribution of the BL Lacs, AGN, H II/star-forming galaxies, and normal galaxies identified in our survey.
Our color cut does discriminate against one known class of red quasars: very high redshift quasars ($`z\gtrsim 3.5`$). Hook et al. (1995) describe the evolution of APM/POSS-I colors with redshift and show that when the Lyman forest absorption moves into the $`O`$ band, the $`O-E`$ color becomes dramatically redder. This trend is visible in a plot of $`O-E`$ versus $`z`$ for the FBQS quasars (Fig. 6), where the colors of $`z>3`$ quasars are distinctly redder than those at lower redshifts. A prime goal of extending the FBQS sample to include redder objects would be to find high redshift quasars (e.g., Hook, Becker, McMahon, & White (1998)), but in view of the contamination of the red objects in the APM catalog by galaxies, due mainly to the poor morphology discrimination feasible with the POSS-I plates, we decided it was best for the FBQS to postpone a red quasar search until we have higher quality optical images (e.g., from the Sloan Digital Sky Survey and other deeper, higher resolution CCD surveys.)
In any case, few if any high redshift quasars were missed as a result of our color cut. The limit at $`O-E=2`$ is substantially redder than the typical unreddened quasar color ($`O-E\sim 0.5`$, Fig. 5), so it does allow for significant absorption in the $`O`$ band; and we do not expect to find many $`z>3.5`$ quasars brighter than $`E=17.8`$ mag simply because of the large ultraviolet luminosities required. If there is a closer population of quasars that is heavily reddened by either intrinsic or intervening dust absorption, the absorption at $`E`$ would have to be several magnitudes in order to produce a differential extinction of a magnitude or more in $`O-E`$. Consequently, such objects would also be too faint to be included in this bright sample.
### 2.5. Radio Properties
There are no cuts to the sample based on radio flux density or radio morphology (other than the unavoidable requirement that the radio source have at least a weak core component, as mentioned above.) However, the FIRST catalog itself introduces several minor selection effects (BWH (95); WBHG (97)). The FIRST sensitivity limit is somewhat non-uniform on the sky, with small ($`\sim 15`$% peak-to-peak) variations due to the observing strategy and large variations due to decreased sensitivity in the vicinity of bright sources. The fraction of the survey area affected by sensitivity variations is small: 86% of the FBQS area has a $`5\sigma `$ flux density limit of $`1`$ mJy (the catalog has a hard limit at 1 mJy even when the images are slightly better than that), and 98% of the survey has a limit $`<1.25`$ mJy. The FIRST coverage map (WBHG (97); available through the FIRST web pages) gives the sensitivity as a function of position.
The FIRST survey detection limit applies to the peak flux density of sources rather than to the integrated flux density. Consequently, extended sources with total fluxes greater than 1 mJy may not appear in the catalog because their peaks fall below the detection threshold. For the objects selected in the FBQS, we expect the insensitivity of FIRST to extended sources to contribute less to the incompleteness than the requirement that FBQS quasars must have a nuclear radio component in order to meet the positional coincidence criterion. The latter, for example, makes the FBQS incomplete to the population of radio quasars that have extended, fossil radio lobes but no active nuclear emission (which must exist, though they are rare.)
The FBQS sample constitutes a significantly different population of quasars than has been seen in previous radio surveys. Figure 7 shows the distribution of radio flux densities for the confirmed quasars. The number of quasars increases dramatically with decreasing radio flux density near the FIRST survey limit. This is because at 1 mJy, the FIRST survey is probing the larger, radio-quiet (but obviously not radio-silent) quasar population. This is also clearly seen in Figure 8(a), which plots the monochromatic radio luminosity versus redshift for FBQS quasars. FIRST is sensitive to a large population of low radio-luminosity quasars, especially at low redshift.
In §2.2 we discussed the importance of the Malmquist bias vis-à-vis the $`E`$ magnitude cutoff for the sample. The rapid increase in the FBQS counts toward our 1 mJy flux density cutoff implies that there is also a radio Malmquist bias in our sample. The slight nonuniformity of the FIRST survey flux density limit directly affects the FBQS sample and could create the appearance of large-scale structure, since regions of higher sensitivity will have an excess of quasars. This effect must be accounted for in analyses of large-scale structure in the FBQS sample.
Even though the number of quasars increases sharply below a flux density of several mJy, the number of non-quasar candidates rises even faster such that the efficiency of identifying quasars decreases as the radio flux density decreases (see Fig. 7b). Nonetheless, quasars near the FIRST flux density limit are worth the effort because it is here that the FBQS is making a unique contribution, probing the transition between radio-loud and radio-quiet objects. The distribution of objects in the transition region is discussed further below (§5.4).
### 2.6. How Efficiently Can Quasars Be Selected?
The fraction of FBQS candidates that turns out to be quasars is, as shown above, a function of magnitude, color, radio-optical separation, etc. This raises an interesting question: how efficiently can we select quasars, using only the information that is available before taking spectra? In other words, how well can we maximize the number of quasars discovered per spectrum taken?
It is clear that by setting tight limits on the separations (Fig. 2) and colors (Fig. 5), and by observing only the optically fainter (Fig. 4) and radio-brighter (Fig. 7) objects, we could eliminate many of the FBQS candidates that turn out not to be quasars and so increase our efficiency. However, the completeness of the sample would suffer greatly using such a strategy, and many of the quasar candidates rejected would be among the more interesting types (e.g., radio-quiet, radio-intermediate, and high-redshift objects.)
To explore this question quantitatively, we have applied artificial-intelligence methods to classify the FBQS candidates according to their a priori (before taking spectra) probability of being quasars, $`P(Q)`$. There are a number of classification methods that we could have used (neural nets, nearest-neighbor algorithms, clustering methods, Bayesian classifiers, etc.); we chose to use the oblique decision tree classifier OC1 (Murthy, Kasif, & Salzberg (1994); OC1 is available via anonymous ftp from http://www.cs.jhu.edu/salzberg/announce-oc1.html), which we have found to represent a good compromise between the competing goals of being fast in training and in application and of generating accurate, understandable classification algorithms. In the past this method has been successfully applied to cosmic ray identification in HST images (Salzberg et al. (1995)), to star-galaxy discrimination for the Guide Star Catalog-II (White (1997)), and to the problem of flagging sidelobes appearing in the FIRST catalog (WBHG (97)).
#### 2.6.1 Description of OC1
OC1 takes as input a training set of objects that have known classifications; in our case, we use all objects with spectra as the training set. Each object is described by a vector of numerical features (magnitudes, colors, and so on.) OC1 constructs a decision tree that accurately classifies the training set.
At each node in the tree a linear combination of the feature values plus a constant is computed; if the sum is positive, the right branch of the tree is taken, otherwise the left branch is taken. After each branch, there may be another decision test node, or the tree may terminate (a “leaf” node). To classify an unknown object, one starts at the tree’s root node and works down the decision tree, doing a series of tests and branches, until a leaf node is reached. The OC1 program attempts to produce trees that have pure samples of training-set objects at each leaf node (i.e., in the current case it tries to separate the training objects so that each leaf has nearly all quasars or nearly all non-quasars.) The unknown object is classified as quasar or not depending on which is the majority type of object at the final leaf.
#### 2.6.2 Voting Decision Trees
We have improved on the accuracy of the classification by using not just a single tree, but rather a group of 10 trees that vote. This multiple-tree approach has been shown to be quite effective (Heath, Kasif, & Salzberg (1996)). Searching for a good decision tree is an extremely difficult optimization problem; to find an approximate solution, OC1 uses a complex search algorithm that includes some randomization to avoid the classic problem of getting stuck in local minima in the many-dimensional search space. Thus, one can run OC1 many times using different seeds for the random number generator to produce many different trees.
Heath, Kasif, & Salzberg (1996) used a simple majority voting scheme: classify the object with each tree and then count the number of votes for each class. We have improved on this by using a weighted voting scheme, where each tree splits its vote between the quasar and non-quasar classes depending on the populations of the two types from the training set at that leaf. If an object winds up at a leaf node with $`N`$ training set objects of which $`Q`$ are quasars, the tree’s single vote is split into $`(Q+1)/(N+2)`$ in favor and $`(N-Q+1)/(N+2)`$ against a quasar classification. (The particular form used was derived from the binomial statistics at the leaf.) We considered a variety of other voting schemes, but this one appeared to be the most accurate and robust.
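To make the weighted vote concrete, the following sketch (our own illustration; reading off the training-set counts $`(Q,N)`$ at a leaf is assumed here and is not a documented OC1 interface) combines the votes of an ensemble of trees:

```python
def leaf_vote(Q, N):
    """One tree's weighted vote: (Q+1)/(N+2) in favor of the quasar class,
    (N-Q+1)/(N+2) against, the Laplace-style form quoted in the text."""
    return (Q + 1) / (N + 2)

def quasar_probability(leaf_counts):
    """Average the weighted votes of the ensemble (e.g. 10 trees).
    leaf_counts is a list of (Q, N) pairs, one per tree."""
    votes = [leaf_vote(Q, N) for Q, N in leaf_counts]
    return sum(votes) / len(votes)

# Example: three trees whose leaves contain (quasars, total) = (18, 20), (5, 12), (40, 41)
print(quasar_probability([(18, 20), (5, 12), (40, 41)]))  # ~0.75
```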
#### 2.6.3 Testing the Classification Accuracy
The classifier was tested using 5-fold cross-validation. The training set (which consists of $`\sim 1000`$ objects with spectra) is divided into 5 equal-sized, randomly selected subsets (or folds). Setting aside the first fold, 10 decision trees are constructed by training on the other 4 folds. Then the trees are tested on the first fold, which was not used in the training, computing $`P(Q)`$ for each object in the fold. This process is repeated 5 times, each time holding back a different fold; when complete, we have used each object for testing exactly once. This technique, which is a standard approach, avoids the over-optimistic results for classification accuracy one would get if one simply trained on all the data and then tested the classifier on the same data.
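A minimal sketch of this 5-fold scheme is given below; the classifier itself is left as a placeholder, since OC1 is an external program, so `train_ensemble` and `predict_proba` are hypothetical stand-ins rather than real interfaces.

```python
import random

def five_fold_indices(n_objects, seed=42):
    """Split object indices into 5 randomly assigned, nearly equal-sized folds."""
    idx = list(range(n_objects))
    random.Random(seed).shuffle(idx)
    return [idx[k::5] for k in range(5)]

def cross_validate(objects, labels, train_ensemble, predict_proba):
    """Hold out each fold once, train the tree ensemble on the other four folds,
    and record P(Q) for every held-out object (illustrative sketch only)."""
    p_q = [None] * len(objects)
    for fold in five_fold_indices(len(objects)):
        held_out = set(fold)
        train = [i for i in range(len(objects)) if i not in held_out]
        model = train_ensemble([objects[i] for i in train], [labels[i] for i in train])
        for i in fold:
            p_q[i] = predict_proba(model, objects[i])
    return p_q
```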
#### 2.6.4 Results
The approach may sound complicated, but the ultimate result is simple — for each object, we compute the probability $`P(Q)`$ that the object is a quasar. A perfect classifier would assign all quasars $`P(Q)=1`$ and all non-quasars $`P(Q)=0`$; in a non-ideal case, we want to see quasars with $`P(Q)`$ values concentrated near unity and non-quasars near zero.
Figures 9(a) and 9(b) show the distribution of $`P(Q)`$, the probability that an object is classified as a quasar, using 10 voting trees. The distinction between the two cases is the set of features used for classification. Example (a) used 7 parameters: $`E`$, $`O-E`$, $`\mathrm{log}S_p`$ (where $`S_p`$ is the FIRST peak flux density), $`S_i/S_p`$ (where $`S_i`$ is the FIRST integrated flux density), the radio-optical separation, and the two PSF parameters from the APM catalog that measure how point-like the object is on the $`O`$ and $`E`$ plates. Example (b) omitted the parameter $`O-E`$, so that no color information was included.
Both panels of Figure 9 clearly show that the classifier does a good job of separating the FBQS candidates into high and low probability quasars. In both cases the distribution is encouragingly close to that of an ideal classifier. The quasar/non-quasar separation is somewhat better when the $`O-E`$ color information is used: the distribution is more strongly peaked at the low and high $`P(Q)`$ ends. We created the color-free classifier because an examination of the quasars with $`P(Q)<0.2`$ (the shaded bins below 0.2 in Fig. 9a) shows that 4 of the 36 misclassified objects are in the interesting category of red quasars, either because they are high redshift (J092104.4+302031, $`z=3.34`$) or because they have strong broad absorption lines (BALs; J105427.1+253600, J120051.5+350831, and J132422.5+245222). If we exclude color as a classification criterion, the number of $`P(Q)<0.2`$ quasars stays about the same (34 objects), but the misclassified candidates are neither high-redshift nor BAL quasars.
In both cases, the great majority of the low $`P(Q)`$ quasars are low redshift objects: 24/34 (70%) of the $`P(Q)<0.2`$ objects have $`z<0.25`$. These objects are probably being recognized as slightly resolved on the POSS-I plates and so are, in some sense, rightly classified as non-quasars (since they are perceptibly non-stellar.)
At the other extreme, the great majority of the high $`P(Q)`$ candidates that are not quasars turn out to be BL Lacs. Using the classifier with color, there are 419 objects with $`P(Q)>0.85`$, of which 373 (89%) are quasars and 46 (11%) are non-quasars. But 31 of the 46 non-quasars are BL Lacs. If we combine the BL Lacs with the quasars, an astounding 96.4% of the high probability objects are quasars or BL Lacs and only 3.6% are other types of objects.
Figure 10 shows the fraction of candidates that turn out to be quasars as a function of $`P(Q)`$. The distribution follows well the expected linear form assuming that $`P(Q)`$ is, in fact, a good estimate of the probability that an object is a quasar.
Finally, Figure 11 shows how the completeness $`C`$ and reliability $`R`$ of the sample vary as we change the $`P(Q)`$ threshold, $`P_C`$, above which candidates are accepted into the sample. If we set $`P_C=0`$, all objects in the original FBQS sample are included, and the resulting sample is 100% complete (at least, it includes all the objects in the current paper) but is only 55% reliable (so 45% of the candidates turn out to be non-quasars.) As $`P_C`$ increases, the reliability of the sample increases dramatically at little cost in completeness. If color is used by the classifier, one can construct a sample that is about 80% reliable while still being 90% complete, or one can choose higher reliability (up to 89%) if lower completeness (70–80%) is acceptable. The results when color is not used are not quite as good, but still it is possible to construct an impressively pure sample at modest cost in completeness.
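Given the cross-validated $`P(Q)`$ values and the spectroscopic classifications, curves like those in Figure 11 can be generated along the following lines (an illustrative sketch only; the array names are ours):

```python
import numpy as np

def completeness_reliability(p_q, is_quasar, thresholds):
    """For each threshold P_C, compute the completeness C (fraction of all true
    quasars retained) and reliability R (fraction of selected candidates that are
    quasars) of the sample {P(Q) >= P_C}."""
    p_q = np.asarray(p_q)
    is_quasar = np.asarray(is_quasar, dtype=bool)
    n_quasars = is_quasar.sum()
    results = []
    for pc in thresholds:
        selected = p_q >= pc
        n_sel = selected.sum()
        n_hit = (selected & is_quasar).sum()
        C = n_hit / n_quasars
        R = n_hit / n_sel if n_sel else np.nan
        results.append((pc, C, R))
    return results
```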
The way we might use this information depends on the scientific goals of the quasar survey. For the FBQS, our scientific interest is in taking a census of quasars to understand their demographics and physics. Consequently, we put a high priority on completeness and are willing to pay for it by spending extra telescope time to get spectra of lower probability candidates. We are loath to risk losing high redshift and heavily absorbed BAL quasars from our sample, so we prefer to restrict the use of color and to aim instead for a highly complete sample.
On the other hand, if one is interested in quasars mainly as sources to probe the intergalactic medium, it could be appropriate to sacrifice completeness and the occasional red quasar in order to get a highly pure sample. In a search for quasars at particular locations, for example near other quasars or behind galaxy clusters, this approach can be used to identify excellent candidates with high probability, before a single night of telescope time has been used on spectroscopy.
To put our money where our mouths are, Table 8 gives our estimate (using color parameters) of the probability $`P(Q)`$ that each object is a quasar for the FBQS candidates that have not as yet been identified. As these objects are observed, we are confident that our predictions will be (in the mean) borne out. While we expect to use our simple sample-definition criteria as we continue to expand the FBQS sample, we are already using the $`P(Q)`$ probabilities to assist in prioritizing targets for observation. Further improvements are probably possible utilizing advance knowledge of the X-ray and infrared properties of the candidates.
### 2.7. Summary of Incompleteness of the FBQS Sample
In brief, the selection criteria applied to the FBQS sample lead to several possible kinds of incompleteness and bias:
* The requirement of close positional coincidence excludes extended, lobe-dominated radio sources with no core component.
* The $`E`$ magnitude cut combined with the steep quasar luminosity function leads to unavoidable Malmquist bias in the sample. There is also a radio Malmquist bias from the FIRST flux density limit, which varies slightly as a function of position on the sky.
* The requirement that the objects be stellar on the POSS-I plates leads to incompleteness for low-redshift objects (which may appear resolved) and will bias against the discovery of wide gravitational lenses. Since the APM classifier is not perfect, there can also be truly stellar objects that are misclassified as galaxies on both plates and so are missing from our sample.
* The red color cut makes the sample incomplete for very high redshift ($`z\gtrsim 3.5`$) and heavily obscured quasars; however, most such objects are likely to be optically too faint to be in the sample anyway.
* The FIRST catalog itself is incomplete for weak extended sources.
It is important to note that this survey is targeting quasars and hence the selection criteria have been optimized for quasars. Generally speaking, what works for quasars does not necessarily work for BL Lac objects, AGNs, or H II galaxies. In particular, requiring the optical morphology to be stellar and the optical colors to be bluer than $`OE=2`$ excludes very few quasars, but probably excludes the majority of the other types of objects. So even though we are finding examples of such objects, the lists should not be taken as complete.
For most purposes, however, these selection effects do not seriously affect the utility of the sample for statistical studies of quasars. The biases in optically selected samples are generally worse in that they are more difficult to characterize and correct. For example, the differential $`K`$-correction for broad-absorption line quasars introduces a complex selection in number counts versus redshift. The FBQS at least partly redresses these biases (e.g., by allowing much redder objects to be included.) Indeed, the FBQS is, within its limits, perhaps the most complete large-area quasar survey that exists.
## 3. Optical Spectroscopy
Spectra for the quasar candidates are being collected at five different observatories: the 3-m Shane telescope at Lick Observatory, the 2.1-m telescope at Kitt Peak National Observatory (Kitt Peak National Observatory, NOAO, is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under cooperative agreement with the National Science Foundation), the 3.5-m telescope at Apache Point Observatory (owned and operated by the Astrophysical Research Consortium), the $`6\times 1.8`$-m Multiple Mirror Telescope (“MMT classic”), and the 10-m Keck-II telescope. The spectrographs at the five observatories all have different resolutions and wavelength coverage, which are summarized in Table 1. The observations are made in a wide variety of atmospheric conditions ranging from photometric to cloudy with both good and bad seeing. Many of the candidates have been observed more than once. We have observed almost all the candidates in the original candidate list, even those with previous identifications. (In several cases we have found errors in published redshifts or positions. Objects found to be discrepant compared with the Véron-Cetty & Véron (1998) catalog are noted in the table.)
## 4. Spectroscopic Results
Applying all the criteria described in §2 results in a list of 1238 candidates in 2682 square degrees, of which 169 are previously known quasars. In Tables 2–7 we present a list of all the candidates in the FBQS sample with spectral classifications. For each object we list the FIRST catalog RA and Dec (J2000; the preferred naming convention for objects in the sample is “FBQS Jhhmmss.s+ddmmss”, where the Right Ascension and Declination given in the table are truncated, not rounded, consistent with the IAU-approved naming convention for FIRST radio sources), the recalibrated and extinction-corrected $`E`$ and $`O`$ magnitudes, the red extinction correction $`A(E)`$, and the FIRST peak and integrated radio flux densities. The radio-optical positional separation and APM star-galaxy classification (used to define the sample) are also given. The objects have been segregated into 6 tables by their optical spectral classification. Quasars are listed in Table 2, narrow line AGN in Table 3, BL Lac objects in Table 4, H II/star-forming galaxies in Table 5, galaxies without any emission lines in Table 6, and stars in Table 7. The criteria used to classify spectra are given below. Table 8 lists the objects for which spectra have not yet been obtained.
For the purposes of the FBQS, we have classified any object with broad emission lines as a quasar: i.e., we do not make a distinction between quasars and Seyfert 1 galaxies. We also make no distinction between quasars and broad-line radio galaxies, the radio-loud counterparts to Seyfert 1 galaxies. The absolute blue magnitude is given for objects with redshifts, so a conventional cut at $`M_B=-23`$ can be made if desired to exclude lower-luminosity objects. Of the 636 quasars, 50 fall into this lower luminosity category. (The APM magnitudes are far too bright for extended objects with apparent magnitudes brighter than $`12`$ mag, so the high luminosities implied by the $`M_B`$ values for such objects should not be taken seriously. We have appended a note in the table for such objects giving the blue magnitude from NED (the NASA/IPAC Extragalactic Database, operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration) when available.)
BL Lac classification is based on the traditional definition presented in Stocke et al. (1991): all objects with emission line rest-frame equivalent widths less than 5 Å and/or 4000 Å break contrasts (Br$`_{4000}`$) less than 25% appear in Table 4. When emission lines or Br$`_{4000}`$ have been measured, we list the quantities in the notes. Although this definition has been shown to be too restrictive, excluding some objects that otherwise exhibit BL Lac-like properties (see Marchã et al. (1996) and Laurent-Muehleisen et al. (1998)), our preliminary analysis indicates that broader criteria admit many objects into the BL Lac class that do not properly belong there. This is mainly because the FBQS candidate objects are more than two orders of magnitude fainter in the radio than the Marchã et al. sample used to define the newer BL Lac criteria. The FBQS selection thus admits many objects with very low radio luminosities that exhibit weak optical emission and absorption features like the high radio luminosity objects in Marchã et al. Many of these objects are clearly not BL Lacs despite being consistent with the broader optical selection criteria. A full analysis of the BL Lac content of the FBQS sample is underway (Laurent-Muehleisen et al. (1999)) and to avoid including non-BL Lacs in the table, we list only those objects that adhere to the strictest criteria.
We divide objects with narrow emission lines into narrow-line AGN and H II galaxies based on the ratios of \[N II\] to H$`\alpha `$ and \[O III\] to H$`\beta `$. An object is classified as a narrow-line AGN if \[N II\] is more than 60% of H$`\alpha `$, or (when H$`\alpha `$ is not observed) if \[O III\] is more than twice H$`\beta `$; otherwise it is designated as an H II galaxy (Osterbrock (1989)). The classification based on \[O III\]/H$`\beta `$ can be ambiguous and so the H II/AGN separation is less reliable for higher redshift objects.
Some of the objects classified as H II galaxies are likely to be narrow-line Seyfert 1 galaxies, which can have similar line ratios (Osterbrock & Pogge (1985)). For this paper we have not attempted to separate the two types of objects.
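The line-ratio rule above can be summarized in a few lines of Python (a paraphrase of the criteria for illustration; the handling of unmeasured lines is our own choice):

```python
def classify_narrow_line_object(nii_halpha=None, oiii_hbeta=None):
    """Apply the narrow-line classification quoted in the text: AGN if
    [N II]/H-alpha > 0.6; otherwise, when H-alpha is not observed, AGN if
    [O III]/H-beta > 2; everything else is labeled an H II galaxy."""
    if nii_halpha is not None:
        return "narrow-line AGN" if nii_halpha > 0.6 else "H II galaxy"
    if oiii_hbeta is not None:
        return "narrow-line AGN" if oiii_hbeta > 2.0 else "H II galaxy"
    return "unclassified"
```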
All other objects are classified as either normal galaxies (with no obvious emission lines) or stars. Galaxies flagged as previously known in Table 6 have been identified in NED; in most cases we have not obtained spectra, so their emission line (and Br$`_{4000}`$) status should be considered less certain than the other objects in the table.
Note that most stars in the sample are likely to be chance coincidences with the radio sources. The APM positions for stars are degraded by 40 years of proper motion between the POSS-I and FIRST epochs; coupled with the rarity of radio emission from stars, this prevents a useful match between FIRST and APM for stars. Recent epoch stellar catalogs (ideally with proper motions) are required to study the radio emission from stars using FIRST (Helfand et al. 1999a ). One object listed in the stars table, J163302.6+234928, is detected in the ROSAT All Sky Survey Bright Source Catalog (Voges et al. (1996)) and is therefore likely to be a true stellar radio source identification. Another, J164018.1+384220, is actually a known halo-population planetary nebula (Barker & Cudworth (1984)). It is included with the stars because it is a Galactic object (and it was formerly a star.) A dozen or more of the other objects in Table 7 are also likely to be real radio stars, but current epoch optical positions will be needed to secure the identifications.
In Tables 2–6, we list the measured redshift where available (redshifts are unknown for many of the BL Lac objects). Redshifts for quasars and BL Lac objects were computed by cross-correlating the spectra with templates; for the quasars, a template constructed from the FBQS objects themselves was used (Brotherton, Tran, et al. (1999)). We use the redshift to calculate for each object the radio luminosity $`L_R`$ at a rest frequency of 5 GHz (assuming a radio spectral index of $`\alpha =-0.5`$), the absolute $`B`$ magnitude $`M_B`$, and, as a measure of radio loudness, the ratio $`R^{*}`$ of the 5 GHz radio flux density to the 2500 Å optical flux in the quasar rest frame (assuming $`\alpha _{opt}=-1`$ and using the definition of Stocke et al. (1992)). We use the (APS-calibrated) APM $`O`$ magnitude as a direct estimate of $`B`$, and we do not correct the optical magnitude for the emission line contribution. The cosmological parameters $`H_0=50\text{ km s}^{-1}\text{ Mpc}^{-1}`$, $`\mathrm{\Omega }=1`$, and $`\mathrm{\Lambda }=0`$ were used in the luminosity calculations. When the redshift is not available we omit $`M_B`$ and $`L_R`$, and we compute $`R^{*}`$ from the ratio of the 5 GHz flux density to the 2500 Å flux as above, but no $`K`$-corrections are applied. There is also a comments column, which notes details of particular interest such as the presence of broad absorption lines or damped Lyman $`\alpha `$ absorption lines, whether an object was previously known in the Véron quasar catalog (Véron-Cetty & Véron (1998)), the Hamburg catalog (Hagen, Engels & Reimers (1999)) or in NED, association with a ROSAT X-ray source or an IRAS infrared source, etc.
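For readers who wish to reproduce the tabulated quantities, the sketch below shows the radio part of the calculation for the adopted $`\mathrm{\Omega }=1`$, $`\mathrm{\Lambda }=0`$ cosmology. It is our own illustration rather than the survey pipeline; $`M_B`$ and $`R^{*}`$ follow analogously once an optical flux zero point and the $`\alpha _{opt}=-1`$ extrapolation to 2500 Å are adopted.

```python
import math

C_KM_S = 2.99792458e5        # speed of light [km/s]
H0 = 50.0                    # [km/s/Mpc], as adopted in the text
MPC_CM = 3.0857e24           # cm per Mpc

def lum_dist_cm(z):
    """Luminosity distance for the Omega=1, Lambda=0 cosmology used in the text:
    D_L = (2c/H0) * (1 + z - sqrt(1+z))."""
    d_mpc = (2.0 * C_KM_S / H0) * (1.0 + z - math.sqrt(1.0 + z))
    return d_mpc * MPC_CM

def radio_lum_5ghz(s_1p4_mjy, z, alpha=-0.5):
    """Monochromatic luminosity at rest-frame 5 GHz from the observed 1.4 GHz
    flux density, K-corrected assuming S_nu ~ nu^alpha.  Returns erg/s/Hz."""
    s_cgs = s_1p4_mjy * 1e-26                     # mJy -> erg/s/cm^2/Hz
    dl = lum_dist_cm(z)
    return (4.0 * math.pi * dl**2 * s_cgs *
            (5.0 / 1.4)**alpha * (1.0 + z)**(-(1.0 + alpha)))

print(f"{radio_lum_5ghz(1.0, 1.0):.3e} erg/s/Hz")   # a 1 mJy FIRST source at z = 1
```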
The spectra for all the objects identified as quasars are displayed in Figure 16 (See related series of gif files on the astro-ph website).
## 5. Discussion
Detailed analysis of the FBQS sample will be deferred to other papers, but we briefly discuss here both general properties of the sample and some interesting source classes.
### 5.1. Redshift Distribution
While the FIRST survey is clearly sensitive enough to detect quasars out to redshifts greater than 3, the preponderance of FBQS quasars are at redshifts below 2 (Fig. 12). The precipitous drop beyond $`z=2`$ is attributable more to the limit in $`E`$ than to the limit in radio flux density. It is apparent in the redshift-radio luminosity distribution (Fig. 8a) that the high-redshift quasars do not crowd against the radio detection limit; they are missing from the sample mainly because they are too faint in the optical (Fig. 8b). On the other hand, the lack of quasars at $`z>3.5`$ could be partly due to the red color cut used to define the sample, though the number of quasars beyond $`z=3`$ that are lost due to the color limit is certainly small (§2.4).
### 5.2. Efficiency of the FBQS
The absolute efficiency of the FBQS in identifying quasars can be estimated by comparing the number of quasars found in the FBQS to the number expected from previous surveys. Figure 13 compares the sky density of FBQS quasars to estimates of the sky density of optically selected quasars as given by La Franca & Cristiani (1997). A correction factor of 1.17 has been applied to the FBQS counts in the $`B=17.4`$–17.9 bin to account for objects that do not appear in the FBQS either because they are too faint on the red plate ($`E>17.8`$) or because they are not yet identified (Table 8). The correction factor for the other bins is negligible, changing the raw counts by less than 3%.
For quasars brighter than $`B=16.4`$ mag, the FBQS quasar density is indistinguishable from the density of optically selected quasars, so the efficiency is very high. The efficiency drops at fainter magnitudes, falling to $`25`$% when $`B=17.4`$–17.9 mag. This does not mean that FIRST detects all optically bright quasars — we know that there are radio-silent quasars that would not be detected by FIRST. We can conclude, however, that FIRST detects the bulk of bright radio-quiet quasars and that the incompleteness of the FBQS (due mainly to very low radio fluxes) is similar to the incompleteness of bright optically selected samples (which is typically due to objects missed due to unusual color and emission line properties.)
An analysis of both quasars in the Véron-Cetty & Véron (1998) and Hamburg (Hagen, Engels & Reimers (1999)) catalogs indicates that approximately 25% of all quasars brighter than $`E=17.8`$ are detected by the FIRST survey. The detected fraction is a bit lower for the Hamburg catalog – FIRST detects 17 out of 78 (22%) quasars in the FBQS area. That may just be a reflection of the statistics of the small numbers involved, or it may indicate that the Hamburg survey is more efficient than previous optical surveys at detecting quasars with unusual optical properties.
### 5.3. Advantages Over Optical Surveys
When compared to previous quasar surveys, we find that the FBQS has two primary advantages. First, because it is a radio-selected survey, it has high selection efficiency, mainly because detectable radio emission from stars is extremely rare. Only about one in $`10^4`$ 12th magnitude stars has radio emission detectable by the FIRST survey (Helfand et al. 1999a ). In contrast, X-ray selected samples are contaminated by ubiquitous stellar X-ray emission. The FBQS can cover a significantly larger sky area than optical objective prism surveys, with very much less telescope time required than for optical color-selected surveys. Perhaps the color-selection method will become comparably efficient with the advent of the wide-area, 5-color Sloan Digital Sky Survey, but until then radio selection is the most efficient technique for discovering new quasars (once someone has done the hard work of making a deep, wide-area radio catalog.)
Second, the radio-selected FBQS can discover classes of quasars that are rare or unknown in other surveys because of biases against them. The best example in the FBQS sample are the extreme broad absorption line quasars that show low ionization iron absorption. Becker et al. (1997) reported the discovery of two such objects, J084044.4+363328 ($`z=1.22`$; included in this paper) and J155633.8+351758 ($`z=1.48`$; too faint for the FBQS.) This paper includes 3 additional objects: J104459.5+365605 ($`z=0.701`$), J121442.3+280329 ($`z=0.700`$), and J142703.6+270940 ($`z=1.170`$). The spectra of these objects are remarkable; they are often difficult even to recognize as quasars. The emission lines are almost completely masked by absorption, and the colors of these objects are quite red. Such objects would be very unlikely to be recognized in any existing optically selected survey, so we really have no idea how common they may be among radio-quiet quasars.
In total there are 29 definite or tentative BAL quasars among the 636 FBQS quasars. This is comparable to the percentage of BAL quasars in optically selected samples (Foltz et al. (1990)). The properties of the FBQS BAL quasar sample are discussed further by Becker et al. (1999).
Of course, the disadvantage of a radio-selected survey is that we can find only quasars with radio emission above the FIRST 1 mJy limit, but as discussed above these constitute a significant minority (25%) of all quasars brighter than $`E=17.8`$. The FIRST survey is sufficiently deep that we detect many radio-intermediate and even radio-quiet quasars; only radio-silent quasars are not represented in the sample. This is most apparent in the number counts (Fig. 13), which are very similar for FBQS and optically selected samples for the brightest objects. It is the combination of large optical quasar surveys with the FBQS that may be most productive for studying quasar physics.
### 5.4. No Radio-Loud/Radio-Quiet Dichotomy?
Figure 14 shows the distribution of the radio-loudness parameter $`R^{}`$ versus redshift. An $`R^{}`$ of 10 is generally taken as the divide between radio-loud and radio-quiet objects (Stocke et al. (1992)). It is clear that over the entire range of observed redshifts, the FBQS is reaching well into the radio-quiet population and that there is no obvious deficit of quasars at the boundary. Previous studies, in contrast, have always found a bimodal distribution in $`R^{}`$ with a dearth of quasars in the range $`3R^{}100`$ (e.g., Miller, Peacock, & Mead (1990); Stocke et al. (1992)).
The inadequacy of previous surveys in finding these radio-transition quasars is well illustrated in Figures 15(a) and 15(b), which show the histogram of the number of quasars versus $`R^{}`$ compared with the distribution for previously known quasars. Although the FBQS shows a sharp increase in the number of quasars for $`R^{}<100`$, this population is actually decreasing among the ranks of previously known quasars. The vast majority (85%) of FBQS quasars with $`1<R^{}<100`$ are newly discovered, while those with more extreme $`R^{}`$ values in both directions are well-represented in existing samples.
Figure 15(a) also shows a histogram for previously known quasars that includes $`R^{}`$ upper limits for all the Véron quasars (Véron-Cetty & Véron (1998)) that fit the FBQS criteria (including sky area and brightness) but that are not detected by the FIRST survey. This distribution is certainly bimodal; however, the FBQS quasars fill the gap and there is no clear evidence for bimodality in the combined FBQS and radio-quiet Véron distribution.
Conceivably this is the result of selection effects in the FBQS, though we have no plausible explanation for how our sample selection could eliminate a gap in the $`R^{}`$ distribution. A definitive conclusion on the actual distribution of quasar radio-loudness awaits full analysis and modeling of the sample and its selection effects.
### 5.5. Summary
This paper purposely defers most of the scientific studies that can be based on this new sample of quasars in order to make the data available to the community more quickly. Clearly the value of this database will be greatly enhanced with complementary data at other radio frequencies as well as in the X-ray, IR, and UV bands. The FBQS will serve as a vital bridge between the traditional radio-loud and radio-quiet quasar samples. The literature is full of characteristics such as line ratios, X-ray spectral indices, and broad absorption lines that appear to depend on radio-loudness. With the new sample, we can begin to explore the radio-loudness transition zone.
The results in this paper are based on the 1997 version of the FIRST radio catalog, which covered 2682 square degrees of the north Galactic cap. As this is written in mid-1999, data have been collected that extend the area covered by FIRST to 6000 square degrees, and the FBQS candidate list will soon stand at $`2700`$; $`1500`$ of those candidates will eventually be confirmed as quasars (assuming that the telescope time allocation committees continue to be kind to us.) At that point the FBQS catalog will be one of the largest uniformly selected quasar surveys and will be an even more powerful tool for furthering our understanding of the quasar phenomenon.
Thanks to the referee, Todd Tripp, for many helpful suggestions, including an analysis of the FBQS/Hamburg match rate. We appreciate helpful discussions with Ed Moran on the classification of the emission line galaxies. Thanks to Hien Tran for computing the quasar redshifts from template correlations. We acknowledge extensive use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration. The success of the FIRST survey is in large measure due to the generous support of a number of organizations. In particular, we acknowledge support from the NRAO, the NSF (grants AST-98-02791 and AST-98-02732), the Institute of Geophysics and Planetary Physics (operated under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48), the Space Telescope Science Institute, NATO, the National Geographic Society (grant NGS No. 5393-094), Columbia University, and Sun Microsystems. We appreciate the hospitality of the Institute of Astronomy, Cambridge University, which hosted visits enabling the writing of much of this paper. |
no-problem/9912/hep-ph9912374.html | ar5iv | text | # 𝑒⁺𝑒⁻→𝑍⁰𝑍⁰→𝑏𝑏̄𝑐𝑐̄ events as model independent probe of colour reconnection effects
## Abstract
According to the basic properties of QCD, colour reconnection effects can occur in hadronic processes at high energies. The comparison of $`e^+e^{}Z^0Z^0q\overline{q}q^{}\overline{q^{}}`$ events with the superposition of $`Z^0q\overline{q}`$ and $`Z^0q^{}\overline{q^{}}`$ events from LEP1 would provide an unambiguous model-independent probe. We show that at LEP2 energy, the background processes are negligible if we select only $`e^+e^{}Z^0Z^0b\overline{b}c\overline{c}`$ events, and limit the measurements in the phase space of on-shell $`Z^0Z^0`$ events.
PACS numbers: 13.87.Fh, 12.38.Bx, 12.40.-y, 13.65.+i
keywords: colour-reconnection, $`Z^0Z^0`$ events, phase space
The usual treatment of the hadronic processes in high energy collisions is divided into two phases, the perturbative parton cascade and the non-perturbative hadronization; and a phenomenological colour flow model(CFM) is used to assign the colour connections of the partons at the end of the first phase. These colour connections are the interface between the two phases, which is the starting point for the second phase. But it has been realized(see, e.g. and references therein) that CFM is a good approximation only when $`N_C\mathrm{}`$. With $`N_C=3`$ as it is in QCD, colour connections of the partons can occur in many different ways. For example, for the $`q\overline{q}+ng`$($`n>1`$) system, as the final states of the perturbative phase in $`e^+e^{}`$ annihilation, strict PQCD calculations show that many different parallel complete colour singlet sets can exist. Each of these sets can equivalently act as the bases of the colour space of the perturbative final state, but none of them is equivalent to the colour flow chain obtained from CFM. PQCD cannot tell from which colour singlet set the hadronization starts. This implies that colour reconnection effects can be very significant in some cases.
A well known example which has been studied frequently in literature is the colour reconnection effects in $`e^+e^{}W^+W^{}hadrons`$ at LEP2 energy. Here we have $`e^+e^{}W^+W^{}q_1\overline{q}_2q_3\overline{q}_4`$, and the two initial colour-singlet systems $`q_1\overline{q}_2`$ (from $`W^+`$) and $`q_3\overline{q}_4`$ (from $`W^{}`$) may be produced closely in space-time. The basic properties of QCD allow the colour reconnection to occur among partons from $`W^+`$ and $`W^{}`$ showers. Such effects destroy the naive picture of independent evolution and fragmentation of $`q_1\overline{q}_2`$ and $`q_3\overline{q}_4`$, respectively. This is one of the most important sources of the theoretical uncertainty in determining the W mass and has attracted much attention in recent years (see and references therein). But up to now, the manifestations of colour reconnection effects in the final state hadrons can only be studied with the help of model-dependent Monte-Carlo events generators in which a number of approximations and/or assumptions are made. Some of these approximations and/or assumptions can be very sensitive to the colour reconnection effects. So the theoretical uncertainty is usually very large. Hence, a very crucial and urgent question in this study is how to probe such kind of effects in a model independent manner.
One candidate of such probe currently being discussed is to compare pure hadronic decay $`W^+W^{}q_1\overline{q}_2q_3\overline{q}_4`$, in which reconnection between $`W^+`$ and $`W^{}`$ sources can occur, with semileptonic decay $`W^+W^{}q_1\overline{q}_2l\nu `$ of the same kinematics. But as pointed out in , this comparison still suffers from strong dependence on the hadronization scenario and on the choice of model parameters. Moreover, results may be strongly sensitive to the adopted experimental strategy. On the other hand, above the $`Z^0Z^0`$ threshold at LEP2, the $`e^+e^{}Z^0Z^0q\overline{q}q^{}\overline{q^{}}hadrons`$ events seem to be the most promising candidate in this aspect. Here the hadronic $`Z^0Z^0`$ events can contain similar colour reconnection effects as those in $`W^+W^{}`$ events, while the single $`Z^0q\overline{q}hadrons`$ data can be obtained from LEP1 without any theoretical uncertainty and with high precision. So the comparison of the experimental results in $`e^+e^{}Z^0Z^0hadrons`$ and the convolution of two single hadronic $`Z^0`$ event data from LEP1 will allow an unambiguous probe of the colour reconnection effects.
However, there is a great difficulty here, i.e. the background of $`Z^0Z^0`$ events is too large. For example, the cross sections of the corresponding $`W^+W^{}`$ process and corresponding QCD process($`e^+e^{}`$ annihilating into any $`q\overline{q}`$ pair) are both more than one order of magnitude larger than that of the signal process. Other electroweak processes like $`e^+e^{}`$ annihilating through $`\gamma ^{}Z^0`$ and $`\gamma ^{}\gamma ^{}`$ into four quarks are also significant . Can we reduce the background by selecting some specific type of events and/or limiting the measurement in some kinematic region? In this letter, we show that this question can be answered in the affirmative. By selecting the $`e^+e^{}Z^0Z^0b\overline{b}c\overline{c}`$ events as the signal process and limiting the measurements in the phase space of on-shell $`Z^0Z^0`$ decays, which is the phase space for most of the $`Z^0Z^0`$ events above threshold, the contribution of background process can indeed be suppressed very much. Our numerical results show that they are even negligible compared to the signal process $`e^+e^{}Z^0Z^0b\overline{b}c\overline{c}`$.
For the sake of explicity, we divide the background processes of hadronic $`Z^0Z^0`$ events into the following three types: 1.) the corresponding $`W^+W^{}`$ process, 2.) the corresponding QCD process and 3.) other elctroweak process. As pointed out above, the total cross section for them are more than one order of magnitude larger than that of the signal process. But these backgrounds can be greatly suppressed if the measurements are restricted in the following way.
First, $`Z^0`$ is neutral, while $`W^+W^{}`$ are charged. If we look only at
$$e^+e^{}Z^0Z^0b\overline{b}c\overline{c}$$
(1)
events as signal <sup>2</sup><sup>2</sup>2Here we select $`b\overline{b}c\overline{c}`$ events but not $`b\overline{b}b\overline{b}`$ or $`c\overline{c}c\overline{c}`$ to avoid identical particle effects. However, if some colour reconnection effects have no relation with identical particle effeccts, the $`b\overline{b}b\overline{b}`$ and $`c\overline{c}c\overline{c}`$ events, which will not be created via $`W^+W^{}`$ decay at all, can be included in those measurements. The following qualitative and quantitative discussions are quite the same, while the events which can be selected in experiments are increased to about 3 times.<sup>,</sup> <sup>3</sup><sup>3</sup>3We also note that heavy quarks are created only in the perturbative phase and can be easily identified in experiments., the corresponding $`W^+W^{}c\overline{b}b\overline{c}`$ events are suppressed by the CKM matrix element $`|V_{cb}|^20.0016`$. This leads to a strong suppression of five order of magnitude. So we can neglect $`e^+e^{}W^+W^{}b\overline{b}c\overline{c}`$ in comparison with the signal process(1) (hereafter,we will call the signal process(1) and 3.) other electroweak processes as all the EW process).
Second, comparing the signal process(1) to those in the remaining background processes, i.e.
2.) the QCD process
$$\begin{array}{ccc}e^+e^{}\gamma ^{}/Z^0(b\overline{b})/(c\overline{c})\hfill & +& g^{}\\ & & \\ & & (c\overline{c})/(b\overline{b})\end{array}$$
(2)
3.) other electroweak processes
$$e^+e^{}\gamma ^{}Z^0b\overline{b}c\overline{c},$$
(3)
$$e^+e^{}\gamma ^{}\gamma ^{}b\overline{b}c\overline{c},$$
(4)
$$\begin{array}{ccc}e^+e^{}\gamma ^{}/Z^0(b\overline{b})/(c\overline{c})\hfill & +& \gamma ^{}/Z^0\\ & & \\ & & (c\overline{c})/(b\overline{b}).\end{array}$$
(5)
the $`b\overline{b}`$ and $`c\overline{c}`$ in (1) have the following important pecularity: They originate predominantely from on-shell $`Z^0`$ bosons. This determines that the phase space of each of such kind of quarks and antiquarks is very limited: It is easy to see that, at $`\sqrt{S}=2M_Z`$(where $`M_Z`$ is $`Z^0`$ mass), the velocities of the two on-shell $`Z^0`$$`s`$, $`\beta =\sqrt{14M_Z^2/S}`$, are both zero. Thus this four quarks and antiquarks from on-shell $`Z^0`$ decay have the same energy $`M_Z/2`$. But for $`\beta =0`$, the on-shell $`Z^0Z^0`$ cross section $`\sigma _{ZZ}=0`$. Therefore we should study the process (1) above $`Z^0Z^0`$ threshold, i.e. $`\sqrt{S}>2M_Z`$. In this case, if the quark mass is neglected, the quark energy $`E_i`$($`i=c,\overline{c},b,\overline{b}`$) in the on-shell $`Z^0Z^0`$ decay satisfies,
$$\frac{M_Z(1\beta )}{2\sqrt{1\beta ^2}}E_i\frac{M_Z(1+\beta )}{2\sqrt{1\beta ^2}},$$
(6)
This shows that $`E_i`$ is limited to a given small region. Obviously, this range becomes wider as $`\sqrt{S}`$ increases, but even at the highest energy of LEP2, i.e. $`\sqrt{S}=200GeV`$, we have
$$29.5GeVE_i70.5GeV,$$
(7)
which is still a very narrow region compared to the processes (2)-(5), in which the quark energy can range from $`0`$ to $`\sqrt{S}/2`$, since there are non-resonant intermediate states (eg. $`\gamma ^{}`$, $`Z^0`$, $`g^{}`$). If only the energy range given by Eq. (6) is considered in measurement, the phase space of the four quarks in background processes (2)-(5) is restricted into a small part of the total, so that their cross sections are all strongly suppressed.
To show these effects quantitatively, we calculate the cross sections and differential cross sections of processes(1)-(5) in groups. For the comparison of the signal process to its background, (1) should be studied separately. In the QCD process(2), one of the two quark pairs originates from a colour-octet gluon, so both $`c\overline{c}`$ and $`b\overline{b}`$ are in colour-octets. In all the other processes, they are in colour-singlets. Therefore there is no interference between process(2) and the others. Hence the QCD process(2) can be studied separately. While all the other processes we consider here\[(1) and (3)-(5)\] can lead to the state with exactly the same quantum numbers, thus can interfere with each other, so we should include the contributions from all of the interference terms. We note that the scattering matrices for these processes can be calculated using the perturbation theory in the standard model for electroweak and strong interactions. The numerical results for the cross sections are obtained by integrated in the 8-dimensional phase space of four fermion system which is parameterized as usual(see, e.g.).
Before we present the calculated results for the cross sections, we would like to mention the following. we note that in general, a quark fragments into a hadron jet which can be observed in experiments. But if a quark is soft or collinear with other quarks, this quark cannot form a resolvable hadron jet. There are several different schemes to define a resolvable hadron jet. We use the Durham scheme. According to this scheme, a quark $`i`$ and another quark $`j`$ can form two resolvable jets if their energies $`E_i`$, $`E_j`$ and the angle $`\theta _{ij}`$ between their moving directions satisfy the condition $`y_{ij}>y_{cut}`$. Here
$$y_{ij}=\frac{2min(E_i^2,E_j^2)(1cos\theta _{ij})}{S},(ij)$$
(8)
and $`y_{cut}`$ is taken as 0.0015, as an example, same as . If $`y_{ij}>y_{cut}`$ for all possible permutations of $`i`$ and $`j`$, we say that these four quarks fragment into four different hadron jets. This criteria is applied in our calculation, thus the numerical results we present in the following should be understood as those for $`b\overline{b}c\overline{c}`$ four jets.
We now take $`\sqrt{S}=200GeV`$ as an example and show the results for the total cross sections of the $`Z^0Z^0`$ process(1), the corresponding QCD process(2) and all EW processes\[(1) and (3)-(5)\] in Fig.1. The shaded areas represent the corresponding cross sections $`\sigma _{ZZ}^c`$, $`\sigma _{EW}^c`$, $`\sigma _{QCD}^c`$ with the constraint(7) for the phase space. From the figure, we see clearly that the $`Z^0Z^0`$ events dominate the $`b\overline{b}c\overline{c}`$ four jet events, especially in the phase space as limited in (7). More precisely, we see that the $`Z^0Z^0`$ events in (7) take about 94% of the whole(including off-shell) $`Z^0Z^0`$ events, this implies our criteria has selected most of the $`Z^0Z^0`$ events; and that $`\sigma _{ZZ}^c\sigma _{EW}^c`$, with $`\delta _{EW}=\frac{\sigma _{EW}^c\sigma _{ZZ}^c}{\sigma _{ZZ}^c}3\%`$, this shows clearly that in the limited phase space contributions from the non-$`Z^0Z^0`$ EW processes and all interference between every two single processes are negligible. We also see $`\delta _{QCD}=\frac{\sigma _{QCD}^c}{\sigma _{ZZ}^c}0.7\%`$. This means that the corresponding QCD process is really negligible.
In Fig. 2 we show the corresponding energy distributions $`d\sigma /dE`$. In the case that the quark mass are neglected, the symmetry of the matrix element under the interchange of (anti)quark labels implies that the energy distributions are identical for the four quarks and antiquarks in the above mentioned processes. From this figure, we see again the dominance of $`Z^0Z^0`$ events in the energy range(7): The distribution curve of $`Z^0Z^0`$ process and that of all the EW processes show a high platform, and they almost coincide in this range. While outside rang(7), the $`Z^0Z^0`$ curve drop very fast and the other process become dominant. For the QCD process, the energy distribution is peaked near the two edges: $`E_j=0`$ and $`E_j=\sqrt{S}/2`$, since in these events we have quarks both from the decay of $`\gamma ^{}/Z^0`$ with virtuality $`\sqrt{S}`$(and thus the quark energy $`E_j`$ is $`\sqrt{S}/2`$) and those from soft virtual gluons (i.e. $`E_j0`$). but it is an order of magnitude lower than that of the $`Z^0Z^0`$ and the EW processes under the range (7). These results show clearly the efficiency of the restriction(7). It picks most of the signal events but drops a large part of the background events.
It may be also interesting to look at the angular distribution of these processes because of the following. In the $`Z^0Z^0`$ process (1), the colour reconnection may lead to two new colour singlets $`b\overline{c}`$ and $`c\overline{b}`$ if $`b`$ and $`\overline{c}`$($`c`$ and $`\overline{b}`$) are sufficiently close to each other in phase space. This means that the colour reconnection has large probability to occur if the angle $`\theta `$ between $`b`$ and $`\overline{c}`$ (or $`c`$ and $`\overline{b}`$) is small. We show in Fig. 3 the distribution $`d\sigma /d\mathrm{cos}\theta `$ versus angular separation between $`c`$ and $`\overline{b}`$ for the $`Z^0Z^0`$ process, the QCD process and the whole EW processes under condition (7). The angular distributions for the $`Z^0Z^0`$ process and the EW processes almost coincide, the same feature as in Fig 2. For the QCD process, the $`c\overline{b}`$ angular distribution is peaked in the back-to-back direction, which reflects the dominance of the back-to-back configuration for $`c\overline{b}`$(and also $`b\overline{c}`$) with the virtual gluon preferentially emitted along the quark or antiquark directions. For $`\mathrm{cos}\theta 1`$, the distribution is strongly suppressed by $`y_{cut}`$, while for $`\mathrm{cos}\theta 1`$, it is slightly restricted by the condition (7). Obviously, when $`\theta `$ is less than about 53 degree, the differential cross section of the QCD process is at least three orders of magnitude lower than the $`Z^0Z^0`$ one, which makes the measurements of the possible colour reconnection effects in the real $`Z^0Z^0`$ process more feasible.
In summary, the study of the colour reconnection effects in hadronic reactions is of fundamental significance in understanding QCD. High statistical data above the $`Z^0Z^0`$ threshold at LEP2 may allow a model-independent probe of these effects. The background processes are suppressed to a negligible level if we choose only $`e^+e^{}Z^0Z^0b\overline{b}c\overline{c}`$ as signal events and limit the measurement in the energy range given by (6). Qualitative analysis and quantitative results presented in this letter show the following: First, the greatest pollution to the signal process, the corresponding $`W^+W^{}`$ process, can be dropped by the CKM suppression for $`c,\overline{b}`$ ($`b,\overline{c}`$) pair. Second, in energy range(6), the pollution from the corresponding QCD process, that from other electroweak processes and that from all electroweak interference terms are negligibly small, while most of the $`Z^0Z^0`$ events are picked. Furthermore, limiting the angle between $`b`$ and $`\overline{c}`$(or between $`c`$ and $`\overline{b}`$) to small values, where colour reconnections occur with larger possibilities, the QCD background will be further suppressed. So comparing $`e^+e^{}Z^0Z^0b\overline{b}c\overline{c}`$ events with the superposition of the corresponding $`Z^0b\overline{b}`$ and $`Z^0c\overline{c}`$ events from LEPI would provide an unambiguous model-independent probe of the colour reconnection effects.
ACKNOWLEDGEMENT
We thank P. de Jong, Z. Liang, W. Metzger and T. Sjöstrand for helpful discussions. This work is supported in part by National Natural Science Foundation of China(NSFC).
FIGURE CAPTIONS
Fig. 1. The total cross section for $`e^+e^{}Z^0Z^0b\overline{b}c\overline{c}`$, that for the whole EW processes and that for the corresponding QCD process at $`\sqrt{S}=200GeV`$. The shaded area represents the corresponding cross section under the restriction $`29.5GeV<E_i<70.5GeV`$. In the calculations here and following $`\alpha _s`$ is set to 0.1.
Fig. 2. The energy distribution $`\frac{d\sigma }{dE_b}`$ of the $`Z^0Z^0`$ process(solid line), the whole EW processes(dashed line) and the QCD process(dotted line). The dash-dotted line denotes the edges of the energy range $`29.5GeV<E<70.5GeV`$.
Fig. 3. The angular distribution $`\frac{d\sigma }{d\mathrm{cos}\theta }`$ of the $`Z^0Z^0`$ process(solid line), the whole EW processes(dotted line) and the QCD process(dashed line) under the restriction $`29.5GeV<E<70.5GeV`$. |
no-problem/9912/cond-mat9912088.html | ar5iv | text | # References
LIFSHITZ-LIKE ARGUMENT FOR LOW-LYING STATES
IN A STRONG MAGNETIC FIELD
Cyril FURTLEHNER
Institute of Physics, University of Oslo
P.O. Box 1048 Blindern
N-0316 Oslo, Norway
Abstract :
We are interested in the question of the localization of an electron moving in two dimensions, submitted to a strong magnetic field and scattered by randomly distributed zero-range impurities. Considering the explicit expression for the density of states obtained by Brézin, Gross and Itzykson, we adapt the Lifshitz argument, in order to analyse the somewhat unusual power-law behavior of the low energy spectrum. The typical configurations of disorder which gives rise to low energy states are identified as cluster of impurities of well defined form, when the impurity density is smaller than the Landau degeneracy. This allows for an interpretation of low lying states, localized around these clusters. The size of these clusters diverges logarithmically when the energy goes to zero.
1. Introduction
The two dimensionnal problem of an electron submitted to a strong magnetic field and moving in a random potential, has been the subject of intensive investigations, because of its relevance for the integer quantum Hall effect. In the case of a locally correlated disordered potential, some explicit results have been found, concerning the average density of states . Although the DOS doesn’t contain in general any information about localization, exception should be made for the tails of the spectrum, which are generally associated with unprobable realizations of the random potential, and as in the Lifshitz-tail examples, are interpreted in terms of localized states. In the strong magnetic field problem, such a situation is encountered in the case of gaussian fluctuations, where the spectrum displays a gaussian tail at large energy . If disorder is realized by delta impurities obeying Poisson statistics, the situation is very different. The spectrum is bounded from below, and instead of having a tail, it is singular at low energy. More precisely, depending on a parameter $`f=\frac{\rho }{\rho _l}`$ which is the ratio between the density of impurities and the Landau degeneracy, it takes the following asymptotic form ,
$$\lambda \rho (E)_{\omega +0}\{\begin{array}{cc}(1f)\delta (\omega )+A(f)\omega ^f,\hfill & 0<f<1\hfill \\ \frac{1}{\omega (\mathrm{ln}[\omega /\alpha ])^2},\hfill & f=1\hfill \\ B(f)\omega ^{f2},\hfill & 1<f<2\hfill \\ constant,\hfill & f=2\hfill \\ C(f)\omega ^{f2},\hfill & f>2\hfill \end{array}$$
(1)
with $`\omega =\frac{f}{\lambda \rho }(E\omega _c)`$. This behavior is very uncommon, and seems to be particular to the choice of short-range single impurity potential, for long range one, one recovers the usual Lifshitz tail . In the standard Lifshitz argument, when there is no magnetic field, low energy states are localized in regions of space where impurities are absent; An empty region, of typical size $`\pi R^2`$, contains states with energy of the order of $`1/(\pi R^2)`$. For a Poisson distribution, the probability of not finding a single impurity in a volume $`\pi R^2`$ is $`\mathrm{exp}(\rho \pi R^2)`$. Identifying the energy to the inverse size of the empty region let to obtain the low energy behavior $`\mathrm{exp}(\frac{\rho }{E})`$. This heuristic argument has been later confirmed by an exact calculus . The question is whether it is possible to adapt this argument for the problem with strong magnetic field, in order to have a physical interpretation of the base of the spectrum, known from elsewhere to be constituated of localized states .
Let us consider the case where the density of impurities is less than the Landau degeneracy ($`f<1`$). The zero energy delta peak has a simple interpretation and corresponds to the delocalized state which is expected at the center of each Landau band :
these states are indeed linear combinations of Landau states, which vanish at the position of the impurities. And in a given volume $`V`$, the number of Landau states at disposal is $`\rho _lV`$. The number of constraints imposed on the zero energy states is $`\rho V`$, the number of impurities. As a consequence, the corresponding subspace of states has the dimension $`(\rho _l\rho )V`$ (unless as will be seen later that two impurities coincide). This gives as expected the degeneracy $`\rho _l(1f)`$ per unit volume, given by (1). What remains to be analysed is the $`\omega ^f`$ behavior of the excited states spectrum.
The paper is organized as follows, in the first part, the problem with a finite number $`N`$ of impurities is analysed in details. The zero modes are first extracted from the Hilbert space, which allows then to define the restriction of the Hamiltonian to the excited subspace as a $`N\times N`$ matrix. The two impurity case is explicitely solved and elucidates the mechanism which produces low energy states. The generalization to a cluster of impurities is then considered and an approximate expression of the lowest energy is found. In the second part, a statistical analyses is performed, using this expression, in order to find the most probable configurations corresponding to a given low energy, and the contribution to the DOS is computed in the case $`f<1`$.
2. The $`N`$ delta impurity problem
a. Coherent states basis for the excited subspace
The $`N`$ impurity problem, projected onto the LLL is defined by the Hamiltonian
$$H=\lambda P_0\underset{i=1}{\overset{N}{}}\delta (𝐫𝐫_i)P_0$$
(2)
after shifting the spectrum by a constant. $`\lambda `$ is the coupling constant of the delta potential, $`P_0`$ is the projection operator on the LLL. Let us consider the basis corresponding to the symmetric gauge, centered at position $`a`$ (using complex notation and magnetic units) :
$$\varphi _p^a(𝐫)=\frac{1}{\sqrt{\pi p!}}(za)^pe^{\frac{1}{2}(z\overline{z}+a\overline{a}2z\overline{a})}pN$$
(3)
In the situation where there is only one impurity, situated at position $`a`$, these states remain eigenstates, with zero energy for $`p>0`$ and with energy $`\frac{\lambda }{\pi }`$ for $`p=0`$. Let us associate to the impurity $`i`$ the coherent state $`\psi _i`$, corresponding to the only non-vanishing state at $`𝐫_i`$,
$$\psi _i(𝐫)=\varphi _0^{z_i}(𝐫)=\frac{1}{\sqrt{\pi }}e^{\frac{1}{2}(z\overline{z}+z_i\overline{z}_i2z\overline{z}_i)}$$
(4)
As already mentionned, the LLL is divided into two orthogonal subspaces : the zero energy subspace of dimension higher or equal to $`\rho _lVN`$, and the excited subspace of dimension less or equal to $`N`$. The subspace of wave-functions vanishing at $`𝐫_i`$, is orthogonal to $`\psi _i`$ and contains the zero energy states. Therefore the zero-energy subspace is orthogonal to the one generated by $`\psi _1`$,…$`\psi _N`$. Let us find under which conditions theses states are linearly independents.
Consider
$$\psi (𝐫)=\underset{i=1}{\overset{N}{}}a_i\psi _i(𝐫)$$
(5)
a linear combination of these $`N`$ states. In the Landau symmetric basis $`\psi `$ has the form
$$\psi (𝐫)=\underset{p=0}{\overset{\mathrm{}}{}}b_p\varphi _p(𝐫)$$
(6)
The relation between the $`b_p`$ and $`a_i`$ is
$$b_p=\frac{1}{\sqrt{p}!}\underset{i=1}{\overset{N}{}}a_i\overline{z}_i^pe^{\frac{1}{2}z_i\overline{z}_i}$$
(7)
In order for $`\psi `$ to be identically zero, the $`b_p`$ have to vanish. In particular, imposing this to the first $`N`$ ($`p=0\mathrm{}N1`$) leads to an homogeneous system of equations for the $`a_n`$, with a determinant proportionnal to the $`\overline{z}_i`$’s Vandermonde determinant, i.e. a completely antisymmetric function of these variables. Therefore a necessary condition for the $`\psi _i`$ to be linearly dependent is that two impurities coincide, and it is evidently sufficient. As a consequence $`\psi _1`$,…$`\psi _N`$ is a basis of the excited subspace (non-orthogonal).
Let us write the Hamiltonian into this basis. Starting from the decomposition (5) of an arbitrary excited state, the action of $`H`$ on this state is
$$<𝐫|H|\psi >=\lambda \underset{i=1}{\overset{N}{}}P_0(𝐫,𝐫_i)\underset{n=1}{\overset{N}{}}a_n\psi _n(𝐫_i)$$
(8)
with
$$P_0(𝐫,𝐫^{})=\frac{1}{\pi }e^{\frac{1}{2}(z\overline{z}+z^{}\overline{z}^{}2z\overline{z}^{})}$$
(9)
the kernel of the LLL projection operator. Using the fact that
$$<\psi _i|\psi _j>=\pi P_0(𝐫_i,𝐫_j)=\sqrt{\pi }\psi _j(𝐫_i)$$
(10)
we obtain
$$<𝐫|H|\psi >=\lambda \underset{i=1}{\overset{N}{}}\underset{j=1}{\overset{N}{}}a_jP_0(𝐫_i,𝐫_j)\psi _i(𝐫)$$
(11)
In conclusion the matrix elements of $`H`$ in the $`(\psi _1,\mathrm{},\psi _N)`$ basis are given by $`\lambda P_0(𝐫_i,𝐫_j)`$.
b. Two impurities
In the case with only two impurities, we can diagonalize this matrix. Choosing the spatial reference such that $`z_1=z_2=a/2`$, where $`a`$ is the distance between the two impurities, we have
$$H_2=\frac{\lambda }{\pi }\left(\begin{array}{cc}1& e^{\frac{1}{2}a^2}\\ e^{\frac{1}{2}a^2}& 1\end{array}\right)$$
(12)
The eigenvalues of this matrix correspond to the energies $`E_{}`$ and $`E_+`$ of the two excited states,
$$E_\pm =\frac{\lambda }{\pi }(1\pm e^{\frac{1}{2}a^2})$$
(13)
The corresponding wave-functions beeing (up to a normalization coefficient)
$$\psi _\pm =\psi _1\pm \psi _2=\frac{1}{\sqrt{\pi }}e^{\frac{1}{2}(z\overline{z}+a^2)}(e^{2az}\pm e^{2az})$$
(14)
In conclusion, when the two impurities are well seperated, the excited states have almost the same energy, comparable to the one impurity value. Whereas, a low energy state is obtained when the two impurities are close. For $`a<<1`$ this energy behave like
$$E_{}\frac{\lambda }{2\pi }a^2$$
(15)
c. Impurity cluster
The preceeding example suggests that low-energy states are associated with regions of high concentrations of impurities. Indeed, $`N`$ impurities involve $`N`$ localized states $`\psi _i`$ (which have a characteristic size $`1/\rho _l`$). Low-lying states are expected to appear when the overlap between these states starts to be important. Consider a situation (figure 4) where $`N`$ impurities are located in a small volume ($`\pi R^2`$), that is a cluster of impurities, then the $`N`$ corresponding states overlap essentially with the $`N_l=\rho _l\pi R^2`$ Landau states situated inside the disc (in the symmetric gauge, the states of the LLL are localized on a ring of radius $`\sqrt{l/\rho _l\pi }`$, where $`l`$ is the angular momentum ) So if $`N>N_l`$, we expect to have $`NN_l`$ low-energy states. It seems therefore natural to consider such configurations in order to analyse the bottom of the spectrum.
Let us estimate the lowest energy corresponding to such a configuration. Consider the decompositions (5) and (6) of an excited state. A low energy state is supposed to avoid the impurities. A way to construct such a state, is to impose on the $`b_p`$ to vanish until $`p=N2`$ included. In that case, $`\psi `$ has components only on the Landau states $`p>N2`$, situated at a distance from the center of the cluster greater or equal to $`\sqrt{\frac{N1}{\pi \rho _l}}`$. Such a state is given by the $`a_n`$, solution of the set of equations
$$b_p=0=\underset{n=1}{\overset{N}{}}a_ne^{\frac{1}{2}z_n\overline{z}_n}z_n^pp=0,\mathrm{}N2$$
(16)
And the solution, up to a proportionality constant is
$$a_ne^{\frac{1}{2}z_n\overline{z}_n}=C_{N,n}$$
(17)
where $`C_{N,n}`$ is the cofactor of the element $`(N,n)`$ in the Vandermonde type matrix:
$$D_N^p=\left(\begin{array}{cccc}1& \mathrm{}& \mathrm{}& 1\\ z_1& \mathrm{}& \mathrm{}& z_N\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ z_1^{N2}& \mathrm{}& \mathrm{}& z_N^{N2}\\ z_1^p& \mathrm{}& \mathrm{}& z_N^p\end{array}\right)$$
(18)
In particular, for $`p=0\mathrm{}N2`$,
$$detD_N^p=0=\underset{n=1}{\overset{N}{}}C_{N,n}z_n^p$$
(19)
which is precisely what we want. Moreover the $`C_{N,n}`$ have the expression
$$C_{N,n}=(1)^{\frac{N(N1)}{2}+n}\underset{p<qp,qn}{}(z_pz_q)$$
(20)
The matrix $`H_N=\lambda P_0(𝐫_i,𝐫_j)`$, written in $`\psi _1,\mathrm{}\psi _N`$ basis, is self-adjoint and positive, so its smallest eigenvalue $`E_0`$ verify the inequality:
$$E_0\frac{(\psi |H_N\psi )}{(\psi |\psi )}$$
(21)
with the norm defined by,
$$(\psi |\psi )=\underset{n=1}{\overset{N}{}}\overline{a}_na_n$$
(22)
From this choice and for the considered state the inequality rewrites
$$E_0E=\frac{\lambda }{\pi }\frac{\underset{n,m}{}\overline{C}_{N,n}C_{N,m}e^{z_m\overline{z}_n}}{_n|C_{N,n}|^2e^{|z_n|^2}}$$
(23)
and
$$E\frac{\lambda }{\pi }\frac{\underset{n,m}{}\overline{C}_{N,n}C_{N,m}e^{z_m\overline{z}_n}}{_n|C_{N,n}|^2}$$
(24)
Expanding the exponential in the preceeding expression, we observe that the first non-vanishing term corresponds to $`\frac{(z_m\overline{z}_n)^{N1}}{(N1)!}`$, because the determinant of $`D_N^p`$ is zero for $`p<N1`$. In addition, since $`|z_n|^2N_lN`$, the serie has a rapid decay, which allow to neglect the remainder of the expansion. We then obtain
$$E_0\frac{\lambda }{\pi }\frac{1}{(N1)!}\frac{|D_{N1}|^2}{_{n=1}^N|C_{N,n}|^2}$$
(25)
with $`D_{N1}=(1)^{\frac{N(N1)}{2}}_{p<q}(z_pz_q)`$ the Vandermonde determinant of the $`z_n`$ variables. If $`n^{}`$ labels the impurity for which, $`_{pn}|z_nz_p|^2`$ is minimum, then we have the inequality
$$\underset{n=1}{\overset{N}{}}|C_{N,n}|^2N|C_{N,n^{}}|^2$$
(26)
which leads to the approximate form for $`E_0`$
$$E_0\frac{\lambda }{\pi }\frac{1}{N!}\underset{p}{\mathrm{min}}\underset{np}{}|z_pz_n|^2$$
(27)
and which coincides with expression (15) in the two impurities case.
3. Cluster thermodynamic
We can now use this expression in order we understand the low energy behavior of the spectrum obtained by Brézin, Gross, Itzykson ($`f<1`$). We start from the principle that each impurity in the system gives rise to an excited state with energy depending on the configuration of the other impurities. If the concentration around one impurity is high, in other words if the impurity is in a cluster, then the corresponding energy is low and not affected by the impurities situated outside of the cluster (too far away for overlap effect). We can therefore associate a low-energy state to the formation of a cluster around an impurity, and by extension, a density of states per impurity. Let $`X_i`$ be a variable parametrizing the cluster configuration of the impurity $`i`$. Its contribution to the density of state per impurity is proportionnal to the probability $`P(X_i)`$ of being realized
$$\rho _i(E)=DX_iP(X_i)\delta (E(X_i)E)$$
(28)
So in average, the low-energy density of states by unit volume is proportional to $`\rho `$ times the preceeding expression. If we use now the expression (27) to evaluate the energy of the clusters, we see that to a given energy corresponds a statistical ensemble of clusters. Each cluster is defined by its volume $`N_l`$, its mean density $`\nu =N/N_l>1>f`$, and by the positions $`z_i,i=1\mathrm{}N`$, of the impurities in the cluster. At very low energy, the clusters are expected to be macroscopic objects, and have to be described by a finite number of macroscopic variables, giving the density profile, in replacement of the microscopic degrees of freedom (namely the individual positions of impurities). Let us look first for the distribution of positions in a cluster of energy $`E`$, size $`N_l`$ and mean density $`\nu `$. For a given configuration the energy is
$$E=e^{_{n=1}^N\mathrm{log}|z_n|^2N\mathrm{log}N+N}$$
(29)
using the Stirling formula ($`N!N^Ne^N`$) and with $`0|z_n|^2N_l`$ ($`N_l=\pi \rho _lR^2`$). Consider a subdivision of the cluster in $`M`$ cells, corresponding to intervals of the $`|z_n|^2`$ equal to $`a=N_l/M`$ (cells with identical area $`\frac{\pi R^2}{M}=\frac{a}{\rho _l}=\pi \delta r^2=\frac{\delta |z|^2}{\rho _l}`$). If $`n_p`$ is the number of impurities in the cell $`p`$, then the probability associated to this configuration $`(n_1,\mathrm{},n_M)`$ of the cluster is
$$P(n_1,\mathrm{},n_M,N)=\frac{N!}{n_1!\mathrm{}n_M!}(\frac{1}{M})^N\frac{(fN_l)^N}{N!}e^{fN_l}$$
(30)
Since we are interested in the continuum limit, define ($`x=|z|^2/N_l=\frac{p}{N_l}a`$)
$$\nu (x)dx=n_p=\nu (x)\frac{1}{M}1MN$$
(31)
the energy takes then the form
$$\mathrm{log}E=N_l_0^1[\nu (x)\mathrm{log}x+\nu \nu \mathrm{log}\nu ]𝑑x$$
(32)
and, at leading contribution in $`N_l`$, the probability is
$$\mathrm{log}P=N_l_0^1[\nu (x)(1\mathrm{log}\frac{\nu (x)}{f})f]𝑑x$$
(33)
with the constraint
$$_0^1\nu (x)𝑑x=\nu $$
(34)
Let us determine the configuration for which $`\mathrm{log}P`$ is maximum at fixed $`E`$, $`N_l`$ and $`\nu `$. Using a Lagrange multiplier for the energy constraint we obtain the saddle point equation
$$\frac{\mathrm{log}P}{\nu (x)}\alpha \frac{\mathrm{log}E}{\nu (x)}=0$$
(35)
The solution, with proper normalization, is
$$\nu (x)=\nu (1\alpha )(\frac{x}{N_l})^\alpha $$
(36)
$`\alpha `$ being implicitely determined through the relation between $`\gamma =\frac{1}{1\alpha }`$ and the energy,
$$\mathrm{log}E=N_l(\nu (\mathrm{log}\nu 1)+\gamma \nu )$$
(37)
and the probability now takes the form
$$\mathrm{log}P=\mathrm{log}EN_l(f\nu (1+\mathrm{log}\gamma f))$$
(38)
At a given energy, the possible configurations are parametrized by $`(N_l,\nu )`$. The saddle point is determined by the set of equations
$$\left(\frac{\mathrm{log}P}{N_l}\right)_{\nu ,E}=0$$
(39)
$$\left(\frac{\mathrm{log}P}{\nu }\right)_{N_l,E}=0$$
(40)
Using (37), which determines $`\gamma `$ these equations rewrites
$$\mathrm{log}\gamma f+\frac{1\mathrm{log}\nu }{\gamma }\frac{f}{\nu }=0$$
(41)
$$\mathrm{log}\gamma f\frac{1}{\gamma }\mathrm{log}\nu =0$$
(42)
So finally, $`\mathrm{log}P`$ has its maximum (which can be verified by computing second derivatives) at energy $`E`$ when $`\nu =1`$, $`N_l=f\mathrm{log}E/(1f)`$, which corresponds to $`\gamma =1/f`$. The other solution ($`\nu =f`$ and $`\gamma =1`$ is also a local maximum, but outside the range of interest for the parameters. For this type of configurations (paramatrized now only by $`X=N_l`$), we have the relation
$$\mathrm{log}\frac{P}{E}=f\mathrm{log}E$$
(43)
Using (28) (with the change of variable $`DXdE/E`$), we arrive at the expected low-energy behavior of the density of states
$$\rho (E)E^f$$
(44)
Moreover, states contributing to this behavior are associated to the existence of impurity clusters of size $`N_l=f\mathrm{log}E/(1f)`$ and the shape
$$\nu (x)=fx^{f1}$$
(45)
In contrary to the Lifshitz argument, the low energy states are associated with regions of high impurity concentration around which they localize, with caracteristic size $`\mathrm{log}\frac{1}{E}`$, a rough indication of a logarithmic divergency of the localization length, at least when $`f1`$. This feature might be very particular to the zero-range nature of the impurity scattering potential. When $`f`$ approaches $`1`$, this picture might be modified by some “percolation” effect of these cluster. For $`f`$ greater than $`1`$ the argument developped in this paper is not applicable, neither is the stantard Lifshits argument, to reproduce the low energy spectrum. This seems to indicate that states are not localized at the bottom of the spectrum in this case.
Acknowledgements
I am gratful to A. Comtet for interesting me to this question and for useful discussions, as well as to J.M. Leinaas, L. Pastur and S. Villain-Guillot. I am thankful to the Institute of physics of the Oslo University for hospitality, and the National Reasearch Council of Norway for financial support. |
no-problem/9912/nucl-th9912003.html | ar5iv | text | # Comment on “Nucleon form factors and a nonpointlike diquark”
## Abstract
Authors of Phys. Rev. C 60, 062201 (1999) presented a calculation of the electromagnetic form factors of the nucleon using a diquark ansatz in the relativistic three-quark Faddeev equations. In this Comment it is pointed out that the calculations of these form factors stem from a three-quark bound state current that contains overcounted contributions. The corrected expression for the three-quark bound state current is derived.
The proper way to include an external photon into a few-body system of strongly interacting particles described by integral equations has recently been discussed in detail . In particular, it has been shown how to avoid the overcounting problems that tend to plague four-dimensional approaches . The purpose of this Comment is to point out that just this type of overcounting is present in the work of Bloch et al. who calculated the electromagnetic current of the nucleon (and hence form factors), using the diquark ansatz in a four-dimensional Faddeev integral equation description of a three-quark system. Moreover, it is shown that the correct expression for the electromagnetic current consists of just three of the five contributions calculated in Ref. .
We begin by following Ref. which is devoted to the discussion of the electromagnetic current of three identical particles, and is therefore directly applicable to the present case of a three-quark system. There we used the gauging of equations method to show that the bound state electromagnetic current of three identical particles is given by
$$j^\mu =\overline{\mathrm{\Psi }}\mathrm{\Gamma }^\mu \mathrm{\Psi }$$
(1)
where $`\mathrm{\Psi }`$ ($`\overline{\mathrm{\Psi }}`$) is the wave function of the initial (final) three-body bound state, and $`\mathrm{\Gamma }^\mu `$ is the three-particle electromagnetic vertex function given by
$$\mathrm{\Gamma }^\mu =\frac{1}{6}\underset{i=1}{\overset{3}{}}\left(\mathrm{\Gamma }_i^\mu D_{0i}^1+\frac{1}{2}v_i^\mu d_i^1\frac{1}{2}v_i\mathrm{\Gamma }_i^\mu \right).$$
(2)
Here $`\mathrm{\Gamma }_i^\mu `$ is the electromagnetic vertex function of the $`i`$’th particle, $`d_i`$ is the propagator of particle $`i`$, $`v_i`$ is the two-body potential between particles $`j`$ and $`k`$ ( $`ijk`$ is a cyclic permutation of 123), $`v_i^\mu `$ is the five-point function resulting from the gauging of $`v_i`$, and $`D_{0i}d_jd_k`$ is the free propagator of particles $`j`$ and $`k`$. Because the bound state wave function $`\mathrm{\Psi }`$ is fully antisymmetric, we can write
$$j^\mu =\frac{1}{2}\overline{\mathrm{\Psi }}\left(\mathrm{\Gamma }_3^\mu D_{03}^1+\frac{1}{2}v_3^\mu d_3^1\frac{1}{2}v_3\mathrm{\Gamma }_3^\mu \right)\mathrm{\Psi }.$$
(3)
The second term on the right hand side (RHS) of this expression defines the two-body interaction current contribution
$$j_{\text{two-body}}^\mu =\frac{1}{4}\overline{\mathrm{\Psi }}v_3^\mu d_3^1\mathrm{\Psi },$$
(4)
while the first and third terms together make up the one-body current contribution to the bound state current. As discussed in Ref. , the first term on the RHS of Eq. (3) defines an electromagnetic current
$$j_{\text{overcount}}^\mu =\frac{1}{2}\overline{\mathrm{\Psi }}\mathrm{\Gamma }_3^\mu D_{03}^1\mathrm{\Psi }$$
(5)
which overcounts the one-body current contributions, while the third term defines a current
$$j_{\text{subtract}}^\mu =\frac{1}{4}\overline{\mathrm{\Psi }}v_3\mathrm{\Gamma }_3^\mu \mathrm{\Psi }$$
(6)
which plays the role of a subtraction term in that it removes the overcounted contributions. Here we shall not be concerned with the two-body interaction current, but rather, endeavour to examine the cancellations taking place between the first (“overcount”) and last (“subtract”) terms in detail. Thus we stress that the correct one-body contribution to the current, also known as the impulse approximation, is given by
$$j_{\text{impulse}}^\mu =j_{\text{overcount}}^\mu j_{\text{subtract}}^\mu .$$
(7)
To reveal these cancellations one writes the bound state wave function in terms of its Faddeev components
$$\mathrm{\Psi }=\mathrm{\Psi }_1+\mathrm{\Psi }_2+\mathrm{\Psi }_3$$
(8)
where
$$\mathrm{\Psi }_i=\frac{1}{2}D_{0i}v_i\mathrm{\Psi }.$$
(9)
These components are related through the Faddeev equations
$$\mathrm{\Psi }_i=\frac{1}{2}D_{0i}t_i(\mathrm{\Psi }_j+\mathrm{\Psi }_k)$$
(10)
where $`t_i`$ is the $`t`$ matrix for the $`j`$-$`k`$ system, and for identical fermions obey the symmetry relations
$$P_{12}\mathrm{\Psi }_1=\mathrm{\Psi }_2,P_{13}\mathrm{\Psi }_1=\mathrm{\Psi }_3,P_{23}\mathrm{\Psi }_1=\mathrm{\Psi }_1,\text{etc.}$$
(11)
where $`P_{ij}`$ is the operator interchanging particles $`i`$ and $`j`$. The term with overcounting is thus
$$j_{\text{overcount}}^\mu =\frac{1}{2}\left(\overline{\mathrm{\Psi }}_1+\overline{\mathrm{\Psi }}_2+\overline{\mathrm{\Psi }}_3\right)\mathrm{\Gamma }_3^\mu D_{03}^1\left(\mathrm{\Psi }_1+\mathrm{\Psi }_2+\mathrm{\Psi }_3\right)$$
(12)
which after the use of Eqs. (11) becomes a sum of five terms
$`j_{\text{overcount}}^\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}\overline{\mathrm{\Psi }}_3\mathrm{\Gamma }_3^\mu D_{03}^1\mathrm{\Psi }_3+\overline{\mathrm{\Psi }}_3\mathrm{\Gamma }_1^\mu D_{01}^1\mathrm{\Psi }_3`$ (13)
$`+`$ $`\overline{\mathrm{\Psi }}_2\mathrm{\Gamma }_1^\mu D_{01}^1\mathrm{\Psi }_3+\overline{\mathrm{\Psi }}_2\mathrm{\Gamma }_2^\mu D_{02}^1\mathrm{\Psi }_3+\overline{\mathrm{\Psi }}_2\mathrm{\Gamma }_3^\mu D_{03}^1\mathrm{\Psi }_3.`$ (14)
The diquark ansatz used in Ref. is equivalent to invoking the separable approximation for the two-body $`t`$ matrix:
$$t_i=h_i\tau _i\overline{h}_i,$$
(15)
with $`\tau _i`$ playing the role of the diquark propagator and $`h_i`$ describing the vertex between the diquark and two free quarks. In the case of separable interactions, it is usual to define the spectator-quasiparticle (quark-diquark) amplitude $`X_i`$ through the equation
$$\mathrm{\Psi }_i=G_0h_i\tau _iX_i$$
(16)
where $`G_0=d_1d_2d_3`$. In terms of these amplitudes the contribution of Eq. (14) becomes
$`j_{\text{overcount}}^\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}\overline{X}_3d_3\mathrm{\Gamma }_3^\mu d_3\tau _3\left(\overline{h}_3d_1d_2h_3\right)\tau _3X_3+\overline{X}_3\tau _3\left(\overline{h}_3d_1\mathrm{\Gamma }_1^\mu d_1d_2h_3\right)d_3\tau _3X_3`$ (17)
$`+`$ $`\overline{X}_2\tau _2d_2\overline{h}_2d_1\mathrm{\Gamma }_1^\mu d_1h_3d_3\tau _3X_3+\overline{X}_2\tau _2d_2\overline{h}_2\mathrm{\Gamma }_2^\mu d_2d_1h_3d_3\tau _3X_3`$ (18)
$`+`$ $`\overline{X}_2\tau _2d_2\overline{h}_2d_3\mathrm{\Gamma }_3^\mu d_1h_3d_3\tau _3X_3.`$ (19)
The five terms summed on the RHS of Eq. (19) are illustrated in Fig. 1. The last four terms are identical to the contributions $`2\mathrm{\Lambda }_\mu ^i`$ ($`i=2,\mathrm{},5`$) of Ref. , while the first term on the RHS of Eq. (19) differs from $`\mathrm{\Lambda }_\mu ^1`$ only in that our diquark propagator contains a dressing bubble. With or without this bubble, Eq. (19) does not give the correct impulse approximation.
With the help of Eq. (9), the subtraction term of Eq. (6) can be expressed as
$`j_{\text{subtract}}^\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{3}{}}}\overline{\mathrm{\Psi }}_3D_{03}^1\mathrm{\Gamma }_3^\mu \mathrm{\Psi }_i`$ (20)
$`=`$ $`{\displaystyle \frac{1}{2}}\overline{\mathrm{\Psi }}_3D_{03}^1\mathrm{\Gamma }_3^\mu \mathrm{\Psi }_3+\overline{\mathrm{\Psi }}_2D_{02}^1\mathrm{\Gamma }_2^\mu \mathrm{\Psi }_3.`$ (21)
Comparison with Eq. (14) shows that the first and fourth terms of Eq. (14) are overcounted.Actually the fourth and fifth terms of Eq. (14) are identical, as can easily be shown using Eq. (23). Thus, although we have singled out the fourth term as the one being overcounted, it should be understood that overcounting is due to either the fourth or fifth terms. Thus the correct expression for the impulse approximation is
$$j_{\text{impulse}}^\mu =\overline{\mathrm{\Psi }}_3\mathrm{\Gamma }_1^\mu D_{01}^1\mathrm{\Psi }_3+\overline{\mathrm{\Psi }}_2\mathrm{\Gamma }_1^\mu D_{01}^1\mathrm{\Psi }_3+\overline{\mathrm{\Psi }}_2\mathrm{\Gamma }_3^\mu D_{03}^1\mathrm{\Psi }_3.$$
(22)
For the work of Ref. , this means that the correct impulse approximation is given by the sum of their $`\mathrm{\Lambda }_\mu ^2`$, $`\mathrm{\Lambda }_\mu ^3`$, and $`\mathrm{\Lambda }_\mu ^5`$ only, and not, as claimed in their work, by the sum of all five $`\mathrm{\Lambda }_\mu ^i`$’s. Diagrammatically this means that the correct impulse approximation to the nucleon current in the diquark model corresponds to the sum of the second, third, and fifth diagrams of Fig. 1.
A further comment regarding Ref. concerns the numerical values obtained for the contributions $`\mathrm{\Lambda }_\mu ^1`$ and $`\mathrm{\Lambda }_\mu ^5`$. By using the symmetry properties of Eqs. (11), one can rewrite the Faddeev equations, Eqs. (10), as
$$\mathrm{\Psi }_i=D_{0i}t_i\mathrm{\Psi }_j$$
(23)
where $`ij`$. For separable interactions this implies that the amplitudes $`X_i`$ satisfy the equations
$$X_i=\overline{h}_iD_{0i}h_j\tau _jX_j$$
(24)
where $`ij`$. Using the time-reversed version of these equations one obtains $`\overline{X}_3=\overline{X}_2\tau _2\overline{h}_2d_1d_2h_3`$ which can be used to simplify the last term of Eq. (19):
$$\overline{X}_2\tau _2d_2\overline{h}_2d_3\mathrm{\Gamma }_3^\mu d_1h_3d_3\tau _3X_3=\overline{X}_3d_3\mathrm{\Gamma }_3^\mu d_3\tau _3X_3.$$
(25)
The RHS of this equation is just $`2\mathrm{\Lambda }_\mu ^1`$ of Ref. and we have therefore shown that
$$\mathrm{\Lambda }_\mu ^1=\mathrm{\Lambda }_\mu ^5.$$
(26)
This equality appears not to be reflected in the numerical results of Ref. as is evident from their Table II.
Finally, we note that the errors of Ref. have been perpetuated in a recent preprint . |
no-problem/9912/astro-ph9912539.html | ar5iv | text | # Untitled Document
THE FORMATION OF THE FIRST STARS <sup>1</sup> Presented at the 33rd ESLAB Symposium on “Star Formation from the Small to the Large Scale” held in Noordwijk, The Netherlands, November 2–5, 1999; to be published in the ESA Special Publications Series (SP-445), edited by F. Favata, A. A. Kaas, and A. Wilson
Richard B. Larson
Yale Astronomy Department
New Haven, CT 06520-8101, USA
larson@astro.yale.edu
ABSTRACT
The first bound star-forming systems in the universe are predicted to form at redshifts of about 30 and to have masses of the order of $`10^6`$M. Although their sizes and masses are similar to those of present-day star-forming regions, their temperatures are expected to be much higher because cooling is provided only by trace amounts of molecular hydrogen. Several recent simulations of the collapse and fragmentation of primordial clouds have converged on a thermal regime where the density is about $`10^3`$$`10^4`$cm<sup>-3</sup> and the temperature is about 300 K; under these conditions the Jeans mass is of the order of $`10^3`$M, and all of the simulations show the formation of clumps with masses of this order. The temperatures in these clumps subsequently rise slowly as they collapse, so little if any further fragmentation is expected. As a result, the formation of predominantly massive or very massive stars is expected, and star formation with a normal present-day IMF seems very unlikely. The most massive early stars are expected to collapse to black holes, and these in turn are predicted to end up concentrated near the centers of present-day large galaxies. Such black holes may play a role in the origin of AGNs, and the heavy elements produced by somewhat less massive stars also formed at early times may play an important role in chemically enriching the inner parts of large galaxies and quasars.
1. INTRODUCTION
How did star formation begin in the universe? And can we make any credible predictions about the properties of the first stars? Recent years have seen great advances in our theoretical understanding of the origin of structure in the universe and the formation of galaxies, and on the observational front we now have observations extending out to redshifts greater than 5 and looking back to the first billion years of the history of the universe. It is clear that much had already happened during that period: galaxies, or at least parts of galaxies, had already appeared on the scene; the first quasars had already formed; and the intergalactic medium had become ionized. Furthermore, the densest parts of the universe, including quasars, had already become significantly enriched in heavy elements. Most of the heavy elements in large galaxies like our own appear in fact to have been produced at relatively early times, and there has been only modest subsequent enrichment, leaving a general paucity of metal-poor stars compared with the predictions of simple models; this is the long-standing and apparently ubiquitous ‘G-dwarf problem’. Finally, a far-infrared background radiation has been observed which contains half of the present radiative energy density of the universe, and which is believed to have been produced mainly by dust-obscured star formation at high redshifts. All of these observations reflect in various ways the effects of early star formation, and some of them, particularly the rapid enrichment in heavy elements, would be easiest to understand if early star formation had produced preferentially massive stars. Therefore there has been great interest in understanding the earliest stages of star formation in the universe, and especially in understanding what the typical masses or mass spectrum of the first stars might have been.
2. THE FIRST STAR-FORMING SYSTEMS
Current cosmological models now provide us with a framework for addressing the problem of early star formation and specifying plausible initial conditions. Recent progress in cosmology has led to a set of variants of the standard CDM model which, while differing in quantitative details, all make similar predictions about the way in which structure emerged in the early universe. In all of the currently viable models, cosmic structure is built up hierarchically, and larger systems are assembled from smaller ones by the accumulation of matter at the nodes of a filamentary web-like network. These models provide a well-tested description of the development of galaxy clustering and large-scale structure in the universe, but they do not yet predict correctly the properties of individual galaxies and have not been tested at all on much smaller scales; therefore we cannot yet be confident that they predict in a quantitatively correct way the properties of the first star-forming systems. However, in all models, qualitatively similar things are expected to happen on smaller scales at earlier times, so we expect that the first star-forming systems were created in a similar way by the accumulation of matter at the nodes of a filamentary network. The various currently viable models can then be used to make extrapolations to smaller scales and earlier times that we can use as working hypotheses to investigate early star formation. We can also hope that the physics of early star formation was simpler than that of present-day star formation because the important physical processes involved only various forms of hydrogen, and because turbulence and magnetic fields might not yet have been introduced by the effects of prior star formation.
Current models predict that the first bound systems capable of forming stars appeared during the first $`10^8`$ years of the history of the universe at redshifts between 50 and 10, and that they had masses between about $`10^5`$ and $`10^8`$M (Peebles 1993; Haiman, Thoul, & Loeb 1996; Tegmark et al. 1997; Nishi & Susa 1999; Miralda-Escudé 2000). Most of this mass is dark matter, and the gas mass is about an order of magnitude smaller. The predicted radii of these first star-forming systems are between 10 and 500 parsecs, and their internal velocity dispersions are between about 5 and 25 km/s. These properties are not greatly different from those of present-day star-forming regions in galaxies, including giant molecular clouds, large complexes of gas and young stars, and starburst regions. However, an important difference is that the temperatures of the first collapsing clouds must have been much higher than those of present molecular clouds because of the absence of any heavy elements to provide cooling. In the metal-free primordial clouds, the only possibility for cooling below $`10^4`$K is provided by trace amounts of molecular hydrogen comprising up to about $`10^3`$ of the total hydrogen abundance, and H<sub>2</sub> molecules cannot cool the gas significantly below 100 K; the calculated temperatures of primordial clouds are in fact mostly in the range 200–1000 K (Anninos & Norman 1996; Haiman, Thoul, & Loeb 1996; Tegmark et al. 1997; Nakamura & Umemura 1999; Abel, Bryan, & Norman 1999; Bromm, Coppi, & Larson 1999). Thus, thermal pressure must have played a much more important role in primordial star formation than it does in present-day star formation; in particular, the Jeans mass must have been much larger in the primordial clouds than it is in present-day clouds, since the densities of the first star-forming clouds were not very different from those of present clouds, while their temperatures were much higher.
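As a rough consistency check, the quoted internal velocity dispersions are close to the virial velocities implied by the quoted masses and radii. A minimal sketch of this estimate (the particular mass–radius pairings below are representative choices, not values from any specific model):

```python
# Virial-velocity estimate v ~ sqrt(G M / R) for the quoted mass and radius ranges.
G = 6.674e-11         # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
pc = 3.086e16         # m

for M, R in [(1e5, 10.0), (1e6, 100.0), (1e8, 500.0)]:    # M in Msun, R in pc
    v = (G * M * M_sun / (R * pc)) ** 0.5 / 1e3            # km/s
    print(f"M = {M:.0e} Msun, R = {R:.0f} pc  ->  v ~ {v:.0f} km/s")
# Gives roughly 7-30 km/s, consistent with the 5-25 km/s quoted above.
```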
3. THERMAL PROPERTIES OF THE PRIMORDIAL GAS
Three different groups simulating the collapse and fragmentation of primordial star-forming clouds have recently obtained consistent results for their thermal behavior, and these results will be summarized briefly here. The most realistic calculations are those of Abel et al. (1998) and Abel, Bryan, & Norman (1999; hereafter ABN), who have started with a full cosmological simulation including a detailed treatment of the gas physics, and have followed the evolution of the first recollapsing density peak through many orders of magnitude in density using a progressively finer grid. At the latest stage reached, this calculation is well on the way to modeling the formation of a single massive star or small group of stars. Somewhat more idealized is the simulation by Bromm, Coppi, & Larson (1999, 2000; hereafter BCL) of the collapse of a simple ‘top-hat’ cosmological density perturbation with a standard spectrum of density fluctuations and a typical angular momentum; this calculation uses an SPH technique intended to follow the formation of a small group of dense clumps. In this simulation the gas collapses to a disk which then fragments into filaments and clumps, and the collapse of the clumps is again followed through a large increase in density. The most idealized and least cosmology-dependent calculations are those of Nakamura & Umemura (1999, 2000; hereafter NU), who have simulated the collapse and fragmentation of gas filaments with initial densities and temperatures appropriate for primordial clouds; these authors have followed the collapse of these filaments to higher densities than the other groups and into the regime where opacity becomes important.
In the simulations of ABN and BCL, the initial recollapse of a $`3\sigma `$ density peak at a redshift of $`30`$ compresses the gas and heats it to temperatures above 1000 K; this in turn increases the H<sub>2</sub> formation rate and causes the H<sub>2</sub> abundance to rise from its initial value of about $`10^6`$ to a quasi-equilibrium value of about $`10^3`$ of the total hydrogen abundance. The additional molecular hydrogen thus created cools the gas in the flattened disk-like configuration resulting from the collapse to a temperature of about 200–300 K, but the temperature then begins to rise again in the dense clumps that form, reaching $`500`$$`800`$K at the highest densities attained. The simulations by NU of the fragmentation of filaments do not start with any overall collapse and therefore do not show the same initial rise and fall in temperature, but they nevertheless yield similar H<sub>2</sub> abundances and similar temperatures of $`300`$$`500`$K over a similar range of densities. All three sets of calculations converge into the same density-temperature regime after the initial collapse has stopped and any flattened or filamentary configuration produced by it is beginning to fragment into clumps; at this stage, the density is about $`10^3`$$`10^4`$ atoms per cm<sup>3</sup> in all cases, and the temperature is about 200–300 K. Under these conditions the Jeans mass is of the order of $`10^3`$M, and all of these simulations find that clumps are formed with masses of this order.
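The quoted clump masses can be compared with a simple Jeans-type estimate. A minimal sketch using the critical Bonnor–Ebert mass (one common convention; the mean molecular weight of 1.2 and the identification of the quoted density with the total particle density are assumptions, and prefactors differ between definitions):

```python
# Rough check of the quoted fragmentation scale: critical Bonnor-Ebert mass
# M_BE ~ 1.18 c_s^4 / (G^(3/2) P^(1/2)), with c_s^2 = kT/(mu m_H) and P = n k T.
k, G, m_H, M_sun, mu = 1.381e-23, 6.674e-11, 1.673e-27, 1.989e30, 1.2

def bonnor_ebert_mass(T, n_cm3):
    cs2 = k * T / (mu * m_H)          # isothermal sound speed squared, m^2 s^-2
    P = n_cm3 * 1e6 * k * T           # gas pressure, Pa
    return 1.18 * cs2**2 / (G**1.5 * P**0.5) / M_sun

for T in (200, 300):
    for n in (1e3, 1e4):
        print(f"T = {T} K, n = {n:.0e} cm^-3  ->  M_BE ~ {bonnor_ebert_mass(T, n):.0f} Msun")
# Values of a few hundred to ~2000 Msun across this regime, i.e. of order 10^3 Msun as quoted.
```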
Thus, while more work is needed to verify the generality of these conclusions, it appears that the predicted thermal behavior and fragmentation scale of primordial clouds are fairly robust results, and depend mainly on the gas physics and not on the simulation techniques or the cosmological model assumed. An important feature of the gas physics that may help to account for this convergence of results is that at densities greater than $`10^4`$cm<sup>-3</sup> the level populations of the H<sub>2</sub> molecule come into thermodynamic equilibrium, causing the density dependence of the cooling rate to saturate and the cooling time to become independent of density; as a result, the cooling time again becomes longer than the free-fall time at the higher densities, and the collapse is significantly slowed down from a free fall. A further effect that may also be relevant is that the gas is still confined partly by the gravity of the dark matter when it begins to fragment into clumps, so that it is not yet fully self-gravitating and lingers for a time in this favored density-temperature regime, allowing more time for the Jeans scale to be imprinted on the dynamics.
4. FRAGMENTATION AND THE STELLAR MASS SCALE
During the collapse of the ‘top-hat’ density perturbations studied by BCL, the density fluctuations in the dark matter grow unimpeded, while the gas at first retains a smoother distribution because it is warmer; however, as the density of both the dark matter and the gas increase, the gas increasingly responds to the dark matter density fluctuations and begins to develop a similar clumpy structure. After about a free-fall time, the dark matter ‘virializes’ to form a small dark halo, while the gas settles into a smaller rotationally supported disk within this halo. For a system with a total mass of $`2\times 10^6`$M and a gas mass of $`10^5`$M collapsing at a redshift of $`30`$, the radius of this disk is about 15 pc. Irregularities in the disk develop into filamentary spiral structure, and the filaments then fragment into massive clumps. The density fluctuations in the dark matter thus appear to trigger the growth of structure in the gas and may determine where the most massive clumps will form, but once the gas has settled into a disk, its subsequent evolution appears to depend more on the thermal properties of the disk than on the initial conditions for the collapse. In particular, the masses of the clumps depend mainly on the Jeans scale in the disk and not on the nature of the initial density fluctuations. The most important effect of the dark matter may then simply be that its gravity confines the gas in a disk long enough for gravitational instabilities in the disk to play a significant role in its evolution.
The full cosmological simulation of ABN shows qualitatively similar behavior, but it exhibits filamentary structure from the beginning and yields a single dominant central clump in the first recollapsing density peak. The rezoning technique used by these authors follows with ever-increasing resolution the collapse of this clump, and at the last stage reached, it has developed a slowly contracting core with a mass of about $`100`$M surrounded by a flattened envelope with a mass of about $`1000`$M in which rotational support is important. The central core continues to contract nearly spherically and shows no sign of fragmentation into smaller objects. It is possible that additional objects might form in the flattened disk-like region around it, but the calculation focuses on resolving the collapse of the central core. The introduction of ‘sink particles’ in the simulation of BCL allows the calculation to be carried farther and the formation of a small group of objects to be followed, at the expense of not resolving what happens at the highest densities. Again, the clumps formed show no sign of fragmenting into smaller objects. The masses of these clumps are typically of the order of $`10^3`$M, with a range extending from the resolution limit of $`10^2`$M to more than $`10^4`$M. Experiments with a variety of initial conditions suggest that a typical clump mass of $`10^3`$M is a rather general result even when the overall structure of the system becomes much more complex than the simple disk discussed above.
The simulations of NU also find that primordial gas filaments tend to fragment into clumps with masses of the order of $`10^3`$M. If some of the gas condenses into much thinner and denser filaments before fragmenting, objects with much smaller masses can also be formed. The simulations do not yet indicate how much mass might fragment into smaller objects in this way, but it seems unlikely that a major fraction of the total mass will be involved. A question of great interest is whether any stars smaller than a solar mass can be formed; Uehara et al. (1996) suggested that such stars cannot form under primordial conditions because the onset of high opacity to the H<sub>2</sub> cooling radiation sets a minimum fragment mass that is approximately the Chandrasekhar mass, somewhat above $`1`$M. NU confirm this result from a more detailed treatment of the radiative transfer problem, and they conclude that fragmentation can continue down to a minimum mass between 1 and $`2`$M. If there is indeed a minimum mass for metal-free stars that is larger than $`1`$M, this would be a very important result because it would imply that we should see no metal-free stars at the present time, even if large numbers of such stars had once been formed, since all of these stars should by now have evolved.
5. STAR FORMATION THEN AND NOW
It may be useful to compare early star formation with present-day star formation in discussing the expected fragmentation scale and the likely masses of the stars formed. Since the predicted sizes and masses of the first star-forming systems are not greatly different from those of present-day molecular clouds, their average densities and internal pressures are also not very different. In fact, the typical gas pressure in the disks discussed above, which have an average density of $`10^3`$cm<sup>-3</sup> and a temperature of $`300`$K, is about the same as the typical pressure in present-day cold molecular cloud cores, which have a density of $`3\times 10^4`$cm<sup>-3</sup> and a temperature of $`10`$K. Thus, if one calculates the Jeans mass from the temperature and pressure of a star-forming cloud, for example by taking the mass of a marginally stable Bonnor-Ebert sphere which varies as the square of the temperature and inversely as the square root of the pressure, this mass is larger in primordial clouds than in present clouds just by the square of the temperature. Since the Jeans mass in present-day molecular clouds is of the order of one solar mass (see below), and since the temperature is about 30 times higher in the primordial clouds, the Jeans mass in these clouds is predicted to be about 1000 times higher, or about $`10^3`$M, as was noted above. Note that the Jeans mass will remain higher than present values even after the first heavy elements have been introduced, since the temperature still cannot fall below the cosmic background temperature, which is 57–85 K at redshifts of 20–30; this is nearly an order of magnitude higher than the present-day temperatures of cold molecular cloud cores, implying a Jeans mass that is still almost two orders of magnitude higher than present values, again assuming a similar pressure (Larson 1998).
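The scaling in this paragraph is easy to make explicit. A minimal sketch, using the quoted proportionality of the Bonnor–Ebert mass to the square of the temperature at fixed pressure, and assuming a present-day CMB temperature of 2.73 K together with the representative temperatures of 10 K and 300 K used above:

```python
# The Bonnor-Ebert (Jeans) mass scales as T^2 / sqrt(P); at similar pressure a ~30x
# higher temperature therefore raises it by roughly a factor of 10^3.
T_now, T_primordial = 10.0, 300.0              # K, representative values from the text
print(f"Jeans-mass ratio ~ {(T_primordial / T_now) ** 2:.0f}")   # ~900, i.e. ~10^3

# The cosmic background temperature sets a floor on T even after enrichment:
T_cmb0 = 2.73                                  # K, present-day CMB temperature (assumed)
for z in (20, 30):
    print(f"z = {z}:  T_cmb = {T_cmb0 * (1 + z):.0f} K")   # 57 K and 85 K
```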
How relevant is the Jeans mass in determining typical stellar masses? This question has been controversial in recent years, and it has been debated whether the present characteristic stellar mass of the order of one solar mass is determined by the scale of cloud fragmentation or by the onset of strong outflows that terminate the accretional growth of protostars at some stage (Meyer et al. 2000). Some recent evidence suggests that stellar masses are closely related to the masses of the dense clumps observed in star-forming molecular clouds, supporting the view that the characteristic stellar mass is determined by the scale of cloud fragmentation. Motte, André, & Neri (1998) have observed many small dense clumps in the $`\rho `$ Ophiuchus cloud that have masses between 0.05 and $`3`$M, and they find that the mass spectrum of these clumps is similar to the IMF of local field stars, including the flattening below a solar mass that is indicative of a characteristic mass around one solar mass. The mass spectrum of these clumps has also been compared with that of the pre-main-sequence stars in the same cloud by Luhman & Rieke (1999), and they find that the two functions are indistinguishable from each other and from the IMF of the local field stars. This suggests that stars form with masses similar to those of the observed dense clumps in molecular clouds, and that the characteristic stellar mass is therefore determined by the typical clump mass. This mass is in turn found to be similar to the Jeans mass calculated from the typical temperature and pressure in molecular clouds, which is approximately one solar mass or slightly less (Larson 1985, 1996, 1999, 2000; Meyer et al. 2000).
Since the predicted thermal behavior of primordial clouds is qualitatively similar to that of present-day molecular clouds except for temperatures that are about 30 times higher at a given pressure, similar conclusions might be expected to hold for primordial clouds, with masses about three orders of magnitude larger for both the clumps and the stars formed. This suggests that the first stars were very massive objects, with typical masses of perhaps several hundred solar masses. However, definite conclusions cannot yet be drawn about the masses of the first stars because the final fate of the clumps discussed above has not yet been determined. The possibility that they will fragment into many smaller objects has not been ruled out, even though this seems unlikely. In the present-day case, it appears on both observational and theoretical grounds that the fragmentation of collapsing Jeans-mass clumps is limited to the formation of at most a small multiple system, the typical outcome being the formation of a binary system (Larson 1995). Numerical simulations illustrating the formation of binary and multiple systems (e.g., Burkert, Bate, & Bodenheimer 1997) have usually assumed an isothermal equation of state, but the temperatures in the primordial clumps discussed above do not remain constant but rise slowly as the clumps contract, and this can only reduce the amount of subsequent fragmentation that occurs. However, even if further fragmentation is unimportant, it remains possible that significant differences from the typical present-day situation could arise because of the much higher stellar masses expected at early times; for example, radiative feedback effects such as the dissociation of H<sub>2</sub> molecules by ultraviolet radiation might reduce the efficiency of early star formation (Haiman 2000; Ferrara & Ciardi 2000), and they might also reduce the masses of the stars formed by preventing the accretion by them of most of the initial clump mass (Abel 1999).
In summary, it appears to be a fairly general result that the first star-forming clouds fragment into massive clumps with masses of the order of $`10^3`$M and temperatures of a few hundred K; this result depends mainly on the well-understood thermal physics of the gas and not on the details of the initial conditions or the simulation technique used. It also appears unlikely that these massive clumps will fragment into many smaller objects as they collapse to higher densities. The stars that form in them will then almost certainly be much more massive than typical present-day stars. The IMF of the first stars was therefore almost certainly top-heavy, and it seems very unlikely that a standard present-day IMF could have been produced. The first stars might typically have been massive ($`10^2`$M) or possibly very massive ($`10^3`$M) objects, perhaps similar in the latter case to the ‘VMOs’ studied by Carr, Bond, & Arnett (1984) and Bond, Arnett, & Carr (1984) and reviewed by Carr (1994). Of course, any predictions concerning the properties of the first stars are as yet untested by any direct observations, and we must await observations of very high-redshift objects with instruments such as NGST before we can know for sure whether the work that has been described here is on the right track.
6. POSSIBLE EFFECTS OF EARLY STAR FORMATION
How were the first star-forming systems related to the galaxies that we presently see, and what role might the first stars have played in accounting for the properties of the systems that we see? The first star-forming units probably cannot be identified with any presently observed systems, since they would have been too small and too loosely bound to survive or to retain any gas after forming the predicted massive stars. These stars would have evolved within a few Myr, and any whose masses were larger than about 250 M would have collapsed to black holes containing at least half of the initial stellar mass (Bond, Arnett, & Carr 1984; Heger, Woosley, & Waters 2000). If a significant fraction of the first stars had such large masses, much of the matter that condensed into them might soon have ended up in black holes of similar mass. Massive black holes formed in this way at early times could have had a number of interesting consequences, including seeding the formation of supermassive black holes in galactic nuclei and thus accounting for the origin of AGNs.
In standard hierarchical cosmologies, the first objects form preferentially in the densest parts of the universe, and they then become incorporated through a series of mergers into systems of larger and larger size which eventually become present-day large galaxies and clusters of galaxies. The first $`3\sigma `$ density peaks are predicted to be strongly clustered on the scale of clusters of galaxies and significantly clustered even on the scale of individual galaxies, and this means that the first stars or their remnants should now be concentrated in the inner parts of large galaxies, which in turn are mostly in large clusters (Miralda-Escudé 2000; White & Springel 2000). That is, these objects should now be found mostly in places like the inner parts of M87 rather than in the outer halos of galaxies like the Milky Way. If massive black holes were present from early times in the dense regions that later became the inner parts of large galaxies, they might have become increasingly centrally concentrated because of the strong gravitational drag effects that would have been present in such regions. The most massive ones might then have served as the seeds for building up larger black holes by accretion, and mergers among them might also have contributed to building up very massive central black holes. If the present galaxies with large spheroids were built up by a series of mergers of smaller systems that already contained central black holes, these central black holes might have merged along with their host systems to form increasingly massive black holes at the centers of galaxies of increasing mass. It is conceivable that most of the remnants of the first stars could have ended up in this way in the supermassive black holes of AGNs; in this case, an understanding of early star formation could turn out to be very relevant to understanding the origin of AGNs.
The accumulation processes that build massive black holes in galactic nuclei might be analogous to the processes that form massive stars at the centers of star clusters. Massive newly formed stars are always found in clusters, typically near their centers, and this can only be understood if these massive stars were in fact formed near the cluster center (Bonnell & Davies 1998). Since the various accretion or accumulation processes that might increase stellar masses are most important in the dense central parts of forming clusters, while the Jeans mass is not unusually high there, this suggests that massive stars are built up by accumulation processes in clusters and are not formed by direct cloud fragmentation (Larson 1982; Bonnell, Bate, & Zinnecker 1998; Clarke, Bonnell, & Hillenbrand 2000; Bonnell 2000). The most massive stars may even be produced by collisions and mergers between less massive stars in the extremely dense cores of forming clusters (Bonnell, Bate, & Zinnecker 1998; Stahler, Palla, & Ho 2000). As is the case with galaxies, interactions and mergers among subunits may play an important role in driving accretion onto central objects and possibly causing them to merge, and this could lead to the formation of stars of increasing mass as clusters are built up by the merging of substructure (Clarke, Bonnell, & Hillenbrand 2000; Bonnell 2000; Larson 2000). Such processes might also account for the observed power-law upper stellar IMF; in the simple model suggested by Larson (2000), the most massive star accretes a fixed fraction (1/6) of the remaining gas each time two subunits merge, and this leads to a Salpeter-like upper IMF with a slope $`x=1.36`$ in which the mass of the most massive star increases as the 0.74 power of the mass of the cluster. A modification of this model might be able to account for the fact that the masses of the nuclear black holes in galaxies are approximately proportional to the bulge mass, typically being about 0.005 times the bulge mass. If nuclear black holes are built up by the same kind of sequence of mergers and associated accretion events, and if the resulting black holes merge into a single object when their masses exceed $`10^6`$M, then the central black hole mass increases in proportion to the bulge mass for larger masses and is about 0.005 times the bulge mass, as observed. Thus there could be a close analogy between black hole formation in galactic nuclei and the formation of massive stars in clusters.
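The quoted numbers for this simple accumulation model are mutually consistent, as a short calculation shows. The sketch below only checks that consistency (the per-doubling growth factor of 5/3 is inferred from the quoted exponent, and the halving of cluster numbers per mass doubling is an assumption of the toy hierarchy); it is not a reconstruction of the full model of Larson (2000):

```python
import math

# If the most massive star grows by a fixed factor f each time the cluster mass doubles
# (with clusters at each level twice as massive and half as numerous), then
# m_max grows as M^(ln f / ln 2) and the cumulative upper IMF has slope x = ln 2 / ln f.
f = 5.0 / 3.0   # per-doubling growth factor inferred from the quoted exponent (2^0.74 ~ 5/3)

exponent = math.log(f) / math.log(2.0)
slope_x = math.log(2.0) / math.log(f)
print(f"m_max grows as M^{exponent:.2f};  upper-IMF slope x = {slope_x:.2f}")
# -> M^0.74 and x = 1.36, matching the two numbers quoted in the text.
```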
The first stars must also have produced the first heavy elements in the universe. While stars more massive than 250 M are predicted to collapse completely to black holes without ejecting any heavy elements, somewhat less massive primordial stars would have exploded as supernovae and begun to enrich their surroundings with heavy elements. Stars with masses in the range between about 100 and $`250`$M are predicted to be partly or completely disrupted by the pair-production instability (Bond, Arnett, & Carr 1984; Heger, Woosley, & Waters 2000), producing an energetic supernova event and dispersing some or all of the heavy elements produced during their evolution. Thus such objects could plausibly have been the first sources of heavy elements. Metal-free stars with masses between about 35 and $`100`$M probably again collapse to black holes (Heger, Woosley, & Waters 2000), while stars with masses between 10 and $`35`$M can explode as type II supernovae, possibly providing a second source of heavy elements if significant numbers of such stars were formed at the earliest times.
The first star-forming systems were probably too short-lived and too weakly bound to retain any of the heavy elements produced, so the first systems capable of self-enrichment were probably larger systems that formed somewhat later as larger cosmological structures collapsed, incorporating some of the remnants and nucleosynthetic products of the first systems. The dwarf galaxies of the Local Group are found to have a minimum mass of about $`2\times 10^7`$M (Mateo 2000), which is about the minimum mass needed for a galaxy to retain ionized gas. The retention of ionized gas is essential for subsequent star formation and chemical enrichment to occur because most of the gas in a galaxy is cycled many times through an ionized phase, and the dispersal and mixing of heavy elements also probably occurs mostly in an ionized medium. Preliminary calculations similar to those of BCL but with H<sub>2</sub> replaced as the dominant coolant by a low abundance of heavy elements suggest that in such circumstances there may be a threshold metallicity between $`10^4`$ and $`10^3`$ times the solar value below which no cooling occurs but above which some of the gas can cool to temperatures as low as the cosmic background temperature. This would reduce the typical masses of the dense clumps formed, but not to present-day values because the background temperature is still relatively high at high redshifts. For example, the first systems with masses as large as $`2\times 10^7`$M are predicted to form at a redshift of $`25`$, and the cosmic background temperature at that redshift is $`71`$K. If it is still valid to assume a pressure similar to that in present-day star-forming clouds, the predicted Jeans mass is then about $`50`$M, still much larger than present-day values, suggesting that star formation at high redshifts would still have produced a top-heavy IMF even after the first heavy elements had been introduced (Larson 1998).
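The two numbers at the end of this paragraph follow from the same temperature-squared scaling used earlier, assuming a present-day CMB temperature of 2.73 K and a normalisation of roughly one solar mass at the 10 K of present-day cloud cores; a minimal check:

```python
# CMB temperature at z = 25 and the Jeans mass it implies at comparable pressure,
# using M_J proportional to T^2, normalised to ~1 Msun at ~10 K.
T_cmb0, z = 2.73, 25
T = T_cmb0 * (1 + z)
print(f"T_cmb(z={z}) ~ {T:.0f} K,  M_J ~ {1.0 * (T / 10.0) ** 2:.0f} Msun")
# -> about 71 K and about 50 Msun, as quoted.
```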
Early star formation with a top-heavy IMF in those regions that later became the inner parts of large galaxies could help to resolve a number of problems regarding the chemical abundances of galaxies and quasars. The heavy-element abundances in galaxies increase systematically with mass up to the largest masses known, and they also increase radially inward in large galaxies of all types. Neither of these trends is fully explained by standard models assuming a universal IMF, even if gas flows are invoked, but both might be explained if early star formation with a top-heavy IMF enriched preferentially the inner parts of the largest galaxies. The nuclear regions of the largest galaxies contain the most metal-rich stars known, and quasars can be even more metal-rich, with metallicities up to 5 or 10 times solar. These high metallicities cannot be explained with a standard IMF, but they might be explainable if the nuclear regions of galaxies containing AGNs were enriched by massive stars formed at early times with a top-heavy IMF, as would be expected in the picture sketched above. Star formation with a top-heavy IMF during the formation of cluster elliptical galaxies, perhaps associated with merger-induced starbursts, could also help to explain the high metallicities of both the stars and the hot gas in clusters of galaxies (Zepf & Silk 1996; Larson 1998). As was noted by these authors, a top-heavy early IMF might in addition help to account for the observed far-infrared cosmic background radiation, which is believed to be produced mostly by obscured high-redshift massive star formation, without violating limits on the current density of low-mass stars in the universe (see also Dwek et al. 1998). Finally, we note that the ‘G-dwarf problem’ mentioned in the introduction might be solved or alleviated if such processes made a significant contribution to enriching galaxies generally, even outside the nuclear regions.
7. SUMMARY
The first theoretical studies of the formation of primordial or ‘population III’ stars based on current cosmological models with a full treatment of the relevant physics have been begun within the past few years, and work by several groups is continuing. These studies have obtained consistent results for the thermal behavior of the first star-forming clouds, implying a scale of fragmentation that is of the order of $`10^3`$M, and all of the simulations have found the formation of clumps whose masses are of this order. This strongly suggests that the first stars were typically massive or very massive objects, that is, it suggests a very top-heavy early stellar IMF, although quantitative predictions are not yet possible. More work is needed to verify the generality of these results, and more work is also needed to follow the later evolution of the clumps and predict the masses of the stars that form in them, but it seems very unlikely that the earliest star formation could have yielded a normal present-day IMF. After the first massive stars had formed, the properties of star-forming systems and the universe must rapidly have become much more complex as many feedback effects came into play, but it appears that some of the properties of galaxies and quasars might be explainable as a result of early star formation with a top-heavy IMF occurring in the dense regions that became the inner parts of large galaxies. The coming years are sure to see much progress, both theoretical and observational, in the currently very active quest to understand early star formation.
REFERENCES
Abel T. 1999, private communication
Abel T., Anninos P., Norman M. L., Zhang, Y. 1998, ApJ 508, 518
Abel T., Bryan G., Norman M. L. 1999, in Evolution of Large-Scale Structure: From Recombination to Garching, Banday A. J., Sheth R. K., Da Costa L. N. (eds.), ESO, Garching, p. 363 (ABN)
Anninos P., Norman M. L. 1996, ApJ 460, 556
Bond J. R., Arnett W. D., Carr B. J. 1984, ApJ 280, 825
Bonnell I. A. 2000, in Stellar Clusters and Associations: Convection, Rotation, and Dynamos, in press (astro-ph/9908268)
Bonnell I. A., Davies M. B. 1998, MNRAS 295, 691
Bonnell I. A., Bate M. R., Zinnecker H. 1998, MNRAS 298, 93
Bromm V., Coppi P. A., Larson R. B. 1999, ApJ 527, L5 (BCL)
Bromm V., Coppi P. A., Larson R. B. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press
Burkert A., Bate M. R., Bodenheimer P. 1997, MNRAS 289, 497
Carr B. J. 1994, ARA&A 32, 531
Carr B. J., Bond J. R., Arnett W. D. 1984, ApJ 277, 445
Clarke C. J., Bonnell I. A., Hillenbrand L. A. 2000, in Protostars and Planets IV, Mannings V., Boss A. P., Russell S. S. (eds.), University of Arizona Press, Tucson, in press (astro-ph/9903323)
Dwek E., Arendt R. G., Hauser M. G., Fixsen D., Kelsall T., Leisawitz D., Pei Y. C., Wright E. L., Mather J. C., Moseley S. H., Odegard N., Shafer R., Silverberg R. F., Weiland J. L. 1998, ApJ 508, 106
Ferrara A., Ciardi B. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press
Haiman Z. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press
Haiman Z., Loeb A. 1997, ApJ 483, 21
Haiman Z., Thoul A. A., Loeb A. 1996, ApJ 464, 523
Heger A., Woosley S. E., Waters R. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press
Larson R. B. 1982, MNRAS 200, 159
Larson R. B. 1985, MNRAS 214, 379
Larson R. B. 1995, MNRAS 272, 213
Larson R. B. 1996, in The Interplay Between Massive Star Formation, the ISM and Galaxy Evolution, Kunth D., Guiderdoni B., Heydari-Malayeri M., Thuan T. X. (eds.), Editions Frontières, Gif sur Yvette, p. 3
Larson R. B. 1998, MNRAS 301, 569
Larson R. B. 1999, in The Orion Complex Revisited, McCaughrean M. J., Burkert A. (eds.), ASP Conference Series, San Francisco, in press
Larson R. B. 2000, in Star Formation 1999, Nakamoto T. (ed.), in press (astro-ph/9908189)
Luhman K. L., Rieke G. H. 1999, ApJ 525, 440
Mateo M. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press
Meyer M. R., Adams F. C., Hillenbrand L. A., Carpenter J. M., Larson R. B. 2000, in Protostars and Planets IV, Mannings V., Boss A. P., Russell S. S. (eds.), University of Arizona Press, Tucson, in press (astro-ph/9902198)
Miralda-Escudé J. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press (astro-ph/9911214)
Motte F., André P., Neri R. 1998, A&A 336, 150
Nakamura F., Umemura M. 1999, ApJ 515, 239
Nakamura F., Umemura M. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press (NU)
Nishi R., Susa H. 1999, ApJ 523, L103
Peebles P. J. E. 1993, Principles of Physical Cosmology, Princeton University Press, Princeton, p. 635
Stahler S. W., Palla F., Ho P. T. P. 2000, in Protostars and Planets IV, Mannings V., Boss A. P., Russell S. S. (eds.), University of Arizona Press, Tucson, in press
Tegmark M., Silk J., Rees M. J., Blanchard A., Abel T., Palla F. 1997, ApJ 474, 1
Uehara H., Susa H., Nishi R., Yamada M., Nakamura T. 1996, ApJ 473, L95
White S. D. M., Springel V. 2000, in The First Stars, Weiss A., Abel T., Hill V. (eds.), Springer, Berlin, in press (astro-ph/9911378)
Zepf S. E., Silk J. 1996, ApJ 466, 114
no-problem/9912/hep-th9912032.html | ar5iv | text | # Quantum Annihilation of Anti-de Sitter Universe
## Abstract
We discuss the role of conformal matter quantum effects (using the large-$`N`$ anomaly-induced effective action) in the creation and annihilation of an Anti-de Sitter Universe. An arbitrary GUT with conformally invariant field content is considered. On a purely gravitational (supersymmetric) AdS background, the quantum effects act against an (already existing) AdS Universe. The annihilation of such a Universe occurs, which is common to any conformal matter theory. On a dilaton-gravitational background, where there is a dilatonic contribution to the induced effective action, the quantum creation of an AdS Universe is possible, assuming fine-tuning of the dilaton.
1. There is currently large interest in studies related to Anti-de Sitter (AdS) backgrounds, for several reasons. First of all, via the AdS/CFT correspondence (for a review, see ), by investigating classical IIB supergravity on an AdS background (after compactification) one can get answers for the dual (boundary) quantum gauge theory. Second, AdS space is an extremely symmetric one (maximum number of Killing vectors), like Minkowski space. Moreover, it is a well-known supersymmetric background for supergravity theories. For strings, backgrounds with an AdS section are often suspected to be exact vacuum states. Third, according to some cosmological data the inflationary Universe could have a spatial section with negative curvature. Generalizing this to a brane world, one can speculate on the possibility of a 4d AdS stage (or AdS regions) in the early Universe.
The important question is: how should one create the AdS regions in the early Universe? The mechanisms of such creation may be important for the possible presence of AdS black holes. In this note we investigate the role of matter quantum effects in the Anti-de Sitter Universe. It is well known that the inflationary (de Sitter) Universe may be created completely by quantum effects of conformally invariant matter. On the contrary, as we will show below, the quantum creation of an Anti-de Sitter Universe by quantum effects is rather unrealistic; the quantum annihilation of such a Universe occurs instead. Only in the presence of a dilaton may fine-tuning of the dilaton solution lead to quantum creation of an AdS Universe (the example considered is quantum maximally supersymmetric YM theory conformally coupled to background conformal supergravity).
2. Let us start from one form of the metric describing four-dimensional Anti-de Sitter (AdS₄) spacetime,
$`\text{d}s^2=\text{e}^{2\lambda \stackrel{~}{x}_3}(\text{d}t^2-(\text{d}x^1)^2-(\text{d}x^2)^2)-(\text{d}\stackrel{~}{x}^3)^2,`$ (1)
having a negative effective cosmological constant $`\mathrm{\Lambda }=-\lambda ^2`$. One may present this metric in conformally flat form via the transformation
$`y=x^3={\displaystyle \frac{\text{e}^{-\lambda \stackrel{~}{x}_3}}{\lambda }}.`$ (2)
Then
$`\text{d}s^2=a^2(\text{d}t^2-\text{d}𝒙^2)=a^2\eta _{\mu \nu }\text{d}x^\mu \text{d}x^\nu ,`$ (3)
with $`a=\text{e}^{\lambda \stackrel{~}{x}^3}=1/(\lambda x^3)=1/(\lambda y)`$. This form of the metric is often useful in the study of quantum gauge theory via the SG dual.
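As a quick cross-check of this coordinate form, the conformally flat metric with $`a=1/(\lambda y)`$ has a constant scalar curvature proportional to $`\lambda ^2`$, as expected for a maximally symmetric space. A minimal symbolic computation (assuming the mostly-minus signature written above; the overall sign of the result depends on curvature conventions):

```python
import sympy as sp

# Scalar curvature of g = a(y)^2 * diag(1,-1,-1,-1) with a = 1/(lam*y).
t, x1, x2, y, lam = sp.symbols('t x1 x2 y lam', positive=True)
X = [t, x1, x2, y]
a = 1 / (lam * y)
g = sp.diag(a**2, -a**2, -a**2, -a**2)
ginv = g.inv()

# Christoffel symbols Gamma[l][i][j] = Gamma^l_{ij}
Gamma = [[[sum(ginv[l, m] * (sp.diff(g[m, i], X[j]) + sp.diff(g[m, j], X[i])
                             - sp.diff(g[i, j], X[m])) for m in range(4)) / 2
           for j in range(4)] for i in range(4)] for l in range(4)]

def ricci(i, j):
    return sp.simplify(sum(sp.diff(Gamma[l][i][j], X[l]) - sp.diff(Gamma[l][i][l], X[j])
                           + sum(Gamma[l][l][m] * Gamma[m][i][j]
                                 - Gamma[l][j][m] * Gamma[m][i][l] for m in range(4))
                           for l in range(4)))

R_scalar = sp.simplify(sum(ginv[i, j] * ricci(i, j) for i in range(4) for j in range(4)))
print(R_scalar)   # a y-independent constant proportional to lam**2 (12*lam**2 up to sign conventions)
```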
Imagine now that the early Universe is described by some quantum grand unified theory (GUT) containing $`N_s`$ conformal scalars, $`N_f`$ spinors and $`N_v`$ vectors. It is enough to consider only free fields in such a GUT, as radiative corrections are not important for our purposes. Moreover, a large class of GUTs - asymptotically finite and asymptotically conformal GUTs (see the book for a review) - may be presented as a collection of free fields at strong curvature (in the early Universe). Unlike de Sitter space, AdS space is a supersymmetric background of the GUT if the latter is supersymmetric. Note that quantum fields on negative curvature space have been reviewed in ref. .
The quantum GUT under consideration produces the well-known conformal anomaly (for a review see )
$`T=b\left(F+{\displaystyle \frac{2}{3}}\Box R\right)+b_1G+b_2\Box R,`$ (4)
where $`b`$ and $`b_1`$ are constants,
$`b={\displaystyle \frac{N_s+6N_f+12N_v}{120(4\pi )^2}},b_1=-{\displaystyle \frac{N_s+11N_f+62N_v}{360(4\pi )^2}},`$ (5)
while $`F`$ is the square of the Weyl tensor,
$`F=R_{\mu \nu \alpha \beta }R^{\mu \nu \alpha \beta }-2R_{\mu \nu }R^{\mu \nu }+{\displaystyle \frac{1}{3}}R^2,`$ (6)
and $`G`$ is the Gauss–Bonnet invariant. Note that the constant $`b_2`$ is known to be in general ambiguous, as it may be changed by a finite renormalization of the gravitational action. In the following we put $`b_2=0`$, since this does not influence the physical consequences. The features of a particular GUT are encoded in the numerical values of $`b`$ and $`b_1`$, while their signs (which is what matters for us) do not change from one theory to another.
Using the above conformal anomaly one can easily calculate the anomaly-induced effective action . Having in mind the applications for quantum induced AdS space we consider a metric similar to (3), i.e. $`g_{\mu \nu }=\text{e}^{2\sigma (y)}\eta _{\mu \nu }`$, where $`\eta _{\mu \nu }`$ is the Minkowski metric. Then the techniques of ref. may easily be applied, and the following anomaly-induced effective action is obtained,
$`W={\displaystyle \int \text{d}^4x\left[2b_1\sigma \Box ^2\sigma -2(b+b_1)(\Box \sigma +\eta ^{\mu \nu }(\partial _\mu \sigma )(\partial _\nu \sigma ))^2\right]}.`$ (7)
Since $`\sigma `$ is assumed to depend only on $`y`$, this expression may be simplified:
$`W=V_3{\displaystyle \int \text{d}y\left[2b_1\sigma \sigma ^{\prime \prime \prime \prime }-2(b+b_1)(\sigma ^{\prime \prime }+(\sigma ^{\prime })^2)^2\right]}.`$ (8)
Here $`\sigma ^{\prime }=\text{d}\sigma /\text{d}y`$. Formally, this expression coincides with the effective action on a time-dependent conformally flat background, but its physics is of course different. One should also remember that the total effective action consists of $`W`$ plus some conformally invariant functional. In the case of a conformally flat background, as considered here, this conformally invariant functional is a non-essential constant. Only if one considers periodicity in some of the coordinates (say, for an AdS BH) will this constant become more important, as a kind of Casimir energy, since it will depend on the radius of the compact dimension.
In order to take into account the quantum matter effects in the AdS Universe we should add the anomaly-induced action to the classical gravitational action
$`S_{\mathrm{cl}}={\displaystyle \frac{1}{\kappa }}{\displaystyle \int \text{d}^4x\sqrt{g}(R+6\mathrm{\Lambda })}={\displaystyle \frac{1}{\kappa }}{\displaystyle \int \text{d}^4x\text{e}^{4\sigma }(6\text{e}^{-2\sigma }((\sigma ^{\prime })^2+\sigma ^{\prime \prime })+6\mathrm{\Lambda })}.`$ (9)
The sum of the classical action and the quantum effective action describes the dynamics of the whole quantum system.
Equations of motion following from the action $`S_{\mathrm{cl}}+W`$ are
$`{\displaystyle \frac{a^{\prime \prime \prime \prime }}{a}}{\displaystyle \frac{4a^{}a^{\prime \prime \prime }}{a^2}}{\displaystyle \frac{3(a^{\prime \prime })^2}{a^2}}+\left(66{\displaystyle \frac{b_1}{b}}\right){\displaystyle \frac{a^{\prime \prime }(a^{})^2}{a^3}}+{\displaystyle \frac{6b_1(a^{})^4}{ba^4}}{\displaystyle \frac{a}{4b\kappa }}\left(12a^{\prime \prime }24\mathrm{\Lambda }a^3\right)=0.`$ (10)
Here prime means derivative with respect to $`y`$. One may now look for the special AdS-like solutions of Eq. (10): $`a=c/y`$. When there are no quantum corrections, there is a solution with $`c=1/\sqrt{-\mathrm{\Lambda }}`$, in accordance with the AdS metric. When the effective cosmological constant $`\mathrm{\Lambda }=0`$, Eq. (10) reduces to $`c^2=b_1\kappa `$ (at $`a(y)=c/y`$). However, $`c^2=b_1\kappa `$ leads to an imaginary scale factor $`a`$, because $`b_1<0`$. This indicates that matter corrections alone cannot create an Anti-de Sitter Universe. This is in contrast to the possibility of creation of a de Sitter Universe by solely matter quantum effects . Indeed, when the scale factor depends only on time, the sign of the curvature (which is then positive) changes. As a result, $`a=c/\eta `$ with $`c^2=-b_1\kappa `$. In other words, there is always a solution - in the form of a quantum-created de Sitter Universe! On the contrary, the presence of a negative effective cosmological constant in the classical theory is a necessary condition for the existence of an Anti-de Sitter Universe, at least within our scenario.
In the general case, the algebraic equation for $`c^2`$ becomes (assuming $`\mathrm{\Lambda }<0`$):
$`\kappa b_1-c^2-\mathrm{\Lambda }c^4=0,`$ (11)
and it has the solutions:
$`c_{1}^{2}=-{\displaystyle \frac{1}{2\mathrm{\Lambda }}}\left(1+\sqrt{1+4\kappa b_1\mathrm{\Lambda }}\right)`$ (12)
and
$`c_{2}^{2}=-{\displaystyle \frac{1}{2\mathrm{\Lambda }}}\left(1-\sqrt{1+4\kappa b_1\mathrm{\Lambda }}\right).`$ (13)
The first solution corresponds to the quantum corrected Anti-de Sitter Universe. Here, starting from some bare (even very small!) negative cosmological constant, we get an Anti-de Sitter Universe with a smaller cosmological constant due to quantum corrections. The quantum corrections act against the existing Anti-de Sitter Universe and make it less stable. This is the mechanism of annihilation of Anti-de Sitter Universe.
The second solution corresponds to the imaginary scale factor since it has $`c^2<0`$.
The complete effective action on the solution $`a(y)=c/y`$ is
$`S_{\mathrm{cl}}+W`$ $`=`$ $`V_3{\displaystyle \int \frac{\text{d}y}{y^4}\left[6b_1\mathrm{ln}\left(\frac{c^2}{y^2}\right)-8(b+b_1)-\frac{6(\mathrm{\Lambda }c^4-c^2)}{\kappa }\right]}`$ (14)
$`=`$ $`V_3{\displaystyle \int \frac{\text{d}y}{y^4}\left[6b_1\mathrm{ln}\left(\frac{c^2}{y^2}\right)-8b-14b_1+\frac{12c^2}{\kappa }\right]}.`$
The dependence of the effective action on $`c^2`$ is evident.
Thus, we have shown that quantum corrections in an already existing Anti-de Sitter Universe make it less stable, whereas creation of an Anti-de Sitter Universe by quantum corrections alone is impossible. This is in contrast to the de Sitter Universe, which may be created by quantum corrections alone. It is now interesting to understand the role of other effects in our scenario. As such we consider the presence of dilaton. The related dilatonic terms in action may play the role of effective cosmological constant.
3. As an explicit example of a theory where the dilaton appears in the conformal anomaly, we consider $`𝒩=4`$ $`\text{SU}(N)`$ super YM theory covariantly coupled with the background $`𝒩=4`$ conformal supergravity (see for an introduction). The corresponding vector multiplet is $`(A_\mu ,\psi _i,X_{ij})`$. There is also a complex scalar (axion and dilaton) in the conformal supergravity multiplet. Note that such a theory is not a realistic one.
On the purely bosonic background the conformal anomaly in such a theory is the following
$`T=b\left(F+{\displaystyle \frac{2}{3}}\Box R\right)+b_1G+b_2\Box R+C\left[\Box \varphi ^{*}\Box \varphi -2\left(R^{\mu \nu }-{\displaystyle \frac{1}{3}}g^{\mu \nu }R\right)\partial _\mu \varphi ^{*}\partial _\nu \varphi \right].`$ (15)
The last term, with the constant
$$C=\frac{N^2-1}{(4\pi )^2},$$
is the contribution of the dilaton and axion fields. Note also that in the adjoint representation $`N_v=N^2-1`$, $`N_s=6N_v`$ and $`N_f=2N_v`$ in the theory under discussion.
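For orientation, substituting this field content into Eq. (5) gives particularly simple coefficients; a minimal numerical evaluation (the choice $`N=3`$ below is arbitrary):

```python
import math

# Eq. (5) evaluated for the field content quoted above: N_v = N^2 - 1, N_s = 6 N_v, N_f = 2 N_v.
def coefficients(N):
    Nv = N**2 - 1
    Ns, Nf = 6 * Nv, 2 * Nv
    b  =  (Ns + 6 * Nf + 12 * Nv) / (120 * (4 * math.pi)**2)
    b1 = -(Ns + 11 * Nf + 62 * Nv) / (360 * (4 * math.pi)**2)
    C  = Nv / (4 * math.pi)**2
    return b, b1, C

b, b1, C = coefficients(N=3)
print(f"b = {b:.5f},  b1 = {b1:.5f},  C = {C:.5f},  b + b1 = {b + b1:.1e}")
# b = -b1 = N_v/(4*(4*pi)^2), so the printed b + b1 is zero up to rounding.
```

In particular, $`b=-b_1`$ for this multiplet, so the combination $`b+b_1`$ multiplying the second structure in Eqs. (16) and (19) below vanishes.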
Using the conformal anomaly (15), and following ref. , it is again not difficult to construct the anomaly-induced effective action $`W`$ on the conformally flat background, $`g_{\mu \nu }=\text{e}^{2\sigma }\eta _{\mu \nu }`$.
With $`\sigma `$ and $`\varphi `$ depending only on time, one gets, in terms of the conformal time $`\eta `$ ,
$`W=V_3{\displaystyle \int \text{d}\eta \left[2b_1\sigma \sigma ^{\prime \prime \prime \prime }-2(b+b_1)(\sigma ^{\prime \prime }+(\sigma ^{\prime })^2)^2+C\sigma \text{Re}(\varphi ^{*}\varphi ^{\prime \prime \prime \prime })\right]},`$ (16)
where $`V_3`$ is three-dimensional volume, and $`\sigma ^{}=\text{d}\sigma /\text{d}\eta `$. As usual, the complete effective action is the sum of two terms: $`W`$ and some conformally invariant functional $`W_1`$. Since we discuss a conformally flat background, $`W_1`$ can only depend on the complex scalar $`\varphi `$, and has to be constant when $`\varphi `$ is constant. Supposing that $`W_1`$ is a local functional (so that the Schwinger–De Witt expansion may be used), we are left with only one possibility,
$`W_1=V_3{\displaystyle \int \text{d}\eta \text{Re}(\varphi ^{*}\varphi ^{\prime \prime \prime \prime })\mathrm{ln}\mu ^2}.`$ (17)
The coefficient of $`W_1`$ depends on the regularization used, since it may be absorbed into the scale $`\mu `$, by a redefinition. Thus our quantum correction to the classical action is $`\mathrm{\Gamma }=W+W_1`$.
The usual choice for the complex scalar $`\varphi `$ is
$`\varphi =\chi +\text{i}\text{e}^{-\phi },`$ (18)
where $`\phi `$ is the dilaton, and $`\chi `$ is the R-R scalar (axion) of type IIB supergravity (or conformal supergravity, as above).
Thus, the final expression for the one-loop effective action of $`𝒩=4`$ super Yang–Mills theory coupled to conformal supergravity on a conformally flat background is
$`\mathrm{\Gamma }=V_3{\displaystyle \int \text{d}\eta \left[2b_1\sigma \sigma ^{\prime \prime \prime \prime }-2(b+b_1)(\sigma ^{\prime \prime }+(\sigma ^{\prime })^2)^2+(C\sigma +A)\text{Re}(\varphi ^{*}\varphi ^{\prime \prime \prime \prime })\right]},`$ (19)
where $`A`$ is some constant depending on the regularization, and $`\varphi =\chi +\text{i}\text{e}^{-\phi }`$. One may consider the case of large $`N`$; then, in the large-$`N`$ expansion, the proper quantum gravity corrections to $`\mathrm{\Gamma }`$ will be of next-to-leading order.
There are different choices for the classical gravitational action. For example, one can consider the axion-dilatonic gravity by Gibbons–Green–Perry , which describes the bosonic sector of type IIB supergravity,
$`S_{\mathrm{cl}}={\displaystyle \frac{1}{\kappa }}{\displaystyle \int \text{d}^4x\sqrt{g}\left(R+\frac{1}{2}g^{\mu \nu }\partial _\mu \phi \partial _\nu \phi +\frac{1}{2}\text{e}^{2\phi }g^{\mu \nu }\partial _\mu \chi \partial _\nu \chi \right)}.`$ (20)
In the absence of the dilaton and axion, this reduces to the standard action of general relativity. Using the above choice of a conformally flat metric one may simplify Eq. (20) as follows,
$`S_{\mathrm{cl}}={\displaystyle \frac{1}{\kappa }}V_3{\displaystyle \int \text{d}\eta \left(6\text{e}^{2\sigma }((\sigma ^{\prime })^2+\sigma ^{\prime \prime })+\frac{1}{2}\text{e}^{2\sigma }(\phi ^{\prime })^2+\frac{1}{2}\text{e}^{2\sigma +2\phi }(\chi ^{\prime })^2\right)}.`$ (21)
Now it is convenient to transform to cosmological time $`t`$, such that $`\text{d}t=a(\eta )\text{d}\eta =\text{e}^{\sigma (\eta )}\text{d}\eta `$. The equations of motion may be obtained by variations with respect to $`a`$, $`\phi `$ and $`\chi `$.
We will consider the simpler choice where the axion is equal to zero and the kinetic term for the dilaton in the classical action is absent. The regularization where $`A=0`$ is also chosen. As a result one comes to the model discussed in ref. . The study of the corresponding effective equations for a time-dependent conformally flat metric has been done in the above work, and the possibility of quantum creation of a de Sitter Universe has been proved there. One can now give an analysis related to a quantum corrected Anti-de Sitter Universe in the same way as was done in the previous section. Note, however, that in the case under discussion there is no classical cosmological constant and, as a result, there is no Anti-de Sitter solution at the classical level.
The effective equations of motion take the form:
$`{\displaystyle \frac{a^{\prime \prime \prime \prime }}{a}}{\displaystyle \frac{4a^{}a^{\prime \prime \prime }}{a^2}}{\displaystyle \frac{3a^{\prime \prime 2}}{a^2}}+{\displaystyle \frac{6a^{\prime \prime }a^2}{a^3}}\left(1{\displaystyle \frac{b_1}{b}}\right)+{\displaystyle \frac{6b_1a^4}{ba^4}}+{\displaystyle \frac{3aa^{\prime \prime }}{\kappa b}}{\displaystyle \frac{C}{4b}}\phi \phi ^{\prime \prime \prime \prime }`$ $`=`$ $`0,`$
$`\mathrm{ln}a\phi ^{\prime \prime \prime \prime }+(\mathrm{ln}a\phi )^{\prime \prime \prime \prime }`$ $`=`$ $`0.`$ (22)
Here prime means derivative with respect to $`y`$, and we have put $`\phi =\varphi `$ (i.e., with zero axion the complex scalar reduces to the real dilaton).
Motivated by ref. one can make now the following transformation:
$`\text{d}z=a(y)\text{d}y.`$ (23)
Then, in terms of the variable $`z`$ the first of equations (22) is:
$`a^2\stackrel{\mathrm{}.}{𝑎}+3a\dot{a}\stackrel{\mathrm{}}{𝑎}+a\ddot{a}^2\left(5+{\displaystyle \frac{6b_1}{b}}\right)\dot{a}^2\ddot{a}+{\displaystyle \frac{3}{\kappa b}}\left(a^2\ddot{a}+a\dot{a}^2\right){\displaystyle \frac{C\phi Y[\phi ,a]}{4b}}=0.`$ (24)
Here $`\dot{a}=\text{d}a/\text{d}z`$, $`Y[\phi ,a]`$ is given in ref. , and the second of equations (22) (in terms of $`z`$) is also given in ref. (Eq. (10)). The only difference from the analysis done there is that the sign of the $`1/\kappa `$ term is reversed. Then, as in ref. we search for special solutions
$`a(z)\simeq a_0\text{e}^{Hz},\phi (z)\simeq \phi _0\text{e}^{\alpha Hz},`$ (25)
Analyzing the second of equations (22) (in terms of $`z`$) and dropping the logarithmic term in it (the same arguments as in ref. may be given), one comes to the same solution:
$`\phi (z)=\phi _1\text{e}^{\frac{3}{2}Hz}+\phi _2\text{e}^{2.62Hz}+\phi _3\text{e}^{0.38Hz},`$ (26)
where $`\phi _1,\phi _2,\phi _3`$ are constants. Substituting the particular solution $`\phi (z)=\phi _0\text{e}^{\alpha Hz}`$ in Eq. (24) one obtains:
$`H^2\simeq {\displaystyle \frac{1}{\kappa }}\left[b_1+{\displaystyle \frac{C}{24}}\phi _{0}^{2}\left(\alpha ^4-6\alpha ^3+11\alpha ^2-6\alpha \right)\right]^{-1}.`$ (27)
The first term in the denominator is always negative, while the second term may be positive only at $`\alpha =3/2`$. Then, it is only with the special dilaton solution $`\phi (z)=\phi _1\text{e}^{\frac{3}{2}Hz}`$ (i.e. $`\phi _2=\phi _3=0`$), and under the condition $`\phi _{1}^{2}>12`$, that one gets a positive $`H^2`$ and, hence, a real (non-imaginary) scale factor for the AdS Universe. Thus, there occurs the possibility of quantum creation of a dilatonic AdS Universe, but it requires a strong fine-tuning of the dilatonic solution. This is, of course, a very unrealistic situation. Note that the corresponding AdS scale factor is:
$`a(y)={\displaystyle \frac{1}{Hy}}.`$ (28)
One can analyze the Eqs. (Quantum Annihilation of Anti-de Sitter Universe) numerically with the same qualitative result; for some very special initial conditions for dilaton and scale factor the quantum creation of AdS Universe is possible. However, in most cases such a process does not occur.
Thus, we demonstrated that quantum effects of conformal matter do not support the creation of an AdS Universe. Moreover, an already existing classical AdS Universe is annihilating due to quantum effects. Only for dilaton-gravitational background (maximally SUSY YM theory) where there is dilatonic contribution to conformal anomaly and to induced effective action the quantum effects may create AdS Universe subject to strong fine-tuning. However, the probability of such creation is much less than the same process for de Sitter Universe at similar conditions.
The model under discussion maybe understood also as a simplified model for creation-annihilation of AdS Black Hole. Taking additional contribution to effective action due to geometrical structure of AdS BH one can repeat this analysis for a more realistic situation. Unfortunately, this additional piece of effective action is not known in closed form. From another side, the inclusion of non-trivial axion maybe also done. However, in such case only numerical analysis of equations of motion is possible.
Acknowledgments. The work by SDO has been supported in part by the Norwegian Research Council and RFBR grant N99-02-16617. We are extremely grateful to Jan Myrheim for participation at the early stage of this work and helping in preparation of draft of this ms. We also thank Kåre Olaussen and Shinichi Nojiri for useful discussions. |
no-problem/9912/astro-ph9912284.html | ar5iv | text | # Neutron Star/Supernova Remnant Associations
## 1. Radio Pulsar/Supernova Remnant Associations
There are complicated selection effects against finding radio pulsars and SNRs; both populations are significantly incomplete, and not in an easily quantifiable way. However, since the working hypothesis is that all pulsars are born in supernovae, it is fair to consider just the youngest pulsars and ask whether they are associated with SNRs.
Table 1 contains all published radio pulsars having characteristic ages ($`\tau _cP/2\dot{P}`$) under 25 kyr. This age cutoff is arbitrary, but the inclusion of only the youngest pulsars is deliberate: associations involving older pulsars are harder to evaluate. First, evidence suggests that SNRs can fade on time scales of $``$20-25 kyr, much shorter than pulsar lifetimes. Second, a young pulsar moves a distance $`d12(v/450\mathrm{km}/\mathrm{s})(\tau /25\mathrm{kyr})`$ pc from its birth place (where $`\tau `$ is its true age). This distance can be far enough to reach or escape the parent remnant shell, depending on the birth velocity distribution, which has been estimated independently (e.g. \[Lyne & Lorimer 1994\]). The current version of Green’s SNR catalog<sup>1</sup><sup>1</sup>1http://www.mrao.cam.ac.uk/surveys/snrs/ contains 220 SNRs, 156 of which lie in the range $`260^{}<l<50^{}`$ and $`|b|<3.5^{}`$. The area these remnants cover is $``$32 square degrees, or some 3% of that region of the Galactic Plane. In the same area, the published pulsar catalog<sup>2</sup><sup>2</sup>2http://pulsar.princeton.edu/pulsar/catalog.shtml reports 280 radio pulsars. Assuming these are distributed randomly in that area, one expects $``$8.5 chance coincidences. By contrast, if one restricts oneself to pulsars having $`\tau _c<25`$ kyr, one expects only $``$0.4 chance coincidences. Thus, very few or none of the entries in Table 1 should be mere chance superpositions.
In Table 1, the columns are $`\tau _c`$, period $`P`$, distance $`d`$, inferred surface magnetic field $`B`$, the discovery band (items with asterisks were discovered in the past 15 yr, double asterisks in the past 5 yr), and associated SNR. Some caveats are necessary: $`\tau _c`$ is only an estimate of the true age $`\tau `$; $`\tau _c`$ is calculated assuming a birth spin period $`P_0/P<<1`$ and braking index $`n=3`$ and is an overestimate of $`\tau `$ if $`P_0P`$; the reverse is true for $`n<3`$. Similarly, $`B`$ is estimated assuming a dipole braking model, incorrect for objects with $`n<3`$, and is dependent on the stellar radius and mass.
Note that of the 4 associations discovered in the past 5 yr, 3 were found at X-ray energies. This is in stark contrast to the situation in the previous 25 yr, in which only 2 of the 13 discovered young pulsars were found at X-ray energies, the rest being found at radio wavelengths. This is due to major advances in X-ray telescopes. Also, of the 8 pulsars found at radio wavelengths in the past 15 yr, only one was found by looking for pulsations in a SNR. This is not for lack of trying. Indeed the three major radio pulsar search efforts targeting SNRs (\[Kaspi et al. 1996, Gorham et al. 1996\], Lorimer et al. 1998) collectively searched 91 targets and found precisely zero young pulsars. By contrast, untargeted surveys of the Galactic plane for radio pulsars (\[Johnston et al. 1992, Clifton & Lyne 1986, Lyne et al. 2000\]) collectively discovered 6 radio pulsars having $`\tau _c<25`$ kyr. This suggests that the best way to find young radio pulsars is to look everywhere in the Galactic Plane but in SNRs! One explanation for this quandary is that bright SNRs reduce radio pulsar survey sensitivity; for example, the mean radio flux of the SNRs in Green’s catalog (omitting the bright Cas A) roughly doubles the system temperature of the Parkes Multibeam survey. The quandary also underscores how incomplete the SNR catalog is. Indeed, PSRs B1610$``$50 and J1617$``$5055 ($`\tau _c=7`$ and 8 kyr, respectively) appear very young but do not have any observable associated SNR (Pivovaroff et al. 2000, \[Kaspi et al. 1998\]). The absence of visible SNRs around these pulsars suggests that remnant fading time can be much shorter than has been suggested (e.g. Braun et al. 1989).
## 2. Evidence for New Classes of Neutron Stars
There is now significant evidence that young neutron stars do not all manifest themselves as radio pulsars. There has been only a modest hint that this is true from population statistics. The supernova rate in the Galaxy has been estimated to be $`0.025_{0.005}^{+0.008}`$ yr<sup>-1</sup> (Tammann et al. 1994). Of these, some 85-90% are likely to be of Type Ib and II (i.e. producing compact stellar remnants). The black hole formation rate is thought to be small, at most a few percent (\[Fryer 1999\]). The pulsar birth rate for radio luminosities greater than 1 mJy kpc<sup>2</sup> is $`0.010\pm 0.007`$ yr<sup>-1</sup> (\[Lyne et al. 1998\]). Given that there must be some low luminosity pulsars, the agreement in the pulsar birth rates and the neutron-star-producing supernova rate is reasonable, though a pulsar dearth is possible. There has been, however, a long-recognized puzzle that most SNRs do not contain visible pulsars. There are significant selection effects against finding radio pulsars in SNRs, but this is less true of finding Crab-like plerions at the centers of shell SNRs, as those radiate isotropically. Yet roughly 85% of Green’s catalogued SNRs have pure shell morphologies. In fact, independent evidence is mounting that a significant fraction of young neutron stars have properties very different from Crab-like radio pulsars; just how large that fraction has yet to be determined, as is the cause of the diversity. To summarize, there are three classes of unusual high-energy sources have been compellingly argued to be young, isolated neutron stars:
Anomalous X-Ray Pulsars (AXPs): The properties of AXPs can be summarized as follows (see \[Mereghetti & Stella 1995, Gotthelf & Vasisht 1998\]): they exhibit X-ray pulsations in the range $``$5–12 s; they have pulsed X-ray luminosities in the range $`10^{34}`$-$`10^{35}`$ erg/s; they spin down regularly within the limited timing observations available (e.g. Kaspi et al. 1999); their X-ray luminosities are much greater than their $`\dot{E}`$’s; their X-ray spectra are characterized by thermal emission with $`kT0.4`$ keV, with evidence for a hard component; and they are in the Galactic Plane. Currently there are 5 confirmed AXPs and one strong AXP candidate (see Table 2). Of these 6 sources, 3 lie at the apparent centers of SNRs: 1E 2259+586 in CTB 109 (\[Fahlman & Gregory 1981\]), 1E 1841$``$045 in Kes 73 (\[Vasisht & Gotthelf 1997\]), and AX J1845$``$0258 in G29.6+0.1 (\[Gotthelf & Vasisht 1998, Torii et al. 1998\], Gaensler et al. 1999). The association of these objects with SNRs is arguably the most compelling reason to believe they are isolated neutron stars. The leading models explaining their large X-ray luminosity invoke the large stellar magnetic field as inferred from the spin down (hence the name “magnetars”), either using field decay (\[Thompson & Duncan 1996\]) enhanced thermal emission (\[Heyl & Hernquist 1997\]).
Soft Gamma Repeaters (SGRs): SGRs, of which 4 are known (see Table 2), are sources that occasionally and suddenly emit bursts of soft $`\gamma `$-rays having super-Eddington luminosities. That 3 of them lie in the Galactic Plane, and the 4th is in the LMC, argues that they are a young population. The detection of AXP-like X-ray pulsations from 2 of these sources (e.g. \[Kouveliotou et al. 1998\]), with evidence for pulses from the other two, also argues strongly that they are isolated neutron stars. Their burst properties and observed spin-down are well explained in the magnetar model (\[Thompson & Duncan 1995\], but see Marsden et al. 1999). The association between SGR 0526$``$66 and the SNR N49 in the LMC (\[Cline et al. 1982\]) first suggested the SGRs might be young neutron stars, however since then the SGR/SNR association picture has grown a bit murky. First, SGR 0526$``$66 is located near the edge of the N49 shell; this is problematic as it requires a very high transverse velocity ($`v_t>1000`$ km/s) for the SGR (Rothchild et al. 1994). SGR 1806$``$20 has been suggested to be associated with the plerionic radio nebula G10.0$``$0.3 (\[Kulkarni & Frail 1993\]), although a recent relocalization of the $`\gamma `$-ray source calls the association into question (\[Hurley et al. 1999\]). SGR 1900+14 has been associated with SNR G42.8+0.6 (\[Vasisht et al. 1994\]), however the $`\gamma `$-ray source lies well outside the shell, demanding a distressing $`v_t>3000`$ km/s. Smith et al. (1999) suggest that the newly discovered SGR 1627$``$41 may be associated with the shell SNR G337.0$``$0.1; the large SGR positional uncertainty precludes a firm conclusion.
“Quiescent” Neutron Stars: There are currently 4 cases of X-ray point sources in SNRs that may be “quiescent” neutron stars having low $`\dot{E}`$ – they exhibit neither magnetopsheric emission nor obvious Crab-like plerions. These are: 1E 1207.4$``$5209 in PKS 1209$``$52 (G296.5+10.0, \[Helfand & Becker 1984, Vasisht et al. 1997\]), 1E 161348$``$5055 in RCW 103 (\[Tuohy & Garmire 1980\]), 1E 0820$``$4250 in Puppis A (Petre et al. 1996) although claimed 75 ms X-ray pulsations, if confirmed, imply that it is an ordinary rotation-powered pulsar (Pavlov et al. 1999), and the newly discovered point source in Cas A (\[Tananbaum 1999\]). That these sources are only seen in X-rays suggests they could be thermally cooling neutron stars, still hot following their formation. Problematic in the neutron star interpretation for 1E 161348$``$5055 is that its X-ray luminosity is apparently variable (Gotthelf et al. 1999). In Cas A, a preliminary look shows that the point source spectrum may be harder than expected for a cooling neutron star (M. Pivovaroff, pers. comm.). Deep searches for pulsations (enough to see few percent modulation, as in the known thermally cooling neutron stars like Vela) and/or high-resolution X-ray spectroscopy to detect predicted absorption lines in the stellar atmospheres are the most promising ways of determining the nature of these objects.
In conclusion, although radio pulsars first confirmed the neutron star/supernova remnant connection, it now appears clear that they represent only a part of young neutron star phase space. The origin and full extent of the diversity are not yet clear, however the problem appears tractable, particularly given the Parkes Multibeam survey and new and upcoming X-ray missions, including Chandra, XMM, and ASTRO-E. |
no-problem/9912/cond-mat9912387.html | ar5iv | text | # Anomalous Spreading of Power-Law Quantum Wave Packets
\[
## Abstract
We introduce power-law tail quantum wave packets. We show that they can be seen as eigenfunctions of a Hamiltonian with a physical potential. We prove that the free evolution of these packets presents an asymptotic decay of the maximum of the wave packets which is anomalous for an interval of the characterizing power-law exponent. We also prove that the number of finite moments of the wave packets is a conserved quantity during the evolution of the wave packet in the free space.
03.65.-w, 05.40.Fb
\]
Power-law probability density functions are receiving a lot of attention in different research fields . Stochastic processes with power-law distributions may or may not be characterized by a typical scale in time and/or in the size of the random variable. One implication of the absence of a typical scale is the divergence of the variance of the distribution. Examples of phenomena without a typical scale are observed in physical systems at the critical state , in self-organized and in complex systems . Considering few-body quantum systems - (i) scale free power-law processes have been observed in a quantum system by investigating and modeling experiments of velocity selective coherent population trapping and (ii) the power-law temporal growth of the moments of a wave packet has been investigated in quantum systems with fractal energy spectra and eigenfunctions such as the Harper model . Spatial power-law wave functions have not been considered within the framework of non-relativistic quantum mechanics. The probabilistic interpretation of the wave function and the recent results on power-law distributions observed in physical systems motivates us to investigate the properties of power-law wave functions in quantum mechanics. In this letter we consider quantum wave packets with power-law tails. Specifically we focus on – (i) their physical properties. Namely the uncertainty product, the associated energy and the momentum distribution and (ii) the spreading of such quantum wave packets during the free evolution.
We define as Power-Law Tail Wave Packet (PLTWP) a wave function $`\psi (x)`$ describing a non-relativistic spinless particle in one dimension, which decreases with $`x`$ as
$$\psi (x)x^\alpha .$$
(1)
This class of wave packets is square-integrable only if $`\alpha >1/2`$. For the sake of simplicity, in this study we assume that wave function is real, positive and even. The study of quantum wave packets with wave function real or complex, uneven and with zeros is presented elsewhere . One of the properties of power-law distributions is that only a finite number of moments of the variable are finite. In quantum theory this implies that the moments of position operator $`\widehat{x}^m`$ with $`m2\alpha 1`$ are infinite. The lack of finite moments of $`x`$ has a counterpart in the properties in $`k=0`$ of Fourier transform $`g(k)=FT[\psi (x)]`$ which gives the amplitude probability distribution of momentum. We note that when $`n<\alpha n+1`$, only the first $`n1`$ derivatives of $`g(k)`$ exist in $`k=0`$. In the special case $`1/2<\alpha 1`$ the Fourier transform $`g(k)`$ is infinite in $`k=0`$. The behavior of $`g(k)`$ when $`k0`$ is described by two different series expansions depending on the value of $`\alpha `$. Specifically, when $`\alpha `$ is not an odd integer number
$$g(k)a_0+a_2k^2+\mathrm{}+a_{2n}k^{2n}+bk^{\alpha 1}+o(k^\alpha ),$$
(2)
where $`2n`$ is the largest even integer number smaller than $`\alpha 1`$. When $`\alpha =2n+1`$ the series expansion is
$$g(k)a_0+a_2k^2+\mathrm{}+a_{2n}k^{2n}+bk^{2n}\mathrm{log}k+\mathrm{}.$$
(3)
The case $`\alpha =1`$ has a logarithmic divergence in $`k=0`$.
In spite of this behavior we wish to point out that all moments of the momentum operator (including the kinetic energy) of the particle $`\widehat{p}^m`$ ($`m1`$) are finite. In fact, a property of Fourier transform is
$$FT[\psi ^{(m)}(x)]=(ik)^mg(k),$$
(4)
where $`\psi ^{(m)}(x)`$ indicates the $`m`$-th derivative of $`\psi (x)`$. This property is valid under the hypothesis that $`lim_{|x|\mathrm{}}\psi ^{(r)}(x)=0`$ for $`r=0,1,\mathrm{},m1`$. Since the tails behave as $`x^{\alpha m}`$, $`\psi ^{(m)}(x)`$ is absolutely integrable. By using Riemann-Lebesgue lemma , one can conclude that $`lim_k\mathrm{}k^mg(k)=0`$ for all $`m`$.
A consequence of the finiteness of momentum moments is that the uncertainty in momentum $`\mathrm{\Delta }k`$ is always finite for PLTWP. As mentioned above, when $`\alpha \frac{3}{2}`$ the second moment of the position operator and thus the root mean square deviation $`\mathrm{\Delta }x`$ of position are infinite. Therefore this kind of wave packets have an uncertainty product $`\mathrm{\Delta }x\mathrm{\Delta }k`$ which is infinite when $`\alpha \frac{3}{2}`$. This physical property reflects the fact that for PLTWP with $`\alpha \frac{3}{2}`$ a typical scale exists in momentum space whereas the wave packet is scale free in space.
Can a PLTWP be an eigenfunction of an Hamiltonian? To answer this question we note that each PLTWP can be related to a specific physical potential. The PLTWP is then an eigenfunction of the corresponding Hamiltonian. The equation providing the potential associated with a particle described by a wave function $`\psi (x)`$ is
$$U(x)=E+\frac{\mathrm{}^2}{2M}\frac{1}{\psi (x)}\frac{d^2\psi (x)}{dx^2},$$
(5)
where $`E`$ is the eigenvalue of energy and $`M`$ is the mass of the particle. The shape of the potential depends on the local properties of the wave function. ¿From Eqs. (1) and (5) it is immediate to conclude that the potential associated with PLTWP behaves asymptotically as $`x^2`$ . As an illustrative example we present PLTWPs defined as
$$\psi (x)=\frac{N}{(x^2+\gamma ^2)^{\alpha /2}},$$
(6)
where $`N`$ is a suitable normalization constant and $`\gamma `$ is a scale parameter. The above family of quantum wave packets is related to the Student’s t-distribution when $`\alpha `$ is integer. The associated potential is
$$U(x)=\frac{\mathrm{}^2}{2M}\alpha \frac{x^2(1+\alpha )\gamma ^2}{(x^2+\gamma ^2)^2},$$
(7)
and is shown in Fig. 1. In this figure the eigenvalue $`E`$ is set equal to zero. The potential is a single well potential with two symmetrical confining barriers. The potential reaches a maximal value and then decreases asymptotically as $`U(x)x^2`$. A general property of this potential is that by increasing the value of $`\alpha `$ the depth of the potential well increases. From Fig. 1 it is clear that the associated potential does not present anomalies of any sort.
Hereafter we consider the properties of a PLTWP in the simplest case of dynamical evolution, namely the free wave evolution. We assume that a $`t=0`$ the wave packet in the free space has the asymptotic properties of Eq. (1). We focus our attention on the spreading of the wave packet. During the dynamical evolution in free space the wave function at time $`t`$ is given by
$$\psi (x,t)=\frac{1}{\sqrt{2\pi }}_{\mathrm{}}^{\mathrm{}}g(k)e^{i(kx\mathrm{}tk^2/2M)}𝑑k.$$
(8)
We briefly recall that in the case of a Gaussian wave packet the amount of spreading of the wave packet can be quantified either by considering the time dependence of the position variance or by determining the time dependence of the maximum of the wave function. To quantify the amount of spreading of a free wave packet in the same way for PLTWP with finite or infinite variance we choose to focus our attention to the asymptotic behavior in time of the wave function in a specific position (for example $`x=0`$). In the Gaussian case, the variance of $`\psi (x,t)^2`$ is asymptotically proportional to $`t^2`$ and the maximum of the wave function $`\psi (0,t)^2`$ decreases as $`t^1`$ asymptotically. We will show in the following that this asymptotic behavior observed for Gaussian quantum packets is not universally observed in the free evolution of a quantum wave packet.
We consider here the free evolution of the maximum of the packet, which is described by Eq. (8) with $`x=0`$. It is possible to give an asymptotic expansion in time $`t`$ of integral of Eq. (8) using phase stationary method (see for example ). This method shows that the asymptotic expansion of $`\psi (0,t)`$ is determined by the dominant term of series expansion about the origin ($`k=0`$) of $`g(k)`$ and by the function $`\mathrm{}tk^2/2M`$. The first term of the series expansion about $`k=0`$ of $`g(k)`$ is $`g(k)Q|k|^{\lambda 1}`$ for any value of $`\alpha 1`$. By using phase stationary method , we know that when $`0<\lambda <2`$ it is possible to give an asymptotic expansion of integral of Eq. (8) with $`x=0`$ in the form
$$\psi (0,t)\frac{1}{\sqrt{2\pi }}e^{i\pi \lambda /4}Q\mathrm{\Gamma }(\frac{\lambda }{2})\frac{1}{\beta ^{\lambda /2}},$$
(9)
where $`\beta =\mathrm{}t/2M`$. If $`g(k)`$ has a finite non-vanishing limit as $`k`$ goes to zero then $`\lambda =1`$ and we conclude that the maximum of the packet decreases asymptotically in time as $`1/\sqrt{t}`$. This is the case, for example, of Gaussian wave packets cited above and, in general, of all commonest wave packets with finite FT in $`k=0`$. From Eqs. (2,3) and (9) we conclude that this is also the case of PLTWP with $`\alpha >1`$. These packets show a customary asymptotic dynamics of the maximum. PLTWPs with $`1/2<\alpha <1`$ show a different behavior. In fact, in these cases $`g(k)`$ diverges in $`k=0`$ as $`k^{\alpha 1}`$ and, as a consequence $`\lambda =\alpha <1`$. Hence, form Eq. (9) we conclude that the maximum decreases asymptotically as $`t^{\alpha /2}`$. This is a maximum decrease, which is anomalous and slower with respect to the decrease observed in customary wave packets. The $`t^{1/2}`$ behavior observed in Gaussian wave packets is interpreted in terms of the time evolution of a group of classical particles with a momentum dispersion $`\mathrm{\Delta }p`$. A similar simple picture cannot explain the behavior observed for PLTWPs with $`\alpha <1`$ .
To illustrate the process of convergence of different wave packets to the expected asymptotic behavior we calculate the time evolution of the amplitude of wave packet at $`x=0`$ for three different cases. Specifically we consider a Gaussian wave packet and two PLTWPs of the class defined by Eq. (6) with $`\alpha =3`$ and $`\alpha =0.75`$. In Fig. 2 we show the numerical estimate of $`|\psi (0,t)|`$. In the figure is clear that the Gaussian and the PLTWP with $`\alpha =3`$ soon converge to the usual $`1/\sqrt{t}`$ asymptotic behavior whereas the PLTWP with $`\alpha =0.75`$ slowly converges to the anomalous asymptotic behavior of $`1/t^{\alpha /2}=1/t^{0.375}`$.
The case $`\alpha =1`$ cannot be handled with phase stationary method because of logarithmic divergence of $`g(k)`$ in $`k=0`$. Although we are not able to provide a general answer for the case $`\alpha =1`$, we are able to determine the asymptotic behavior in the specific case of a packet described by Eq. (6) with $`\alpha =1`$. For a such wave packet, the maximum decreases as $`1/\sqrt{t}`$.
In order to obtain a more complete description of the free wave spreading, we consider the time evolution of the tails of the packet. We prove that the free wave packet evolution of a PLTWP conserves the tails. More precisely, if at $`t=0`$ the dominant term of asymptotic expansion of $`\psi (x)`$ is $`cx^\alpha `$, at each subsequent time the asymptotic expansion of $`\psi (x,t)`$ will be dominated by $`cx^\alpha `$. In this sense, the free evolution cannot change the asymptotic properties of the packet. We can visualize this property dividing the whole set of PLTWPs in classes such as each class is characterized by a value of $`\alpha `$. In other words, any packet belongs to the same class at any time during the free wave evolution. Since $`\alpha `$ determines the number of finite moments of operator $`\widehat{x}`$, our result implies that position moments cannot became finite if they were infinite at $`t=0`$ and vice versa. We prove our statement as follows. Let us first consider Eq. (8) and look for the asymptotic expansion in $`x`$ of the FT of $`f(k)=g(k)e^{i\beta k^2}`$. The asymptotic expansion of the FT of $`f(k)`$ is determined by its singularities . By following Ref. , we say that $`f(k)`$ is singular in $`k_0`$ if one cannot differentiate $`f(k)`$ in $`k_0`$ any number of times. From properties of $`g(k)`$ of PLTWPs we conclude that $`k=0`$ is the only singularity for $`f(k)`$. In order to find the asymptotic expansion of Eq. (8), we construct a function $`F(k)`$ in such a way that $`\stackrel{~}{f}(k)f(k)F(k)`$ has absolutely integrable $`m`$-th derivative in an interval including $`k=0`$. $`F(k)`$ must be a linear combination of powers of $`k`$ and product of powers of $`k`$ and $`\mathrm{log}k`$ . Moreover the $`m`$-th derivative of $`f(k)`$ must be absolutely integrable in an interval from a real value to infinity. If all these hypothesis are verified, the FT of $`f(k)`$ is equal to the FT of $`F(k)`$ plus $`o(x^m)`$. Depending on the specific value of $`\alpha `$ the regularizing function assumes a different form. In spite of this, our results do not depend on the specific value of $`\alpha `$. For the sake of simplicity, we present here the case with $`2n+1<\alpha <2n+2`$. The demonstration of cases for other value of $`\alpha `$ (specifically $`2n<\alpha <2n+1`$, $`\alpha =2n`$ and $`\alpha =2n+1`$) are similar. In order to regularize the $`2n+1`$-th derivative of $`f(k)`$ the natural choice is
$$\stackrel{~}{f}(k)=g(k)e^{i\beta k^2}bk^{\alpha 1}.$$
(10)
In fact the first $`2n+3`$ derivatives of $`\stackrel{~}{f}(k)`$ are absolutely integrable in an interval including $`k=0`$. Therefore, we obtain the asymptotic expansion of $`\psi (x,t)`$ as
$`\psi (x,t)=FT(F(k))+o(x^{2n3})=`$ (11)
$`cx^\alpha +o(x^{2n3})`$ (12)
which demonstrates our assertion.
As an illustrative example let us consider the free evolution of the PLTWP of Eq. (6) with $`\alpha =2`$ and $`\gamma =1`$. The corresponding wave function has the form of the Cauchy distribution and it is possible to find the $`\psi (x,t)`$ analytically. In Fig. 3 we show the square modulus of the positive tail of the packet versus $`x`$ at different times in a log-log plot. The figure shows that the dominant term of the asymptotic expansion, i.e. $`x^4`$, is the same at any time. The figure also shows that new terms of the asymptotic expansion become relevant at longer times and the region of asymptotic convergence moves towards larger values of $`x`$.
A connection between stochastic processes and quantum processes has been considered within the framework of stochastic mechanics . Here we note that the free evolution of a quantum wave packet is closely related to a superdiffusive (ballistic) stochastic process. When a superdiffusive behavior is observed in classical and quantum processes, the central limit theorem does not apply and a general theoretical description is lacking. In our study we obtain two general conclusions in the problem of the free evolution of a PLTWP. The first concerns the temporal evolution of the maximum of the wave packet (which corresponds to the probability of return to the origin in stochastic processes). In the well-known case of a Gaussian wave packet (which has a fractional Brownian motion with exponent $`h=1`$ as the approximately corresponding stochastic process ) the variance is quadratic in time and the probability of return to the origin is inversely proportional to the time. Similarly, we find that the same behavior is asymptotically observed for the evolution of an even and real PLTWP when the exponent $`\alpha `$ is greater than one. Conversely, for values of $`\alpha `$ within the interval $`1/2<\alpha <1`$ we observe an anomalous behavior of the time evolution of the maximum of the wave packet. We are not able to interpret this unexpected behavior on a semiclassical basis. The second conclusion concerns the conservation during the quantum time evolution of the number of finite/infinite moments of the $`t=0`$ distribution. This behavior is peculiar to this quantum dynamics and could not be observed, for example, in stochastic processes obeying the central limit theorem.
We thank INFM and MURST for financial support. We wish to thank Giovanni Bonanno for help in numerical calculations. |
no-problem/9912/astro-ph9912552.html | ar5iv | text | # IMF AND EVOLUTION OF CLOSE BINARIES AFTER STARFORMATION BURSTS
ABSTRACT. This paper is a continuation and development of our previous articles (Popov et al., 1997, 1998). We use “Scenario Machine” (Lipunov et al., 1996b) – the population synthesis simulator (for single binary systems calculations the program is available in WWW: http://xray.sai.msu.ru/ (Nazin et al., 1998)) – to calculate evolution of populations of several types of X-ray sources during the first 20 Myrs after a starformation burst.
We examined the evolution of 12 types of X-ray sources in close binary systems (both with neutron stars and with black holes) for different parameters of the IMF – slopes: $`\alpha =1`$, $`\alpha =1.35`$ and $`\alpha =2.35`$ and upper mass limits, $`M_{up}`$: 120 $`M_{}`$, 60 $`M_{}`$ and 40 $`M_{}`$. Results, especially for sources with black holes, are very sensitive to variations of the IMF, and it should be taken into account when fitting parameters of starformation bursts.
Results are applied to several regions of recent starformation in different galaxies: Tol 89, NGC 5253, NGC 3125, He 2-10, NGC 3049. Using known ages and total masses of starformation bursts (Shaerer at al., 1998) we calculate expected numbers of X-ray sources in close binaries for different parameters of the IMF. Usually, X-ray transient sources consisting of a neutron star and a main sequence star are most abundant, but for very small ages of bursts (less than $`4`$ Myrs) sources with black holes can become more abundant.
Key words: Stars: binary: evolution;
1. Introduction
Theory of stellar evolution and one of the strongest tools of that theory – population synthesis – are now rapidly developing branches of astrophysics. Very often only the evolution of single stars is modeled, but it is well known that about 50% of all stars are members of binary systems, and a lot of different astrophysical objects are products of the evolution of binary stars. We argue, that often it is necessary to take into account the evolution of close binaries while using the population synthesis in order to avoid serious errors.
Initially this work was stimulated by the article Contini et al. (1995), where the authors suggested an unusual form of the initial mass function (IMF) for the explanation of the observed properties of the galaxy Mrk 712 . They suggested the “flat” IMF with the exponent $`\alpha =1`$ instead of the Salpeter’s value $`\alpha =2.35`$. Contini et al. (1995) didn’t take into account binary systems, so no words about the influence of such IMF on the populations of close binary stars could be said. Later Shaerer (1996) showed that the observations could be explained without the IMF with $`\alpha =1`$. Here we try to determine the influence of the variations of the IMF on the evolution of compact binaries and apply our results to seven regions of starformation (Shaerer et al., 1998, hereafter SCK98).
Previously (Lipunov et al., 1996a) we used the “Scenario Machine” for calculations of populations of X– ray sources after a burst of starformation at the Galactic center. Here, as before in Popov et al. (1997, 1998), we model a general situation — we make calculations for a typical starformation burst. We show results on twelve types of binary sources with significant X-ray luminosity for three values of the upper mass limit for three values of $`\alpha `$.
2. Model
Monte-Carlo method for statistical simulations of binary evolution are now widely used in astrophysics: for analysis of radio pulsar statistics, for formation of the galactic cataclysmic variables etc. (see the review in van den Heuvel 1994).
Monte-Carlo simulations of binary star evolution allows one to investigate the evolution of a large ensemble of binaries and to estimate the number of binaries at different evolutionary stages. Inevitable simplifications in the analytical description of the binary evolution that we allow in our extensive numerical calculations, make those numbers approximate to a factor of 2-3. However, the inaccuracy of direct calculations giving the numbers of different binary types in the Galaxy (see e.g. van den Heuvel 1994) seems to be comparable to what follows from the simplifications in the binary evolution treatment.
In our analysis of binary evolution, we use the “Scenario Machine”, a computer code, that incorporates current scenarios of binary evolution and takes into account the influence of magnetic field of compact objects on their observational appearance. A detailed description of the computational techniques and input assumptions is summarized elsewhere (Lipunov et al. 1996b; see also: http://xray.sai.msu.ru/~ mystery/articles/review/), and here we briefly list only principal parameters and initial distributions.
We trace the evolution of binary systems during the first 20 Myrs after their formation in a starformation burst. Obviously, only stars that are massive enough (with masses $`810\mathrm{M}_{}`$) can evolve off the main sequence during the time as short as this to yield compact remnants: neutron stars (NSs) and black holes (BHs). Therefore we consider only massive binaries, i.e. those having the mass of the primary (more massive) component in the range of $`10\mathrm{M}_{}`$$`M_{up}`$.
We assume that a NS with a mass of $`1.4\mathrm{M}_{}`$ is formed as a result of the collapse of a star, whose core mass prior to collapse was $`M_{}(2.535)\mathrm{M}_{}`$. This corresponds to an initial mass range $`(1060)\mathrm{M}_{}`$, taking into account that a massive star can lose more than $`(1020)\%`$ of its initial mass during the evolution with a strong stellar wind. The most massive stars are assumed to collapse into a BH once their mass before the collapse is $`M>M_{cr}=35\mathrm{M}_{}`$. The BH mass is calculated as $`M_{bh}=k_{bh}M_{cr}`$, where the parameter $`k_{bh}`$ is taken to be 0.7.
The mass limit for NS (the Oppenheimer-Volkoff limit) is taken to be $`M_{OV}=2.5\mathrm{M}_{}`$, which corresponds to a hard equation of state of the NS matter.
We made calculations for several values of the coefficient $`\alpha `$:
$$\frac{dN}{dM}M^\alpha $$
We calculated $`10^7`$ systems in every run of the program. Then the results were normalized to the total mass of binary stars in the starformation burst. We also used different values of the upper mass limit, $`M_{up}`$.
3. Results
On the figures we show some of the results of our calculations (full results can be found in the electronic preprint (Popov et al. 1999)). On all graphs on the X- axis we show the time after the starformation burst in Myrs, on the Y- axis — number of the sources of the selected type that exist at the particular moment.
On the figures results are shown for three values of upper mass limits: $`120M_{}`$ – solid lines, $`60M_{}`$ – dashed lines, $`40M_{}`$ – dotted lines.
The calculated numbers were normalized for $`10^6M_{}`$ in binary stars. We show on the figures and in tables only systems with the luminosity of compact object greater than $`10^{33}\mathrm{erg}/\mathrm{s}`$.
Curves were not smoothed so all fluctuations of statistical nature are presented. We calculated $`10^7`$ binary systems and then the results were normalized.
We apply our results to seven regions of recent starformation (see the tables, the full set can be found in (Popov et al., 1999)). Ages, total masses and some other characteristics were taken from SCK98 (we used total masses determined for Salpeter’s IMF even for the IMFs with different parameters, which is a simplification). We made an assumption, that binaries contain 50% of the total mass of the starburst. Numbers were rounded off to the nearest integer.
As far as for several regions ages are uncertain, we made calculations for two values of the age.
Different types of close binaries show different sensitivity to variations of the IMF. When we replace $`\alpha =2.35`$ by $`\alpha =1`$ the numbers of all sources increase. Systems with BHs are more sensitive to such variations.
When one try to vary the upper mass limit, another situation appear. In some cases (especially for $`\alpha =2.35`$) systems with NSs show little differences for different values of the upper mass limit, while systems with BHs become significantly less (or more) abundant for different upper masses. Luckily, X-ray transients, which are the most numerous systems in our calculations, show significant sensitivity to variations of the upper mass limit. But of course due to their transient nature it is difficult to use them to detect small variations in the IMF. If it is possible to distinguish systems with BH, it is much better to use them to test the IMF.
4. Discussion and conclusions
The results of our calculations can be easily used to estimate the number of X- ray sources for different parameters of the IMF if the total mass of stars and age of a starburst are known (in (Popov et al., 1997, 1998) analytical approximations for source numbers were given). And we estimate numbers of different sources for several regions of recent starformation.
Here we tried to show, that populations of close binaries are very sensitive to the variations of the IMF. One must be careful, when trying to fit the observed data for single stars with variations of the IMF. And, vice versa, using detailed observations of X-ray sources, one can try to estimate parameters of the IMF, and test results, obtained from single stars population.
Acknowledgements. We want to thank K.A. Postnov for discussions and G.V. Lipunova and I.E. Panchenko for technical assistance. SBP also thanks organizers of the conference for support and hospitality.
This work was supported by the grants: NTP “Astronomy” 1.4.2.3., NTP ‘Astronomy” 1.4.4.1 and “Universities of Russia” N5559.
References
Contini, T., Davoust, E., & Considere, S.: 1995, A & A 303, 440
Lipunov, V.M., Ozernoy, L.M., Popov, S.B., Postnov, K.A. & Prokhorov, M.E.: 1996a, ApJ 466, 234
Lipunov, V.M., Postnov, K.A. & Prokhorov, M.E.: 1996b, Astroph. and Space Phys. Rev. 9, part 4
Nazin, , S.N., Lipunov, V.M., Panchenko, I.E., Postnov, K.A., Prokhorov, M.E. & Popov, S.B.: 1998, Grav. & Cosmology 4, suppl. “Cosmoparticle Physics” part.1, 150 (astro-ph 9605184)
Popov, S.B., Lipunov, V.M., Prokhorov, M.E., & Postnov, K.A.: 1997, astro-ph/9711352
Popov, S.B., Lipunov, V.M., Prokhorov, M.E., & Postnov, K.A.: 1998, AZh, 75, 35 (astro-ph/9812416)
Popov, S.B., Prokhorov, M.E., & Lipunov, V.M.: 1999, astro-ph/9905070
Schaerer, D.: 1996, ApJ 467, L17
Schaerer, D., Contini, T., & Kunth, D.: 1998, A&A 341, 399 (astro-ph/9809015) (SCK98)
van den Heuvel, E.P.J.: 1994, in “Interacting Binaries”, Eds. Shore, S.N., Livio, M., & van den Heuvel, E.P.J., Berlin, Springer, 442 |
no-problem/9912/cond-mat9912436.html | ar5iv | text | # 3D-melting features of the irreversibility line in overdoped Bi2Sr2CuO6 at ultra-low temperature and high magnetic field
## 1 INTRODUCTION
Despite the deep structural and physical similarities with the wide family of Bi- and Tl- based high-$`T_\mathrm{c}`$ compounds, the layered cuprate superconductor Bi<sub>2</sub>Sr<sub>2</sub>CuO<sub>6</sub> (Bi-2201) has a relatively low critical temperature (typically less than 13 K). On the other hand, this is accompanied by a value of the upper critical field $`B_{\mathrm{c2}}`$ which lies within the experimentally accessible range, thus allowing the investigation of the whole $`BT`$ phase diagram. Nevertheless, difficulties in growing high-quality single crystals have made this material much less studied than the other cuprates. In this work we present the results of irreversible magnetization measurements, carried out up to the applied magnetic field $`B_\mathrm{a}=28`$ T and down to $`T=60`$ mK on a Bi-2201 single crystal. The goal of this work is to extract the irreversibility line $`B_{\mathrm{irr}}(T)`$ and deduce what is the most suitable model to interpret its behavior, especially in view of the high anisotropy of the material and the influence of flux motion in the resistive transitions in magnetic field.
## 2 EXPERIMENT
The investigated sample is a high-quality Bi<sub>2+x</sub>Sr<sub>2-(x+y)</sub>Cu<sub>1+y</sub>O<sub>6±δ</sub> single crystal of approximate size $`1100\times 700\times 10`$ $`\mu `$m<sup>3</sup>, intrinsically overdoped because of the Bi excess localized on the Sr positions; accordingly, we found a critical temperature $`T_\mathrm{c}4`$ K. The magnetization was detected by means of a very sensitive capacitive torquemeter, installed into the mixing chamber of a dilution refrigerator. The plane of the torquemeter was tilted in order to have an angle $`\theta =30^{}`$ between the $`c`$-axis of the sample and the direction of the applied magnetic field. The torque loops $`\tau (B)`$ (inset of Fig. 1) were recorded during two independent sets of experiments, sweeping the field at the rates $`\mathrm{d}B_\mathrm{a}/\mathrm{d}t=10.8`$ mT/s and $`\mathrm{d}B_\mathrm{a}/\mathrm{d}t=15`$ mT/s, respectively. The magnetization loops $`M(B)`$ can be obtained from the relationship $`M=\tau /(B_\mathrm{a}\mathrm{sin}\theta )`$. Calling $`B_{\mathrm{irr}}^{(\mathrm{a})}`$ the applied field corresponding to the vanishing of the irreversible torque, the actual irreversibility field $`B_{\mathrm{irr}}`$ that would be obtained for $`𝐁_\mathrm{a}𝐜`$ is given by $`B_{\mathrm{irr}}=B_{\mathrm{irr}}^{(\mathrm{a})}\mathrm{cos}\theta `$; this is due to the high anisotropy of Bi-2201, implying that only the field orthogonal to the $`ab`$-planes is effective.
## 3 DISCUSSION
The irreversibility line $`B_{\mathrm{irr}}(T)`$ obtained in this way is shown in Fig. 1: qualitatively, it is very similar to the resistive upper critical field recently obtained with crystals very similar to ours , provided that the criterion $`\rho /\rho _\mathrm{n}=0.1`$ (i.e. the ”foot” of the resistive transition) is chosen. This confirms that, in very anisotropic superconductors, there is a wide area of the $`BT`$ phase diagram where the resistive transition is influenced by flux motion. On the other hand, this $`B_{\mathrm{irr}}(T)`$ can not be interpreted in a simple depinning or flux creep picture, since its curvature does not correspond to a law $`(1T/T_\mathrm{c})^n`$ with $`n2`$. Here we show that the flux motion above $`B_{\mathrm{irr}}`$ takes place as a consequence of the melting of a 3D-anisotropic flux lattice, as demonstrated by the very good fit (solid line in Fig. 1) provided by the analytical form of the 3D-melting line $`B_\mathrm{m}(T)`$ as calculated from the Lindemann criterion :
$$B_\mathrm{m}(T)=B_{\mathrm{c2}}(0)\frac{4\vartheta ^2}{\left(1+\sqrt{1+4\vartheta T_\mathrm{s}/T}\right)^2}$$
(1)
where $`\vartheta =c_\mathrm{L}^2\sqrt{\beta _\mathrm{m}/Gi}(T_\mathrm{c}/T1)`$, $`T_\mathrm{s}=T_\mathrm{c}c_\mathrm{L}^2\sqrt{\beta _\mathrm{m}/Gi}`$, $`c_\mathrm{L}`$ is the Lindemann number, $`Gi=\frac{1}{2}\left(\frac{\gamma k_\mathrm{B}T_\mathrm{c}}{(4\pi /\mu _0)B_\mathrm{c}^2(0)\xi _{ab}^3(0)}\right)^2`$ is the Ginzburg number, $`\gamma =\sqrt{m_c/m_{ab}}`$ is the mass anisotropy, $`\kappa =\lambda _{ab}(0)/\xi _{ab}(0)`$ and $`\beta _\mathrm{m}5.6`$. With parameters appropriate for Bi-2201, the fit to Eq. (1) yields the very reasonable estimate of $`c_\mathrm{L}=0.13`$, very close to the value $`c_\mathrm{L}=0.14`$ predicted for 3D vortex lattice melting in high magnetic field . It is interesting to observe that Eq. (1) is obtained using a linear extrapolation of $`B_{\mathrm{c2}}`$ down to $`T=0`$, and here it provides a good fit down to $`T/T_\mathrm{c}0.01`$: a fully linear behavior of $`B_{\mathrm{c2}}(T)`$ is indeed found in Bi-2201 taking the saturation points of the magnetoresistive transitions.
The high anisotropy of Bi-2201 might suggest that a more realistic picture for the vortex system is a stack of weakly coupled 2D pancake vortices . Instead, our data do not agree neither qualitatively nor quantitatively to the predicted existence of a 2D-3D crossover field $`B_{\mathrm{cr}}`$ and a field-independent melting temperature $`T_\mathrm{m}^{2\mathrm{D}}`$ in the $`BT`$ phase diagram; also the almost identical shape of the torque loops at different temperatures (inset of Fig. 1) suggests the existence of only one pinning mechanism. We can also rule out a possible quantum melting of the vortex lattice at low temperatures , since we don’t find any low-$`T`$ linear features in $`B_{\mathrm{irr}}(T)`$.
In conclusion, we can interpret the irreversibility line of overdoped Bi<sub>2+x</sub>Sr<sub>2-(x+y)</sub>Cu<sub>1+y</sub>O<sub>6±δ</sub> as the signature of 3D-anisotropic vortex lattice melting down to $`T/T_\mathrm{c}0.01`$. |
no-problem/9912/astro-ph9912252.html | ar5iv | text | # Massive Star Formation in Galaxies: Radiative transfer models of the UV to mm emission of starburst galaxies.
## 1 Introduction
It now looks increasingly clear that the long-sought era of galaxy formation is becoming accessible with a number of observational methods for close scrutiny. These range from optical/ultraviolet studies (Steidel & Hamilton 1992, Lilly et al 1996, Madau et al 1996), mid-infrared (Rowan-Robinson et al 1997 and references therein) and sub-millimeter surveys (Hughes et al 1998). A number of authors have discussed the history of star formation in the Universe which appears to have peaked at a redshift of about 1-3. Estimates like this are, of course, model-dependent as they involve the converion from an observed (usually monochromatic) luminosity density to a star-formation rate. In particular, they depend on the extent and geometry of dust obscuration especially at the shorter wavelengths.
As is well known a large fraction of the power emitted by galaxies (ranging from about 30% in normal galaxies to almost 100% in actively star forming galaxies or starbursts) lies in the infrared part of the spectrum as a result of reprocessing of starlight by dust. Extensive infrared observations of galaxies are therefore necessary in order to describe fully their energy output. With the advent of the Infrared Space Observatory (ISO) the spectra of a number of galaxies in the local Universe have been observed with unprecedented detail in the infrared. They invariably display a variety of absorption/emission features due to dust/molecules. It is clear that radiative transfer calculations in dusty media will be useful for the interpretation of these observations and a development of a better understanding of the origin of the infrared luminosity of galaxies.
Radiative transfer models for the infrared emission of starburst galaxies have been presented before by Rowan-Robinson & Crawford (1989; hereafter RRC), Rowan-Robinson & Efstathiou (1993; hereafter RRE), Krügel & Siebenmorgen (1994; hereafter KS). These models used state of the art codes for calculating the transfer of radiation in dusty media and incorporated a model for the composition and size distribution of grains in the interstellar medium. KS additionally included the effect of transiently heated grains/PAHs and considered the local change of dust temperatures in hot spots around luminous OB stars. The basic assumption of previous starburst models is that a starburst is made up of an ensemble of compact HII regions similar to those found in our galaxy. A number of authors (e.g. Rowan-Robinson 1980, Krügel & Walmsley 1984, Churchwell, Wolfire & Wood 1990, Efstathiou & Rowan-Robinson 1994) studied the infrared properties of different samples of such HII regions and concluded that they could be modelled adequately by spherically symmetric dust clouds surrounding young massive stars.
All of the above models assume that the cloud ensemble in the starburst consists of a number of identical systems. In this paper we present illustrative models for the evolution of giant molecular clouds (GMCs) induced by massive star formation at their centers and calculate their infrared spectra. In the radiative transfer code we use, the effect of transiently heated particles/PAHs as well as classical grains is included. We also follow the evolution of the stellar populations with the models of Bruzual & Charlot (1995). This approach allows us to relate the observed properties of a starburst to its age and its star formation history. We illustrate these models by comparison with the IRAS colours of starburst galaxies as a class and with the multiwavelength data on M82 and NGC6090.
## 2 A new starburst model
The basic assumption of our model for a starburst galaxy is that star formation takes place primarily within optically thick molecular clouds. This is supported by an array of observational studies (e.g. Elmegreen 1985) which show that molecular clouds are associated with young stars.
The mass of molecular clouds in our Galaxy ranges between $`10^210^7M_{}`$ with a mass distribution approximately following $`M^{1.5\pm 0.1}`$ (Dame et al 1983, Solomon et al 1987). It follows that about $`70\%`$ of molecular mass is associated with GMCs more massive than $`10^6M_{}`$. If the mass distribution in a starburst follows a similar form as the Galactic one, and GMCs are at least as likely to be the sites of massive stars as the less massive molecular clouds, then we would expect the bulk of the luminosity of the starburst to arise from GMCs with a fairly narrow range of mass. In fact low-mass molecular clouds, like the nearby Taurus and Ophiuchus clouds, are known to form predominantly low-mass stars.
An indirect argument that massive star formation in galaxies takes place in GMCs with roughly a power-law mass distribution comes from HST observations of starburst galaxies (O’Connell et al 1995, Meurer et al 1995) which reveal a population of super star clusters following a luminosity function of a power-law form $`\varphi (L)L^2`$. A similar slope is found for systems of young clusters in other galaxies (e.g. Whitmore & Schweizer 1995). This luminosity function is quite unlike that of globular clusters although it has been suggested that it could evolve into one (Meurer et al). In our model these star clusters represent the evolved counterparts of star clusters forming within GMCs, with the mass spectrum given above, after they have dispersed their nascent molecular clouds. It is not clear what the ages of these clusters are because of the known age-reddening degeneracy but in the case of M82 their proximity to the 2$`\mu m`$ nucleus (see section 4) points towards an age $`>`$ 10Myrs.
In previous studies we assumed that a starburst is an ensemble of identical star forming complexes, which we approximate as spherical. Here we refine this model by considering the fact that, given the starburst takes place over a finite period of time, the star-forming complexes that constitute it are bound to be at different evolutionary stages. We therefore use a simple evolutionary model for HII regions to construct a family of models that predict the infrared spectrum of HII regions as a function of their age.
### 2.1 The dusty HII region phase
Star formation takes place primarily in the dense cores of GMCs. The details of the physical processes involved are not well understood yet. The efficiency of star formation (or the gas consumption rate) ranges from about $`1\%`$ in late type spirals to $`60\%`$ or more in starburst galaxies (Kennicutt 1998).
Once massive stars form they can ionize the surrounding medium and inhibit further star formation. This in itself, however, cannot explain the low star formation efficiency in disk galaxies because it does not explain what determines the number of massive young stars formed initially.
The evolution of GMCs is determined by ionization induced expansion in the early stages ($`t<10^7`$ yrs) and later by stellar winds and supernova explosions. The latter eventually disperse the molecular clouds on timescales of a few times $`10^7`$ years. Current scenaria of self-propagating star formation (e.g. Larson 1988, Tenorio-Tagle & Bodenheimer 1988, hereafter TTB) hold that molecular cloud formation by a number of mechanisms occurs on a timescale of $`10^8`$ yrs, to complete the cycle.
The evolution of HII regions due to ionization has been the subject of extensive study both analytically and numerically for a broad variety of circumstances (spherically symmetric case, two-dimensional solutions leading to so-called champagne flows) and is considered to be generally well understood (for a review see Franco et al 1990, TTB).
The presence of a number of massive stars in the centre of a GMC producing numerous ionizing photons leads to the formation, on an extremely short timescale, of the classical HII region with initial Stromgren radius $`R_S`$. The pressure in the ionized region, being orders of magnitudes higher than in the rest of the GMC, drives the expansion of the HII region into the surrounding medium. The details of the expansion depend on a number of factors including the density distribution in the medium. Assuming a constant density medium and spherical symmetry, the evolution of the HII region radius can be described by (Spitzer 1978)
$$R=R_S\left[1+\frac{7}{4}\frac{c_it}{R_S}\right]^{\frac{4}{7}}$$
(1)
where $`c_i`$ is the sound speed in the ionized gas. $`t`$ is the time since the formation of the initial Stromgren sphere. However, since the timescale for the latter is extremely short we will assume that $`t`$ is the time from the onset of star formation (assumed to take place instantaneously) in the GMC.
The expansion can be broken down into a number of phases, marked by the time when massive stars first move off the main sequence and some recombination occurs ($`t34\times 10^6`$ yrs), a phase of further ionization, and finally total recombination ($`t10^7`$ yrs) as $`F_{}`$ diminishes. However, throughout expansion and until the supernovae ejecta finally reach the expansion front caused by ionization, equation (1) can be considered to give a reasonable description of the evolution, a fact confirmed by numerical simulations (TTB).
In the simple case of a core of constant density $`n_ccm^3`$ the initial Stromgren radius $`R_S`$ generated by a compact star cluster producing $`F_{}`$ ionizing photons is given by (Franco et al 1990),
$$R_S=4.9\left(\frac{F_{}}{5\times 10^{52}s^1}\right)^{\frac{1}{3}}\left(\frac{n_c}{2\times 10^3cm^3}\right)^{\frac{2}{3}}pc$$
(2)
The average densities of GMCs in our Galaxy are in the range $`10`$ to $`10^2cm^3`$ (Dame et al 1983) but their cores, where most of the stars form, have densities three or more orders of magnitude higher. Higher densities are also deduced in more actively star-forming galaxies in accordance with the Schmidt law (Kennicutt 1998). Given the lack of knowledge about $`n_c`$, its density distribution, the effect of dust within the HII region and our assumption that all star formation in a given GMC occurs instantaneously on defining $`R_S`$, equation (2) provides only a rough estimate of $`R_S`$, probably an upper limit.
We use the tables of Bruzual and Charlot (1995) to derive the number of Lyman continuum photons $`F_{}`$. For a Salpeter IMF and stars in the mass range $`0.1125M_{}`$,
$$F_{}=5\times 10^{52}\left(\frac{\eta }{0.25}\right)\left(\frac{M_{GMC}}{10^7M_{}}\right)s^1$$
(3)
where $`\eta `$ is the star formation efficiency.
The molecular cloud radius $`r_2`$ is related to its mass $`M_{GMC}`$ and average density $`n_{av}`$ (initially assumed to be uniform) by
$$r_2=50\left(\frac{(1\eta )M_{GMC}}{10^7M_{}}\right)^{\frac{1}{3}}\left(\frac{n_{av}}{300cm^3}\right)^{\frac{1}{3}}pc$$
(4)
At time $`t=0`$, the GMC is essentially divided into three zones. Zone A (inside the dust sublimation radius $`r_1`$) consists of ionized gas and the stellar cluster which is approximated as a point source. In zone B ( $`r_1<r<R_S`$) we have ionized gas and dust, whereas zone C ( $`R_S<r<r_2`$) is the dusty neutral zone.
As the expansion gets under way, the shock wave at $`R(t)`$, leading the ionization front at $`R_i(t)`$, accumulates neutral gas between $`R_i`$ and $`R`$. Inside $`R_i`$ the mass density of the ions $`\rho _i`$ follows (Franco et al )
$$\rho _i(t)=\rho _{n_0}\left(\frac{R}{R_S}\right)^{-3/2}$$
(5)
where $`\rho _{n_0}`$ is the initial mass density of the neutral material assumed to be equal to the initial mass density of the ions. The density inside the HII region is probably constant to a good approximation as the expansion is subsonic (Franco et al 1990).
The density enhancement and the separation of $`R_i`$ and $`R`$ cannot be followed analytically. Instead we assume that the neutral gas accumulated in the shock is spread over the entire neutral cloud, and we use the principle of conservation of mass to calculate the (uniform) density of neutral material $`\rho _n(t)`$
$$\rho _n(t)=\rho _{n_0}\frac{1-\left(\frac{R}{r_2}\right)^{3/2}\left(\frac{R_S}{r_2}\right)^{3/2}}{1-\left(\frac{R}{r_2}\right)^3}$$
(6)
Assuming the standard conversion from gas column density to extinction (Savage & Mathis 1979), the visual extinction to the center of the cloud at $`t=0`$ is given by,
$$\tau _{V_0}=50\zeta \left(\frac{r_2}{50pc}\right)\left(\frac{n_{av}}{300cm^{-3}}\right)$$
(7)
where $`\zeta `$ is the metallicity with respect to solar.
To estimate the evolution of the optical depth of the cloud we assume that the number density of grains scales in the same way as $`\rho _i`$ and $`\rho _n`$ to get
$$\tau _V=\tau _{V_0}[\frac{\rho _n}{\rho _{n_0}}(1-\frac{R}{r_2})+f_d\frac{\rho _i}{\rho _{n_0}}(\frac{R}{r_2}-\frac{r_1}{r_2})]$$
(8)
where $`f_d`$ takes account of a possible depletion of dust in the HII region as a result of shocks etc.
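The following sketch (our illustration; it assumes the fiducial $`R_S`$ and $`\tau _{V_0}`$ values from equations (2) and (7) and neglects the dust sublimation radius $`r_1`$) evaluates equations (5)-(8) for a given shock radius:

```python
def densities_and_tau(R, R_S=4.9, r2=50.0, tau_V0=50.0, f_d=0.2, r1=0.0):
    """Evaluate eqs (5)-(8) for a shock radius R (all radii in pc).

    Returns (rho_i/rho_n0, rho_n/rho_n0, tau_V).  The dust sublimation radius
    r1 is taken as negligible here, an assumption made only for illustration.
    """
    x, xs = R / r2, R_S / r2
    rho_i = (R / R_S) ** (-1.5)                       # eq. (5), ionized gas
    rho_n = (1.0 - x**1.5 * xs**1.5) / (1.0 - x**3)   # eq. (6), swept-up neutral gas
    tau_V = tau_V0 * (rho_n * (1.0 - x) + f_d * rho_i * (x - r1 / r2))   # eq. (8)
    return rho_i, rho_n, tau_V

for R in (10.0, 20.0, 40.0):
    print(R, densities_and_tau(R))
```

As expected, the ionized density drops and the neutral shell density rises as the shock approaches $`0.8r_2`$, while the total optical depth to the cluster decreases.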
We will assume that the shock advances into the neutral part of the GMC until the swept up material is half of the original GMC mass. For our assumed uniform density medium this means the advance of the shock stops when $`R\approx 0.8r_2`$.
### 2.2 The supernova phase
The state of the GMC at $`10^7`$ years is likely to be a narrow neutral shell still expanding, because of the momentum it acquired in the HII phase, at about 10 km s<sup>-1</sup>. Over the next $`3`$–$`4\times 10^7`$ years supernova explosions are likely to be the main factor that will influence the evolution of the GMCs, eventually leading to their dispersal and return of most of the gas to the HI phase.
This phase has also been extensively studied analytically and numerically, but unfortunately mainly for the ‘standard’ density case of 1 cm<sup>-3</sup> (TTB and references therein). Most relevant for our purposes is the work of McCray and Kafatos (1987) who presented an analytical solution for the case of multi-supernova explosions in an OB association. Numerical solutions have shown that this solution is an adequate approximation if the supernova explosions are frequent as in our case. The sequence of discrete supernova explosions in the model is replaced by a scaled-up stellar wind solution. The radius of the swept up shell before the hot interior begins to cool at time $`t_c`$ is given by
$$R_{SN}=97\left(\frac{N_{*}E_{51}}{n}\right)^{0.2}\left(\frac{t}{10^7yrs}\right)^{0.6}pc$$
(9)
where $`N_{*}`$ is the total number of stars in the association (with $`M>7M_{\odot }`$) that are destined to explode as supernovae, $`E_{51}`$ is the energy per supernova (assumed constant) in units of $`10^{51}`$ ergs, and $`n`$ is the density of the medium, again assumed to be constant.
After time $`t_c`$, given by
$$t_c=4\times 10^6\zeta ^{-1.5}(N_{*}E_{51})^{0.3}n^{-0.7}yr$$
(10)
the shell expands as
$$R_{SN}=R_c(t/t_c)^{1/4}$$
(11)
where $`R_c`$ is the radius reached at $`t_c`$.
For a Salpeter IMF and stars in the mass range $`0.1`$–$`125M_{\odot }`$ the Bruzual & Charlot models predict $`8.92\times 10^{-3}`$ supernova explosions per 1 $`M_{\odot }`$ of stellar mass. So, our adopted parameters for the GMCs in a starburst ($`M_{GMC}=10^7M_{\odot }`$, $`\eta =0.25`$; see section 3) imply $`N_{*}=2.23\times 10^4`$. This, in turn, implies that the supernova shell will remain inside the shell formed by ionization at $`t=10^7`$yrs if the density of the medium within which the supernovae are exploding is higher than $`10^3\,cm^{-3}`$. This is not unreasonable given the contributions from stellar winds, failed ‘cores’ etc. in a compact cluster of this mass. The velocity of the supernova shell is also predicted to drop well below the expansion velocity of the neutral shell by $`10^7`$ yrs.
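As an illustrative check of the numbers quoted above (not the authors' code; the ambient density and the time origin of the supernova phase are assumptions), the supernova count and equations (9)-(11) can be evaluated as:

```python
def n_supernovae(M_gmc=1e7, eta=0.25, sn_per_msun=8.92e-3):
    """Stars with M > 7 Msun destined to explode, per the Bruzual & Charlot yield."""
    return eta * M_gmc * sn_per_msun            # = 2.23e4 for the fiducial GMC

def sn_shell_radius(t_yr, N_star, n=1e3, E51=1.0, zeta=1.0):
    """Multi-supernova shell radius from eqs (9)-(11), in pc.

    t_yr is measured from the (assumed) onset of supernova activity;
    n is the assumed ambient density in cm^-3.
    """
    t_c = 4e6 * zeta**-1.5 * (N_star * E51)**0.3 * n**-0.7     # eq. (10), yr
    if t_yr <= t_c:
        return 97.0 * (N_star * E51 / n)**0.2 * (t_yr / 1e7)**0.6       # eq. (9)
    R_c = 97.0 * (N_star * E51 / n)**0.2 * (t_c / 1e7)**0.6
    return R_c * (t_yr / t_c)**0.25                                      # eq. (11)

N = n_supernovae()
# compare with the neutral shell, which by then has expanded further at ~10 km/s
print(N, sn_shell_radius(1e7, N))
```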
In this paper we will therefore assume that the supernova shell remains ‘trapped’ inside the neutral shell which continues to expand at 10 km s<sup>-1</sup> until $`t=4\times 10^7`$yrs. We discuss whether this assumption is inconsistent with infrared observations of galaxies later in the paper.
Our plan in this paper is to use the simple evolutionary scenario outlined in the last two sections and our radiative transfer code in dusty media to predict the infrared properties of evolving HII regions. A further ingredient of our models will be the evolutionary synthesis models of Bruzual & Charlot (1995) which will provide the spectral energy distribution of the evolving stellar population powering the HII region.
The family of evolving HII region models will then form the basis of our starburst models under our assumption that the latter are an ensemble of giant HII regions at different evolutionary stages.
In order to get a first impression of the characteristics of our models we have in this paper confined our attention to the spherically symmetric case. Clearly, highly non-spherical geometries can arise especially at the latter stages of the evolution and we plan to explore these situations in future studies.
### 2.3 Dust model
The model we have assumed for the absorption/emission properties of the dust is an extension of the ‘classical’ grain model (e.g. Mathis 1990 and references therein) to take into account the effects of small grains and molecules. The model is described in detail in Siebenmorgen & Krügel (1992; hereafter SK92) where it has been shown to account satisfactorily for the emitted spectra of dust in a number of environments (solar neighbourhood, planetary nebulae, star-forming regions).
The model assumes three populations of grains and aims to fit the average interstellar extinction curve subject to the constraints imposed by abundances of heavy elements that are in the solid state etc. The first population of grains consists of the large grains (assumed to have the optical properties of the “astronomical” silicates of Draine & Lee 1984 and of amorphous carbon by Edoh 1983). These grains provide the bulk of the emission/absorption at long wavelengths and are responsible for the silicate resonances at 9.7 and 18$`\mu m`$. The large grains are assumed to have a power-law size distribution ($`n(a)\propto a^{-q}`$, $`q=3.5`$, 100Å$`\le a\le `$2500Å).
The second population of grains consists of small graphite particles ($`n(a)\propto a^{-q}`$, $`q=4`$, 10Å$`\le a\le `$100Å). These grains are responsible for the 2175Å bump (Draine 1989) and emit primarily in mid-IR wavelengths. Because of their small size they show dramatic fluctuations in temperature. Their emission is calculated with the method of Siebenmorgen et al (1992) which is a faster (but still fairly accurate) alternative to the treatment of Guhathakurta & Draine (1989).
The third grain population is composed of PAHs that are now widely believed to be the carriers of the infrared features at 3.3, 6.2, 7.7, 8.6 and 11.3$`\mu m`$ (Puget & Leger 1989, Allamandola et al 1989). The SK92 model assumes two components for the PAHs: a single molecule component made up of 25 carbon atoms and a cluster component made up of 10-20 molecules. A ratio of carbon atoms in PAHs to H atoms in the ISM of $`3\times 10^{-5}`$ is assumed (which is about 10% of the total carbon abundance thought to reside in grains/molecules). The PAH clusters dominate at longer (mid-IR) wavelengths and are largely responsible for the quasi-continuum underlying the features (Desert, Boulanger & Puget 1990). The PAHs absorb mainly in the Far UV and to a lesser extent in the visible (SK92).
The PAHs are thought to be dehydrogenated depending on their environment, so a degree of dehydrogenation $`\alpha _{H/C}`$ (defined as the ratio of hydrogen to carbon atoms) needs to be introduced. $`\alpha _{H/C}`$ varies from about 0.4 for the solar neighbourhood to 0.1 or less for star forming regions (SK92). As the emission of some of the features (3.3, 8.6 and 11.3$`\mu m`$) is proportional to the number of hydrogen atoms and of others (6.2 and 7.7$`\mu m`$) proportional to the number of carbon atoms, $`\alpha _{H/C}`$ can have an effect on the relative strengths of the features.
In regions of high radiation intensity (such as those to be found in regions of massive star formation or AGN), the small graphites and PAHs are thought to be underabundant with respect to the ISM by factors of 10 or more (SK92, Rowan-Robinson 1992 and references therein). This will result in a flattening of the extinction curve in the UV and an elimination of the 2175Å feature. The latter has been shown to be a characteristic of the extinction curves of starburst galaxies (Gordon, Calzetti & Witt 1997). For the models presented in this paper we assume that all the grains are depleted by a factor of 5 (i.e. $`f_d=0.2`$) inside $`R`$. We further assume that the clusters are made of 500 C atoms and contain $`90\%`$ of the total C abundance in PAHs. Note that the PAHs abundance is further reduced because of the photo-destruction mechanism that is self-consistently applied in the code (Siebenmorgen 1993).
### 2.4 Radiative transfer model
The method of solution of the radiative transfer problem in dusty media containing transiently heated grains is described in Efstathiou & Siebenmorgen (1999). The method of obtaining the intensity distribution at any point in the cloud and hence iterating for the temperature of each of the large grains is that used by Efstathiou & Rowan-Robinson (1990, 1995; hereafter ER90, ER95 respectively). The emission of the transiently heated particles is calculated according to the method of Siebenmorgen et al. (1992). Proper treatment of the photodestruction of the PAHs and the sublimation of the large grains, at the inner part of the cloud, is taken into account.
## 3 Evolving HII region models
There are basically three free parameters in our model ($`M_{GMC}`$, $`\eta `$, and $`n_{av}`$) which we fix by relating to observational constraints where available.
Kennicutt (1998) finds the median rate of gas consumption in starburst galaxies per $`10^8`$ years to be $`30\%`$. Colbert et al (1998) estimate from ISO spectroscopy that the mass in stars in the M82 starburst is $`0.5`$–$`1.3\times 10^8M_{\odot }`$. Assuming the molecular gas mass of $`2\times 10^8M_{\odot }`$ estimated from a number of studies (e.g. Hughes et al 1994) is associated with the GMCs that formed those stars, the implied gas consumption rate is 0.2-0.39. In this paper we assume $`\eta =0.25`$.
For $`M_{GMC}`$ we choose $`10^7M_{\odot }`$, a mass close to the upper limit of the galactic mass spectrum, as the emission is likely to be dominated by such GMCs (section 2). As a check, we can compare the luminosity expected from one of these GMCs ($`M_{GMC}=10^7M_{\odot },\eta =0.25`$) with that of the 10$`\mu m`$ knots in M82. Assuming they have the same spectrum as the starburst as a whole, each of these knots has a luminosity of $`2\times 10^8L_{\odot }`$ (Telesco & Gezari 1992). By comparison, the models of Bruzual & Charlot ($`0.1<M<125M_{\odot }`$) predict $`L\approx 3.5\times 10^8L_{\odot }`$ for a $`10^7`$ years old instantaneous burst.
Probably the most uncertain free parameters in our models are the initial GMC average density $`n_{av}`$ and the core density $`n_c`$. We assume $`n_{av}=300\,cm^{-3}`$ which is within the range of GMC densities found in the centre of the galaxy (Güsten 1989). There is evidence (e.g. RRE, Downes & Solomon 1998) that higher gas densities are to be found in the more extreme starbursts powering ultraluminous infrared galaxies. The core density is assumed to be $`2\times 10^3\,cm^{-3}`$.
In Figure 1 we plot the density, temperature and spectral energy distributions of a GMC with assumed parameters $`M_{GMC}=10^7M_{\odot },\eta =0.25`$, and $`n_{av}=300\,cm^{-3}`$. These parameters are used for the remainder of this paper. The density at the boundary between the neutral and ionized region has been smoothed in order to be able to handle it numerically. The shading in the same diagram represents the spread in temperature between different grain species. Note that the spread is much smaller in the neutral region because it is more optically thick (see Efstathiou & Rowan-Robinson 1994, Krügel & Walmsley 1984 for a discussion of such radiative transfer effects in dust clouds). Our evolutionary scheme predicts that by $`10^7`$ years the ionization front has compressed the dust to a very narrow shell ($`r_1/r_2\approx 0.8`$).
The SEDs of the GMCs vary significantly with age. In the early stages of the cloud’s evolution its SED is predicted to be warmer and show little signs of PAH emission. This is partly because the stellar population is younger and the radiation field stronger. The main contributing factor though is that there is more hot dust inside the Stromgren sphere than at later times. The weakness of the PAH features is partly due to the stronger near to mid-IR continuum emission from large grains and to the higher degree of photodestruction because of the stronger and harsher radiation field.
By 10Myrs the mid-IR spectrum is dominated by the PAHs features and quasi-continuum and shows the characteristic shape that is observed in the spectra of starburst galaxies (Willner et al 1977, Acosta-Pulido et al 1996). Note that while the general trend is for the dust shell to cool off with time, the expansion of the HII region introduces a subtle effect that somewhat counteracts this in the HII phase. As the neutral shell is pushed out and the density inside $`R`$ declines, the total optical depth to the stellar cluster (eqn 8) decreases. This means that the neutral shell actually heats up a little bit. This effect is better demonstrated by the evolution of the IRAS colours, plotted in Figure 2.
At $`t>20Myrs`$ the peak of the SEDs of the shells shifts to longer wavelengths and the SEDs become remarkably similar to those of the diffuse medium and cirrus clouds.
The effect of changing the IMF from Salpeter to Miller-Scalo is to reduce $`F_{*}`$ by about 80$`\%`$ and $`R_S`$ by about 20$`\%`$. So the effect on the overall SEDs is small.
## 4 Evolving starburst models
To synthesize the spectral energy distribution of a burst of star formation from those of individual GMCs, let us assume that at time $`t`$ after the onset of the starburst the star formation rate (or in our case the number of GMCs forming stars instantaneously with efficiency $`\eta `$) is $`\dot{M}_{*}(t)`$. If we further assume that the ensemble of clouds is optically thin, i.e. they don’t shadow each other, then the flux from the burst is given by
$$F_\nu (t)=\int _0^t\dot{M}_{*}(t^{\prime })S_\nu (t-t^{\prime })dt^{\prime }$$
(12)
where $`S_\nu (t-t^{\prime })`$ is the flux from a GMC $`t-t^{\prime }`$ years after the onset of star formation at its centre.
A useful parameterization for the star formation rate in a starburst, which has been extensively used (e.g. Rieke et al 1980, Genzel et al 1998), is that of exponential decay, $`\dot{M}_{*}(t^{\prime })\propto e^{-t^{\prime }/\tau }`$, where $`\tau `$ is some time constant. In the limit of large $`\tau `$ this approximates a scenario of constant star formation history. Under this assumption eqn (12) reduces to
$$F_\nu (t)=\dot{M}_{*}(0)e^{-t/\tau }\int _0^te^{t^{\prime }/\tau }S_\nu (t^{\prime })dt^{\prime }$$
(13)
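A discretised version of equations (12) and (13) can be sketched as follows (an illustrative sketch, our addition; `S_nu_of_age` is a stand-in for the tabulated GMC radiative transfer models, not the actual code used in the paper):

```python
import numpy as np

def starburst_sed(S_nu_of_age, ages_yr, t_yr, tau_yr=2e7):
    """Sum GMC SEDs weighted by an exponentially decaying star-formation rate.

    S_nu_of_age(age) must return the GMC flux (arbitrary normalisation) on the
    frequency grid of interest; ages_yr is the grid of GMC ages in years.
    """
    dt = np.gradient(ages_yr)
    mask = ages_yr <= t_yr
    # a GMC of age a was formed at t' = t - a, so its weight is exp(-(t-a)/tau)
    weights = np.exp(-(t_yr - ages_yr[mask]) / tau_yr)
    return sum(w * S_nu_of_age(a) * d
               for w, a, d in zip(weights, ages_yr[mask], dt[mask]))

# toy usage: a two-band stand-in SED that simply cools with age
ages = np.linspace(0.0, 7.2e7, 73)
toy = lambda a: np.array([1.0, 0.1]) * np.exp(-a / 3e7)
print(starburst_sed(toy, ages, t_yr=5e7, tau_yr=2e7))
```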
In general, some stellar light will escape the GMC without being absorbed by the dust shell, a situation we can’t address exactly in our present spherically symmetric model. To account for this, but only in the models fitted to M82 and NGC6090, $`S_\nu `$ is corrected for this effect by allowing a fraction $`f(t)`$ of the starlight to leak out. The emission from the dust is correspondingly reduced to maintain flux conservation. The leaking starlight may suffer further absorption with the radiation reprocessed to infrared radiation as well. Clearly there are a number of possibilities here. The starlight may be absorbed locally (i.e. the GMC has some kind of anisotropic density distribution, or optically thin holes), or alternatively the starlight escapes the GMC and is absorbed by another GMC or the diffuse medium (KS). In the first case the best approximation would probably be to assume that the starlight is reprocessed to the infrared with the same SED as that of the GMC itself. In the second case it would be better to assume that the starlight is reprocessed according to the SED of the more numerous GMCs with ages of about 40-100Myrs. In this paper, in particular in the models fitted to M82 and NGC6090 in section 6, we consider only the first case. More details on the approximation used in those fits can be found in Appendix A.
## 5 Comparison with IRAS data
The IRAS data on galaxies have for the last decade or so provided the yardstick by which theoretical models of their infrared emission should be measured. While more extensive datasets are increasingly becoming available with ISO and other projects (e.g. SCUBA), which enable more detailed comparisons (and we illustrate this in the next section for two well studied objects), it is still instructive to compare our models with the IRAS colours.
RRC proposed that the IRAS colours of galaxies can be understood in terms of 3 components: a ‘disc’ component with infrared properties similar to those of cirrus clouds in our Galaxy, a starburst component with properties similar to those of compact galactic HII regions and an AGN component, which is associated with the dusty torus that is now an integral part of the standard AGN model. The latter component is not discussed further in this paper but instead we concentrate on the colours of starburst and Disc galaxies.
In Figure 2 we plot the IRAS colours of the starburst galaxies in the sample of RRC (selected to have good quality fluxes in all four IRAS bands) and indicate the position of their disc component (D). Also indicated in Figure 2 are the positions of M82 and NGC6090 on the colour-colour diagrams. Disc galaxies cover the part of the colour-colour diagram between the starbursts and D, although there is some overlap with the starburst galaxies. RRC proposed that the overlap is due to mixing of the starburst and Disc components. Rowan-Robinson (1992) also showed that some of the variation in the colours of the disc galaxies may be due to the intensity of the radiation field.
If we consider first the predicted colours of the sequence of GMCs (dotted line), we see that they span the entire range of observed galaxy colours. This suggests that a weighted sum of emission from such a family of GMCs (which is what equation (12) essentially is) may explain the observed galaxy colours. To test this we have computed the colours of a galaxy that experienced an exponentially decaying burst with $`\tau =20`$Myrs (solid line). The predicted colours in the age range 0-72Myrs nicely match the spread in the colours of starburst galaxies. Furthermore, they predict a correlation of the 100/60 ratio, and an anticorrelation of the 25/12 one, with age. The predicted mean age of starburst galaxies ($`\sim 50`$Myrs) agrees remarkably well with other estimates from Br$`\gamma `$ equivalent widths and CO indices (Goldader et al 1997) as well as ISO spectroscopy (Genzel et al 1998). The predicted colours are not very sensitive to the assumed value of $`\tau `$. The colours for the constant star formation case, a scenario more appropriate for disc galaxies, also lie on the same track but are packed towards the M82 end. This may have significant implications for the origin of the far-IR luminosity of disc galaxies as we discuss further in section 8. The conclusion from this analysis is therefore that the age of the burst can account for some of the variation in IRAS galaxy colours attributed by RRC to mixing with a cirrus component.
There are two more features of the colours of the sequence of GMCs that are worth mentioning. First the discontinuity at $`t=20Myrs`$ is due to the switch from the HII to supernova phase. Clearly, how quickly (and how far) the tracks move to the right of the colour-colour diagrams will depend on the details of the supernova phase which we treat very crudely in this paper. If the supernova ejecta expel the dust further away than we have assumed, then (in the absence of any other heating source) we would expect the dust to cool down further and the 60/25 and 100/60 ratios to increase. The discontinuity in the colours may also be an artifact of our simplified treatment of the GMC evolution. In particular, stellar winds from young stars may impart as much kinetic energy in the ISM as the type II supernovae. Secondly, the apparent ‘retrograde’ motion of the tracks, best seen in the 25/12 versus 60/25 colour-colour diagram, is due to the radiative transfer effect, highlighted in section 3, which leads to the temporary heating up of the dust shell despite the fact that the radiation field is diminishing. The reality of these features can only be assessed by more detailed simulations of the evolution of GMCs.
In Figure 3 we give the SEDs of the starburst discussed above ($`\tau =20`$Myrs) for some representative ages. No correction for leaking starlight is applied to the models. A number of features are worth highlighting: (1) there is a general tendency for the peak of the SED to shift to longer wavelengths with age. (2) The PAH features get stronger with age. (3) the 9.7$`\mu m`$ silicate feature gets shallower with age. (4) while the optical/UV light of the youngest bursts is almost completely obscured, the oldest starbursts are predicted to emit significantly in the optical/UV. This illustrates the extent to which old diffuse (and therefore optically thin) GMCs dominate the emission of old bursts.
## 6 A model for M82
### 6.1 Observational constraints on M82
M82 being the nearest example of a starburst (D=3.25 Mpc; Tammann & Sandage 1968) has received a lot of attention both observationally and theoretically. The luminosity of the starburst ($`3\times 10^{10}L_{\odot }`$; Telesco & Harper 1980) is sufficient to outshine any pre-existing stellar population in the nucleus of this dwarf galaxy. The IR luminosity arises mostly in the central 200x400pc region (13” x 26”) which shows a bilobal distribution and is aligned with a central stellar bar. The bar is mainly made up of an old stellar population (Telesco et al 1991) and it probably formed about $`10^8`$ years ago during the interaction of M81 with M82. Radio interferometry suggests a supernova frequency of 1 every 10-20 years in the central region which implies a high star formation rate (Muxlow et al 1994).
Rieke et al (1980), in their classic study of M82, drew attention to the fact that the starburst emission is spatially correlated at 10-, 20$`\mu m`$ and radio wavelengths. The emission region constitutes an elongated structure aligned with the major axis of the galaxy. The broad correspondence of the mid-IR emission with the radio was also confirmed by Telesco & Gezari (1992). The map of Telesco & Gezari shows the two prominent peaks seen in earlier observations but their higher spatial resolution also allowed them to resolve a number of clumps with characteristic radii of about 20pc. This provides further confirmation of the basic assumption of our model that the starburst is made of an ensemble of massive GMCs.
Rieke et al also pointed out that the starburst shows a completely different appearance in the K band with the ‘nucleus’ of the galaxy being the prominent feature. There is no 10$`\mu m`$ feature associated with the nucleus. In the K band there is also extended emission along a stellar bar about a kpc in extent. Rieke et al also found that 2$`\mu m`$ spectra of the nucleus show the absorption bands characteristic of giants and supergiants. Their interpretation of these findings is that the nucleus represents the first generation of stars formed about $`5\times 10^7`$ yrs ago in an (exponentially decaying) starburst which has since propagated outwards.
Shen & Lo (1995) mapped the CO emission from M82 with 2.5” resolution. Their maps revealed unresolved ($`<30pc`$) structure in the CO emission. They suggest that most of the CO emitting gas (and therefore the starburst) is located in molecular spiral arms at 125 and 390pc from the nucleus. They also resolved the previously known double-peaked structure into several peaks.
The task of utilizing the multiwavelength data for M82 is complicated by the fact that the good spatial and spectral resolution data at the shorter wavelengths $`\lambda \lesssim 60\mu m`$ cannot be exactly matched by the far-IR and submm data. A scan by Telesco & Harper (1980) at 58$`\mu m`$ (where the spectral energy distribution peaks) shows that the intrinsic source size is about 30” very similar to that at 10$`\mu m`$. Hughes et al (1994) deconvolved the source size of M82 at various wavelengths. They find that between 10-40$`\mu m`$ the source size is fairly constant (25x8 arcsec<sup>2</sup>) but between 100$`\mu m`$ and 1.3mm it is slightly larger (37x10 arcsec<sup>2</sup>).
While an IRAS LRS spectrum (with an aperture large enough to include the whole galaxy) is available for M82, because of the importance of the mid and near-IR wavelength range in constraining the model, we use instead the 2-13$`\mu m`$ spectrophotometry by Willner et al (1977) with a 30” aperture. The 10$`\mu m`$ spectrum looks similar to that of Roche et al (1991) and Jones & Rodriguez-Espinosa (1984) in a number of positions with 3” which lends further support to the idea that what we see in the central 30” is an ensemble of similar star-forming regions. The IRAS LRS spectrum also shows a similar spectrum although the 10$`\mu m`$ feature appears a little shallower. The flux in the IRAS LRS spectrum is also about a factor of 2 higher than the spectrum of Willner but also about 50% higher than the broad band 12$`\mu m`$ point. The 10$`\mu m`$ growth curves of Rieke et al (1980) also suggest that the total mid-IR flux may be higher than that in the 30” aperture by about 50%. The K band photometry of Rieke in a 30” aperture matches that of Willner et al in a similar aperture rather well. The K flux in the 30” aperture is about 30% of the total (Rieke et al, Ichikawa et al 1995).
In conclusion, the data plotted in Figure 4 are probably a good description of the spectrum of the starburst in M82 but it should be borne in mind that the far-IR and submm data may include contributions from a region more extended than the central starburst. Having experimented with a range of combinations of age and decay time $`\tau `$ we could not obtain a fit to the multi-wavelength data with a single burst scenario. The fundamental problem with a single burst (in the context of our model) is that M82 shows characteristics for both a young burst (warm SED and fairly deep 10$`\mu m`$ absorption) and an older burst (strong 1-5$`\mu m`$ continuum).
Our model fit, shown in Figure 4, assumes that M82 experienced two bursts of star formation. The first one occurred 26.5Myrs ago and had a very steep exponential decay ($`\tau =2Myrs`$). This burst is now responsible for most of the K band light and about half of the far-IR and submillimeter emission. The second burst occurred 16.5Myrs ago and decayed more slowly ($`\tau =6Myrs`$). The GMCs in both bursts are assumed to leak 20$`\%`$ of their starlight after 10Myrs which subsequently suffers a visual extinction of 1.5 magnitudes. The two bursts are predicted to contribute roughly equal amounts of UV flux. The spectrophotometry of Willner et al shows an excess over the model longwards of the 3.3$`\mu m`$ feature (possibly associated with a quasi-continuum similar to that underlying the longer wavelength features) which we cannot match with our present grain model. Note that while the 10$`\mu m`$ spectrum of the younger burst provides an excellent fit to the shape of the observed spectrum, the addition of the older burst with its flatter spectrum dilutes the absorption feature. Our model could be in better agreement with the 10$`\mu m`$ feature and the extended far-IR to submm emission if at least some of the dust shells associated with the older burst have expanded beyond the central 30 arcsec. In fact there is evidence from the map of Hughes et al (1994) that some submm emission is associated with the outflows along the minor axis of the galaxy.
## 7 A model for NGC6090
Acosta-Pulido et al (1996) presented ISO spectrophotometry and extensive multi-wavelength photometry for NGC6090. This galaxy is about 10 times more luminous than M82 and its optical image shows a disturbed morphology and signs of a recent interaction. UV to K band data for this galaxy are also available (Gordon et al 1997). The IRAS colours of NGC6090 show evidence for cold dust. RRC attributed that to a significant contribution from cirrus in this galaxy. It is therefore not surprising that the emission from an M82 type starburst has to be supplemented by colder dust to fit the data for $`\lambda >120\mu m`$ (Acosta-Pulido et al).
As is clear from Figure 2, the IRAS colours of NGC6090 predict an ‘old’ burst scenario. In Figure 5 we test this by comparing such a model to the multiwavelength data. We find that a good fit can be obtained with $`t=64`$Myrs, $`\tau =50`$Myrs. As in the M82 model we have assumed that after 10Myrs 20$`\%`$ of the starlight from each GMC escapes (and subsequently suffers 1.5mag of visual extinction), but we find that adequate fits to the data can be obtained even if we have no leakage.
The remaining discrepancy between the model and the far-IR observations suggests that some emission from even colder dust in the starburst or general interstellar medium of this galaxy (cirrus) may be contributing at the longer wavelengths. Colder dust in the starburst could for example arise if the supernova explosions are more effective (as we discuss in section 5) in destroying the GMCs and expelling the dust away from the heating sources. Evidence of very cold dust ($`T\approx 10K`$) has been found in some normal galaxies by recent ISO observations (e.g. Krügel et al 1998). The same authors, however, find no evidence for cold dust in the starburst galaxies in their sample. It will be interesting to assess how common this far-IR excess is in starburst galaxies, as it will provide strong constraints on theoretical models.
## 8 Summary and Discussion
We have described an evolutionary scheme for massive GMCs centrally illuminated by a stellar cluster. The defining characteristic of this scheme is that the HII regions formed by the ionizing stellar radiation compress the neutral gas and dust in the GMC to a narrow shell. This naturally explains why the maximum temperature of the large grains in HII regions appears to be lower than the sublimation temperature of the graphite and silicate grains. It also explains why the near to mid-IR spectrum of HII regions and starburst galaxies is dominated by the PAH emission. These trends are consistent with the findings of Helou et al (1998) from ISO mid-IR spectrophotometry. The evolution of the GMC after about $`10^7`$yrs is more uncertain but we show that even a modest rate of expansion at the interstellar velocity dispersion is sufficient to lead to the formation of a diffuse cold shell by about $`10^8`$yrs. The infrared spectrum of these shells is very similar to infrared cirrus clouds. We associate these shells with the HI superstructures observed in our galaxy and other galaxies (Heiles 1979) and usually associated with large star clusters.
The sequence of GMC spectral energy distributions we have computed covers the range of galaxy SEDs observed by IRAS. This leads us to believe that the observed spectra of galaxies could be modelled in terms of this evolutionary scheme and used to constrain their star formation history. Exponentially decaying bursts are shown to account satisfactorily for the colours of starburst galaxies in the sample of RRC. The age of the burst is shown to be an important contributing factor to the spread in the galaxy colours and can mimic (to some extent) the effect of mixing with a cirrus component. Our application of these models to M82 and NGC6090 has shown that good fits to the UV to mm data on these galaxies can be obtained with the models. In M82 we find evidence for two bursts separated by 10Myrs. Our model is similar in this respect to that of Rieke et al (1993) who find that two Gaussian bursts (separated by 8-25Myrs) can account for a range of observational constraints. Theoretical support for this periodic burst scenario is also provided by the models of Krügel & Tutukov (1993). NGC6090 is best fitted by an older burst (64Myrs). There is evidence for a colder dust component in this galaxy. It is not clear whether that is due to cirrus or colder dust in the starburst.
The colours predicted for a continuous star formation scenario, one more appropriate for disc galaxies, fall short of explaining the colours of disc galaxies. Although a more thorough exploration of the parameter space is needed before any definite conclusions can be drawn, this result is not entirely surprising for the following reason. In starburst galaxies, almost by definition, the emission from the recently formed stars outshines the old stellar population so the effect of the latter on our models is probably not very significant. In disc galaxies, however, where the star formation rate is lower, the old stellar population is more important and needs to be taken into account when considering the energy balance and emission of GMCs, especially the older ones. The effect of this would be, neglecting any change in the spectral shape, to boost the luminosity of the old GMCs and therefore their contribution to the overall galaxy emission. This provides an indirect argument in favour of the idea that old stars make a significant contribution to the far-IR luminosity of disc galaxies (Walterbos & Greenawalt 1996).
The wealth of data on M82 allows us to apply further checks on the validity of this model. O’Connell et al (1995) imaged the central few hundred parsecs of M82 with the HST in the V and I bands and discovered a complex of over 100 luminous star clusters. The clusters do not seem to be associated with X-rays, infrared or radio compact sources but instead with the less obscured regions in the galaxy such as the ‘nucleus’ and the periphery of the dust lane. The size of these clusters is estimated to be about 3pc, consistent with being the remnants of the GMCs in our model for the starburst. The fact that M82 is viewed almost edge-on complicates somewhat the interpretation of these observations but the (at least) partial segregation of the optical and infrared emitting regions suggests that they arise from stellar populations at different evolutionary stages, as in our model. This also suggests that the assumption of spherical symmetry is reasonable at least for the young ($`t<10`$Myrs) GMCs.
Satyapal et al (1997) studied the stellar clusters in the central 500pc of M82 with near-IR spectroscopy and high resolution imaging. Their findings support a picture in which the typical age of the stellar clusters is $`10^7`$ yrs. They also find a correlation between the age of the clusters and distance from the centre suggesting that the starburst is propagating outwards at a speed of about 50km/s. This picture is also supported by spectro-imaging of M82 at 3.3$`\mu m`$ which shows evidence for dissociation of PAH molecules in the ‘nucleus’ (Normand et al 1995). McLeod et al (1993) also found that the 3.3$`\mu m`$ feature is about twice as strong at a position offset by about 8” from the nucleus. The centre of M82 has been known for some time (Axon & Taylor 1978) to show evidence for a biconical outflow along its minor axis. This is now widely believed to be due to a galactic wind, a general characteristic of starburst galaxies, powered by stellar winds and supernovae. Numerical hydrodynamic simulations (Suchkov et al 1994) suggest that such outflows can develop under quite general conditions at the center of starbursts but their morphology depends very much on the energy deposition rate and density of the medium. It is likely that the galactic wind was powered by the earlier burst and that the interaction of the wind with gas in the plane of the galaxy triggered or at least contributed to the outwardly propagating secondary burst.
Telesco & Gezari (1992) noted that there is no correspondence between the peaks of the radio emission in M82 (presumed to be young supernova remnants) and those at 12.4$`\mu m`$. This suggests that the mid-IR peaks in M82 are associated with GMCs in which supernova activity has not yet started or in which the supernova rate is still very low. This observation is in excellent agreement with our M82 model in which the mid-IR emission is predominantly due to the younger burst.
As is evident from Figure 3 the luminosity of a starburst galaxy is predicted to change quite considerably with age. We illustrate this more clearly in Figure 6 where we plot the luminosity at different IRAS wavelengths (normalized to its peak) as a function of age for the same model parameters used in Figure 3 ($`\tau =20Myrs`$). The luminosity rises sharply in the first 10Myrs to peak at an age which depends on the wavelength. It then declines sharply by up to a factor of 5 with age. We have omitted the 25$`\mu m`$ curve as it is very similar to that at 60$`\mu m`$. There are a number of interesting implications arising from this result for the detectability of starburst galaxies. It implies for example that mid-IR surveys would preferentially detect younger starbursts than far-IR surveys. It also implies that near the detection limit of such surveys up to half the galaxies that experienced a burst in the last $`7\times 10^7`$ years may be missed because they are either too young or too old. This may have implications for estimates of the star formation rate over volumes of space as it introduces another form of bias.
In conclusion, the evolutionary scheme we have put forward in this paper seems to be in very good agreement with an array of observational evidence on M82 and is consistent with the infrared properties of other starburst galaxies. It promises to be useful for the interpretation of the growing datasets on infrared galaxies and we plan such studies in future work. The scheme also lends itself for incorporation into simulations of galaxy formation and evolution and this was one of the prime motivations for its development.
While useful constraints on the star formation history and stellar populations of galaxies can be obtained from this model, it will be of much interest to explore in the future the effect of deviations from spherical symmetry especially at the later stages of GMC evolution. Hydrodynamical simulations of the evolution of star-forming molecular clouds which take into account the effects of ionization, stellar winds and multi-supernova explosions in dense environments should also take high priority. Only then will we be able to take full advantage of the data ISO, SCUBA, WIRE, SIRTF, NGST, VLT, FIRST, PLANCK, SOFIA, ALMA etc. are expected to yield over the next decade or so.
## Acknowledgments
AE acknowledges support by PPARC. This work has made use of the NASA Extragalactic Database (NED). We thank an anonymous referee for useful comments and suggestions.
## APPENDIX: Corrections for non-spherical geometry
Let the monochromatic luminosity of a spherical GMC of optical depth $`\tau _V`$ (centrally illuminated by a stellar cluster) at frequency $`\nu `$ be $`4\pi S_\nu ^s`$ whereas that of the stellar cluster (in the absence of any dust) be $`4\pi S_\nu ^{*}`$. By definition, $`\int _0^{\infty }S_\nu ^sd\nu =\int _0^{\infty }S_\nu ^{*}d\nu `$. If the optical depth of the GMC is $`\tau _V^l(<\tau _V)`$ for a fraction $`f`$ of the sky centred on the star cluster, then the luminosity of such a non-spherical GMC can be approximated by,
$$L_\nu =4\pi [(1-f)S_\nu ^s+fe^{-\frac{C_{e,\nu }}{C_{e,V}}\tau _V^l}S_\nu ^{*}+f\frac{\int _0^{\infty }(1-e^{-\frac{C_{e,\nu }}{C_{e,V}}\tau _V^l})S_\nu ^{*}d\nu }{\int _0^{\infty }S_\nu ^{*}d\nu }S_\nu ^s]$$
where $`C_{e,\nu }`$ is the dust extinction cross-section at frequency $`\nu `$.
The first term assumes that the GMC emits isotropically with the same spectral energy distribution as the spherical GMC. The factor $`1-f`$, however, accounts for the fact that a fraction $`f`$ of the sky is covered by a lower (or even zero) optical depth. The remaining two terms attempt to correct for this complication. The second term gives the stellar light that is transmitted through the optically thin holes. The third term assumes that the light absorbed by dust in the optically thin holes is reprocessed to the infrared with the same spectral energy distribution as that of the spherical GMC.
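A minimal sketch of this correction (our illustration, following the reconstructed form of the equation above on a common frequency grid; the function names are hypothetical):

```python
import numpy as np

def escape_corrected_L(nu, S_s, S_star, C_e, C_e_V, tau_V_l, f):
    """Appendix correction for an optically thin solid-angle fraction f.

    nu, S_s, S_star, C_e are arrays on a common frequency grid; S_s is the
    spherical-GMC SED, S_star the unattenuated stellar SED, C_e the extinction
    cross-section (C_e_V its V-band value).  Returns L_nu / 4pi.
    """
    t = (C_e / C_e_V) * tau_V_l
    absorbed_frac = np.trapz((1.0 - np.exp(-t)) * S_star, nu) / np.trapz(S_star, nu)
    return (1.0 - f) * S_s + f * np.exp(-t) * S_star + f * absorbed_frac * S_s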
If a large number of non-spherical, but otherwise identical, GMCs viewed at different orientations are present in a starburst, then in equations 12 and 13 one can use their average emission $`S_\nu =\frac{L_\nu }{4\pi }`$. While this approximation conserves energy and is probably close to the best one can do for correcting for non-spherical geometry, without doing the actual calculation, its usefulness is bound to be limited as some of the assumptions (e.g. isotropic emission from a non-spherical optically thick system) are known to be false (e.g. Efstathiou & Rowan-Robinson 1995).
## 1 Introduction
The proposal that extended vortex configurations are responsible for maintaining confinement at arbitrarily weak coupling in $`SU(N)`$ gauge theories has a long history. Several important results were established by the early eighties. Over the last two years, the vortex picture of confinement has undergone very substantial development by a series of numerical investigations as well as new analytical results.
It was originally conjectured that thick vortices occur with nonvanishing measure contribution in the path integral at arbitrarily large beta, and that this provides a sufficient mechanism for confinement. In light of recent developments, it now appears that not only is this contribution sufficient but also necessary: it is responsible for the full string tension of large Wilson loops in $`SU(N)`$ gauge theories. After briefly recalling basic features of the vortex picture, we discuss some of the recent numerical and analytical results underlying these developments. Conclusions are presented in section 7.
## 2 Physical Picture
A vortex configuration of the gauge field may be characterized by a multivalued singular $`SU(N)`$ gauge transformation function $`V(x)`$. The multivaluedness ambiguity lies in the center $`Z(N)`$, so the transformation is single-valued in $`SU(N)/Z(N)`$. If one attempts to extend such a gauge transformation $`V(x)`$ throughout spacetime, it becomes singular on a closed surface $`𝒱`$ of codimension 2 (i.e. a closed loop in $`d=3`$, a closed 2-dimensional sheet in $`d=4`$) forming the topological obstruction to a single-valued choice of $`V(x)`$ throughout spacetime. Generic vortex configurations of the gauge potentials then consist of a pure-gauge long-range tail given by $`V(x)`$, and a core enclosing the region where $`𝒱`$ would be if one were to try to smoothly extend $`V`$ everywhere. Equivalently, the configuration cannot be smoothly deformed to pure gauge everywhere without encountering a topological obstruction $`𝒱`$. Note that it is only the existence of this obstruction, and not its precise location, that is relevant; the location may always be moved around by a regular gauge transformation. The asymptotic pure-gauge part provides then a topological characterization of the configurations irrespective of the detailed structure of the core.
Assume now that two gauge field configurations $`A_\mu (x)`$ and $`A_\mu ^{\prime }(x)=VA_\mu V^{-1}+V\partial _\mu V^{-1}`$ differ by such a singular gauge transformation $`V(x)`$, and denote the path ordered exponentials of $`A_\mu `$ and $`A_\mu ^{\prime }`$ around a loop $`C`$ by $`U[C]`$ and $`U^{\prime }[C]`$, respectively. Then $`\mathrm{tr}U^{\prime }[C]=z\mathrm{tr}U[C]`$, where $`z\ne 1`$ is a nontrivial element of the center, whenever $`V`$ has obstruction $`𝒱`$ linking with the loop $`C`$; otherwise, $`z=1`$. Conversely, changes in the value of $`\mathrm{tr}U[C]`$ by elements of the center can be undone by singular gauge transformations on the gauge field configuration linking with the loop $`C`$. This means that vortex configurations are topologically characterized by elements of $`\pi _1(SU(N)/Z(N))=Z(N)`$. This topological $`Z(N)`$ flux is of course conserved only mod $`N`$, and, hence, the number of vortices in a given gauge field configuration in a given spacetime region can be defined only mod $`N`$.
Vortex configurations, if sufficiently spread out, can be present at any nonzero coupling. This is because, by spreading the flux over a sufficiently ‘thick’ core, one can incur sufficiently small cost in local action so that the configuration is not energetically suppressed at large beta; while at the same time there is very substantial disordering over long distances. In this way UV asymptotic freedom can coexist with IR confinement. The crucial question then is whether this class of configurations contributes with enough weight in the path integral measure to provide a sufficient mechanism for confinement at large beta. One may suspect that this is so since it is easily seen that, given one vortex configuration, an enormous number of others may be produced by fluctuations that do not alter the vorticity content.
The vortex picture of confinement may then be summarized as follows. Confinement at arbitrarily weak coupling in $`SU(N)`$ gauge theories is the result of nonzero vorticity in the vacuum over sufficiently large scales. In other words, in any sufficiently large spacetime region, the expectation for the presence of a spread out vortex is nonzero at all large $`\beta <\infty `$. This is strikingly demonstrated by a recent lattice computation which shows that the relative probability for vortex excitation in fact approaches unity (section 6). Configurations carrying sufficiently thick vortices contribute with essentially the same weight as ones without vortices. In this sense one has a ‘condensate’ of vortex configurations.
On the lattice a discontinuous (singular) gauge transformation introduces a thin vortex. The topological obstruction $`𝒱`$ is regulated to a coclosed set of plaquettes (in $`d=4`$ this is a closed 2-dimensional surface of dual plaquettes on the dual lattice). This represents the core of the thin vortex, each plaquette in $`𝒱`$ carrying flux $`z\in Z(N)`$.
Thick vortex configurations can be constructed by perturbing the bond variables $`U_b`$ in the boundary of each plaquette $`p`$ in $`𝒱`$ so as to cancel the flux $`z`$ on $`p`$, and distribute it over the neighboring plaquettes. Continuing this process by perturbing bonds in the neighboring $`p`$’s one may distribute the flux over a thickened core in the two directions transverse to $`𝒱`$. Beyond the thickness of the core, the vortex contribution reduces to the original multivalued pure gauge. If the original thin vortex is long enough, it may be made thick enough, so that each plaquette receives a correspondingly tiny portion of the original flux $`z`$ that used to be on each $`p`$ in $`𝒱`$. Long thick vortices may therefore be introduced in $`\{U_b\}`$ configurations having $`\mathrm{tr}U_p\approx \mathrm{tr}1`$ for all $`p`$. (Here $`U_p`$ denotes the product of the $`U_b`$’s around the plaquette boundary.) Thus they may survive at weak coupling where the plaquette action becomes highly peaked around $`\mathrm{tr}U_p\approx \mathrm{tr}1`$. Long vortices may link with a large Wilson loop anywhere over the area bounded by the loop, thus potentially disordering the loop and leading to confining behavior. Thin vortices, on the other hand, necessarily incur a cost proportional to the size of $`𝒱`$, and only short ones can be expected to survive at weak coupling. These can link then only along the perimeter of a large loop generating only perimeter effects.
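The sign-flip statement above can be illustrated with a small toy computation (our addition, not from the paper): on a two-dimensional lattice with random $`SU(2)`$ links, flipping a half-line of links (a discontinuous $`Z(2)`$ transformation) changes the sign of exactly one plaquette, the thin-vortex core, and of any Wilson loop linking it:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8   # lattice size; loops below stay in the interior, so no periodic wrap is needed

def random_su2():
    """Random SU(2) matrix built from a normalised quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

# U[mu, x, t] is the SU(2) link leaving site (x, t) in direction mu (0: x, 1: t)
U = np.array([[[random_su2() for _ in range(L)] for _ in range(L)] for _ in range(2)])

def wilson_loop(U, x1, x2, t1, t2):
    """Real trace of the rectangular loop with corners (x1, t1) and (x2, t2)."""
    M = np.eye(2, dtype=complex)
    for x in range(x1, x2):                 # bottom edge, +x
        M = M @ U[0, x, t1]
    for t in range(t1, t2):                 # right edge, +t
        M = M @ U[1, x2, t]
    for x in range(x2 - 1, x1 - 1, -1):     # top edge, -x
        M = M @ U[0, x, t2].conj().T
    for t in range(t2 - 1, t1 - 1, -1):     # left edge, -t
        M = M @ U[1, x1, t].conj().T
    return np.trace(M).real

# Thin Z(2) vortex: flip the t-links on the half-line t = t0, x >= x0.
# Only the single plaquette at (x0-1, t0) changes sign: the vortex core.
x0, t0 = 4, 4
V = U.copy()
V[1, x0:, t0] *= -1.0

print(wilson_loop(U, 1, 6, 1, 6), wilson_loop(V, 1, 6, 1, 6))        # linked loop: sign flips
print(wilson_loop(U, 5, 7, 1, 6), wilson_loop(V, 5, 7, 1, 6))        # unlinked loop: unchanged
print(wilson_loop(U, x0 - 1, x0, t0, t0 + 1),
      wilson_loop(V, x0 - 1, x0, t0, t0 + 1))                        # the core plaquette flips
```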
## 3 Isolation of Vortices and their Contribution - Simulation Results
Considerable activity has been devoted over the last two years to the isolation of vortices, and the computation of their contribution to the heavy quark potential and other physical quantities on the lattice. Several approaches have been pursued:
* Computation of the expectation of the Wilson loop fluctuation solely by elements of the center on smoothed configurations which remove short distance fluctuations.
* Center projection in maximal center gauge (MCG) and isolation of P-vortices.
* Direct computation of excitation probability of a vortex in the vacuum (magnetic-flux free energy).
Of these, the third is the most direct and physically transparent, and closely connected to the formulation of rigorous analytical results on the necessity and sufficiency of vortices for confinement. It also is computationally the most expensive. The result of a recent computation is presented in section 6 below. In this section we discuss the first two.
### 3.1 Quark potential from center fluctuations on smoothed configurations
We saw above that the fluctuation in the value of $`\mathrm{tr}U[C]`$ by elements of $`Z(N)`$, parametrizing the different $`\pi _1(SU(N)/Z(N))`$ homotopy sectors, expresses the changes in the number (mod $`N`$) of vortices linked with the loop over the set of configurations for which it is evaluated. One is not, however, interested in fluctuations produced by small thin vortices, which will still be present even at large beta but are irrelevant to the long distance physics. One would like to isolate only fluctuations by elements of the center due to extended configurations that can reflect long distance dynamics. To ensure this one performs local smoothing on the configurations which is constructed so that it removes short distance fluctuations but preserves long distance physics. There is another, in fact essential reason for employing smoothing: it ensures good topological representation of extended fluctuations on the lattice. (According to rigorous theorems, only for lattice configurations with sufficiently small variations of the plaquette function $`U_p`$ from its maximum is it possible to unambiguously define a continuum interpolation assignable to a topological sector.)
Separate out the $`Z(N)`$ part of the Wilson loop observable by writing $`\mathrm{arg}(\mathrm{tr}U[C])=\phi [C]+\frac{2\pi }{N}n[C]`$, where $`-\pi /N<\phi [C]\le \pi /N`$, and $`n[C]=0,1,\dots ,N-1`$. Thus, with $`\eta [C]=\mathrm{exp}(i\frac{2\pi }{N}n[C])\in Z(N)`$,
$`W[C]=\langle \mathrm{tr}U[C]\rangle =\langle |\mathrm{tr}U[C]|e^{i\phi [C]}\eta [C]\rangle =\langle |\mathrm{tr}U[C]|\mathrm{cos}(\phi [C])\mathrm{cos}(\frac{2\pi }{N}n[C])\rangle ,`$ (1)
using the fact that the expectation is real by reflection positivity, and that it is invariant under $`n[C](Nn[C])`$. Next define
$$W_{Z(N)}[C]=\langle \mathrm{cos}(\frac{2\pi }{N}n[C])\rangle .$$
(2)
One then compares the string tension extracted from the full Wilson loop $`W[C]`$, eq. (1), to the string tension extracted from $`W_{Z(N)}[C]`$, eq. (2), on sets of progressively smoothed configurations. Results have been obtained for $`N=2`$ and $`N=3`$ using the smoothing procedure in . Typical results for the heavy quark potential for $`N=3`$ from six times smoothed lattices are shown in figure 1.
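For concreteness, the decomposition of $`\mathrm{arg}(\mathrm{tr}U[C])`$ into $`\phi [C]`$ and the sector label $`n[C]`$ used in eqs. (1)-(2) can be sketched as follows (our illustration; boundary cases of the rounding convention are glossed over):

```python
import numpy as np

def center_sector(trU, N=3):
    """Split arg(tr U[C]) into phi in (-pi/N, pi/N] and the Z(N) label n[C]."""
    theta = np.angle(trU)
    k = np.rint(theta / (2 * np.pi / N))    # nearest multiple of 2pi/N
    n = int(k) % N
    phi = theta - 2 * np.pi / N * k
    return phi, n

# example: a loop value whose phase lies in the n = 1 sector of SU(3)
phi, n = center_sector(1.5 * np.exp(1j * 2.2))
print(phi, n, np.cos(2 * np.pi * n / 3))    # the per-configuration factor entering eq. (2)
```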
There is striking coincidence of the potentials extracted from (1) and (2) at large distances indicating that the full asymptotic string tension is carried by the vortex-induced center fluctuations. This coincidence is found to be very robust and stable under different numbers of smoothing steps. This is a very stringent test, as configurations subjected to different degrees of smoothing are vastly different, and indicates that this is an actual long-distance physics effect.
### 3.2 Center projection, MCG and P-vortices
Most of the numerical simulations follow this approach. As almost all results in MCG are for $`N=2`$, we discuss only this case. The method consists of the following steps.
1. Fix the gauge by maximizing the quantity: $`\sum _b|\mathrm{tr}U_b|^2`$. This is the MCG.
2. Make the center projection: $`U_b\rightarrow Z_b`$ by replacing each $`SU(2)`$-valued bond variable by the closest center element $`Z_b`$.
3. The excitations of the resulting $`Z(2)`$ bond configurations are coclosed sets of plaquettes each carrying $`-1`$ flux, i.e. $`Z(2)`$ vortices. These are the projection vortices (P-vortices).
The string tension extracted from loops of the center projected Z(2) variables is then found to reproduce the full asymptotic string tension of the SU(2) LGT.
The rationale for the method is as follows. Consider two configurations of the bond variables $`U`$ and $`U^{\prime }`$ that differ by a discontinuous (singular) gauge transformation introducing a vortex. The corresponding configurations in the adjoint representation $`U_A`$ and $`U_A^{\prime }`$ are then gauge equivalent by a regular gauge transformation. Now go to MCG. Then $`U_A`$ and $`U_A^{\prime }`$ go to the same MCG-fixed adjoint configuration $`\overline{U}_A`$; whereas $`U`$ and $`U^{\prime }`$ are transformed to $`\overline{U}`$ and $`\overline{U}^{\prime }`$ corresponding to the same adjoint $`\overline{U}_A`$. Hence $`\overline{U}`$ and $`\overline{U}^{\prime }`$ can only differ by a discontinuous plus possibly regular $`Z(2)`$ gauge transformation. Upon center projection, the projected $`Z`$ and $`Z^{\prime }`$ configurations will also differ by the same discontinuous $`Z(2)`$ transformation (plus possibly regular transformations), i.e. one P-vortex (mod $`N`$) reflecting the vortex introduced by the singular transformation by which $`U`$ and $`U^{\prime }`$ differ.
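Steps 2 and 3 of the procedure can be sketched as follows (our illustration, reusing the toy link-array layout of the earlier sketch; step 1, the iterative MCG maximization, is not implemented here, and in the realistic $`d=4`$ case the negative plaquettes form closed surfaces on the dual lattice rather than isolated points):

```python
import numpy as np

def center_project(U):
    """Step 2: replace each (gauge-fixed) SU(2) link by the nearest centre
    element, Z_b = sign(tr U_b).  U has shape (mu, x, t, 2, 2)."""
    return np.sign(np.trace(U, axis1=-2, axis2=-1).real)

def p_vortex_plaquettes(Z):
    """Step 3: plaquettes of the Z(2) field with value -1 mark the P-vortices."""
    Zx, Zt = Z[0], Z[1]
    # product of the four Z(2) links around each (x, t) plaquette (open boundaries)
    return (Zx[:-1, :-1] * Zt[1:, :-1] * Zx[:-1, 1:] * Zt[:-1, :-1]) < 0

# usage sketch, with U_mcg denoting links already fixed to maximal centre gauge:
# Z = center_project(U_mcg)
# print(np.argwhere(p_vortex_plaquettes(Z)))
```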
Note that the projected configurations $`Z`$ and $`Z^{}`$ give for any Wilson loop linking with the P-vortex values differing by a sign (nontrivial element of $`Z(2)`$). But, as we saw in the previous section, the value of the Wilson loop in the original $`U`$ and $`U^{}`$ will indeed differ by a sign if they differ by a discontinuous gauge transformation. In this way contact is made with the method in 3.1 above.
The above, however, relies on certain assumptions. It assumes that: a) $`U_A`$ and $`U_A^{}`$ are gauge equivalent everywhere; b) the gauge fixing of the adjoint links by the MCG is complete everywhere leaving only a residual $`Z(2)`$ symmetry. It turns out, however, that, for the purpose of associating a P-vortex with a thick vortex, these assumptions cannot hold in general. First it is evident that a) applies strictly only if the singular gauge transformation corresponds to a thin vortex; for a thick vortex it cannot apply in the thick core, hence there is no unique $`\overline{U}_A`$ everywhere. Furthermore, there is a pronounced Gribov copies problem. The MCG gauge fixing functional has in fact many local maxima, and in practice only a local maximum can be achieved. The result after projection can be strongly dependent on the chosen maximum.
All this is in fact inextricably connected with the physics of the problem. General non-Abelian configurations can have $`U_p\approx 1`$ everywhere, but with the bond variables gradually wandering all over the group over long distances. In fact, as we argued, this is precisely why smooth extended vortices can survive at large beta. The MCG attempts to put every bond as close to an element of the center as possible, and largely compress a thick vortex to a thin one. It is to be expected that in general this cannot be achieved everywhere without gauge fixing ambiguities and Gribov problems.
Numerical demonstration of the Gribov copies problem was given in . Starting from configurations fixed in the Lorentz gauge and then going to the MCG produces on average a maximum higher than starting from random gauge and then going to MCG. The resulting picture in the two cases is dramatically different. In the latter case the results of are reproduced. In the former case there is essentially complete loss of string tension, while there is a drop of only about 40% in the density of P-vortices indicating that they cannot be associated with thick vortices contributing to the string tension, but only with short thin vortices.
It is also known that slight local smoothing of the $`SU(2)`$ configurations also causes considerable decrease in the center projected string tension. This again indicates ambiguities in associating thick vortices with P-vortices that are introduced with expanded cores and additional smoothness over longer distances in the configurations.
In conclusion, gauge fixing can be a useful way of isolating vortices, but clearly further work is needed. A more sophisticated approach is called for which exhibits the vortex cores (topological obstructions) as an intrinsic property of the gauge field, and hence independent of the particular gauge fixing procedure adopted. A promising proposal along these lines was made in . (Further discussion can be found in .)
## 4 Necessary Condition for Confinement at Weak Coupling
The numerical results above indicate that only the fluctuations between different $`\pi _1(SU(N)/Z(N))`$ homotopy sectors are responsible for the asymptotic string tension. Fluctuations among the same sector become irrelevant for large enough loops. Numerically, a rather delicate near cancellation between the different sectors occurs, resulting in area-law for the Wilson loop expectation (as oppposed to an exponentially larger perimeter-law result) at weak coupling. This suggests that eliminating fluctuations between different sectors will result in vanishing string tension, i.e. loss of confinement at weak coupling. Since thick vortices are precisely the configurations allowing jumps between different sectors as the continuum limit is approached, this implies that their presence is a necessary condition for confinement.
A relevant rigorous result was in fact already obtained long ago in Ref. . There it was shown that in the presence of constraints eliminating thick vortices completely winding around the lattice with periodic boundary conditions, the electric-flux free energy order parameter in $`SU(N)`$ LGT exhibits non-confining behavior at arbitrarily weak coupling.
The electric-flux free energy gives an upper bound on the Wilson loop . To exhibit non-confining behavior for the Wilson loop itself, one needs a lower bound. In we recently obtained the following rigorous result. Consider the expectation of the Wilson loop in the presence of constraints that eliminate from the functional measure all configurations that can represent thick vortices linking with the Wilson loop, i.e. allow the Wilson loop to fluctuate into the nontrivial $`\pi _1`$ homotopy sectors.
Then for sufficiently large $`\beta `$, and dimension $`d3`$ the so constrained Wilson loop expectation $`W[C]`$, exhibits perimeter law, i.e. there exist constants $`\alpha ,\alpha _1(d),\alpha _2(d)`$ such that
$$W[C]\alpha \mathrm{exp}\left(\alpha _2e^{\alpha _1\beta }|C|\right).$$
(3)
Here $`|C|`$ denotes the perimeter length of the loop $`C`$. In other words, the potential between two external quark sources is nonconfining at weak coupling.
In only the $`SU(2)`$ case is treated explicitly. The result is proven for a variety of actions. One class of actions considered is given by:
$$A_p(U)=\beta \mathrm{tr}U_p+\lambda \text{sign}(\mathrm{tr}U_p),$$
(4)
where $`0\lambda <\mathrm{}`$ extrapolates between the standard Wilson action ($`\lambda =0`$) and the ‘positive plaquette action’ model ($`\lambda \mathrm{}`$). Another choice is:
$$A_p(U_p)=\beta \mathrm{tr}U_p+\mathrm{ln}(\theta (|\mathrm{tr}U_p|k)),$$
(5)
i.e. Wilson action with an excised small ‘equatorial’ strip in $`SU(2)`$ of width $`k`$ such that $`k\beta `$ large as $`\beta `$ becomes large; e.g. $`k`$ a small constant, or $`k1/\beta ^{1/2}`$. All these actions have the same naive continuum limit and expected to be in the same universality class . Choice of different actions serves to emphasize that the result is independent of the particular choice of YM action latticization.
It should be emphasized that the constraints do not eliminate thin vortices. This is achieved by employing the $`SO(3)\times Z(2)`$ formulation , of the $`SU(2)`$ LGT. All constraints depend only on the $`SO(3)`$ coset variables, and thus any $`Z(2)`$ plaquette fluxes on thin vortices remain unaffected. We refer to Ref. which contains detailed explicit derivations.
## 5 Sufficiency Condition - Lower Bound on the String Tension by the Excitation Probability for a Vortex
As already mention an upper bound on the Wilson loop is given by the electric-flux free energy order parameter . This quantity is the $`Z(N)`$ Fourier transform of the magnetic-flux free energy . The magnetic flux free energy order parameter is defined as the ratio $`Z(z)/Z`$ of the partition function with a ‘twisted’ action to that with the original (untwisted) action. The ‘twist’ inserts a nontrivial element $`zZ(N)`$, i.e. a discontinuous gauge transformation, in the action on every plaquette of a $`(d2)`$-dim topologically nontrivial coclosed set of plaquettes $`S^{}`$ (closed 2-dim surface of dual plaquettes on the dual lattice in $`d=4`$) with periodic boundary conditions. Thus $`\mathrm{ln}(Z(z)/Z)`$ gives the free energy cost for exciting a vortex completely winding in $`(d2)`$ spacetime directions around the lattice, and is also referred to as the vortex free energy. (Alternatively, one may consider appropriate fixed boundary conditions in the remaining two spacetime directions so that the winding vortex again remains trapped .) The upper bound on the Wilson loop in terms of the $`Z(N)`$ Fourier transform of such ‘vortex containers’ , , implies area-law only if the vortex free energy remains finite in the large volume limit (in the Van Hove sense). Now to cancel a cost proportional to $`L^{(d2)}`$ ($`L`$ lattice linear length), the system must respond by spreading the discontinuous gauge transformation on $`S^{}`$ in the two transverse directions, i.e. the vortex free energy will remain finite only if the expectation for exciting an arbitrarily long, thick vortex remains finite. This provides then a sufficiency condition for confinement.
Recently, we have obtained an alternative lower bound on the string tension for $`SU(2)`$ which can be expressed directly in terms of the ’t Hooft loop expectation (magnetic-disorder parameter) . The ’t Hooft operator amounts to a source exciting a $`Z(2)`$ monopole current on a coclosed set of cubes (closed loop of dual bonds on the dual lattice), and forming the coboundary of a set of plaquettes $`S^{}`$ (forming the boundary of a $`(d2)`$-dim surface of dual plaquettes) representing the attached Dirac sheet. The operator inserts a twist $`(1)Z(2)`$ on each plaquette in $`S^{}`$. In our case the monopole ‘loop’ is taken to be the minimal coclosed set of cubes consisting of the $`2(d2)`$ cubes sharing a given plaquette $`p`$. The set $`S^{}`$ attached to it winds around the lattice in the $`(d2)`$ perpendicular directions. Again, to cancel a cost proportional to $`L^{(d2)}`$, the system must respond by spreading a discontinuous gauge transformation on $`S^{}`$ in the two transverse directions, i.e. the operator gives the expectation for exciting a thick vortex ‘punctured’ by a short monopole loop. Now the presence of the ‘puncture’ by the small monopole loop (site of the ’t Hooft loop source) is a purely local effect that can be extracted with fixed action cost (at finite lattice spacing). Shrinking the monopole loop to a point gives then the magnetic flux free-energy observable $`Z(z)/Z`$.
It is known that, for $`SU(N)`$, individual configurations exist giving vanishing vortex energy cost. The non-Abelian nature of the group is crucial for their existence. No construction of a finite measure contribution, hence no proof at the nonperturbative level is available though.
## 6 Measurement of the Vortex Free Energy
In the absence of an analytical proof, we have resorted to numerical evaluation of the magnetic flux free energy (vortex free energy) for $`N=2`$.
Measurement was performed by combining Monte Carlo simulation with the multihistogram method of Ref. . The method was used in to compute the free energy of a $`Z(2)`$ monopole pair as a function of the pair’s separation. The method consists roughly of looking at the probability distribution of the energy along the twist (all other variables integrated out). This probability is reconstructed by combining histograms of the energy along the twist obtained from several simulations at different values of the coupling along the twist. The method tends to be computationally expensive. The result of our computation is shown in figure 2. The lattice spacings are $`a=0.119`$ fm and $`a=0.085`$ fm for $`\beta =2.4`$ and $`\beta =2.5`$, respectively. As expected by physical reasoning, not only does the vortex free energy cost remain finite as the lattice volume grows, but it tends to zero, i.e. the weighted probability for the presence of a vortex goes to unity for sufficiently large lattice. This reflects the exponential spreading of color-magnetic flux in a confining phase.
## 7 Conclusions
Simulations show that the full string tension is accounted for by center fluctuations of the Wilson loop insensitive to short distance details. The result is robust under local smoothings of configurations, consistent with the picture of thick vortex configurations in the vacuum being responsible for confinement at weak coupling.
The method of gauge fixing for associating vortices in the full theory with vortices in center-projected $`Z(N)`$ configurations (P-vortices) is, upon closer inspection, a rather tricky proposition. The common implementation (maximal center gauge) suffers from pronounced Gribov and smoothing problems. Clearly a more sophisticated approach is needed that manifestly does not depend on the particular gauge fixing procedure adopted. Work along these lines is being currently pursued.
Elimination of thick vortex configurations in the vacuum allowing the Wilson loop to fluctuate into different $`\pi _1(SU(N)/Z(N))`$ homotopy sectors has recently been rigorously shown to lead to loss of confinement at arbitrarily weak coupling. It is an old result that this also holds true for the electric-flux free energy order parameter. In other words, the presence of vortices is a necessary condition for confinement.
The numerical simulations indicate that in fact the presence of vortices is both a necessary and sufficient condition. Again analytical arguments relate the existence of nonzero string tension directly to the nonvanishing of the excitation probability of a sufficiently thick vortex in the vacuum. A numerical evaluation of the vortex free energy presented here shows that this probability is indeed equal to unity. This indicates that the vacuum indeed exhibits a ‘condensate’ of thick vortices.
## 8 Acknowledgments
We are grateful to the organizers for their invitation and warm hospitality in Dubna, and for conducting such a successful, interactive workshop. We thank the participants for many discussions. The work of T.G.T. was supported by FOM, and of E.T.T. by NSF-PHY 9819686. |
no-problem/9912/hep-ex9912009.html | ar5iv | text | # 1 Jets
## 1 Jets
### 1.1 Inclusive Jet Cross Sections at $`\sqrt{s}=1.8`$ TeV
So much has been said about the high-$`E_T`$ behavior of the inclusive jet cross section at the Tevatron that it is difficult to know what can usefully be added (see Fig. 1). The measured central inclusive jet cross sections, from CDF and DO, compared with the NLO theory, are shown in Fig. 3 (note that the CDF figure does not include systematic errors). The impression one gets is that there is a marked excess above QCD in the CDF data, which is not observed at DO.
In order to compare with CDF, DO carried out an analysis in exactly the same rapidity interval ($`0.1<|\eta |<0.7`$). The results are shown in Fig.3. Firstly we note that there is no actual discrepancy between the datasets. Secondly, for this plot the theoretical prediction was made using the CTEQ4HJ parton distribution, which has been adjusted to give an increased gluon density at large $`x`$ while not violating any experimental constraints (except perhaps fixed target photon production data, which in any case require big corrections before they can be compared to QCD, as we shall see later). The result of this increased gluon content is improved agreement especially with the CDF data points.
What then have we learned from this issue? In my opinion, whether the CDF data show a real excess above QCD, or just a “visual excess,” depends critically on understanding the systematic errors and their correlations as a function of $`E_T`$. Whether nature has actually exploited the freedom to enhance gluon distributions at large $`x`$ will only be clear with the addition of more data — the factor of 20 increase in luminosity in Run II will extend the reach significantly in $`E_T`$ and should make the asymptotic behavior clearer. Whatever the Run II data show, this has been a useful lesson; it has reminded us all that parton distributions have uncertainties, whether made explicit or not, and that a full understanding of experimental systematics and their correlations is needed to understand whether experiments and theory agree or disagree.
DO have extended their measurement of inclusive jet cross sections into the forward region. Figure 4 shows the measured cross sections up to $`|\eta |=3`$. They are in good agreement with NLO QCD over the whole range of pseudorapidity and transverse energy.
### 1.2 Dijet Production
Both Tevatron experiments have also studied dijet final states. CDF has presented cross sections for processes with one central jet ($`0.1<|\eta _1|<0.7`$) and one jet allowed forward ($`|\eta _2|`$ up to 3.0). In Fig. 5 these are compared with the NLO QCD prediction ast a function of the central jet’s transverse energy ($`E_{T1}`$). The data show an excess above the theory for large $`E_{T1}`$, just as seen in the inclusive cross section; but since these events are common to both samples, this is not surprising.
DO have measured the cross sections for dijet production with both same-side ($`\eta _1\eta _2`$) and opposite-side ($`\eta _1\eta _2`$) topologies, for four bins of $`|\eta |`$ up to 2.0. The results are all in good agreement with the NLO QCD prediction, as seen in Fig. 6.
### 1.3 Jet Cross Sections at $`\sqrt{s}=630`$ GeV
Both CDF and DO have measured the ratio of jet cross sections at $`\sqrt{s}=1800`$ and 630 GeV, exploiting a short period of data taking at the latter center of mass energy at the end of Run I. This ratio is expected to be a rather straightforward quantity to measure and to calculate. The ratio as a function of scaled jet transverse energy $`x_T=2E_T/\sqrt{s}`$ is shown in Fig.8. Unfortunately the two experiments are not obviously consistent with each other (especially at low $`x_T`$) or with NLO QCD (at any $`x_T`$). At least two explanations have been suggested for the discrepancy. Firstly, different renormalization scales could be used for the theoretical calculations at the two energies. While allowed, this seems unappealing. Glover has suggested that such a procedure is in fact natural when a scaling variable like $`x_T`$ is used; because $`x_T`$ differs by a factor of about three between the two center of mass energies for a given $`E_T`$, a factor of three difference in the renormalization scales is appropriate. An alternative explanation is offered by Mangano, who notes that a shift in jet energies by of order 3 GeV which might arise from non-perturbative effects would bring the data in line with the prediction. Such effects might include losses outside the jet cone, underlying event energy, and intrinsic transverse momentum of the incoming partons; one might be under or over-correcting the data and the two experiments might even obtain different results depending on how the corrections were done (based on data or Monte Carlo, for example). It seems that more work, both theoretical and exerimental, is needed before this question can be resolved.
### 1.4 BFKL
DO have used the 630 GeV data to make a measurement which a cynic might perhaps describe as “yet another attempt to find an observable which displays BFKL behaviour.” The ratio of cross sections at 1800 and 630 GeV is measured for dijets with large rapidity separations, using bins such that $`x`$ and $`Q^2`$ are the same in the two datasets (and therefore the parton distributions cancel, to first order). Figure 8 shows this ratio as a function of the rapidity separation $`\mathrm{\Delta }\eta `$ at 630 GeV. The ratio is much larger than unity, and rises with rapidity separation, qualitatively like the BFKL expectation — but also like HERWIG. Maybe we are indeed seeing BFKL dynamics, but given that we apparently can’t predict the ratio of inclusive cross sections between the two energies, I would caution against inferring too much.
### 1.5 Jet Structure at the Tevatron
All the results presented so far have used a cone jet finder. By running a $`k_T`$ jet finder inside previously identified jets, one can count the number of “subjets” or energy clusters. This allows the perturbative part of fragmentation to be studied. DO have made such a measurement and, by comparing jets of the same $`E_T`$ and $`\eta `$ at $`\sqrt{s}=1800`$ and 630 GeV, have inferred the composition of quark and gluon jets. The extracted subjet multiplicity $`M`$ for the two species is shown in Fig.9. The ratio of $`M1`$ for the two cases, which might naively be expected to equal the ratio of gluon and quark colour charges, is found to be $`1.91\pm 0.04`$, compared with $`1.86\pm 0.04`$ from HERWIG.
These last three measurements all relied on a very short period of Tevatron running at 630 GeV, which was surprisingly productive. Though it is politically difficult to get reduced energy running time when one is searching for new physics, I suspect we may want to do something like this once again in the next run.
### 1.6 Jet Production at HERA
Many studies of jet production have also been carried out at HERA, by H1 and ZEUS. Inclusive jet, dijet and three-jet cross sections have been measured as a function of jet $`E_T`$, pseudorapidity and $`Q^2`$, typically using a $`k_T`$ jet finder. The results are in good agreement with NLO (where available) or LO QCD. ZEUS have also reported a study of subjets similar to that described above; they find that the subjet multiplicity rises as the jet becomes more forward, which is consistent with the expectation that more gluon jets would be produced in this region of phase space (and also consistent with the HERWIG Monte Carlo).
## 2 Direct Photon Production
Historically, many authors hoped that measurements of direct (or prompt) photons would provide a clean test of QCD, free from the systematic errors associated with jets, and would help pin down parton distributions. In fact photons have not lived up to this promise — instead they revealed that there may be unaccounted-for effects in QCD cross sections at low $`E_T`$. (Because photons can typically be measured at lower energies than jets, they provide a way of exploring the low-$`E_T`$ regime). Results from the Tevatron experiments are shown in Fig. 11. While the general agreement with the NLO calculation of Owens and collaborators is good, there is a definite tendency for the data to rise above the theory at low energies.
An often-invoked explanation for this effect is that there is extra transverse momentum smearing of the partonic system due to soft gluon radiation. The magnitude of the smearing, or “$`k_T`$”, is typically a few GeV (at the Tevatron), motivated in part by the experimentally measured $`p_T`$ of the $`\gamma \gamma `$ system in diphoton production which peaks around 3 GeV. Inclusion of such $`k_T`$ as a Gaussian smearing in the calculation gives much better agreement with the data, as shown in Fig. 11.
One should note that a shape a bit closer to that observed in the data can be obtained in other ways. Vogelsang et al. use a NLO treatment of fragmentation and allow the renormalization and factorization scales to differ, yielding the curves shown in Fig.12 without any $`k_T`$.
Much larger deviations from QCD are observed in fixed-target experiments such as E706 at Fermilab. Again, Gaussian smearing (with $`k_T1.2`$ GeV in this case) can account for the data, as shown in Fig. 14. A rather different view is expressed by Aurenche and collaborators, who find their calculations, sans $`k_T`$, to be consistent with all data with the sole exception of E706 (see Fig. 14). They say “it does not appear very instructive to hide this problem by introducing an arbitrary parameter fitted to the data at each energy.” Indeed, I believe that Gaussian smearing has told us pretty much all that it can. Its predictivce power is small: what happens to forward photons, for example? The “right way” to treat soft gluon emission should be through a resummation calculation which works nicely for $`\gamma \gamma `$ and $`W/Z`$ transverse momentum distributions. Unfortunately, this approach does not seem to model the E706 data, which still lie a factor of two or more above the resummed QCD calculations (Fig. 15). I do not know if there are other terms that can be resummed, of whether this should be taken as an implication that the whole idea of soft gluons being responsible for this discrepancy is mistaken.
As an interesting aside, we may note that the ZEUS measurement of prompt photon production at HERA, which covers a similar range of transverse momenta to E706, agrees well with NLO QCD without any need for $`k_T`$ (Fig. 16). It is true that the agreement is only good if the “resolved” photon contribution to the DIS process is included, and this does form a kind of resummation. It has also been pointed out that the enhancements from $`k_T`$ would be smaller here than for E706 as the cross section is less steeply falling, with only a 20% enhancement expected at $`E_T=5`$ GeV.
In summary, direct photon production has proved extremely interesting and remains quite controversial. The appropriateness of a Gaussian $`k_T`$ treatment is still hotly debated, the experiments may not all be consistent, and resummation has not proved to be the answer.
## 3 Weak Bosons
### 3.1 $`Z`$ Transverse Momentum
DO has new results on the transverse momentum distribution of the $`Z`$ boson. Figure 17 shows the data compared with a variety of QCD predictions. Clearly the fixed-order NLO QCD is not a good match for the data, while the resummed formalism of Ladinsky and Yuan fits rather well. On the other hand the resummed calculations of Davies, Webber and Stirling, and of Ellis and Veseli, do not offer quite as good a description of the data.
### 3.2 $`W+`$jets
DO used to show a cross section ratio $`(W+1\mathrm{j}\mathrm{e}\mathrm{t})/(W+0\mathrm{j}\mathrm{e}\mathrm{t})`$ which was badly in disagreement with QCD. This is no longer shown: the data were basically correct, but there was a bug in the way DO extracted rhe ratio from the DYRAD theory calculation.
Recent CDF measurements of the $`W+`$jets cross sections agree well with QCD, as shown in Fig. 18.
## 4 Heavy Flavour Production
At the Tevatron, the measured inclusive $`b`$ and $`B`$meson production cross sections continue to lie a factor of about two above the NLO QCD expectation. This is seen by both CDF and DO in the central and forward regions (the difference is perhaps even larger for forward $`b`$ production, as seen in Fig. 19). On the other hand, QCD does a good job of predicting the shape of inclusive distributions, and of the correlations between $`b`$ quark pairs, so it seems unlikely that any exotic new production mechanism is responsible for the higher than expected cross section.
There are now results on heavy quark production at LEP 2. In particular, L3 has reported the first observation of $`b`$ production in $`\gamma \gamma `$ collisions ($`e^+e^{}e^+e^{}b\overline{b}`$). As shown in Fig. 20 the cross section is, once again, 2–3 times the QCD expectation. The picture is rather similar at HERA. H1 and ZEUS have reported $`b`$ production cross sections in $`\gamma p`$ collisions, using leptons to tag $`b`$-jets. As shown in Fig. 21 the cross section is at least a factor of two above the QCD prediction. We therefore have a picture of consistent experimental results which are unfortunately all inconsistent with theory!
If the heavy flavour is heavy enough, QCD seems to work rather better. The current state of measured and predicted top cross sections is summarised in Table 1. This includes the latest (revised downward) CDF measurement. There is an excellent agreement between data and theory.
## 5 Measurements of $`\alpha _s`$
The strong coupling $`\alpha _s`$ is a fundamental parameter of QCD. Its value cannot be calculated, but must be determined experimentally. A number of new measurements have been reported recently.
Deep inelastic scattering data is now being interpreted in an NNLO framework. Santiago and Ynduráin report an extraction of $`\alpha _s`$ from $`F_2`$ measured at SLAC, BCDMS, E665 and HERA; they obtain $`\alpha _s(m_Z)=0.1163\pm 0.0023`$. Kataev, Parente and Sidorov extracted $`\alpha _s`$ from $`xF_3`$ measured at CCFR and obtain $`\alpha _s(m_Z)=0.118\pm 0.006`$.
The LEP electroweak working group has reported new values from LEP 1/SLD $`Z`$ pole data. The full Standard Model fit yields a value of $`\alpha _s(m_Z)=0.119\pm 0.003`$ while the ratio of the $`Z`$ partial widths to hadrons and leptons gives $`\alpha _s(m_Z)=0.119\pm 0.004_0^{+0.003}(m_H)`$. The LEP experiments have all extracted $`\alpha _s`$ from event shapes, charged particle and jet multiplicities at $`\sqrt{s}=130196`$ GeV. Monte Carlo programs are used to model non-perturbative effects. Both L3 and OPAL have nicely demonstrated the running of $`\alpha _s`$; in the case of L3, by using radiative events to access a lower $`\sqrt{s}`$, and in OPAL’s case by combining their data with that of JADE. Typical uncertainties on $`\alpha _s(m_Z)`$ from the LEP2 data are around $`\pm 0.006`$.
At HERA, $`\alpha _s`$ has been extracted from the inclusive jet rate and the dijet rate (H1) and the dijet fraction (ZEUS). The values of $`\alpha _s(m_Z)`$ obtained are consistent with the world average, with uncertainties of around $`\pm 0.0050.008`$.
S. Bethke has kindly provided me with an updated world average value of $`\alpha _s(m_Z)`$ for this meeting. It is based on 25 measurements listed in Fig. 22:
$$\alpha _s(m_Z)=0.119\pm 0.004.$$
If one uses only complete NNLO QCD results (the filled symbols in Fig.22) one obtains:
$$\alpha _s(m_Z)=0.119\pm 0.003.$$
We note excellent consistency bwteeen low and high energy data, and between deep inelastic scattering, electron-positron and hadron collider data. Changes from the previous world average are minimal.
### 5.1 Consistency Tests
At this point we know the value of $`\alpha _s`$ rather well — it is hard to point to a prediction which is limited by its precision. Hence some of the more interesting measurements are really tests of self-consistency and of our understanding of QCD, rather than determinations of this parameter.
An example is the use of power corrections for the non-perturbative corrections to event shape variables, as described in Bryan Webber’s contribution to these proceedings. I would place DELPHI’s extraction of $`\alpha _s`$ from oriented event shape variables in the same category. Fits at $`m_Z`$ yield very precise values of $`\alpha _s`$: $`0.1180\pm 0.0018`$ from the jet cone energy fraction, for example. However, the fitting procedure relies on “optimization” of the renormalization scale for each variable through a simultaneous fit of $`\alpha _s`$ and $`x_\mu =\mu ^2/Q^2`$. For 18 jet shape variables the resulting scales range from $`\mu ^2=(0.003Q)^2`$ to $`(7Q)^2`$, a much larger range than one is comfortable with. In fact the whole procedure has been called theoretically unjustified. Nonetheless the consistency of the results is certainly interesting: if $`x_\mu `$ is fixed to 1, the spread in the extracted $`\alpha _s`$ values is much larger. I suspect that this is telling us something about QCD (though maybe not, or not just, the value of $`\alpha _s`$).
Another measurement of this type is the extraction of $`\alpha _s`$ from the CDF inclusive jet cross section. The value quoted, $`\alpha _s(m_Z)=0.113_{0.009}^{0.008}`$, is consistent with the world average, and $`\alpha _s`$ shows a nice evolution with scale (given by the jet transverse energy), as shown in Fig. 23. However the figure also shows that the measurement suffers from a large, and hard to quantify, sensitivity on the parton distributions, especially on the value of $`\alpha _s`$ assumed therein. At this time I think it must be characterized as a nice test of QCD and not a measurement of $`\alpha _s`$.
## 6 Some Final Remarks
In closing I will take the opportunity to outline some areas where I think progress would be welcome: “What I would like for Christmas”.
Firstly parton distributions with quoted uncertainties, or at least with a technique for the propagation of uncertainties as outlined by Giele and collaborators. This would spare us from future unnecessary excitement over things like the high-$`E_T`$ jet “excess.”
Secondly I would like to see a theoretical and experimental effort to understand the underlying event in hadronic collisions. It is an inconsistent treatment of the event to subtract it out of the jet energies, as is usually done. The ratio of 1800 to 630 GeV jet cross sections may indicate problems with this approach. And such understanding would also enable a consistent treatment of double parton scattering which may be very important at the LHC.
Thirdly we need progress on jet algorithms for hadron colliders. Indeed, there is a lot of work going on in workshops at Fermilab and Les Houches. There are various species of $`k_T`$ algorithm to be compared, and the question remains whether the cone algorithm can be made theoretically acceptable. Theoertical requirements centre on the need for infrared and collinear safety, and the avoidance of ad hoc parameters; experimentalists worry about sensitivity to noise, pileup and negative energies. These can have potentially large effects on jet measurements especially at low energies, as shown in Fig. 24.
Fourthly there is one QCD process that we have completely failed to describe so far. Figure 25 shows a CDF event with a track in the Roman Pot detectors. It is jet production, a perturbative process which I have claimed is well-modelled by NLO QCD. Except for one detail: in a substantial fraction (a few percent?) of such events, one of the protons doesn’t break up. Whether we call this pomeron exchange, or think of it in terms of parton correlations inside the proton, it doesn’t form part of a consistent description of hard hadronic interactions. My hunch is that we won’t get too far in understanding processes like this as long as we think of them as being somehow alien to perturbative QCD.
## 7 Conclusions
In conclusion, testing QCD typically means testing our ability to calculate within QCD. In fact our calculational tools are working quite well, especially at moderate to high energy scales. The state of the art is NNLO calculations, and NLL resummations. However, interesting things (challenges!) start happening as we reduce the energy scale to of order 5 GeV. We have problems calculating $`b`$ quark cross sections; there are problems with low-$`p_T`$ direct photon production ($`k_T`$?); and perhaps indications of effects of a few GeV in jet energies. In addition, there are other challenges for the future: identification of appropriate jet algorithms, understanding of the underlying event in hadron-hadron collisions, understanding parton distribution uncertainties, and obtaining a consistent picture of hard diffraction processes. With two new hadron collider facilities coming on line in the next decade, we can be assured of a vibrant future for perturbative QCD and jet physics.
Discussion
Alex Firestone (NSF): Regarding the isolated inclusive photon distributions, how plausible is it that at least some of the problem is due to signal extraction above background? Different experiments have different calorimeter resolution and different criteria for photon isolation. Could this be the source of the inconsistencies?
Womersley: Having carried out these analyses myself at the Tevatron, I certianly understand that there are large systematic errors at low $`E_T`$ because of the big jet backgrounds. There are also problems in computing the effects of the isolation requirements on the theoretical predictions. But the difficulty pointed out by Aurenche et al. is that the E706 and the ISR data do not agree. I do not really want to speculate on why that might be so.
Jonathan Butterworth (University College London): It’s true that the photon structure function is a way to resum partonic contributions. But the partons from the photon are collinear, so the $`k_T`$ effect is not included in this calculation.
Paul Söding (DESY): One of your wishes for Christmas has already become true; there is an analysis by Michael Botje which provides PDF’s with uncertainties. The paper is available.
Michel Davier (LAL, Orsay): I think that the $`\alpha _s`$ average you quoted from Siggi Bethke has a very conservative uncertainty (0.003). The 2 most precise determinations (from $`\tau `$ decays and the $`Z`$ width) have uncertainties already at this level and the jet rate at LEP yields results approaching this accuracy. These determinations have different systematics. So I would rather think that the combined uncertainty is 0.002 at most.
Siggi Bethke (Aachen and MPI Munich): Most of the individual errors quoted are lower limits, because there is some freedom to adjust the theoretical uncertainties. The new world average that was quoted relies, for the first time, on NNLO calculations alone, and also assumes some degree of correlation between individual errors. The scatter of the central values of the results is also $`\pm `$ 0.003. |
no-problem/9912/astro-ph9912106.html | ar5iv | text | # ON THE ORIGIN OF THE MeV GAMMA-RAY BACKGROUND
## 1 Introduction
There has been much recent progress over the last 10 years in understanding the origins of the high energy cosmic background radiation. It now seems almost certain that the bulk of the hard X–ray background from 2–200 keV is made from obscured radio–quiet AGN (Madau, Ghisellini & Fabian 1994; Comastri et al. 1995; Gilli, Risalti & Salvati 1999), while at high energies (30 MeV – 100 GeV) the beamed radio–loud AGN (blazars) dominate (Stecker & Salamon 1996; Zdziarski 1996; Sreekumar et al. 1998). However, between the radio–quiet AGN rollover at $`100`$ keV, and the low energy break in the spectrum of radio–loud AGN at $`10`$ MeV, there is a substantial background component detected by COMPTEL and SMM in the 200 keV – 3 MeV range which is not accounted for in these models. Some of this is produced by type Ia supernovae (Zdziarski 1996), but the latest calculations show that there is still a marked discrepancy (by about a factor 2) between the observed background and current best estimates of the supernovae contribution (Watanabe et al 1999). It also appears that blazars will not account for the MeV background because their spectra generally have a break at an energy of about 10 MeV (McNaron-Brown et al. 1995). The small population of possible “MeV blazars” will also not account for the MeV background. In fact, even if we accept that unresolved blazars account for the extragalactic background radiation at energies above 30 MeV (Stecker & Salamon 1996), essentially all of these blazars would have to be “MeV blazars” as well in order to account for the background flux level at MeV energies, contrary to the observational evidence (McNaron-Brown et al. 1995).
In this paper we propose a new hypothesis to account for the extragalactic background in the MeV region as derived from two independent analyses of the COMPTEL data from the Compton Gamma Ray Observatory satellite (Kappadath, et al. 1996; Sreekumar, Stecker and Kappadath 1997; Weidenspointner 1999). Our hypothesis is based on an analogy between the galactic black hole candidate Cyg X–1 and active galactic nuclei (AGN), which are generally believed to be powered by supermassive black holes. We use this analogy to extend the observation of a nonthermal MeV “tail” in the Cyg X–1 spectrum to hypothesize that such nonthermal tails exist in extragalactic AGN spectra, even though past and present gamma-ray detectors could not observe such tails at the flux levels expected. We will then argue that a superposition of unresolved AGN with Cyg X–1 type spectra, such as has been shown to reasonably account for the X-ray background (e.g. Gilli, Risaliti, and Salvati 1999) can also account for the shape and flux level of the MeV background deduced from the COMPTEL data.
In this regard, it is relevant to note very recent obervations of Seyfert galaxies with flat spectrum radio nuclei using the VLBA have shown that these sources are emitting non-thermal radiation from central core regions with sizes $``$ 0.05 to 0.2 pc (Mundell, Wilson, Ulvestad & Roy 1999). Such cores may also be the source of non-thermal MeV emission.
## 2 The Cyg X–1 Epitome
Cyg X–1 in its low/hard state has a spectrum dominated by a power law component which rolls over at $`200`$ keV. This is well fit by a model involving a thermal population of hot electrons which Compton upscatter soft seed photons from the accretion disk (e.g. Gierlinski et al 1997). However, recent COMPTEL observations show a small hard tail of emission, extending out to MeV energies (McConnell et al. 1997). An explanation that has been suggested to account for this hard tail would be that the electron distribution is not completely thermalized (Poutanen & Coppi 1998). This is physically reasonable since the thermalization timescales for the electrons can be rather slower than the other timescales in these systems (e.g. Coppi 1999). The overall 2 keV – 5 MeV spectrum of Cyg X–1 can then be modelled if $`90`$ per cent of the power goes into a $`100`$ keV thermal electron distribution, while the remaining $`10`$ per cent is in the form of a non–thermal tail (Poutanen & Coppi 1998).
It is well known that the low/hard state spectrum of Cyg X–1 and other galactic black hole candidates bear a remarkable similarity to that from radio–quiet AGN (see e.g. the review by Poutanen 1998), plausibly because both involve the same physical processes of disk accretion onto a black hole. Thus we expect a similar hard tail to be present in Seyfert galaxies (both type 1 and type 2). In Cyg X–1 this tail begins roughly an order of magnitude below the peak in the hard X–ray spectrum. Such a tail could not be detected in an individual AGN using current instrumentation, but we will show that a superposition of such tails in the spectra of AGN would account for the reported MeV background spectrum and flux.
## 3 A Further Component to the MeV background from Seyferts ?
Galactic black hole candidate sources are known to make spectral transitions to a high/soft state at accretion rates of greater than 10 per cent of the Eddington mass accretion rate. In this state the spectrum is dominated by a thermal component at $`1`$ keV (presumably corresponding to emission from the accretion disk), but also shows a steep non–thermal hard X–ray tail which extends past 511 keV (Grove et al 1998; Gierlinski et al 1999). It is not yet known where this power law breaks (Grove et al 1998), but it is plausible that this also extends to MeV energies, as seems to be indicated by COMPTEL data from Cyg X–1 (Poutanen & Coppi 1998; Gierlinski et al 1999). There is a class of Seyfert galaxies, the Narrow Line Seyfert 1’s Osterbrock & Pogge 1985; Boroson & Green 1992), which are thought to be the AGN analogue of these high mass accretion rate systems (Pounds, Done & Osborne 1995). These comprise about 10 per cent of Seyferts (Boroson & Green 1992), so would also contribute to an extragalactic MeV background if these truely are comparable to the soft/high state galactic black holes in having a steep unbroken power law spectrum extending beyond 511 keV.
## 4 An Illustrative Spectrum
We will now estimate contributions that AGN power-law MeV tails would make to the X-ray/MeV background were these to be universal components in AGN spectra. Because our main concern is with the MeV component of the diffuse background, we restrict our calculation of the extragalactic background spectrum to energies in the range 100 keV - 10 MeV. Our calculation of the X-ray background (XRB) follows that of Pompilio, La Franca & Matt (1999) (PLM), to which we refer the reader for details. Briefly, the XRB is assumed to be comprised of the summed emission of unresolved AGN, which fall into two types: AGN1 and AGN2. In the standard unification scheme, the spectral differences between the two are due to the orientation of the AGN molecular torus relative to our line of sight. For AGN1, there is no obscuration of the nucleus by the torus along our line of sight, while for AGN2 the nuclear spectrum is both attenutated and altered by photoelectric absorption and Compton scattering within the torus.
Following Comastri et al. (1995), PLM adopt the following for the AGN1 source luminosity $`l(E)`$ in units of keV s<sup>-1</sup>keV<sup>-1</sup>
$$l(E)\{\begin{array}{cc}E^{1.3}\hfill & E<1.5\hfill \\ E^{0.9}e^{E/400}+r(E)\hfill & E>1.5\hfill \end{array},$$
(1)
where $`E`$ is in keV, and $`r(E)`$ is a Compton reflection component of nuclear emission off the surrounding gas and dust. The normalization coefficient is determined by requiring $`l(E)𝑑E=L`$, where $`L`$ is the AGN luminosity in the energy band 0.3-3.5 keV at the source. To this we have added a non-thermal power-law component with a soft cutoff at energies below the XRB peak of $`30`$ keV whose amplitude and spectral index are variable parameters. We neglect the reflection component, which is essentially compensated for in our energy region by normalizing the AGN1 spectrum to the AGN luminosity $`L`$. Integration of
$$l(E)=\kappa L\{\begin{array}{cc}(E/1.5)^{1.3}\hfill & E<1.5\hfill \\ (E/1.5)^{0.9}e^{E/400}+\eta (E/1.5)^\alpha 𝒞(E)\hfill & E>1.5\hfill \end{array}$$
(2)
over $`0.3<E<3.5`$ keV determines the normalization constant $`\kappa `$. The power-law tail is assumed to be cut off below 30 keV by an arbitrary cutoff function $`𝒞(E)`$.
The spectra of AGN2 are modifed by the intervening molecular tori. The ratio $`R(z)`$ of AGN2 to AGN1 sources is taken from PLM, and is approximately 5 at redshift $`z=0`$. The distribution of torus thicknesses through which the AGN2 emissions pass is taken from Risaliti, Maiolino & Salvati (1999) (their Table 3). Above 100 keV photoelectric absorption is negligible compared to Compton scattering (Morrison & McCammon, 1983), so we consider only the latter in calculating the mean transmission coefficient $`T(E)`$ for AGN2 spectra.
The AGN1 X-ray luminosity function (XLF) $`\mathrm{\Phi }(L,z)`$, following PLM, is taken to be separable, $`\mathrm{\Phi }(L,z)=\mathrm{\Phi }_0(L)f(z)`$, where $`f(z)`$ is the evolution factor, and $`\mathrm{\Phi }_0(L)`$ is the current XLF. Integrating over the XLF gives the intensity of the diffuse background $`I(E)`$ in units of keV cm<sup>-2</sup>s<sup>-1</sup> sr<sup>-1</sup>keV<sup>-1</sup>
$$I(E)=\frac{c}{4\pi H_0}_0^{z_{\mathrm{max}}}𝑑z\frac{f(z)l\left[E(1+z)\right]\left\{1+R(z)T\left[E(1+z)\right]\right\}}{(1+z)^2(1+2q_0z)^{1/2}}_{L_{\mathrm{min}}}^{L_{\mathrm{max}}}𝑑LL\mathrm{\Phi }_0(L).$$
(3)
Figure 1 shows the results of this calculation with two power-law tail spectral indices, $`\alpha =`$1.2 and 1.4. The amplitude $`\eta `$ for these two cases is chosen to best fit the set of diffuse background measurements; in both cases this results in the power-law tail component being roughly an order of magnitude below the 30 keV peak value of the XRB. This is consistent (within the considerable uncertainties) with the amplitude of the MeV tail of Cyg X–1 relative to its hard-state peak.
## 5 Conclusion
We have examined a new hypothesis for explaining the origin of the extragalactic background radiation at MeV energies. Based on data from the galactic black hole candidate Cygnus X-1, and assuming that radio quiet active galactic nuclei, i.e. the Seyfert galaxies, contain much more massive black holes at their cores, we assume that all such black hole sources exhibit a high energy tail of the same magnitude relative to the thermal emission as Cygnus X-1. We show that by making this assumption, we can account for the flux and spectrum of the extragalactic MeV background as a superposition of emission from Seyfert AGN.
## 6 Acknowledgements
We thank P. Sreekumar for use of his compilation of diffuse background data (Sreekumar et al., 1998). |
no-problem/9912/cond-mat9912168.html | ar5iv | text | # Interface effects on the shot noise in normal metal- 𝑑 -wave superconductor Junctions
## Abstract
The current fluctuations in normal metal / $`d`$-wave superconductor junctions are studied for various orientation of the crystal by taking account of the spatial variation of the pair potentials. Not only the zero-energy Andreev bound states (ZES) but also the non-zero energy Andreev bound states influence on the properties of differential shot noise. At the tunneling limit, the noise power to current ratio at zero voltage becomes 0, once the ZES are formed at the interface. Under the presence of a subdominant $`s`$-wave component at the interface which breaks time-reversal symmetry, the ratio becomes $`4e`$.
preprint: Shot noise, Dec. 1999
The origin of shot noise is the current fluctuations in transport due to the discreteness of the charge carrier. Shot noise measurements provide important information on conduction processes which can not be obtained from the usual conductance measurements . In the last few years, several novel features peculiar to shot noise in mesoscopic systems have been revealed. In particular, the shot noise in normal metal-superconducting junction and superconductor / insulator / superconductor junctions have been intensively studied. It has been shown, through these works, that the Andreev reflection and the charge transport by the Cooper pairs have significant influence on the transport fluctuation at low voltages. However, most of previous theories are constructed on the conventional $`s`$-wave superconductors and the theory for $`d`$-wave superconductors has not been presented.
On the other hand, extensive experimental and theoretical investigations have revealed that the pair potentials of high-$`T_\mathrm{c}`$ superconductors are $`d_{x^2y^2}`$-wave symmetry. One of the essential differences of $`d_{x^2y^2}`$-wave superconductors from conventional $`s`$-wave superconductors is that the phase of the pair potential strongly depends on the wave vector. For example, the appearance of zero-bias conductance peak (ZBCP) in tunneling conductance at a (110) surface of $`d_{x^2y^2}`$-wave superconductors reflects the sign change of effective pair potential through the reflection of quasiparticle at the surface. Orientational and material dependences of ZBCP of high-$`T_c`$ superconductors have been experimentally studied in several groups, and the consistency between theory and experiments has been checked in details .
At this stage, it is an interesting problem to clarify what is expected in the shot noise in normal metal / insulator /$`d_{x^2y^2}`$-wave superconductor $`(n/I/d)`$ junction under the presence of the ZBCP. Recently, Zhu and Ting presented a theory of the shot noise in $`n/I/d`$ junction . They found a remarkable feature for $`n/I/d`$ junction : when the angle between the normal to the interface $`\alpha `$ is $`\pm \pi /4`$, the noise-to-current ratio is zero at zero-bias voltage and quickly reaches a classical Shottky value $`2e`$ at finite voltage. This feature is completely discrepant from that for conventional $`s`$-wave superconductor ($`n/I/s`$) junctions where the ratio is $`4e`$ at zero voltage and $`2e`$ at finite voltage. This anomalous behavior is responsible for the formation of the zero energy Andreev bound states (ZES) at the interface of the $`d`$-wave superconductor. Although Zhu and Ting theory (ZT theory) clarified important aspects of the shot noise under the presence of ZES, there still remain several unresolved problems.
One is the detailed orientational dependence of the shot noise. In ZT theory, the vanishment of the noise-to-current ratio $`R(eV)`$ at zero voltage $`(eV=0)`$ is shown only for $`\alpha =\pi /4`$ ($`0\alpha \pi /4`$). Since this property is related to the existence of ZBCP in tunneling conductance, we must check the value of $`R(0)`$ for $`0<\alpha <\pi /4`$ where ZBCP appears in the tunneling conductance. In this paper, we will show that in the tunneling limit, $`i.e.`$, low transparency limit, $`R(0)`$ vanishes for $`\alpha 0`$ and becomes $`4e`$ only for $`\alpha =0`$ where no ZES are expected. Moreover, it is shown that $`R(0)`$ is classified into three values, $`0`$, $`2e`$ and $`4e`$, corresponding to the region of the Fermi surface contributing to the ZES.
The other is the influence of the spatial dependence of the pair potential on the shot noise. It is known that when the ZES are formed at the interface of $`d`$-wave superconductor the pair potential is suppressed near the interface . Consequently, not only the ZES but also the non-zero energy Andreev bound states (NZES) are formed . The influences of the NZES on the shot noise are clarified. We further study the situation where a subdominant $`s`$-wave component which breaks the time reversal symmetry is induced near the interface of $`d`$-wave superconductor . The subdominant $`s`$-wave component influences significantly on $`R(eV)`$.
The model examined here is a two-dimensional $`n/I/d`$ junction within the quasiclassical formalism where the pair potential has a spatial dependence
$$\overline{\mathrm{\Delta }}(x,\theta )=\{\begin{array}{cc}0,& (x0)\\ \overline{\mathrm{\Delta }}_R(x,\theta ),& (x0)\end{array}$$
(1)
Here $`\theta `$ is the angle of quasiparticle trajectory measured from the $`x`$ axis. If we apply this formula to $`d`$-wave superconductors including a subdominant $`s`$-wave component near the interface, $`\overline{\mathrm{\Delta }}_R(x,\theta )`$ is decomposed into
$$\overline{\mathrm{\Delta }}_R(x,\theta )=\mathrm{\Delta }_d(x)\mathrm{cos}[2(\theta \alpha )]+\mathrm{\Delta }_s(x)$$
(2)
where $`\alpha `$ denotes the angle between the normal to the interface and the $`x`$ axis of the crystal. The insulator located between the normal metal and the superconductor is modeled by a $`\delta `$ function. The magnitude of the $`\delta `$-function denoted as $`H`$ determines the transparency of the junction $`\sigma _N`$, with $`\sigma _N=\mathrm{cos}^2\theta /[Z^2+\mathrm{cos}^2\theta ]`$ and $`Z=mH/\mathrm{}^2k_F`$. The effective mass $`m`$ and Fermi momentum $`k_F`$ are assumed to be constant throughout the junction. The noise power to current ratio $`R(eV)`$, the differential shot noise $`S_T(eV)`$, and the tunneling conductance $`\sigma _S(eV)`$, are given by
$$R(eV)=\frac{_0^{eV}𝑑ES_T(E)}{_0^{eV}𝑑E\sigma _S(E)}$$
(3)
$$S_T(eV)=\frac{1}{2}_{\pi /2}^{\pi /2}𝑑\theta \overline{S}(eV,\theta )\mathrm{cos}\theta ,\sigma _S(eV)=\frac{1}{2}_{\pi /2}^{\pi /2}𝑑\theta \overline{\sigma }_S(eV,\theta )\mathrm{cos}\theta $$
(4)
$$\overline{S}(eV,\theta )=\frac{4e^3}{h}[R_a(1R_a)+R_b(1R_b)+2R_aR_b]$$
(5)
$$\overline{\sigma }_S(E,\theta )=\frac{2e^2}{h}(1+R_aR_b)$$
(6)
The magnitude of the Andreev and normal reflection $`R_a`$ and $`R_b`$ are given by
$$R_a=\frac{\sigma _N^2\left|\eta _{R,+}(0,\theta )\right|^2}{\left|1+(\sigma _N1)\eta _{R,+}(0,\theta )\eta _{R,}(0,\theta )\right|^2},$$
(7)
and
$$R_b=\frac{(1\sigma _N)\left|1\eta _{R,+}(0,\theta )\eta _{R,}(0,\theta )\right|^2}{\left|1+(\sigma _N1)\eta _{R,+}(0,\theta )\eta _{R,}(0,\theta )\right|^2}.$$
(8)
As a reference, we also calculate the tunneling conductance normalized by that in the normal state,
$$\sigma _T(eV)=\frac{_0^{eV}𝑑E\sigma _S(E)}{\frac{1}{2}_{\pi /2}^{\pi /2}𝑑\theta \sigma _N\mathrm{cos}\theta }.$$
(9)
Note that the differential shot noise and conductance spectrum are expressed only by $`\eta _{R,\pm }(x,\theta )`$ just at the boundary $`(x=0)`$ where $`\eta _{R,\pm }(x,\theta )`$ obeys the following equations,
$$\frac{d}{dx}\eta _{R,+}(x,\theta )=\frac{1}{i\mathrm{}v_F\mathrm{cos}\theta }\left[\overline{\mathrm{\Delta }}_R(x,\theta _+)\eta _{R,+}^2(x,\theta )\overline{\mathrm{\Delta }}_R^{}(x,\theta _+)+2E\eta _{R,+}(x,\theta )\right],$$
(10)
$$\frac{d}{dx}\eta _{R,}(x,\theta )=\frac{1}{i\mathrm{}v_F\mathrm{cos}\theta }\left[\overline{\mathrm{\Delta }}_R^{}(x,\theta _{})\eta _{R,}^2(x,\theta )\overline{\mathrm{\Delta }}_R(x,\theta _{})+2E\eta _{R,}(x,\theta )\right],$$
(11)
with $`v_F=k_F/m`$, $`\theta _+=\theta `$ and $`\theta _{}=\pi \theta `$, and $`\overline{\mathrm{\Delta }}_R(x,\theta _+)`$ $`[\overline{\mathrm{\Delta }}_R(x,\theta _{})]`$ is the effective pair potential felt by an electron \[a hole\] like quasiparticle. The quasiparticle energy $`E`$ is measured from the Fermi energy.
The spatial dependence of the pair potentials are determined by the following equations
$$\mathrm{\Delta }_s(x)=g_sk_BT\underset{\omega _n}{}\frac{1}{2\pi }_{\pi /2}^{\pi /2}𝑑\theta ^{}\{[g_R(\theta ^{},x)]_{12}[g_R^+(\theta ^{},x)]_{12}\}$$
(12)
$$\mathrm{\Delta }_d(x)=g_dk_BT\underset{\omega _n}{}\frac{1}{2\pi }_{\pi /2}^{\pi /2}𝑑\theta ^{}\mathrm{cos}[2(\theta ^{}\alpha )]\{[g_R(\theta ^{},x)]_{12}[g_R^+(\theta ^{},x)]_{12}\}$$
(13)
$$\underset{x\mathrm{}}{lim}\mathrm{\Delta }_s(x)=0,\underset{x\mathrm{}}{lim}\mathrm{\Delta }_d(x)=\mathrm{\Delta }_0$$
(14)
with dimensionless inter-electron potential of the $`s`$-wave $`g_s`$ and $`d`$-wave $`g_d`$, respectively. The quasiclassical Green’s function $`g_R(\theta ,x)`$ obeys
$$g_R(\theta ,x)=U_R(\theta ,x,0)g_R(\theta ,0)U_R^1(\theta ,x,0)$$
(15)
$$i\mathrm{}v_{Fx}\frac{}{x}U_R(\theta ,x,0)=\left(\begin{array}{cc}i\omega _n\hfill & \mathrm{\Delta }_R(x,\theta _+)\hfill \\ \mathrm{\Delta }_R^{}(x,\theta _+)\hfill & i\omega _n\hfill \end{array}\right)U_R(\theta ,x,0),$$
(16)
with $`\omega _n=2\pi k_BT(n+1/2)`$ and $`U_R(\theta ,0,0)=1`$. In the actual numerical calculations, $`\eta _{R,\pm }(x,\theta )`$ is calculated from Eqs. (10) to (11). Since $`g_R(\theta ,0)`$ is expressed by $`\eta _{R,\pm }(\theta ,0)`$, $`g_R(\theta ,x)`$ is obtained using Eqs. (15) to (16). Subsequently, the spatial dependence of the pair potentials $`\mathrm{\Delta }_d(x)`$ and $`\mathrm{\Delta }_s(x)`$ are calculated by Eqs. (12) to (14). To get self-consistently determined pair potential, this process is repeated until enough convergence is obtained.
First let us consider the case where $`\mathrm{\Delta }_s(x)`$ is not present. It is known for $`\alpha 0`$, $`\sigma _T(eV)`$ has a zero bias conductance peak (ZBCP) with small magnitude of $`\sigma _N`$ . It is expected that this property reflects on $`S_T(eV)`$ for $`\alpha 0`$. In Fig. 1, $`S_T(eV)`$ is plotted for $`Z=5`$ with $`\alpha =\pi /6`$ \[see curve $`a`$ in Fig. 1(a)\]. As a reference, similar calculations are performed based on the non self-consistent pair potential where the spatial dependence of the pair potential is chosen as $`\mathrm{\Delta }_d(x)=\mathrm{\Delta }_0`$ \[see curve $`b`$ in Fig. 1(a)\]). As seen from curves $`a`$ and $`b`$, $`S_T(0)`$ has a peak around zero voltage originating from the ZBCP in $`\sigma _T(0)`$ \[see Fig. 1(b)\]. As compared to curves $`a`$ to $`b`$, both the height and width of the peak around zero voltage of curve $`a`$ are small as compared to those in $`b`$, since the degree of resonance at zero voltage is weakened due to the reduction of $`\mathrm{\Delta }_d(x)`$ at the interface, Besides this property, curves $`a`$ in and $`S_T(eV)`$ ($`\sigma _T(eV)`$) have second peak around $`eV0.45\mathrm{\Delta }_0`$ due to the formation of NZES which can not be expected in curves $`b`$.
In Fig. 2, we plot $`R(eV)`$ for $`n/I/d`$ junction for sufficient low transparency case, $`e.g.`$, $`Z=5`$. In the case of $`\alpha =0`$, no ZES are formed at the interface, then the resulting $`R(eV)`$ is $`4e`$ at zero voltage and is $`2e`$ at higher voltage (curve $`a`$ in Fig. 2). When the value of $`\alpha `$ deviates from $`0`$, $`R(0)`$ is zero and it quickly reaches a classical Shottky value $`2e`$, which is consistent with other report (curve $`b`$ and $`c`$). This feature is explained based on the $`\theta `$ dependence of $`\overline{\sigma }_S(0,\theta )`$ and $`\overline{S}(0,\theta )`$ as follows: in general, $`\overline{\sigma }_S(0,\theta )=\frac{4e^2}{h}R_a`$ and $`\overline{S}(0,\theta )=\frac{16e^3}{h}R_a(1R_a)`$ are satisfied. At the tunneling limit, $`R_a`$ is suppressed with the increase of $`Z`$ unless the ZES are formed at the interface. When ZES are formed, $`R_a=1`$ is satisfied independent of $`Z`$. Such a situation is realized for $`\pi /4\alpha <\theta <\pi /4+\alpha `$. For $`\alpha =0`$, no ZES are formed and both the magnitude of $`S_T(0)`$ and $`\sigma _S(0)`$ are reduced with the increase of $`Z`$, while $`R(0)`$ remains to be constant. On the other hand, for $`\alpha 0`$ the magnitude of $`S_T(0)`$ is reduced while $`\sigma _S(0)`$ remains finite with the increase of $`Z`$, thus $`R(0)`$ becomes zero. The important point is that the origin of the vanishment of $`R(0)`$ is responsible for the presence of ZES. It is a universal property excepted for unconventional superconductor junctions where finite region of the Fermi surface contributing on the formation of the ZES.
A critical situation is realized in the $`n/I/p_x+ip_y`$ junction, where ZES are formed only by quasiparticles injected perpendicular to the interface . The spatial dependence of the pair potential $`\overline{\mathrm{\Delta }}_R(x,\theta )`$ is given by
$$\overline{\mathrm{\Delta }}_R(x,\theta )=\mathrm{\Delta }_{p1}(x)\mathrm{cos}\theta +i\mathrm{\Delta }_{p2}(x)\mathrm{sin}\theta $$
(17)
with
$$\underset{x\to \mathrm{\infty }}{lim}\mathrm{\Delta }_{p1}(x)=\mathrm{\Delta }_0,\underset{x\to \mathrm{\infty }}{lim}\mathrm{\Delta }_{p2}(x)=\mathrm{\Delta }_0.$$
(18)
$`R(eV)`$ is calculated based on the self-consistently determined pair potentials, $`\mathrm{\Delta }_{p1}(x)`$ and $`\mathrm{\Delta }_{p2}(x)`$. As shown in curve $`d`$ in Fig. 2, $`R(eV)`$ is nearly $`2e`$ independently of the bias voltage. The fact that $`R(0)`$ is neither $`4e`$ nor $`0`$ has not been predicted by previous theories .
In the presence of the ZBCP, since the quasiparticle density of states at zero energy near the interface is enhanced, a subdominant $`s`$-wave component of the pair potential $`\mathrm{\Delta }_s(x)`$ can be induced near the interface when a finite $`s`$-wave pairing interaction strength exists, even though the bulk symmetry remains pure $`d`$-wave . Since the phase difference of the $`d`$-wave and $`s`$-wave components is not a multiple of $`\pi `$, the mixed state breaks time-reversal symmetry . In such a case, the ZBCP splits into two, and the amplitude of the splitting depends on the magnitude of the induced $`s`$-wave component. The corresponding $`R(eV)`$ is plotted in Fig. 3(a) with $`\alpha =\pi /4`$ and $`Z=5`$, where the transition temperature of the $`s`$-wave component is chosen as $`T_s=0.15T_d`$ (dotted line) and $`T_s=0.3T_d`$ (solid line). The induced $`s`$-wave component near the interface crucially influences $`R(eV)`$ at low voltages. The remarkable feature is that $`R(0)`$ recovers to $`4e`$, as in the case of $`n/I/s`$ junctions. This is because, with the induced $`s`$-wave component breaking time-reversal symmetry, the position of the Andreev bound state shifts to $`eV\simeq \mathrm{\Delta }_s(0)`$, where $`\sigma _T(eV)`$ has a peak \[see Fig. 3(b)\]. Consequently, the ZBCP shifts to this voltage and $`R(eV)`$ has a dip structure. Although $`R(0)=4e`$ is satisfied, the overall feature of $`R(eV)`$ is completely different from that in the $`n/I/s`$ junction, since the $`s`$-wave component is induced only near the interface in the present case.
In this paper, the current fluctuations in normal metal / $`d`$-wave superconductor junctions are studied for various orientations of the junction, taking into account the spatial variation of the pair potentials. Not only the zero energy Andreev bound states but also the non-zero energy Andreev bound states show up in the line shape of $`S_T(eV)`$. In the tunneling limit, we found a universal property of $`R(0)`$ for two dimensional superconductors: $`R(0)`$ is $`0`$, $`2e`$ and $`4e`$, corresponding to the three cases where the region of the Fermi surface contributing to the ZES is i) a finite region, ii) a single point, and iii) absent, respectively. This property gives useful information for identifying the symmetry of unconventional superconductors. We hope that shot noise measurements on unconventional superconductors will be performed in the near future.
It is known that ZES also influence significantly the Josephson current in $`d`$-wave superconductor / insulator / $`d`$-wave superconductor ($`d/I/d`$) junctions . The study of shot noise in $`d/I/d`$ junctions is an interesting problem for future work.
# $`ab`$-plane resistivity and possible charge stripe ordering in strongly underdoped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> single crystals
## Abstract
We have measured the $`ab`$-plane resistivity of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> single crystals with small Sr content (x=0.052$`÷`$0.075) between 4.2 and $`300`$K by using the AC Van der Pauw technique. As recently suggested by Ichikawa et al., the deviation from the linearity of the $`\rho _{\mathrm{ab}}(T)`$ curve starting at a temperature T<sub>ch</sub> can be interpreted as due to a progressive slowing down of the fluctuations of pre-formed charge stripes. An electronic transition of the stripes to a more ordered phase could instead be responsible for some very sharp anomalies present in the $`\rho _{\mathrm{ab}}(T)`$ of superconducting samples just above $`T_\mathrm{c}`$.
There is growing experimental evidence for the existence of charge stripes in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) and the related compounds obtained by substitutions . Successive ordering transitions of these stripes, which seem to have a metallic character , possibly occur when the temperature is lowered. One would expect that, in superconducting samples, the phase transitions of the stripes occurring above $`T_\mathrm{c}`$ leave a mark on the electrical resistivity . In the present work, we study the in-plane resistivity $`\rho _{\mathrm{ab}}(T)`$ of strongly underdoped single crystals of LSCO, which exhibits, besides the usual features previously observed, an anomalous peak just above $`T_\mathrm{c}`$.
We carefully investigated the temperature dependence of the $`ab`$-plane resistivity of LSCO single crystals with small Sr doping ($`0.052\le x\le 0.075`$). The crystals, having typical dimensions of about $`1\times 1\times 0.3`$mm<sup>3</sup>, were grown by slowly cooling a non-stoichiometric melt. The chemical composition of every crystal was determined by means of EDS microprobe analysis and only crystals having a homogeneous Sr content were selected. In order to increase the precision of the $`ab`$-plane measurement, we used an AC version of the standard four-probe, eight-measurement Van der Pauw method, injecting into the crystals AC currents of $`100÷300\mu `$A at $`133`$Hz. Details of the experimental technique are given elsewhere . Fig. 1 shows the results. The crystals with $`x=0.052`$ are not superconducting and show a low-temperature semiconducting-like behaviour of the resistivity, apart from an anomalous local maximum at about $`25`$K (curve *a* in Fig. 1). The crystals with $`x=0.06`$ and 0.075 show well-defined superconducting transitions with midpoints at $`T_\mathrm{c}=6.7`$K ($`\mathrm{\Delta }T_\mathrm{c}=0.6`$K) and $`T_\mathrm{c}=8.9`$K ($`\mathrm{\Delta }T_\mathrm{c}=2.2`$K), respectively (curves *b* and *c*). However, very sharp anomalous peaks are present at temperatures about $`2`$K greater than $`T_\mathrm{c}`$. The inset of Fig. 1 shows these peaks in greater detail. No anomalies are present in the AC susceptibility of the same crystals.
We first discuss the general behaviour of the resistivity curves. The $`\rho _{\mathrm{ab}}(T)`$ at $`150<T<300`$K is almost perfectly linear. We then fitted the resistivity in this region by using a linear model: $`\rho _{\mathrm{ab}}(T)=\alpha T+\beta `$. Following a recent paper of Ichikawa et al. , we calculated the normalized resistivity $`\rho _{\mathrm{ab}}(T)/(\alpha T+\beta )`$. The results for the superconducting samples are shown in Fig. 2. In Ref. 3 the deviations of the $`\rho _{\mathrm{ab}}(T)`$ from the linearity exceeding a certain threshold (dependent on $`x`$) have been associated with the localization of pre-formed stripes, which occurs at a temperature $`T_{\mathrm{ch}}`$ independently determined by Cu NQR measurements . This correlation has been observed in Nd-substituted crystals but also in crystals without Nd, having $`x`$=0.10 and $`x`$=0.12 . Following the same approach, we obtained the stripe localization temperatures $`T_{\mathrm{ch}}`$ in our samples by crossing the normalized resistivities with appropriate thresholds extrapolated from the data of Ref. 3. Solid circles on curves *a* and *b* of Fig. 2 show the results of this operation. The inset of Fig. 2 shows the temperatures $`T_{\mathrm{ch}}`$ of our crystals versus the Sr-doping (solid squares): they are in excellent agreement with $`T_{\mathrm{ch}}`$ values determined by means of NQR measurements in LSCO crystals with different doping (open squares). A further decrease of the temperature below $`T_{\mathrm{ch}}`$ produces a continuous slowing-down of the stripe motion: stripes start to be pinned (by defects or by intrinsic low doping) and $`\rho _{\mathrm{ab}}(T)`$ shows an upturn up to a peak followed by the region dominated by superconducting fluctuations.
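As a concrete illustration of this analysis, the sketch below shows how the high-temperature linear fit, the normalized resistivity, and the crossing temperature $`T_{\mathrm{ch}}`$ could be extracted from a measured $`\rho _{\mathrm{ab}}(T)`$ curve. The input arrays, the fitting window and the threshold value are hypothetical placeholders; they are not the actual data or parameters of this work.

```python
import numpy as np

def stripe_localization_temperature(T, rho, fit_window=(150.0, 300.0), threshold=1.02):
    """Fit rho = alpha*T + beta in the high-temperature window, normalize rho by the fit,
    and return the highest temperature at which the normalized resistivity exceeds the
    chosen threshold (identified with T_ch)."""
    T = np.asarray(T, dtype=float)
    rho = np.asarray(rho, dtype=float)
    mask = (T >= fit_window[0]) & (T <= fit_window[1])
    alpha, beta = np.polyfit(T[mask], rho[mask], 1)   # linear fit in the 150-300 K range
    rho_norm = rho / (alpha * T + beta)               # normalized resistivity
    above = T[rho_norm > threshold]
    T_ch = above.max() if above.size else None        # onset of the deviation from linearity
    return alpha, beta, rho_norm, T_ch
```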
One could argue that the small peaks at $`T_\mathrm{o}`$, about 2 K above $`T_\mathrm{c}`$ (see the inset of Fig. 2), may be due to measurement errors, $`\rho _\mathrm{c}`$ interferences in $`\rho _{\mathrm{ab}}`$, or crystal inhomogeneities (i.e. parts of the crystal are not superconducting and contribute to $`\rho _{\mathrm{ab}}`$ with a semiconducting-like term just above $`T_\mathrm{c}`$). We can reply to these objections that (i) the four-probe AC Van der Pauw technique provides four resistance measurements and all of them show very reproducible peaks at $`T_\mathrm{o}`$; (ii) a detailed data analysis suggests that the resistivity contribution of *c*-axis terms or different-doping islands is unable to produce peaks such as those observed . Another possible explanation for their origin could be a *structural* low-temperature transition, similar to that observed at higher temperature in Nd-substituted samples , but there is no other evidence of such a transition near $`T_\mathrm{c}`$ in LSCO.
We speculate that an *electronic* phase transition from a *nematic* stripe phase to a more ordered *smectic* one (“stripe glass”) is responsible for the anomalies near $`T_\mathrm{c}`$. This scenario is supported by theoretical arguments and experimental analogies with the behaviour of some metal dichalcogenides at the charge-density wave transition and it is consistent with the coexistence of cluster spin-glass and superconductivity recently observed below $`\sim `$ 5 K in LSCO with $`x=0.06`$ .
## 1 Introduction:
The discovery by the CGRO experiments that blazars sometimes radiate a large or even the major fraction of their luminosity at $`\gamma `$-ray energies marked a milestone in our knowledge of these powerful sources. Now, after 8 years of operation, roughly 80 blazars have been detected by EGRET at $`\gamma `$-ray energies above $`\sim `$100 MeV (e.g. Hartman et al. 1999). COMPTEL, covering the soft $`\gamma `$-ray range (0.75-30 MeV), has detected 9 of these EGRET blazars (Collmar et al. 1999).
During recent years some blazars have been discovered by the Whipple Observatory to emit even at TeV-energies: the blazars Mkn 421 (Punch et al. 1992), Mkn 501 (Quinn et al. 1996), and 1ES 2344+514 (Catanese et al. 1998). The most prominent ones are Mkn 421 and Mkn 501, which have been detected many times by Whipple and confirmed by other TeV-experiments like the HEGRA Cherenkov telescopes (e.g. Aharonian et al. 1999). Despite their (occasional) prominence at TeV-energies – during flaring periods Mkn 501 was the brightest TeV-source in the sky – they are weak EGRET sources. Mkn 421 shows a weak flux in the EGRET band and Mkn 501, at the time of its TeV-discovery, was not detected at all by EGRET. A common feature of all TeV-blazars is their extreme variability on time-scales of years, days, hours, or even minutes.
COMPTEL, along the course of its now 8-year mission, has had the prominent TeV-blazars Mkn 421 and Mkn 501 several times within its field-of-view. The complete set of data (April ’91 to Nov. ’98) on these sources has been analysed. In this paper we will concentrate on time-averaged results, i.e. results of combined data over periods of typically one year.
## 2 Observations and Data Analysis:
CGRO observations are organized in so-called Cycles (or Phases) which last for a time period of roughly one year, consisting of many (30 - 40) individual pointings (called Viewing Periods: VPs), which typically last for two weeks. Table 1 provides the COMPTEL exposures on both sources for the individual CGRO cycles. It clearly shows that Mkn 421 was favorably located for COMPTEL in Cycle VII. This is due to its proximity ($`\sim `$25<sup>o</sup>) to SN 1998bu which was a major COMPTEL target in 1998.
We have applied the standard COMPTEL maximum-likelihood analysis method (e.g. de Boer et al. 1992) to derive detection significances, fluxes, and flux errors of $`\gamma `$-ray sources in the four standard COMPTEL energy bands (0.75-1 MeV, 1-3 MeV, 3-10 MeV, 10-30 MeV), and a background modelling technique which eliminates any source signature but preserves the general background structure (Bloemen et al. 1994). The source fluxes for both sources have been derived by a flux fitting procedure which iteratively determines the flux of one or more potential sources and simultaneously a background model which takes into account the presence of possible sources. As there are no known nearby $`\gamma `$-ray sources or source candidates, for the case of Mkn 421 no other source was included in this procedure, while for the case of Mkn 501 the prominent EGRET quasar 4C+38.41, which is only $``$4<sup>o</sup> away, was included as a second source. Because Mkn 421 (l/b: 179.8/65.0) and Mkn 501 (l/b: 63.6/38.9) are located at high galactic latitudes, the present analyses have been carried out consistently in local coordinate systems, i.e. centered on each source.
## 3 Results:
### 3.1 Markarian 421:
Up to the end of CGRO Cycle VI (November ’97) no convincing evidence for Mkn 421 could be found in any of the four standard COMPTEL energy bands in the different CGRO cycles. However, in the 10-30 MeV map of CGRO Cycle VII a source-like feature appears which is positionally consistent with Mkn 421 (Fig. 1). The detection significance at the position of Mkn 421 is formally 3.2$`\sigma `$ assuming $`\chi _1^2`$-statistics for a known source, i.e. close to the detection threshold. We checked the 3rd EGRET catalogue (Hartman et al. 1999) for $`\gamma `$-ray sources in this region. Apart from Mkn 421, the catalogue lists no other source which could be responsible for this $`\gamma `$-ray emission. Therefore, we consider Mkn 421 as the most likely candidate. This is supported by the fact that in 1998 Mkn 421 was unusually active at TeV-energies, sometimes even brighter than the Crab (Aharonian et al. 1999). However, an origin of this $`\gamma `$-ray emission other than Mkn 421 cannot be excluded. EGRET cannot help to resolve this issue, because it observed this sky position simultaneously only for 4 out of the 108 COMPTEL days (Table 1): either the pointings were too far off for its narrow-field-of-view mode or EGRET was switched off.
Applying the procedure described in Section 2, fluxes from the position of Mkn 421 have been derived. In addition to the 3$`\sigma `$ result in the 10-30 MeV band of the Cycle VII data, there is a 2$`\sigma `$ flux point derived in the 3-10 MeV band for this time period. Below 3 MeV the data are consistent with noise, resulting in upper limits only. In Figure 1 these flux values are compared to the time-averaged 1997-1998 HEGRA TeV-spectrum (Aharonian et al. 1999). The authors provide the spectral slope for the year 1998 individually ($`\alpha `$: -3.00$`\pm `$0.05) but not the flux normalisation. They note that the spectral slopes in both years are consistent within statistics. The comparison shows that Mkn 421 (if responsible for the emission) radiated in 1998 on average more power at MeV-energies than at TeV-energies. In addition, the MeV spectral points are several orders of magnitude below the power-law extrapolation of the TeV-spectrum, proving that the TeV-spectrum has to bend over above 30 MeV. The latter conclusion is, of course, also valid if the observed $`\gamma `$-rays are not connected to Mkn 421. We note that a large fraction of the COMPTEL Cycle VII exposure on Mkn 421 was collected during times (July to September 1998) when the source was inaccessible for TeV-observations.
### 3.2 Markarian 501:
Up to the end of CGRO Cycle VII (December 1998) no convincing evidence for Mkn 501 could be found in any of the four standard COMPTEL energy bands in the different CGRO Cycles. At TeV-energies Mkn 501 showed its largest activity so far during the observational period in 1997 (e.g. Quinn et al. 1999). To be quasi-simultaneous with that, we combined all COMPTEL Cycle VI data on Mkn 501 and derived the upper limits for its MeV flux. Unfortunately, only 6 days (a CGRO ToO during April 1997) of COMPTEL observations are directly simultaneous with the observed TeV-flaring period. The COMPTEL 1997 upper limits are compared to the time-averaged 1997 TeV-shape as observed by the Whipple telescope in Figure 2. The COMPTEL upper limits are $`\sim `$2 orders of magnitude below the extrapolation for an assumed power-law shape at TeV-energies, requiring spectral bending, but still allowing for a luminosity in the MeV-band roughly equal to that detected in the TeV-band. Evidence for spectral bending was recently also found in the TeV-data itself (e.g. Samuelson et al. 1998). The COMPTEL points are consistent with the extrapolation of that shape towards lower energies. As mentioned above, in April 1997 a multiwavelength campaign on Mkn 501 was carried out, where CGRO participated for 6 days (April 9-15, 1997) in a target-of-opportunity observation. COMPTEL has not detected the blazar within this short observation period. The COMPTEL upper limits are shown in Figure 2, together with simultaneous flux measurements (Catanese et al. 1997) from neighboring high-energy bands. The COMPTEL upper limits are consistent with these measurements; however, they do not provide any further constraints.
## 4 Summary and Conclusions:
We present first COMPTEL MeV-results of the prominent TeV-blazars Mkn 421 and Mkn 501. So far, the analysis has been carried out in the four standard energy bands for individual CGRO VPs and individual CGRO cycles. Up to the end of CGRO Cycle VI (November 1997), no evidence for Mkn 421 was found. However, the combined Cycle VII 10-30 MeV data show evidence (although near the detection threshold) for a $`\gamma `$-ray source which is positionally consistent with Mkn 421. As there are no other known $`\gamma `$-ray sources in that sky region, we consider the TeV-blazar as the most likely counterpart. If this $`\gamma `$-ray emission really originates from Mkn 421, this would be an interesting result. Broad-band spectra for flaring TeV-blazars indicate a spectral minimum near the COMPTEL and EGRET bands (e.g. Fig. 2), i.e. they should be located in the ”spectral valley” between the peaks of the assumed synchrotron and inverse Compton (IC) emission components. This result would mean that either the synchrotron emission moved up to the highest energies ever observed, or the IC emission at MeV-energies was higher than ever observed before for any TeV-blazar (which would be the most likely explanation in our mind), or something else (e.g. a combination of both) has happened. Eventually, multiwavelength spectra might help to resolve this issue. No convincing evidence for Mkn 501 is found in any of the analysed data. Derived upper limits, which are quasi-simultaneous with the large TeV-flaring period in 1997, require a spectral bending for a TeV power law spectrum, and are consistent with the extrapolation of a reported curved TeV-spectrum. COMPTEL participation in a multiwavelength campaign resulted in upper limits which are consistent with the simultaneous measurements in the neighboring energy bands; however, they do not provide any further constraints.
References
Aharonian et al. 1999, A&A submitted (astro-ph/9905032)
Bloemen, H., Hermsen, W., Swanenburg, B.N., et al. 1994, ApJ Suppl. 92, 419
de Boer, H., Bennett, K., Bloemen, H., et al. 1992, In: Data Analysis in Astronomy IV, eds. V. Di Gesu et al. (New York: Plenum Press), 241
Catanese, M., et al. 1997, ApJ 487, L143
Catanese, M., et al. 1998, ApJ 501, 616
Collmar, W. et al., 1999, Proc. 3rd INTEGRAL Conf., in press
Hartman, R. et al., 1999, ApJ Suppl., accepted
Punch, M. et al. 1992, Nature 358, 477
Quinn, J. et al. 1996, ApJ 456, L83
Quinn, J. et al. 1999, ApJ, submitted
Samuelson, F.W. et al. 1998, ApJ 501, L17 |
# Structurally constrained protein evolution: results from a lattice simulation
## I Introduction
Almost without exception, naturally occurring proteins with sequence similarity larger than 40% adopt similar folds . Since natural proteins with homologous sequences descend from a common ancestor, this observation indicates that protein structures are significantly conserved in evolution. Indeed, several proteins with different functions show a remarkable structural similarity of evolutionary origin even if their sequences can no longer be recognized as related . A recent study of the Protein Data Bank (PDB) showed that the typical sequence similarity between proteins with the same fold is about 8.5% , only slightly larger than for a random pair of sequences . In this set, proteins with common ancestors are also likely to exist. These observations point to the fact that during evolution there is a strong memory for the structure but only a very loose memory for the sequence.
The neutral theory of molecular evolution, proposed in 1968 by Kimura and, independently, by Jukes and King , is consistent with these observations. Kimura suggested that most amino acid substitutions in protein sequences are selectively neutral, i.e. indistinguishable from the wild type from the phenotypic point of view, and are fixed by chance in biological populations . This hypothesis has been heatedly debated in the genetic literature . Strictly speaking, conservation of the fold and neutral evolution are not equivalent, since neutrality deals with the activity of the protein, concentrated in its active site, more than with its structure. Moreover, drastic changes in the environment can modify the selective value of protein structures. However, since our model does not represent biological activity, we assume in the following neutrality to be synonymous with structure conservation.
In the last decade, the possible occurrence of neutral evolution has been revealed by a series of computational and analytic studies of the sequence to secondary structure relationship for RNA molecules . An exponentially large number of sequences correspond on average to a single structure, and the distribution of the number of sequences per structure is quite broad (following a power law), with the most common structures forming connected neutral networks which percolate sequence space.
For proteins, the sequence to structure relationship is much more difficult to study than for RNA. Shakhnovich and Gutin , using the Random Heteropolymer model, argued that the probability that a point mutation is neutral (i.e. it does not alter the native state) is non vanishing even for very long sequences. In the same spirit, Tiana et al. considered a cubic lattice model and a sequence with 36 residues, optimized in such a way that its ground state coincides with a target structure and is very stable , and estimated that 70% of the point mutations are neutral. Bornberg-Bauer studied a two-dimensional HP model with only two residue types and chain length $`N=18`$ by using exact enumeration. He found, in analogy with the RNA case, that the distribution of the number of sequences per structure is very broad, but sequences corresponding to the same structure are clustered in small regions of sequence space.
We simulate the evolution of a protein sequence subject to structure conservation. Mutations that change the protein’s native structure, identified with the ground state of the model, are considered lethal and are rejected. In this way, our sequence follows an evolutionary trajectory on a neutral network, i.e. a set of sequences sharing the same fold and connected by point mutations. While the structure (fold) is conserved, the sequence changes as new mutations are accepted, and after a sufficient number of steps along the evolutionary trajectory have been performed, the sequence behaves essentially as a random one with respect to the original one .
It is important, however, to impose not only the condition that the native state is conserved but also that its stability remains high and the folding time remains low. These conditions are not only biologically relevant, but also help the model protein to diffuse in sequence space.
We use in this study a lattice representation of protein conformations, because only in this way the ground state and its thermodynamic stability can be reliably determined, but we believe that our simplified model reproduces the generic features of the evolution of real protein sequences.
Support to our results comes from a recent study by Babajide and coworkers , who found evidence for the presence of neutral networks in protein sequence space. Their work is similar in spirit to the present one, but rather different methodologically. Real protein structures were represented through the $`C_\alpha `$ and $`C_\beta `$ coordinates taken from the PDB, and an approximate criterion of fold recognition based on the Z score was used. Further support also comes from the work of Govindarajan and Goldstein , who introduced the “foldability” landscape in order to describe molecular evolution. In the language of Govindarajan and Goldstein the foldability of a protein represents its fitness for survival during evolution and it is related to the stability and to the kinetic accessibility of the native state. Govindarajan and Goldstein also found that their evolutionary dynamics in sequence space was confined inside “neutral networks”.
The main result of our paper, namely the fact that the fraction of neutral neighbors strongly fluctuates inside the neutral network, and that these fluctuations can be related to the foldability landscape, should also be put in relation with the recent preprint by Tiana et al. where it is shown that the energy of a target structure has a complex landscape with valleys and barriers in sequence space. The present work supports such a picture by using the same protein model but employing rather different methods of investigation.
We already presented some results on neutral networks in Ref. . Here we focus our attention on the issue of the stability of the native state, relating it to the characteristics of the evolution. In Sec.II, we describe our model protein and our protocol to simulate neutral evolution. In Sec.III we summarize our previous results. In Sec.IV we describe the properties of the sequences generated, dividing them in four classes. This section focuses on the relation between thermodynamic stability and evolutionary dynamics. Sec.V presents an overall discussion, relating our results to biological observations.
## II A simple model of protein evolution
In this section we define the lattice model used to represent protein structure and the algorithm introduced to simulate evolution in sequence space.
### A Lattice model of protein structure
To investigate the correspondence between sequences and structures we use a lattice model with twenty amino acid types. We consider sequences of length $`N=36`$, denoted by the symbol $`𝐒=\{s_1,\mathrm{},s_N\}`$, where $`s_i`$ belongs to a twenty-letter alphabet. Configurations are represented by self avoiding walks on the simple cubic lattice, where each occupied site represents an amino acid. An energy $`E(𝐒,𝒞)`$ is assigned to configuration $`𝒞`$ of sequence $`𝐒`$ according to the rule:
$$E(𝐒,𝐂)=\underset{i<j}{\overset{1,N}{\sum }}C_{ij}U(s_i,s_j),$$
(1)
where $`U(a,b)`$ is a $`20\times 20`$ symmetric interaction matrix expressing the contact interactions of amino acids of species $`a`$ and $`b`$. We use an interaction matrix $`U(a,b)`$ derived from the Miyazawa-Jernigan interaction matrix . The matrix $`𝐂=\{C_{ij}\}=f(𝒞)`$, called the contact map of configuration $`𝒞`$, has elements $`C_{ij}`$ equal to one if residues $`i`$ and $`j`$ are nearest neighbors on the lattice but not along the chain and zero otherwise. The similarity between contact maps is measured through the overlap $`q(𝐂,𝐂^{})`$, defined as
$$q(𝐂,𝐂^{})=\frac{1}{N_c^{}}\underset{i<j}{\sum }C_{ij}C_{ij}^{},$$
(2)
where $`N_c^{}`$ is the larger of $`N_c`$ and $`N_c^{}`$, the numbers of contacts of the two contact maps $`𝐂`$ and $`𝐂^{}`$, respectively, and $`N_c=\sum _{j>i}C_{ij}`$. With this definition, two maps are identical if and only if $`q=1`$. Note that this does not in general imply identity of configurations. Nevertheless, we use the overlap as a measure of similarity in configuration space because structures with the same contact map are degenerate in energy and, for compact structures, only small conformational fluctuations are allowed when the entire set of contacts is specified. Moreover, such structural fluctuations might play an important role in protein functionality .
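For concreteness, the two quantities defined above can be computed directly from a contact map and an interaction matrix. In the sketch below the $`20\times 20`$ matrix `U` and the contact maps are assumed to be given as NumPy arrays; the specific values are placeholders and not the actual Miyazawa-Jernigan-derived parameters used in this work.

```python
import numpy as np

def contact_energy(seq, cmap, U):
    """Energy of Eq. (1): sum of contact interactions U(s_i, s_j) over all contacts i < j.
    seq  : length-N array of integer amino acid types (0..19)
    cmap : N x N symmetric 0/1 contact map
    U    : 20 x 20 symmetric interaction matrix
    """
    N = len(seq)
    E = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            if cmap[i, j]:
                E += U[seq[i], seq[j]]
    return E

def overlap(cmap_a, cmap_b):
    """Structural overlap of Eq. (2): shared contacts divided by the larger contact number."""
    shared = np.sum(np.triu(cmap_a, 1) * np.triu(cmap_b, 1))
    n_max = max(np.sum(np.triu(cmap_a, 1)), np.sum(np.triu(cmap_b, 1)))
    return shared / n_max
```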
The native structure of sequence $`𝐒`$ is identified with the ground state of the model if it is thermodynamically stable. We evaluate stability by measuring the average overlap $`\langle q\rangle `$ between the ground state and the Boltzmann ensemble of structures:
$$\langle q\rangle =\frac{1}{Z}\underset{𝒞}{\sum }q(𝐂_0,f(𝒞))e^{-E(𝒞,𝐒)/T},$$
(3)
where $`𝐂_0`$ is the contact map of the ground state, $`f(𝒞)`$ is the contact map of configuration $`𝒞`$ and $`Z`$ is the partition function. This quantity is close to one if all the low energy structures are quite similar to the native state. In this case the energy landscape of the model is well correlated, and the sequence is also expected to be a good folder.
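Given a (necessarily truncated) list of low-energy structures, the thermodynamic average of Eq. (3) can be estimated as in the sketch below. Restricting the sum to an enumerated sample of structures is an approximation introduced here only for illustration; it is not the procedure used in the actual simulations.

```python
import numpy as np

def boltzmann_native_overlap(energies, overlaps_to_native, T):
    """Estimate <q> of Eq. (3) from a sample of structures.
    energies           : energies E(C, S) of the sampled structures
    overlaps_to_native : overlaps q(C_0, C) of the sampled structures with the ground state
    T                  : temperature
    """
    energies = np.asarray(energies, dtype=float)
    overlaps_to_native = np.asarray(overlaps_to_native, dtype=float)
    # subtract the minimum energy for numerical stability of the exponentials
    w = np.exp(-(energies - energies.min()) / T)
    return np.sum(overlaps_to_native * w) / np.sum(w)
```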
We consider the target contact map $`𝐂^{}`$ represented in Fig.1. It has $`N_c=40`$ contacts, the maximal number of contacts possible for a chain of length $`N=36`$. In this case the contact map defines uniquely the configuration of the system. The contact map $`𝐂^{}`$ was studied by Shakhnovich and coworkers in a computer experiment of inverse folding . They designed a sequence with ground state on $`𝐂^{}`$ using the procedure of Ref. , and showed that $`𝐒^{}`$ has good properties of kinetic foldability and thermodynamic stability at the temperature where the folding is fastest. The lower part of the energy landscape of this sequence is remarkably smooth: all the structures with low energy have a high overlap $`q`$ with the ground state. The lowest energy of configurations with a fixed value of $`q`$ decreases regularly as $`q`$ approaches one. This correlated energy landscape, reminiscent of the “funnel” paradigm , is the reason for the good folding properties of the sequence, which is very different from a random sequence. It was also shown that the same sequence is very stable against mutations . It was estimated that about 70% of the point mutations performed on $`𝐒^{}`$ result in new sequences with exactly the same ground state and good folding properties. Thus energy minimization makes $`𝐂^{}`$ stable not only in structure space, but also in sequence space.
We note that $`𝐂^{}`$ is an atypical structure for the interaction parameters that we choose: since $`U(a,b)`$ has average value zero and variance $`0.3`$, one would expect open structures to be energetically favored. Indeed, typical random sequences with $`N=36`$ and Gaussian contact interactions whose average vanishes have a ground state with approximately 29-33 contacts , being thus less than maximally compact.
### B Sequence space
In this study we consider only point mutations, thus all sequences have the same length $`N=36`$ and the metric in sequence space is given by the Hamming distance,
$$D(𝐒,𝐒^{})=\underset{i=1}{\overset{N}{\sum }}\left[1-\delta (s_i,s_i^{})\right],$$
(4)
where $`\delta `$ is the Kronecker symbol and $`s_i`$ takes 20 different values, one for each amino acid. A measure of sequence similarity is then given by the overlap $`Q(𝐒,𝐒^{})`$,
$$Q(𝐒,𝐒^{})=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\delta (s_i,s_i^{}),$$
(5)
which is equal to one minus the normalized Hamming distance.
We introduce also the distance $`D_{HP}(𝐒,𝐒^{})`$ and the overlap $`Q_{HP}(𝐒,𝐒^{})`$ to measure differences in hydrophobicity. These are defined by transforming every sequence into a sequence of binary symbols, either H or P, according to the hydrophobicity of the residue. We consider 8 hydrophobic amino-acids and 12 polar ones. The definitions of $`D_{HP}`$ and $`Q_{HP}`$ are analogous to those of $`D`$ and $`Q`$, where now $`s_i`$ can take only two values.
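These sequence-space measures are straightforward to compute; the sketch below also includes the reduction to the two-letter HP alphabet. The particular assignment of the 8 hydrophobic amino acid indices is a placeholder, not the classification actually used in this work.

```python
import numpy as np

# hypothetical indices of the 8 hydrophobic amino acid types (placeholder choice)
HYDROPHOBIC = {0, 1, 2, 3, 4, 5, 6, 7}

def hamming_distance(seq_a, seq_b):
    """D(S, S') of Eq. (4): number of positions at which the two sequences differ."""
    return int(np.sum(np.asarray(seq_a) != np.asarray(seq_b)))

def sequence_overlap(seq_a, seq_b):
    """Q(S, S') of Eq. (5): fraction of identical positions."""
    seq_a, seq_b = np.asarray(seq_a), np.asarray(seq_b)
    return float(np.mean(seq_a == seq_b))

def hp_distance(seq_a, seq_b):
    """D_HP: Hamming distance after mapping each residue to H (hydrophobic) or P (polar)."""
    to_hp = lambda s: np.array([1 if a in HYDROPHOBIC else 0 for a in s])
    return int(np.sum(to_hp(seq_a) != to_hp(seq_b)))
```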
### C Evolutionary process
Our protein sequence evolves through point mutations subject to conservation of the target contact map $`𝐂^{}`$, representing the biologically active native structure . We impose this condition by simulating the following iterative procedure:
1. At $`t=0`$ we start from $`𝐒(0)=𝐒^{}`$, which has $`𝐂^{}`$ as its “native state”.
2. At time $`t`$ we mutate at random one amino acid in $`𝐒(t1)`$, producing a new sequence $`𝐒^{}(t)`$.
3. We submit the new sequence to selection according to the criteria specified below. If the sequence is accepted then $`𝐒(t)=𝐒^{}(t)`$, otherwise we restore $`𝐒(t)=𝐒(t1)`$.
The selection step is governed by the following three conditions
1. The ground state $`𝐂`$ of $`𝐒^{}(t)`$ must have an overlap with $`𝐂^{}`$ equal to or larger than a given “phenotypic threshold” $`q_{\mathrm{thr}}`$,
$$q(𝐂^{},𝐂)\ge q_{\mathrm{thr}},$$
(6)
In our calculations we imposed strict conservation of the native state by setting $`q_{\mathrm{thr}}=1`$.
2. We define thermodynamic stability through the condition
$$\langle q(𝐂^{},𝐂)\rangle >\langle q\rangle _{\mathrm{thr}},$$
(7)
where $`\langle \cdot \rangle `$ represents a Boltzmann average at the temperature $`T`$ of the simulation and $`\langle q\rangle _{\mathrm{thr}}`$ is a fixed parameter. This condition implies that all the thermodynamically relevant states are very similar to the target state.
3. The structure $`𝐂^{}`$ must be reached in a limited number of steps of our Monte Carlo algorithm, in at least two independent attempts.
For the test of the sequences we used the PERM method , a Monte Carlo algorithm particularly suited for finding the ground state of lattice polymers. Note that there is no bias towards $`𝐂^{}`$ in our Monte Carlo algorithm, i.e. it has the same a priori probability of being visited as any other possible structure. We remark that other simulation schemes are also suitable for the same purpose, as e.g. the Monte Carlo algorithm used in Ref. . Such a Monte Carlo method, with moves in configuration space, is more suitable than PERM to estimate folding times. However, due to computational limitations, we did not try to measure accurately the folding time, and thus we adopted the PERM method, which is faster for the task of interest to us.
The test of a new sequence $`𝐒`$ is divided into three phases:
* We discard $`𝐒`$ if after $`m`$ iterations $`𝐂^{}`$ is not reached or if other structures of energy lower than $`𝐂^{}`$ are found.
* Otherwise we continue to run the algorithm for another $`m`$ iterations and discard $`𝐒`$ if we find structures of energy lower than $`𝐂^{}`$.
* If $`𝐒`$ passed the first two phases, we run again and independently the MC algorithm for a time $`2m`$ and accept $`𝐒`$ if also this time $`𝐂^{}`$ is found as the lowest energy structure.
Thus for each accepted sequence we run the algorithm for $`4m`$ steps, with $`m=124000`$. We never found in the second independent run of the MC algorithm a structure with lower energy than the putative ground-state $`𝐂^{}`$ found in the first run. This fact encourages us to believe that the algorithm was effective in finding the ground state. Another support to this conclusion comes from the fact that, as it will be discussed later in more detail, all of the selected sequences have a remarkably correlated energy landscape, which makes the task of finding the ground state easier.
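The overall structure of the evolutionary algorithm, with the acceptance test sketched above, can be summarized schematically as follows. The routines `find_ground_state` and `native_overlap_boltzmann` are placeholders standing in for the independent PERM ground-state searches and for the stability criterion described above; the sketch illustrates the control flow, not the actual implementation.

```python
import random

def evolve(seq0, target_map, n_steps, find_ground_state, native_overlap_boltzmann,
           q_ave_thr=0.9, n_types=20):
    """Neutral evolution: accept a point mutation only if the target structure is still
    the thermodynamically stable, kinetically accessible ground state."""
    seq = list(seq0)
    trajectory = [tuple(seq)]
    for _ in range(n_steps):
        trial = list(seq)
        pos = random.randrange(len(trial))
        trial[pos] = random.choice([a for a in range(n_types) if a != trial[pos]])
        # two independent ground-state searches must both return the target structure
        ok = all(find_ground_state(trial, target_map, attempt=k) for k in (0, 1))
        # thermodynamic stability: Boltzmann-averaged overlap with the target structure
        if ok and native_overlap_boltzmann(trial, target_map) > q_ave_thr:
            seq = trial
        trajectory.append(tuple(seq))
    return trajectory
```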
On the other hand, whenever the sequence was rejected, we are less sure that we were able to determine its ground state. The difference is due to two reasons: first, we investigate rejected sequences on the average for a shorter time. Second, rejected sequences have typically a less correlated energy landscape, so that the determination of the ground state should be more difficult. Nevertheless, we shall present in Sec.IV also data about rejected sequences, since they are interesting and refer to a very large number of sequences, even if they are individually not completely reliable.
The three conditions for the acceptance of a mutation enforce the conservation of the fold of the protein. This is similar to neutral evolution where the biological activity of the mutated sequence does not vary. Nevertheless, conservation of the fold is not a necessary condition for selective neutrality in real proteins, although a very high degree of conservation is usually observed, and it is not even a sufficient one, since - in the case of enzymes - the active site has also to be conserved and the environment has to remain reasonably stable. Thus our model represents the neutral evolution of the part of the chain not involved in chemical activity, in a stable chemical environment. Despite its simplifications, we believe that our model captures important features of structural constraints in the neutral evolution of proteins.
## III Neutral networks
In this section we summarize results regarding the diffusion in sequence space under our model of neutral evolution. More details have been given in Ref. .
### A Hamming distance
An interesting result of our simulation is that sequences originating from the same common ancestor diverge so much that their similarity is almost as low as for a random pair of sequences while their structures remain unchanged. Starting from the same sequence we generated eight realizations of neutral evolution, simulating the phylogenetic radiation of eight species from a common ancestor. We use the following values of the selection parameters: phenotypic threshold $`q_{\mathrm{thr}}=1`$, corresponding to exact conservation of the ground state, stability threshold $`\langle q\rangle _{\mathrm{thr}}=0.90`$ at a temperature $`T=0.16`$ chosen so that the folding of the initial sequence $`𝐒^{}`$ is fastest with our Monte Carlo algorithm.
The average Hamming distance between the final points of the eight evolutionary trajectories is $`D=30.2`$, only slightly smaller than the random value $`D^{ran}=34.2`$ (see Fig.2). However, this quantity had not yet reached a stationary value when the simulations were interrupted, thus we can not exclude that the long time behavior coincides with $`D^{ran}`$. An indication in this sense is the fact that the maximum distance between sequences in two different trajectories is $`D=35`$. All the residues in the original sequence could be substituted at least twice, but some are more difficult to change. We define the degree of conservation, or rigidity, of residues at the $`i`$-th position in the sequence as follows:
$$R_i=\underset{a}{\sum }P_i^2(a),$$
(8)
where $`P_i(a)`$ is the probability to find the amino-acid $`a`$ at position $`i`$. $`P_i(a)`$ is estimated from the end points of the eight neutral paths generated. $`R_i=1`$ if the amino-acid at position $`i`$ is never changed, while $`R_i\simeq 1/8`$ if it is completely random. We found that several positions have rigidity compatible with the random value, and no position has $`R_i=1`$. As one would expect, the most conserved positions are the two in the interior of the structure (see Fig.2).
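The positional rigidity of Eq. (8) can be estimated from the aligned end-point sequences of the independent trajectories, as in the sketch below (the eight-sequence sample size is taken from the text; everything else is a generic illustration).

```python
import numpy as np

def rigidity(sequences, n_types=20):
    """R_i of Eq. (8): sum over amino acid types of the squared frequency at each position.
    sequences : list of equal-length integer sequences (e.g. the 8 trajectory end points)
    """
    seqs = np.asarray(sequences)          # shape (n_sequences, N)
    n_seq, N = seqs.shape
    R = np.empty(N)
    for i in range(N):
        counts = np.bincount(seqs[:, i], minlength=n_types)
        freqs = counts / n_seq            # estimate of P_i(a)
        R[i] = np.sum(freqs ** 2)
    return R                              # R_i = 1 if fully conserved, ~1/n_seq if random
```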
The same results hold for the HP representation in which hydrophobic (H) and polar (P) amino acids are grouped together so that $`s_i`$ can assume only two values. The average value of the distance is in this case $`D_{HP}=16.3`$, not far from the random value $`D_{HP}^{ran}=17.3`$, and the variance is $`V_{HP}=6.0`$, compatible with $`V_{HP}=D_{HP}(1-D_{HP})/N`$. It is at first sight surprising that also the distance $`D_{HP}`$ is close to that expected for random sequences. This is in part an effect of the short length of our sequences, since only two residues are in the interior of the structure, while all other ones are at the surface. It is interesting to note, however, that also the two residues in the interior of $`𝐂^{}`$ have rigidity $`R_i<1`$, even when the two letter HP representation is used. The distinction between polar and hydrophobic residues is based only on the interaction matrix that we use, in which also polar residues can have attractive interactions. It is interesting to note that a recent study of real protein structures found a correlation between amino acids buried in the core and evolutionary conserved ones, consistently with our results.
### B Neutral Mutation Rate
For a given sequence $`𝐒`$ of $`N`$ amino acids, we define the neutral mutation rate $`x(𝐒)`$ as the fraction of acceptable non-synonymous mutations
$$x(𝐒)=\frac{1}{20N}\underset{i=1}{\overset{N}{\sum }}\underset{\alpha \ne s_i}{\overset{1,20}{\sum }}\chi _{\alpha i}(𝐒),$$
(9)
where $`\chi _{\alpha i}(𝐒)`$ equals one if assigning the amino acid of species $`\alpha `$ at position $`i`$ on the sequence $`𝐒`$ does not change the native state, and zero otherwise. Non-synonymous mutations are those for which an amino acid is not replaced by itself.
The simplest measure of the neutral mutation rate is obtained by computing the frequency of neutral mutations over all the non-synonymous mutations proposed. In this way we found $`\overline{x}\simeq 0.05`$ (the overline represents an average over the mutational process). However, this quantity alone is not enough to characterize $`x(𝐒)`$, which fluctuates strongly in sequence space. For instance, it was estimated by one of us and coworkers that $`x(𝐒^{})\simeq 0.7`$, where $`𝐒^{}`$ is the starting point of our evolutionary trajectories.
We measured indirectly the distribution of $`x(𝐒)`$ in sequence space from the distribution of the “trapping” time $`\tau _t(𝐒)`$ that a trajectory spends on sequence $`𝐒`$. The average value of the trapping time is inversely proportional to the neutral mutation rate:
$$\overline{\tau _t(𝐒)}=\frac{1}{x(𝐒)},$$
(10)
where the bar denotes average over the different mutations. The distribution of $`\tau `$ at fixed $`x`$ is a geometric one, $`P_x(\tau )=x(1-x)^{\tau -1}`$, so that, averaging over the neutral set, we get
$$\left[P(\tau )\right]=\int _0^1𝑑xp(x)\left(\frac{x}{1-x}\right)(1-x)^\tau ,$$
(11)
where $`[]`$ denotes an average over sequences belonging to the neutral network (in this argument we neglect the error in evaluating whether a sequence belongs to the neutral set: in particular, the conditions of fast folding and of thermodynamic stability are subject to considerable evaluation errors).
The distribution of $`\tau _t`$ is broader than an exponential one (Fig.3), thus, even if we can not invert Eq. 11, we expect that the distribution of the neutral mutation rate $`x`$ is also broader than exponential.
The values of $`\tau _t`$ for neighboring sequences are rather correlated, but the correlation seems to vanish after few steps in sequence space (data not shown).
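A simple way to see whether the trapping times are compatible with a single neutral mutation rate is to compare their empirical distribution with the geometric distribution having the same mean, as in the sketch below (the input list of trapping times is a placeholder).

```python
import numpy as np

def trapping_time_test(trapping_times, tau_max=50):
    """Compare the empirical distribution of trapping times with the geometric
    distribution P_x(tau) = x (1-x)**(tau-1) whose mean 1/x matches the data."""
    tau = np.asarray(trapping_times, dtype=int)
    x_eff = 1.0 / tau.mean()                          # single-rate estimate
    taus = np.arange(1, tau_max + 1)
    empirical = np.array([np.mean(tau == t) for t in taus])
    geometric = x_eff * (1.0 - x_eff) ** (taus - 1)
    # a tail broader than the geometric one signals fluctuations of x(S)
    return taus, empirical, geometric
```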
### C Genetic drift and population genetics
In Kimura’s neutral theory it is assumed that the fraction of neutral mutations $`x(𝐒)`$ does not depend on the sequence. With this hypothesis, the time evolution of the Hamming distance $`D(t)`$ between the starting sequence $`𝐒(0)`$ and the present sequence $`𝐒(t)`$ is given in our model by
$$D(t)/N\simeq \left(1-\frac{1}{20}\right)\left[1-\mathrm{exp}(-xt/N)\right],$$
(12)
where the time $`t`$ represents the number of mutational events. However, this hypothesis is contradicted by our results, which show that the relaxation of the distance is not exponential. This fact is due to the large fluctuations of the neutral mutation rate along the neutral network.
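Under the constant-rate assumption, the expected divergence curve of Eq. (12) is easily evaluated and can be compared with the measured $`D(t)`$; a minimal sketch follows (the parameter values are illustrative only).

```python
import numpy as np

def expected_divergence(t, x, N=36, n_types=20):
    """Kimura-like expectation of Eq. (12) for the Hamming distance under a
    sequence-independent neutral mutation rate x."""
    t = np.asarray(t, dtype=float)
    return N * (1.0 - 1.0 / n_types) * (1.0 - np.exp(-x * t / N))

# example: divergence after 0, 500, ..., 5000 attempted mutations with x = 0.05
print(expected_divergence(np.arange(0, 5001, 500), x=0.05))
```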
This qualitative result can be interesting for the understanding of protein sequence evolution. A key issue in the theory of molecular evolution is the following: Is the rate at which amino acids are substituted in protein sequence constant on different branches of the phylogenetic tree? The constancy of the substitution rates was proposed by Zuckerkandl and Pauling in their pioneering study of molecular evolution as the molecular clock hypothesis . This hypothesis has been questioned recently , even if it seems to be at least approximately valid for several proteins.
Kimura’s theory predicts that neutral substitutions occur at a rate $`r=\mu x`$ which is the product of the bare mutation rate $`\mu `$ times the fraction $`x`$ of neutral mutations. This rate is independent of the size of the population. The value $`x`$ characterizes the substitution rate of a particular protein but does not vary during evolution. Mutations occurring in a geological time $`T`$ are thought to follow Poissonian statistics with average value $`\mu T`$, so that the number of substitutions is predicted to have Poissonian statistics with average value $`n_S=\mu Tx`$. This has the important consequence that the fluctuations of the substitution process in different species should increase only as $`\sqrt{T}`$. More precisely, the ratio $`R(T)`$ between the variance and the average value of the number of substitutions should be identically equal to one. This strong prediction was first tested by Kimura with the conclusion that deviations from the Poissonian statistics are small. However, more recently Gillespie repeated the test for a larger number of proteins, finding that for most of them the value of $`R(T)`$ is much larger than one. He thus argued that the hypothesis that most mutations are neutral has to be rejected.
Our results provide an alternative explanation: the strong fluctuations of the substitution process can be attributed to the fluctuations of the neutral mutation rate in sequence space, even in the absence of any selective pressure. To test this hypothesis, we assume, as above, that the number $`m_k(T)`$ of attempted mutation events in a time $`T`$ during trajectory $`k`$ is a Poissonian variable of average value $`\mu T`$. In the present study, $`k=1,\mathrm{}8`$ is the label of the evolutionary trajectories. Then for every trajectory $`k`$ we count the number $`n_k(T)`$ of mutations accepted over $`m_k(T)`$ steps of our evolutionary algorithm. This number is then interpreted as the number of substitutions in “species” $`k`$. We can thus compute the variance and the average value of this variable over the eight trajectories. The ratio between them gives an estimate of the dispersion ratio $`R(T)`$. This is always larger than one, contradicting the Poissonian hypothesis. Moreover, $`R(T)`$ is found to be an increasing function of $`T`$, so that it is no longer true that the fluctuations grow with time as $`\sqrt{T}`$.
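The dispersion ratio described above can be computed from the simulated trajectories as sketched below; the function assumes that for each trajectory the cumulative number of accepted substitutions is recorded as a function of the number of attempted mutations (hypothetical input arrays).

```python
import numpy as np

def dispersion_ratio(substitution_counts):
    """R(T) = variance / mean of the number of substitutions across trajectories.
    substitution_counts : array of shape (n_trajectories, n_times), cumulative
                          accepted substitutions n_k(T) at each sampled time T.
    """
    counts = np.asarray(substitution_counts, dtype=float)
    mean = counts.mean(axis=0)
    var = counts.var(axis=0, ddof=1)
    # Poissonian (constant-rate neutral) evolution would give R(T) = 1 at all T
    return var / mean
```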
Since our model takes into account only neutral and lethal mutations, without considering either advantageous mutations or slightly deleterious ones, we conclude that the violation of Poissonian statistics is not a decisive proof against the validity of the neutral hypothesis.
## IV Properties of the sequences
We classify the nearly 12,000 sequences generated by our evolutionary algorithm in four classes, with the reminder that the identification of the ground state is only tentative for rejected sequences, as already discussed.
1. Selected sequences, belonging to the neutral network. Their number is a fraction $`f=0.050`$ of the total set.
2. Unstable sequences, $`f=0.172`$. Their lowest energy state coincides with $`𝐂^{}`$, but the stability condition is not fulfilled. The rejection was made in most cases already after the first MC run, if the condition $`\langle q(𝐂,𝐂^{})\rangle >0.75`$ was not fulfilled, otherwise the sequence was studied in another MC run.
3. Slow folding sequences, $`f=0.472`$. For such sequences, no structure with energy lower than $`E(𝐂^{},𝐒)`$ was found, but the MC algorithm did not reach the target structure $`𝐂^{}`$. In some cases ($`f=0.061`$) $`𝐂^{}`$ was reached in the first MC run but not in the second one, while in most cases the rejection was made already after the first run.
4. Structurally mutated sequences, $`f=0.306`$. For such sequences the lowest energy structure $`𝐂_0`$, has lower energy than the target structure:
$$E(𝐂_0,𝐒)<E(𝐂^{},𝐒).$$
(13)
Before describing separately the properties of these classes of sequences, we show that the value of the $`Z`$ score is able to distinguish statistically the different classes. The $`Z`$ score is used to evaluate the match between a sequence $`𝐒`$ and a structure $`𝐂^{}`$ taken from a pool of alternative structures. It is defined as
$$Z(𝐂^{},𝐒)=\frac{E(𝐂^{},𝐒)-\langle E(𝐂,𝐒)\rangle }{\sqrt{\langle E^2(𝐂,𝐒)\rangle -\langle E(𝐂,𝐒)\rangle ^2}},$$
(14)
where the brackets denote average with respect to the ensemble of alternative structures at high temperature. The more negative $`Z`$ is, the better is the match between the sequence $`𝐒`$ and the structure $`𝐂^{}`$ and the more stable is the structure $`𝐂^{}`$, provided that it is really the lowest energy structure. This measure is often used in computer experiments of fold recognition .
Following Mirny and Shakhnovich , we use a simplified measure of the $`Z`$ score, considering as alternative structures only maximally compact structures and approximating their average energy and their variance with, respectively, the average energy and the variance of the set of all possible contacts (we take into account the fact that in the simple cubic lattice the only possible contacts are those between monomers of different parity). More precisely, our definition is
$$Z^{}(𝐂^{},𝐒)=\frac{E(𝐂^{},𝐒)-Nc_{max}\left[U(S_i,S_j)\right]}{Nc_{max}\sqrt{\left[U^2(S_i,S_j)\right]-\left[U(S_i,S_j)\right]^2}},$$
(15)
where
$$\left[U(S_i,S_j)\right]=\frac{\underset{ij}{\sum }P_{ij}U(S_i,S_j)}{\underset{ij}{\sum }P_{ij}},$$
(16)
$`P_{ij}`$ is one if a contact between amino acids $`i`$ and $`j`$ is possible in some configurations, zero otherwise and $`Nc_{max}`$ is the number of contacts for maximally compact structures, $`Nc_{max}=40`$ for $`N=36`$ on the cubic lattice. $`Z^{}`$ is a good approximation to the $`Z`$ score and it is very easy to compute numerically, without the need for a simulation at a high temperature.
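The simplified score of Eqs. (15)-(16) requires only the native energy and the moments of the contact energies over the allowed contact pairs; the sketch below follows the formulas as written above, including the parity restriction on allowed contacts stated in the text.

```python
import numpy as np

def z_score_approx(seq, native_map, U, nc_max=40):
    """Z' of Eq. (15): native energy compared with the mean and spread of the contact
    energies U(s_i, s_j) over all contact pairs allowed on the simple cubic lattice."""
    seq = np.asarray(seq)
    N = len(seq)
    e_native = 0.0
    pair_energies = []
    for i in range(N):
        for j in range(i + 1, N):
            if native_map[i, j]:
                e_native += U[seq[i], seq[j]]
            # contacts are possible only between monomers of different parity
            # that are not adjacent along the chain
            if (i + j) % 2 == 1 and j > i + 1:
                pair_energies.append(U[seq[i], seq[j]])
    pair_energies = np.asarray(pair_energies)
    mean_u = pair_energies.mean()
    var_u = pair_energies.var()
    return (e_native - nc_max * mean_u) / (nc_max * np.sqrt(var_u))
```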
We plot in figure 4 the distribution of the $`Z`$ score for the four classes of sequences. For the structurally mutated sequences we evaluated both the $`Z`$ score of the target structure $`𝐂^{}`$ and the $`Z`$ score of the lowest energy structure found, $`𝐂_0`$. Note that the starting sequence $`𝐒(0)`$ has the lowest value of the $`Z`$ score, i.e. $`Z=-1.22`$.
The ranking of the $`Z`$ score for the different classes is as expected on the basis of their stability. The most negative values of $`Z`$ belong to the selected sequences, which are the most stable. Next come slow folding and unstable sequences. The $`Z`$ scores of mutated ground states, $`Z(𝐂_0,𝐒)`$, are less negative. Last in this ranking of stability comes $`Z(𝐂^{},𝐒)`$ for the structurally mutated sequences. In this case, we are sure that $`𝐂^{}`$ is not the ground state of the sequence.
These results confirm that the evaluation of the $`Z`$ score is an efficient criterion to decide whether $`𝐂^{}`$ is the ground state of a sequence $`𝐒`$. Nevertheless, the fact that the distributions of the $`Z`$ score relative to different classes have large overlaps should make one worry that the $`Z`$ score is not a precise criterion. In Fig.5 we report the results that we would get using a threshold value of $`Z`$, $`Z_c`$, as a criterion for fold recognition, instead of studying the sequences with Monte Carlo simulations. The dotted line represents the fraction of sequences that are unstable or slow folders according to our criterion and would be accepted with a criterion based on the $`Z`$ score. This goes from less than 60% for the most stringent threshold to a plateau value of about 80%. We can say very little about this class of sequences. It includes sequences that are really of lower quality than selected sequences, sequences that are of the same quality but were not selected because of the uncertainties of the selection procedure and sequences which do not fold to the target state. The dashed line represents sequences whose ground state is surely different from the target one. Their fraction increases from zero to about 20% as the threshold becomes less stringent. Finally, the solid line represents sequences that would be selected with our criterion but not with the $`Z`$ score criterion, as a fraction of the total number of selected sequences. Our results indicate that a good choice for the threshold could be $`Z_c\simeq -1.07`$: 14% of the sequences selected with this criterion would also be selected with our criterion, 80.5% would be sequences that do not fulfill the stability or fast folding conditions and 5.5% would be sequences which have certainly a different ground state, but probably similar to the target one, since the $`Z`$ score of mutated sequences is correlated to the similarity between the new ground state and the target state (see below). About 29% of the sequences that our criterion selects would be discarded with the $`Z`$ score criterion. This number becomes much larger if the threshold is made more stringent. Thus, the criterion based on $`Z`$ accepts most sequences that we reject and rejects a large fraction of those that we select.
The distribution for the slow folding class is quite similar to that of the unstable class. This is not surprising, since it is well known that stability and fast folding are correlated in lattice heteropolymer models . In particular, stability as we defined it requires a correlated energy landscape, which is considered a property of fast folding sequences. Thus these results encourage us in believing that the conditions we imposed and the algorithm to verify them were appropriate.
Interestingly, the distribution relative to $`Z(𝐂_0,𝐒)`$ lies to the right of the other ones, indicating that the stability of the structurally mutated ground states is rather low. This is not unexpected. In fact, structurally mutated sequences are only one point mutation apart from selected sequences, thus $`𝐂^{}`$ should still have a low energy and should decreases the stability of $`𝐂_0`$. We shall comment further on this point in the conclusions.
The $`Z`$ score correlates well with the native overlap $`\langle q(𝐂,𝐂^{})\rangle `$ that we assumed as a measure of thermodynamic stability (Fig.6) for unstable sequences (correlation coefficient $`r=0.48`$) and for mutated sequences (this is due to the fact that the overlap between the mutated ground state and the native state correlates with the $`Z`$ score). No correlation is visible for selected sequences, which always have $`\langle q(𝐂,𝐂^{})\rangle >0.9`$. For unfolded sequences the measure of $`\langle q(𝐂,𝐂^{})\rangle `$ is not possible (see Fig.6).
### A Selected sequences
With our criterion we accepted 566 sequences in six evolutionary trajectories. Such sequences are both fast folding and thermodynamically stable. These properties are not typical of random sequences .
For selected sequences there is a significant correlation between the energy $`E`$ of a conformation and its overlap $`q`$ with the native state. In Fig. 7 we represent the 500 lowest energy configurations of three different sequences as points in the $`(E,q)`$ plane. For every sequence, all points fulfill the inequality
$$1-E(𝐂,𝐒)/E(𝐂^{},𝐒)\ge \alpha (𝐒)\left(1-q(𝐂,𝐂^{})\right).$$
(17)
The adimensional parameter $`\alpha (𝐒)`$ is related to the energy gap of the ground state $`𝐂^{}`$ of sequence $`𝐒`$, as defined by Shakhnovich and coworkers . However, it characterizes more precisely the smoothness of the energy landscape. For the initial sequence we find $`\alpha (𝐒_0)=0.23`$. We did not measure $`\alpha (𝐒)`$ for all selected sequences, but it appears from few examples that its value does not decrease during the evolution of the protein. It is not surprising that selected sequences exhibit a correlated energy landscape, since the condition of thermodynamic stability that we impose rules out sequences with misfolded structures of low energy. Moreover, our selected sequences are fast folders, and one should expect that such sequences have a correlated energy landscape, since the relation between thermodynamic stability, fast folding and smoothness of the energy landscape has long been discussed . Furthermore, it has been found that models which cannot give rise to good folding sequences present weak correlations between $`q`$ and $`E`$ .
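From a sample of low-energy structures, the parameter $`\alpha (𝐒)`$ can be estimated as the largest slope compatible with the inequality above; a sketch follows, assuming the sampled energies and overlaps are available (e.g. from the 500 lowest-energy configurations).

```python
import numpy as np

def alpha_parameter(energies, overlaps, e_ground):
    """Estimate alpha(S) of Eq. (17): the largest slope such that
    1 - E/E_ground >= alpha * (1 - q) holds for every sampled structure,
    i.e. the minimum of (1 - E/E_ground)/(1 - q) over non-native structures."""
    energies = np.asarray(energies, dtype=float)
    overlaps = np.asarray(overlaps, dtype=float)
    mask = overlaps < 1.0                      # exclude the native structure itself
    ratios = (1.0 - energies[mask] / e_ground) / (1.0 - overlaps[mask])
    return ratios.min()
```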
Consistently, it appears from our data that the folding time is correlated to the stability (it has a negative correlation with the $`Z`$ score and positive with the native overlap), but we are not able to quantify this effect, since our measure of the folding time, based only on two simulations, is too imprecise.
It is also interesting that for several sequences the energy $`E(𝐂^{},𝐒)`$ is lower than for the starting sequence $`𝐒^{}`$, although this has been obtained by minimizing the energy $`E(𝐂^{},𝐒)`$ in sequence space. The reason for this is that the minimization method requires that the composition of the sequence is kept fixed, while we do not impose this condition. However, the $`Z`$ score reaches its lowest value $`Z=1.22`$ for the starting sequence $`S^{}`$.
We observe that the $`Z`$ score of the target state, $`Z(𝐂^{},𝐒)`$, defines a complex landscape in sequence space, with valleys separated by barriers. This result is illustrated in Fig.8, where we show the $`Z`$ score of sequences in the neutral network as a function of the number of steps $`l`$ along the network, starting from $`𝐒^{}`$.
The roughness of the energy landscape is consistent with the results reported in a recent preprint by Tiana et al. , who sampled the energy landscape in sequence space for a fixed structure using Monte Carlo simulations. The authors of observed a hierarchy of clusters and superclusters of low energy sequences, with superclusters characterized by few fixed amino acids in key positions and not connected by neutral paths. This conclusion might at first seem at odd with the fact that we found connected neutral paths extended in sequence space, but there should be no contradiction between the present work and the results of ref. , since they deal with different questions. We studied a single neutral network asking whether it is possible to find in it pairs of sequences at any distance, while Tiana et al. ask whether it is possible to find non homologous proteins which are in disconnected neutral networks and still fold to the same structure. It is possible that both answers are positive and that several extended but disconnected neutral networks exist for a given protein structure. Such a picture, if correct, implies that non homologous proteins sharing the same fold may have been originated either through convergent evolution, possibly on disconnected neutral networks, or through divergent evolution from a common anchestor on a single neutral network, but it is very difficult or even impossible to decide between these two possibilities on the basis of the sequence alone.
On the other hand, we should note that the definition of neutral path in our work is different than the one used in . In fact, while in is assumed that a path in sequence space is neutral if all sequences belonging to it have $`Z`$ score lower than a predetermined threshold, we accept only sequences for which we can show through Monte Carlo simulations that the target structure is the ground state, it is thermodynamically stable and easy to reach kinetically. The criterion adopted in has the advantage of being computationally very efficient and it correlates well with our criterion. In several cases, however, the two criteria give different answers, as it is shown in Fig. 5, and it is possible that networks which are disconnected according to the $`Z`$ score criterion are found to be connected according to our criterion. Further study is necessary in order to assess the relevance of such possibility.
It is rather interesting that the complex energy landscape in sequence space offers an explanation for the large variations of the neutral mutation rate, $`x(𝐒)`$. Since a more stable sequence can tolerate larger rearrangements without changing its ground state, one can expect that the fraction of neutral mutations from sequence $`𝐒`$ increases with the stability of the sequence. This expectation is in agreement with the results of ref. , where the stability is measured by the $`Z`$ score $`Z(𝐂^{},𝐒)`$ and neutrality of a mutation is recognized with a criterion based on the $`Z`$ score of the mutated sequence.
We study the correlation between the stability, measured either by the native overlap or by the $`Z`$ score, and the fraction of neutral neighbors $`x(𝐒)`$. Since we did not measure $`x(𝐒)`$, we have to rely on the trapping time $`\tau _t`$ spent by a trajectory on sequence $`𝐒`$. This variable is related to $`x(𝐒)`$ through the geometric distribution $`P_x(\tau )=x(1x)^{\tau 1}`$ of average value $`1/x(𝐒)`$ (11). We thus estimate the correlation coefficient between the $`Z`$ score and $`1/x(𝐒)`$, using the relations $`\left[Z(𝐒)/x(𝐒)\right]\left[Z(𝐒)\tau _t(𝐒)\right]`$, $`\left[1/x(𝐒)\right]\left[\tau _t(𝐒)\right]`$, $`\left[1/x(𝐒)^2\right]1/2\left[\tau _t^2(𝐒)+\tau _t(𝐒)\right]`$, where the square brackets indicate average on the neutral network. This treatment neglects the fact that our criterion is subject to some arbitrariness, since we can not measure with high precision the native overlap $`q`$ and the typical folding time upon which our criterion is founded. Thus, the correlation coefficient estimated in this way is underestimated. We find a correlation coefficient $`r=0.20`$ between the $`Z`$ score and $`1/x`$ and $`r=0.21`$ between the native overlap and $`1/x`$. Although the estimate is not accurate, this study confirms the existence of correlations between the stability of the native state and the neutral mutation rate. We show the correlation between $`Z`$ and $`\tau _t`$ in Fig.9.
The previous observation can be the basis for a quantitative explanation of the large fluctuations of the neutral mutation rate in this model and possibly in real proteins. It would be very interesting to investigate to which extent such fluctuations are responsible for the observed patterns of molecular evolution, which appear to be much more irregular than predicted on the basis of the simple “homogeneous” neutral theory.
### B Unstable and slow folding sequences
To these two classes belong all sequences that were rejected although no structure of energy lower than $`𝐂^{}`$ was found. We do not know for which fraction of these sequences the ground state coincides with $`𝐂^{}`$ and for which the ground state is changed. For unstable sequences $`𝐂^{}`$ is the lowest energy structure found and it is reached in at least one of the two independent MC runs, but the stability condition is not fulfilled. The rejection was made already in the first MC run if we found $`q>0.75`$. Thus we can not exclude that in the second run also the fast folding condition would fail. For slow folding sequences $`𝐂^{}`$ was not reached either in the first or in the second MC run. The folding time for unstable sequences appears to be correlated to the native overlap $`q`$, even if our data do not allow quantitative estimations.
### C Structurally mutated sequences
For sequences in this class we found putative ground states with energy lower than that of the target structure. A fraction $`f=0.306`$ of the examined sequences belongs to this class.
We first analyze the number $`N_c^0`$ of contacts in the mutated ground state. The distribution $`P(N_c^0)`$ is reminiscent of a bimodal distribution (see Fig. 10). As previously mentioned, the number of target contacts, $`N_c^{}=40`$, is the largest possible for a sequence length of $`N=36`$ residues. The twin peaks at $`N_c^0=37`$ and at $`N_c^0=40`$ derive from target-like ground states, while the constraints of the lattice geometry are probably responsible for the depression of the values of $`P(38)`$ and $`P(39)`$. The broad peak at $`N_c^0`$=34 is close to the number of contacts expected for random sequences, although slightly higher. As mentioned above, in the case of a random contact potential with the same mean and variance as the one that we used and a Gaussian distribution, the number of contacts in the ground states ranges typically from 29 to 33 .
The number of contacts is higher than for random sequences because the “native” contacts of $`𝐂^{}`$ are advantageous even in the structurally mutated sequences. This is confirmed by the fact that there is a strong correlation correlation between the number of contacts $`N_c^0`$ and the overlap $`q_0`$ between the new ground state and the target state $`𝐂^{}`$ The correlation coefficient is $`r=0.80`$. See insert in Fig.10. The two quantities are also correlated to the energy $`E_0`$ of the ground state: The more native-like the mutated ground state is, the more compact it is and the lower is its energy. The correlation coefficients are: $`r=0.31`$ between $`q_0`$ and $`E_0`$, $`r=0.32`$ ($`N_c^0`$ and $`E_0`$), $`r=0.54`$ ($`q_0`$ and $`Z`$ score) and $`r=0.47`$ ($`N_c^0`$ and $`Z`$ score). The strongest correlation observed is that between $`q_0`$ and the $`Z`$ score. The energy and the $`Z`$ score are weekly correlated ($`r=0.36`$).
The distribution of $`q_0`$ is also bimodal (see Fig. 11). The peak at high $`q`$ is due to native-like structures and the broader peak at $`q=0.3`$ is close to (but still significantly higher than) the typical overlap between random structures, $`q_{ran}=0.1`$ for chains of length $`N=36`$.
The overlap $`q_0`$ between the ground state and the target state is negatively correlated to the $`Z`$ score. This is shown in Fig.12, where all the structurally mutated sequences are represented in a scatter plot in the plane $`(q_0,Z)`$. Only structures which are very similar to the original native state have a $`Z`$ score in the range found for selected sequences. This result suggests that mutated ground states dissimilar from the original one are in most cases not stable enough to represent a new acceptable fold of the model protein. In other words, neutral networks of unrelated structures may lay far apart in sequence space. This feature of the model is consistent with the observation that biological evolution conserves the native fold of proteins even when their function changes substantially .
## V Discussion
We simulated evolution on neutral networks for a protein model with twenty amino acid types, contact energy functions and structures represented as self avoiding walks on the simple cubic lattice. Stability of the ground state is measured as the thermodynamic average of its overlap with alternative structures.
Our simulations show that neutral networks are extended in sequence space: pairs of sequences on the neutral network are almost as different as random sequences, even if they have exactly the same fold. This observation is consistent with what is known about protein evolution. In our 36mer chains, residues in all positions could be substituted, even if the two positions in the core are much more difficult to change, consistently with the results of Ref. .
The neutral network that we studied turned out to be rather irregular: the fraction $`x(𝐒)`$ of neutral mutations of a sequence $`𝐒`$ has large variations in sequence space. The value of $`x(𝐒)`$ is positively correlated with the stability of the native state, evaluated either through the average native overlap $`q(𝐂,𝐂^{})`$ or through the $`Z`$ score. This appears very reasonable: The more stable is a sequence to structure match, the less probable is that a mutation destabilizes it. This is also in agreement with the results of recent complementary studies . It thus seems that thermodynamic stability implies also stability with respect to mutations.
The fluctuations of $`x(𝐒)`$ in sequence space are important for the evolutionary dynamics. If $`x(𝐒)`$ is constant, Kimura’s theory of neutral evolution predicts that the overlap in sequence space relaxes exponentially to the asymptotic value corresponding to random sequences, at a rate independent of the size of the biological population in which the evolution takes place. Moreover, the number of substitutions in a protein sequence during an evolutionary trajectory lasting $`T`$ generations is predicted to be a Poissonian variable of mean value $`\mu xT`$. In this case, the fluctuations of the value of this variable in different species evolving for a time $`T`$ should grow only as $`\sqrt{T}`$. This result would sustain the molecular clock hypothesis, according to which the substitution process can be used as a clock to date speciation events. However, the molecular clock hypothesis has been heatedly debated in the last decade, and it has been shown that the number of substitutions fluctuates much more than predicted in the “homogeneous” neutral theory . This deviation from the prediction of Kimura’s theory has been interpreted as an indication that in most cases protein evolution is not neutral but adaptive. We suggest that the irregularity of protein evolution could be an intrinsic property of the energy landscape of neutral networks of protein sequences. More stringent statistical tests should be designed to distinguish this situation from adaptive evolution, that undoubtedly occurs in many cases.
Selected sequences have a rather correlated energy landscape, which yields short folding times and high thermodynamic stability. The native overlap that we use as a measure of thermodynamic stability correlates well with the $`Z`$ score but it gives more information on the absence of states of low energy unrelated to the native state and favors a more correlated energy landscape.
Sequences whose ground state coincides with the native state may be discarded either due to the lack of thermodynamic stability or because they fold too slowly. Both classes of sequences have similar properties, since stability and folding time are related quantities.
For about 30% of the attempted mutations the resulting sequence has a ground state different than the original one. The overlap $`q_0`$ of this new ground state with the target state has a bimodal distribution, but only structures very similar to the target one appear to fulfill our criterion of thermodynamic stability. This is not surprising, since the target structure must conserve a low energy in the mutated sequence, so that it is able to destabilize the new ground state. Therefore, it seems that contacts between neutral networks of unrelated structures are very rare, if a stability condition is required. A similar conclusion has been suggested in a numerical study of the two dimensional HP model .
An implication of this result is that it is difficult to switch from a structure to a different one through point mutations corresponding to thermodynamically stable proteins. This could explain why evolution changes so rarely the fold of a protein, while it is possible to engineer protein sequences with as much as 60% similarity with a natural protein and completely different fold.
We studied a model which has the advantage of being reliably computable, but at the price of sacrificing possibly important ingredients. We discuss here the ones that we judge the most serious:
1. We use a simple lattice model. This choice was made due to the necessity of identifying the native state of each generated sequence, and this is feasible only for lattice models. Lattice models, although often criticized , have been recognized to capture some of the most relevant thermodynamic features of the folding process , such as the existence of a unique ground state and the cooperativity of the transition. However, they do not capture essential features of real proteins as for instance the existence of secondary structures.
2. We simulated the evolution of only one target structure. It would be interesting to see how our results change by changing the structure, and which properties of the structure (for instance compactness, locality of interactions and so on) are important to determine the neutral mutation rate. However, it was argued that the small number of folds occurring in natural proteins (at most some thousands) could be the ones corresponding to the largest number of sequences in sequence space , so that structures characterized by a large neutral set, even if they are not typical, could be the most interesting ones from the biological point of view.
3. The length of the sequences examined is short, so that there are only two core residues. Considering more core residues could impose more constraints on the evolution and reduce the rate of neutral evolution. It would thus be interesting to make the same study for longer sequences.
4. We did not represent biological activity in the present protein model. This might be obtained by imposing more constraints on the residues taking part to the active site.
5. In our model of evolution we assume that the environment remains fairly constant, so that the native structure favored by natural selection does not change throughout the evolution. This hypothesis is not unreasonable if the protein examined is an enzyme performing some chemical activity, since the cells possess a high homeostasis, i.e. they can maintain a stable chemical-physical internal environment despite large perturbations in the external environment. However, it is quite likely that some large ecological and climatic changes have been responsible for molecular substitutions for which the neutral theory, and our model in particular, do not apply .
6. We consider only point mutations, and not insertions and deletions, which also play an important role in evolution.
In our opinion, these limitations should not modify the qualitative picture. The existence of neutral networks, the variability of neutral mutation rates and the difficulty to reach through point mutations very different structures corresponding to stable proteins are features of our model that appear to be reflected also in the evolution of real proteins.
## Acknowledgments
We acknowledge interesting discussions with Peter Grassberger, Helge Frauenkron, Erwin Gerstner, Walter Nadler, Peter Schuster, Anna Tramontano, Tim Gibson, Erich Bornberg-Bauer, Guido Tiana, Ricardo Broglia and Eugene Shakhnovich. This work was conceived during the workshop on Protein Folding organized at the ISI Foundation, Torino, Italy, February 9-13 1998. Part of the work was made during the Euroconference on ”Protein Folding and Structure Prediction” organized at the ISI Foundation, Torino, Italy, June 8-19, 1998. Computations were carried out at the HLRZ, Forschungszentrum Jülich. |
no-problem/9912/cond-mat9912455.html | ar5iv | text | # Theory of vortex lattice effects on STM spectra in d-wave superconductors
## Abstract
Theory of scanning tunneling spectroscopy of low energy quasiparticle (QP) states in vortex lattices of d-wave superconductors is developed taking account of the effects caused by an extremely large extension of QP wavefunctions in the nodal directions and the band structure in the QP spectrum. The oscillatory structures in STM spectra, which correspond to van Hove singularities are analysed. Theoretical calculations carried out for finite temperatures and scattering rates are compared with recent experimental data for high-$`T_c`$ cuprates.
The electronic structure of the mixed state in d-wave superconductors reveals a number of fundamentally new features (see and references therein) as compared to the case of s-wave compounds, where low lying quasiparticle (QP) states are bound to the vortex core and are weekly perturbed by the presence of neibouring vortices at magnetic fields $`HH_{c2}`$. The vanishing pair potential in the nodal directions results in the extremely large extension of QP wavefunctions which are sensitive to the superfluid velocity ($`𝐕_s`$) fields of all vortices and, thus, the electronic structure is influenced by the vortex lattice geometry. The resulting peculiarities of the local density of states (DOS) can be detected, e.g., by a scanning tunneling microscope (STM). In this paper we focus on the theory of scanning tunneling spectroscopy of low energy QP states in vortex lattices of d-wave superconductors and compare the theoretical calculations with recent experimental data for high-$`T_c`$ cuprates, where the dominating order parameter is believed to be of d-wave symmetry. Hereafter we assume the Fermi surface (FS) to be two-dimensional (2D), take the gap function in the form $`\mathrm{\Delta }_𝐤=2\mathrm{\Delta }_0k_xk_y/k_F^2`$ (the $`x`$ axis makes an angle $`\pi /4`$ with the $`a`$ axis of the $`CuO_2`$ planes). Let us orient $`𝐇`$ along the $`c`$ axis ($`H_{c1}HH_{c2}`$) and consider two types of vortex lattices: (I) rectangular lattice with primitive translations $`𝐚_1=a𝐱_0`$, $`𝐚_2=\sigma a𝐲_0`$; (II) centered rectangular lattice with $`𝐚_1=a𝐱_0`$, $`𝐚_2=a(𝐱_0/2\sigma 𝐲_0)`$, where $`H\sigma a^2=\varphi _0`$ is the flux quantum, and $`𝐱_0`$, $`𝐲_0`$, $`𝐳_0`$ are the unit vectors of the coordinate system.
Van Hove singularities. Our consideration is based on the analysis of the Bogolubov-de Gennes (BdG) equations for low energy excitations with momenta close to a certain gap node direction (e.g., $`𝐤_1=k_F𝐱_0`$): $`(\widehat{H}_0+\widehat{H}^{})\widehat{g}=\epsilon \widehat{g}`$, where $`\widehat{g}=(u,v)`$ is the QP wavefunction, $`\widehat{H}_0=V_F\widehat{\sigma }_z\widehat{p}_x+V_\mathrm{\Delta }\widehat{\sigma }_x\widehat{p}_y`$, $`\widehat{\sigma }_x,\widehat{\sigma }_z`$ are the Pauli matrices, $`\widehat{H}^{}=MV_FV_{sx}(1+\widehat{\sigma }_z)+MV_\mathrm{\Delta }V_{sy}\widehat{\sigma }_x`$, $`M`$ is the electron effective mass, $`\widehat{𝐩}=i\mathrm{}e𝐀/c`$, $`V_F=\mathrm{}k_F/M`$, $`V_\mathrm{\Delta }=2\mathrm{\Delta }_0/(\mathrm{}k_F)`$, $`𝐇=H𝐳_0`$, $`𝐀=Hy𝐱_0`$, $`𝐕_s=(V_{sx},V_{sy})`$. The spectrum of the Dirac Hamiltonian $`\widehat{H}_0`$ can be obtained using the usual quantization rule for a cyclotron orbit (CO) area . The periodic potential $`\widehat{H}^{}`$ removes the degeneracy of the discrete energy levels with respect to the CO center and induces a band structure in the spectrum . The general solution can be written in the form of a magnetic Bloch wave:
$$\widehat{g}=\underset{n}{}e^{ix(q_x+2\pi n/a)+2in\sigma q_ya}\widehat{G}(y2n\sigma a,𝐪),$$
(1)
where $`n`$ is an integer, and $`𝐪`$ is the quasimomentum lying within the first magnetic Brillouin zone (MBZ): $`\pi /(2a)<q_x<\pi /(2a)`$, $`\pi /(2\sigma a)<q_y<\pi /(2\sigma a)`$. The wavefunction $`\widehat{G}(y,𝐪)`$ is localized in the domain with the size $`L`$ determined by $`𝐪`$ and energy values. The potential $`\widehat{H}^{}`$ results in the splitting of the CO near MBZ boundaries (see Fig. 1) and the spectrum consists of branches which correspond to the splitted portions of the CO.
For large Dirac cone anisotropy $`\alpha =V_F/V_\mathrm{\Delta }1`$ ($`\alpha =k_F\xi _0/2`$) and $`\epsilon <0.5\epsilon ^{}`$ ($`\epsilon ^{}=\pi \mathrm{}V_F/a\mathrm{\Delta }_0\sqrt{H/H_{c2}}`$) the harmonics in Eq. (1) do not overlap ($`L<2\sigma a`$) and one can replace $`\widehat{H}^{}`$ by the potential $`\widehat{H}^{}_x`$ averaged in the $`x`$ direction (see ). Such a simplification is a natural consequence of a small size of the cyclotron orbit (CO1 in Fig. 1) in the nodal direction as compared to the size of the MBZ. The energy spectrum consists of branches $`\epsilon _n(q_x=\pi Q/a)=\epsilon ^{}E_n(Q,\sigma \alpha )`$, which are displayed in Fig. 2 in the first MBZ for $`\pi \sigma \alpha =50`$ and $`\pi \sigma \alpha =100`$.
The number of energy branches which cross the Fermi level can be determined as follows: $`N2q_{}^{}/\delta q_{}2\sqrt{\pi \sigma \alpha }`$, where $`q_{}^{}`$ is the minimum possible size of the CO in the $`q_{}`$ direction and $`\delta q_{}`$ is the distance between MBZ boundaries. Each energy branch has an extremum as a function of the momentum $`q_x`$ near the MBZ boundary at a certain $`\stackrel{~}{\epsilon }_n`$ (we neglect here additional extrema which appear due to the exponentially small splitting of energy levels near the points of intersection of the branches in the $`EQ`$ plane). Due to the one-dimensional (1D) nature of the low energy spectrum the divergent contributions to the DOS take the form: $`\delta N(\epsilon )|\epsilon \stackrel{~}{\epsilon }_n|^{1/2}`$ ($`\epsilon >\stackrel{~}{\epsilon }_n`$ for energy minima and $`\epsilon <\stackrel{~}{\epsilon }_n`$ for maxima). The distance between these peaks $`\delta \epsilon \epsilon ^{}/(2\sigma \alpha )`$ coincides with a characteristic energy scale corresponding to van Hove singularities which occur when the CO intersects MBZ boundaries in the $`q_y`$ direction (see Fig. 1). The crossover between 1D and 2D regimes in the band spectrum occurs at $`\epsilon _c0.5\epsilon ^{}`$, when the CO size in the $`q_{}`$ direction becomes larger than the size of the first MBZ (CO2 in Fig. 1). For $`\epsilon \stackrel{_>}{_{}}\epsilon _c`$ the $`q_y`$-dependence of energy becomes essential and results in the appearance of 2D critical points, i.e. 2D local maxima (or minima) and saddle-points. Thus, instead of square-root van Hove singularities we obtain a set of discontinuities and logarithmic peculiarities ($`\delta N(\epsilon )ln|1\epsilon /\stackrel{~}{\epsilon }_n|`$) in the DOS, respectively. Obviously, these 2D singularities are more sensitive to temperature and finite lifetime effects and, consequently, the suppression of the corresponding oscillatory structure in the DOS should be stronger in the high energy regime. The above analysis can be generalized for gap nodes at $`𝐤=\pm k_F𝐲_0`$: the corresponding energy scales take the form $`\delta \epsilon 0.5\epsilon ^{}/\alpha `$, $`\epsilon _c0.5\epsilon ^{}/\sigma `$.
Even in the low energy regime the DOS oscillations with the energy scale $`\delta \epsilon `$ are surely smeared due to a finite scattering rate $`\mathrm{\Gamma }`$ and temperature and can be observed only for a moderate Dirac cone anisotropy and rather large magnetic fields. Comparing our results with a numerical solution of the BdG equations for $`\sigma =1`$ and $`\alpha =5/2`$ we find that the above mechanism gives a good estimate of the energy scale of the double-peak structure in the tunneling conductance at the core center at $`H/H_{c2}=0.3`$ ($`\delta \epsilon 0.1\mathrm{\Delta }_0`$) and can explain the absence of this structure at low fields $`H/H_{c2}=0.05`$ due to temperature broadening ($`T=0.1T_c>\delta \epsilon 0.05\mathrm{\Delta }_0`$). In principle, van Hove singularities may account for peaks with a large energy gap $`\mathrm{\Delta }_0/4`$ observed experimentally at the vortex centers in YBaCuO at $`H6T`$ provided we assume $`\alpha 1`$. Unfortunately the latter assumption is not consistent with the results of thermal conductivity measurements ($`\alpha 14`$), and, thus, the nature of the experimentally observed peaks is still unclear. It is also necessary to stress here that the critical points in the DOS are a direct consequence of perfect periodicity and the introduction of rather strong disorder surely remove these singularities.
Zero-bias conductance. Hereafter we neglect the DOS oscillations, discussed above, and consider the peculiarities of the zero-bias tunneling conductance $`g(𝐫)`$ starting from a modified semiclassical model proposed in . According to this approach, the Doppler shift of the QP energy, which plays important role for $`\epsilon \stackrel{_<}{_{}}\mathrm{\Delta }_0\sqrt{H/H_{c2}}`$, appears to be averaged in the nodal direction due to an extremely large size of a semiclassical wave packet in this energy interval. Within such an approximation a diagonal (retarded) Green’s function can be written in the form:
$$G^R(𝐤,\epsilon ,𝐫)=\frac{\epsilon +i\mathrm{\Gamma }\mathrm{}𝐤_F𝐕_{av}+ϵ_𝐤}{(\epsilon +i\mathrm{\Gamma }\mathrm{}𝐤_F𝐕_{av})^2\mathrm{\Delta }_𝐤^2ϵ_𝐤^2},$$
(2)
where $`ϵ_𝐤`$ is the normal state electron dispersion, $`𝐕_{av}=𝐕_s_x+𝐕_s_y`$. The scattering rate $`\mathrm{\Gamma }`$ should be determined self-consistently: $`\mathrm{\Gamma }=N(\mathrm{\Gamma },\epsilon )/(2N_F\tau )`$ (Born limit), $`\mathrm{\Gamma }=N_F\mathrm{\Gamma }_u/N(\mathrm{\Gamma },\epsilon )`$ (unitary limit), where $`2\tau `$ and $`N_F`$ are the relaxation time and DOS at the Fermi level in the normal state, $`\mathrm{\Gamma }_u=n_{imp}/(\pi N_F)`$, $`n_{imp}`$ is the concentration of of point potential scatterers, and $`N(\mathrm{\Gamma },\epsilon )=ImG^Rd^2k/(2\pi ^3)`$ is the local DOS.
Let us first consider the effect of finite temperature on the zero-bias conductance in the clean limit ($`\mathrm{\Gamma }0`$). The expression for the normalized conductance reads:
$$\stackrel{~}{g}(𝐫)=\frac{g(𝐫)}{g_N}=\underset{\mathrm{}}{\overset{+\mathrm{}}{}}\frac{N(\mathrm{\Gamma }=0,\epsilon )d\epsilon }{4N_FTcosh^2\left(\frac{\epsilon }{2T}\right)}=$$
$$\frac{T\mathrm{𝑙𝑛}2}{\mathrm{\Delta }_0}+\frac{T}{2\mathrm{\Delta }_0}\underset{i=x,y}{}\mathrm{𝑙𝑛}cosh\frac{\epsilon _i^{}\mathrm{\Phi }_i}{4T}$$
(3)
Here $`g_N`$ is the normal state conductance, $`\mathrm{\Phi }_x=\mathrm{\Phi }(x/R_x)`$, $`\mathrm{\Phi }_y=\mathrm{\Phi }(y/R_y)`$, $`\mathrm{\Phi }(z)=2z(2m+1)`$ for $`m<z<m+1`$ ($`m`$ is an integer), $`R_y`$ ($`R_x`$) is the distance between the lines parallel to the $`x`$ ($`y`$) axis and passing through the vortex centers, $`\epsilon _x^{}=\pi \mathrm{}V_FHR_x/\varphi _0`$, $`\epsilon _y^{}=\pi \mathrm{}V_F\sigma /R_y`$, For type I (II) lattices we have $`R_x=a`$, $`R_y=\sigma a`$ ($`R_x=a/2`$, $`R_y=\sigma a`$). One can separate two qualitatively different regimes in the behavior of the conductance:
(i) superflow dominated regime $`T\epsilon _{x,y}^{}`$,
$$\stackrel{~}{g}\frac{1}{8}\sqrt{\frac{\pi \sigma H}{2H_{c2}}}F_1(x,y),$$
(4)
(ii) temperature dominated regime $`T\epsilon _{x,y}^{}`$,
$$\stackrel{~}{g}\frac{T\mathrm{𝑙𝑛}2}{\mathrm{\Delta }_0}+\frac{\pi \mathrm{\Delta }_0\sigma H}{32TH_{c2}}F_2(x,y),$$
(5)
where $`F_1(x,y)=|\mathrm{\Phi }_x|(R_x/R_y)+|\mathrm{\Phi }_y|`$ and $`F_2(x,y)=\mathrm{\Phi }_x^2(R_x/R_y)^2+\mathrm{\Phi }_y^2`$. In Fig. 3 we display the contour plots of the functions $`F_1(x,y)`$, $`F_2(x,y)`$ for a square lattice of type I (which is close to the one observed experimentally in YBaCuO ). There are two consequences of an increase in temperature: (i) first, the spatial dimensions of peaks in the local DOS become rather small comparing to the intervortex distance only for $`T>T^{}\mathrm{\Delta }_0\sqrt{H/H_{c2}}`$; (ii) second, the amplitude of the peaks appears to be essentially suppressed in the limit $`TT^{}`$. For magnetic fields $`H6T`$ (which is typically the field of STM experiment ) one obtains $`T^{}20K`$. Thus, we conclude that the finite temperature effects can not explain neither the narrow zero-bias conductance peaks observed in YBaCuO nor the absence of these peaks in BiSrCaCuO at $`T=4.2K`$. To explain these experimental facts it is necessary to take account of the finite lifetime effects which can stronly influence on the behavior of the DOS, as it follows from the results of Refs. obtained on the basis of the usual semiclassical approach with a local Doppler shift. Starting from the modified semiclassical model (2) we obtain the following expression for the tunneling conductance at $`T=0`$:
$$\stackrel{~}{g}=\frac{N(\mathrm{\Gamma },\epsilon =0)}{N_F}=\frac{\mathrm{\Gamma }}{4\pi \mathrm{\Delta }_0}\left(4\mathrm{𝑙𝑛}\frac{\mathrm{\Delta }_0}{\mathrm{\Gamma }}+\underset{i=x,y}{}f_i\right)$$
(6)
$$f_i=\frac{\epsilon _i^{}|\mathrm{\Phi }_i|}{\mathrm{\Gamma }}tan^1\frac{\epsilon _i^{}|\mathrm{\Phi }_i|}{2\mathrm{\Gamma }}\mathrm{𝑙𝑛}\left(1+\frac{(\epsilon _i^{}\mathrm{\Phi }_i)^2}{4\mathrm{\Gamma }^2}\right)$$
Obviously Born scatterers result only in a moderate change of the DOS (see ) since the corresponding $`\mathrm{\Gamma }`$ value for $`\mathrm{\Delta }_0\tau 1`$ is very small comparing to $`\epsilon _{x,y}^{}`$ and the conductance is given by Eq. (4). On the contrary, in the unitary limit the expression (4) is valid only in the clean case $`\mathrm{\Gamma }_u\mathrm{\Gamma }_{x,y}^{}0.1\epsilon _{x,y}^2/\mathrm{\Delta }_0`$ (for a square lattice $`\mathrm{\Gamma }_{x,y}^{}0.1\mathrm{\Delta }_0H/H_{c2}`$). In the dirty limit $`\mathrm{\Gamma }_u\mathrm{\Gamma }_{x,y}^{}`$ we obtain:
$$\stackrel{~}{g}\stackrel{~}{g}(H=0)\left(1+\frac{\mathrm{\Delta }_0H\sigma }{64\mathrm{\Gamma }_uH_{c2}}F_2(x,y)\right),$$
(7)
where $`\stackrel{~}{g}(H=0)0.5\sqrt{\mathrm{\Gamma }_u/\mathrm{\Delta }_0}`$. In the vicinity of each vortex center the local DOS exhibits a fourfold symmetry with maxima along the nodal directions in a good agreement with numerical calculations based on the Eilenberger theory . For $`H=6T`$ finite lifetime effects become substantial if we assume $`\mathrm{\Gamma }_u\stackrel{_>}{_{}}10^2\mathrm{\Delta }_0`$. Thus, our approach allows to explain rather narrow conductance peaks (see Fig. 3b) observed near vortex centers in YBaCuO , even without taking account of the nontrivial structure of the tunneling matrix element, discussed in . With a further increase of the $`\mathrm{\Gamma }_u`$ value the amplitude of the peaks at the vortex centers vanishes: $`\delta \stackrel{~}{g}\sqrt{\mathrm{\Delta }_0/\mathrm{\Gamma }_u}(H/H_{c2})`$. Such a high sensitivity of the $`\delta \stackrel{~}{g}`$ value to finite lifetime effects can probably explain the difficulties in the observation of these peaks in the mixed state of BiSrCaCuO . Note in conclusion that according to Eq. (7) the spatially averaged DOS in the dirty limit varies as $`H`$ rather than $`HlnH`$ (the latter dependence has been predicted in within the semiclassical approach taking account of the local $`𝐕_s`$ value).
I am pleased to acknowledge useful discussions with Dr.N.B.Kopnin, Dr.Yu.S.Barash, Dr.A.A.Andronov, Dr.I.D.Tokman, and Dr.D.A.Ryndyk. |
no-problem/9912/astro-ph9912519.html | ar5iv | text | # Evidence for a 304-day Orbital Period for GX 1+4
## Introduction
GX 1+4 is a unique accretion-powered pulsar in a low-mass x-ray binary system (LMXB). In the 1970s the pulsar exhibited a spin-up behavior with a rate of $`\dot{P}2`$ s/year, the hightest among all persistent X-ray pulsars, and was one of the brightest and hardest X-ray sources in the sky. After an extended low-intensity state in the early 1980s, GX 1+4 re-emerged in a spin-down state mak88 and has produced occasional short-term variations of $`\dot{P}`$ ever since. The optical counterpart is a M5 III giant star, V2116 Oph, in a rare type of symbiotic system gla73 ; dav77 ; charo97 . The identification was made secure by a ROSAT accurate position pre95 and by the discovery of optical pulsations consistent with the spin period of the neutron star nos97 ; per97 . In 1991, BATSE initiated a continuous and nearly uniform monitoring of GX 1+4, confirming the spin-down trend with occasional dramatic spin-up/down torque reversal events cha96 ; chaetal97 . GX 1+4 has a much longer (factor of $``$ 100) spin period than the other four known LMXB accretion-powered pulsars and its orbital period has been known to be at least one order of magnitude longer than the periods of the other systems charo97 . Attempts to find the orbital period by Doppler shifts of the pulsar pulse timingcha96 or optical lines dav77 ; dot81 ; soo95 have both been inconclusive so far. Using a small number of X-ray measurements carried out during the spin-up phase of GX 1+4 in the 1970s, Cutler, Dennis & Dollan cdd86 produced an ephemeris for predicting periodical enhancements in the spin-up rate of the neutron star and claimed that this could be due to an elliptical orbit with a 304-day period. Here we report the discovery of a 304-day modulation in the BATSE frequency data and discuss its implications to the models for this source.
## Data Analysis and Results
The frequency and the pulsed flux data between Julian Day (JD) 2448376.5 and 2451138.5 (i.e., 1991 April 29 to 1998 October 20) used in this work were obtained from Chakrabarty cha96 and from the BATSE public domain data. The 20–50 keV pulsed signals are extracted from DISCLA 1.024s channel 1 data. 15-day mean values for the fluxes and pulse frequencies of GX 1+4 were calculated for the entire dataset.
A dataset of GX 1+4 residual pulsation frequencies was obtained from the frequency history by subtraction of a standard cubic spline function to remove low frequency variations in the spin-down trend. The fitting points are mean frequency values calculated over suitably chosen time intervals. The results of the spline fitting are fairly insensitive to intervals greater than $``$ 200 days between fitting points (we have used $`\mathrm{\Delta }t=215`$ days). The pulsed X-ray flux, frequency history and residual frequencies are shown in Figure 1 as functions of time.
We have carried out a power spectrum analysis to search for periodicities of less than 1000 days in both the residual frequency and the pulsed flux data. A Lomb-Scargle periodogram pre92 , suitable for time series with gaps, shows a significant periodic signal at 302.0 days (Fig. 2) in the residual frequency time series. The power spectrum shows a red noise with an approximate power-law index index of $``$2. In order to estimate the statistical significance of the detection, a series of numerical simulations of the frequency time series with 1-sigma gaussian deviations were performed per99 . The simulations show that the use of the 215-d spline, besides providing an effective filter for frequencies below $`2\times 10^3\mathrm{d}^1`$, does not produce power in any specific frequency in the range of interest. By comparing the amplitude of our 302-day peak with the local value obtained by the mean of the numerical simulations, we obtain a statistical significance of 99.98% for the detection. Epoch folding the data using the 302-day period yields a 1-$`\sigma `$ uncertainty of 1.7 days.
By analyzing the variation of the period of GX 1+4 during the spin-up phase in the 1970s, Cutler, Dennis & Dolan cdd86 proposed a 304 -day orbital period and an ephemeris to predict the events of enhanced spin-up: $`T=\mathrm{JD}2,444,574.5\pm 304n,`$ where $`n`$ is an integer. This ephemeris is based on four events discussed by the authors, whose existence was inferred from ad-hoc assumptions and extrapolations of the observations. The projected enhanced spin-up events derived from that ephemeris for the epochs contained in the BATSE dataset, represented as solid vertical lines in the lower panel of Fig. 1, are in excellent agreement with the BATSE reduced spin-down and spin-up events. The BATSE dataset is obviously significantly more reliable than the one given by Cutler, Dennis & Dolan cdd86 since it is based on 9 well-covered events measured with the same instrument as opposed to the 4 events discussed by those authors. The striking agreement of their ephemeris with the BATSE observations is very conspicuous and give a very strong support to the claim that the orbital period of the system is indeed $``$ 304 days. Taking integer cycle numbers, with the $`T0`$ epoch of Cutler, Dennis & Dolan cdd86 as cycle $`23`$, and performing a linear least-squares fit to the frequency residuals seen in the lower panel of Fig. 1, we find that the following ephemeris can represent the time of occurrence $`T`$ of the maxima in the frequency residuals: $`T=\mathrm{JD}2,448,571.3(\pm 3.2)\pm 303.8(\pm 1.1)n`$, where $`n`$ is any integer. The events predicted by the above ephemeris are shown as vertical dotted lines in the three panels of Fig. 1. The value of $`303.8\pm 1.1`$ days for the orbital period is consistent with the one obtained through power spectrum analysis performed on the BATSE data, which gives further support for the period determination.
## Discussion
In the 1970s, when the measurements used by Cutler, Dennis & Dolan cdd86 were carried out, the source was in a spin-up extended state. They proposed that the periodic occurrence of enhanced spin-up events was due to the fact that the system was in a elliptical orbit and the periastron passages would occur when $`\dot{P}`$ is maximum, as expected in standard accretion from a spherically expanding stellar wind. However, it is widely accepted today that the system has an accretion disk. Since the neutron star is currently spinning-down, the radius at which the magnetosphere boundary would corotate with the disk is probably smaller than the magnetosphere radius. Since the pulse period is $``$ 120 s and the luminosity is typically $`10^{37}`$ erg/s, it can be shown per99 that the period is probably close to the equilibrium value, for which the two radii are equal. This allows spin-down to occur even though accretion continues, the centrifugal barrier not being sufficiently effective whi88 . Assuming that the elliptical orbit is the correct interpretation for the presence of the modulation, the mass accretion rate (and hence the luminosity) should increase as the neutron star approaches periastron. The spin-down torque then gets smaller and the neutron star decelerates at a slower rate per99 . Occasionally, due to the highly variable mass loss rate of the red giant, the neutron star will spin-up for a brief period of time during periastron, as observed in the BATSE frequency curve in events 5, 7 and 9. According to this picture, one would expect an increase in X-ray luminosity at periastron. Although this is only marginally indicated in the BATSE pulsed flux light curve, it should be pointed out that total flux data from the ASM/RXTE for the epoch MJD 50088 to 51044 does not correlate significantly with the BATSE pulsed flux, indicating that the pulsed flux may not be a good tracer of the accretion luminosity in this system. Furthermore, the periodic $`5\mu `$Hz excursions in the residual frequency would lead to very low-significance variations in the X-ray flux measured by the ASM per99 .
An alternative interpretation for the observed modulation would be the presence of oscillation modes in the red giant star. However, the stability of the infrared magnitudes of V2116 Oph charo97 preclude it from being a long-period variable, since these stars undergo regular $``$ 1 mag variations in the infrared whi87 .
We conclude by pointing out that, given the 304-day orbital period and the spectral and luminosity characteristics of V2116 Oph, it can be shown that the companion in this system is probably not filling its Roche lobe and the accretion disk forms from the slow, dense stellar wind of the red giant per99 . A more thorough covering of the X-ray luminosity of the system, with high sensitivity and spanning several cycles, will be very important to test the elliptical model for GX 1+4.
We thank Dr. Bob Wilson from NASA Marshall Space Flight Center for gently providing us BATSE frequency and flux data on GX 1+4. M. P. is supported by a FAPESP Postdoctoral fellowship at INPE under grant 98/16529-9. J. B. thanks CNPq for support under grant 300689/92-6. F. J. acknowledges support by PRONEX/FINEP under grant 41.96.0908.00. |
no-problem/9912/cond-mat9912266.html | ar5iv | text | # Hall cross size scaling and its application to measurements on nanometer-size iron particle arrays
\[
## Abstract
Hall crosses were used to measure the magnetic properties of arrays of ferromagnetic, nanometer-scale iron particles. The arrays typically consist of several hundred particles of 9 – 20 nm in diameter. It is shown that the sensitivity of the measurements can be improved by matching the areas of the Hall cross and the array grown onto it by at least an order of magnitude. We predict that single particles of diameter as small as 10 nm can be measured if grown onto a Hall cross of appropriate size.
\]
There continues to be great interest in small magnetic particles and well-arranged particle arrays. This research is driven by the demand for an understanding of the physics and potential application of such particle arrays as advanced magnetic storage media. The challenge, however, is not only in the fabrication of such particles but also in the measurement of magnetic properties of small volumes. One obvious choice for the latter is the use of dc micro-SQUIDs with which excellent experiments have been performed. These measurements, however, are ordinarily restricted to low temperatures. MFM investigations can provide spatially resolved information on the magnetic state of the particles and are of special merit if conducted in applied fields. They suffer, however, in that quantitative results are difficult to extract. Hall magnetometry has the advantages of being versatile and comparatively simple in set-up. Furthermore, temperature and magnetic field strength are not restricted. Applications range from flux characterization in superconductors to scanning Hall probes. The sensitivity and the accessible temperature range depend on the material used for fashioning the device.
Hall gradiometry has been used to measure the magnetization of small particle arrays. Here, nanometer-scale particles of regular shape and arrangement were grown onto III-V semiconductor Hall crosses. This allowed magnetization measurements over a broad temperature range (up to 100 K). Moreover, enhanced interactions between the particles have been studied. Thus, important issues in data storage applications (superparamagnetic limit, interactions) can be addressed.
In this letter, we show that the sensitivity of Hall measurements can be significantly increased by matching the sizes of the active area of the Hall cross and of the particle array. Hall voltages calculated from the magnetic stray field of the particles are compared to measured values. We predict that single nanometer-size particles can be measured by appropriately small Hall crosses. The results can be applied to any magnetic object to be measured by Hall magnetometry.
Iron particle arrays were grown by a combination of chemical vapor deposition (CVD) and scanning tunneling microscopy (STM). This method has successfully been used to fabricate particles from 9 to 20 nm in diameter, 50 - 250 nm in height and with interparticle distances down to 80 nm onto gold and permalloy. During the growth process, vaporous iron pentacarbonyl is introduced into the STM chamber and decomposed within the electric field of the biased tip. For negative tip bias voltage, the iron deposit grows on the sample surface. At the same time, the tip is retracted, keeping the distance between the deposit’s top and the tip constant by maintaining a constant tunneling current of 50 pA. When the deposit has grown to the desired height (measured via the tip retraction) the tip is retracted completely and moved to the next location on the sample surface where the process is repeated to
form the particle array. An advantage of this fabrication procedure is that the particles’ height and location (with respect to each other and to features on the sample surface) can easily be controlled by steering the STM-tip. This feature becomes even more important as the size of the Hall crosses is decreased. It should be noted that the magnetic cores of the particles (consisting of bcc iron as revealed by TEM) are surrounded by a carbon coating which reduces oxidation and aging of the samples under air.
For the magnetic measurements, III-V semiconductor Hall crosses were prepared by photolithography and wet chemical etching of the substrate (GaAs-Ga<sub>0.7</sub>Al<sub>0.3</sub>As two-dimensional electron system (2DES), $`n_{2D}`$ = 1.2 $`\times \mathrm{\hspace{0.25em}10}^{11}`$cm<sup>-2</sup>, $`\mu `$(30 K) = 4.5 $`\times \mathrm{\hspace{0.25em}10}^5`$ cm<sup>2</sup>/Vs). A 40 nm thin gate was deposited onto the Hall bars, on top of which the arrays were directly grown. The SEM image of a typical array is presented in Fig. 1.
The measured Hall voltage originates from the magnetic stray field of the particles. A two-step procedure was developed to analyze the Hall voltage: In a first step, the stray field (i.e., its $`z`$-component perpendicular
to the plane of the 2DES) of each particle at the depth at which the 2DES located is calculated analytically. The contributions of each particle are then summed up to get the local $`z`$-component of the stray field emanating from the whole array. In a second step, the Hall voltage produced by this magnetic stray field within the active area of the Hall cross is calculated. The parameters needed are either known from the fabrication process of the 2DES or can easily be measured. With all other parameters known, the procedure can be used to estimate the mean diameter of the particle iron core from the measured Hall voltage.
In order to optimize the sensitivity of the Hall device the efficiency of the conversion of the particles’ stray field into Hall voltage, i.e. the second step, has to be evaluated. Properties most easily controlled in the fabrication process are the relative size and location of the array with respect to the Hall cross (we do not intend to discuss the properties of different 2DES materials in this letter). In Fig. 2 (circles) the calculated, relative Hall voltage produced by a typical test array is presented if the array were grown onto Hall crosses of different sizes (all other parameters remained unchanged including the drive current). Obviously, the array’s stray field is most effectively converted into Hall voltage if the Hall cross size does not exceed the array size. In this case, all electrons in the active area of the Hall cross are influenced by the stray field and the stray field influences the potential over the complete width of the voltage leg. Compared to arrays fabricated earlier one should be able to increase the sensitivity by an order of magnitude just by matching the sizes of array and Hall cross. The calculations were performed assuming aligned centers of array and Hall cross (filled circles). If the Hall cross is smaller than the array, only that corresponding portion of the array causes a Hall voltage. The total stray field at the center of an array is, however, slightly smaller than at an edge or even at a corner of the array. This effect which has been noted before can be explained by the fact that the closest (and therefore most effective) fluxlines from center particles form closed lines within the active area of the Hall cross and do not contribute to the Hall voltage. Hence, the Hall response is reduced at the array’s center. In contrast, for a small Hall cross located with its edge underneath the corresponding edge of the array, the Hall response is slightly increased.
Another parameter that could somewhat be influenced is the separation in $`z`$-direction between the 2DES layer and the particles (e.g. via the gate thickness). This, however, is expected to have only a minor influence on the Hall response (Fig. 2, crosses and upper scale).
To make use of the predicted increase in sensitivity we prepared Hall crosses of approximately 3 $`\times `$ 3 $`\mu `$m<sup>2</sup> in size (cf. Fig. 1). As described before, arrays of several hundred iron particles with interparticle distances of 150 nm were then grown onto these Hall crosses. The dimensions
were chosen to match the size of the Hall crosses to be grown onto and to approximate arrays fabricated earlier. This permits direct comparison of the measured Hall voltages. Magnetic measurements were performed from 10 – 100 K and for different angles of the applied field. Due to their elongated shape (aspect ratio of approx. 5:1) the particles possess a large shape anisotropy making their long axis an easy magnetization direction (EMD) along $`z`$. A mean particle switching field of $`H_{sw}=`$160 mT for fields applied parallel to the EMD (Fig. 3, top panel) was observed. Apart from the typical switching field distribution (about 30 mT) there was a small portion of particles with a distinguishable higher $`H_{sw}`$ of about 230 mT as indicated by “bumps” in the corresponding parts of the magnetization curves. Such a twofold switching field distribution may be caused by a distribution of the magnetic core diameter of the particles. It seems more likely, however, that the majority of the particles consisted of more than one grain. Therefore, they have a smaller switching field than single crystalline particles. Such structural differences would naturally account for distinguishable $`H_{sw}`$-values.
For fields perpendicular to the EMD, the magnetization behavior was controlled by reversible rotation (Fig. 3, lower panel). For increasing fields strength, the particle magnetization rotated toward the field direction, i.e. toward an orientation perpendicular to the $`z`$-direction. Thus, a decreasing Hall voltage is measured. For intermediate angle (shown in the center panel of Fig. 3 is the curve measured at 60) both reversible rotation and switching contribute to the overall magnetization behavior: the former ends to a small decrease of the $`z`$-component of the magnetization (visible at fields below about 130 mT) whereas the latter dominates around 200 mT. An estimate of the mean value of the shape anisotropy constant from the reversible part of this curve (based on the Stoner-Wohlfarth model) yielded $`K_S0.3`$ MJm<sup>-3</sup>. This value is about 40% of the number expected from the particles’ shape. This again indicates that most of the particles are polycrystalline. The peculiar shape of the curve measured at 90 can then be explained by a distribution of $`K_S`$-values with maximum $`K_S`$-values as high as 0.45 MJm<sup>-3</sup>.
The Hall voltages measured for 0 and 60 exceeded those measured earlier by more than an order of magnitude (after adjusting for changed experimental conditions, e.g. drive current, carrier concentration) in good agreement with our predictions. At zero field, the measured Hall voltage should not depend on the orientation of the applied field if all magnetizations of the individual particles point in the same direction. Obviously, this condition was not fulfilled for curves measured for 90. In fact, for fields decreasing from a value well above the anisotropy field the particle magnetization could rotate toward either direction along the EMD. From the 0 and 60-curves a mean particle core diameter of 17 nm was estimated (particles were grown 80 nm in height).
We emphasize that the resulting Hall voltage does not substantially depend on the absolute size of the array or the Hall cross–as long as they match. In the present experiment the noise of the Hall voltage was measured to be 0.04 $`\mu `$V$`/\sqrt{Hz}`$ at 30 K and zero field and increased to about 0.07 $`\mu `$V$`/\sqrt{Hz}`$ at 100 K and 1.0 T. We predict a Hall voltage of $`\mathrm{\hspace{0.17em}0.24}\mu `$V for a single particle of 10 nm diameter grown onto a $`400\times 400`$ nm<sup>2</sup> Hall cross (experimental conditions as for the measurements in Fig. 3, no depletion effects of the 2DES were taken into consideration). This voltage would exceed the highest noise level by a factor of 5. Hence, we expect to be able to measure any number of particles, from a single particle to a few particles up to arrays of several hundred particles, by growing them on Hall crosses of appropriate size. Here, STM assisted growth appears to be the perfect tool.
The authors wish to thank A. C. Gossard for providing the 2DES material and A. Anane for helpful discussions. |
no-problem/9912/quant-ph9912002.html | ar5iv | text | # Comment on ‘Counterfactual entanglement and non-local correlations in separable states’.
## Abstract
The arguments of Cohen \[Phys. Rev. A 60, 80 (1999)\] against the ‘ignorance interpretation’ of mixed states are questioned. The physical arguments are shown to be inconsistent and the supporting example illustrates the opposite of the original statement. The operational difference between two possible definitions of mixed states is exposed and the inadequacy of one of them is stressed.
PACS 03.65.Bz,
In a recent paper Cohen constructs an example of counterfactual entanglement and also deals with two possible scenarios that lead to the appearence of mixed states. These states arise either from tracing out unavailable degrees of freedom (‘ancilla interpretation’) or from the actual mixing of pure states (‘ignorance interpretation’). Cohen states that the ignorance interpretation is ‘unsatisfactory’. The abovementioned counterfactual entanglement and the arguments against the ignorance interpretation allow to question whether it is ‘appropriate to label weighted sums of projections on product states as “separable mixed states”’.
The relevance of counterfactual entanglement to the physical meaning of separable states and its deeper implications on the non-locality problems will not be discussed here. However, while some of Cohen’s arguments against the ignorence interpretation are of philosophical nature and their acceptance or refutal is a matter of personal opinion, several claims can be rigorously analyzed and are demonstrably wrong. It should be stressed, however, that while their refutal may weaken the case against the existence of separable states, it by no means undermines the validity of Cohen’s example of the countrfactual entanglement. Moreover, it should be noted that some of Cohen’s arguments against the ignorance interpretation of mixed states follow a similar discussion in the book of d’Espagnat .
One of the arguments of is that ‘in some cases different statistical mixtures of pure states may appear to be representable by the same density matrix but may nevertheless be experimentally distinguishable’. An example which is meant to illustrate this argument is a comparison between two large sets of spin-$`\frac{1}{2}`$ particles. In the first set exactly $`N`$ out of $`2N`$ particles are prepared in spin-up state in $`z`$ direction and the other half is prepared spin-down in $`z`$ direction. Another set is prepared according to the same recipe, but along the $`x`$-axis. As shown below the example illustrates exactly the opposite of Cohen’s claim.
We begin by noting that the density operator that we ascribe to a state depends not only on the preparation procedure, but also on the experimental techniques that can be used and on the number of systems available to the test. Moreover, either there is a finite probability to distinguish between the states, or they have the same density matrix (again, the confidence level and the possibility of the procedure itself depend on the quality of experimental techniques).
Let us consider the simplest distinguishability criterion: probability of error. It deals with two states whose density matrices $`\rho _1`$ and $`\rho _2`$ are known. An observer is given one of them and is allowed to perform any lawful quantum operation. At the end, the observer should give an unambiguous answer which state it was. The probability to err is
$$P_E=\frac{1}{2}-\frac{1}{4}\mathrm{Tr}|\rho _1-\rho _2|,$$
(1)
and it depends only on the density matrices. In particular, identity of density matrices is equivalent to total indistinguishability of the states and distinguishable states cannot ‘appear to be representable by the same density matrix’.
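As an aside for the reader who wants to experiment with this criterion, the trace-norm expression of Eq. (1) is straightforward to evaluate numerically. The sketch below (in Python, with all names and the qubit examples chosen purely for illustration and not taken from the original) computes $`P_E`$ for a pair of density matrices:

```python
import numpy as np

def error_probability(rho1, rho2):
    """Minimum error probability of Eq. (1): P_E = 1/2 - (1/4) Tr|rho1 - rho2|."""
    diff = rho1 - rho2
    # Tr|A| is the sum of absolute eigenvalues (the difference is Hermitian here)
    trace_norm = np.abs(np.linalg.eigvalsh(diff)).sum()
    return 0.5 - 0.25 * trace_norm

# Example: identical maximally mixed states are indistinguishable (P_E = 1/2),
# while orthogonal pure states are perfectly distinguishable (P_E = 0).
I2 = np.eye(2) / 2
up = np.array([[1, 0], [0, 0]], dtype=float)
down = np.array([[0, 0], [0, 1]], dtype=float)
print(error_probability(I2, I2))    # 0.5
print(error_probability(up, down))  # 0.0
```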
Now let us examine Cohen’s example. We suppose that the detectors are ideal and consider two possible ways to identify the states. First, we analyze the experiment where particles are tested individually along the same axis (say, the $`z`$-axis). If we have the first set, we should get exactly $`N`$ up and $`N`$ down results, while the probabilities of these outcomes for the second set are distributed binomially. It is easy to see that the probability of error in the identification is
$$P_E=\frac{1}{2}\times \frac{1}{2^{2N}}\frac{(2N)!}{(N!)^2}\approx \frac{1}{2\sqrt{\pi N}},$$
(2)
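A quick numerical comparison of the exact binomial expression in Eq. (2) with its large-$`N`$ asymptotics can be done along the following lines (illustrative sketch; the function names are ours):

```python
from math import comb, pi, sqrt

def p_error_exact(N):
    # Exact expression of Eq. (2): (1/2) * C(2N, N) / 2^(2N)
    return 0.5 * comb(2 * N, N) / 4**N

def p_error_asymptotic(N):
    # Large-N asymptotics 1 / (2 sqrt(pi N))
    return 1.0 / (2.0 * sqrt(pi * N))

for N in (10, 100, 1000):
    print(N, p_error_exact(N), p_error_asymptotic(N))
```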
On the other hand, multiparticle measurements not only improve the distinguishability, but also highlight the differences between preparation procedures. If the particles can be considered as distinguishable (like qubits in the register of a quantum computer), then no symmetrisation is needed. The preparations can be represented by two pure states $`\psi _1`$ and $`\psi _2`$, which are composed of direct products of eigenstates of $`\sigma _z`$ and eigenstates of $`\sigma _x`$, respectively. If we take into account that the overlap between any eigenstate of $`\sigma _z`$ and any eigenstate of $`\sigma _x`$ is $`|\langle u_i|v_j\rangle |=\frac{1}{\sqrt{2}}`$, the overlap between $`\psi _1`$ and $`\psi _2`$ is
$$|\langle \psi _1|\psi _2\rangle |=\frac{1}{2^N}.$$
(3)
If the particles are linearly polarized photons and polarization is the only degree of freedom, then the correct description of both preparations is given by symmetric states in the Fock space. If $`\widehat{a}^{}`$ and $`\widehat{b}^{}`$ are creation operators in two perpendicular directions, and $`|0,0`$ is the vacuum state, then the first state is given by
$$\psi _1=(\widehat{a}^{\dagger }\widehat{b}^{\dagger })^N|0,0\rangle /N!=|N,N\rangle ,$$
(4)
while the second state is
$$\psi _2=(\widehat{a}^{\dagger }+\widehat{b}^{\dagger })^N(\widehat{a}^{\dagger }-\widehat{b}^{\dagger })^N|0,0\rangle /(2^NN!).$$
(5)
Moreover, it is easy to see that these states are in fact almost (or even exactly) orthogonal. Using formulas 0.157 of , we find that their overlap $`|\langle \psi _1|\psi _2\rangle |`$ for even $`N`$ is
$$|\langle \psi _1|\psi _2\rangle |=\frac{1}{2^N}\frac{1}{N!}\left|\sum _{k=0}^{N}(-1)^k\binom{N}{k}^2\right|=\frac{1}{2^N((N/2)!)^2},$$
(6)
while for odd $`N`$ the states are orthogonal,
$$|\langle \psi _1|\psi _2\rangle |=0.$$
(7)
(This happens, e.g., for $`N=1`$, giving a spin-1 triplet state). To compare two modes of investigation we note that for two pure states $`\psi _1`$ and $`\psi _2`$
$$P_E(\psi _1,\psi _2)=\frac{1}{2}\left(1-\sqrt{1-|\langle \psi _1|\psi _2\rangle |^2}\right).$$
(8)
Thus for the distinguishable particles we have
$$P_E=\frac{1}{2^{2N+2}}.$$
(9)
For the photons we have
$$P_E\approx |\langle \psi _1|\psi _2\rangle |^2/4=\frac{1}{(2\pi e)^2}\left(\frac{e}{N}\right)^{2N+2}$$
(10)
for small overlaps (large $`N`$) when $`N`$ is even, and $`P_E=0`$ (exact distinguishability) for odd $`N`$. Thus we see that while we ought to agree with Cohen that the states are distinguishable, it is impossible to ascribe to them the same density matrix (especially if they are orthogonal).
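The expressions entering Eqs. (6), (9) and (10) can be compared numerically as follows (a sketch that simply evaluates the quoted formulas for a few even $`N`$; all names are illustrative):

```python
from math import factorial, pi, e

def overlap_even_N(N):
    # |<psi1|psi2>| for even N, as given in Eq. (6)
    assert N % 2 == 0
    return 1.0 / (2**N * factorial(N // 2) ** 2)

def p_error_photons_stirling(N):
    # Right-hand side of Eq. (10)
    return (1.0 / (2 * pi * e) ** 2) * (e / N) ** (2 * N + 2)

for N in (4, 8, 12):
    p_photons = overlap_even_N(N) ** 2 / 4        # left-hand side of Eq. (10)
    p_distinguishable = 2.0 ** (-(2 * N + 2))     # Eq. (9)
    print(N, p_photons, p_error_photons_stirling(N), p_distinguishable)
```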
To refine the above analysis we clarify what is exactly meant by an ‘ignorance mixture’. Two definitions are found in literature. Namely, the density operator is defined either as
$$\rho ^{(1)}=\sum _iN_i|u_i\rangle \langle u_i|/N,\qquad N=\sum _iN_i,$$
(11)
where $`N_i`$’s are exact numbers (‘type-1 preparation’ ,), or
$$\rho ^{(2)}=\sum _ip_i|u_i\rangle \langle u_i|,\qquad \sum _ip_i=1,$$
(12)
where $`p_i`$’s are the probabilities of having the states $`|u_i\rangle `$ (‘type-2 preparation’,). It is not stated explicitly in to which definition the author subscribes (however, all examples are of type-1).
Type-1 and type-2 preparations of ‘the same state’ $`\rho `$ are not equivalent. They may be distinguishable (with finite probability) if more than one copy is available. Consider a preparation of type-1 of exactly $`N`$ spin-up and $`N`$ spin-down particles along some known axis, and a preparation of type-2 with $`p_1=p_2=N/2N`$ along the same axis. When detectors are ideal and particles are tested individually, the probability to err in their identification is given by Eq. (2). The immediate consequence of this result is that the type-1 preparation state cannot, in general, reproduce the local statistics of the EPR experiment. These statistics are reproduced by the maximally mixed state $`\rho ^{(2)}`$. This result, together with Eq. (2) and the above examples, implies that despite its appearance, the type-1 preparation is not described by the maximally mixed density matrix (the correct description depends on the exact details of the preparation and may even be given by a pure state).
The subtle dependence of the ascribed $`\rho `$ on the details of the preparation and the observation procedures is further illustrated by the examples below. Consider again Cohen’s example, but let us replace type-1 preparations by those of type-2, with $`p_i=N_i/N`$.
If only individual particles can be tested, then for both preparations the probabilities of the outcomes are described by the same binomial distribution. Thus they are indistinguishable and the correct description of the states should be given by mixed density matrices $`\rho _1=\rho _2=\mathrm{𝟏}/2`$, where $`\mathrm{𝟏}`$ is the unit matrix. This result is independent of the number of available particles.
In the case of distinguishable particles the multiparticle state reduces to the direct product of the individual density matrices. Obviously, in this case it is also impossible to distinguish between the preparations.
On the other hand, mixtures of different $`2N`$-boson states lead to a different conclusion. Since the expressions become cumbersome for large $`N`$, let us consider the simplest case of two-particle states. Now we have
$$\rho _1=\frac{1}{4}|0,2\rangle \langle 0,2|+\frac{1}{2}|1,1\rangle \langle 1,1|+\frac{1}{4}|2,0\rangle \langle 2,0|,$$
(13)
and
$`\rho _2`$ $`=`$ $`{\displaystyle \frac{1}{4}}|\varphi (0,2)\rangle \langle \varphi (0,2)|+{\displaystyle \frac{1}{2}}|\varphi (1,1)\rangle \langle \varphi (1,1)|`$ (15)
$`+{\displaystyle \frac{1}{4}}|\varphi (2,0)\rangle \langle \varphi (2,0)|,`$
with
$$\varphi (k,2-k)=\frac{1}{2\sqrt{k!}\sqrt{(2-k)!}}(\widehat{a}^{\dagger }+\widehat{b}^{\dagger })^k(\widehat{a}^{\dagger }-\widehat{b}^{\dagger })^{2-k}|0,0\rangle .$$
(16)
A straightforward calculation gives
$$P_E=\frac{1}{2}-\frac{1}{4}\times \frac{1}{2}=\frac{3}{8},$$
(17)
and there is a nonvanishing probability to distinguish between these two states.
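This value is easy to verify numerically. Writing the states $`\varphi (k,2-k)`$ of Eq. (16) out in the two-photon Fock basis $`\{|2,0\rangle ,|1,1\rangle ,|0,2\rangle \}`$, one finds $`\mathrm{Tr}|\rho _1-\rho _2|=1/2`$, which reproduces Eq. (17) via Eq. (1). The following sketch (our own check, with illustrative names) performs this computation:

```python
import numpy as np

# Basis ordering: |2,0>, |1,1>, |0,2>
rho1 = np.diag([0.25, 0.5, 0.25])                       # Eq. (13)

s = 1 / np.sqrt(2)
phi_20 = np.array([0.5, s, 0.5])      # phi(2,0) expanded in the Fock basis
phi_11 = np.array([s, 0.0, -s])       # phi(1,1)
phi_02 = np.array([0.5, -s, 0.5])     # phi(0,2)

rho2 = (0.25 * np.outer(phi_02, phi_02)
        + 0.5 * np.outer(phi_11, phi_11)
        + 0.25 * np.outer(phi_20, phi_20))              # Eq. (15)

trace_norm = np.abs(np.linalg.eigvalsh(rho1 - rho2)).sum()
print(trace_norm)                     # 0.5
print(0.5 - 0.25 * trace_norm)        # 3/8, Eq. (17)
```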
It is also stated in that ‘given an EPR spin-singlet pair each separate particle can be described by a mixed state, with the other particle then taking the role of ancilla. But if we then assume that this mixed state can also be given an ignorance interpretation, inconsistencies immediately arise’. We claim that no inconsistencies arise, as shown below. The local statistics of both EPR particles can be described by the maximally mixed density matrix. A possible interpretation of this local state is that it originates from a type-2 preparation procedure, since type-1 is inconsistent with experiment. However, when the correlations between the particles are analyzed, the description of the complete system in terms of mixed states should be dropped, regardless of our interpretation of Bell’s theorem or philosophical attitudes, but simply because it is inconsistent with experiment.
To summarize, we see that the type-1 interpretation is unsuitable for the description of mixed states, but because of reasons different from those that are given in . On the other hand, the type-2 realisation of mixed states leads to a consistent description of physical systems and its predictions are identical to those obtained with the help of ancilla interpretation.
Acknowledgments
Discussions with Asher Peres and his help in the preparation of the manuscript are gratefully acknowledged. Several of the above examples are extensions of the exercises from chapters 2 and 5 of his book . I also thank Oliver Cohen for clarifying important points of his article. This work is supported by a grant from the Technion Graduate School.
Figure 1: Feynman graphs for the decay 𝑡→𝑐𝐻 in the unitary gauge (𝑚_𝑐=0 is assumed).
LC-TH-1999-012
ROME1-1281/99
December 1999
The $`t\rightarrow cH`$ decay width in the standard model.
B. Mele $`^{a,b},`$ S. Petrarca $`^{b,a}`$ and A. Soddu <sup>a,c</sup>
<sup>a</sup> INFN, Sezione di Roma 1, Rome, Italy
<sup>b</sup> Rome University “La Sapienza”, Rome, Italy
<sup>c</sup> University of Virginia, Charlottesville, VA, USA
Abstract
The $`t\rightarrow cH`$ decay width has been computed in the standard model with a light Higgs boson. The corresponding branching fraction has been found to be in the range $`B(t\rightarrow cH)\sim 10^{-13}÷10^{-14}`$ for $`M_Z<m_H<2M_W`$. Our results correct the numerical evaluation usually quoted in the literature.
The one-loop flavor-changing transitions, $`t\rightarrow cg`$, $`t\rightarrow c\gamma `$, $`t\rightarrow cZ`$ and $`t\rightarrow cH`$, are particularly interesting among the top quark rare decays. Indeed, new physics, such as supersymmetry, an extended Higgs sector and heavier-fermion families, could conspicuously affect the rates for these decays. In the standard model (SM), these processes are in general quite suppressed due to the Glashow-Iliopoulos-Maiani (GIM) mechanism, controlled by the light masses of the $`b,s,d`$ quarks circulating in the loop. The corresponding branching fractions $`B_i=\mathrm{\Gamma }_i/\mathrm{\Gamma }_T`$ are further decreased by the large total decay width $`\mathrm{\Gamma }_T`$ of the top quark. The complete calculations of the one-loop flavour-changing top decays have been performed, before the top quark experimental observation, in the paper by Eilam, Hewett and Soni (also based on Eilam, Haeri and Soni ). Assuming $`m_t=175`$ GeV, the value of the total width $`\mathrm{\Gamma }_T\simeq \mathrm{\Gamma }(t\rightarrow bW)`$ is $`\mathrm{\Gamma }_T\simeq 1.55`$ GeV, and one gets from ref.
$$B(t\rightarrow cg)\simeq 4\cdot 10^{-11},\qquad B(t\rightarrow c\gamma )\simeq 5\cdot 10^{-13},\qquad B(t\rightarrow cZ)\simeq 1.3\cdot 10^{-13}.$$
(1)
In the same ref. , a much larger branching fraction for the decay $`t\rightarrow cH`$ is presented as a function of the top and Higgs masses (in Fig. 1 the relevant Feynman graphs for this channel are shown).
For $`m_t\simeq 175`$ GeV and 40 GeV$`<m_H<2M_W`$, the value
$$B(t\rightarrow cH)\sim 10^{-7}÷10^{-8}$$
(2)
is obtained, by means of the analytical formulae presented in ref. for the fourth-generation quark decay $`b^{\prime }\rightarrow bH`$, in a theoretical framework assuming four flavour families. Such relatively large values for $`B(t\rightarrow cH)`$ look surprising, since the topology of the Feynman graphs for the different one-loop channels is similar, and a GIM suppression, governed by the down-type quark masses, is acting in all the decays.
In order to clarify the situation, we recomputed from scratch the complete analytical decay width for $`t\rightarrow cH`$, as described in . The corresponding numerical results for $`B(t\rightarrow cH)`$, when $`m_t=175`$ GeV and $`\mathrm{\Gamma }(t\rightarrow bW)\simeq 1.55`$ GeV, are reported in Table 1. We used $`M_W=80.3`$ GeV, $`m_b=5`$ GeV, $`m_s=0.2`$ GeV, and for the Kobayashi-Maskawa matrix elements $`|V_{tb}^{*}V_{cb}|=0.04`$. Furthermore, we assumed $`|V_{ts}^{*}V_{cs}|=|V_{tb}^{*}V_{cb}|`$. As a consequence, the $`m_d`$ dependence in the amplitude drops out.
Our results are several orders of magnitude smaller than the ones reported in the literature. In particular, for $`m_H\simeq M_Z`$ we obtain
$$B_{new}(t\rightarrow cH)\simeq 1.2\cdot 10^{-13}$$
(3)
to be compared with the corresponding value presented in ref.
$$B_{old}(t\rightarrow cH)\simeq 6\cdot 10^{-8}.$$
(4)
In order to trace back the source of this inconsistency, we performed a thorough study of the analytical formula in eq. (3) of ref. for the decay width of the fourth-family down-type quark $`b^{\prime }\rightarrow bH`$, which is the basis for the numerical evaluation of $`B(t\rightarrow cH)`$ presented in ref. . The result of this study was that we agreed with the analytical computation in , but we disagreed with the numerical evaluation of $`B(t\rightarrow cH)`$ in .
The explanation for this situation can be ascribed to some error in the computer code used by the authors of ref. to work out their Fig. 3. This explanation has been confirmed to us by one of the authors of ref. (J.L.H.), and by the erratum appeared consequently , whose evaluation we now completely agree with.
In the following we give some heuristic considerations useful for understanding the correct order of magnitude of the rate for the decay $`t\rightarrow cH`$. The comparison between the rates for $`t\rightarrow cZ`$ and $`t\rightarrow cH`$ and the corresponding rates for the tree-level decays $`t\rightarrow bWZ`$ and $`t\rightarrow bWH`$, when $`m_H\simeq M_Z`$, can give some hint on this order of magnitude. In fact, the latter channels can be considered a sort of lower-order parent processes for the one-loop decays, as can be seen in Fig. 2, where the relevant Feynman graphs are shown.
Indeed, the Feynman graphs for $`t\rightarrow cZ`$ and $`t\rightarrow cH`$ can be obtained by recombining the final $`b`$ quark and $`W`$ into a $`c`$ quark in the three-body decays $`t\rightarrow bWZ`$ and $`t\rightarrow bWH`$, respectively, and by adding analogous contributions where the $`b`$ quark is replaced by the $`s`$ and $`d`$ quarks. Then, the depletion of the $`t\rightarrow cH`$ rate with respect to the parent $`t\rightarrow bWH`$ rate is expected to be of the same order of magnitude as the depletion of $`t\rightarrow cZ`$ with respect to $`t\rightarrow bWZ`$, for $`m_H\simeq M_Z`$. In fact, the GIM mechanism acts in a similar way in the one-loop decays into $`H`$ and $`Z`$.
The $`t\rightarrow bWZ`$ and $`t\rightarrow bWH`$ decay rates have been computed, taking into account crucial $`W`$ and $`Z`$ finite-width effects, in ref. . For $`m_H\simeq M_Z`$, the two widths are comparable. In particular, for $`m_t\simeq 175`$ GeV, one has
$$B(t\rightarrow bWZ)\simeq 6\cdot 10^{-7},\qquad B(t\rightarrow bWH)\simeq 3\cdot 10^{-7}.$$
(5)
From , $`B(t\rightarrow cH)\simeq 6\cdot 10^{-8}`$ for $`m_H\simeq M_Z`$. Accordingly, the ratio of the one-loop and tree-level decay rates is
$$r_H=\frac{B(t\rightarrow cH)}{B(t\rightarrow bWH)}\simeq 0.2$$
(6)
to be confronted with
$$r_Z=\frac{B(t\rightarrow cZ)}{B(t\rightarrow bWZ)}\simeq 2\cdot 10^{-7}.$$
(7)
On the other hand, $`r_H`$ and $`r_Z`$ are related to the quantity
$$\left(\frac{g}{\sqrt{2}}|V_{tb}^{*}V_{cb}|\frac{m_b^2}{M_W^2}\right)^2\sim 10^{-8}$$
(8)
(where $`V_{ij}`$ are the Kobayashi-Maskawa matrix elements) arising from the higher-order in the weak coupling and the GIM suppression mechanism of the one-loop decay width. The large discrepancy between the value of the ratio $`r_H`$ in eq. (6) and what was expected from the factor in eq. (8), which, on the other hand, is supported by the value of $`r_Z`$, was a further indication that the values for $`B(tcH)`$ reported in eq. (2) could be incorrect.
Indeed, the new value of $`B(t\rightarrow cH)`$ in eq. (3) gives $`r_H\simeq 4\cdot 10^{-7}`$.
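For the reader's convenience, the ratios above and the order-of-magnitude factor of eq. (8) can be reproduced with a few lines of arithmetic (illustrative sketch; the value $`g\simeq 0.65`$ for the SU(2) gauge coupling is an assumption on our part, not quoted in the text):

```python
from math import sqrt

# Branching fractions quoted in the text
B_tcZ, B_tbWZ = 1.3e-13, 6e-7      # eqs. (1) and (5)
B_tcH_new, B_tbWH = 1.2e-13, 3e-7  # eqs. (3) and (5)

print(B_tcZ / B_tbWZ)        # r_Z ~ 2e-7, eq. (7)
print(B_tcH_new / B_tbWH)    # r_H ~ 4e-7 with the corrected B(t -> cH)

# Order-of-magnitude factor of eq. (8); g ~ 0.65 is an assumed input
g, VtbVcb, mb, MW = 0.65, 0.04, 5.0, 80.3
print((g / sqrt(2) * VtbVcb * mb**2 / MW**2) ** 2)   # of order 1e-8
```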
In conclusion, we have pointed out that one of the numerical results of ref. , establishing a relatively large branching ratio for the decay $`t\rightarrow cH`$ in the SM, has been overestimated. The correct numerical estimates are shown in Table 1. We find $`B(t\rightarrow cH)\simeq 1\cdot 10^{-13}÷4\cdot 10^{-15}`$ for $`M_Z<m_H<2M_W`$. Such a small rate will not be measurable even at the highest luminosity accelerators that are presently conceivable. Any eventual experimental signal in the rare $`t`$ decays will definitely have to be ascribed to some new physics effect.
We thank V.A. Ilyin for discussions and suggestions.
# Spin correlations in nonlinear optical response: Light-induced Kondo effect
## Abstract
We study the role of spin correlations in nonlinear absorption due to optical transitions from a deep impurity level to states above a Fermi sea. We demonstrate that the Hubbard repulsion between two electrons at the impurity leads to a logarithmic divergence in the third-order optical susceptibility $`\chi ^{(3)}`$ at the absorption threshold. This divergence is a manifestation of the Kondo physics in the nonlinear optical response of Fermi sea systems. We also show that, for off-resonant pump excitation, the pump-probe spectrum exhibits a narrow peak below the linear absorption onset. Remarkably, the light-induced Kondo temperature, which governs the shape of the Kondo-absorption spectrum, can be tuned by varying the intensity and frequency of the pump.
There are two prominent many-body effects in the linear absorption spectrum due to optical transitions from a localized impurity level to the continuum of states above a Fermi sea (FS). First is the Mahan singularity due to the attractive interaction between the FS and the localized hole. Second is the Anderson orthogonality catastrophe due to the readjustment of the FS density profile during the optical transition. Both effects have long become textbook material. The role of many-body correlations in the nonlinear optical response has only been investigated during the last decade . Recently, there has been a growing interest in the coherent ultrafast dynamics of the FS systems at low temperatures.
In this paper, we suggest a new many-body effect in the nonlinear absorption of a FS system with a deep impurity level. This effect originates from the spin correlations between the photoexcited and the FS electrons. We note that a number of different intermediate processes contribute to the third–order optical susceptibility $`\chi ^{(3)}`$. It is crucial that, in the system under study, some of the intermediate states involve a doubly-occupied impurity level. For example, the optical field can first cause a transition of a FS electron to the singly-occupied impurity level, which thus becomes doubly-occupied, and then excite both electrons from the impurity level to the conduction band. This is illustrated in Fig. 1(a). Important is that, while on the impurity, the two electrons experience a Hubbard repulsion. Our main observation is that such a repulsion gives rise to an anomaly in $`\chi ^{(3)}`$. The origin of this anomaly is intimately related to the Kondo effect.
To be specific, we restrict ourselves to pump-probe spectroscopy, where a strong pump and a weak probe optical field are applied to the system and the optical polarization along the probe direction is measured. We only consider near-threshold absorption at zero temperature and assume that the pump frequency is tuned below the onset of optical transitions from the impurity level so that dephasing processes due to electron-electron and electron-phonon interactions are suppressed. Under such excitation conditions, the following Hamiltonian describes the system: $`H_{tot}=H+H_1(t)+H_2(t)`$, where
$$H=\sum _{𝐤\sigma }\epsilon _kc_{𝐤\sigma }^{\dagger }c_{𝐤\sigma }+\epsilon _d\sum _\sigma d_\sigma ^{\dagger }d_\sigma +\frac{U}{2}\sum _{\sigma \ne \sigma ^{\prime }}\widehat{n}_\sigma \widehat{n}_{\sigma ^{\prime }},$$
(1)
is the Hamiltonian in the absence of optical fields; here $`c_{𝐤\sigma }^{}`$ and $`d_\sigma ^{}`$ are conduction and localized electron creation operators, respectively, ($`\widehat{n}_\sigma =d_\sigma ^{}d_\sigma `$); $`\epsilon _k`$ and $`\epsilon _d`$ are the corresponding energies, and $`U`$ is the Hubbard interaction (all energies are measured from the Fermi level). The coupling to the optical fields is described by the Hamiltonian $`H_i(t)=M_i(t)\widehat{T}^{}+h.c.`$ where $`\widehat{T}^{}=_{𝐤\sigma }c_{𝐤\sigma }^{}d_\sigma `$, ($`i=1,2`$ denotes the probe and pump, respectively) with $`M_i(t)=e^{i𝐤_i𝐫i\omega _it}\mu _i(t)`$. Here $`_i(t)`$, $`𝐤_i`$, and $`\omega _i`$ are the pump/probe electric field amplitude, direction and central frequency, respectively, and $`\mu `$ is the dipole matrix element. The pump-probe polarization is obtained by expanding the optical polarization, $`\mu \widehat{T}`$, to the first order in $`H_1`$ and keeping the terms propagating in the probe direction:
$`P(t)=i\mu {\displaystyle _{\mathrm{}}^t}𝑑t^{}M_1(t^{})\left[\mathrm{\Phi }(t)|\widehat{T}𝒦(t,t^{})\widehat{T}^{}|\mathrm{\Phi }(t^{})\mathrm{\Phi }(t^{})|\widehat{T}^{}𝒦(t^{},t)\widehat{T}|\mathrm{\Phi }(t)\right],`$ (2)
where $`𝒦(t,t^{})`$ is the evolution operator for the Hamiltonian $`H+H_2(t)`$ and the state $`|\mathrm{\Phi }(t)`$ satisfies the Schrödinger equation $`i_t|\mathrm{\Phi }(t)=[H+H_2(t)]|\mathrm{\Phi }(t)`$.
The third order polarization is obtained by expanding $`𝒦(t,t^{})`$ and $`|\mathrm{\Phi }(t)`$ up to the second order in $`H_2`$. Below we consider sufficiently large values of $`U`$ so that, in the absence of optical fields, the ground state of $`H`$, $`|\mathrm{\Omega }_0`$, represents a singly-occupied impurity and full FS. For large $`U`$, the doubly-occupied impurity states are energetically unfavorable and can be excluded from the expansion of the polarization (2) with respect to $`H_2`$. The third-order pump-probe polarization then takes the form $`P^{(3)}(t)=e^{i𝐤_1𝐫i\omega _1t}\stackrel{~}{P}^{(3)}`$ with
$`\stackrel{~}{P}^{(3)}=i\mu ^4{\displaystyle _{\mathrm{}}^t}𝑑t^{}_1(t^{})e^{i\omega _1(tt^{})}\left[Q_1(t,t^{})+Q_1^{}(t^{},t)+Q_2(t,t^{})+Q_3(t,t^{})\right],`$ (3)
where
$`Q_1(t,t^{})={\displaystyle _{\mathrm{}}^t^{}}𝑑t_1{\displaystyle _{\mathrm{}}^{t_1}}𝑑t_2f(t_1,t_2)F(t,t^{},t_1,t_2),`$ (4)
$`Q_2(t,t^{})={\displaystyle _t^{}^t}𝑑t_2{\displaystyle _t^{}^{t_2}}𝑑t_1f(t_1,t_2)F(t,t_2,t_1,t^{}),`$ (5)
$`Q_3(t,t^{})={\displaystyle _{\mathrm{}}^t^{}}𝑑t_1{\displaystyle _{\mathrm{}}^t}𝑑t_2f(t_1,t_2)F(t_1,t^{},t,t_2).`$ (6)
Here we denoted $`f(t_1,t_2)=_2(t_1)_2(t_2)e^{i\omega _2(t_1t_2)}`$, and
$`F(t,t^{},t_1,t_2)=`$ $`\mathrm{\Omega }_0|\widehat{T}e^{iH(tt^{})}\widehat{T}^{}e^{iH(t^{}t_1)}\widehat{T}e^{iH(t_1t_2)}\widehat{T}^{}|\mathrm{\Omega }_0`$ (7)
$`=`$ $`{\displaystyle \underset{\mathrm{𝐩𝐪𝐤}^{}𝐤\lambda s\sigma ^{}\sigma }{}}A_{\mathrm{𝐩𝐪𝐤}^{}𝐤}^{\lambda s\sigma ^{}\sigma }e^{i(\epsilon _p\epsilon _d)(tt^{})i(\epsilon _k\epsilon _k^{})(t^{}t_1)i(\epsilon _k\epsilon _d)(t_1t_2)},`$ (8)
$$A_{\mathrm{𝐩𝐪𝐤}^{}𝐤}^{\lambda s\sigma ^{}\sigma }=\mathrm{\Omega }_0|d_\lambda ^{}c_{𝐩\lambda }c_{𝐪s}^{}d_sd_\sigma ^{}^{}c_{𝐤^{}\sigma ^{}}c_{𝐤\sigma }^{}d_\sigma |\mathrm{\Omega }_0=\delta _{\lambda \sigma }\delta _{s\sigma ^{}}n_\sigma (1n_p)[\delta _{\mathrm{𝐩𝐤}}\delta _{\mathrm{𝐪𝐤}^{}}n_q+\delta _{\sigma \sigma ^{}}\delta _{\mathrm{𝐩𝐪}}\delta _{\mathrm{𝐤𝐤}^{}}(1n_k)],$$
(9)
with $`n_\sigma =\mathrm{\Omega }_0|d_\sigma ^{}d_\sigma |\mathrm{\Omega }_0`$ and $`n_k=\mathrm{\Omega }_0|c_{𝐤\sigma }^{}c_{𝐤\sigma }|\mathrm{\Omega }_0`$ (impurity occupation number is $`n_d=_\sigma n_\sigma =1`$ here). For monochromatic optical fields, $`_i(t)=_i`$, the time integrals can be explicitly evaluated. After a lengthy but straightforward calculation, the third-order polarization (3) takes the form $`\stackrel{~}{P}^{(3)}=\stackrel{~}{P}_0^{(3)}+\stackrel{~}{P}_K^{(3)}`$ with
$`\stackrel{~}{P}_0^{(3)}=`$ $`\mu ^4_1_2^2{\displaystyle \underset{\mathrm{𝐩𝐪}}{}}{\displaystyle \frac{(1n_p)}{\epsilon _p\epsilon _d\omega _1}}\left[{\displaystyle \frac{2}{(\epsilon _p\epsilon _q)(\epsilon _pE_d)}}{\displaystyle \frac{1}{(\epsilon _p\epsilon _d\omega _1)(\epsilon _qE_d)}}\right],`$ (10)
$`\stackrel{~}{P}_K^{(3)}=`$ $`(N1)\mu ^4_1_2^2{\displaystyle \underset{\mathrm{𝐩𝐪}}{}}{\displaystyle \frac{(1n_p)n_q}{\epsilon _p\epsilon _d\omega _1}}\left[{\displaystyle \frac{2}{(\epsilon _p\epsilon _q)(\epsilon _pE_d)}}{\displaystyle \frac{1}{(\epsilon _p\epsilon _d\omega _1)(\epsilon _qE_d)}}\right],`$ (11)
where $`N`$ is the impurity level degeneracy. Here we introduced the effective impurity level $`E_d=\epsilon _d+\omega _2`$. The first term, $`\stackrel{~}{P}_0^{(3)}`$, is the usual third-order polarization for spinless ($`N=1`$) electrons. The second term, $`\stackrel{~}{P}_K^{(3)}`$, originates from the suppression, due to the Hubbard repulsion $`U`$, of the contributions from doubly-occupied impurity states. As indicated by the prefactor $`(N1)`$, it comes from the additional intermediate states that are absent in the spinless case \[see Fig 1(b)\].
Consider the first term in Eq. (11). The restriction of the sum over $`𝐪`$ to states below the Fermi level results in a logarithmic divergence in the absorption coefficient, $`\alpha \mathrm{Im}\stackrel{~}{P}`$, at the absorption threshold, $`\omega _1=\epsilon _d`$:
$$\mathrm{Im}\stackrel{~}{P}_K^{(3)}=(N-1)p_0\theta (\omega _1+\epsilon _d)\frac{2\mathrm{\Delta }}{\pi \delta \omega }\mathrm{ln}\left|\frac{D}{\omega _1+\epsilon _d}\right|,$$
(12)
where $`p_0=\pi _1\mu ^2g`$, $`\delta \omega =\omega _1\omega _2`$ is the pump-probe detuning, and $`\mathrm{\Delta }=\pi g\mu ^2_2^2`$ is the energy width characterizing the pump intensity; $`D`$ and $`g`$ are the bandwidth and the density of states (per spin) at the Fermi level, respectively. Recalling that the linear absorption is determined by $`\mathrm{Im}\stackrel{~}{P}^{(1)}=p_0\theta (\omega _1+\epsilon _d)`$, we see that it differs from Eq. (12) by a factor $`\frac{2\mathrm{\Delta }}{\pi \delta \omega }\mathrm{ln}\left|\frac{D}{\omega _1+\epsilon _d}\right|`$ (setting for simplicity $`N=2`$). In other words, $`\mathrm{Im}\stackrel{~}{P}^{(1)}`$ and $`\mathrm{Im}\stackrel{~}{P}_K^{(3)}`$ become comparable when
$$\omega _1+\epsilon _d=\delta \omega +E_d\sim D\mathrm{exp}\left(-\frac{\pi \delta \omega }{2\mathrm{\Delta }}\right).$$
(13)
We see that the perturbative expansion of the nonlinear optical polarization in terms of the optical fields breaks down even for weak pump intensities (i.e., small $`\mathrm{\Delta }`$). The above condition of its validity depends critically on the detuning of the pump frequency from the Fermi level. For off-resonant pump, such that the effective impurity level lies below the Fermi level, $`|E_d|=|\epsilon _d|\omega _2\mathrm{\Delta }`$, the relation (13) can be written as $`\delta \omega +E_dT_K`$ with
$$T_K=De^{\pi E_d/2\mathrm{\Delta }}=D\mathrm{exp}\left[-\frac{|\epsilon _d|-\omega _2}{2g\mu ^2\mathcal{E}_2^2}\right].$$
(14)
This new energy scale can be associated with the Kondo temperature—an energy scale known to emerge from a spin-flip scattering of a FS electron by a magnetic impurity. Remarkably, in our case, the Kondo temperature can be tuned by varying the frequency and intensity of the pump. In fact, the logarithmic divergence in Eq. (12) is an indication of an optically-induced Kondo effect.
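To illustrate this tunability, Eq. (14) in the form $`T_K=De^{\pi E_d/2\mathrm{\Delta }}`$ can be evaluated for a few pump intensities. The numbers below are purely illustrative placeholders (not parameters of any specific system discussed here):

```python
from math import exp, pi

def kondo_temperature(D, E_d, Delta):
    """Light-induced Kondo scale of Eq. (14): T_K = D * exp(pi * E_d / (2 * Delta)).
    Here E_d = epsilon_d + omega_2 < 0 for a pump tuned below resonance."""
    return D * exp(pi * E_d / (2.0 * Delta))

# Illustrative numbers only (all in meV): D = bandwidth, E_d = effective level,
# Delta parametrizes the pump intensity
D, E_d = 100.0, -5.0
for Delta in (1.0, 2.0, 5.0):
    print(Delta, kondo_temperature(D, E_d, Delta))
```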
Let us now turn to the second term in Eq. (11). In fact, it represents the lowest order in the expansion of the linear polarization with impurity level shifted by $`\delta \epsilon =(N1)\mu ^2_2^2_𝐪\frac{n_q}{\epsilon _qE_d}`$,
$$\stackrel{~}{P}^{(1)}=\mu ^2_1\underset{𝐩}{}\frac{(1n_p)}{\epsilon _p\epsilon _d+\delta \epsilon \omega _1}.$$
(15)
The origin of $`\delta \epsilon `$ can be understood by observing that, for monochromatic pump, the coupling between the FS and the impurity can be described by a time-independent Anderson Hamiltonian $`H_A`$ with effective impurity level $`E_d=\epsilon _d+\omega _2`$ and hybridization parameter $`V=\mu _2`$. By virtue of this analogy, $`\delta \epsilon `$ is the perturbative solution of the following equation for the self-energy part:
$`E_0=\mathrm{\Sigma }(E_0)(N1)\mu ^2_2^2{\displaystyle \underset{𝐪}{}}{\displaystyle \frac{n_q}{\epsilon _qE_d+E_0}}(N1){\displaystyle \frac{\mathrm{\Delta }}{\pi }}\mathrm{ln}{\displaystyle \frac{E_dE_0}{D}},`$ (16)
which determines the renormalization of the effective impurity energy, $`E_d`$, to $`\stackrel{~}{E}_d=E_dE_0`$ . Indeed, to the first order in the optical field, Eq. (16) yields $`E_0=\delta \epsilon `$ after omitting $`E_0`$ in the rhs.
The logarithmic divergence (12) indicates that near the absorption threshold, a nonperturbative treatment is necessary. Recall that the attractive interaction $`v_0`$ between a localized hole and FS electrons also leads to a logarithmically diverging correction (in the lowest order in $`v_0`$) even in the linear absorption: $`\delta \stackrel{~}{P}^{(1)}\stackrel{~}{P}^{(1)}gv_0\mathrm{ln}[D/(\omega _1+\epsilon _d)]`$. In the nonperturbative regime, $`\delta \stackrel{~}{P}^{(1)}\stackrel{~}{P}^{(1)}`$, this correction evolves into the Fermi edge singularity. The question is how the Kondo correction (12) will evolve in the nonperturbative regime. We first discuss qualitatively our results and defer the details to the end of the paper.
It can be seen from the expression (14) for $`T_K`$ that there is a well-defined critical pump intensity, $`\mathrm{\Delta }_c\pi g\mu ^2_{2c}^2=\frac{\pi }{2}(|\epsilon _d|\omega _2)`$. The shape of the nonlinear absorption spectrum will depend sharply on the ratio between $`\mathrm{\Delta }`$ and $`\mathrm{\Delta }_c`$. For strong pump, $`\mathrm{\Delta }>\mathrm{\Delta }_c`$, the Kondo correction (12) will develop into a broad peak with width $`\mathrm{\Delta }`$ and height $`p_0`$. This is illustrated in Fig. 2(a).
Much more delicate is the case $`\mathrm{\Delta }\mathrm{\Delta }_c`$, which is analogous to the Kondo limit. The Kondo scale $`T_K`$ is then much smaller than $`\mathrm{\Delta }`$, which is the case for well-below-resonance pump excitation, $`|\epsilon _d|\omega _2\mathrm{\Delta }`$. The impurity density of states in the Kondo limit is known to have two peaks well separated in energy by $`|E_d|=|\epsilon _d|\omega _2\mathrm{\Delta }`$ ($`E_d`$ is the effective level position). As a result, in the presence of the pump, the system sustains excitations originating from the beats between these peaks. These excitations can assist the absorption of a probe photon. The corresponding condition for the probe frequency reads $`|E_d|+\omega _1|\epsilon _d|`$, or $`\omega _1\omega _2`$. Thus, in the Kondo limit, the absorption spectrum exhibits a narrow peak below the linear absorption onset. This is illustrated in Fig. 2(b).
To calculate the shape of the below-threshold absorption peak, we adopt the large $`N`$ variational wave-function method by following the approach of . For monochromatic optical fields, the polarization (2) can be written as $`\stackrel{~}{P}=\mu ^2_1[G^<(E_0\delta \omega )+G^>(E_0+\delta \omega )]`$, where $`G^<(\epsilon )=\mathrm{\Omega }|T^{}(\epsilon H_A)^1T|\mathrm{\Omega }`$ \[$`G^>(\epsilon )`$ is similar but with $`TT^{}`$\]. In the leading order in $`N^1`$, $`|\mathrm{\Omega }`$ is given by $`|\mathrm{\Omega }=A\left(|0+_𝐪n_qa_q|𝐪,1\right)`$, where $`|𝐪,1=N^{1/2}_\sigma d_\sigma ^{}c_{𝐪\sigma }|0`$ ($`|0`$ stands for the full FS). The coefficients $`A`$ and $`a_k`$ are found by minimizing $`H_A`$ in this basis; one then obtains, e.g., $`A^2=1n_d`$, where $`n_d=(1+\pi \stackrel{~}{E}_d/N\mathrm{\Delta })^1`$ is the impurity occupation ($`N\mathrm{\Delta }`$ is finite in the large $`N`$ limit). The relevant Green function is obtained as
$$G^<(\epsilon )=\frac{\pi }{\mathrm{\Delta }}\left[\mathrm{\Sigma }(\epsilon )+\frac{|\mathrm{\Sigma }(\epsilon )|^2}{\epsilon -\mathrm{\Sigma }(\epsilon )}\right].$$
(17)
Since $`\mathrm{\Sigma }(E_0)=E_0`$ \[see Eq. (16)\], for $`\epsilon =E_0\delta \omega `$ the second term has a pole at $`\delta \omega =0`$ which gives rise to a resonance. The $`N^1`$ correction gives a finite resonance width $`\mathrm{\Delta }`$. Using that the residue at the pole is $`[\mathrm{\Sigma }(E_0)/E_01]^1=n_d1`$ , we finally obtain
$`\mathrm{Im}\stackrel{~}{P}_K={\displaystyle \frac{p_0E_0^2(1-n_d)^2}{\delta \omega ^2+\mathrm{\Delta }^2}}\approx \left({\displaystyle \frac{\pi E_dT_K}{N\mathrm{\Delta }}}\right)^2{\displaystyle \frac{p_0}{\delta \omega ^2+\mathrm{\Delta }^2}}.`$ (18)
For the last estimate, we used that, in the Kondo limit ($`\mathrm{\Delta }\mathrm{\Delta }_c`$), $`1n_d\pi T_K/N\mathrm{\Delta }`$ and $`E_0E_d`$. Then the rhs of (18) describes the narrow below-threshold peak \[see Fig. 2(b)\]. In the Kondo limit, the factor $`(1n_d)^2`$ has the physical meaning of a product of populations of electrons in the narrow peak of the impurity spectral function (Kondo resonance) and “holes” in the wide peak (centered at $`\epsilon _d`$ below the Fermi level). Note, however that the above calculation was not restricted to the Kondo limit. For $`\mathrm{\Delta }\mathrm{\Delta }_c`$ (mixed-valence regime), we have $`1n_d1`$ and $`E_0N\mathrm{\Delta }`$. Then Eq. (18) reproduces the absorption peak in Fig. 2(a).
Note that, although we considered here, for simplicity, the limit of singly occupied impurity level in the ground state, the Kondo-absorption can take place even if the impurity is doubly occupied. Indeed, after the probe excites an impurity electron, the spin-flip scattering of FS electrons with the remaining impurity electron will lead to the Kondo resonance in the final state of the transition. In this case, however, the Kondo effect should show up in the fifth-order polarization.
A feasible system in which the proposed effect might be observed is, e.g., GaAs/AlGaAs superlattice delta-doped with Si donors located in the barrier. The role of impurity in this system is played by a shallow acceptor, e.g., Be. Molecular-beam epitaxy growth technology allows one to vary the quantum well width and to place acceptors right in the middle of each quantum well . In quantum wells, the valence band is only doubly degenerate with respect to the total angular momentum J. Thus, such a system emulates the large U limit considered here. The dipole matrix element for acceptor to conduction band transitions can be estimated as $`\mu \mu _0a`$, where $`\mu _0`$ is the interband matrix element and $`a`$ is the size of the acceptor wave function. For typical excitation intensities, the parameter $`\mathrm{\Delta }`$ ranges on the meV scale resulting in $`T_K\mathrm{\Delta }`$ for the pump detuning of several meV.
In conclusion, let us discuss the effect of a finite duration of the pump pulse, $`\tau `$. Our result for $`\chi ^{(3)}`$ remains unchanged if $`\tau `$ is longer than $`\mathrm{}/T_K`$. If $`\tau <\mathrm{}/T_K`$, then $`\tau `$ will serve as a cutoff of the logarithmic divergence in (12), and the Kondo correction will depend on the parameters of the pump $`_2`$ and $`\tau `$ as follows: $`\mathrm{Im}\stackrel{~}{P}_K^{(3)}_2^2\mathrm{ln}(D\tau /\mathrm{})`$. In the non-perturbative regime, our basic assumption was that, for monochromatic pump, the system maps onto the ground state of the Anderson Hamiltonian. Our results apply if the pump is turned on slowly on a time scale longer than $`\mathrm{}/T_K`$. For shorter pulse duration, the build up of the optically-induced Kondo effect will depend on the dephasing of FS excitations. The role of interactions between FS and impurity electrons in the presence of hybridization was addressed in . An avenue for future studies would be the interplay between the Kondo-absorption and the Fermi edge singularity. Note finally that the effect of irradiation on the Kondo transport in quantum dots was investigated in .
This work was supported by NSF grants ECS-9703453 (Vanderbilt) and DMR 9732820 (Utah), and Petroleum Research Fund grant ACS-PRF #34302-AC6 (Utah).
Fig. 1
Fig. 2
# Charge ordering and hopping in a triangular array of quantum dots
## Abstract
We demonstrate a mapping between the problem of charge ordering in a triangular array of quantum dots and a frustrated Ising spin model. Charge correlation in the low temperature state is characterized by an intrinsic height field order parameter. Different ground states are possible in the system, with a rich phase diagram. We show that electronic hopping transport is sensitive to the properties of the ground state, and describe the singularities of hopping conductivity at the freezing into an ordered state.
Ordering and phase transitions in artificial structures such as Josephson junction arrays and arrays of quantum dots have a number of interesting properties. One attractive feature of these systems is the control on the Hamiltonian by the system design. Also, the experimental techniques available for probing magnetic flux or charge ordering, such as electrical transport measurements and scanning probes, are more diverse and flexible than those conventionally used to study magnetic or structural ordering in solids. There has been a lot of theoretical and experimental studies of phase transitions and collective phenomena in Josephson arrays, and also some work on the quantum dot arrays.
There is an apparent similarity between the Josephson and the quantum dot arrays problems, because the former problem can be mapped using a duality transformation on the problem of charges on a dual lattice. These charges interact via the $`D=2`$ Coulomb logarithmic potential, and can exhibit a Kosterlitz-Thouless transition, as well as other interesting first and second order transitions. In the quantum dots arrays, charges are coupled via the $`D=3`$ Coulomb $`1/r`$ potential, which can be also partially screened by ground electrodes or gates. In terms of the interaction range, the quantum dot array system is intermediate between the Josephson array problem and the lattice gas problems with short range interactions, such as the Ising model and its varieties. From that point of view, an outstanding question is what are the new physics aspects of this problem.
A very interesting system fabricated recently is based on nanocrystallite quantum dots that can be produced with high reproducibility, with diameters of $`15-100`$ Å tunable during synthesis and a narrow size distribution ($`<5\%`$ rms). These dots can be forced to assemble into ordered three-dimensional closely packed colloidal crystals, with the structure of stacked two-dimensional triangular lattices. Due to higher flexibility and structural control, these systems are expected to be good for studying effects inaccessible in the more traditional self-assembled quantum dot arrays fabricated using epitaxial growth techniques. In particular, the high charging energy of nanocrystallite dots, in the room temperature range, and the triangular lattice geometry of the dot arrays are very interesting from the point of view of exploring novel kinds of charge ordering.
Motivated by recent attempts to bring charge carriers into these structures using gates, and to measure their transport properties, in this article we study charge transport in the classical Coulomb plasma on a triangular array of quantum dots. We assume that the dots can be charged by an external gate and that conducting electrons or holes can tunnel between neighboring dots.
For drawing a connection to the better studied spin systems, it is convenient to map this problem on the classical Ising antiferromagnet on a triangular lattice ($`\mathrm{}`$IAFM). Without loss of generality, we consider the case when the occupancy of the dots is either 1 or 0, and interpret these occupancies as the spin “up” and “down” states. In this language, the gate voltage is represented by an external magnetic field coupled to the spins. Also, more generally, any spatially varying electrostatic field is mapped on a spatially varying magnetic field in the spin problem. For example, the electrostatic field due to, e.g., charged defects, corresponds to the spin problem with a random field. The electron hopping (tunneling) between the dots corresponds to spin exchange transport. The long-range Coulomb interaction between charges gives rise to a long-range spin-spin coupling, which leads to somewhat different physical properties than the short-range exchange interaction conventional for the spin problems.
Another difference between the charge and spin problems is that, due to charge conservation, there is no analog of spin flips. This certainly has no effect on the equilibrium statistical mechanics, because the spin ensemble with fixed total spin is statistically equivalent to the grand canonical ensemble. However, this is known to be important in the dynamical problem. The two corresponding types of dynamics are the spin conserving Kawasaki dynamics and the spin non-conserving Glauber dynamics, respectively. In the simulation described below we use the Kawasaki dynamics, involving spin exchange processes on neighboring sites.
The mapping between the charge and spin problems is of interest because of the following. If the interaction between the dots were of a purely nearest neighbor type, the problem could have been exactly mapped on the $`\mathrm{}`$IAFM problem, which is exactly solvable. The $`\mathrm{}`$IAFM problem is known to have an infinitely degenerate ground state with an intrinsic “solid-on-solid” structure described by the so-called “height field” (see below). The height field represents the correlations of occupancy of neighboring sites, and can be thought of in terms of an embedding of the structure into a three-dimensional space. The higher-dimensional representations of correlations in solids have also been found useful in a variety of frustrated spin problems. The ordering of electrons in triangular arrays of quantum dots, because of Coulomb coupling giving rise to a repulsive nearest-neighbor interaction, must be similar to that of the $`\mathrm{}`$IAFM ground states. It represents, however, a new physical system in which the height field will be strongly coupled to electric currents, which can make electronic transport properties very interesting.
The Hamiltonian of the electrons on the quantum dots is given by $`\mathcal{H}_{\mathrm{charge}}+\mathcal{H}_{\mathrm{tunnel}}+\mathcal{H}_{\mathrm{spin}}`$, where $`\mathcal{H}_{\mathrm{charge}}`$ describes the Coulomb interaction between charges $`q_i=0,1`$ on the dots and their coupling to the background disorder potential $`\varphi (r)`$ and to the gate potential $`V_\mathrm{g}`$:
$$\mathcal{H}_{\mathrm{charge}}=\frac{1}{2}\sum _{i,j}V(𝐫_{ij})q_iq_j+\sum _{𝐫_i}(V_\mathrm{g}+\varphi (𝐫_i))q_i$$
(1)
The position vectors $`𝐫_i`$ run over a triangular lattice with the lattice constant $`a`$, and $`𝐫_{ij}=𝐫_i-𝐫_j`$. The interaction $`V`$ accounts for screening by the gate:
$$V(𝐫_{ij}\ne 0)=\frac{e^2}{ϵ|𝐫_{ij}|}-\frac{e^2}{ϵ\sqrt{(𝐫_{ij})^2+(2d)^2}},$$
(2)
Here $`ϵ`$ is the dielectric constant of the substrate, and $`d`$ is the distance to the gate plate. (In the case of a spatially varying $`ϵ`$ the interactions can be more complicated; for example, if the array of dots is placed over a semiconductor substrate, one has to replace $`ϵ\rightarrow (ϵ+1)/2`$ in the expression (2).) The single dot charging energy $`\frac{1}{2}V(0)=e^2/2C`$ is assumed to be high enough to maintain no more than single occupancy.
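For readers who wish to reproduce the simulations, the gate-screened interaction of Eq. (2) is simple to code. The sketch below uses units in which $`e^2/ϵ=1`$ and the lattice constant is unity (our choice of units, made only for illustration):

```python
import numpy as np

def screened_coulomb(r, d=2.0):
    """Gate-screened interaction of Eq. (2) for r != 0, in units e^2/epsilon = 1."""
    r = np.asarray(r, dtype=float)
    return 1.0 / r - 1.0 / np.sqrt(r**2 + (2.0 * d) ** 2)

# The image term cuts off the 1/r tail beyond distances of order 2d
print(screened_coulomb([1.0, 2.0, 8.0]))
```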
Electron tunneling between neighboring dots is described by $`\mathcal{H}_{\mathrm{tunnel}}`$. We assume that the tunneling is incoherent, i.e., is assisted by some energy relaxation mechanism, such as phonons. Below we consider stochastic dynamics in which charges can hop between neighboring dots with probabilities depending on the potentials of the dots and on temperature. Also, we consider only charge states and ignore all effects of electron spin described by $`\mathcal{H}_{\mathrm{spin}}`$, such as exchange, spin ordering, etc.
To lay out the framework for discussing ordering in the charge problem, let us review here the main results for the $`\mathrm{}`$IAFM problem using the charge problem language (and assuming nearest neighbor interactions). The ground state of this system has a large degeneracy, which can be understood as follows. For each triangular plaquette at least one bond is frustrated, for any occupancy pattern. To minimize the energy of the nearest neighbor interactions, i.e., to reduce the number of frustrated bonds, it is favorable to arrange charges so that the triangles combine in pairs in such a way that each pair of triangles shares a common frustrated bond. The pairing of triangles can be described by partitioning the structure into rhombuses. Given a charge configuration, the corresponding rhombuses pattern can be visualized by erazing all frustrated bonds, as shown in Fig.1.
Conversely, given a configuration of rhombuses, one can reconstruct the arrangement of charges in a unique way, up to an overall sign change. The use of the representation involving rhombuses is that it leads to the notion of a height field. Any configuration of rhombuses can be thought of in terms of a projection of a faceted surface in a 3D cubic lattice along the $`(111)`$ direction. This surface defines lifting of the 2D configuration in the 3D cubic lattice space, i.e., an integer-valued height field. The number of different surfaces that project onto the domain of area $`𝒜`$ is of the oder $`e^{w𝒜}`$, where $`w`$ is a constant equal to the entropy of the ground state manifold per plaquette.
At a finite temperature, the charge ordering with respect to the height field, i.e., to the pairing of triangles, may have some defects (see Fig.2). An elementary defect is represented by an isolated unpaired triangle surrounded by rhombuses, i.e., by paired triangles. This defect has a topological character similar to a screw dislocation, because it leads to an ambiguity in the height field. This ambiguity is readily seen in Fig.2, where by continuing the height field around an unpaired triangle one finds a discrete change in it upon returning to the starting point.
There is no finite temperature phase transition in the $`\mathrm{}`$IAFM problem because of the topological defects present at a finite concentration at any temperature. However, since the fugacity of a defect scales as $`e^{-V(a)/T}`$, and the defect concentration scales with the fugacity, the system of defects becomes very dilute as $`T\rightarrow 0`$. As a result, the $`T=0`$ state is ordered, and belongs to the degenerate manifold of ground states without defects, i.e., is characterized by a globally defined height variable. Also, since the distance between defects diverges exponentially as $`T\rightarrow 0`$, there is a large correlation length for the $`T=0`$ ordering even when $`T`$ is finite. This situation is described in the literature on the $`\mathrm{}`$IAFM problem as a $`T=0`$ critical point (see the recent article and references therein).
In this work, we study the charge system with the interaction (2) by a Monte Carlo (MC) simulation of electron hopping in equilibrium, as well as in the presence of an external electric field. We find that, although many aspects of the $`\mathrm{}`$IAFM physics are robust, the long-range character of the interaction (2) makes the charge problem different from the $`\mathrm{}`$IAFM problem in several ways.
The states undergoing the MC dynamics are all charge configurations with no more than single occupancy ($`q_i=0,1`$) on a square patch $`N\times N`$ of a triangular array ($`N=12`$ in Figs.1,2,3). Periodic boundary conditions are imposed by defining the energy (1) using the $`N\times N`$ charge configuration extended periodically in the entire plane. Also, we allow charge hopping across the patch boundary, so that the charge disappearing on one side of the patch reappears on the opposite side, consistent with the periodicity condition.
The stochastic MC dynamics is defined by letting electrons hop on unoccupied neighboring sites with probabilities given by Boltzmann weights:
$$W_{ij}/W_{ii}=e^{(\mathrm{\Phi }_i-\mathrm{\Phi }_j)/kT},\qquad W_{ij}+W_{ii}=1,$$
(3)
where
$$\mathrm{\Phi }_i=\sum _{𝐫_j\ne 𝐫_i}V(𝐫_{ij})q_j+V_\mathrm{g}+\varphi (𝐫_i).$$
(4)
To reach an equilibrium at a low temperature, we take the usual precautions by running the MC dynamics first at some high temperature, and then gradually decreasing the temperature to the desired value.
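A minimal sketch of a single hopping move implementing Eqs. (3) and (4) might look as follows (Python; the bookkeeping of the triangular lattice, the neighbor lists and the evaluation of $`\mathrm{\Phi }`$ are left to the caller, and all function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def hop_attempt(q, i, j, phi, kT):
    """Attempt an electron hop i -> j with the probability W_ij of Eq. (3).

    q    : occupation numbers (0 or 1) of all dots
    i, j : neighboring sites with q[i] = 1 and q[j] = 0
    phi  : user-supplied function returning the potential Phi of Eq. (4) at a site
    """
    w_ratio = np.exp((phi(q, i) - phi(q, j)) / kT)   # W_ij / W_ii, Eq. (3)
    w_hop = w_ratio / (1.0 + w_ratio)                # from W_ij + W_ii = 1
    if rng.random() < w_hop:
        q[i], q[j] = 0, 1                            # accept the (charge-conserving) hop
        return True
    return False
```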
All the work reported below was done on the system without disorder, $`\varphi (𝐫_i)=0`$. The distance to the gate which controls the range of the interaction (2) was chosen to be $`d=2`$.
The properties of the system depend on the charge filling fraction, i.e., on the mean occupancy $`n=\sum _iq_i/N^2`$, which is conserved in the MC dynamics. (The $`\mathrm{}`$IAFM in the absence of external magnetic field corresponds to $`n=1/2`$.) We find that at low temperature the equilibrium states with $`1/3\le n\le 2/3`$ are very well described by pairing of the triangles, as illustrated in Figs.1,3. The defects with respect to this height field ordering have a very small concentration, if present at all. The short-range ordering in the charge problem turns out to be the same as in the $`\mathrm{}`$IAFM. Qualitatively, this similarity is explained by a relatively higher magnitude of the nearest neighbor coupling (2) compared to the coupling at larger distances.
Like in the $`\mathrm{}`$IAFM, we observe topological defects. They are present at warmer temperatures (see Fig.2) and quickly freeze out at colder temperatures (see Fig.1). The disappearance of the defects, because of their topological character, is possible only via annihilation of the opposite sign defects. An example of such a process can be seen in the lower right corner of Fig.2. The defects in the charge problem, besides carrying topological charge, can carry an electric charge. Two defects of opposite electric charge can be found at the top and on the left of Fig.2.
Expectedly, besides the robust features, such as the height field and the topological defects, there are certain non-robust aspects of the problem. The two most interesting issues that we discuss here are related with lifting of the degeneracy of the $`\mathrm{}`$IAFM ground state, and with the instability of the $`T=0`$ critical point, both arising due to long-range coupling in the charge problem.
As we already have mentioned, the ground state manifold of the $`\mathrm{}`$IAFM is fully degenerate only for purely nearest neighbor interactions. The degeneracy is lifted even by a weak non-nearest-neighbor interaction. For instance, for our problem with the interaction (2) at $`n=1/2`$, the favored ground states have the form of stripes, spaced by $`\sqrt{3}`$ in the lattice constraint units, which corresponds to electrons filling every other lattice row. Because there are three possible orientations of the stripes, the system cooled in the $`n=1/2`$ state freezes in a state characterized by domains of the three types. One of such domains of stripes can be seen in the upper left part of Fig.1. Upon long annealing, the domains somewhat grow and occasionally coalesce. However, we were not able to determine whether the system always reaches a unique ground state, or remains in a polycrystalline state of intertwining domains.
This can be contrasted with the behavior at $`n=1/3`$, where the ground state is a $`\sqrt{3}\times \sqrt{3}`$ triangular lattice, corresponding to electrons filling one of the three sublattices of the triangular lattice (see Fig.3). The rotational symmetry of this state is the same as that of the underlying lattice. Because there is a (triple) translational degeneracy of the ground state, but no orientational degeneracy, the system always forms a perfect triangular structure upon cooling, without domains. For the densities close to $`1/3`$, the ground state contains $`|n1/3|`$ charged defects, vacancies for $`n<1/3`$ and interstitials for $`n>1/3`$, formed on the background of the otherwise perfect $`\sqrt{3}\times \sqrt{3}`$ structure. At $`n>1/3`$, the interstitials are moving over the honeycomb network of empty sites, as illustrated in Fig.3.
Before discussing the issue of finite $`T`$ versus $`T=0`$ phase transitions, let us explain how we use the MC simulation to find electrical conductivity. It is straightforward to add an external electric field $`𝐄`$ to the MC algorithm. For that, one can simply modify the expressions (3) for the hopping probabilities by adding the field potential difference $`𝐄(𝐫_{ij})`$ between the two sites $`𝐫_i`$ and $`𝐫_j`$. In doing this, one has to respect the periodic boundary condition, which amounts to taking the shortest distance between the sites $`𝐫_i`$ and $`𝐫_j`$ on the torus. Then the charges are statistically biased to hop along $`𝐄`$, which gives rise to a finite average current $`𝐉`$.
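A sketch of how the field bias and the minimum-image convention could enter the hop probability is given below (the sign convention is chosen so that hops along $`𝐄`$ are favored, as described above; all names are illustrative):

```python
import numpy as np

def min_image(d, L):
    """Map a displacement vector onto its shortest periodic image on an L x L patch."""
    d = np.asarray(d, dtype=float)
    return d - L * np.round(d / L)

def hop_ratio_with_field(phi_i, phi_j, E, r_i, r_j, L, kT):
    """W_ij/W_ii of Eq. (3) with a field bias E.(r_j - r_i) added to the potential drop,
    so that hops along the field direction are statistically favored."""
    dr = min_image(np.asarray(r_j, float) - np.asarray(r_i, float), L)
    return np.exp((phi_i - phi_j + np.dot(E, dr)) / kT)
```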
The current $`𝐉`$ is Ohmic at small $`𝐄`$, and saturates at $`|𝐄|a\sim \mathrm{min}(V(a)n^{1/2},kT)`$, where $`a(n)=n^{-1/2}`$ is the inter-electron spacing. Even though only the Ohmic regime $`J\propto 𝐄`$ is of practical interest, it is useful to keep the field $`𝐄`$ used in the MC simulation at a level only a few times below the nonlinearity onset field, to minimize the effect of statistical fluctuations on the time averaging of the current $`𝐉`$.
The dependence of the electric current on the density $`n`$ is shown in Fig.4 for several temperatures. Because of the electron–hole symmetry of the system, the conductivity problem is invariant under the transformation $`n\rightarrow 1-n`$, and thus only the interval $`0\le n\le 1/2`$ can be considered. At high temperatures $`kT\gg e^2/ϵa(n)`$, the Coulomb interaction is not important, and one can evaluate the current by considering electrons freely hopping over lattice sites subject only to the single occupancy constraint. The current $`𝐉`$ in this case goes as
$$𝐉=\frac{3}{4}n(1-n)\frac{𝐄}{kT},$$
(5)
where the factor $`n(1n)`$ in (5) gives the probability that a particular link connects two sites of different occupancy, so that hopping along this link can occur, whereas the factor $`(𝐄/kT)`$ in (5) is the hopping probability bias due to the weak field $`|𝐄|kT`$. The constant factor $`3/4`$ in (5) is given by the coordination number of the lattice (equal to 6) divided by 8.
The parabolic $`n(1n)`$ dependence of the current (5) is clearly reproduced by the highest temperature curve in Fig.4. In Fig.4 the inverse temperature factor of the expression (5) is eliminated by rescaling the current by $`E/kT`$. The rescaled current $`J(kT/E)`$ in Fig.4 is practically temperature independent at small densities $`n`$, in agreement with the result (5). Regarding the $`1/kT`$ rescaling, note that in a real system the frequency of electron attempts to hop is a function of temperature, because hopping is assisted by energy relaxation processes, such as phonons. In this case, our MC result for the current $`𝐉`$ has to be multiplied by some function $`A(T)`$, which will be of a power law form $`T^\alpha `$ for the phonon absorption rate in the case of a phonon-assisted hopping.
At temperatures of the order of and smaller than $`e^2/ϵa(n)`$, the current becomes suppressed due to charge correlations reducing the frequency of hopping. The onset of this suppression takes place at temperatures of the order of the interaction between neighboring electrons, $`kT\sim 0.2V(a)n^{1/2}`$. Note that the onset temperature is lower for smaller density, in agreement with the plots in Fig.4.
The electrical conductivity is a sensitive probe of freezing transitions, at which the system of interacting charges locks collectively into a particular ordered or disordered state, and the conductivity vanishes. Whether the freezing occurs as a finite $`T`$ or a $`T=0`$ phase transition is related to the degree of degeneracy of the ground state. The situation appears to be very different at different densities $`n`$. For simple rational $`n=1/3,1/2,2/3`$, and the like, characterized by a ground state unique up to discrete symmetries, there is a well-defined freezing temperature. This is illustrated in Fig.5, where the electrical conductivity of the $`n=1/3`$ state is plotted versus temperature. The abrupt drop of the conductivity at $`kT\simeq 0.09e^2/ϵa`$, where $`a`$ is the lattice constant, indicates a sharp freezing transition. To make sure that this is not a finite size effect, we show the conductivity curves for systems of three different sizes, $`6\times 6`$, $`12\times 12`$, and $`24\times 24`$.
On the other hand, away from the specific densities with simple ground states, the freezing appears to be very gradual, and in the temperature range we explored in Fig.4 there has been no evidence of a sharp transition. It may be that the ordering actually takes place at much smaller temperatures than the characteristic interaction, the situation not uncommon in frustrated systems. Also, it may be that at incommensurate densities the state remains disordered down to $`T=0`$. From our observations, the latter seems to be a more likely scenario. We expect that upon cooling the charges freeze into a quasirandom state determined by the cooling history. In that case, this system represents an electronic glass that exists in the absence of external disorder.
The nature of the ground state in this case is unclear. To list several options, it may be that the system forms a polycrystal consisting of intertwining domains, like it does at $`n=1/2`$, or that the state represents a distorted incommensurate charge density wave, or that it is a genuine glass. Studying this would require enhancing the MC algorithm to make it capable of treating the slow dynamics of annealing at low temperatures.
In conclusion, correlations of charges in the triangular arrays are described by an intrinsic order parameter, the height field, similar to the $`\mathrm{}`$IAFM problem. The charge–spin mapping shows that various interesting phenomena arising in frustrated spin systems can be studied in charge systems, for which more powerful experimental techniques are available. The type of order in the ground state depends on the charge filling density. Electron hopping conductivity is sensitive to charge ordering and can be used as a probe of the nature of the ordered state. Conductivity couples to the height field, and thus one can expect novel effects in electronic transport properties which have no analog in spin systems with the same geometry.
###### Acknowledgements.
We are very grateful to Marc Kastner, Moungi Bawendi, and Nicole Morgan for drawing our attention to this problem, as well as for many useful discussions and for sharing with us their unpublished experimental results. This work is supported by the NSF Award 67436000IRG. |
no-problem/9912/astro-ph9912390.html | ar5iv | text | # A Keck/HST Survey for Companions to Low-Luminosity Dwarfs
## 1. Introduction
Recent detections of planetary and brown dwarf companions to nearby stars have fueled efforts to undertake a complete inventory of circumstellar bodies (Mayor & Queloz 1995; Nakajima et al. 1995; Marcy & Butler 1996). It is thus likely that we stand at the beginning of an exciting era of astronomical discovery in which a gradually unfolding census promises to provide key evidence for the modes of origin for planets and binary stars. As part of this endeavor, we have undertaken a search for companions to recently discovered low-luminosity field dwarfs (Kirkpatrick et al. 1999; 2000). Our study will provide a first look at the binary companion rate for sub-stellar objects. Furthermore, it will yield a special opportunity for imaging giant Jovian planets, since sensitivity is enhanced in the reduced glare of faint dwarf primaries.
The local field detection rate of very low-luminosity dwarfs in infrared sky surveys suggests they comprise a sizeable population which is well represented by an extension of the field-star mass function, $`\mathrm{\Psi }(M)\propto M^{-\alpha }`$, with $`1<\alpha <2`$ (Reid et al. 1999). The occurrence frequency of multiplicity among these systems is completely unknown; it is an open question as to whether the distribution of their companions matches that of M dwarfs or bears the stamp of a different, sub-stellar formation mechanism. Stellar companions are detected in approximately 35% of M dwarf systems with a distribution peaking at a radius in the range 3–30 AU (Fischer & Marcy 1992; Henry & McCarthy 1993; Reid & Gizis 1997). Efforts to uncover the mass and radial distribution of extra-solar planets around M stars are just beginning to meet with success and have revealed super Jovian-mass planets within a few AU of their central stars, consistent with results for earlier spectral types (Marcy et al. 1998). The relationship of this population to that of binary companions and planetary systems like our own is a topic of current debate (Black 1997). The true answer will not be readily apparent until a more complete range of mass and orbital distances has been surveyed.
To date, very few multiple systems have been identified with L-dwarf components. Several L-dwarf secondaries have been discovered around nearby stars (Becklin & Zuckerman 1988; Rebolo et al. 1998; Kirkpatrick et al. 2000). Among a handful of known binary brown-dwarf systems (e.g., Basri & Martín 1997), only two have primary spectral types as late as L: 2MASSW J0345 is a double-lined spectroscopic L dwarf system (Reid et al. 1999), and DENIS-P J1228 was shown to be double in HST imaging observations (Martín et al. 1999). The latter is composed of equal-luminosity components with a projected separation of 0.275<sup>′′</sup> (5 AU at the 18 pc distance of DENIS-P J1228). Here we report preliminary results from a Keck near-infrared imaging survey of a large sample of low-luminosity dwarfs and outline a complementary study with Hubble Space Telescope.
## 2. Survey Characteristics
### 2.1. Keck/NIRC Imaging Survey
Our target sample is culled from the 2MASS and DENIS near-infrared sky surveys and consists of objects spectroscopically confirmed to be L dwarfs together with a smaller sample of nearby very late M dwarfs. Survey parameters are plotted in Fig. 1, including sky coverage, spectral type, and range of distances. Imaging is carried out at the Keck I telescope with NIRC, a cryogenically-cooled near-infrared camera which incorporates a 256$`\times `$256 Indium-antimonide array at the f/25 focus in an optical framework which yields a 0.15<sup>′′</sup> plate scale and 38<sup>′′</sup>-square field of view (Matthews & Soifer 1994). The survey is sensitive to companions brighter than $`m_\mathrm{K}`$ = 21 at separations greater than 1<sup>′′</sup> (5-50 AU in the sampled range of distances) within a $`20^{\prime \prime }\times 20^{\prime \prime }`$ square aperture (out to 100-1000 AU), and is capable of detecting components with luminosity close to that of the primary ($`m_\mathrm{K}13`$) at $``$0.3<sup>′′</sup> separation. At this level of sensitivity, several additional sources are detected in a typical frame. Repeat observations in a second epoch, one year or more later, are being taken to determine if any of these share a common proper motion with the target; second-epoch observations are complete for only a subset of the sample which includes 10 L dwarfs at present.
In addition to the common proper motion analysis of faint sources, we inspect the core of each of the primaries to search for extended emission associated with a marginally resolved binary. Second-epoch observations are used to obtain evidence of common proper motion and to mitigate systematic psf-distortion effects due to errors in phasing of the segmented primary mirror. Point-like sources observed nearby in the sky and within an hour of the target observations serve as psf measurements. Dithered images of candidate binaries and psf stars are not shifted and combined but are treated as independent data sets. Psf stars are fit in duplicate to each of the candidate binary images using a least-squares minimization method, to determine component properties.
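A schematic version of this decomposition step is sketched below (our own illustration, not the actual reduction pipeline; `image` and `psf` are assumed to be small, background-subtracted 2-D arrays, and the position-angle sign convention depends on the detector orientation):

```python
# Sketch: least-squares fit of two shifted, scaled copies of an empirical PSF to a
# candidate-binary image, returning separation, position angle, and flux ratio.
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import least_squares

def two_psf_model(params, psf):
    a1, x1, y1, a2, x2, y2 = params
    return a1 * shift(psf, (y1, x1)) + a2 * shift(psf, (y2, x2))

def fit_binary(image, psf, guess):
    residuals = lambda p: (two_psf_model(p, psf) - image).ravel()
    fit = least_squares(residuals, guess)
    a1, x1, y1, a2, x2, y2 = fit.x
    separation = np.hypot(x2 - x1, y2 - y1)                # in pixels (0.15''/pixel for NIRC)
    pa = np.degrees(np.arctan2(x2 - x1, y2 - y1)) % 360.0  # orientation-convention dependent
    return separation, pa, a2 / a1
```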
### 2.2. HST/WFPC2 Imaging Survey
High contrast companions within 0.5<sup>′′</sup> are better detected at spatial resolution that is not hampered by the effects of atmospheric seeing. We are carrying out a companion program with the Wide Field Planetary Camera 2 on HST to detect close companions to low-luminosity dwarfs. Equal-luminosity components are resolved at 0.09<sup>′′</sup> and contrasting luminosities of $`\delta `$m = 5 (I band) are detectable outside 0.32<sup>′′</sup> from the star.
## 3. Preliminary Results - Abundant L-Dwarf Binaries?
In preliminary analysis of Keck/NIRC image frames for which dual-epoch observations have been obtained, three objects met our criteria for reliable identification of a true close binary system (Koerner et al. 1999), including one imaged previously with HST/NICMOS by Martín et al. (1999). Contour plots of three L-dwarf binaries are displayed in Fig. 2, together with the psf stars used to decompose them into separate components. In Fig. 3 are plotted the results of psf-fits to obtain the separation and PA for the components of DENIS-P J1228, DENIS-P J0205, and 2MASSW J1146. Mean values are $`0.27\pm 0.03^{\prime \prime }`$, $`0.51\pm 0.03^{\prime \prime }`$, $`0.29\pm 0.06^{\prime \prime }`$ and $`33\pm 15^{\circ }`$, $`92\pm 18^{\circ }`$, $`206\pm 19^{\circ }`$, respectively. Projected separations correspond to physical separations of 4.9, 9.2, and 7.6 AU at distances implied by obtained trigonometric parallaxes (Dahn et al. 2000). Flux-component ratios for the binaries are $`1.1\pm 0.4`$, $`1.0\pm 0.4`$, and $`1.0\pm 0.3`$, respectively.
The binary systems presented here have similar projected separations (5 to 9 AU) and luminosity ratios near unity. They represent the first binary detections in preliminary analysis of a larger dual-epoch survey in which only 10 L-dwarf images have been completely analyzed in two epochs. No companions with wider separations or more highly contrasting luminosities were found thus far. These preliminary results suggest a conjecture for further testing: namely, that multiple systems are common in the L dwarf population, that their distribution peaks at radial separations like that of both Jovian planets in our solar system and M dwarfs generally (5–30 AU), and that low-contrast mass ratios are common. The latter claim is especially in need of testing, since our survey is not very sensitive to companions at the separations reported here if they have high luminosity contrast ratios. Further, the magnitude-limited surveys from which our sample is taken are biased toward the detection of equal-luminosity binaries, since their combined luminosity is greater than for single stars of the same spectral type. Ultimately, techniques with both high resolution and high dynamic range must be applied to a larger sample to reliably identify the distribution of circumstellar bodies that encircles this population of very cool objects.
## 4. Bayesian Inference of the Underlying Companion Distribution
We would like especially to discover the probability distribution of stellar and planetary companions in order to constrain theories of their origin; a successful theory should account for that distribution. In addition, an intensely human interest drives us to seek to understand how many habitable circumstellar environments exist and how typical is the planet on which we find ourselves. It will be decades at least before the inventory of circumstellar objects is complete enough to rely on counting statistics alone to provide the whole answer. In the interim, some regions of parameter space for model probability distributions will be more completely sampled than others. Relatively luminous companions at distances of 100 AU will soon be largely accounted for in nearly all stars detected nearby, for example. As parts of this census come to light, it will be challenging to ascertain the reliability of distribution estimates for substellar companions that are based on counting in incomplete samples.
Strictly speaking, the sampling of the companion distribution in our completed and combined Keck and HST surveys will still be incomplete, since high-luminosity-contrast companions close to the star or wide companions separated by more than 20$`\mathrm{"}`$ could go undetected. Furthermore, the range of linear separations is inhomogeneously sampled, since a wide range of distances is represented by the source list. Rather than simply count the number of detections and correct for incompleteness, we prefer to use a Bayesian model-fitting approach to quantify how well model companion distributions are constrained by our data. As outlined below, this approach yields the relative probability of a model distribution, given the data. By calculating this for a suitable range of models, we can determine both the most likely model, and the degree to which this choice is mandated by the data.
According to Bayes Theorem, the probability of a model given the data, $`P<M|D>`$, is calculated by multiplying the probability of the data given the model, $`P<D|M>`$, times the a priori probability of the model, $`P<M>`$. For the case of a model distribution that is a function only of linear separation, $`R=\theta /\pi `$ with angular separation $`\theta `$ and trigonometric parallax $`\pi `$, and luminosity $`L`$, the calculation of $`M(R,L)`$ is straightforward for an individual image frame $`D_i`$. The probability of a null detection is simply one minus the probability of a detection. Since the model is, itself, the probability of a detection, we can calculate this by integrating over one minus $`M(R,L)`$ as simply
$$P<D_i|M>=\int _{R_{in}}^{R_{out}}\int _{L_{ul}}^{L_{prim}}\left(1-M(R,L)\right)\,dR\,dL$$
where $`R_{in}`$ and $`R_{out}`$ define the inner and outer linear separations to which the image is sensitive, and $`L_{prim}`$ and $`L_{ul}`$ are, respectively, the luminosity of the primary and the upper-limit luminosity for the detection of a companion. Typically, $`L_{ul}`$ = $`L_{ul}(R)`$ for small angular separations. For images where a companion is detected at separation $`R^{}`$ with luminosity $`L^{}`$, the probability of the result, given the data, is given by
$$P<D_i|M>=\int _{R_{in}}^{R_{out}}\int _{L_{ul}}^{L_{prim}}\delta (R^{\prime },L^{\prime })\,M(R,L)\,dR\,dL$$
where $`\delta (R^{\prime },L^{\prime })`$ is the Dirac delta function. These terms may then be multiplied by the prior probability of the model according to Bayes Theorem. In the absence of any previous notions about the distribution, a “flat prior” may be used by simply setting $`P<M>`$ = 1.
The probability of a particular model, given all the image frames, is then the normalized sum of the probabilities for the N individual frames:
$$P<M|D>=\frac{\sum _iP<M|D_i>}{N}.$$
A wide range of models may be compared in this way by calculating the relative probability, $`P_{rel}<M_j|D>`$, for the $`j^{th}`$ model and normalizing over the whole suite of models considered:
$$P_{rel}<M_j|D>=\frac{P<M_j|D>}{\sum _iP<M_i|D>}.$$
This approach has the advantage of bringing to bear all the information inherent in an inhomogeneous data set and weighting proportionally its influence on the choice of the most probable models. If, for example, only a few images test the model in some range of linear separations, their contribution to the overall probability will be small, such that models which vary in their estimate of companions at those separations will not have widely contrasting probabilities. Conversely, constraints will be strong in regions of the model parameter space that are densely sampled by the data. By considering an appropriate range of models, confidence levels as a function of parameter values can be attached to the best-fit model.
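A schematic implementation of this bookkeeping is given below (a sketch under our own assumptions, not survey software: a model is taken to be a callable $`M(R,L)`$, and each frame is a record of its sensitivity window $`R_{in}`$, $`R_{out}`$, $`L_{ul}`$, $`L_{prim}`$ together with an optional detection at $`(R^{\prime },L^{\prime })`$):

```python
# Sketch of the Bayesian procedure described above: non-detection frames integrate
# 1 - M(R, L) over their sensitivity window, detection frames evaluate M at the
# observed companion, and the suite of models is then normalized for comparison.
import numpy as np

def frame_probability(model, frame, n_grid=64):
    R = np.linspace(frame["R_in"], frame["R_out"], n_grid)
    L = np.linspace(frame["L_ul"], frame["L_prim"], n_grid)
    if frame.get("detection") is None:
        RR, LL = np.meshgrid(R, L, indexing="ij")
        return np.trapz(np.trapz(1.0 - model(RR, LL), L, axis=1), R)
    R_det, L_det = frame["detection"]          # delta function picks out the detection
    return model(R_det, L_det)

def relative_model_probabilities(models, frames, priors=None):
    priors = priors if priors is not None else np.ones(len(models))
    p = np.array([prior * np.mean([frame_probability(m, f) for f in frames])
                  for m, prior in zip(models, priors)])
    return p / p.sum()                          # P_rel<M_j|D> across the model suite
```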
## 5. Parametrizing the Model Distribution
The scientific usefulness of the above methodology will depend heavily on the choice of models considered. It is possible, of course, to aim only to fit some analytic function to the data so as to derive a best-fit simplified representation of the observations. This is done easily by simply fitting a model which is parametrized in the directly measurable quantities, angular separation and relative luminosity. But it would be more worthwhile to derive a distribution function with physically meaningful parameters that have theoretical significance. The underlying properties which describe binary systems most completely are the orbital elements and masses. But theories of origin may be constrained by a few of these or by derivative quantities, such as semi-major axes or angular momentum. We note, for example, that binary origin simulations show a marked dependence on $`\beta `$, the ratio of rotational to gravitational energy in the original cloud (cf. Bonnell & Bastien 1992).
The testing of underlying physical models can proceed as above, so long as an appropriate transformation exists between the physical quantity and what is observed. For example, the most probable value of the semi-major axis, $`a_{\mathrm{rel}}`$, for a companion with observed angular separation $`\theta `$ and system distance $`d`$ has been estimated by Fischer & Marcy (1992) using Monte Carlo simulations to be
$$<a_{\mathrm{rel}}>=1.26d<\theta >.$$
This transformation can be incorporated easily into a scheme to determine an underlying model distribution of companions which is a function of $`<a_{\mathrm{rel}}>`$ rather than $`\theta `$. For main sequence stars, a transformation between luminosity and mass can be accomplished with relationships obtained by dynamic mass determinations (cf. Henry et al. 1999). For L dwarfs, the situation is not well-determined empirically but requires theoretical models which relate mass, age, and luminosity (e.g., Burrows et al. 1997). To obtain the coveted distribution of masses from these relations, assumptions about stellar ages will be required.
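For example (illustrative numbers only, not a result of this survey), the conversion for a pair resembling DENIS-P J1228 is simply:

```python
# Tiny worked example of the statistical conversion quoted above.
def most_probable_a_rel(theta_arcsec, distance_pc):
    return 1.26 * distance_pc * theta_arcsec       # semi-major axis in AU

# e.g. a 0.275'' pair at ~18 pc: projected separation ~5 AU, <a_rel> ~ 6.2 AU
print(most_probable_a_rel(0.275, 18.0))
```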
Further complications are introduced by the inclusion of higher-order multiple systems and, ultimately, in the consideration of planetary systems as well. The increased effort may well be worth the undertaking, since it may yield a general taxonomic classification of multiple dwarf and planetary systems with theoretical significance and descriptive power for characterizing the frequency and types of circumstellar systems. To this end, we will fit models in a variety of parametrized prescriptions. We thus consider the application of Bayesian inference to the problem of the low-luminosity dwarf companion frequency to comprise a pilot study for larger objectives.
## References
Basri, G., & Martín, E.L. 1997, in ASP Conf. Ser. 134, Brown Dwarfs and Extrasolar Planets, ed. R. Rebolo, E. Martin, & M.R. Zapatero-Osorio (San Francisco: ASP), 284
Becklin, E.E., & Zuckerman, B. 1988, Nature, 336, 656
Black, D.C. 1997, ApJ, 490, L171
Bonnell, I., & Bastien, P. 1992, ApJ, 401, 654
Burrows, A., Marley, M., Hubbard, W.B., Lunine, J.I., Guillot, T., Saumon, D., Freedman, R., Sudarsky, D., & Sharp, C. 1997, ApJ491, 856
Dahn, C., et al, 1999, in preparation
Fischer, D.A., & Marcy, G.W. 1992, ApJ, 396
Henry, T.J., & McCarthy, D.W.Jr. 1993, AJ, 106, 773
Henry, T.J., Franz, O.G., Wasserman, L.H., Benedict, G.F., Shelus, P.J., Ianna, P.A., Kirkpatrick, J.D., & McCarthy, D.W.Jr., 1999, ApJ, 512, 864
Kirkpatrick, J.D., Reid, I.N., Leibert, J., Cutri, R.M., Nelson, B., Beichman, C., Dahn, C., Monet, D.G., Gizis, J.E., & Skrutskie, M.F. 1999, ApJ, 519, 802
Kirkpatrick, J.D., et al. 2000, in preparation
Koerner, D.W., Kirkpatrick, J.D., McElwain, M.W., & Bonaventura, N.R. 1999, ApJ, in press
Marcy, G.W., & Butler, R.P. 1996, ApJ, 464, L147
Marcy, G.W., Butler, R.P., Vogt, S.S., Fischer, D., Lissauer, J.J. 1998, ApJ, 505, L147
Martín, E.L., Basri, G., Delfosse, X., & Forveille, T. 1997, A&A, 327, L29
Martín, E.L., Brandner, W., & Basri, G. 1999, Science, 283, 1718
Mayor, M., & Queloz, D, 1995, Nature, 378, 355
Nakajima, T., Oppenheimer, B.R., Kulkarni, S.R., Golimowski, D.A., Matthews, K., & Durrance, T. 1995, Nature, 378, 463
Rebolo, R., Zapatero-Osorio, M.R., Madruga, S., Bejar, V.J.S., Arribas, S., & Licandro, J. 1998, Science, 282, 1309
Reid, I.N., & Gizis, J.E. 1997, AJ, 113, 2246
Reid, I.N., Kirkpatrick, J.D., Liebert, J., Burrows, A., Gizis, J.E., Burgasser, A., Dahn, C.C., Monet, D., Cutri, R., Beichman, C.A., Skrutskie, M. 1999, ApJ, 521, 613 |
no-problem/9912/cond-mat9912326.html | ar5iv | text | # Critical current from dynamical boundary instability for fully frustrated Josephson junction arrays
## Abstract
We investigate numerically the critical current of two-dimensional fully frustrated arrays of resistively shunted Josephson junctions at zero temperature. It is shown that a domino-type mechanism is responsible for the existence of a critical current lower than the one predicted from the translationally invariant flux lattice. This domino mechanism is demonstrated for uniform-current injection as well as for various busbar conditions. It is also found that inhomogeneities close to the contacts makes it harder for the domino propagation to start, which increases the critical current towards the value based on the translational invariance. This domino-type vortex motion can be observed in experiments as voltage pulses propagating from the contacts through the array.
The two-dimensional (2D) Josephson junction arrays (JJA’s) are ideal testing grounds for static and dynamic properties related to vortices and constitute an intriguing field of physics by itself. They can be fabricated with high precision and are ideally suited for experiments and simulation studies, as well as theoretical approaches. Vortex physics is also of great importance for understanding the properties of high-$`T_c`$ superconductors, in particular, the interplay between vortex correlations, pinning, and dynamics. The present paper describes an interesting example of such an interplay. It focuses on what happens at the current threshold for the onset of resistance in the case of a 2D JJA in a perpendicular magnetic field. The threshold is in such a case caused by the onset of motion of the vortices induced by the external magnetic field. We find that this onset of vortex motion is caused by a domino-type effect corresponding to voltage pulses traveling through the sample.
The values of the critical currents in 2D JJA’s have been investigated theoretically by several authors. These studies have revealed that the zero-temperature critical current $`I_c`$ depends strongly on the value of the frustration $`f`$ defined by the number of flux quanta per plaquette induced by the external magnetic field, and they unanimously have obtained the value $`I_c=\sqrt{2}-1\approx 0.41`$ (in units of the critical current of a single junction) for the fully frustrated ($`f=1/2`$) square array. On the other hand, numerical calculations based on the resistively shunted junction (RSJ) model with the uniform-current injection method (see Ref. for details) have persistently given the value $`I_c=0.35(1)`$. The discrepancy has usually been attributed to either finite-size effects or an artifact of the uniform-current injection method, but without any really substantiating evidence for either claim. As to the first suggestion, the same numerical value has now been found in a very large array ($`256\times 256`$) (Ref. ) and the absence of any size dependence up to this array size appears to preclude the possibility of an explanation in terms of finite-size effects. The second possibility also by now seems less likely since attempts to model the injection of current through busbars, in close analogy with actual experimental situations (see Ref. for details), in fact seem to give an even smaller value. Furthermore, in Ref. an improved busbar method was proposed and $`I_c\approx 0.35`$, close to the one obtained for the uniform-current injection, was found. Thus a value lower than the theoretically predicted one has been obtained with a variety of injection methods.
In the present paper we resolve this long-standing discrepancy and find that the true critical current is in fact lower than the theoretically predicted one and corresponds to a domino-type mechanism starting from the contacts which breaks the translational invariance of the vortex lattice. This means that the theoretically predicted value, which presumes translational invariance, in fact gives an upper limit of $`I_c`$. We conclude that the true critical current is always lower and that the precise value, because it is given by a domino mechanism, is sensitive to the inhomogeneities close to the contacts.
We numerically study the fully frustrated $`L_x\times L_y`$ RSJ square arrays at zero temperature. We use the periodic boundary condition in the $`y`$ direction and the external current $`I_d`$ is inserted on the left boundary ($`x=0`$) and extracted on the right boundary ($`x=L_x`$). The current $`I_{ij}`$ from site $`i`$ to site $`j`$ is the sum of the supercurrent and the normal current:
$`I_{ij}=J_{ij}\mathrm{sin}(\theta _i-\theta _j+A_{ij})+\dot{\theta _i}-\dot{\theta _j},`$
where $`J_{ij}`$ is the critical current of the single junction, the fully frustrated case corresponds to the magnetic bond angle $`A_{ij}=\pi x`$ for all vertical links at $`x`$ and zero for all links in the $`x`$ direction, and finally, $`\dot{\theta _i}-\dot{\theta _j}`$ is the normal current (in a convenient unit system). The RSJ equations are obtained by demanding current conservation on each site and are integrated either by starting from the ground-state configuration found from the simulated annealing Monte Carlo method for $`I_d=0`$ or from a random configuration. The current injection methods are shown in Fig. 1 and are as follows: The uniform-current (UC) injection method corresponds to $`J_{ij}=1`$ for all links \[Fig. 1(a)\]; the conventional busbar (CB) method where the links on the boundaries have large $`J_{ij}`$ corresponding to a superconducting busbar (we use $`J_{ij}=10`$ for the fat links in Fig. 1); Simkin’s improved busbar (SB) method where in addition to the superconducting busbar there are no magnetic fields on the plaquettes adjacent to the boundaries \[Fig. 1(c)\]; the busbar method with inhomogeneities near the boundaries (IB) \[Figs. 1(d) and (e) are two variants, IB1 and IB2, respectively\].
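A minimal time-stepping sketch of these dynamics is given below (our own illustration, not the code used for the results in this paper; the array size, drive current, time step, and uniform-injection choice are arbitrary assumptions, and the busbar variants of Fig. 1 would simply modify $`J_{ij}`$ and the field on the edge plaquettes):

```python
# Sketch: RSJ dynamics of a small fully frustrated array with uniform current
# injection, periodic in y.  Currents are in units of the single-junction critical
# current and time is in units of t0; the run starts from a uniform phase state
# rather than the annealed ground state used in the paper.
import numpy as np

Lx, Ly, I_d, dt, steps = 8, 8, 0.36, 0.05, 2000
N = (Lx + 1) * Ly
idx = lambda x, y: x * Ly + (y % Ly)

bonds = []                                   # (i, j, A_ij): horizontal A=0, vertical A=pi*x
for x in range(Lx + 1):
    for y in range(Ly):
        if x < Lx:
            bonds.append((idx(x, y), idx(x + 1, y), 0.0))
        bonds.append((idx(x, y), idx(x, y + 1), np.pi * x))

lap = np.zeros((N, N))                       # Laplacian of the resistive network
for i, j, _ in bonds:
    lap[i, i] += 1.0; lap[j, j] += 1.0; lap[i, j] -= 1.0; lap[j, i] -= 1.0
lap_pinv = np.linalg.pinv(lap)

I_ext = np.zeros(N)
I_ext[[idx(0, y) for y in range(Ly)]] = I_d          # injected on the left edge
I_ext[[idx(Lx, y) for y in range(Ly)]] = -I_d        # extracted on the right edge

theta = np.zeros(N)
for _ in range(steps):
    out_flow = np.zeros(N)                           # supercurrent leaving each site
    for i, j, A in bonds:
        s = np.sin(theta[i] - theta[j] + A)
        out_flow[i] += s; out_flow[j] -= s
    theta_dot = lap_pinv @ (I_ext - out_flow)        # current conservation at every node
    theta += dt * theta_dot
# Voltages follow from differences of theta_dot, e.g. across neighboring columns.
```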
The translationally invariant ground state is a checker board pattern with a vortex at every second plaquette \[see Fig. 2(a)\]. Let’s first assume the translational symmetry even when an external current $`I_d`$ is turned on. Every second horizontal junction will then by symmetry have the same current and we below focus on the horizontal junctions with the largest current. For some external current $`I_d=I_s`$ the current through these links reaches the maximum supercurrent for a single junction, which is 1 in our current unit. A straightforward calculation shows that this occurs at $`I_s=(3-\sqrt{5})/2`$ (see Table I). For $`I_d<I_s`$ the ground state vortex configuration is always stable. However, at some $`I_d=I_c\ge I_s`$ this ground state configuration becomes unstable and dissipation sets in. In the translationally invariant case this happens at $`I_c=\sqrt{2}-1`$ (see Table I), which can readily be verified with the boundary condition in Ref. which preserves the translational invariance; $`I_c=0.4142(1)`$ has been obtained in excellent agreement with the prediction $`\sqrt{2}-1`$.
In reality, however, the current pattern enforced by the contacts is not fully compatible with the translationally invariant vortex pattern. One consequence of this is that the external current $`I_{sb}`$ needed before a horizontal junction at the boundary reaches its maximum supercurrent is lower than for the translationally invariant case. Table I gives the actual values for the boundary conditions shown in Fig. 1 together with the vortex column for which it occurs. This lower value of $`I_{sb}`$ suggests that the vortex columns close to the contacts may start to move and dissipate energy before the rest of the vortices. Figure 2 illustrates some of the possibilities: The transition from Fig. 2(a) to 2(b) is the translationally invariant case where the whole vortex lattice moves in one piece. The transition from Fig. 2(a) to 2(c) followed by 2(c) to 2(a) illustrates the case when only the vortex column closest to the boundary moves and the others remain fixed. Note that this is a boundary effect in the sense that it only produces a voltage drop for the horizontal junctions closest to the contacts and no voltage drop across the rest of the sample. Another possibility is that the motion of the first vortex column \[Fig. 2(a) to 2(c)\] is followed by the second column \[as illustrated by Fig. 2(c) to 2(d)\], and then followed by the third and so on without stopping. Such a consecutive motion of vortex columns starting from the contacts and continuing without stopping we refer to as a domino-type motion. We have found that this domino-type motion starts at a lower current than what is required for the translationally invariant motion and that consequently the threshold current for the domino motion is the true critical current. Thus the domino effect is responsible for the critical current for all the boundary conditions shown in Fig. 1 and the corresponding values are given in Table I.
The domino instability has two basic ingredients: The first is that the boundary and current injection break the translationally invariant current pattern close to the boundary which lowers the threshold current for the first vortex column to move. This is related to that $`I_{s1}`$ for the first column is smaller than $`I_s`$ for the translationally invariant case (see Table I). The second ingredient is that this motion creates an instability which propagates through the whole sample. A hand-waving argument for this goes as follows: One first notes that as long as the checkerboard pattern of vortices is present a horizontal junction with lower current is always followed by one of higher current. However, when the first column moves \[as illustrated by Fig. 2(a) to 2(c)\] then two vortices become neighbors. This means that now two horizontal junctions with higher currents follow each other. Because of current conservation this means that the junctions in the second column with higher currents will momentarily further increase which makes this column unstable. The motion of the second column in the same fashion causes the third column to become unstable and so on.
Figures 3(a) and (b) illustrate the domino mechanism for the uniform current injection: Figure 3(a) shows that the voltage is zero up to a critical current $`I_c`$. The fact that the voltage per length saturates to a nonzero value above $`I_c`$ as the distance between the contacts is increased, demonstrates that it is a true onset of dissipation across the whole sample. Figure 3(b) shows the voltage drop $`V(x,t)`$ at time $`t`$ for the horizontal junction at position $`x`$ along a row of junctions in the presence of a current slightly larger than the critical current \[$`I_d=0.36>I_c=0.350(1)`$\]. As seen the voltage pulse always travels from one of the contacts towards the opposite in accordance with the domino mechanism. In addition, the interference between domino propagations traveling in opposite directions creates interesting pattern in this 3D plot. A measurable consequence of this is that, if the voltage drop across different positions along the current direction is measured as a function of time, then traveling voltage pulses can be observed. This is in contrast to the translationally invariant case where the voltage appears at the same time along the whole current direction as is illustrated in Fig. 3(c). This translationally invariant case is simulated with the fluctuating-twist-boundary method, as is described in Ref. .
The domino mechanism applies to all the situations shown in Fig. 1 with some variants: For example, in the case of the conventional busbar injection \[see Fig. 1(b)\] the first vortex column becomes unstable at a current $`I_{cb}`$ but the domino avalanche does not set in until a larger current $`I_c`$ is applied (see Table I). This is illustrated in Fig. 4, which shows the onset of dissipation at $`I_{sb}`$ for a finite system and a further increase at $`I_c`$. However, as the distance between the contacts becomes larger dissipation between $`I_{cb}`$ and $`I_c`$ decreases towards zero. The situation between $`I_{cb}`$ and $`I_c`$ is hence a boundary effect of the type illustrated by Fig. 2(a) to 2(c) followed by 2(c) to 2(a), i.e., only the first column moves.
In Ref. the critical current for a fully frustrated square array of Josephson junctions was measured and found to be $`I_c\approx 0.42(2)`$. As apparent from Table I this value is larger than what is obtained for the RSJ model with uniform-current injection \[Fig. 1(a)\] and busbar injections \[Fig. 1(b) and (c)\] and is in fact close to the translationally invariant value. This means that the domino mechanism is somehow suppressed. One way of suppressing this mechanism is to introduce barriers close to the boundary which prevents the domino avalanche from spreading. The situation in Fig. 1(d) shows an extreme variant of this and as seen from Table I this results in a critical current close to the translationally invariant case. However, also in this case the domino mechanism is responsible for the onset of dissipation. Thus the translationally invariant value appears to be an upper limit and the true critical current is always lower and given by the domino mechanism. A perhaps more realistic situation is with some imperfections close to the boundaries which creates barriers for the vortex motion like in Fig. 1(e) and as seen from Table I the critical current is again close to the translationally invariant value. Figure 5 shows the domino mechanism for this case in a 3D plot. \[Note the similarity with Fig. 3(b).\]
The time scale of the Josephson junction array is given by $`t_0=\hbar /2eR_NJ`$ with the shunt resistance $`R_N`$. For the $`1000\times 1000`$ $`SNS`$ junction array in Ref. where $`R_N\approx 2`$ $`\mathrm{m}\mathrm{\Omega }`$ and $`J\approx 7`$ mA this means $`t_0\approx 10^{-11}`$ sec. The time for a voltage pulse to reach the middle would then for the uniform current injection \[Fig. 3(b)\] as well as for the case with inhomogeneities (Fig. 5) be of the order of $`10^{-7}`$ sec. Thus it seems that it should be possible to verify the domino mechanism by either direct voltage measurements or by some vortex imaging technique.
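The quoted estimate is easy to reproduce (a back-of-the-envelope check with rounded SI values):

```python
hbar, e = 1.055e-34, 1.602e-19        # J*s, C
R_N, J = 2e-3, 7e-3                   # 2 m-ohm shunt resistance, 7 mA critical current
t0 = hbar / (2 * e * R_N * J)
print(f"t0 = {t0:.1e} s")             # ~2e-11 s, i.e. of order 10^-11 sec
```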
In summary, on the basis of simulations for the RSJ model, we have concluded that the critical current for a 2D fully frustrated Josephson junction array is due to a domino mechanism and that the voltage pulses resulting from the domino avalanches should be open to experimental verification.
Discussions with C. J. Lobb, R. S. Newrock, and S. Teitel are gratefully acknowledged. The research was supported by the Swedish Natural Science Research Council through Contract No. FU 04040-322. |
no-problem/9912/cond-mat9912215.html | ar5iv | text | # The spin-orbit interaction as a source of new spectral and transport properties in quasi-one-dimensional systems
## Abstract
We present an exact theoretical study of the effect of the spin-orbit (SO) interaction on the band structure and low temperature transport in long quasi-one-dimensional electron systems patterned in two-dimensional electron gases in zero and weak magnetic fields. We reveal the manifestations of the SO interaction which cannot in principle be observed in higher dimensional systems.
It is known that an electron moving in an electric field experiences not only an electrostatic force but also a relativistic influence that is referred to as the spin-orbit (SO) interaction (or spin-orbit coupling ). It is caused by Pauli coupling between the spin moment of an electron and a magnetic field which appears in the rest frame of the electron due to its motion in the electric field. The Hamiltonian of the SO interaction has the form :
$$\widehat{H}_{SO}=\frac{\hbar }{(2M_0c)^2}𝐄(𝐑)\cdot \left[\widehat{𝝈}\times \left\{\widehat{𝐩}+\frac{e}{c}𝐀(𝐑)\right\}\right].$$
(1)
Here $`M_0`$ is the free electron mass, $`\widehat{𝐩}`$ is the canonical momentum operator, $`\widehat{𝝈}`$ is the Pauli matrices, $`𝐄(𝐑)`$ is the electric field, $`𝐀(𝐑)`$ is a vector potential, and $`𝐑`$ is a 3D position vector. Usually the Hamiltonian (1) results in a spin-orientation dependence of the electron energy and/or wave functions. This dependence can become important if electric fields acting on a system of moving electrons are sufficiently strong.
One of the most promising solid-state nanostructures for the observation of SO-interaction effects is the quasi-one-dimensional electron system (Q1DES) patterned in two-dimensional electron gases (2DEG). In contrast with higher dimensional structures, Q1DES have three independent sources of strong electric fields: (i) crystal-field potential that is present in all dimensionalities owing to the intermolecular forces; (ii) a quantum-well potential that confines electrons to a 2D layer at the surface of the crystal; (iii) a transverse (in-plane) electric potential that is applied to squeeze the 2DEG into a quasi-one-dimensional channel . The strength of the in-plane potential determines an effective width of a Q1DES that can be controlled by changing the transverse voltage. In sufficiently narrow channels the transverse electric field can be made comparable with the other two electrostatic contributions.
The study of the SO interaction in Q1DES is interesting from the standpoint of remarkable transport phenomena which they exhibit: ballistic qiantisation of conductance ; the 0.7 conductance structure ; magnetic depopulation ; and negative magnetoresistance . Since these phenomena depend on the peculiarities of the energy spectrum of electrons, any new mechanism leading to non-trivial changes in the spectrum (especially to those involving the spin) may affect transport properties and thereby help their understanding.
Earlier theoretical and experimental works on the SO-related effects dealt mainly with 3D and 2D systems and did not touch on aspects of the SO coupling in Q1DES. In this paper we present the results of a theoretical analysis of the effect of the SO interaction on the energy spectrum and low temperature (ballistic) conductance of a long Q1DES. Since the crystal-field contribution to the SO interaction energy can be made negligible in comparison with the quantum-well effect in a variety of systems , we take into account two sources of the SO coupling: the quantum-well confinement in the direction perpendicular to the plane of the 2DEG and the confining electric potential transverse to the 2DEG. We show that even if the SO coupling due to the transverse potential is left out, the very presence of this potential changes drastically the SO-interaction effects caused by the quantum-well field in comparison with a purely 2D situation. In addition to this, the contribution of the transverse potential to the SO coupling adds interesting features to the electron energy spectrum and the conductance which cannot be accounted for by simply renormalising the quantum-well field. Also we find that relatively weak magnetic fields emphasise the effects of the SO interaction in Q1DES.
A unique feature of semiconductor Q1DES is that their properties can be varied significantly at the stage of design (via chemical composition, band engineering, external fields, etc.). In particular, it is possible to create systems with wide range of carrier concentrations including values where the strength of electron-electron interactions is relatively weak. On the other hand, the strength of the SO coupling can simultaneously be enhanced by, for example, increasing confining electric fields and using materials with larger SO constants (InAs, PbTe, etc.). As a result, experimental situations can be achieved when the SO coupling becomes dominant. In this case it is reasonable to assume that the electron-electron interaction does not remove SO effects in the band structure of Q1DES and they can be studied within the single-particle approximation. As regards the ballistic conductance of a quantum wire, it has been proved not to be renormalised by electron-electron interactions . Based on these arguments, we consider here a free 2DEG within one-band effective mass approximation . The corresponding Hamiltonian has the form:
$$\widehat{H}=\frac{1}{2M}\left(\widehat{𝐩}+\frac{e}{c}𝐀\right)^2+V(𝐑)+\frac{g}{2}\mu _B\left(\widehat{𝝈}𝐁\right)+\widehat{H}_{SO}.$$
(2)
Here $`M`$ is the effective electron mass, $`g`$ is the Landé $`g`$-factor, and $`\mu _B`$ is the Bohr magneton. The vector potential $`𝐀`$ is chosen in the Landau gauge $`𝐀(𝐑)=Bx𝐲`$ with a magnetic field $`𝐁=\mathrm{curl}𝐀=B𝐳`$ being perpendicular to the 2DEG. In line with Refs. , the transverse confining potential $`V(𝐑)`$ is approximated by a parabola
$$V(𝐑)\approx V(x)=(M\omega ^2/2)x^2,$$
(3)
where $`\omega `$ controls the strength of the confining potential.
We assume that the SO Hamiltonian $`\widehat{H}_{SO}`$ (1) in Eq. (2) is formed by two contributions: $`\widehat{H}_{SO}=\widehat{H}_{SO}^\alpha +\widehat{H}_{SO}^\beta `$. The first one, $`\widehat{H}_{SO}^\alpha `$, arises from the quantum-well electric field that can reasonably be assumed uniform and directed along the $`z`$-axis, so that $`\widehat{H}_{SO}^\alpha `$ is given by
$$\widehat{H}_{SO}^\alpha =\frac{\alpha }{\hbar }\left[\widehat{𝝈}\times \left(\widehat{𝐩}+\frac{e}{c}𝐀\right)\right]_z.$$
(4)
The SO-coupling constant $`\alpha `$ takes values $`10^{-10}`$–$`10^{-9}`$ eV$`\times `$cm for different systems . We will refer to this mechanism of the SO coupling as $`\alpha `$-coupling.
The second contribution $`\widehat{H}_{SO}^\beta `$ to $`\widehat{H}_{SO}`$ comes from the in-plane electric field $`𝐄=_𝐑V=M\omega ^2𝐱`$ caused by the transverse confining potential (3):
$$\widehat{H}_{SO}^\beta =\frac{\beta }{\hbar }\frac{x}{l_\omega }\left[\widehat{𝝈}\times \left(\widehat{𝐩}+\frac{e}{c}𝐀\right)\right]_x,l_\omega =\sqrt{\hbar /M\omega }.$$
(5)
By comparison with typical quantum-well and transverse electric fields the SO-coupling constant $`\beta `$ in Eq. (5) can be roughly estimated as at least $`\beta \sim 0.1\alpha `$. Moreover, in square quantum wells where the value of $`\alpha `$ is considerably diminished the constant $`\beta `$ may well compete with $`\alpha `$. We will call the SO interaction arising from the transverse confining potential (3) $`\beta `$-coupling.
To calculate the energy spectrum of electrons we must find eigenvalues $`E`$ of the Schrödinger equation $`\widehat{H}\mathrm{\Psi }=E\mathrm{\Psi }`$ where the wave function $`\mathrm{\Psi }=\mathrm{\Psi }(𝐑)=\left\{\mathrm{\Psi }(𝐑)_{}\mathrm{\Psi }(𝐑)_{}\right\}`$ is a two-component spinor. Since the Hamiltonian $`\widehat{H}`$ (2) – (5) is translationally invariant in the $`y`$-direction, the wave functions $`\mathrm{\Psi }_{}(𝐑)`$ are plane waves propagating along the $`y`$-axis, i.e. $`\mathrm{\Psi }_{}(𝐑)=\mathrm{exp}(ik_yy)\mathrm{\Phi }_{}(x)`$, and the longitudinal energy is given by $`E_y=\mathrm{}^2k_y^2/2M`$, where $`k_y`$ is the longitudinal wave number. The equations for $`\mathrm{\Phi }_{}(x)`$ stem from the Schrödinger equation:
$`\mathrm{\Phi }_{}^{\prime \prime }+\left[\epsilon _x\gamma (l_\omega /l_B)^2a_{}t^2b_{}t\right]\mathrm{\Phi }_{}(t)=`$ (6)
$`(l_\omega /l_\alpha )\left\{\pm \mathrm{\Phi }_{}^{}+\left[(l_\omega /l_B)^2t+k_yl_\omega \right]\mathrm{\Phi }_{}(t)\right\},`$ (7)
$`a_{}`$ $`=`$ $`1+(l_\omega /l_B)^4\pm (l_\omega /l_B)^2(l_\omega /l_\beta ),`$ (8)
$`b_{}`$ $`=`$ $`2(k_yl_\omega )\left[(l_\omega /l_B)^2\pm (1/2)(l_\omega /l_\beta )\right],`$ (9)
where $`l_B=\sqrt{c\hbar /eB}`$ is the magnetic length and $`l_{\alpha (\beta )}=\hbar ^2/2M\alpha (\beta )`$ are typical spatial scales associated with the $`\alpha `$\- ($`\beta `$-) coupling. The quantities $`\epsilon _x\equiv (k_xl_\omega )^2`$ and $`t=x/l_\omega `$ are the dimensionless transverse energy and coordinates, $`k_x^2=(2M/\hbar ^2)E-k_y^2`$, and $`\gamma =(M/M_0)g/2`$.
As opposed to all the other terms in Eq. (2), the operator $`\widehat{H}_{SO}^\alpha `$ (4) is non-diagonal in the spin space. Therefore, as long as the $`\alpha `$-coupling is finite (i.e. if $`l_\omega /l_\alpha \ne 0`$), the equations (7) are coupled to each other. It is therefore natural that the behaviour of the transverse energy $`\epsilon _x`$ which is determined by Eqs. (7) crucially depends on whether or not the $`\alpha `$-coupling is present in the system.
For zero $`\alpha `$-coupling ($`l_\omega /l_\alpha =0`$) Eqs. (7) decouple and reduce to analytically solvable Hermite equations. The transverse energy is then given by
$`\epsilon _x^{}`$ $`=`$ $`(2n+1)a_{}^{1/2}\pm \gamma (l_\omega /l_B)^2`$ (11)
$`a_{}^1\left[(l_\omega /l_B)^2(1/2)(l_\omega /l_\beta )\right]^2(k_yl_\omega )^2,`$
$`n=0,1,2,\mathrm{}`$ and the wave functions $`\varphi _{}^n(t)`$ form complete sets. The functions $`\epsilon _x^{}=\epsilon _x^{}(k_y)`$ resemble well-known magneto-electric parabolic subbands with the only exception that finite $`\beta `$-coupling brings in a spin-orientation dependence of the subband curvature.
We now consider the case of finite $`\alpha `$-coupling ($`l_\omega /l_\alpha 0`$). We do not find that the coupled Eqs. (7) can be solved in an explicitly analytical form. However, a strongly convergent matrix form exists. This is found by expanding each unknown wave function $`\mathrm{\Phi }_{}(t)`$ and $`\mathrm{\Phi }_{}(t)`$ in terms of both $`\varphi _{}^n(t)`$ and $`\varphi _{}^n(t)`$ and then combining all four expansions obtained into a closed linear homogeneous system of algebraic equations with respect to the coefficients of one of the expansions. The exact spectrum $`\epsilon _x`$ has been found numerically as zeros of the corresponding determinant as a function of $`k_y`$ (see Ref. for more details for the zero-magnetic-field case). The exploitation of the four expansions has allowed us to avoid inversions of infinite matrices, while the conveniently chosen bases have made the roots of the determinant rapidly convergent.
Solid lines in Fig. 1(a) present graphs of $`\epsilon _x=\epsilon _x(k_yl_\omega )`$ for zero $`\beta `$-coupling ($`l_\omega /l_\beta =0`$) and zero magnetic field. Here we see two-fold spin degeneracy of all quantum levels at $`k_y=0`$. Once $`k_y`$ becomes finite, the SO interaction lifts this degeneracy producing an energy splitting $`\mathrm{\Delta }\epsilon _x=\epsilon _x^{}\epsilon _x^{}0`$ between electron states with different spin orientations. For small $`k_yl_\omega 2`$ this splitting is linear in $`k_y`$ and agrees with results of both theoretical and experimental research on the SO-interaction effects caused by the quantum-well field in 2D systems. However, in a purely 2D geometry the linear splitting $`\mathrm{\Delta }\epsilon _xk_y`$ is known to be exact for all values of $`k_y`$, however large. In contrast to this, the Q1DES dispersion curves start to diverge from the linear behaviour at $`k_yl_\omega 2.5`$ and eventually anticross with an energy branch corresponding to the next higher (lower) quantum number $`n`$. This is a direct consequence of the presence of the transverse confining potential (3). Even though this potential does not contribute to the SO interaction directly (because $`l_\omega /l_\beta =0`$), it nevertheless strongly affects the other (quantum-well) mechanism of the SO coupling. More specifically, in the presence of the potential (3) the transverse wave functions $`\mathrm{\Phi }_{}(x)`$ of the unperturbed system (i.e. with $`l_\omega /l_\alpha =0`$) are no longer simple plane waves $`\mathrm{exp}(ik_xx)`$ (as it is in a stritcly 2D situation) but become parabolic cylinder functions . When the SO perturbation operator \[the rhs’ of Eqs. (7)\] acts on these functions, it projects the $`n`$-th state onto the $`(n\pm 1)`$-st states producing an effective hybridisation of “neighbouring” states and therefore the anticrossing of the energy branches in Fig. 1(a) and the non-monotonic dependence $`\mathrm{\Delta }\epsilon _x(k_y)`$ (see Ref. for more details).
The application of a weak ($`l_\omega /l_B1`$) perpendicular magnetic field bends all the energy curves downwards by an amount $`k_y^2`$ \[cf. solid and dotted lines in Fig. 1(a)\]. This behaviour is consistent with Eq. (11). We note that a weak magnetic field has only a small effect on the dispersion law to the left of the anticrossing region, i.e. for $`k_yl_\omega 3`$. For strong magnetic fields ($`l_\omega /l_B10`$), when the distance between Landau levels is very large, no anticrossing effects due to the SO interaction can be seen.
¿From Figs. 1(a,b) it is seen that switching on the $`\beta `$-coupling enhances the anticrossing of “neighbouring” energy branches. Moreover, the strength of the anticrossing now depends on $`n`$ and grows with $`n`$. Interestingly, this effect reduces the linear energy splitting $`\mathrm{\Delta }\epsilon _xk_y`$, in contrast to the expectation that an additional mechanism of the SO interaction should intensify the splitting rather than suppress it. What actually happens is that the $`\beta `$-coupling, as well as the $`\alpha `$-coupling, gives a contribution to the hybridisation of neighbouring electron states . As a result, the hybridisation becomes stronger and leads to the more pronounced anticrossing and effectively to the suppression of the energy splitting. This effect indicates the independent nature of $`\beta `$-coupling and its irreducibility to the $`\alpha `$-coupling. Owing to the enhanced interstate hybridisation caused by the $`\beta `$-coupling, the anticrossing of energy branches in Fig. 1(b) can be seen in a wider region of $`k_yl_\omega `$ up to $`k_yl_\omega 1314`$. A weak magnetic field modifies the spectrum in Fig. 1(b) in basically the same way as it does in Fig. 1(a) \[cf. solid and dotted lines in Fig. 1(b)\].
The electron eigenstates that were discussed above can be proven to obey the fundamental current-conservation identity so that a current can travel adiabatically in any of these states without scattering into any other. This property allows the low temperature (ballistic) conductance $`G`$ of a Q1DES to be calculated directly from the energy spectrum by relating it to the number $`M`$ of forward propagating electron modes at a given Fermi energy $`\epsilon _F`$ via simple Landauer formula : $`G=(e^2/h)M(\epsilon _F)`$.
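As a schematic illustration of this counting (our own sketch, not the procedure used for Fig. 2; `bands` is assumed to hold the spin-split subband energies sampled on a common $`k_y`$ grid):

```python
# Sketch: number of forward-propagating modes M(E_F) from sampled dispersions,
# giving the ballistic conductance G = (e^2/h) M(E_F).
import numpy as np

def count_forward_modes(bands, k, E_F):
    M = 0
    for eps in bands:                                  # one curve eps(k_y) per subband
        slope = np.gradient(eps, k)
        crossings = np.where(np.diff(np.sign(eps - E_F)) != 0)[0]
        M += int(np.sum(slope[crossings] > 0))         # right-movers at the Fermi level
    return M

# Non-monotonic portions of eps(k_y) contribute extra positive-slope crossings,
# which is what produces the narrow conductance peaks discussed below.
```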
The most interesting effects on $`G`$ are obtained for strong $`\alpha `$-coupling when $`l_\omega /l_\alpha 1.4`$. Here the anticrossing (non-monotonic) portion of curves $`\epsilon _x(k_y)`$ in Fig. 1(a) comes so close to the $`y`$-axis that the longitudinal term $`(k_yl_\omega )^2`$ in the total subband energy $`\epsilon =\epsilon _x+(k_yl_\omega )^2`$ does not disguise completely the original non-monotonicity of $`\epsilon _x(k_y)`$. As a result, we see a small non-monotonic portion (“bump”) on all the energy curves $`\epsilon (k_y)`$ in Fig. 2(a) \[see a magnified bump for the lowest level $`n=0`$ in Fig. 2(b)\]. Remarkably, these bumps give rise to three propagating electron modes as opposed to just one created by any monotonic interval of the spectrum. Furthermore, two of these modes \[the two leftmost black cicles in Fig 2(b)\] “mirror” each other in the sense that they have nearly the same spatial wave functions but oppositely directed longitudinal group velocities $`v_y=\mathrm{}^1(\epsilon /k_y)`$. It is therefore likely that weak elastic scattering between the forward and backward propagating modes may cause directed localisation and the twin modes will not contribute to the net current. However, in a sufficiently clean system the existence of such unusual modes could give rise to sharp ($`0.1`$ meV wide) periodic peaks in the dependence $`G(\epsilon _F)`$ \[Fig. 2(c)\].
A second manifestation of the $`\alpha `$-coupling in Fig. 2(c) is a shift of the conductance quantisation steps to lower energies in comparison with the case of zero SO interaction (cf. solid and dotted lines). This effect is caused by energy branches that go downwards in the region of the linear energy splitting \[see Fig. 1(a)\] and therefore lower the energy of a band edge.
In Fig. 1(b) we saw that switching on the $`\beta `$-coupling reduces the energy splitting created by the $`\alpha `$-coupling. As applied to the subband energy $`\epsilon (k_y)`$ this means suppression of the non-monotonic bumps and eventually quenching the peak-like structure in $`G(\epsilon _F)`$. For example, for $`l_\omega /l_\beta =0.2`$ only one (the lowest) bump in Fig. 2(a) survives and hence only the first peak in $`G(\epsilon _F)`$ can potentially be observed. The existence of the single peak (or just a few peaks) could be a clear experimental indication of the presence of the $`\beta `$-coupling in the system.
In contrast to $`\beta `$-coupling, a weak perpendicular magnetic field emphasises the conductance features caused by the $`\alpha `$-coupling. Indeed, a negative effective potential $`k_y^2`$ due to a magnetic field \[see Eq. (11) and Figs. 1(a,b)\] compenstates partially to the contribution of the longitudinal energy $`(k_yl_\omega )^2`$ to the total subband energy $`\epsilon =\epsilon _x+(k_yl_\omega )^2`$. Hence the non-monotonic portions of the transverse energy spectrum $`\epsilon _x`$ in Figs. 1(a,b) are now more important in forming $`\epsilon `$ than they were in zero magnetic field. As a result, the amplitude (height) of the bumps in Fig. 2(a) will increase and the conductance peaks in Fig. 2(c) become wider (2-3 times). As regards the peaks destroyed by the $`\beta `$-coupling, they reappear one by one starting from the lowest as the magnetic field is being increased.
In conclusion, we have revealed features of the energy spectrum of electrons and low temperature conductance arising from the specifics of the spin-orbit interaction in quasi-one-dimensional electron systems: non-monotonic wave vector dependence of subband energies, anticrossings between subbands, additional subband minima, and sharp peaks in the ballistic conductance.
AVM thanks the ORS, COT, and Corpus Christi College for financial support. CHWB thanks the EPSRC for financial support. |
no-problem/9912/quant-ph9912026.html | ar5iv | text | # On stimulated transitions between the self-trapped states of the nonlinear Schrödinger equation
## I Introduction
The nonlinear Schrödinger equation (NLS),
$$i\hbar \frac{\partial \psi }{\partial t}=-\frac{\hbar ^2}{2m}\mathrm{\Delta }\psi +U\left(\vec{r}\right)\psi +\lambda \left|\psi \right|^2\psi ,$$
(1)
serves as a tool for the account of many physical phenomena. The stationary version of this equation was introduced by Deigen and Pekar to describe the self-trapped (autolocalized) states of electrons in deformable crystal lattice (, see also ). Gross and Pitayevskii derived Eq. (1) as the mean-field approximation for the macroscopic wavefunction $`\psi (\stackrel{}{r},t)`$ of the Bose - Einstein condensate of the non-ideal Bose gas at vanishing temperature (see also ). The third avatar of the NLS came in the realm of the nonlinear optics, where $`\psi (\stackrel{}{r},t)`$ represents the envelope of a quasimonochromatic electromagnetic wave. The exact solution of the one-dimensional homogeneous ($`U\left(\stackrel{}{r}\right)=0`$) NLS found by Zakharov and Shabat formed the modern paradigm of the soliton theory .
Although the solutions of the NLS were extensively studied, it seems that comparatively little is known about their properties in case of time-dependent potentials. In this paper we address ourselves to a specific problem of this class. We shall consider the one-dimensional Eq. (1) with the potential that consists of two parts: a permanent potential $`U\left(x\right)`$ that has the form of symmetric double well, and some time-dependent potential $`V(x,t)`$ that will be called perturbation. In the absence of perturbation the properties of the stationary solutions of Eq. (1), that have the form
$$\psi (x,t)=\mathrm{\Phi }\left(x\right)\mathrm{exp}\left(-i\frac{E}{\hbar }t\right),$$
(2)
depend essentially on the nonlinearity parameter $`\lambda `$. At small $`\lambda `$ there is an infinite set of modes (2) that have symmetric wave functions - odd or even, $`\mathrm{\Phi }\left(-x\right)=\pm \mathrm{\Phi }\left(x\right)`$. With the increase of $`\lambda `$ at some threshold value of $`\lambda _c`$ a pair of stationary solutions (2) with broken symmetry, $`\mathrm{\Phi }_s\left(-x\right)\ne \pm \mathrm{\Phi }_s\left(x\right)`$, comes into being. These solutions describe the states of the particle that are self-trapped in one of the wells of the permanent potential. For $`\lambda <0`$ the corresponding energy $`E_s`$ is lower than that of any symmetric mode .
Our main concern will be the following question: if the initial state of the system is one of these self-trapped states, then how the system will evolve under the influence of the non-stationary perturbation? In particular, can the perturbation transfer the system completely to the opposite self-trapped state?
There is a favorable circumstance that allows us to simplify the problem. It happens that at moderate $`\lambda >\lambda _c`$ even the self-trapped modes of high asymmetry can be accurately described by a linear combination of the two lowest symmetric modes, the even $`\mathrm{\Phi }_0\left(x\right)`$ and the odd $`\mathrm{\Phi }_1\left(x\right)`$ . Therefore, in studying the problem we can restrict ourselves to an analysis of the evolution of the two-level system. In Section II we derive the basic equations for this model. In Section III we study the influence of an external perturbation that depends harmonically on time. The evolution of the system under the influence of broadband noise is studied in Section IV. Section V contains the concluding discussion.
## II The basic equations
For the future use we introduce the following quantities related to the (supposedly real) eigenfunctions $`\mathrm{\Phi }_0\left(x\right)`$ and $`\mathrm{\Phi }_1\left(x\right)`$:
$$J_{00}=\int \mathrm{\Phi }_0^4\,dx,\qquad J_{01}=\int \mathrm{\Phi }_0^2\mathrm{\Phi }_1^2\,dx,\qquad J_{11}=\int \mathrm{\Phi }_1^4\,dx.$$
(3)
Let’s represent the wave function of the system by the superposition of two lowest symmetric modes,
$$\psi (x,t)=b_0\left(t\right)\mathrm{\Phi }_0\left(x\right)e^{-i\beta _0t}+b_1\left(t\right)\mathrm{\Phi }_1\left(x\right)e^{-i\beta _1t},$$
(4)
where $`\beta _i=\hbar ^{-1}\left(E_i+\lambda J_{ii}\right)`$. Substituting Eq. (4) into Eq. (1), multiplying by $`\mathrm{\Phi }_i\left(x\right)`$, and integrating over the coordinate $`x`$, we get the following system of two equations for the complex amplitudes $`b_i`$:
$$\begin{array}{c}i\hbar \frac{db_0}{dt}=\lambda b_0\left|b_0\right|^2J_{00}-2\lambda b_0\left|b_1\right|^2J_{01}-\lambda b_0^{*}b_1^2J_{01}e^{i2\left(\beta _0-\beta _1\right)t}\\ +b_0V_{00}\left(t\right)+b_1V_{01}\left(t\right)e^{i\left(\beta _0-\beta _1\right)t},\\ i\hbar \frac{db_1}{dt}=\lambda b_1\left|b_1\right|^2J_{11}-2\lambda b_1\left|b_0\right|^2J_{01}-\lambda b_0^2b_1^{*}J_{01}e^{-i2\left(\beta _0-\beta _1\right)t}\\ +b_1V_{11}\left(t\right)+b_0V_{01}\left(t\right)e^{-i\left(\beta _0-\beta _1\right)t},\end{array}$$
(5)
where the matrix elements of the perturbation are given by the integrals
$$V_{ij}\left(t\right)=\int \mathrm{\Phi }_i\left(x\right)\widehat{V}(x,t)\mathrm{\Phi }_j\left(x\right)dx.$$
(6)
The system (5) conserves the norm of the state $`\psi (x,t)`$ (the sum of probabilities $`\left|b_0\right|^2+\left|b_1\right|^2=1`$), and the common phase of the wave function (4) is physically irrelevant. Therefore we can describe the evolution of the system by just two real variables. The complex amplitudes can be cast in the form $`b_i=\sqrt{n_i}\mathrm{exp}(i\vartheta _i)`$, where $`n_i`$ and $`\vartheta _i`$ are real time-dependent variables. Let’s introduce the population difference $`\mathrm{\Delta }=n_0-n_1`$ and the phase $`\mathrm{\Theta }=2\left(\vartheta _0-\vartheta _1\right)+2\left(\beta _0-\beta _1\right)t`$. Then the system (5) turns into the equations
$$\begin{array}{c}\dot{\mathrm{\Delta }}=B\left(1-\mathrm{\Delta }^2\right)\mathrm{sin}\mathrm{\Theta }+F\left(t\right)\frac{\sqrt{1-\mathrm{\Delta }^2}}{2}\mathrm{sin}\frac{\mathrm{\Theta }}{2},\\ \dot{\mathrm{\Theta }}=\mathrm{\Omega }+2A\mathrm{\Delta }+2B\mathrm{\Delta }\mathrm{cos}\mathrm{\Theta }-F\left(t\right)\frac{\mathrm{\Delta }}{\sqrt{1-\mathrm{\Delta }^2}}\mathrm{cos}\frac{\mathrm{\Theta }}{2}+G\left(t\right),\end{array}$$
(7)
where the following notations have been introduced: $`\mathrm{\Omega }=2\left(\beta _1-\beta _0\right)+\lambda \hbar ^{-1}\left(J_{11}-J_{00}\right)`$, $`A=\lambda \hbar ^{-1}\left(4J_{01}-J_{00}-J_{11}\right)\mathrm{/}2`$, $`B=\lambda \hbar ^{-1}J_{01}`$, $`F\left(t\right)=4\hbar ^{-1}V_{01}\left(t\right)`$ and $`G\left(t\right)=2\hbar ^{-1}\left(V_{00}\left(t\right)-V_{11}\left(t\right)\right)`$. All these quantities have the same dimensionality, namely that of frequency. If the nonlinearity parameter vanishes, $`\lambda =0`$, then Eqs. (7) become equivalent to the well-known Bloch equations .
The nonlinear Bloch equations (7) can be considered as a pair of canonical equations for the conjugated variables $`\mathrm{\Delta },\mathrm{\Theta }`$ of the non-autonomous system with one degree of freedom with the Hamiltonian function $`H=H_0+H_1\left(t\right)`$, where the unperturbed Hamiltonian is
$$H_0=\mathrm{\Omega }\mathrm{\Delta }+A\mathrm{\Delta }^2-B\left(1-\mathrm{\Delta }^2\right)\mathrm{cos}\mathrm{\Theta },$$
(8)
and the perturbation is
$$H_1\left(t\right)=G\left(t\right)\mathrm{\Delta }-F\left(t\right)\sqrt{1-\mathrm{\Delta }^2}\mathrm{cos}\frac{\mathrm{\Theta }}{2}.$$
(9)
In what follows we refer to the value of the function $`H_0`$ as the energy of the system. In the absence of the perturbation the system (7) has two trivial stationary solutions, $`\mathrm{\Delta }=\pm 1`$ and $`\mathrm{\Theta }`$ arbitrary, that correspond to the symmetric eigenstates $`\mathrm{\Phi }_0`$ and $`\mathrm{\Phi }_1`$ respectively, and two non-trivial fixed points $`\mathrm{\Delta }_0=-\mathrm{\Omega }\mathrm{/}2\left(A+B\right)`$, $`\mathrm{\Theta }=0`$ or $`\mathrm{\Theta }=2\pi `$, that correspond to the pair of self-trapped states that below will be called the stationary states. These states are separated from the bulk of the phase space by a separatrix (see Fig. 1).
For the future numerical calculations we need to specify the parameters of the unperturbed Hamiltonian (8). We have chosen the following set of values: $`\mathrm{\Omega }=-5.388`$, $`A=1.902`$, and $`B=2.022`$. With this choice the stationary states, which are located at $`\mathrm{\Delta }_0=0.686`$, correspond to the minimal energy of the system $`E_{-}=-3.871`$, the separatrix coincides with the isoenergetic line $`E=E_s=-3.486`$, and the maximal energy $`E_+=7.290`$ corresponds to the line $`\mathrm{\Delta }=-1`$.
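The quoted values are easy to verify directly from Eq. (8). The following minimal Python sketch is our own illustration and not part of the original calculation; it assumes the sign conventions used above, i.e. $`\mathrm{\Omega }=-5.388`$ and $`\mathrm{\Delta }_0=-\mathrm{\Omega }/2(A+B)`$.

```python
import numpy as np

# Parameters of the unperturbed Hamiltonian (8)
Omega, A, B = -5.388, 1.902, 2.022

def H0(Delta, Theta):
    """Unperturbed energy, Eq. (8)."""
    return Omega * Delta + A * Delta**2 - B * (1.0 - Delta**2) * np.cos(Theta)

Delta0 = -Omega / (2.0 * (A + B))          # self-trapped fixed points, Theta = 0 or 2*pi
print("Delta0 =", round(Delta0, 3))        # ~ 0.686
print("E-  =", round(H0(Delta0, 0.0), 3))  # minimal energy, ~ -3.871
print("E_s =", round(H0(1.0, 0.0), 3))     # separatrix energy (line Delta = 1), ~ -3.486
print("E+  =", round(H0(-1.0, 0.0), 3))    # maximal energy (line Delta = -1), ~ 7.290
```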
## III The harmonic perturbation
We shall assume that the even (diagonal) perturbation is absent, $`G(t)=0`$, and the odd (non-diagonal) has the form
$$F\left(t\right)=F\mathrm{sin}\left(\omega t+\varphi \right).$$
(10)
The numerical experiments show that for given values of the frequency $`\omega `$ and initial phase $`\varphi `$ of the perturbation there is a threshold value $`F_c(\omega ,\varphi )`$ of its amplitude such that for $`F<F_c`$ the phase trajectory of the system remains indefinitely within one loop of the separatrix, whereas for $`F>F_c`$ the phase trajectory crosses the separatrix, does so repeatedly, and may come close to the opposite stable point. The dependence of $`F_c(\omega ,\varphi )`$ on the initial phase is weak: the relative variations of the threshold due to the change of the initial phase are of the order of a few percent. Hence for the time being we ignore this dependence and shall speak only about the dependence $`F_c\left(\omega \right)`$ (see Fig. 2).
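A numerical experiment of this kind can be set up compactly. The sketch below is our own illustration (the integration tolerances, the bisection bracket and the choice of canonical sign convention are arbitrary): it drives the system described by (8)-(9) with $`G(t)=0`$ and $`F(t)=F\mathrm{sin}(\omega t+\varphi )`$, starts at a self-trapped fixed point, and bisects on $`F`$ for the smallest amplitude at which the unperturbed energy $`H_0`$ exceeds the separatrix value $`E_s`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega, A, B = -5.388, 1.902, 2.022
Delta0 = -Omega / (2.0 * (A + B))          # ~0.686
E_s = Omega + A                            # separatrix energy, ~ -3.486

def rhs(t, y, F, w, phi):
    """Canonical flow of H = H0 + H1 with G(t)=0, F(t) = F*sin(w*t + phi).
    We take dDelta/dt = dH/dTheta, dTheta/dt = -dH/dDelta (a sign convention;
    the opposite choice only reverses the sense of motion along the orbits)."""
    D, Th = y
    Ft = F * np.sin(w * t + phi)
    s = np.sqrt(max(1.0 - D * D, 1e-12))
    dH_dTh = B * (1.0 - D * D) * np.sin(Th) + 0.5 * Ft * s * np.sin(Th / 2)
    dH_dD = Omega + 2 * A * D + 2 * B * D * np.cos(Th) + Ft * D / s * np.cos(Th / 2)
    return [dH_dTh, -dH_dD]

def crosses_separatrix(F, w, phi=0.0, n_periods=200):
    """True if the trajectory started at the self-trapped point leaves its loop."""
    T = 2 * np.pi / w
    sol = solve_ivp(rhs, (0, n_periods * T), [Delta0, 0.0],
                    args=(F, w, phi), max_step=T / 50, rtol=1e-8)
    H0 = Omega * sol.y[0] + A * sol.y[0]**2 - B * (1 - sol.y[0]**2) * np.cos(sol.y[1])
    return np.any(H0 > E_s)

def threshold(w, Fmax=1.0, tol=1e-3):
    """Bisection for the smallest driving amplitude that crosses the separatrix."""
    lo, hi = 0.0, Fmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if crosses_separatrix(mid, w) else (mid, hi)
    return 0.5 * (lo + hi)

print(threshold(w=2.887))   # drive near the small-oscillation frequency (cf. Fig. 4 caption)
```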
The abrupt change of the character of the motion at a certain threshold value of the perturbation magnitude strongly indicates the onset of global stochasticity that comes from the overlap of resonances and the destruction of the noble tori . This is indeed the case: by taking some phase point on the separatrix as the initial condition, one can see that at the same threshold values $`F_c\left(\omega \right)`$ the stochastic layer around the separatrix explodes and covers the vicinity of the stable states. However, even for a rather small excess of $`F`$ over the threshold the crossing of the separatrix occurs quickly, after about ten periods of the field. On these time scales the chaotic nature of the system’s dynamics remains concealed: the motion seems regular and rather simple. Therefore we can try to explain the behaviour of the dependence $`F_c\left(\omega \right)`$ within the framework of regular dynamics.
Specific features of the perturbing terms in Eqs. (7) create technical complications that are irrelevant to the nature of the phenomenon. The main qualitative features of the separatrix crossing under the influence of the harmonic field could be explained with a toy model - the one-dimensional Duffing oscillator with the equation of motion
$$\ddot{x}+x-x^3=F\mathrm{sin}\omega t$$
(11)
and the initial conditions $`x\left(0\right)=0,\dot{x}\left(0\right)=0`$. This model also has a stable point, surrounded by a separatrix.
We assume the frequency detuning $`\delta =\omega -1`$ to be small, $`\left|\delta \right|\ll 1`$. If the perturbation $`F`$ is weak, then the non-linearity of the oscillator can be neglected, at least in the lowest approximation. Then the solution of Eq. (11) has the approximate form
$$x\left(t\right)\approx -\frac{F}{\delta }\mathrm{sin}\frac{\delta }{2}t\mathrm{cos}\left[\omega t-\frac{\delta }{2}t\right].$$
(12)
Assuming that the oscillator can be linearized in the whole range $`\left|x\right|\le 1`$, from this law of motion we find a crude estimate for the threshold of the separatrix crossing, namely $`F_c=\left|\delta \right|`$.
Now we improve this estimate by taking into account the nonlinearity. Let’s represent the motion of the oscillator in the form $`x\left(t\right)=A\mathrm{cos}\left(\omega t+\phi \right)`$, where $`A`$ and $`\phi `$ are slowly varying functions of time. The law (12) corresponds to the equations of motion for the slow amplitude and phase,
$$\dot{A}=-\frac{F}{2}\mathrm{cos}\phi ,\dot{\phi }=-\frac{\delta }{2},$$
(13)
with the initial conditions $`A\left(0\right)=0`$, $`\phi \left(0\right)=\pi `$. Let’s now replace the eigenfrequency of the oscillator $`\mathrm{\Omega }_0=1`$ in the RHS of the second of Eqs. (13) by the eigenfrequency of the nonlinear oscillator $`\mathrm{\Omega }\left(A\right)`$ that depends on the amplitude. For the model (11) at small $`A`$ we have $`\mathrm{\Omega }\left(A\right)=1-3A^2\mathrm{/}16`$. Consequently the evolution of the system can be described by the system of equations
$$\dot{A}=-\frac{F}{2}\mathrm{cos}\phi ,\dot{\phi }=-\frac{\delta }{2}-\frac{3}{32}A^2,$$
(14)
with the same initial conditions. The threshold of the separatrix crossing can be found from the condition that the oscillations reach the saddle points: $`\mathrm{max}A(t)=1`$. From the second of Eqs. (14) it is seen that if $`\delta >0`$, then the phase shift decreases monotonically, thus decreasing the rate of the amplitude growth. If $`\delta <-3\mathrm{/}16=-0.188`$, then the phase shift increases monotonically while the amplitude stays below its critical value, $`A<1`$; and again the rate of the amplitude growth decreases with time. But in the band $`-3\mathrm{/}16=-0.188<\delta <0`$ the phase shift at first grows, then reaches a maximum and starts to decrease, passing the zero value at some later time $`t_0`$. Consequently, there are two moments at which the amplitude growth rate is maximal, $`t=0`$ and $`t=t_0`$. Thus one may expect that the dependence $`F_c\left(\omega \right)`$ has a minimum somewhere in the range $`1-3\mathrm{/}16=0.812<\omega <1`$. The numerical solution of Eqs. (14) shows that this is true: the minimum of $`F_c\left(\omega \right)`$ is reached at $`\omega =0.87`$, about the middle of this band (see Fig. 3).
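This band structure is straightforward to reproduce; the sketch below (our own illustration of Eqs. (14); the grid of frequencies and the stopping criteria are arbitrary choices) integrates the slow equations and bisects on $`F`$ for the amplitude at which $`\mathrm{max}A(t)`$ first reaches 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def slow_flow(t, y, F, delta):
    """Slow amplitude-phase equations (14) of the driven Duffing oscillator."""
    Aamp, phi = y
    return [-0.5 * F * np.cos(phi), -0.5 * delta - (3.0 / 32.0) * Aamp**2]

def max_amplitude(F, delta, t_max=2000.0):
    sol = solve_ivp(slow_flow, (0.0, t_max), [0.0, np.pi],
                    args=(F, delta), max_step=1.0, rtol=1e-8)
    return sol.y[0].max()

def F_threshold(delta, tol=1e-4):
    """Smallest F for which the amplitude reaches the saddle value A = 1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if max_amplitude(mid, delta) >= 1.0 else (mid, hi)
    return 0.5 * (lo + hi)

for omega in (0.80, 0.85, 0.87, 0.90, 0.95, 1.00):
    print(omega, F_threshold(omega - 1.0))   # the minimum should appear near omega ~ 0.87
```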
To find the condition for the separatrix crossing, the equations (14) should be solved on the finite interval of time during which the phase shift reaches the value $`\pm \pi \mathrm{/}2`$. This can be done in different ways, which may produce analytical estimates for the threshold values. We restrict ourselves to a single example. At exact resonance ($`\delta =0`$), in the zeroth approximation the amplitude grows linearly in time, $`A_0=Ft\mathrm{/}2`$. Substituting this expression into the RHS of the second of Eqs. (14), we obtain in the first approximation
$$\phi _1(t)=\pi -\frac{1}{128}F^2t^3.$$
(15)
The time $`t_m`$ when the amplitude $`A(t)`$ reaches the maximum could be found from the condition $`\phi _1\left(t_m\right)=\pi \mathrm{/}2`$. Hence from the first of Eqs. (14) in the first approximation we have the threshold value of the perturbation in the exact resonance:
$$F_c(1)=\frac{27}{16}\left[\underset{0}{\overset{\pi \mathrm{/}2}{\int }}\theta ^{-2\mathrm{/}3}\mathrm{cos}\theta d\theta \right]^{-3}=0.0666$$
(16)
It agrees with the result of the numerical solution of the system (14) to an accuracy of about 6%, but differs from the value obtained in the numerical experiment by a factor of about 1.5.
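The integral in Eq. (16) is elementary to evaluate numerically; the following one-line check (our own illustration, using the substitution $`\theta =u^3`$ to remove the integrable endpoint singularity) reproduces the quoted value.

```python
import numpy as np
from scipy.integrate import quad

# I = integral_0^{pi/2} theta^(-2/3) cos(theta) dtheta, computed via theta = u^3
I = 3.0 * quad(lambda u: np.cos(u**3), 0.0, (np.pi / 2) ** (1.0 / 3.0))[0]
print((27.0 / 16.0) * I ** (-3))   # ~0.0666, cf. Eq. (16)
```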
The studied model (11) shares with the system (7) the ”soft” character of the nonlinearity of oscillations around the stable points: the eigenfrequency of oscillations decreases as their amplitude grows. This common feature is responsible for the similar behaviour of $`F_c(\omega )`$ in the two models (compare Figs. 2 and 3) - namely, the presence of a non-zero minimum at a frequency somewhat lower than that of the small oscillations, $`\mathrm{\Omega }_0.`$
Now we return to the case of the chaotic motion of the system above the threshold. For the system with the Hamiltonian $`H=H_0(\mathrm{\Delta },\mathrm{\Theta })+V(\mathrm{\Delta },\mathrm{\Theta })\mathrm{sin}\omega t`$ with a small perturbation $`V`$ the energy half-width of the stochastic layer is given by the Melnikov - Arnold integral
$$\mathrm{\Delta }E=\underset{-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\int }}\left(\frac{\partial V}{\partial \mathrm{\Delta }}\dot{\mathrm{\Delta }}+\frac{\partial V}{\partial \mathrm{\Theta }}\dot{\mathrm{\Theta }}\right)\mathrm{sin}\omega tdt,$$
(17)
where $`\mathrm{\Delta }\left(t\right)`$ and $`\mathrm{\Theta }\left(t\right)`$ are taken for the unperturbed motion on the separatrix . Thus we can expect that the motion in the phase space will persist inside the domain limited by the isoenergetic line $`E=E_s+\mathrm{\Delta }E`$. Since for the Hamiltonian systems the phase volume is conserved, we may expect the invariant density in the phase space to be uniform. This leads to the invariant distribution of the energy values $`w\left(E\right)`$ of the form
$$\begin{array}{c}w\left(E\right)=\eta \frac{2}{\mathrm{\Omega }\left(E\right)},(E_{}<E<E_s),\\ w\left(E\right)=\eta \frac{1}{\mathrm{\Omega }\left(E\right)},(E_s<E<E_s+\mathrm{\Delta }E),\end{array}$$
(18)
where $`\eta `$ is the normalization constant, and the factor ”2” in the first line accounts for the double degeneracy of the energy states. The comparison of this distribution with the one obtained in the numerical experiments is shown in Fig. 4. The general agreement is clearly present, in spite of rather large value of the perturbation magnitude. The discrepancy between the distributions for the energy values around and above $`E_s+\mathrm{\Delta }E`$ is due to the borderline resonances of the stochastic layer and could be anticipated.
If we define the vicinity of the stable state by the condition $`E<E_{}`$, then the average fraction of time spent in this domain is
$$\mu \left(E_{}\right)=\eta \underset{E_{}}{\overset{E_{}}{}}\frac{dE}{\mathrm{\Omega }\left(E\right)}\eta \frac{(E_{}E_{})}{\mathrm{\Omega }_0}$$
(19)
and the average transition time from vicinity of one of the self-trapped states to vicinity of the other is about
$$T\frac{\tau }{\mu \left(E_{}\right)}$$
(20)
where $`\tau `$ is the energy relaxation time. For over-threshold perturbation amplitudes the latter is of the order of the period of the harmonic field.
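Both the invariant distribution (18) and the estimates (19)-(20) require the unperturbed oscillation frequency $`\mathrm{\Omega }(E)`$. A minimal way to obtain it numerically is sketched below; this is our own illustration (the FFT-based period detection and the grid sizes are arbitrary choices), restricted to libration orbits inside one loop of the separatrix.

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega_p, A, B = -5.388, 1.902, 2.022      # parameters of H0, Eq. (8)
Delta0 = -Omega_p / (2.0 * (A + B))
E_s = Omega_p + A                         # separatrix energy

def unperturbed(t, y):
    """Flow of H0 alone (F = G = 0); same sign convention as before."""
    D, Th = y
    return [B * (1.0 - D * D) * np.sin(Th),
            -(Omega_p + 2 * A * D + 2 * B * D * np.cos(Th))]

def frequency(E, t_max=200.0, n=8192):
    """Oscillation frequency Omega(E) of a libration orbit with E- < E < E_s.
    The orbit is started on the line Theta = 0, where (A+B)D^2 + Omega_p*D - B = E."""
    roots = np.roots([A + B, Omega_p, -B - E])
    D_start = roots[np.argmin(abs(roots - Delta0))].real
    t = np.linspace(0.0, t_max, n)
    sol = solve_ivp(unperturbed, (0.0, t_max), [D_start, 0.0], t_eval=t, rtol=1e-9)
    spec = np.abs(np.fft.rfft(sol.y[0] - sol.y[0].mean()))
    freqs = np.fft.rfftfreq(n, d=t[1] - t[0])
    return 2 * np.pi * freqs[np.argmax(spec)]   # dominant (fundamental) frequency

for E in np.linspace(-3.85, E_s - 0.05, 5):
    print(f"E = {E:6.3f}   Omega(E) = {frequency(E):.3f}")
```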
## IV The broadband perturbation
The threshold character of the separatrix crossing in the harmonic field stems from the termination of the resonant energy absorption before the value $`E_s`$ is achieved. If the perturbation is a broadband noise, then the energy absorption can go on indefinitely at arbitrarily small field amplitude. The problem of the energy absorption from external noise has been studied extensively as part of the theory of escape over potential barriers in dissipative systems in contact with a heat bath. The averaged evolution of the system can be described as a process of diffusion on the energy axis. The equation that governs this process can be derived from the Fokker - Planck equation . It is inconvenient, however, to adjust the known results to our case since our Hamiltonian (7,8) has a rather unusual structure. Instead we derive the equation for the energy diffusion by treating the system formally as a quantum one, starting from quantum kinetic equations and going to the classical limit $`\hbar \to 0`$ in the result (cf. ).
Let’s consider a quantum system with an unperturbed Hamiltonian $`\widehat{H}_0`$ with one degree of freedom and a discrete energy spectrum under the perturbation $`\widehat{V}\xi \left(t\right)`$, where $`\widehat{V}`$ depends on the dynamical variables of the system and $`\xi \left(t\right)`$ is a stationary weak broadband noise specified by its spectral density $`S\left(\omega \right)`$. The state of the system can be described by the probabilities $`\rho _n`$ of finding it in the quantum state $`|n\rangle `$. The evolution of these probabilities obeys the system of master equations
$$\frac{d\rho _n}{dt}=-\rho _n\underset{k=-n}{\overset{\mathrm{\infty }}{\sum }}\dot{W}_{n,n+k}+\underset{k=-n}{\overset{\mathrm{\infty }}{\sum }}\rho _{n+k}\dot{W}_{n+k,n}.$$
(21)
The rates of transitions $`\dot{W}_{n+k,n}`$ are determined by the perturbation theory formula
$$\dot{W}_{n,n+k}=\frac{2\pi }{\hbar ^2}\left|V_{n,n+k}\right|^2S\left(\omega _{n,n+k}\right),$$
(22)
where $`V_{n,n+k}`$ are the matrix elements of the perturbation and $`S\left(\omega _{n,n+k}\right)`$ is the spectral density of the noise at the frequency of the transition. Let’s take the probabilities to be functions not of the level number $`n`$, but of its energy: $`\rho _n\equiv \rho \left(E_n\right)`$. In the quasiclassical case the energy spectrum of the system can be related to the frequency of its classical motion at a given energy, $`\mathrm{\Omega }\left(E\right)`$:
$$E_{n\pm k}=E_n\pm k\hbar \mathrm{\Omega }\left(E_n\pm k\frac{\hbar \mathrm{\Omega }}{2}\right),$$
(23)
and the matrix elements of the perturbation $`V_{n,n+k}`$ could be replaced by the Fourier components of the unperturbed motion of the dynamical variable that corresponds to the operator $`\widehat{V}`$: if
$$V\left(t\right)=\underset{k=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}V_ke^{ik\mathrm{\Omega }t},$$
(24)
then
$$V_{n,n+k}\approx V_k\left(E_n+k\frac{\hbar \mathrm{\Omega }}{2}\right).$$
(25)
We assume $`\rho \left(E\right)`$ to be a smooth function, and expand its value to terms of the second order in $`\hbar `$ and the values of $`E_{n+k}`$ and $`V_{n,n+k}`$ to the first order in $`\hbar `$. After substituting these expansions into Eq. (21) and going to the limit $`\hbar \to 0`$ we obtain a purely classical equation. At this point we restrict our consideration to the case of white noise with the constant spectral density $`S\left(\omega \right)=1`$. For this case we have
$$\frac{\partial \rho }{\partial t}=\mathrm{\Omega }\frac{\partial }{\partial E}\left[D\left(E\right)\frac{\partial \rho }{\partial E}\right],$$
(26)
where the energy diffusion coefficient $`D\left(E\right)`$ is given by the expression
$$D\left(E\right)=2\pi \mathrm{\Omega }\left(E\right)\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}k^2V_k^2\left(E\right)=\frac{1}{2}\underset{0}{\overset{2\pi \mathrm{/}\mathrm{\Omega }}{\int }}\left(\frac{dV}{dt}\right)^2𝑑t.$$
(27)
The energy dependence of the diffusion coefficient $`D`$ for the unperturbed system (7) with the perturbation $`V(\mathrm{\Delta },\mathrm{\Theta })=\sqrt{1-\mathrm{\Delta }^2}\mathrm{cos}(\mathrm{\Theta }/2)`$ is shown in Fig. 5. We note a discontinuity of $`D(E)`$ at the separatrix value of the energy: $`D(E_s+0)=2D(E_s-0)`$. This jump is not a direct consequence of the presence of the separatrix, but reflects both the global structure of the phase space and the behaviour of the perturbation $`V`$ in the neighbourhood of the separatrix.
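The second form of Eq. (27) is convenient for a numerical evaluation, since it only requires the time derivative of the perturbation along one unperturbed orbit. A sketch of such a calculation (our own illustration, reusing the unperturbed flow defined in the previous sketch; the starting point and the period of the orbit at the desired energy are assumed to be supplied, e.g. from the $`\mathrm{\Omega }(E)`$ computation above) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega_p, A, B = -5.388, 1.902, 2.022

def unperturbed(t, y):
    D, Th = y
    return [B * (1.0 - D * D) * np.sin(Th),
            -(Omega_p + 2 * A * D + 2 * B * D * np.cos(Th))]

def diffusion_coefficient(D_start, Th_start, T_period, n=4000):
    """D(E) = (1/2) * integral over one period of (dV/dt)^2, Eq. (27),
    with the perturbation V(Delta, Theta) = sqrt(1 - Delta^2) * cos(Theta/2)."""
    t = np.linspace(0.0, T_period, n)
    sol = solve_ivp(unperturbed, (0.0, T_period), [D_start, Th_start],
                    t_eval=t, rtol=1e-9)
    D_, Th = sol.y
    V = np.sqrt(np.clip(1.0 - D_**2, 0.0, None)) * np.cos(Th / 2)
    dVdt = np.gradient(V, t)
    return 0.5 * np.trapz(dVdt**2, t)
```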
Since the classical distribution in energy $`w\left(E\right)`$ is connected to the probability density $`\rho \left(E\right)`$ by the relation $`w\left(E\right)=\rho \left(E\right)\left[\hbar \mathrm{\Omega }\left(E\right)\right]^{-1}`$, we have the equation for $`w\left(E\right)`$ in the form
$$\frac{\partial w}{\partial t}=\frac{\partial }{\partial E}\left(D\left(E\right)\frac{\partial }{\partial E}\left(w\mathrm{\Omega }\right)\right).$$
(28)
The stationary solution of Eq. (28) has the form given by Eq. (18), but the second line now holds in the whole range $`E_s<E<E_+`$. If the system initially was in one of the stable states, then in the course of the energy diffusion it has an opportunity to migrate to the vicinity of the opposite stable state. The characteristic time of the average transfer is determined by the relaxation time of the distribution to its stationary value; from Eq. (26) it can be estimated as
$$T\sim \frac{\left(E_+-E_{-}\right)^2}{\left\langle \mathrm{\Omega }D\right\rangle },$$
(29)
where the angular brackets denote averaging over the energy.
## V Conclusions
The main question addressed by this paper is: ”If the system, described by the one-dimensional non-linear Schrödinger equation with a symmetric double-well potential, is initially in one of the lowest asymmetric (self-trapped) states, can a time-dependent perturbation transfer the system completely to the opposite asymmetric mode?” The answer turns out to be ”Yes, but only almost completely and only by chance”.
For the harmonic perturbation with an amplitude $`F`$ that exceeds the threshold $`F_c(\omega )`$, the system’s motion is captured in the stochastic layer that embraces both domains of libration around the stable states and a strip around the separatrix. When moving in this domain, the system can come arbitrarily close to the opposite asymmetric state. However, the nature of this process is purely chaotic and, hence, unpredictable. There is no way to create for the nonlinear system a ”$`\pi `$ \- pulse” that will transfer the system unambiguously from one of the stable states to the other. Lastly we note that the threshold magnitude of the perturbation is only numerically small in comparison with the depth of the self-trapping wells: to make the transfer possible, the system must be perturbed strongly.
For the system under the influence of the white noise (or, generally speaking, any sufficiently broadband noise) the process of the energy diffusion eventually spreads the probability density over all phase space of the system $`H_0`$. In this case the system can occasionally come close to the opposite stable state. We note, however, that the probability of finding the system within one of the self-trapping wells is rather small; with our standard set of parameters it is only about few percent.
The main approximation of our calculations consisted in the truncation of the expansion of the wave function to just two modes (see Eq. (4)). It was justified by the high quality of this approximation in representing the unperturbed self-trapped states . Whether this accuracy will hold for a strongly perturbed system is quite a different question. For a harmonic perturbation of moderate amplitude the system stays locked within a narrow energy domain (see Fig. 4), and the addition of contributions of modes $`\mathrm{\Phi }_i`$ with $`i\ge 2`$, which would lead to an extension of the energy space of the system, will have little influence.
The situation may be different for the broadband noise, where the system can reach any point in the phase space. However, the main contribution to the energy diffusion coefficient comes from the frequencies that are lower than $`\mathrm{\Omega }_0`$: in particular, at the separatrix they contribute about 0.66 of the total value. Thus if the spectrum of the noise has a high-frequency cutoff just above $`\mathrm{\Omega }_0`$, then the energy diffusion ceases at the energy $`E_h>E_s`$ at which $`\mathrm{\Omega }(E_h)=\mathrm{\Omega }_0`$ (for our set of parameters $`E_h=0.356`$), and the system stays locked within a restricted energy domain. Then, in parallel with the case of the harmonic field, we can conclude that the extension of the energy space by the addition of higher modes is unimportant.
## Acknowledgment
The authors are grateful to Prof. R. Sammut and Dr. A.V. Buryak for the introduction in the field and to Prof. Yu.M. Romanovsky for specifying the studied problem. The permanent informational support of this work by Dr. E.A. Ostrovskaya and Dr. D.G. Luchinsky and discussions with them were invaluable.
This work was financially supported by the Education and Science Center ”Fundamental Optics and Spectroscopy” (in the frame of the program ”Integration”) and by the Russian Federal grant # 96-15-96476 for the support of outstanding scientific schools.
## Figures captions
FIG.1. Phase trajectories of the system (7) in the absence of perturbations on the plane $`\mathrm{\Delta }\mathrm{\Theta }`$ for different values of energy: A - $`E=-3.7`$, B - $`E=E_s=-3.487`$, C - $`E=0`$, D - $`E=5`$.
FIG.2. Dependence of threshold amplitudes $`F_c`$ of the non-diagonal harmonic perturbation (10) on the relative frequency $`\omega /\mathrm{\Omega }_0`$ for the nonlinear Bloch equations (7) with the initial phase $`\varphi =0`$.
FIG.3. Dependence of threshold amplitudes $`F_c`$ of the perturbation on the frequency $`\omega `$ for the Duffing oscillator (11). The dashed line - the zeroth (linear) approximation, $``$ \- estimates found from Eqs. (14) (solid line is an eyeguide); $``$ \- results of the numerical experiment.
FIG.4. Distribution $`w`$ of system’s values of energy under the influence of the over-threshold harmonic perturbation with $`\omega =\mathrm{\Omega }_0=2.887`$ and $`F=2F_c=0.2`$. Dashed line -theoretical distribution Eq. (18), solid line - results of the numerical experiment.
FIG.5. Dependence of the coefficient of the energy diffusion $`D`$ under the influence of the white noise of the unit spectral density on the energy $`E`$. |
no-problem/9912/astro-ph9912023.html | ar5iv | text | # DEPENDENCE OF STAR FORMATION RATE ON OVERDENSITY
## Abstract
We use a large-scale $`\mathrm{\Lambda }`$CDM hydrodynamical simulation to assess the dependence of the cosmic Star Formation Rate (SFR) on overdensity of luminosity.
## 1. Introduction
The star formation rate (SFR) is the key measure that connects the structure formation in the universe and the actual observable light that is emitted by the galaxies. The plot of SFR as a function of redshift, known as the ‘Madau Plot’, depicts the evolution of the formation rate of stars in galaxies.
Using a large-scale hydrodynamical simulation, it was shown by Blanton et al. (1999) and Nagamine et al. (1999) that the gas which falls into gravitational potential-wells gets shock-heated, and the higher overdensity regions become no longer preferred sites for galaxy formation as the temperature increases towards the present.
Here we present the effects of this phenomena from different directions by analysing the SFR as a function of overdensity of luminosity. Details of the simulation can be found in the above two papers.
## 2. Star Formation Rate vs. Overdensity
The SFR is a direct output of our hydrodynamical simulation. Figure 1 shows the SFR divided into the quartiles of the light-overdensity distribution in V-band at z=0. The V-band luminosity of each stellar particle in the simulation was obtained using the latest isochrone synthesis model of GISSEL99 (Bruzual & Charlot 1999) with the metallicity of each particle taken into account.
It is clearly seen that the stellar particles in higher overdensity regions form earlier in time than those in lower overdensity regions. This is due to the effect which we described in the Introduction; the gas in high overdensity regions is shock-heated as it falls into the potential-well, thus further star formation is prohibited by the high temperature of the gas. Hence the steeper turn-off of the SFR between z=1 and 0. In lower overdensity regions, the gas is less heated than the higher overdensity regions and the peak of the SFR is at lower redshift.
Observationally, this is well known as the ‘morphology-density relation’ (eg., Dressler 1980). In clusters of galaxies, the old non-star-forming early-type galaxies dominate, whereas in the field, star forming late-type galaxies are seen more often. Also, the recent observations of galaxies in voids by Grogin & Geller (1999, 2000) find that the galaxies in the lowest density environments show stronger star formation than those in higher density region, which is consistent with what we find in Figure 1.
## 3. References
1. Blanton, M. et.al., 1999, submitted to ApJ, astro-ph/9903165.
2. Bruzual, G. A. and Charlot, S., 1999, in preparation.
3. Dressler, A., 1980, ApJ, 236, 351.
4. Grogin, N. A. and Geller, M. J., 1999, AJ in press, astro-ph/9910073.
5. Grogin, N. A. and Geller, M. J., 2000, AJ in press, astro-ph/9910096.
6. Nagamine, K., Cen, R., J. P. Ostriker, 1999, submitted to ApJ, astro-ph/9902372. |
no-problem/9912/nucl-th9912045.html | ar5iv | text | # Temperature dependent BCS equations with continuum coupling
## I Introduction
The effect of temperature upon pairing correlations in nuclei was studied long ago , after the theory of superfluidity was introduced to describe the appearance of a gap in the low-energy spectrum of finite nuclei . The extension of the theory of superfluidity to the finite temperature case was prompted by the search of evidence about the nuclear phase transition from a superfluid to a normal state, in analogy with the one found in condensed matter systems . The vanishing of pairing correlations with temperature is related to the fact that the excitation energy breaks pairs of particles which block the single particle levels close to the Fermi surface, where the pairing correlations are initiated.
The change of the pairing properties with the temperature was also studied in competition with the angular momentum. In this case a new effect was observed, which is due to the fact that at zero temperature the single-particle states close to the Fermi level are already partially blocked by the un-paired nucleons which form the finite angular momentum of the nucleus. Thus, when the temperature is switched on the first effect is a depletion of the partially blocked single-particle states which, in turn, induces an enhancement of the pairing correlations. This behaviour is different for the case of thermal excitations of nuclei with zero angular momentum, where the pairing correlations decrease monotonically with the increase of the temperature.
The temperature effects on pairing correlations were studied both in the BCS and HFB approximations, usually with a constant single-particle level density approximation. The available theoretical evidence shows that for medium and heavy-mass nuclei the pairing correlations disappear for a critical temperature of the order of 0.5-1.0 MeV. The excitation energies corresponding to this critical temperature are quite small for nuclei close to the beta stability line and therefore in all such calculations the coupling with the continuum spectrum was neglected. The situation is different for nuclei which are far from the stability line. In this case the Fermi level lies close to the continuum threshold and the coupling with the continuum becomes important. We shall focus here on the study of this coupling since, to our knowledge, the effect of the continuum coupling on thermodynamical properties of superfluid nuclei has not been investigated so far.
The main problem in dealing with the continuum coupling in statistical calculations of excited nuclei is the fact that the particles moving in the continuum have a finite probability to be emitted from the nucleus. In other words, such processes are time dependent and they are difficult to accommodate in stationary models such as BCS or HFB. But to extend stationary many-body theories to time-dependent formalisms is not an easy undertaking. In fact it may not even be a well-defined task, since the initial conditions may induce chaotic solutions. These features were already recognized in the beginning of quantum mechanics. Thus, Gamow and Wigner tried to reconcile the outgoing character of the decaying process with the conveniences of stationarity by solving the Schrödinger equation with outgoing boundary conditions (for references see ). The corresponding solutions are related to the complex poles of the S-matrix, which define the so-called Gamow resonances. If the resonances are narrow the real parts of the complex poles of the S-matrix give the positions of the resonances while the corresponding imaginary parts give the decay widths. The narrow resonances are very important to describe nuclear correlations, especially in unbound or excited nuclei, because in these states the nucleons can move within the nuclear volume during a certain minimum time, so that they can interact with each other. One thus expects that a basis formed by bound states and narrow Gamow resonances would provide a convenient framework to describe decaying processes . However, the drawback of the calculations based on this representation is that one gets complex probabilities which are not always easy to interpret .
Another alternative to describe unstable nuclei is to use a basis consisting of scattering states, instead of Gamow states, in the vicinity of the resonant poles . In this case all quantities are real and one does not have the problem of interpreting complex probabilities. Within this representation the widths of the resonances are obtained by evaluating the derivative of the corresponding phase shifts. This also defines the continuum level density commonly used to estimate the contribution of the continuum to nuclear partition functions . The escape of the particles which move in resonant states is thus treated in these calculations as a stationary process, reflected by a constant particle density at large distances from the nucleus . The underlying picture is an excited nucleus in dynamical equilibrium with an external nucleonic gas, whose contribution should eventually be extracted. Recently a similar framework was used to include continuum coupling in BCS equations at zero temperature . In this paper we will extend this method to study temperature dependent BCS equations (TBCS).
## II Formalism
The standard procedure to derive the temperature dependent BCS equations is to minimize the grand potential corresponding to a pairing Hamiltonian. For a bound single particle spectrum with energies $`ϵ_j`$ and a constant pairing interaction of strength G, the gap and particle number equations at a finite temperature $`T`$ are given by ,
$$\frac{2}{G}=\underset{j}{\sum }\frac{1-2f_j}{2E_j},$$
(1)
$$N=\underset{j}{\sum }\frac{1}{2}[1-\frac{ϵ_j-\lambda }{E_j}(1-2f_j)]$$
(2)
where $`E_j=[(ϵ_j-\lambda )^2+\mathrm{\Delta }^2]^{1/2}`$ is the quasiparticle energy, $`f_j=(1+exp(\beta E_j))^{-1}`$ is the Fermi distribution function and $`\beta =1/kT`$.
In principle the single particle energies $`ϵ_j`$ depend also upon temperature because the average mean field is a function of the nuclear excitations. However, in self-consistent temperature dependent Hartree-Fock calculations one finds that for temperatures below $`T=1`$ MeV, which is the range explored in temperature dependent BCS calculations, the single-particle spectrum is virtually the same as the one at zero temperature. We will also assume here that the single particle energies are those at zero temperature.
The contribution of the continuum on thermodynamical properties of finite nuclei was studied mainly in connection with the problem of the liquid-gas phase transition as well as in temperature dependent shell corrections , but without including pairing correlations. In these calculations the effect of the continuum was introduced into the thermodynamical quantities through the level density. Thus the grand potential for a non-interacting system was taken as
$$\mathrm{\Omega }=-T\int (g_b(ϵ)+\stackrel{~}{g}(ϵ))ln(1+exp[-\beta (ϵ-\lambda )])dϵ$$
(3)
where $`g_b(ϵ)=_j\delta (ϵϵ_j)`$ is the level density of the bound spectrum and $`\stackrel{~}{g}(ϵ)`$ is the level density associated with the positive energy spectrum. In Ref. it is shown that the grand potential (3) describes a nucleus in dynamical equilibrium with a nucleonic gas. As discussed above, this is due to the fact that in a stationary treatment the nucleons scattered in the continuum are permanently emitted from the nucleus. To obtain the proper grand potential, i. e. the one corresponding to the nucleus itself, one should take away from Eq. (3) the contribution of the nucleonic gas. This can be done by subtracting from the grand potential (3) the grand potential of the free nucleonic gas , or by replacing in Eq. (3) the level density $`\stackrel{~}{g}(ϵ)`$ by the quantity
$$g(ϵ)=\stackrel{~}{g}-g_{free}=\frac{1}{\pi }\underset{j}{\sum }\frac{d\delta _j}{dϵ}$$
(4)
where $`g_{free}`$ is the level density in the absence of the mean field and $`\delta _j`$ is the phase shift. The quantity $`g(ϵ)`$ is the continuum level density . It takes into account the contribution of the resonant part of the continuum spectrum. The continuum coupling can thus be included by replacing in the grand potential (3) the density $`\stackrel{~}{g}`$ by the continuum level density $`g(ϵ)`$. This is a general recipe which can be applied to all quantities derived from the grand potential, as e. g. the energy and entropy of the system.
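In practice $`g(ϵ)`$ is obtained from the calculated phase shifts tabulated on an energy grid. A minimal sketch of this step (our own illustration; the input arrays are assumed to come from the scattering calculation) is given below, together with a toy check on a single Breit-Wigner resonance, for which the peak of $`g(ϵ)`$ should be $`2/(\pi \mathrm{\Gamma })`$.

```python
import numpy as np

def continuum_level_density(energies, phase_shifts):
    """Continuum level density g(eps) = (1/pi) * sum_j d(delta_j)/d(eps), Eq. (4).
    `phase_shifts` is a list of arrays delta_j(eps) (radians), one per resonant
    partial wave, all on the same grid `energies` (MeV). Returns g in MeV^-1."""
    g = np.zeros_like(energies)
    for delta_j in phase_shifts:
        g += np.gradient(delta_j, energies)
    return g / np.pi

# toy check: one resonance, delta(eps) = arctan[(Gamma/2)/(E_r - eps)]
eps = np.linspace(0.01, 5.0, 2000)
E_r, Gamma = 2.0, 0.3
delta = np.arctan2(Gamma / 2, E_r - eps)
g = continuum_level_density(eps, [delta])
print(g.max(), 2 / (np.pi * Gamma))   # the two numbers should nearly coincide
```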
The incorporation of the continuum in interacting systems can readily be performed following a similar prescription. Thus the contribution of the continuum to the grand potential of an excited superfluid nucleus can be expressed in terms of the continuum level density as in Eq. (3), with the difference that now instead of the single particle energies one should use the quasiparticle energies. The corresponding TBCS equations can be obtained directly from Eqs. (1) and (2) by replacing the level density of the bound states with the total level density, i. e. by $`g_b(ϵ)+g(ϵ)`$. Thus the TBCS equations with continuum coupling become,
$$\frac{2}{G}=\underset{j}{\sum }\frac{1-2f_j}{2E_j}+\int g(ϵ)\frac{1-2f(ϵ)}{2E(ϵ)}dϵ,$$
(5)
$$N=\underset{j}{\sum }\frac{1}{2}[1-\frac{ϵ_j-\lambda }{E_j}(1-2f_j)]+\int g(ϵ)\frac{1}{2}[1-\frac{ϵ-\lambda }{E(ϵ)}(1-2f(ϵ))]dϵ$$
(6)
where the second term gives the contribution of the continuum to the pairing correlations. In the limit T=0 one gets the same equations as in Refs . For temperatures higher than the critical temperature the gap vanishes and the particle number equation is similar to the one used in thermodynamical calculations of non-interacting systems .
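A schematic numerical implementation of Eqs. (5)-(6) solves for the pair $`(\mathrm{\Delta },\lambda )`$ at each temperature by a self-consistent iteration. The sketch below is our own illustration (the discretization of the continuum integrals by weights $`g(ϵ)dϵ`$, the damped fixed-point update and the relaxation of $`\lambda `$ are arbitrary numerical choices, not the procedure of the original calculation).

```python
import numpy as np

def solve_tbcs(e_bound, e_cont, g_cont, de, G, N, T, n_iter=1000):
    """Schematic solver of the TBCS equations (5)-(6) for (Delta, lambda).
    e_bound : bound single-particle energies (each level repeated with its degeneracy), MeV
    e_cont, g_cont : continuum energy grid and continuum level density g(eps), Eq. (4)
    de : grid spacing (MeV); G : pairing strength; N : particle number; T : temperature."""
    e_all = np.concatenate([e_bound, e_cont])
    w_all = np.concatenate([np.ones_like(e_bound), g_cont * de])  # sums -> weighted integrals
    Delta, lam = 1.0, np.median(e_bound)
    for _ in range(n_iter):
        E = np.sqrt((e_all - lam) ** 2 + Delta ** 2)              # quasiparticle energies
        f = 1.0 / (1.0 + np.exp(np.clip(E / T, None, 500.0)))     # thermal occupations
        S = 0.5 * G * np.sum(w_all * (1 - 2 * f) / (2 * E))       # S -> 1 at the solution of Eq. (5)
        Delta = max(0.5 * Delta * (1.0 + S), 1e-8)                # damped fixed-point update
        occ = 0.5 * (1 - (e_all - lam) / E * (1 - 2 * f))         # occupancies, cf. Eq. (9)
        lam += 0.05 * (N - np.sum(w_all * occ))                   # relax lambda toward correct N
    return Delta, lam
```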
The contribution of the continuum to the energy and to the entropy can be introduced in a similar fashion, as mentioned above. One gets,
$$E=\underset{j}{\sum }n_jϵ_j+\int g(ϵ)n(ϵ)ϵ𝑑ϵ-\frac{\mathrm{\Delta }^2}{G},$$
(7)
$$S=\underset{j}{\sum }\{ln(\stackrel{~}{f}_j)+\beta f_jE_j\}+\int g(ϵ)\{ln(\stackrel{~}{f}_\nu (ϵ))+\beta f_\nu (ϵ)E_\nu (ϵ)\}𝑑ϵ$$
(8)
where $`n_j`$ is the occupancy of the state of energy $`ϵ_j`$, given by
$$n_j=\frac{1}{2}[1-\frac{ϵ_j-\lambda }{E_j}(1-2f_j)]$$
(9)
and $`\stackrel{~}{f}_j=1+exp(-\beta E_j)`$. A similar notation is used for the corresponding quantities in the continuum.
In the limit of vanishing widths the continuum level density becomes a sum of Dirac delta functions and the resonances act as bound states (”quasibound states”). This is the case for, e. g., protons trapped in a high Coulomb barrier. If a resonance is not narrow then a pair scattered to that resonance have a large probability to escape from the system. Therefore, its contribution to the pairing correlation is small as compared with the corresponding contribution from a quasibound state with similar energy and angular momentum .
Although narrow resonances play a fundamental role in the enhancement of pair correlations (and in most other measurable physical processes as well) their proper inclusion in the applications may be difficult. The reason for this is that in the region of a narrow resonance the level density increases abruptly and the numerical evaluation of the integrals in the equations above may require extremely small mesh intervals. One can circumvent this problem by noticing that the resonance is narrow because there is a pole of the $`S`$ matrix which is very close to the real energy axis. Therefore, one can evaluate the integrals by changing the integration path, that is by choosing a contour $`C`$ in the complex energy plane which embodies the real axis around the narrow resonance, and by thereafter applying the Cauchy theorem . Eq. (7) thus becomes
$$E=\underset{i}{\sum }n_iϵ_i+\underset{\nu }{\sum }n(𝐄_\nu )𝐄_\nu +\underset{C}{\int }g(ϵ)n(ϵ)ϵ𝑑ϵ-\frac{\mathrm{\Delta }^2}{G}$$
(10)
where $`n(𝐄_\nu )`$ are the occupation probabilities calculated in the complex poles $`𝐄_\nu `$ enclosed by the path $`C`$. If one neglects the contribution of the integral over the contour $`C`$ in Eq. (10) then the energy $`E`$ would become complex. This is the case in representations based on Gamow resonances only . As already mentioned, within such representations one obtains complex physical quantities which have to be interpreted. For the energy such a task is rather easy, since already by looking at the temporal evolution of the wave function one realizes that the real part corresponds to the actual energy of the system while the imaginary part is related to the corresponding decay probability. This interpretation is valid if the resonance is narrow, i. e. if the ratio between the width and the energy of the resonance is small . But the interpretation of complex probabilities in general is not so straightforward . These problems do not appear here because in Eq. (10) the contribution of the integral over the contour $`C`$ is included and therefore the energy $`E`$ is real.
## III Numerical application
In order to illustrate how the continuum affects the properties of superfluid nuclei close to the drip line, in what follows we present the results given by the TBCS equations for the isotope $`{}_{}{}^{84}Ni`$.
We calculated the single particle spectrum by using the HF approximation and the Skyrme III interaction . The resonant energies are defined as the energies where the phase shift passes through $`\pi /2`$ with a positive slope . The width is extracted from the value of the energy where the derivative of the phase shift is half of its maximum value. The bound states outside the closed shell N=50 and the resonant states considered in the TBCS calculations are listed in Table 1. It can be seen that these states form the equivalent of the major shell N=50-82. One notices that the relative positions of the single-particle states differ substantially with the ones corresponding to beta stable nuclei. Thus, the states with low angular momenta are shifted down as compared with the ones with high angular momenta. This is a general feature related to the diffusivity of the mean field in nuclei close to the drip line , which is larger than the corresponding one in stable nuclei .
The TBCS equations (5) and (6) are solved starting with the HF spectrum calculated at zero temperature. In the absence of experimental information on heavy $`Ni`$ isotopes in the open shell $`N=5082`$ (the heaviest known $`Ni`$ isotope is the double magic nucleus <sup>78</sup>Ni) we use for the strength of the pairing force the standard value $`G=25/A`$ MeV, where $`A`$ is the mass number.
The variation of the gap with temperature is shown in Fig. 1. In the limit of $`T=0`$ the gap has the value $`\mathrm{\Delta }(0)=0.955`$ MeV and decreases to zero at the critical temperature $`T_c=0.524`$ MeV. This critical temperature is smaller by about $`3.8\%`$ than the one obtained from the relation $`T_c=0.57\mathrm{\Delta }(0)`$, which is the value predicted by a constant level density approximation .
In Fig. 1 it is also shown the gap one would get if the width of the resonances are neglected, i.e. if in Eqs. (5-6) one would take instead of the continuum level density a sum of Dirac delta functions. The gap at zero temperature is in this case bigger than in the previous case . As discussed above, the wider is a resonance the smaller is its contribution to the pairing correlation. That is, a pair scattered to a wide resonance spends less time in that state as compared to the time that the same pair would spend in a quasibound state and, therefore, the wider resonance contributes less to the pairing correlations. This has the important consequence that neglecting the widths of the resonances one increases the correlations of the system and therefore increases the critical temperature. This is what happens, e. g., with calculations that quantize the continuum by using an impenetrable box if the dimensions of the box are not extremely large (so that one gets a dense enough spectrum in the energy regions of resonant states). However, one has to mention that this is not a serious problem if the resonances are very narrow.
Neglecting the widths one gets for the critical temperature the value $`T_c=0.722`$ MeV, which is only $`1\%`$ larger than the value one would obtain in a constant level density approximation. This indicates that this approximation would work quite well for nuclei close to the proton drip line, where the width of the proton single-particle resonances may be so narrow that they can be neglected.
In Fig. 2 we show the dependence of the excitation energy upon temperature up to the point where the superfluid phase vanishes. It can be seen that if the effect of the width of resonant states is neglected, then the slope of the excitation energy becomes much smaller. This is also a manifestation of the fact that by neglecting the widths, the ground state becomes more correlated and therefore stiffer to thermal excitations. The same effect is observed for the entropy, as seen in Figure 3.
At the critical temperature the excitation energy has the value 2.185 MeV and the entropy is 5.708. For a constant level density $`g`$ the excitation energy at the critical temperature is given by $`E_c\approx aT_c^2+0.5g\mathrm{\Delta }(0)^2`$, where $`a=\frac{\pi ^2}{3}g`$ is the level density parameter, which in this case acquires the value $`a=3.958`$ MeV<sup>-1</sup>. We found that neglecting the widths the level density parameter increases to the value $`a=4.143`$ MeV<sup>-1</sup>. This implies that the difference in the excitation energies seen in Fig. 2 is essentially due to the mean occupancy of the resonant states (which is larger if the widths are neglected) and not due to the effective level density parameter.
In conclusion, in this paper we have extended the temperature dependent BCS equations by introducing the coupling with the single-particle continuum. The contribution of the continuum is given by the resonant states and their effect is taken into account through the continuum level density.
We found that the widths of the resonances significantly affect all physical quantities. In particular, the pairing correlations are diminished, and this significantly modifies the value of the critical temperature at which the superfluid phase disappears. Also the dependence upon temperature of the excitation energy and of the entropy is considerably affected by the widths of the resonances, i.e. by their lifetimes.
However, in the case of proton superfluidity the single-particle resonant states close to the continuum threshold may have a very small width and therefore they can be treated in the TBCS calculations as quasibound states, neglecting their widths altogether.
(O.C.) is partially supported by the grant (PICT0079) and by the CONICET, Argentina. (N.S.) is supported by the Wenner-Gren Foundation.
## Figure Captions
Figure 1: Dependence of the gap parameter $`\mathrm{\Delta }`$ upon the temperature $`T`$. The dashed line corresponds to the case when the resonant states are considered as quasibound states.
Figure 2: Excitation energy plotted as a function of temperature. The dashed line corresponds to the case when the resonant states are considered as quasibound states. The energy is plotted up to the critical temperature.
Figure 3: Entropy plotted as a function of temperature. The dashed line corresponds to the case when the resonant states are considered as quasibound states. The entropy is plotted up to the critical temperature. |
no-problem/9912/nucl-th9912010.html | ar5iv | text | # Unusual statistics of interference effects in neutron scattering from compound nuclei
## I Introduction
Due to the chaotic nature of compound nuclei, positions of $`s`$ and $`p`$ resonances in neutron scattering from a heavy nucleus, and amplitudes involving these states, are uncorrelated. This gives rise to an unusual statistical effect in the asymmetry of the transmission of neutrons with positive and negative helicities . This asymmetry corresponds to the $`𝝈\cdot 𝐩`$ correlation. It violates parity conservation, and is produced by the weak interaction in the nucleus, which mixes the $`s`$ and $`p`$ neutron partial waves. The magnitude of the asymmetry is strongly enhanced if the neutron energy is tuned to the $`p`$ resonance . In this case its magnitude is determined by the perturbative mixing
$$\eta =\frac{\langle s|W|p\rangle }{E_s-E_p}$$
(1)
of the $`s`$ and $`p`$ resonances by the weak interaction $`W`$. The matrix element between the compound states behaves as a Gaussian random variable, and $`\eta `$ is also a random variable with zero mean. The characteristic mixing (and asymmetry) can be estimated simply as $`\eta _cw/D`$, where $`w`$ is the root-mean-square matrix element, and $`D`$ is the mean spacing between the compound nucleus resonances. However, when one takes a closer look at the probability distribution of $`\eta `$, it turns out that its variance is infinite! This effect originates from a high probability to find small spacings $`E_sE_p`$, which results in the slow decrease of the probability density, $`f(\eta )1/\eta ^2`$.
The standard Central Limit Theorem (CLT) is not applicable to such random variables. Instead, it can be shown that if we consider the statistics of the average of $`n`$ such variables, the probability density of the sum $`(\eta _1+\mathrm{\dots }+\eta _n)/n`$ becomes independent of $`n`$ at large $`n`$, i.e., fluctuations of $`\eta `$ are not averaged out . This contrasts with the usual situation where the CLT would give a Gaussian distribution whose width decreases as $`1/\sqrt{n}`$. This unusual behaviour is explained by the fact that among $`n`$ uncorrelated $`\eta _i`$ there is a large probability to find one whose magnitude is $`n`$ times greater than their typical value, $`\eta _i\sim n\eta _c`$. Such $`\eta _i`$ will always dominate the sum and ensure that fluctuations are not suppressed by averaging. Another physical instance in which rare events dominate the distribution is seen in “Levy flights” in the random force diffusion model . In a usual homogeneous system, diffusion is modelled by Brownian motion, where the distribution of displacements is Gaussian. But in the random force diffusion model, disorder induces rare but large displacements which dominate the distribution (the Levy distribution). For certain values of the parameters, these have statistics similar to those found in nuclear scattering problems.
In this paper we analyse other effects in neutron scattering that have such unusual statistics. They are more conventional than those discussed in Ref. , since they do not involve the weak interaction. For example, in Sec. II we consider the difference between the differential cross sections in the forward and backward scattering, due to the interference of the $`p`$-wave resonance scattering and the background scattering amplitude, for a spinless particle. This effect can be described as the $`𝐩_i\cdot 𝐩_f`$ correlation, where $`𝐩_i`$ and $`𝐩_f`$ are the momenta of the incident and emitted particles, respectively. We derive the statistics of the observable and the way it behaves upon averaging over many different nuclei, in a similar fashion to what was done in . We also discuss the limit of this effect when finite widths of the resonances are taken into account. In Section III we rederive the $`𝐩_i\cdot 𝐩_f`$ correlation for particles with spin (e.g., neutrons) incident upon a spinless nucleus, as well as study the effect of a different correlation between the neutron spin $`𝝈`$ and the scattering plane, $`𝝈\cdot (𝐩_i\times 𝐩_f)`$. This is shown to have different statistics from the first correlation, and we derive a limit theorem for the average of the second correlation effect (details can be found in the Appendix). As it turns out, fluctuations of this average increase upon averaging, because when larger sets of data are considered there is a finite probability of finding an effect whose magnitude is $`n^2`$ times larger than its typical value.
## II Cross section asymmetry for a spinless particle
Let us first study the simple case of the $`𝐩_i\cdot 𝐩_f`$ correlation for the scattering of a spinless particle. Here we consider the difference between forward and backward elastic scattering differential cross sections near threshold, due to the interference of the $`p`$-wave resonance scattering with the background $`s`$-wave amplitude. For a spinless particle the scattering amplitude at low momenta is written using the Breit-Wigner formula as (see, e.g., )
$$f(\theta )=-A-\frac{g_p}{2k}\frac{\mathrm{\Gamma }_p^{(n)}}{E-E_p+\frac{i}{2}\mathrm{\Gamma }_p}\mathrm{cos}\theta ,$$
(2)
where $`A`$ is the $`s`$-wave scattering length, $`k`$ and $`E`$ are the wave number and energy of the projectile, and $`g_p`$, $`E_p`$, $`\mathrm{\Gamma }_p^{(n)}`$ and $`\mathrm{\Gamma }_p`$, are the statistical weight, energy, capture (or elastic) width and total width of the $`p`$ resonance, respectively. We assume that at energy $`E`$ the $`s`$-wave background is nonresonant, and there is a $`p`$-wave resonance nearby, a condition which would favour larger asymmetries. This leads to expressions for the forward ($`+`$) and backward ($``$) scattering amplitudes
$$f^\pm =-A\mp \frac{g_p}{2k}\frac{\mathrm{\Gamma }_p^{\left(n\right)}}{\epsilon }$$
(3)
where $`\epsilon =E-E_p`$ is the distance to the $`p`$-wave resonance, and we assume that it is greater than the resonance width, i.e., $`\epsilon \gg \mathrm{\Gamma }_p`$.
The relevant observable is the asymmetry
$$x=\frac{(d\sigma /d\mathrm{\Omega })_+-(d\sigma /d\mathrm{\Omega })_{-}}{(d\sigma /d\mathrm{\Omega })_++(d\sigma /d\mathrm{\Omega })_{-}},$$
(4)
where $`(d\sigma /d\mathrm{\Omega })_\pm `$ are the forward and backward scattering cross sections. Substituting amplitudes (3) and taking into account that at low momenta the contribution of the $`p`$ wave is much smaller than that of the $`s`$ wave, we obtain
$$x=\frac{\mathrm{\Gamma }_p^{(n)}}{\beta \epsilon }$$
(5)
where $`\beta =Ak/g_p`$. A typical value of this asymmetry is $`g_p\mathrm{\Gamma }_p^{(n)}/AkD`$.
### A Statistical analysis
We would now like to obtain the probability density for the observable $`x`$. The capture width is proportional to the square of the capture amplitude. The capture amplitudes for complex compound nuclear states have a Gaussian distribution , hence, the widths $`\mathrm{\Gamma }_p^{(n)}`$ are distributed according to the Porter-Thomas law
$$g(\gamma )=\frac{1}{\sqrt{2\pi \overline{\gamma }\gamma }}\mathrm{exp}\left(-\frac{\gamma }{2\overline{\gamma }}\right),$$
(6)
where $`\gamma \equiv \mathrm{\Gamma }_p^{\left(n\right)}`$ for convenience and $`\overline{\gamma }`$ is the mean width, $`\overline{\gamma }=\int \gamma g(\gamma )𝑑\gamma `$.
For a given energy $`E`$ the distance to the nearest $`p`$ resonance in a compound nucleus is random. If the relative positions of the $`p`$ resonances were uncorrelated it would be described by a Poissonian distribution
$$f_D(\epsilon )=D^{-1}\mathrm{exp}\left(-\frac{2\left|\epsilon \right|}{D}\right)$$
(7)
where $`D`$ is the mean spacing between the $`p`$ resonances and $`\overline{|\epsilon |}=D/2`$. Correlations between the positions of compound states of the same symmetry, often referred to as level repulsion, modify the above distribution. These correlations are described by the random matrix theory , and can be approximated by the Wigner law. In this case the distance to the nearest $`p`$ resonance has the following probability density :
$$f_D(\epsilon )=D^{-1}\mathrm{exp}\left(-\frac{\pi \epsilon ^2}{D^2}\right).$$
(8)
To avoid confusion we should stress that here $`\epsilon `$ is the distance to the nearest $`p`$-wave resonance and not the interval between the $`p`$-wave resonances. Therefore, Eq. (8) differs from the Wigner-Dyson distribution. The difference between Eqs. (7) and (8) is not very important for our consideration, as long as $`f_D(\epsilon )`$ remains finite (and equal to $`D^{-1}`$) at $`\epsilon \to 0`$ .
Using Eqs. (6) and (8) we calculate the distribution of the observable $`x`$ as
$`f(x)`$ $`=`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}𝑑\gamma {\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}𝑑\epsilon f_D(\epsilon )g(\gamma )\delta \left(x-{\displaystyle \frac{\gamma }{\beta \epsilon }}\right)`$ (9)
$`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi x_0|x|}}}{\displaystyle \int _0^{\mathrm{\infty }}}\sqrt{t}\mathrm{exp}\left(-{\displaystyle \frac{|x|t}{2x_0}}-\pi t^2\right)𝑑t,`$ (10)
where $`x_0=|\overline{\gamma }/\beta D|`$ represents some typical value of the effect. The integral in (10) can be given explicitly in terms of the parabolic cylinder functions $`D_\nu (z)`$,
$$f(x)=\frac{(2\pi )^{-3/4}\mathrm{\Gamma }(\frac{3}{2})}{\sqrt{2\pi x_0|x|}}\mathrm{exp}\left(\frac{x^2}{32\pi x_0^2}\right)D_{-\frac{3}{2}}\left(\frac{|x|}{x_0\sqrt{8\pi }}\right).$$
(11)
On the other hand, one can easily find the asymptotic behaviour of the probability density at $`|x|x_0`$ directly from Eq. (10):
$$f(x)\approx x_0/x^2.$$
(12)
The CLT does not apply to distributions with this asymptotic form, since they do not have a finite variance: $`\int x^2f(x)dx=\mathrm{\infty }`$.
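The heavy tail (12) is easy to see in a direct simulation: sample $`\mathrm{\Gamma }_p^{(n)}`$ from the Porter-Thomas law (6) and $`\epsilon `$ from (8), form $`x=\mathrm{\Gamma }_p^{(n)}/(\beta \epsilon )`$, and compare the empirical tail with $`x_0/x^2`$. The sketch below is our own illustration; the parameter values are arbitrary (everything is measured in units where $`x_0=1`$).

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10**6
gamma_bar, D, beta = 1.0, 1.0, 1.0
x0 = gamma_bar / (beta * D)

gamma = gamma_bar * rng.chisquare(1, n_samples)            # Porter-Thomas widths, Eq. (6)
eps = rng.normal(0.0, D / np.sqrt(2 * np.pi), n_samples)   # Gaussian with f_D(0) = 1/D, cf. Eq. (8)
x = gamma / (beta * eps)

# for |x| >> x0 the tail probability P(|x| > t) should approach 2*x0/t, from f(x) ~ x0/x^2
for t in (10.0, 30.0, 100.0):
    print(t, np.mean(np.abs(x) > t), 2 * x0 / t)
```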
### B Limit theorem for the first correlation
Suppose the forward-backward asymmetry is measured in an experiment where a number of different nuclei with similar-sized cross sections are involved. The measurement will then yield some average asymmetry, and we want to find the probability distribution of it. Otherwise, one may just analyse the asymmetries measured separately for a number of nuclei. Let us then consider the average of $`n`$ independent random variables $`x_i`$ introduced above,
$$X=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}x_i.$$
(13)
In Ref. a derivation of the limit theorem for distributions with asymptotic behaviour (12) was presented. A general solution of the problem of limit distributions of sums of independent variables with an infinite variance for which $`f(x)\sim |x|^{-\alpha -1}`$, can be found in Ref. ($`\alpha >0`$ to keep the total probability $`\int f(x)𝑑x`$ finite). A simple derivation of the limit theorem for such distributions is given in the Appendix.
The random variable $`x`$ has a symmetric probability distribution, $`f(x)=f(x)`$. In this case for $`\alpha =1`$ the limit distribution is obtained from Eqs. (A22) and (A24) with $`a=0`$ and $`c=\pi x_0`$. So, for $`n\mathrm{}`$ the probability density $`F_n(X)`$ approaches its limit form
$$F_n(X)=\frac{1}{\pi }\frac{X_c}{X^2+X_c^2},$$
(14)
where $`X_c=\pi x_0`$ . This is called the Cauchy distribution, and its main property is that it is independent of $`n`$, in particular, it does not become narrower as $`n`$ increases. Therefore, fluctuations are not suppressed by averaging. Compare this with the standard central limit theorem, where the width decreases as $`\sigma _n=\sigma _1/\sqrt{n}`$.
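This non-vanishing of the fluctuations is easy to see numerically. A minimal sketch (same illustrative values $`\overline{\gamma }=D=\beta =1`$ as above): the width of the distribution of the average does not shrink as the number of measurements grows.

```python
import numpy as np
rng = np.random.default_rng(1)

def sample_x(shape, gamma_bar=1.0, D=1.0, beta=1.0):
    gamma = gamma_bar * rng.standard_normal(shape)**2
    eps = (D / np.sqrt(2*np.pi)) * rng.standard_normal(shape)
    return gamma / (beta * eps)

for n in (1, 10, 100, 1000):
    X = sample_x((20000, n)).mean(axis=1)
    q25, q75 = np.percentile(X, [25, 75])
    # the interquartile range stays of order X_c = pi*x0 instead of falling as 1/sqrt(n)
    print(n, round(q75 - q25, 2))
```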
The parameter $`X_c`$ for our physical observable is given by $`X_c=\pi \overline{\gamma }/\beta D=\pi g_p\overline{\mathrm{\Gamma }_p^{(n)}}/AkD`$. Throughout the derivation we considered the scattering length $`A`$ as a constant. Indeed, if the energy $`E`$ does not coincide with an $`s`$-wave resonance, the $`s`$-wave amplitude $`A`$ represents the potential scattering amplitude. It does not vary strongly between isotopes, or nuclei of similar masses, because the nuclear potential does not vary much. The energy scale of its variation is MeV, similar to single-particle energy level spacing. Contrast this with the scale of the compound resonance spacings which are of the order of 10 eV. This difference in energy scales means that the compound resonance states can be treated statistically, while the scattering length is treated as a constant.
### C Influence of the resonance widths on the statistics of the average asymmetry
The above calculations have been based on the assumption that $`\epsilon \gg \mathrm{\Gamma }_p`$, so that the possible effects of the resonance widths have been neglected. This is justified, as long as the probability of finding a very small interval $`\epsilon \sim \mathrm{\Gamma }_p`$ is indeed small. However, it is easy to see the role of the width as we increase $`n`$. As we explained in the Introduction, the nonvanishing fluctuations of the average depend on having one value of the effect large, $`x_i\sim nx_c`$, where $`x_c`$ is a typical value. Indeed, we can expect that if we make $`n`$ measurements then at least one will have an energy spacing of the order $`\epsilon \sim D/n`$, thus giving a large asymmetry (5). The energy denominator, however, cannot be made arbitrarily small, and as smaller $`|E-E_p|`$ are considered, it will reach a limit $`|\epsilon +i\mathrm{\Gamma }_p/2|\sim \mathrm{\Gamma }_p`$. Thus $`D/n\sim \mathrm{\Gamma }_p`$ determines the largest values of $`n`$, beyond which the Cauchy distribution of the average effect begins to turn into a Gaussian one. Hence our statistical analysis is valid until $`n\lesssim D/\mathrm{\Gamma }_p`$ (in heavy non-fissionable nuclei $`D/\mathrm{\Gamma }_p\sim 500`$).
If we continue to take measurements after this and further increase the number of measurements $`n`$, the maximum value of the asymmetries will stabilise, being of the order $`x_i\sim x_0D/\mathrm{\Gamma }_p=\pi \overline{\gamma }/\beta \mathrm{\Gamma }_p`$. Hence we will no longer have increasingly large values of the effect to continue the “non-vanishing” averaging. Thus, when $`n\gtrsim D/\mathrm{\Gamma }_p`$ the Gaussian statistics take over, the standard central limit theorem applies, and the usual $`1/\sqrt{n}`$ suppression of fluctuations takes place. Note that $`x_0D/\mathrm{\Gamma }_p\gg x_0`$ in fact determines the true finite, but large, variance of $`x`$. Beyond this value $`f(x)`$ decreases faster than $`x^{-2}`$, and it effectively determines the lower and upper limits in the variance integral $`\int x^2f(x)𝑑x`$.
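The crossover described above can also be illustrated numerically by keeping the finite width in the energy denominator, modelling the effect as $`x\mathrm{\Gamma }_p^{(n)}\epsilon /[\beta (\epsilon ^2+\mathrm{\Gamma }_p^2/4)]`$ (a model assumption consistent with the discussion above, not a formula from the text). With the artificially small ratio $`D/\mathrm{\Gamma }_p=50`$, chosen only to keep the simulation cheap, the width of the distribution of the average is roughly $`n`$-independent below $`nD/\mathrm{\Gamma }_p`$ and starts to fall beyond it:

```python
import numpy as np
rng = np.random.default_rng(2)

gamma_bar = D = beta = 1.0
Gamma_p = D / 50.0                     # illustrative; real heavy nuclei have D/Gamma_p ~ 500

def sample_x(shape):
    gamma = gamma_bar * rng.standard_normal(shape)**2
    eps = (D / np.sqrt(2*np.pi)) * rng.standard_normal(shape)
    return gamma * eps / (beta * (eps**2 + Gamma_p**2 / 4))   # width-regularized asymmetry

for n in (10, 50, 500, 5000):          # crossover expected near n ~ D/Gamma_p = 50
    X = sample_x((4000, n)).mean(axis=1)
    q25, q75 = np.percentile(X, [25, 75])
    print(n, round(q75 - q25, 2))
```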
## III Spin one-half particle correlations
Let us now consider scattering of low-energy spin-$`\frac{1}{2}`$ particles, looking at both the $`𝐩_i𝐩_f`$ and $`𝝈(𝐩_i\times 𝐩_f)`$ correlations. Again we assume that there is no nearby compound nucleus $`s`$-wave resonance and that the asymmetry is dominated by one nearby $`p`$-wave resonance. This is justified because the statistics of the average effect $`X`$ for large $`n`$, $`F_n(X)`$, is determined by large values of the individual effects, i.e., by the asymptotic large-$`x`$ behaviour of the probability density $`f(x)`$, cf. Eq. (12) .
Consider scattering of the neutron, $`s=\frac{1}{2}`$ from a nucleus with spin $`I`$. The total angular momentum of the $`p`$-wave neutron is $`𝐣=𝐥+𝐬`$ and the total angular momentum of the compound resonance is $`𝐉=𝐈+𝐣`$. The amplitude of $`p`$-wave resonant scattering at arbitrary angle $`\theta `$ between the incoming and outgoing particle can be written as (see Ref. )
$`f_p`$ $`=`$ $`{\displaystyle \frac{1}{2k}}{\displaystyle \underset{\genfrac{}{}{0pt}{}{j,j_z,m}{j^{},j_z^{},m^{}}}{}}C_{II_zj^{}j_z^{}}^{JJ_z}C_{1m^{}\frac{1}{2}s_z^{}}^{j^{}j_z^{}}\sqrt{4\pi }Y_{1m^{}}^{}(𝐧_k^{})\sqrt{\mathrm{\Gamma }_{p_j^{}}^{(n)}(E)}`$ (16)
$`\times {\displaystyle \frac{1}{EE_p+\frac{i}{2}\mathrm{\Gamma }_p}}C_{II_zjj_z}^{JJ_z}C_{1m\frac{1}{2}s_z}^{jj_z}\sqrt{4\pi }Y_{1m}(𝐧_k)\sqrt{\mathrm{\Gamma }_{p_j}^{(n)}(E)},`$
where $`I_z`$, $`𝐧_k`$, $`s_z`$, and $`I_z^{}`$, $`𝐧_k^{}`$, $`s_z^{}`$, describe the projection of the target spin, the direction of the neutron momentum and the projection of the neutron spin in the initial and final states, respectively, $`\mathrm{\Gamma }_{p_j}^{(n)}`$ is the capture width for the neutron with angular momentum $`j`$, the $`Y_{lm}`$ are the angular wave functions, and $`C_{II_zjj_z}^{JJ_z}`$ are the Clebsch-Gordon coefficients.
Consider scattering of a neutron incident along the $`x`$ direction off a spinless $`\left(I=0\right)`$ nucleus. We quantise the neutron spin in the $`z`$ direction and consider neutrons scattered in the $`xy`$ plane. The $`s`$-wave scattering amplitude is simply $`A\delta _{s_zs_z^{}}`$. Having in mind that we need to calculate interference terms between the $`s`$ and $`p`$ waves, we can write the $`p`$-wave scattering amplitude (16) in the following form
$$f_p(\theta )=\frac{1}{2k}\underset{m}{}\left|C_{1m\frac{1}{2}s_z}^{jj_z}\right|^2Y_{1m}^{}(𝐧_k^{})\frac{1}{EE_p+\frac{i}{2}\mathrm{\Gamma }_p}Y_{1m}(𝐧_k)4\pi \mathrm{\Gamma }_{p_j}^{(n)}(E).$$
(17)
### A First correlation
The resonance $`p`$-wave amplitude for forward and backward scattering (see Sec. II) with $`j=\frac{1}{2}`$ is
$$f_{p_{1/2}}^\pm =\frac{1}{2k}\frac{\mathrm{\Gamma }_{p_{1/2}}^{(n)}}{E-E_p+\frac{i}{2}\mathrm{\Gamma }_p},$$
(18)
which is similar to the spinless particle scattering (Eq. 2), with the parameter $`g_p=1`$. Similarly for $`j=\frac{3}{2}`$ states the amplitude is
$$f_{p_{3/2}}^\pm =\frac{2}{2k}\frac{\mathrm{\Gamma }_{p_{3/2}}^{(n)}}{E-E_p+\frac{i}{2}\mathrm{\Gamma }_p}$$
(19)
which is similar to spinless particle scattering with $`g_p=2`$. This means that the statistics derived for the spinless particle $`𝐩_i𝐩_f`$ correlation in Sec. II are valid for the case where spin is included. In fact, since we do not know whether the nearest resonance is $`p_{1/2}`$ or $`p_{3/2}`$ we must combine the two distributions.
### B Second correlation
The second correlation $`𝝈(𝐩_i\times 𝐩_f)`$, between the direction of the spin and the scattering plane, is, of course, specific to particles with a non-zero spin. To calculate the asymmetry of the cross section with respect to flipping the spin, we take the initial neutron momentum along the $`x`$-direction as before, and look at the difference between the scattering amplitude in the $`+y`$ direction, $`f^+`$, and that in the $`-y`$ direction, $`f^{-}`$. Equation (17) yields the $`p`$-resonance scattering amplitudes in the $`+y`$ and $`-y`$ directions for $`j=\frac{1}{2}`$
$$f_{p_{1/2}}^\pm =\pm \frac{i}{2k}\frac{\mathrm{\Gamma }_{p_{1/2}}^{(n)}}{E-E_p+\frac{i}{2}\mathrm{\Gamma }_p},$$
(20)
and similarly for $`j=\frac{3}{2}`$ we obtain
$$f_{p_{3/2}}^\pm =\mp \frac{i}{2k}\frac{\mathrm{\Gamma }_{p_{3/2}}^{(n)}}{E-E_p+\frac{i}{2}\mathrm{\Gamma }_p}$$
(21)
which differs from the $`j=\frac{1}{2}`$ case only by sign.
Thus the total scattering amplitude for the $`𝝈(𝐩_i\times 𝐩_f)`$ correlation is given by
$$f^\pm =A\mp \frac{i\eta _p}{2k}\frac{\mathrm{\Gamma }_p^{\left(n\right)}}{E-E_p+\frac{i}{2}\mathrm{\Gamma }_p}$$
(22)
where $`\eta _p=-1`$ for $`j=\frac{1}{2}`$ and $`\eta _p=+1`$ for $`j=\frac{3}{2}`$. Then, taking into account that the second term in Eq. (22), which represents the $`p`$-wave contribution, is much smaller than the first one, we obtain for the observable difference of the corresponding cross sections \[see Eq. (4)\]
$$x=\frac{\eta _p}{2kA}\frac{\mathrm{\Gamma }_p\mathrm{\Gamma }_p^{(n)}}{(E-E_p)^2+\frac{1}{4}\mathrm{\Gamma }_p^2}.$$
(23)
As we discussed above, the scattering length varies weakly. The same is true for the total width of the compound resonances $`\mathrm{\Gamma }_p`$. Its fluctuations are small because it is dominated by the radiative width, given by a sum of a large number of partial widths due to transitions into all lower-lying nuclear states. Introducing $`\beta =2kA/\eta _p\mathrm{\Gamma }_p`$, and taking into account that $`\epsilon =E-E_p`$ is usually much larger than the resonance width, $`|\epsilon |\gg \mathrm{\Gamma }_p`$, we obtain for the asymmetry
$$x=\frac{\mathrm{\Gamma }_p^{(n)}}{\beta \epsilon ^2}.$$
(24)
The typical size of this effect, $`\sim \eta _p\mathrm{\Gamma }_p^{(n)}\mathrm{\Gamma }_p/AkD^2=(\eta _p\mathrm{\Gamma }_p^{(n)}/AkD)(\mathrm{\Gamma }_p/D)`$, is much smaller than the first correlation, by a factor of $`\mathrm{\Gamma }_p/D`$. However, this observable has an $`\epsilon ^{-2}`$ dependence on the distance to the nearest $`p`$ resonance, while the first correlation was proportional to $`\epsilon ^{-1}`$. The $`\epsilon ^{-2}`$ singularity emphasizes even more strongly the possibility of small denominators. Note also that for a given scattering length $`A`$ the sign of this interference effect is always the same. We will see that this leads to very different statistics of the $`𝝈(𝐩_i\times 𝐩_f)`$ correlation.
#### 1 Statistical analysis
Let us derive a probability distribution for the observable $`x`$ given by Eq. (24). Similarly to Sec. II A we have
$$f(x)=\frac{1}{D\sqrt{2\pi \overline{\gamma }}}\int _0^{\mathrm{\infty }}\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\mathrm{exp}\left(-\frac{\pi \epsilon ^2}{D^2}-\frac{\gamma }{2\overline{\gamma }}\right)\frac{d\epsilon d\gamma }{\sqrt{\gamma }}\delta \left(x-\frac{\gamma }{\beta \epsilon ^2}\right)$$
(25)
where we again use $`\gamma `$ for the capture width $`\mathrm{\Gamma }_p^{(n)}`$, and the probability densities $`g(\gamma )`$ and $`f_D(\epsilon )`$ are taken from Eqs. (6) and (8), respectively. Assuming that the scattering length is positive, hence, $`\beta >0`$, we calculate the above integral and obtain
$$f(x)=\frac{\sqrt{x_0}}{\sqrt{\pi x}(x+\pi x_0)},x>0,$$
(26)
and $`f(x)=0`$ for $`x<0`$, where
$$x_0=\frac{2\overline{\gamma }}{\beta D^2}$$
(27)
characterises typical values of the asymmetry (24). The asymptotic behaviour of this probability density at $`x\gg x_0`$ is
$$f(x)\simeq \frac{\sqrt{x_0}}{\sqrt{\pi }|x|^{3/2}}.$$
(28)
The probability density $`f(x)`$ is normalized as $`\int _0^{\mathrm{\infty }}f(x)𝑑x=1`$. However, the corresponding mean value $`\int f(x)x𝑑x`$ is infinite, and the integral for the variance $`\int f(x)x^2𝑑x`$ diverges even faster than that for the first correlation \[cf. Eq. (12)\]. This signifies even larger fluctuations of the second correlation effect.
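The $`x^{-3/2}`$ tail (28) can again be checked by sampling Eqs. (6), (8), and (24) directly (Python sketch; the values $`\overline{\gamma }=D=\beta =1`$ are arbitrary illustrative choices, not from the text):

```python
import numpy as np
rng = np.random.default_rng(3)

gamma_bar = D = beta = 1.0
x0 = 2 * gamma_bar / (beta * D**2)       # Eq. (27)

N = 10**6
gamma = gamma_bar * rng.standard_normal(N)**2
eps = (D / np.sqrt(2*np.pi)) * rng.standard_normal(N)
x = gamma / (beta * eps**2)              # the asymmetry of Eq. (24)

# Eq. (28) gives P(x > X) ~ 2*sqrt(x0/(pi*X)) for X >> x0.
for X in (10.0, 100.0, 1000.0):
    print(X, np.mean(x > X), 2*np.sqrt(x0/(np.pi*X)))
```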
#### 2 Limit theorem for the second correlation
The probability distribution of the second correlation (28) corresponds to $`\alpha =\frac{1}{2}`$ (see Appendix). Using the asymptotic parameters $`c_1=0`$ and $`c_2=\sqrt{x_0/\pi }`$ \[compare Eqs. (28) and (A1)\] we obtain $`c=\sqrt{2x_0}`$ and $`\beta =1`$ from Eqs. (A10) and (A11). In fact, it is possible to calculate the Fourier transform of $`f(x)`$ of Eq. (26) explicitly,
$`\stackrel{~}{f}(\omega )`$ $`=`$ $`e^{-i\pi x_0\omega }\pi ^{-1/2}\mathrm{\Gamma }(1/2,-i\pi x_0\omega )`$ (29)
$`=`$ $`e^{-i\pi x_0\omega }\left[1-\sqrt{2x_0}(1\pm i)|\omega |^{1/2}+O(\omega ^{3/2})\right],`$ (30)
where $`\mathrm{\Gamma }(\mathrm{})`$ is the incomplete $`\mathrm{\Gamma }`$-function, and $`\pm `$ corresponds to $`\omega \lessgtr 0`$.
It follows now from Eqs. (A22) and (A26) that the limit distribution $`F_n(X)`$ of the average effect $`X`$ is given by
$$F_n(X)=\sqrt{\frac{nx_0}{\pi }}\frac{e^{-nx_0/X}}{X^{3/2}},(X>0).$$
(31)
This equation shows explicitly that as the number of effects included in the average increases, the distribution widens proportionally to $`n`$. Accordingly, the typical values of the average also grow as $`Xnx_0`$.
To understand this recall that the second asymmetry is inversely proportional to $`\epsilon ^2`$. Thus, when we consider the average of $`n`$ such variables, one of them is likely to have $`ϵ\sim D/n`$, which makes it $`n^2`$ times greater than the typical value $`x_0`$. This variable will dominate the average and give $`X\sim nx_0`$. Also, to describe a possible experiment more realistically, one must combine the distributions with different $`\eta _p`$, remembering that the probability to find a close $`p_{3/2}`$ resonance is twice that of a $`p_{1/2}`$ resonance.
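Numerically the growth of the average is easy to observe (Python sketch, same illustrative parameter values as above): the ratio printed below settles at a constant of order unity, i.e. the typical average grows as $`X\sim nx_0`$.

```python
import numpy as np
rng = np.random.default_rng(4)

gamma_bar = D = beta = 1.0
x0 = 2 * gamma_bar / (beta * D**2)

def sample_x(shape):
    gamma = gamma_bar * rng.standard_normal(shape)**2
    eps = (D / np.sqrt(2*np.pi)) * rng.standard_normal(shape)
    return gamma / (beta * eps**2)

for n in (10, 100, 1000):
    X = sample_x((5000, n)).mean(axis=1)
    print(n, round(np.median(X) / (n * x0), 2))   # roughly constant: X grows like n*x0
```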
The role of resonance widths can be understood in the same way as it was done in Sec. II C. The limit statistics (31) is valid until $`n\lesssim D/\mathrm{\Gamma }_p`$, i.e., when the characteristic value of the maximal effect that dominates the average $`X`$ requires denominators as small as $`\epsilon \sim D/n\sim \mathrm{\Gamma }`$. When $`n\gtrsim D/\mathrm{\Gamma }_p`$ the usual statistics of the Central Limit Theorem become valid, and we eventually have typical values of the average decreasing as $`1/\sqrt{n}`$.
## IV Conclusion
We have considered interference effects between $`p`$-wave resonance neutron scattering amplitude and background $`s`$-wave amplitude in compound nuclei and found that these effects do not obey the Standard Central Limit Theorem. That is, the probability density of the average effect over $`n`$ measurements, $`X=\frac{1}{n}_{i=1}^nx_i`$ , does not tend to a Gaussian distribution with variance $`\sigma _n^2=\sigma _1^2/n`$ for large $`n`$. We have examined two effects, (i) the $`𝐩_i𝐩_f`$ correlation, which corresponds to the forward-backward asymmetry of the differential cross section, and (ii) the $`𝝈(𝐩_i\times 𝐩_f)`$ correlation, which describes the asymmetry of the scattering cross section with respect to flipping the spin relative to the scattering plane.
The first of these was found to have a distribution with asymptotic behaviour $`f(x)\sim 1/x^2`$ for large $`x`$. In this case the distribution of the average $`X`$ tends to a Cauchy distribution $`F_n(X)=X_c/[\pi (X^2+X_c^2)]`$. This is independent of $`n`$, so the typical value of the average ($`X_c`$) does not decrease with increasing number of measurements. Physically this is understood by the following arguments. The asymmetry in question is inversely proportional to the spacing between the incident neutron energy and the energy of the closest $`p`$-wave resonance ($`x\propto \epsilon ^{-1}`$). If we have $`n`$ measurements, we have a high probability that one of these spacings will be of the order $`\epsilon \sim D/n`$, where $`D`$ is the mean $`p`$ level spacing. This will produce an asymmetry of the size $`x\sim nx_0`$, where $`x_0`$ is a typical value of the effect. Thus the typical average value is $`X\sim x_0`$, nonvanishing with increasing $`n`$.
The second correlation we considered produces a much smaller effect than the first correlation. However, it has been found to have an $`\epsilon ^{-2}`$ dependence, giving a distribution with asymptotic behaviour $`f(x)\sim x^{-3/2}`$ for large $`x`$. This means that there is a higher probability to obtain relatively large values of $`x`$, compared to the first correlation. As a result, typical values for the average of $`n`$ asymmetries actually increase with increasing number of measurements $`n`$ as $`X\sim nx_0`$.
Above we assumed $`ϵ>\mathrm{\Gamma }_p`$, where $`ϵ`$ is the distance to the resonance and $`\mathrm{\Gamma }_p`$ is the resonance width. When we consider the influence of the resonance widths, it is found that they affect the distribution by limiting the size of the effect, $`x`$. Indeed, the minimum value for the denominator $`|\epsilon +i\mathrm{\Gamma }_p/2|`$ is given by $`\mathrm{\Gamma }_p`$, hence, the maximal possible effects are limited. We have found that our analysis of the statistics of the averages $`X`$ is valid for $`n<D/\mathrm{\Gamma }_p`$ (for heavy non-fissionable nuclei $`D/\mathrm{\Gamma }_p\sim 500`$). For $`n\gtrsim D/\mathrm{\Gamma }_p`$ we expect the average to once again obey the standard CLT and vanish with increasing $`n`$.
Because of the interest in scattering problems, it would be of benefit to actually perform the experiments discussed in this paper. If one measured the first correlation in low-energy neutron scattering off different isotopes of heavy nuclei, then we expect to see an observable effect in the average. This would not decrease until the number of measurements $`n\sim 500`$. (Note, however, that the total number of relatively stable isotopes is $`\sim 1000`$). Because it is hard to measure scattering exactly in the forward and backward directions, the experiment would have to be performed at some angle $`\theta `$ to the axis of incidence. This would only cause an extra factor of $`\mathrm{cos}\theta `$ in the value of the effect.
The second correlation is much smaller (by a factor $`\mathrm{\Gamma }_p/D\sim 10^{-3}`$). This would make it much harder to observe, but it would have an increasing typical value upon averaging over many measurements. This means that one might be able to observe the effect after performing the experiment over many isotopes.
## A Limit theorem for probability distributions with infinite variances
Consider a random variable whose probability density has the following asymptotic behaviour:
$`f(x)=\{\begin{array}{cc}c_1/|x|^{\alpha +1},x\to -\mathrm{\infty },\hfill & \\ c_2/x^{\alpha +1},x\to +\mathrm{\infty },\hfill & \end{array}`$ (A1)
with $`0<\alpha <2`$, and is normalized in the usual way, $`f(x)𝑑x=1`$. The existence of the mean, $`xf(x)𝑑x`$, depends on whether $`\alpha `$ is greater or less than unity, but the variance integral $`x^2f(x)𝑑x`$ is infinite in both cases, and the standard Central Limit Theorem is inapplicable.
To derive the limit statistics of the average $`X=\frac{1}{n}\sum _{i=1}^nx_i`$ of $`n`$ independent random variables $`x_i`$, we use characteristic functions (or Fourier transforms)
$$\stackrel{~}{f}(\omega )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}e^{i\omega x}f(x)𝑑x.$$
(A2)
The Fourier transform of the probability density $`F_n(X)`$ of the average $`X`$ is given by
$`\stackrel{~}{F}_n(\omega )`$ $`=`$ $`{\displaystyle _{\mathrm{}}^{\mathrm{}}}e^{i\omega X}𝑑X{\displaystyle \delta \left(X\frac{1}{n}\underset{i=1}{\overset{n}{}}x_i\right)\underset{i=1}{\overset{n}{}}f(x_i)dx_i}`$ (A3)
$`=`$ $`{\displaystyle \underset{i=1}{\overset{n}{}}}\stackrel{~}{f}(\omega /n)=\left[\stackrel{~}{f}(\omega /n)\right]^n.`$ (A4)
Thus, the form of $`F_n(\omega )`$ for large $`n`$ is related to that of $`\stackrel{~}{f}(\omega )`$ at small $`\omega `$. This is in turn decided by the large-$`x`$ asymptotic behaviour of $`f(x)`$ given by Eq. (A1).
For $`1<\alpha <2`$ the integral $`xf(x)𝑑x`$ converges and $`\stackrel{~}{f}(\omega )`$ can be written as
$$\stackrel{~}{f}(\omega )=1i\omega xf(x)𝑑x+(e^{i\omega x}1+i\omega x)f(x)𝑑x.$$
(A5)
Let us consider the contribution of the interval from $`0`$ to $`+\mathrm{}`$ to the last term above. Using the asymptotic form (A1) we present it as
$$_0^{\mathrm{}}(e^{i\omega x}1+i\omega x)\left[f(x)\frac{c_2}{x^{\alpha +1}}\right]𝑑x+c_2_0^{\mathrm{}}(e^{i\omega x}1+i\omega x)\frac{dx}{x^{\alpha +1}}.$$
(A6)
If we assume that $`f(x)`$ approaches its asymptotic behaviour sufficiently rapidly, e.g., $`|f(x)c_2/x^{\alpha +1}|O(1/x^{\alpha +2})`$, then the first integral above behaves as $`O(\omega ^2)`$ at $`\omega 0`$. To calculate the second integral we turn the integration path into the complex plane by changing the variable $`\omega x=it`$ (for $`\omega >0`$ the $`t`$ is real positive), which gives
$$c_2e^{i\frac{\pi \alpha }{2}}\omega ^\alpha _0^{\mathrm{}}t^{\alpha 1}(e^t1+it)𝑑t,$$
(A7)
where the integral is a representation of the $`\mathrm{\Gamma }`$-function on a segment of the negative argument axis, $`\mathrm{\Gamma }(\alpha )=\mathrm{\Gamma }(\alpha +1)/\alpha `$ .
Applying the same procedure to the integral over $`(\mathrm{},0)`$ in expression (A5), and turning the integration path into the complex plane by using $`\omega x=it`$ (for $`\omega >0`$ and real positive $`t`$), we obtain the expansion of $`\stackrel{~}{f}(\omega )`$ at small $`\omega `$:
$$\stackrel{~}{f}(\omega )=1i\omega a\left(c_1e^{i\frac{\pi \alpha }{2}}+c_2e^{i\frac{\pi \alpha }{2}}\right)\omega ^\alpha \frac{\mathrm{\Gamma }(\alpha 1)}{\alpha }+\mathrm{}$$
(A8)
where $`a=xf(x)𝑑x`$ is the mean value. The expansion for negative $`\omega `$ can be obtained from the above by simply replacing $`\omega `$ with $`|\omega |`$ and complex-conjugating the exponential phase factors in the second term. Finally, at the same level of accuracy, we can re-write expansion (A8) in the form valid for positive and negative small $`\omega `$:
$$\stackrel{~}{f}(\omega )e^{i\omega a}\left[1c\left(1+i\beta \text{sign}(\omega )\mathrm{tan}\frac{\pi \alpha }{2}\right)|\omega |^\alpha \right],$$
(A9)
where
$`c`$ $`\equiv `$ $`(c_1+c_2){\displaystyle \frac{\mathrm{\Gamma }(1-\alpha )}{\alpha }}\mathrm{cos}{\displaystyle \frac{\pi \alpha }{2}},c>0,`$ (A10)
$`\beta `$ $`\equiv `$ $`{\displaystyle \frac{c_2-c_1}{c_2+c_1}},-1\le \beta \le 1,`$ (A11)
and $`\text{sign}(\omega )=\pm 1`$ for $`\omega >0`$ and $`\omega <0`$, respectively. The parameters $`c`$ and $`\beta `$ are determined by the asymptotic behaviour (A1) of the probability density. The value of $`\beta `$ depends on the asymmetry of the probability density $`f(x)`$.
The final form (A9) is very convenient. If we consider a random variable $`x_1`$ shifted with respect to $`x`$, $`x_1=xa`$ ($`a`$ is an arbitrary number here), its characteristic function would differ from that of $`x`$ by a simple phase factor, $`\stackrel{~}{f}_1(\omega )=e^{i\omega a}\stackrel{~}{f}(\omega )`$. On the other hand, the asymptotic behaviour of the probability density, Eq. (A1) is not affected by this transformation. Therefore, the phase factor in Eq. (A9) can always be eliminated by this simple shift.
For $`0<\alpha <1`$ in Eq. (A1) we re-write the Fourier transform as
$$\stackrel{~}{f}(\omega )=1+(e^{i\omega x}1)f(x)𝑑x.$$
(A12)
The contribution of positive $`x`$ to the above integral can be transformed into
$$_0^{\mathrm{}}(e^{i\omega x}1)\left[f(x)\frac{c_2}{x^{\alpha +1}}\right]𝑑x+c_2_0^{\mathrm{}}(e^{i\omega x}1)\frac{dx}{x^{\alpha +1}}.$$
(A13)
Provided the difference between $`f(x)`$ and $`c_2/x^{\alpha +1}`$ decreases as $`x^{\alpha 2}`$ or faster, as $`x+\mathrm{}`$, the first integral can be expanded in powers of $`\omega `$, with the leading term given by
$$i\omega _0^+\mathrm{}x\left[f(x)\frac{c_2}{x^{\alpha +1}}\right]𝑑x.$$
(A14)
The second integral in (A13) is transformed by variable substitution $`\omega x=it`$ (for $`\omega >0`$) into \[cf. Eq. (A7)\]
$$c_2e^{i\frac{\pi \alpha }{2}}\omega ^\alpha _0^{\mathrm{}}t^{\alpha 1}(e^t1)𝑑t,$$
(A15)
which again gives the $`\mathrm{\Gamma }`$-function . After we apply the same procedure to the negative-$`x`$ part of the integral in (A13), the expansion of $`\stackrel{~}{f}(\omega )`$ at small $`\omega `$ is established in exactly the same form as that of Eq. (A8) (for $`\omega >0`$). However, for $`0<\alpha <1`$ the parameter $`a`$ is no longer the mean value. Instead, it is given by
$$a=_{\mathrm{}}^0x\left[f(x)\frac{c_1}{|x|^{\alpha +1}}\right]𝑑x+_0^{\mathrm{}}x\left[f(x)\frac{c_2}{x^{\alpha +1}}\right]𝑑x.$$
(A16)
Also, the next term omitted in Eq. (A8) may now be greater than $`O(\omega ^2)`$. Nevertheless, the small-$`\omega `$ behaviour of the Fourier transform is still represented by Eq. (A9).
If $`\alpha =1`$ in Eq. (A1), the expansion of $`\stackrel{~}{f}(\omega )`$ also contains $`\omega \mathrm{ln}\omega `$ terms. In this case it can be presented as
$$\stackrel{~}{f}(\omega )1i\omega ac|\omega |\left[1i\frac{2}{\pi }\beta \text{sign}(\omega )\mathrm{ln}|\omega |\right],$$
(A17)
where $`c=\frac{\pi }{2}(c_1+c_2)`$, which can be obtained from Eq. (A10) at $`\alpha \to 1`$, $`\beta `$ is given by Eq. (A11), and
$$a=(c_2c_1)(1𝑪)+_{\mathrm{}}^0x\left[f(x)\frac{c_1}{1+x^2}\right]𝑑x+_0^{\mathrm{}}x\left[f(x)\frac{c_2}{1+x^2}\right]𝑑x,$$
(A18)
where $`𝑪0.577`$ is the Euler constant. Note that if the probability distribution is symmetric asymptotically, i.e., $`c_1=c_2`$, then $`\beta =0`$, and $`a`$ in Eqs. (A16) and (A18) is the mean value calculated in the principal value sense. If the probability density is fully symmetric, $`f(x)=f(x)`$, then $`\stackrel{~}{f}(\omega )`$ is real, $`a=\beta =0`$, and the behaviour of the characteristic function at small $`\omega `$ is especially simple:
$$\stackrel{~}{f}(\omega )1c|\omega |^\alpha .$$
(A19)
After establishing the form of of $`\stackrel{~}{f}(\omega )`$ at small $`\omega `$, Eq. (A9) for $`\alpha 1`$, we can proceed to derive the limit theorem, starting from Eq. (A4):
$`\stackrel{~}{F}_n(\omega )`$ $`=`$ $`e^{i\omega a}\left[1{\displaystyle \frac{c\left(1+i\beta \text{sign}(\omega )\mathrm{tan}\frac{\pi \alpha }{2}\right)n^{1\alpha }|\omega |^\alpha }{n}}\right]^n`$ (A20)
$``$ $`e^{i\omega a}\mathrm{exp}\left[c\left(1+i\beta \text{sign}(\omega )\mathrm{tan}{\displaystyle \frac{\pi \alpha }{2}}\right)n^{1\alpha }|\omega |^\alpha \right],`$ (A21)
for large $`n`$ (this formula appears in the theorem by A. Ya. Khintchine and P. Lévy as a canonical representation of stable probability distributions, see Ref. ). Using the last expression in $`F_n(X)=\frac{1}{2\pi }e^{i\omega X}F_n(\omega )𝑑\omega `$ we obtain the limit distribution in the following form:
$$F_n(X)=n^{\frac{\alpha 1}{\alpha }}c^{\frac{1}{\alpha }}f_{\alpha \beta }\left[n^{\frac{\alpha 1}{\alpha }}c^{\frac{1}{\alpha }}(Xa)\right],$$
(A22)
where
$$f_{\alpha \beta }(x)=_{\mathrm{}}^{\mathrm{}}e^{i\omega x|\omega |^\alpha }\mathrm{exp}\left[i\beta \text{sign}(\omega )\mathrm{tan}\frac{\pi \alpha }{2}|\omega |^\alpha \right]\frac{d\omega }{2\pi }.$$
(A23)
is a universal function of the two parameters, $`\alpha `$ and $`\beta `$, normalized to unity: $`f_\alpha (x)𝑑x=1`$. The results for $`\alpha =1`$ are obtained in a similar way, with $`a`$ replaced by $`a+c\frac{2}{\pi }\beta \mathrm{ln}n`$ in Eq. (A22), and $`f_{1\beta }(x)`$ given by Eq. (A23), in which $`\mathrm{tan}\frac{\pi \alpha }{2}`$ is replaced with $`\frac{2}{\pi }\mathrm{ln}|\omega |`$.
Equation (A22) shows that for $`0<\alpha <1`$ the limit distribution of the average widens with the increase of $`n`$, i.e., fluctuations of the average increase with the number of variables averaged. Since $`n^{\frac{\alpha 1}{\alpha }}a0`$ for $`n\mathrm{}`$, the shift of the distribution (A22) by $`a`$ is actually unimportant in this case and one can put $`a=0`$. For $`\alpha =1`$ the shape of the distribution $`F_n(X)`$ does not depend on $`n`$, i.e., fluctuations are neither enhanced nor suppressed by averaging. If $`\beta 0`$ the whole distribution is gradually shifted proportionally to $`\mathrm{ln}n`$ into the direction determined by the sign of $`\beta `$. For $`1<\alpha <2`$ the distribution of the average does become narrower with $`n`$, however the rate of suppression of fluctuations, $`Xn^{(\alpha 1)/\alpha }`$ is slower than the standard CLT $`n^{1/2}`$. Again, for symmetrically distributed $`x_i`$, the limit distribution $`F_n(X)`$ is even simpler, as $`a=\beta =0`$ in Eqs. (A22) and (A23).
There are a few cases where $`f_{\alpha \beta }`$ and, hence, $`F_n(X)`$, are known explicitly. For $`\alpha =1`$, $`\beta =0`$ ($`c_1=c_2`$) Eq. (A23) gives the Cauchy law,
$$f_{1,0}(x)=\frac{1}{\pi }\frac{1}{1+x^2},$$
(A24)
and $`c=\frac{\pi }{2}(c_1+c_2)`$. For $`\alpha =1/2`$, $`\beta =0`$ the limit function can be expressed in terms of the error function $`\mathrm{\Phi }(s)=2\pi ^{-1/2}\int _0^se^{-t^2}𝑑t`$ :
$$f_{\frac{1}{2},0}(x)=\frac{1}{2\sqrt{\pi }|x|^{3/2}}\mathrm{Im}\left\{e^{i\frac{\pi }{4}}e^{\frac{i}{4x}}\left[1\mathrm{\Phi }\left(\frac{1}{2\sqrt{ix}}\right)\right]\right\}.$$
(A25)
For the same $`\alpha =1/2`$ in the maximally asymmetric case, $`c_1=0`$, $`c_2>0`$, i.e., $`\beta =1`$, which takes place if the random variables $`x_i`$ are positive, one easily obtains the following simple answer :
$`f_{\frac{1}{2},1}(x)=\{\begin{array}{cc}0,x<0,\hfill & \\ (2\pi )^{-1/2}e^{-1/(2x)}x^{-3/2},x>0,\hfill & \end{array}`$ (A26) |
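Equation (A26) is the standard Lévy (stable, $`\alpha =1/2`$, $`\beta =1`$) density, which allows a one-line sanity check against a library implementation (Python/SciPy sketch, added here only as an illustration):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 20.0, 50)
f = np.exp(-1/(2*x)) / (np.sqrt(2*np.pi) * x**1.5)   # Eq. (A26) for x > 0
print(np.allclose(f, stats.levy.pdf(x)))             # True: (A26) is the Levy density
```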
no-problem/9912/math9912227.html | ar5iv | text | # Translated tori in the characteristic varieties of complex hyperplane arrangements
## 1 Introduction
The characteristic varieties of a space $`X`$ are the jumping loci of the cohomology of $`X`$ with coefficients in rank $`1`$ local systems: $`V_d(X)=\{𝐭(^{})^{b_1(X)}dim_{}H^1(X,_𝐭)d\}.`$ If $`X`$ is the complement of a normal-crossing divisor in a compact Kähler manifold with vanishing first homology, then $`V_d(X)`$ is a finite union of torsion-translated subtori of the character torus, see . If $`𝒜`$ is an arrangement of hyperplanes in $`^{\mathrm{}}`$, with complement $`X=X(𝒜)`$, then the irreducible components of $`V_d(𝒜):=V_d(X)`$ which contain the origin can be determined combinatorially, from the intersection lattice of $`𝒜`$. This follows from the fact that the tangent cone to $`V_d(𝒜)`$ at $`\mathrm{𝟏}`$ coincides with the resonance variety, $`_d(𝒜)`$, of the Orlik-Solomon algebra (see for a proof, and , , for other proofs and generalizations). The variety $`_d(𝒜)`$ in turn admits explicit combinatorial descriptions, see , .
It was first noted in that there exists a hyperplane arrangement for which $`V_2`$ contains translated tori. These translated tori are isolated torsion points in $`V_2`$, lying at the intersection of several components of $`V_1`$ which do pass through the origin (see Example 4.4 in and Example 3.2 below). Thus, the question arose whether the characteristic varieties of a complex hyperplane arrangement may have positive-dimensional translated components, see , Problem 5.1. In this note, we answer that question, as follows.
###### Theorem
There exist arrangements of complex hyperplanes for which the top characteristic variety, $`V_1`$, contains positive-dimensional irreducible components which do not pass through the origin.
The simplest such arrangement is the “deleted $`\mathrm{B}_3`$” arrangement, $`𝒟`$, discussed in Example 4.1. It is an arrangement of $`8`$ planes in $`^3`$, with defining forms $`xz`$, $`yz`$, $`x`$, $`y`$, $`xy+z`$, $`z`$, $`xyz`$, $`xy`$. The arrangement $`𝒟`$ is fiber-type, with exponents $`\{1,3,4\}`$. The variety $`V_1(𝒟)`$ has a component parametrized by $`\{(t,t^1,t^1,t,t^2,1,t^2,1)t^{}\}`$. This is a $`1`$-dimensional torus, translated by a second root of $`\mathrm{𝟏}`$. Figure 1 depicts the real part of a generic $`2`$-dimensional section of $`𝒟`$ (obtained by setting $`z=2x+3y+1`$), together with the local system corresponding to the point $`t^{}`$.
The deleted $`\mathrm{B}_3`$ arrangement can also be used to answer Conjecture 4.4 from , and Problems 5.2 and 5.3 from .
As noted in , (see also ), all the components of $`V_1`$ passing through the origin must have dimension at least $`2`$. On the other hand, all the positive-dimensional translated components that we find correspond to $`𝒟`$ sub-arrangements, and so have dimension $`1`$. We do not know whether translated components can have dimension greater than $`1`$, but we exhibit an arrangement where $`V_1`$ has $`0`$-dimensional components (Example 5.1).
The translated components in the characteristic varieties of an arrangement $`𝒜`$ are not detected by the tangent cone at the origin, and thus contain information which is not available from the Orlik-Solomon algebra of $`𝒜`$, at least not directly. We illustrate this phenomenon in Example 5.3, where we find a pair of arrangements for which the resonance varieties are (abstractly) isomorphic, but the characteristic varieties have a different number of components. These two arrangements have non-isomorphic lattices, though. Thus, it is still an open question whether the translated components of $`V_d(𝒜)`$ are combinatorially determined.
One of the main motivations for the study of characteristic varieties of a space $`X`$ is the very precise information they give about the homology of finite abelian covers of $`X`$, see , . From that point of view, the existence of translated components in $`V_1`$ has immediate repercussions on the Betti numbers of some finite covers of $`X`$ (those corresponding to torsion characters belonging to that component). But it also affects the torsion coefficients of some abelian covers of $`X`$, and the number of certain metabelian covers of $`X`$. These aspects are pursued in joint work with D. Matei, . The starting point of that paper was the discovery of $`2`$-torsion in the homology of certain $`3`$-fold covers of $`X(𝒟)`$. We were led to the translated component in $`V_1(𝒟)`$ by an effort to explain that unexpected torsion.
## 2 Characteristic varieties and hyperplane arrangements
We start by reviewing methods for computing the fundamental group, the characteristic varieties, and the resonance varieties of a complex hyperplane arrangement.
### 2.1 Characteristic varieties
Let $`X`$ be a space having the homotopy type of a connected, finite CW-complex. For simplicity, we will assume throughout that $`H=H_1(X,)`$ is torsion free. Set $`n=b_1(X)`$, and fix a basis $`\{t_1,\mathrm{},t_n\}`$ for $`H^n`$. Let $`G=\pi _1(X)`$ be the fundamental group, and $`𝑎𝑏:GH`$ the abelianization homomorphism.
Let $`^{}`$ be the multiplicative group of units in $``$, and let $`𝐻𝑜𝑚(G,^{})`$ be the group of characters of $`G`$. Notice that $`𝐻𝑜𝑚(G,^{})`$ is isomorphic to the affine algebraic group $`𝐻𝑜𝑚(H,^{})(^{})^n`$, with coordinate ring $`H[t_1^{\pm 1},\mathrm{},t_n^{\pm 1}]`$. For each integer $`d0`$, set
$$V_d(X)=\{𝐭=(t_1,\mathrm{},t_n)\in (^{})^n\mid dim_{}H^1(G,_𝐭)\ge d\},$$
where $`_𝐭`$ is the $`G`$-module $``$ given by the representation .
Then $`V_d(X)`$ is an algebraic subvariety of the complex $`n`$-torus, called the $`d`$-th characteristic variety of $`X`$. The characteristic varieties form a descending tower, $`(^{})^n=V_0V_1\mathrm{}V_{n1}V_n`$, which depends only on the isomorphism type of $`G=\pi _1(X)`$, up to a monomial change of basis in $`(^{})^n`$, see .
As shown in , the characteristic varieties of $`X`$ may be interpreted as the determinantal varieties of the Alexander matrix of the group $`G=\pi _1(X)`$. Given a presentation $`G=\langle x_1,\mathrm{},x_m\mid r_1,\mathrm{},r_s\rangle `$, the Alexander matrix is the $`s\times m`$ matrix $`A=\left(\frac{\partial r_i}{\partial x_j}\right)^{𝑎𝑏}`$, with entries in $`ℤ[t_1^{\pm 1},\mathrm{},t_n^{\pm 1}]`$, obtained by abelianizing the Jacobian of Fox derivatives of the relations. Let $`A(𝐭)`$ be the evaluation of $`A`$ at $`𝐭\in (^{})^n`$. For $`0\le d<n`$, we have
$$V_d(X)=\{𝐭\in (^{})^n\mid 𝑟𝑎𝑛𝑘A(𝐭)<m-d\}.$$
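For illustration only (a toy case, not taken from the text): for the arrangement of the two lines $`z=0`$, $`w=0`$ in $`^2`$ one has $`G=\langle x_1,x_2\mid [x_1,x_2]\rangle ^2`$, and the Fox-calculus recipe above fits in a few lines of Python/SymPy; the rank of $`A(𝐭)`$ drops only at $`𝐭=(1,1)`$, so $`V_1=\{\mathrm{𝟏}\}`$ for this arrangement. The helper name `fox` below is of course ad hoc.

```python
import sympy as sp

def fox(word, j, t):
    """Abelianized Fox derivative of a word (list of (generator, +-1) letters) w.r.t. x_j."""
    prefix, total = sp.Integer(1), sp.Integer(0)
    for i, s in word:
        if s == 1:
            if i == j:
                total += prefix
            prefix *= t[i]
        else:
            prefix /= t[i]
            if i == j:
                total -= prefix
    return sp.expand(total)

t = sp.symbols('t1:3')
r = [(0, 1), (1, 1), (0, -1), (1, -1)]          # the single relator [x1, x2]
A = sp.Matrix([[fox(r, j, t) for j in range(2)]])
print(A)                                         # Matrix([[1 - t2, t1 - 1]])
```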
Remarkably, the existence of certain analytic or geometric structures on a space puts strong qualitative restrictions on the nature of its characteristic varieties. There are several results along these lines, due to Green, Lazarfeld, Simpson, and Arapura. The result we need is the following:
###### Theorem (Arapura )
Let $`X`$ be the complement of a normal-crossing divisor in a compact Kähler manifold with vanishing first homology. Then each characteristic variety $`V_d(X)`$ is a finite union of torsion-translated subtori of the algebraic torus $`(^{})^{b_1(X)}`$.
### 2.2 Fundamental groups of arrangements
Let $`𝒜=\{H_1,\mathrm{},H_n\}`$ be an arrangement of (affine) hyperplanes in $`^{\mathrm{}}`$, $`\mathrm{}2`$, with complement $`X(𝒜)=^{\mathrm{}}_{i=1}^nH_i`$. We review the procedure for finding the braid monodromy presentation of the fundamental group of the complement, $`G(𝒜)=\pi _1(X(𝒜))`$. This presentation is equivalent to the Randell-Arvola presentation (see ), and the $`2`$-complex modelled on it is homotopy-equivalent to $`X=X(𝒜)`$ (see ). Since we are only interested in $`G=G(𝒜)`$, the well-known Lefschetz-type theorem of Hamm and Lê allows us to assume that $`\mathrm{}=2`$, by replacing $`𝒜`$ with a generic $`2`$-dimensional slice, if necessary.
Let $`v_1,\mathrm{},v_s`$ be the intersection points of the lines of $`𝒜`$. The combinatorics of the arrangement is encoded in its intersection poset, $`L(𝒜)=\{L_1(𝒜),L_2(𝒜)\}`$, where $`L_1=𝐧:=\{1,\mathrm{},n\}`$ and $`L_2=\{I_1,\mathrm{},I_s\}`$, with $`I_k=\{i𝐧H_iv_k\mathrm{}\}`$. Choosing a generic linear projection $`p:^2`$, and a basepoint $`y_0`$ such that $`𝑅𝑒(y_0)>𝑅𝑒(p(v_1))>\mathrm{}>𝑅𝑒(p(v_s))`$ gives orderings of the lines and vertices, which we may assume coincide with the orderings specified above. Choosing also a path in $``$, starting at $`y_0`$, and passing successively through $`p(v_1),\mathrm{},p(v_s)`$ gives a “braided wiring diagram,” $`𝒲(𝒜)=\{I_1,\beta _1,I_2,\mathrm{},\beta _{s1},I_s\}`$, where $`\beta _k`$ are certain braids in the Artin braid group $`B_n`$.
Let $`\{A_{i,j}\}_{1i<jn}`$ be the usual generating set for the pure braid group $`P_n`$, as specified in . More generally, for $`I𝐧`$, let $`A_IP_n`$ be the full twist on the strands indexed by $`I`$. The braid monodromy presentation of $`G=\pi _1(X)`$ is given by:
$$G=x_1,\mathrm{},x_n\alpha _k(x_i)=x_i\mathrm{for}iI_k\{\mathrm{max}I_k\}\mathrm{and}k𝐬,$$
(1)
where each $`\alpha _k`$ is a pure braid of the form $`A_{I_k}^{\delta _k}=\delta _k^1A_{I_k}\delta _k`$, acting on $`𝔽_n=x_1,\mathrm{},x_n`$ by the Artin representation $`P_n𝐴𝑢𝑡(F_n)`$. The conjugating braids $`\delta _k`$ may be obtained from $`𝒲`$, as follows.
In the case where $`𝒜`$ is the complexification of a real arrangement, $`𝒲`$ may be realized as a (planar) wiring diagram (with all $`\beta _k=1`$), in the obvious way. Each vertex set $`I_k𝒲`$ gives rise to a partition $`𝐧=I_k^{}I_kI_k^{\prime \prime }`$ into lower, middle, and upper wires. Set $`J_k=\{iI_k^{\prime \prime }\mathrm{min}I_k<i<\mathrm{max}I_k\}`$. Then $`\delta _k`$ is the subword of $`A_𝐧=_{i=2}^n_{j=1}^{i1}A_{j,i}`$, given by $`\delta _k=_{iI_k}_{jJ_k}A_{j,i}`$, see (and also ). In the general case, the braids $`\beta _1,\mathrm{},\beta _{k1}`$ must also be taken into account, see for details.
### 2.3 Characteristic and resonance varieties of arrangements
For an arrangement $`𝒜`$, with complement $`X=X(𝒜)`$, let $`V_d(𝒜):=V_d(X)`$. In equations, $`V_d(𝒜)=\{𝐭(^{})^n𝑟𝑎𝑛𝑘A(𝐭)<nd\}`$, where $`A`$ is the Alexander matrix corresponding to the presentation (1) of $`G=\pi _1(X)`$. By Arapura’s Theorem, $`V_d(𝒜)`$ is a finite union of torsion-translated tori in $`(^{})^n`$. Denote by $`\stackrel{ˇ}{V}_d(𝒜)`$ the union of those tori that pass through $`\mathrm{𝟏}`$, and by $`𝒱_d(𝒜)`$ the tangent cone of $`\stackrel{ˇ}{V}_d(𝒜)`$ at $`\mathrm{𝟏}`$. Clearly, $`𝒱_d(𝒜)`$ is a central arrangement of subspaces in $`^n`$. The exponential map, $`\mathrm{exp}:\mathrm{T}_\mathrm{𝟏}((^{})^n)=^n(^{})^n`$, $`\lambda _ie^{2\pi i\lambda _i}=t_i`$, takes each subspace in $`𝒱_d(𝒜)`$ to the corresponding subtorus in $`\stackrel{ˇ}{V}_d(𝒜)`$. In equations, $`𝒱_d(𝒜)=\{\lambda ^n𝑟𝑎𝑛𝑘A^{(1)}(\lambda )<nd\}`$, where $`A^{(1)}`$ is the linearized Alexander matrix of $`G`$, see (and also ). The variety $`𝒱_d(𝒜)`$ (and thus, $`\stackrel{ˇ}{V}_d(𝒜)`$, too) admits a completely combinatorial description, as follows.
The $`d^{\mathrm{th}}`$ resonance variety of a space $`X`$ is the set $`_d(X)`$ of cohomology classes $`\lambda \in H^1(X,)`$ for which there is a subspace $`W\subset H^1(X,)`$, of dimension $`d+1`$, such that $`\lambda \cup W=0`$ (see ). In other words,
$$_d(X)=\{\lambda \mid dimH^1(H^{}(X,),\lambda )\ge d\}.$$
The resonance varieties of an arrangement, $`_d(𝒜):=_d(X(𝒜))`$, were introduced and studied in . It turns out that $`𝒱_d(𝒜)=_d(𝒜)`$, see , for two different proofs, and , for recent generalizations.
As seen above, the top resonance variety is the union of a subspace arrangement: $`_1(𝒜)=C_1\cup \mathrm{}\cup C_r`$. It is also known that $`dimC_i\ge 2`$, $`C_i\cap C_j=\{\mathrm{𝟎}\}`$ for $`i\ne j`$, and $`_d(𝒜)=\{\mathrm{𝟎}\}\cup \bigcup _{dimC_i\ge d+1}C_i`$, see . For each $`I\in L_2(𝒜)`$ with $`\left|I\right|\ge 3`$, there is a local component, $`C_I=\{\lambda \mid \sum _{i\in I}\lambda _i=0\mathrm{and}\lambda _i=0\mathrm{for}i\notin I\}`$. Note that $`dimC_I=\left|I\right|-1`$, and thus $`C_I\subset _{\left|I\right|-2}(𝒜)`$.
The non-local components also admit a description purely in terms of $`L(𝒜)`$, see , . A partition $`𝖯=(𝗉_1|\mathrm{}|𝗉_q)`$ of $`𝐧`$ is called neighborly if, for all $`I\in L_2(𝒜)`$, the following holds: $`\left|𝗉_j\cap I\right|\ge \left|I\right|-1\Rightarrow I\subseteq 𝗉_j.`$ To a neighborly partition $`𝖯`$, there corresponds a subspace
$$C_𝖯=\{\lambda \mid \underset{i}{\sum }\lambda _i=0\}\cap \underset{I}{\bigcap }\{\lambda \mid \underset{i\in I}{\sum }\lambda _i=0\},$$
where $`I`$ ranges over all vertex sets not contained in a single block of $`𝖯`$. Results of imply that, if $`dimC_𝖯2`$, then $`C_𝖯`$ is a component of $`_1(𝒜)`$. All the components of $`_1(𝒜)`$ arise in this fashion from neighborly partitions of sub-arrangements of $`𝒜`$.
This completes the combinatorial description of $`𝒱_d(𝒜)=_d(𝒜)`$, and thus, that of $`\stackrel{ˇ}{V}_d(𝒜)`$.
### 2.4 Decones and linearly fibered extensions
We conclude this section with two constructions which simplify in many instances the computation of the characteristic varieties of an arrangement.
The first construction associates to a central arrangement $`𝒜=\{H_1,\mathrm{},H_n\}`$ in $`^{\mathrm{}}`$, an affine arrangement, $`𝒜^{}`$, of $`n1`$ hyperplanes in $`^\mathrm{}1`$, called a decone of $`𝒜`$. Let $`Q`$ be a defining polynomial for $`𝒜`$. Choose coordinates $`(z_1,\mathrm{},z_{\mathrm{}})`$ in $`^{\mathrm{}}`$ so that $`H_n=\mathrm{ker}(z_{\mathrm{}})`$. Then, $`Q^{}=Q(z_1,\mathrm{},z_\mathrm{}1,1)`$ is a defining polynomial for $`𝒜^{}`$, and $`X(𝒜)X(𝒜^{})\times ^{}`$, see . It follows that:
$$V_d(𝒜)=\{𝐭\in (^{})^n\mid (t_1,\mathrm{},t_{n-1})\in V_d(𝒜^{})\mathrm{and}t_1\mathrm{}t_n=1\},$$
and so the computation of $`V_d(𝒜)`$ reduces to that of $`V_d(𝒜^{})`$, see .
The second construction associates to an affine arrangement, $`𝒜`$, in $`^2`$, a linearly fibered arrangement, $`\widehat{𝒜}`$, also in $`^2`$, called a big arrangement associated to $`𝒜`$. The construction depends on the choice of a linear projection $`\overline{p}:^2`$, for which no line of $`𝒜`$ coincides with $`\overline{p}^1(\mathrm{point})`$. Let $`H_1,\mathrm{},H_n`$ be the lines of $`𝒜`$, let $`v_1,\mathrm{},v_s`$ be their intersection points, and let $`\{w_1,\mathrm{},w_r\}=\overline{p}(\{v_1,\mathrm{},v_s\})`$. Then $`\widehat{𝒜}=𝒜\{H_{n+1},\mathrm{},H_{n+r}\}`$, where $`H_{n+j}=\overline{p}^1(w_j)`$. The restriction $`\overline{p}:\widehat{X}\{w_1,\mathrm{},w_r\}`$ is a (linear) fibration, with fiber $`\{n\mathrm{points}\}`$. The monodromy generators, $`\overline{\alpha }_1,\mathrm{},\overline{\alpha }_r`$ may be found using a slight modification of the algorithm from (see also ). Deform $`\overline{p}`$ to a generic projection $`p:^2`$, and let $`\alpha _1,\mathrm{},\alpha _s`$ be the corresponding braid monodromy generators. Then, $`\overline{\alpha }_j=_{p(v_k)=w_j}\alpha _k`$. The fundamental group of $`\widehat{𝒜}`$ is the semidirect product $`\widehat{G}=𝔽_n_{\overline{\alpha }}𝔽_r`$, with presentation
$$\widehat{G}=x_1,\mathrm{},x_n,y_1,\mathrm{},y_rx_i^{y_j}=\overline{\alpha }_j(x_i).$$
(2)
Given an arrangement $``$ so that $`=\widehat{𝒜}`$, the presentation (2) of $`\pi _1(X(\widehat{𝒜}))=\widehat{G}`$ is often simpler to use than the presentation (1), obtained from the general braid monodromy algorithm applied directly to $``$. In particular, if we pick $`\{t_1,\mathrm{},t_{n+r}\}=\{x_1,\mathrm{},x_n,y_1,\mathrm{},y_r\}^{𝑎𝑏}`$ as basis for $`H_1(\widehat{G})^{n+r}`$, the Alexander matrix of $`\widehat{G}`$ has the block form
$$A=\left(\begin{array}{cccc}𝑖𝑑t_{n+1}\mathrm{\Theta }(\overline{\alpha }_1)& d_1& \mathrm{}& 0\\ \mathrm{}& & \mathrm{}& \\ 𝑖𝑑t_{n+r}\mathrm{\Theta }(\overline{\alpha }_r)& 0& \mathrm{}& d_1\end{array}\right),$$
(3)
where $`\mathrm{\Theta }:P_n𝐺𝐿(n,[t_1^{\pm 1},\mathrm{},t_n^{\pm 1}])`$ is the Gassner representation, and $`d_1=\left(t_11\mathrm{}t_n1\right)^{}`$, see \[7, §3.9\].
## 3 Warm-up Examples
We continue with some relatively simple examples of hyperplane arrangements and their characteristic varieties. These examples, which illustrate the above discussion, will be useful in understanding subsequent, more complicated examples.
###### Example 3.1
Let $`𝒜_3`$ be the braid arrangement in $`^3`$, with defining polynomial $`Q=xyz(xy)(xz)(yz)`$. The decone $`𝒜_3^{}`$, obtained by setting $`z=1`$, is depicted in Figure 2(a). Note that $`𝒜_3^{}=\widehat{𝒜}`$, where $`𝒜`$ consists of the lines marked $`1,2,3`$. Thus, $`𝒜_3`$ is fiber-type, with exponents $`\{1,2,3\}`$, and $`G^{}=𝔽_3_{\overline{\alpha }}𝔽_2`$, where $`\overline{\alpha }_1=A_{12}`$, $`\overline{\alpha }_2=A_{13}`$ (of course, $`G=P_4G^{}\times `$).
The resonance and characteristic varieties of $`𝒜_3`$ were computed in , , . The variety $`V_1(𝒜_3)(^{})^6`$ has $`4`$ local components, corresponding to the triple points $`124,135,236,456`$, and one essential component, corresponding to the neighborly partition $`(16|25|34)`$:
$$\mathrm{\Pi }=\{(s,t,(st)^1,(st)^1,t,s)s,t^{}\},$$
see Figure 2(b). The components of $`V_1`$ meet only at $`\mathrm{𝟏}`$. Moreover, $`V_2=\mathrm{}=V_6=\{\mathrm{𝟏}\}`$. The intersection poset of the characteristic varieties of $`𝒜_3`$ is depicted in Figure 2(c). The poset is ranked by dimension (indicated by relative height), and filtered according to depth in the characteristic tower (indicated by color: $`V_1`$ in black, $`V_2`$ in white).
###### Example 3.2
A realization of the non-Fano plane is the arrangement $`𝒩`$, with defining polynomial $`Q=xyz(xy)(xz)(yz)(x+yz)`$. A decone $`𝒩^{}`$ is depicted in Figure 3(a).
The characteristic varieties of $`𝒩`$ were computed in (see also ). The variety $`V_1(^{})^7`$ has $`6`$ local components, corresponding to triple points, and $`3`$ non-local components, $`\mathrm{\Pi }_1=\mathrm{\Pi }(25|36|47)`$, $`\mathrm{\Pi }_2=\mathrm{\Pi }(17|26|35)`$, $`\mathrm{\Pi }_3=\mathrm{\Pi }(14|23|56)`$, corresponding to braid sub-arrangements. The local components meet only at $`\mathrm{𝟏}`$, but the non-local components also meet at the point $`\rho =(1,1,1,1,1,1,1)`$. The variety $`V_2=\{\mathrm{𝟏},\rho \}`$ is a discrete algebraic subgroup of $`(^{})^7`$, isomorphic to $`_2`$. The characteristic intersection poset of $`𝒩`$ is depicted in Figure 3(b).
###### Example 3.3
Let $`_3`$ be the reflection arrangement of type $`\mathrm{B}_3`$, with defining polynomial $`Q=xyz(xy)(xz)(yz)(xyz)(xy+z)(x+yz)`$. A decone is shown in Figure 4(a). Note that $`_3^{}=\widehat{𝒜}`$, where $`𝒜`$ consists of the lines marked $`1,\mathrm{},5`$. Thus, $`_3`$ is fiber-type, with exponents $`\{1,3,5\}`$, and $`G^{}=𝔽_5_{\overline{\alpha }}𝔽_3`$, where $`\overline{\alpha }_1=A_{234}`$, $`\overline{\alpha }_2=A_{14}^{A_{24}A_{34}}A_{25}`$, $`\overline{\alpha }_3=A_{35}^{A_{23}A_{25}}`$.
A computation with Fox derivatives, using the techniques from §2, shows that the characteristic variety $`V_1(^{})^9`$ has $`19`$ components:
* $`7`$ local components, corresponding to $`4`$ triple points and $`3`$ quadruple points.
* $`11`$ components corresponding to braid sub-arrangements.
* $`1`$ essential, $`2`$-dimensional component, corresponding to the neighborly partition $`(156|248|379)`$, identified in , Example 4.6:
$$\mathrm{\Gamma }=\{(t,s,(st)^2,s,t,t^2,(st)^1,s^2,(st)^1)s,t^{}\}.$$
Three triples of braid components meet $`\mathrm{\Gamma }`$ on $`V_2`$, at the points
$$\rho _1=(1,1,1,1,1,1,1,1,1),\rho _2=(1,1,1,1,1,1,1,1,1),$$
and $`\rho _1\rho _2`$. The variety $`V_2`$ consists of three $`3`$-dimensional tori (corresponding to quadruple points), together with the discrete subgroup $`_2^2=\{1,\rho _1,\rho _2,\rho _1\rho _2\}`$. The characteristic intersection poset of $`_3`$ is depicted in Figure 4(b).
## 4 Positive-dimensional translated tori
We now come to our basic example of a complex hyperplane arrangement whose top characteristic variety contains a positive-dimensional translated component.
###### Example 4.1
Let $`𝒟`$ be the arrangement obtained from the $`\mathrm{B}_3`$ reflection arrangement by deleting the plane $`x+yz=0`$. A defining polynomial for $`𝒟`$ is $`Q=xyz(xy)(xz)(yz)(xyz)(xy+z)`$. The decone $`𝒟^{}`$, obtained by setting $`z=1`$, is depicted in Figure 5(a). Note that $`𝒟^{}=\widehat{𝒜}`$, where $`𝒜`$ consists of the lines marked $`1,\mathrm{},4`$. Thus, $`𝒟`$ is fiber-type, with exponents $`\{1,3,4\}`$, and $`G^{}=𝔽_4_{\overline{\alpha }}𝔽_3`$, where $`\overline{\alpha }_1=A_{23}`$, $`\overline{\alpha }_2=A_{13}^{A_{23}}A_{24}`$, $`\overline{\alpha }_3=A_{14}^{A_{24}}`$.
The Alexander matrix of $`G^{}`$, given by (3), is row-equivalent to
$$A=\left(\begin{array}{ccccccc}1-t_5& 0& 0& 0& t_1-1& 0& 0\\ 0& t_5\left(t_3-1\right)& 1-t_2t_5& 0& t_3-1& 0& 0\\ 0& 1-t_5& t_2\left(1-t_5\right)& 0& t_2t_3-1& 0& 0\\ 0& 0& 0& 1-t_5& t_4-1& 0& 0\\ t_6\left(t_3-1\right)& \left(t_3-1\right)\left(t_1t_6-1\right)& t_2\left(1-t_1t_6\right)& 0& 0& t_3-1& 0\\ 1-t_6& t_1\left(t_3-1\right)\left(t_6-1\right)& t_1t_2\left(1-t_6\right)& 0& 0& t_1t_3-1& 0\\ 0& t_6\left(t_4-1\right)& 0& 1-t_2t_6& 0& t_4-1& 0\\ 0& 1-t_6& 0& t_2\left(1-t_6\right)& 0& t_2t_4-1& 0\\ t_7\left(t_4-1\right)& \left(t_4-1\right)\left(t_1t_7-1\right)& 0& t_2\left(1-t_1t_7\right)& 0& 0& t_4-1\\ 1-t_7& t_1\left(1-t_7\right)& 0& t_1t_2\left(1-t_7\right)& 0& 0& t_1t_2t_4-1\\ 0& 1-t_7& 0& 0& 0& 0& t_2-1\\ 0& 0& 1-t_7& \left(t_3-1\right)\left(1-t_7\right)& 0& 0& t_4\left(t_3-1\right)\end{array}\right).$$
Now recall that $`V_1(𝒟)=\{𝐭(^{})^8(t_1,\mathrm{},t_7)V_1(𝒟^{})\mathrm{and}t_1\mathrm{}t_8=1\}`$, where $`V_1(𝒟^{})`$ is the sub-variety of $`(^{})^7`$ defined by the ideal of $`6\times 6`$ minors of the matrix $`A`$. Computing the primary decomposition of that ideal reveals that the variety $`V_1(𝒟)`$ has $`13`$ components:
* $`7`$ local components, corresponding to $`6`$ triple points and one quadruple point.
* $`5`$ non-local components passing through $`\mathrm{𝟏}`$, corresponding to braid sub-arrangements: $`\mathrm{\Pi }_1=\mathrm{\Pi }(15|26|38)`$, $`\mathrm{\Pi }_2=\mathrm{\Pi }(28|36|45)`$, $`\mathrm{\Pi }_3=\mathrm{\Pi }(14|23|68)`$, $`\mathrm{\Pi }_4=\mathrm{\Pi }(16|27|48)`$, $`\mathrm{\Pi }_5=\mathrm{\Pi }(18|37|46)`$.
* $`1`$ essential component, which does not pass through $`\mathrm{𝟏}`$. This component is $`1`$-dimensional, and is parametrized by
$$C=\{(t,t^1,t^1,t,t^2,1,t^2,1)t^{}\}.$$
(Note that the translated torus $`C`$ is one of the two connected components of $`\mathrm{\Gamma }\{t_3=1\}`$.) The braid components of $`V_1(𝒟)`$ meet $`C`$ at the points
$$\begin{array}{cc}\rho _1\hfill & =\mathrm{\Pi }_1\mathrm{\Pi }_2\mathrm{\Pi }_3C=(1,1,1,1,1,1,1,1),\hfill \\ \rho _2\hfill & =\mathrm{\Pi }_3\mathrm{\Pi }_4\mathrm{\Pi }_5C=(1,1,1,1,1,1,1,1),\hfill \end{array}$$
both of which belong to $`V_2(𝒟)`$. The characteristic intersection poset of $`𝒟`$ is depicted in Figure 6.
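A quick numerical plausibility check on the matrix $`A`$ displayed above is possible without any symbolic machinery (a sketch only; the primary decomposition itself requires a computer algebra system): at a random character $`𝐭\in (^{})^7`$ the rank of $`A(𝐭)`$ should equal $`6`$, the generic value, and the rows of $`A(𝐭)`$ always annihilate the vector $`(t_i-1)`$, as they must.

```python
import numpy as np

def alexander_matrix(t1, t2, t3, t4, t5, t6, t7):
    # the 12 x 7 matrix A displayed above
    return np.array([
        [1-t5, 0, 0, 0, t1-1, 0, 0],
        [0, t5*(t3-1), 1-t2*t5, 0, t3-1, 0, 0],
        [0, 1-t5, t2*(1-t5), 0, t2*t3-1, 0, 0],
        [0, 0, 0, 1-t5, t4-1, 0, 0],
        [t6*(t3-1), (t3-1)*(t1*t6-1), t2*(1-t1*t6), 0, 0, t3-1, 0],
        [1-t6, t1*(t3-1)*(t6-1), t1*t2*(1-t6), 0, 0, t1*t3-1, 0],
        [0, t6*(t4-1), 0, 1-t2*t6, 0, t4-1, 0],
        [0, 1-t6, 0, t2*(1-t6), 0, t2*t4-1, 0],
        [t7*(t4-1), (t4-1)*(t1*t7-1), 0, t2*(1-t1*t7), 0, 0, t4-1],
        [1-t7, t1*(1-t7), 0, t1*t2*(1-t7), 0, 0, t1*t2*t4-1],
        [0, 1-t7, 0, 0, 0, 0, t2-1],
        [0, 0, 1-t7, (t3-1)*(1-t7), 0, 0, t4*(t3-1)],
    ], dtype=complex)

rng = np.random.default_rng(0)
t = np.exp(2j * np.pi * rng.random(7))     # a random (generic) character
A = alexander_matrix(*t)
print(np.linalg.matrix_rank(A))            # 6: generic rank, so t is not in V_1(D')
print(np.allclose(A @ (t - 1), 0))         # True: rows annihilate (t_i - 1)
```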
As noted in the Introduction, this example answers Problem 5.1 in . It also answers Problems 5.2 and 5.3 in . Indeed, let $`\lambda =(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})`$. Clearly, $`\lambda `$, and all its integral translates, do not belong to $`_1(𝒟)`$, because all components of $`_1(𝒟)`$ are non-essential. Hence, $`H^1(H^{}(X,),\lambda +N)=0`$, for all $`N^8`$. On the other hand, $`𝐭:=\mathrm{exp}(\lambda )=(i,i,i,i,1,1,1,1)`$ belongs to $`C`$, and thus $`dimH^1(X,_𝐭)=1`$.
Finally, this example also answers in the negative Conjecture 4.4 from , at least in its strong form. Indeed, there are infinitely many $`𝐭=\mathrm{exp}(\lambda )C`$ for which
$$1=dimH^1(X,_𝐭)\underset{N^8}{sup}dimH^1(H^{}(X,),\lambda +N)=0.$$
## 5 Further examples
In this section, we give a few more examples that illustrate the nature of translated components in the characteristic varieties of arrangements.
###### Example 5.1
Let $`𝒜=\mathrm{A}_2(10)`$ be the simplicial arrangement from the list in Grünbaum . A defining polynomial for $`𝒜`$ is $`Q=xyz(yx)(y+x)(2yz)(yxz)(yx+z)(y+x+z)(y+xz)`$. Figure 7 shows a decone $`𝒜^{}`$, together with the characteristic intersection poset of $`𝒜`$. The variety $`V_1(𝒜)`$ has $`33`$ components:
* $`10`$ local components, corresponding to $`7`$ triple and $`3`$ quadruple points.
* $`17`$ non-local components corresponding to braid sub-arrangements.
* $`1`$ non-local component, $`\mathrm{\Gamma }=\mathrm{\Gamma }(1510|248|379)`$, corresponding to a $`_3`$ sub-arrangement.
* $`3`$ components that do not pass through the origin, $`C_1=C(134568910)`$, $`C_2=C(234568910)`$, $`C_3=C(345678910)`$, corresponding to $`𝒟`$ sub-arrangements.
* $`2`$ isolated points of order $`6`$, $`\zeta =(\eta ^2,\eta ^2,\eta ,\eta ^2,\eta ^2,1,\eta ^2,\eta ,\eta ^2,\eta )`$ and $`\zeta ^1`$, where $`\eta =e^{\pi i/3}`$.
Note that all positive-dimensional components of $`V_1(𝒜)`$ are non-essential, whereas the two $`0`$-dimensional components are essential. The non-local components meet at $`7`$ isolated points of order $`2`$, belonging to $`V_2(𝒜)`$:
$$\begin{array}{ccc}\rho _1\hfill & =(1,1,1,1,1,1,1,1,1,1),\rho _2\hfill & =(1,1,1,1,1,1,1,1,1,1),\hfill \\ \rho _3\hfill & =(1,1,1,1,1,1,1,1,1,1),\rho _4\hfill & =(1,1,1,1,1,1,1,1,1,1),\hfill \end{array}$$
$`\rho _1\rho _2`$, $`\rho _3\rho _4`$, and $`\zeta ^3`$.
###### Example 5.2
Consider the arrangements $`_1`$ and $`_2`$, with defining polynomials $`Q_1=(xyz)Q`$ and $`Q_2=(xy2z)Q`$, where $`Q=xyz(xy)(yz)(xz)(x2z)(x3z)`$. This pair of arrangements was introduced by Falk in . Their decones and characteristic varieties are depicted in Figure 8.
Both arrangements are fiber-type, with exponents $`\{1,4,4\}`$. Thus, by the LCS formula of Falk and Randell (see ), the ranks, $`\varphi _k(G):=𝑟𝑎𝑛𝑘𝑔𝑟_k(G)`$, of the lower central series quotients of the two groups are the same. As noted in , though, the ranks, $`\theta _k(G):=𝑟𝑎𝑛𝑘𝑔𝑟_k(G/G^{\prime \prime })`$, of the Chen groups are different: $`\theta _k(G(_1))=\frac{1}{2}(k1)(k^2+3k+24)`$ and $`\theta _k(G(_2))=\frac{1}{2}(k1)(k^2+3k+22)`$, for $`k4`$. Moreover, as noted in , the resonance varieties of the two arrangements are not isomorphic, even as abstract varieties: $`_1(_1)`$ has $`12`$ components, whereas $`_1(_2)`$ has $`11`$ components. An even more pronounced difference shows up in the characteristic varieties: $`V_1(_1)`$ has a $`13^{\mathrm{th}}`$ component (corresponding to a sub-arrangement isomorphic to $`𝒟`$), which does not pass through the origin.
###### Example 5.3
Consider the arrangements $`𝒵_1`$ and $`𝒵_2`$, with defining polynomials $`Q_1=(xy2z)Q`$ and $`Q_2=(xy3z)Q`$, where $`Q=xyz(xy)(yz)(xz)(x2z)(x3z)(x4z)(x5z)(xyz)(xy4z)`$. This pair of arrangements was introduced by Ziegler . Their decones and characteristic varieties are depicted in Figures 9 and 10.
Both arrangements are fiber-type, with exponents $`\{1,6,6\}`$; thus, $`\varphi _k(G(𝒵_1))=\varphi _k(G(𝒵_2))`$. Even more, the ranks of the Chen groups are the same: $`\theta _1=13`$, $`\theta _2=30`$, $`\theta _3=140`$, and $`\theta _k=\frac{1}{24}(k1)(k^4+10k^3+47k^2+86k+696)`$, for $`k4`$. Moreover, $`_1(𝒵_1)_1(𝒵_2)`$ (as varieties), although one may show, by a rather long calculation of the respective polymatroids, that there is no linear isomorphism $`^{13}^{13}`$ taking $`_1(𝒵_1)`$ to $`_1(𝒵_2)`$.
On the other hand, the two groups can be distinguished numerically by their characteristic varieties: $`V_1(𝒵_1)`$ has $`32`$ components, whereas $`V_1(𝒵_2)`$ has $`31`$ components. Both varieties have $`11`$ local components (corresponding to $`9`$ triple points, $`1`$ quintuple point, and $`1`$ septuple point), and $`18`$ components corresponding to braid sub-arrangements. In addition, both varieties have components which do not pass through $`\mathrm{𝟏}`$, corresponding to $`𝒟`$ sub-arrangements: $`V_1(𝒵_1)`$ has $`3`$ such components, $`V_1(𝒵_2)`$ has only $`2`$.
## 6 Concluding remarks
We conclude with a few questions raised by the above examples.
Let $`𝒜`$ be an arrangement of $`n`$ complex hyperplanes, and let $`V_d(𝒜)(^{})^n`$ ($`1dn`$) be its characteristic varieties.
###### Question 6.1
Are the translated components of $`V_d(𝒜)`$ combinatorially determined?
This problem was posed in and , before the existence of translated components in $`V_1(𝒜)`$ was known. Recall that $`\stackrel{ˇ}{V}_d(𝒜)`$—the union of the components of $`V_d(𝒜)`$ passing through the identity of the torus $`(^{})^n`$is combinatorially determined. If $`\stackrel{ˇ}{V}_d(𝒜)V_d(𝒜)`$ (as in the examples from §§45), the question is whether $`V_d(𝒜)\stackrel{ˇ}{V}_d(𝒜)`$ is also determined by the intersection lattice of $`𝒜`$.
###### Question 6.2
What are the possible dimensions of the translated components of $`V_d(𝒜)`$?
The (positive-dimensional) components passing through the origin must have dimension at least $`2`$, and all dimensions between $`2`$ and $`n1`$ can be realized. On the other hand, at least in the examples we gave here, the components not passing through $`\mathrm{𝟏}`$ have dimension either $`0`$ or $`1`$. The question is whether $`V_d(𝒜)\stackrel{ˇ}{V}_d(𝒜)`$ can have higher-dimensional components.
###### Question 6.3
What are the possible orders of translation of the components of $`V_d(𝒜)`$?
In our examples, the components not passing through the origin are translated by characters of order $`2`$ or $`6`$. The question is whether other orders of translation can occur. Furthermore, one may ask (as a weak form of Question 6.1) whether the orders of translation are combinatorially determined. We know of a combinatorial upper bound on the lowest common multiple of these orders, but do not know when this bound is attained.
I wish to thank D. Cohen and D. Matei for useful discussions. The computations for this work were done primarily with Macaulay 2 () and Mathematica 4.0. Additional computations were done with GAP 4.1 (). |
no-problem/9912/quant-ph9912068.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Long-range electron transfer (ET) is a very actively studied area in chemistry, biology, and physics; both in biological and synthetic systems. Of special interest are systems with a bridging molecule between donor and acceptor. For example the primary step of charge separation in the bacterial photosynthesis takes place in such a system . But such systems are also interesting for synthesizing molecular wires . It is known that the electronic structure of the bridge component in donor-bridge-acceptor systems plays a critical role . When the bridge energy is much higher than the donor and acceptor energies, the bridge population is close to zero for all times and the bridge site just mediates the coupling between donor and acceptor. This mechanism is called superexchange and was originally proposed by Kramers to describe the exchange interaction between two paramagnetic atoms spatially separated by a nonmagnetic atom. In the opposite limit when donor and acceptor as well as bridge energies are closer than $`k_\mathrm{B}T`$, the bridge site is actually populated and the transfer is called sequential. The interplay between these two types of transfer has been investigated theoretically in various publications .
In the present work we compare two different approaches based on the reduced density matrix formalism. In the first model one pays attention to the fact that experiments in systems similar to the one discussed here show vibrational coherence . Therefore a vibrational substructure is introduced for each electronic level within a multi-level Redfield theory . Below we call this the vibronic model. In the second approach only electronic states are taken into account because it is assumed that the vibrational relaxation is much faster than the ET. This model is referred to as tight-binding model below. In this case only the relaxation between the electronic states remains. Such a kind of relaxation has been phenomenologically introduced for ET by Davis et al. and very recently derived in our group as a second order perturbation theory in the system-bath interaction similar to Redfield theory. The vibronic and the tight-binding model are described in the next section and compared in Section 3.
## 2 Theory
For the description of charge transfer and other dynamical processes in the system we introduce the Hamiltonian
$$\widehat{H}=\widehat{H}_\mathrm{S}+\widehat{H}_\mathrm{B}+\widehat{H}_{\mathrm{SB}},$$
(1)
where $`\widehat{H}_\mathrm{S}`$ denotes the relevant system, $`\widehat{H}_\mathrm{B}`$ the dissipative bath, and $`\widehat{H}_{\mathrm{SB}}`$ the interaction between the two. Before discussing the system part of the Hamiltonian in Sections 2.1 and 2.2, we describe the bath and the procedure how to obtain the equations of motion for the reduced density matrix, because this is the same for both models studied below. The bath is modeled by a distribution of harmonic oscillators and characterized by its spectral density $`J(\omega )`$. Starting with a density matrix of the full system, the reduced density matrix of the relevant (sub)system is obtained by tracing out the bath degrees of freedom . While doing so a second-order perturbation expansion in the system-bath coupling and the Markov approximation have been applied .
### 2.1 Vibronic model
The bridge ET system $`\mathrm{H}_2\mathrm{P}\mathrm{ZnP}\mathrm{Q}`$ with free-base porphyrin ($`\mathrm{H}_2\mathrm{P}`$) being the donor, zinc porphyrin ($`\mathrm{ZnP}`$) the bridge, and quinone ($`\mathrm{Q}`$) the acceptor is modeled by three diabatic electronic potentials, corresponding to the neutral excited electronic state $`|1=|\mathrm{H}_2\mathrm{P}^{}\mathrm{ZnP}\mathrm{Q}`$, and states with charge separation $`|2=|\mathrm{H}_2\mathrm{P}^+\mathrm{ZnP}^{}\mathrm{Q}`$, $`|3=|\mathrm{H}_2\mathrm{P}^+\mathrm{ZnP}\mathrm{Q}^{}`$ (see Fig. 1). Each of these electronic potentials has a vibrational substructure. The vibrational frequency is assumed to be 1500 cm<sup>-1</sup> as a typical frequency within carbon structures. The potentials are displaced along a common reaction coordinate which represents the solvent polarization . Following the reasoning of Marcus the free energy differences $`\mathrm{\Delta }G_{mn}`$ corresponding to the electron transfer from molecular block $`n`$ to $`m`$ ($`n=1`$, $`m=2,3`$) are estimated to be
$$\mathrm{\Delta }G_{mn}=E_m^{\mathrm{ox}}E_n^{\mathrm{red}}E^{\mathrm{ex}}\frac{e^2}{4\pi ϵ_0ϵ_\mathrm{s}}\frac{1}{r_{mn}}+\mathrm{\Delta }G_{mn}(ϵ_\mathrm{s})$$
(2)
with the term $`\mathrm{\Delta }G_{mn}(ϵ_\mathrm{s})`$ correcting for the fact that the redox energies $`E_m^{\mathrm{ox}}`$ and $`E_n^{\mathrm{red}}`$ are measured in a reference solvent with dielectric constant $`ϵ_\mathrm{s}^{\mathrm{ref}}`$:
$$\mathrm{\Delta }G_{mn}(ϵ_\mathrm{s})=\frac{e^2}{4\pi ϵ_0}\left(\frac{1}{2r_m}+\frac{1}{2r_n}\right)\left(\frac{1}{ϵ_\mathrm{s}}\frac{1}{ϵ_\mathrm{s}^{\mathrm{ref}}}\right).$$
(3)
The excitation energy of the donor $`\mathrm{H}_2\mathrm{P}\mathrm{H}_2\mathrm{P}^{}`$ is denoted by $`E^{\mathrm{ex}}`$. $`r_n`$ denotes the radius of either donor (1), bridge (2), or acceptor (3) and $`r_{mn}`$ the distance between two of them. They are estimated to be $`r_1=r_2=5.5`$ $`\mathrm{\AA }`$, $`r_3=3.2`$ $`\mathrm{\AA }`$, $`r_{12}=12.5`$ $`\mathrm{\AA }`$, and $`r_{13}=14.4`$ $`\mathrm{\AA }`$ .
Also sketched in Fig. 1 are the reorganization energies $`\lambda _{mn}=\lambda _{mn}^\mathrm{i}+\lambda _{mn}^\mathrm{s}`$. These consist of an internal reorganization energy $`\lambda _{mn}^\mathrm{i}`$, which is estimated to be 0.3 eV , and a solvent reorganization energy
$$\lambda _{mn}^\mathrm{s}=\frac{e^2}{4\pi ϵ_0}\left(\frac{1}{2r_m}+\frac{1}{2r_n}\frac{1}{r_{mn}}\right)\left(\frac{1}{ϵ_{\mathrm{}}}\frac{1}{ϵ_\mathrm{s}}\right).$$
(4)
Further parameters are the electronic couplings between the potentials. First it should be underlined that $`V_{13}=0`$ because of the spatial separation of $`\mathrm{H}_2\mathrm{P}`$ and $`\mathrm{Q}`$. So there is no direct transfer between donor and acceptor. The other couplings are $`V_{12}=65`$ $`\mathrm{meV}`$ and $`V_{23}=2.2`$ $`\mathrm{meV}`$ . The damping is described by the spectral density $`J(\omega )`$ of the bath. This is only needed at the frequency of the vibrational transition and is determined $`J(\omega _{\mathrm{vib}})/\omega _{\mathrm{vib}}=0.372`$ by fitting the ET rate for the solvent methyltetrahydrofuran (MTHF). In the vibronic model the spectral density is taken as a constant with respect to $`ϵ_\mathrm{s}`$.
Next the calculation of the dynamics is sketched. Starting from the Liouville equation, performing the abovementioned approximations the equation of motion for the reduced density matrix $`\rho _{\mu \nu }`$ can be obtained
$$\frac{}{t}\rho _{\mu \nu }=\frac{i}{\mathrm{}}(E_\mu E_\nu )\rho _{\mu \nu }i\underset{\kappa }{}\{v_{\nu \kappa }\rho _{\mu \kappa }v_{\kappa \mu }\rho _{\kappa \nu }\}+R_{\mu \nu }.$$
(5)
The index $`\mu `$ combines the electronic quantum number $`m`$ and the vibrational quantum number $`M`$ of the diabatic levels $`E_\mu `$. $`v_{\mu \nu }=V_{mn}F_{\mathrm{FC}}(m,M,n,N)`$ comprises Franck-Condon factors $`F_{\mathrm{FC}}`$ and the electronic matrix elements $`V_{mn}`$. The third term describes the interaction between the relevant system and the heat bath. Equation (5) is solved numerically with the initial condition that only the donor state is occupied in the beginning. The population of the acceptor state
$$P_3(t)=\underset{M}{}\rho _{3M3M}(t)$$
(6)
and the ET rate
$$k_{\mathrm{ET}}=\frac{P_3(\mathrm{})}{\underset{0}{\overset{\mathrm{}}{}}𝑑t(1P_3(t))}$$
(7)
are calculated by tracing out the vibrational modes.
### 2.2 Tight-binding model
The reasoning for the following system Hamiltonian is the assumption that the vibrational excitations are relaxed on a much shorter time scale than the ET time scale. Therefore only electronic states without any vibrational substructure are taken into account (see Fig. 2). As a consequence the relaxation during the ET process has to be described in a different manner than in the previous subsection. If now relaxation takes place, it takes place between the electronic states and not between vibrational states within one electronic state potential surface. A similar model has been introduced phenomenologically by Davis et al. who solved it in the steady state limit.
The energies of the electronic states $`E_m`$ are chosen to be the ground states of the harmonic potentials given in the previous section. So they vary with the dielectric constant. The electronic coupling is fixed in two different ways. In the naive way they are chosen to be the same as in the vibronic model. But because in the tight-binding model there is no reaction coordinate, in a second version we scale the electronic couplings with the Franck-Condon overlap elements between the vibrational ground states of each pair of electronic surfaces
$$v_{mn}=V_{mn}F_{\mathrm{FC}}(m,0,n,0)=V_{mn}\mathrm{exp}\frac{|\lambda _{mn}|}{2\mathrm{}\omega _{\mathrm{vib}}}.$$
(8)
In the vibronic model not only the free energy differences $`\mathrm{\Delta }G`$ but also the reorganization energies $`\lambda `$ scale with the dielectric constant $`ϵ_\mathrm{s}`$. Due to this scaling of $`\lambda `$ the system-bath interaction is scaled with the dielectric constant $`ϵ_\mathrm{s}`$. In the high temperature limit the reorganization energy is given by
$$\lambda =\mathrm{}_0^{\mathrm{}}𝑑\omega \frac{J(\omega )}{\omega }.$$
(9)
This relation is taken as motivation to scale the tight-binding spectral density with $`ϵ_\mathrm{s}`$ like the reorganization energies $`\lambda `$ in the vibronic model. In the present calculations $`\mathrm{\Gamma }_{21}=\mathrm{\Gamma }_{23}=\mathrm{\Gamma }`$ is assumed. The absolute value of the damping rate $`\mathrm{\Gamma }`$ between the electronic states (see Fig. 2) is then determined by fitting the ET rate for the solvent MTHF to be $`\mathrm{\Gamma }=2.8\times 10^{11}`$ s<sup>-1</sup>.
The advantage of the tight-binding model is the possibility to determine the transfer rate $`k_{\mathrm{ET}}`$ and the final population of the acceptor state either numerically or analytically. We employ the rotating wave approximation because we are only interested in the reaction rates here. For the analytic calculation three extra assumptions have to be made: small bridge population, the kinetic limit $`t\mathrm{\Gamma }^1`$, and the absence of initial coherence in the system. But for all situations described in this paper the differences between analytic and numerical results without the extra assumptions are negligible. The analytic expressions are
$$k_{\mathrm{ET}}=g_{23}+\frac{g_{23}(g_{12}g_{32})}{g_{21}+g_{23}}$$
(10)
and
$$P_3(\mathrm{})=\frac{g_{12}g_{23}}{g_{21}+g_{23}}(k_{\mathrm{ET}})^1,$$
(11)
which contain both, dissipative and coherent contributions
$$g_{mn}=d_{mn}+\frac{v_{mn}^2\underset{k}{}(d_{mk}+d_{kn})}{\mathrm{}^2\left\{2\omega _{mn}^2+\frac{1}{2}\left[\underset{k}{}(d_{mk}+d_{kn})\right]^2\right\}}.$$
(12)
Herein the $`d_{mn}`$ are just abbreviations for $`\mathrm{\Gamma }_{mn}|n(\omega _{mn})|`$ and $`n(\omega _{mn})`$ denotes the Bose distribution at frequency $`\omega _{mn}=(E_mE_n)/\mathrm{}`$. For details and comparison with the Grover-Silbey theory as well as the Haken-Strobl-Reineker theory we refer the reader to Ref. .
## 3 Comparison
In Fig. 3 it is shown how the minima of the potential curves change with varying the solvent due to the changes in Eqs. (2) to (4). The solvents are listed in Table 1 together with their parameters and the results for the ET rates in both models. For larger $`ϵ_\mathrm{s}`$ the coordinates of the potential minima of bridge and acceptor increase while their energies decrease with respect to the energy of the donor. The energy difference between donor and bridge decreases with increasing $`ϵ_\mathrm{s}`$. This makes a charge transfer more probable. For small $`ϵ_\mathrm{s}`$ the acceptor state is higher in energy than the donor state; nevertheless there is a small ET rate due to coherent mixing.
For fixed $`ϵ_{\mathrm{}}`$ the ET rate is plotted as a function of the dielectric constant $`ϵ_\mathrm{s}`$ in Fig. 4. The ET rate in the vibronic model increases strongly for small values of $`ϵ_\mathrm{s}`$ while the increase is very small for $`ϵ_\mathrm{s}`$ in the range between 5 and 8. The increase for small values of $`ϵ_\mathrm{s}`$ is due to the fact that with increasing $`ϵ_\mathrm{s}`$ the minimum of the acceptor potential moves from a position higher than the minimum of the donor level to a position lower than the donor level. So the transfer becomes energetically favorable. This can also be seen when looking at the results for the tight-binding model without scaling the electronic coupling with the Franck-Condon factor. In this case the ET rate increases almost linearly with increasing $`ϵ_\mathrm{s}`$. The effect missing in this model is the overlap between the vibrational states. If one corrects the electronic coupling in the tight-binding model by the Franck-Condon factor of the vibrational ground states as described in Eq. (8), good agreement is observed between the vibronic and the tight-binding model.
The ET rate for the vibronic model shows some oscillations as a function of $`ϵ_\mathrm{s}`$. This is due to the small density of vibrational levels in this model with one reaction coordinate. All three electronic potential curves are harmonic and have the same frequency. So there are small maxima in the rate when two vibrational levels are in resonance and minima when they are far off resonance. Models with more reaction coordinates do not have this problem nor does the simple tight-binding model. If these artificial oscillations would be absent, the agreement between the results for the tight-binding and the vibronic model would be even better, because the rate for the vibronic model happens to have a maximum just at the reference point $`ϵ_\mathrm{s}=6.24`$ which we have chosen to fix the spectral density, i. e. for MTHF.
The comparison of the two models has been made assuming that the scaling of energies as a function of the dielectric function is correct in the Marcus theory. There have been a lot of changes to Marcus theory proposed in the last years. Marcus theory assumes excess charges within cavities surrounded by a polarizable medium and there one only takes the leading order into account. Higher order terms are included in the so called reaction field theory (see for example ). But to compare different solvation models is out of the range of the present investigation. Some more details on this issue for the tight-binding model are given in Ref. . Here we just want to note in passing that the effect of scaling the system-bath interaction with $`ϵ_\mathrm{s}`$, as assumed in the present work for the tight-binding model, has no big effect on the ET rates.
As conclusion we mention that one gets good agreement for the ET rates of the models with and without vibrational substructure, i. e. the vibronic and the tight-binding model, if one scales the electronic coupling with the Franck-Condon overlap matrix elements between the vibrational ground states. The advantage of the model with electronic relaxation only is the possibility to derive analytic expressions for the ET rate and the final population of the acceptor state. But of course for a more realistic description of the ET transfer process in such complicated systems as discussed here, more than one reaction coordinate should be taken into account. Work in this direction is in progress.
## 4 Acknowlegements
We thank I. Kondov for the help with some programming as well as U. Rempel and E. Zenkevich for stimulating discussions. Financial support of the DFG is gratefully acknowledged. |
no-problem/9912/astro-ph9912054.html | ar5iv | text | # 1 Future of the Universe
## 1 Future of the Universe
It is very popular in cosmology to make definite predictions about infinitely remote future of our Universe. Such predictions may be found in virtually any book on cosmology, popular or sophisticated. Usually they have the following form:
1) if the spatial curvature of our Universe is zero or negative, it will expand eternally;
2) if the spatial curvature is positive, the Universe will stop expanding in future and begin to recollapse.
However, it is obvious that any prediction about dynamical evolution of a physical system cannot remain reliable at infinite time. In any branch of science, sure forecasts exist for finite periods of time only, ranging from days in meteorology to millions of years in the Solar system astronomy. So, how can cosmology be an exception from this general rule? Evidently, it can’t. Therefore, the conviction that the infinite time prediction given above is reliable should be no more than an illusion. At present we begin to understand profound reasons for this.
The impossibility to make exact predictions for infinite time evolution in cosmology results from the two reasons: 1) absence of precise knowledge of the present composition of matter in the Universe and future transformations between different kinds of matter; and 2) imprecise knowledge of present initial conditions for spatial inhomogeneities in the Universe. The first reason is vital even for an exactly homogeneous and isotropic Universe, while the second one requires consideration of deviations from isotropy and homogeneity. It was thought for a long time that the second reason is the main source of unpredictability in remote future, but it seems now that the first reason is the most important one.
Recent observational data on supernova explosions at high redshifts $`z1`$ obtained by two groups independently , as well as numerous previous arguments (see, e.g., ), strongly support the existence of a new kind of matter in the Universe which energy density is positive and dominates over energy densities of all previously known forms of matter. This form of matter has a strongly negative pressure and remains unclustered at all scales where gravitational clustering of baryons and cold non-baryonic dark matter is seen. Its gravity results in an acceleration of the expansion of the present Universe: $`\ddot{a}(t_0)>0`$, where $`a(t)`$ is the scale factor of the Friedmann-Robertson-Walker (FRW) isotropic cosmological model with time $`t`$ measured from the cosmological singularity (the Big Bang) in the past, $`t_0`$ is the present moment. In the first approximation, this kind of matter may be described by a constant Lambda-term in gravity equations which was introduced by Einstein. However, a Lambda-term (also called quintessence sometimes) might be slowly varying with time. If so, this will be soon determined from observational data. In particular, if we use the simplest model of a variable Lambda-term borrowed from the inflationary scenario of the early Universe, namely, an effective scalar field $`\varphi `$ with some self-interaction potential $`V(\varphi )`$ minimally coupled to gravity, then the functional form of $`V(\varphi )`$ may be determined from observational cosmological functions: either from the luminosity distance $`D_L(z)`$ , or from the linear density perturbation in the dust-like (cold dark matter (CDM) plus baryon) component of matter in the Universe $`\frac{\delta \rho }{\rho }(z)`$ (provided the Lambda-term satisfies the weak energy condition $`\epsilon _\mathrm{\Lambda }+p_\mathrm{\Lambda }0`$).
Should the Lambda-term be always exactly constant, the prediction for the future of the Universe is simple and boring: the Universe will expand forever, energy densities of all kinds of matter apart from the Lambda-term tend to zero exponentially, and the space-time metric locally approaches the de Sitter metric (though globally it has a much more general quasi-de Sitter form, see ). Thus, in this case the Universe becomes cold and empty finally. However, this is just the point: we are not sure that the Lambda-term will remain exactly the same at all times. And if it changes with time, predictions for remote future of the Universe may appear completely different.
On the other hand, sure forecasts for finite intervals of time are certainly possible in cosmology. Moreover, it is the present high degree of order in the Universe that makes the interval of predictability very large - much larger than in other branches of science. By the way, let us note that according to the inflationary scenario the present regularity of the Universe is a consequence of the fact that the Universe was even more regular - actually, almost maximally symmetric - in the past, during a de Sitter (inflationary) stage. The curvature at that stage was very high, close to the Planck curvature (though at least five orders of magnitude less near the end of the inflationary stage), in sharp contrast with a very low curvature at the asymptotic quasi-de Sitter stage in future discussed in the previous paragraph. Let me give you an example of such kind of predictions. If we make the following three assumptions: the present Hubble constant $`H_050`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, the present age of the Universe $`t_010`$ Gy and the energy density of the Lambda-term is non-negative (and will remain so for the period of time given below), than the Universe will continue its expansion for at least 20 Gy irrespective of the sign of its spatial curvature . At present, we are practically sure from existing observational data that all these three assumptions are correct. Since this interval exceeds the time of active life of main sequence stars (and the Sun, in particular), this estimate is more than sufficient for discussion of the future of the Earth and human civilization.
Derivation of this result goes as follows. If $`\epsilon _\mathrm{\Lambda }0`$, the most critical case with respect to recollapse of the Universe in future occurs just when $`\epsilon _\mathrm{\Lambda }0`$ and the Universe is closed ($`𝒦=1`$, positive spatial curvature). The law of the evolution of a closed dust-dominated FRW cosmological model has the following parametric form:
$$a=\frac{1}{2}a_{max}(1\mathrm{cos}\eta ),t=\frac{1}{2}a_{max}(\eta \mathrm{sin}\eta ),0\eta 2\pi ,$$
(1)
where $`a_{max}`$ is the maximal radius of the Universe (I put $`c=1`$ here and below). The parameter $`\eta `$ is the conformal time $`\eta =𝑑t/a(t)`$ actually.
The corresponding Hubble parameter is
$$H(t)\frac{d}{dt}\mathrm{ln}a(t)=\frac{2}{a_{max}}\frac{\mathrm{sin}\eta }{(1\mathrm{cos}\eta )^2}.$$
(2)
Note that the Hubble constant $`H_0=H(t_0)`$. Then it follows from the inequalities for $`H_0`$ and $`t_0`$ given above that
$$H_0t_0=\frac{\mathrm{sin}\eta _0(\eta _0\mathrm{sin}\eta _0)}{(1\mathrm{cos}\eta _0)^2}0.51,\eta _01.92$$
(3)
where $`\eta _0=\eta (t_0)`$. The remaining time of expansion before beginning of recollapse of the Universe which takes place at $`\eta =\pi `$ in this model is:
$$T_{exp}=\frac{\pi }{2}a_{max}t_0=t_0\frac{\pi \eta _0+\mathrm{sin}\eta _0}{\eta _0\mathrm{sin}\eta _0}2.2t_022\mathrm{Gy}.$$
(4)
Given above was just the rounded form of this inequality. Incidentally, it follows from (3) that the upper limit on the present energy density of dust-like matter in terms of the critical one $`\epsilon _c=3H_0^2/8\pi G`$ is $`\mathrm{\Omega }_m=\epsilon _m/\epsilon _c1.5`$. Of course, presently existing observational data, especially the supernova data mentioned above and data on temperature angular anisotropy $`\frac{\mathrm{\Delta }T}{T}`$ of the cosmic microwave background (CMB) restrict spatial curvature of the Universe even better: $`|\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }1|0.3`$ (see, e.g., the second reference in , and ).
Still people are interested in more and more remote future. Predictions for this period can be made, of course, but they become less and less reliable with time growth, because we have to base on more and more assumptions. So, speaking about very remote future, we can at best present a list of some possibilities for future evolution of the Universe. This list, however incomplete it is, shows that real future evolution of the Universe is infinitely complicated and has no boring smooth asymptotic behaviour at $`t\mathrm{}`$.
But before discussing these remote possibilities, let me mention two significantly new effects which arise in the case of a constant $`\mathrm{\Lambda }`$-term ($`\epsilon _\mathrm{\Lambda }>0`$). From now on, I assume that the Universe is spatially flat ($`𝒦=0`$) for the following reasons: a) no observational data directly point to $`𝒦0`$ at present; b) a spatial curvature of the Universe is strongly bounded as mentioned above, and does not dominate over matter (including both dust-like matter and a $`\mathrm{\Lambda }`$-term); c) the simplest inflationary models of the early Universe predict $`|\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }1|1`$; d) for simplicity.
1. Reversal of a sign of $`\dot{z}`$ for sufficiently close objects.
Let us consider the question how the redshift of a given object changes with time. The present redshift $`zz(t_0)`$ is given by the expression
$$1+z=\frac{a(\eta _0)}{a(\eta _{em})},\eta _{em}=\eta _0r,$$
(5)
where $`r`$ is the constant coordinate (comoving) distance to the object and $`\eta _{em}=\eta (t_{em})`$ is the moment when the object emitted light observing now. The physical distance to the object is $`R=ar`$. To find $`\dot{z}`$, one has to differentiate (5) with respect to $`t_0`$. If $`\mathrm{\Lambda }=0`$, then $`\dot{z}<0`$ for all $`z`$. Moreover, $`z(t)`$ monotonically decreases with time and tends to $`0`$ as $`t\mathrm{}`$. On the contrary, if $`\mathrm{\Lambda }>0`$, $`z(t)`$ stops decreasing at some moment and then begin to increase due to an acceleration of the Universe in the $`\mathrm{\Lambda }`$-dominated regime. As a result, $`\dot{z}>0`$ if $`z<z_c`$ at the present time. The value $`z_c`$ for which $`\dot{z}_c(t_0)=0`$ (so $`\dot{z}`$ considered as a function of $`z`$ for given the $`t=t_0`$ changes its sign) is determined from the equation:
$$\dot{a}(t_0)=\dot{a}(t_{em}(z_c)),\eta _{em}(z_c)=\eta _0r(z_c).$$
(6)
If the Universe is flat, then this equation reduces to the algebraic equation
$$(1+z_c)\left(\mathrm{\Omega }_m+\frac{1\mathrm{\Omega }_m}{(1+z_c)^3}\right)=1.$$
(7)
In particular, $`z_c=2.09`$ if $`\mathrm{\Omega }_m=0.3`$ which is the best fit to the supernova data . Note that $`z_c`$ decreases with increasing $`\mathrm{\Omega }_m`$. This effect may be even directly observed in future, though not too soon because measuring $`\dot{z}`$ represents a formidable task (see the discussion of problems arising in ).
2. Loss of possibility to reach distant objects.
The existence of a constant $`\mathrm{\Lambda }>0`$ leads to the appearance of the future event horizon (as in the de Sitter space-time). This means that looking at sufficiently remote galaxies with $`z>z_{eh}`$ at the present time, we can neither reach them physically in an arbitrary long time period, nor even send a message to intelligent beings in them (supposing that such exist or will appear in future) saying “we are!”. In other words, the coordinate volume of space which our civilization may affect is finite. Its border is given by $`r_{eh}=\eta (t=\mathrm{})\eta _0.`$ The redshift $`z_{eh}(r_{eh},\mathrm{\Omega }_m)`$ can found from the equation
$$_1^{1+z_{eh}}\frac{dx}{\sqrt{1\mathrm{\Omega }_m+\mathrm{\Omega }_mx^3}}=_0^1\frac{dx}{\sqrt{1\mathrm{\Omega }_m+\mathrm{\Omega }_mx^3}}$$
(8)
(both sides of this equation are equal to $`R_{eh}H_0=a(t_0)r_{eh}H_0`$). If $`\mathrm{\Omega }_m=0.3`$, then $`z_{eh}=1.80`$ (note that $`z_{eh}`$ grows with $`\mathrm{\Omega }_m`$ reaching infinity for $`\mathrm{\Omega }_m=1`$). This is not much, we see many galaxies and quasars with larger redshifts. So, all of them are unaccessible for us. Another similar effect was recently considered in .
Now we return to long-time predictions. The standard one usually presented refers to the case of a constant $`\mathrm{\Lambda }>0`$. Then, as was already mentioned above, the Universe will expand infinitely for any sign of its spatial curvature. It quickly approaches the de Sitter state with $`H=H_{\mathrm{}}=\sqrt{\mathrm{\Lambda }/3}=H_0\sqrt{1\mathrm{\Omega }_m}`$. So, this scenario may be called “inflation in future”. Matter density $`\epsilon _ma^3(t)0`$ while density perturbations $`\delta \epsilon _m/\epsilon _mconst`$ if they are still in the linear regime now. Circumstantially, CMB multipole angular anisotropies $`(\mathrm{\Delta }T/T)_l`$, in particular the quadrupole one, freeze at some constant values, too (see the first reference in ). On the other hand, gravitationally bound systems which physical size is $`R<10h^1`$ Mpc at present (our Galaxy, in particular) will remain bound, at least as far as classical gravity is concerned (here $`h=H_0/100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>). So, islands of galaxies will remain in the ever expanding and becoming more and more vacuum-like on average Universe.
However, this is not the only possibility for a future fate of the Universe even at the classical level, and probably not the correct one at all if quantum-gravitational effects are taken into account. A number of possible alternatives is presented below.
1. Decay of $`\mathrm{\Lambda }`$ in future.
If a $`\mathrm{\Lambda }`$-term is unstable and decays faster than $`a^2`$ (i.e., $`\epsilon _\mathrm{\Lambda }a^20`$ at $`t\mathrm{}`$), then recollapse of some parts of the Universe becomes possible due to existing inhomogeneities even if $`𝒦=0`$. A $`\mathrm{\Lambda }`$-term may decay with time, e.g., in the simplest scalar field model mentioned above if $`V(\varphi )`$ decreases sufficiently fast with growth of $`a(t)`$. At present, the $`\mathrm{\Lambda }`$-term is changing rather slowly, if at all. If we assume for simplicity that its pressure $`p_\mathrm{\Lambda }=k\epsilon _\mathrm{\Lambda },k=const`$, then it follows from observational data that $`k<0.6`$ (see, e.g., ). Since $`\epsilon _\mathrm{\Lambda }a^{3(1+k)}`$ in this case, this corresponds to $`\epsilon _\mathrm{\Lambda }`$ decaying less rapidly than $`a^{1.2}`$ at present. However, this behaviour may change in future.
2. Collision with a null singularity.
There exists a rather unpleasant possibility that our future world line will cross a real space-time singularity with infinite values of the Riemann tensor (though its scalar invariants are less singular and may even remain finite sometimes) concentrated at a null hypersurface. So, this singularity may be called a gravitational shock wave with an infinite amplitude. It was conjectured that such singularities should arise along Cauchy horizons inside rotating or charged black holes , and it has been shown that this really occurs in some simplified cases (see for the most recent treatment).
It not is clear at present if this collision is deadly to an intelligent life. However, it is certainly fatal for our ability to predict future of our Universe since any classical extension of space-time beyond such a singularity is non-unique. The most unpleasant is the fact that an intelligent being cannot even forecast this event until the shock wave hits him/her. Fortunately, this possibility seems to be rather improbable since it requires a very specific global space-time structure of the Universe (namely, the existence of a Cauchy horizon intersecting our future light cone). However, I cannot exclude it completely basing on our present knowledge.
3. Formation of a classical space-like curvature singularity during expansion.
To hit a real space-time singularity with infinite invariants of the Riemann tensor, it is not necessary to have an isotropic recollapse first. Such a singularity may also occur as a result of sudden growth of anisotropy and inhomogeneity at some moment during expansion, or even as a result of infinite growth of $`a(t)`$ in a finite time period. The former possibility realizes, e.g., in the model of a variable $`\mathrm{\Lambda }`$-term based on a scalar field with a self-interaction potential $`V(\varphi )`$ as before, but non-minimally coupled to gravity due to the term $`\xi R\varphi ^2`$ in its Lagrangian density. If $`\xi >0`$ and if the field $`\varphi `$ will reach the critical value $`\varphi _{cr}=1/\sqrt{8\pi \xi G}`$ at some finite moment of time $`t_{cr}`$ in future, the effective gravitational constant $`G_{eff}`$ becomes infinite, small spatial inhomogeneities grow without limit and a generic inhomogeneous space-like singularity (not oscillating) forms . Very close to this singularity, the volume factor $`\sqrt{g}`$ stops growing and finally approaches zero $`(t_{cr}t)^q,0<q<1`$, but this recollapse is strongly anisotropic.
The latter possibility takes place in an even simpler case (though not justified by a reasonable field-theoretic model) of the linear equation of state $`p_\mathrm{\Lambda }=k\epsilon _\mathrm{\Lambda },k=const`$ with $`k<1`$, so that the weak energy condition $`p_\mathrm{\Lambda }+\epsilon _\mathrm{\Lambda }0`$ is violated at the classical level. Then $`a(t)`$ becomes infinite (and the curvature singularity is reached) in a finite interval of time (measured from the present moment)
$$T_s=H_0^1\frac{2}{3|1+k|}_0^1\frac{dx}{\sqrt{1\mathrm{\Omega }_m+\mathrm{\Omega }_mx^{\frac{2|k|}{|1+k|}}}}.$$
(9)
As was discussed above, the $`\mathrm{\Lambda }`$-term is changing sufficiently slowly, if at all. Using the supernova data, it can be shown that $`k`$ should be certainly more than $`1.5`$. Then, taking $`\mathrm{\Omega }_m=0.3`$ and $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, we obtain $`T_s>22`$ Gy. So, even for this very speculative model, we get practically the same lower bound on the period of safe expansion of Universe in future as was given before in Eq. (4).
More justified and refined field-theoretic models having such a regime which is called “superinflation”, or “pole inflation” do exist. In particular, this regime was already present among possible solutions of the higher-derivative gravity model used in to construct the first viable cosmological model of the early Universe with the initial de Sitter (inflationary) stage (though, of course, another solution of this model having the “graceful exit” from inflation to the FRW radiation-dominated stage was used in this paper). Another model where pole inflation occurs is the “Pre-Big-Bang” scenario of the early Universe . So, could a “Post-Big-Bang” in future be possible? Once more, I cannot exclude this possibility now.
4. Hitting a space-like singularity in future due to quantum-gravitational effects.
Finally, if none of the classical effects listed above (and other ones not known now) occurs, there always exist quantum-gravitational fluctuations. They are non-trivial (not coinciding with vacuum fluctuations in the Minkowski space-time) if $`\mathrm{\Lambda }0`$. There are two kinds of them.
A. Fluctuations of an effective scalar field producing a $`\mathrm{\Lambda }`$-term.
During future expansion of the Universe at the $`\mathrm{\Lambda }`$-dominated stage, these fluctuations may occasionally result in jumps to a higher energy (and a higher curvature) state (“false vacuum”), in particular, even to an initial inflationary state. Depending on an effective mass of this scalar field, this transition may occur either in one jump , see also recent papers (where this process was called “recycling of the Universe”) and , or as a result of a long series of small jumps, as it occurred during stochastic inflation in the early Universe . So, in the latter case we have “stochastic inflation in future”.
In both cases, it is necessary that the whole part of the Universe inside the de Sitter event horizon (or even a little bit larger) makes this transition. It is clear that the probability of this process is fantastically small. I don’t think that one can really grasp how small it is by his/her senses. Still it is non-zero, so this event will occur finally. This probability mainly depends on the future asymptotic value of a $`\mathrm{\Lambda }`$-term $`\mathrm{\Lambda }_{\mathrm{}}=3H_{\mathrm{}}^2`$:
$$w_s\mathrm{exp}\left(\frac{\pi }{GH_f^2}\frac{\pi }{GH_{\mathrm{}}^2}\right),$$
(10)
where $`H_f^2=\mathrm{\Lambda }_f/3`$ is the curvature of a false vacuum state. The second term in the exponent is $`10^{122}`$, so it is practically impossible to imagine how large is a typical time required for this transition. However, it is finite. Thus, in this case future curvature space-like singularity is reached during continuous expansion of the Universe.
B. Quantum fluctuations of the gravitational field.
However, it appears that it is much simpler to reach future curvature singularity due to quantum fluctuations of the gravitational field itself. These fluctuations can produce a significant anisotropy described by a non-zero value of the conformal Weyl tensor comparable to that of the Riemann tensor. The corresponding quantum transition may be described by the $`S_2\times S_2`$ instanton:
$`ds^2=d\tau ^2+H_1^2\mathrm{sin}^2H_1\tau dx^2+H_1^2d\mathrm{\Omega }^2=(1H_1^2\stackrel{~}{x}^2)d\stackrel{~}{\tau }^2+{\displaystyle \frac{d\stackrel{~}{x}^2}{1H_1^2\stackrel{~}{x}^2}}+H_1^2d\mathrm{\Omega }^2,`$ (11)
$`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta d\phi ^2,H_1^2=\mathrm{\Lambda }_{\mathrm{}}=3H_{\mathrm{}}^2.`$
Here $`\stackrel{~}{\tau }`$ is a cyclic variable with the period $`2\pi /H_1`$. The second, “thermal” form of the instanton suggests that the transition occurs in a “local” part of the Universe with a size slightly larger than $`H_1^1`$. The resulting space-time metric after the transition is:
$$ds^2=(1H_1^2\stackrel{~}{x}^2)d\stackrel{~}{t}^2\frac{d\stackrel{~}{x}^2}{1H_1^2\stackrel{~}{x}^2}H_1^2d\mathrm{\Omega }^2,$$
(12)
which covers a part of the Bondi-Nariai space-time with a finite range of $`x`$:
$$ds^2=dt^2a^2(t)dx^2b^2(t)d\mathrm{\Omega }^2,a(t)=H_1^1\mathrm{cosh}H_1t,b=H_1^1=const,$$
(13)
(see for discussion of quantum-gravitational effects in the metric (13)). Note that the choice $`a(t)=a_1\mathrm{exp}H_1t`$ is also possible. It corresponds to covering of another part of the Bondi-Nariai space-time.
The probability of this quantum jump is given by the difference of actions for the $`S_4`$ and $`S_2\times S_2`$ instantons with the same value of $`\mathrm{\Lambda }`$:
$$w_g\mathrm{exp}\left(\frac{\pi }{GH_1^2}\right).$$
(14)
Note that the exponent in Eq. (14) is $`3`$ times less by modulus than that in Eq. (10). Thus, this second process due to purely quantum-gravitational fluctuations is much more probable, $`w_sw_g^3`$ (though, of course, $`w_g`$ is fantastically small, too).
What happens with the considered region of space-time after the jump? The space-time (13) is classically unstable with respect to long-wave gravitational perturbations ($`\mathrm{\Lambda }=const`$). With the probability $`0.5`$, $`b`$ grows up and then this region returns to the locally de Sitter behaviour $`a(t)b(t)\mathrm{exp}(H_{\mathrm{}}t)`$ at $`t\mathrm{}`$ (so that the whole space-time approaches a specific form of the general quasi-de Sitter asymptote ). On the other hand, with the other $`0.5`$ probability, $`b`$ goes down, the region begins to recollapse soon, and the Kasner singularity $`a(t)(t_1t)^{1/3},b(t)(t_1t)^{2/3}`$ forms. Thus, this region of the Universe returns to a supercurved state.
So, one way or another, local parts of the Universe return to a singular supercurved state, though it might require a very huge amount of time. Thus, it seems at present that “cold death” is not a viable possibility for the future of our Universe. Let me emphasize that this return to a future singularity occurs in a very inhomogeneous fashion in all examples considered above. Therefore, any finite coordinate volume of the Universe becomes more and more inhomogeneous with time growth, in accordance with the Second Law of thermodynamics (understood in a very broad and imprecise sense). The same refers to the global structure of the Universe: it becomes more and more complicated in future, too. On the other hand, characteristic times for significant growth of complexity of our Universe are very large. As a result, the Universe will certainly remain very ordered for periods of the order of a few tenths of Gy that significantly exceeds its present age.
What happens after the return to a singular state? We don’t know it at the present state of the art. Still it is possible to conjecture that at least a very small part of the region which hits a singularity will bounce back and return to a low-curvature state. Especially interesting and remarkable would be if, during the process, this part spend some time at an inflationary stage. Then infinitely many low curvature and ordered universes similar to our present Universe may be created from this part in future. Repeating all this hypothetical, but not firmly prohibited process more and more, we see that the future of our Universe may be not simply very complicated but even infinitely complicated.
## 2 Past of the Universe
We see that discussion of the future of our Universe has naturally led us to the question of the origin of our Universe in the past, about $`14`$ Gy ago. The preferred and very well developed theory of a period of the evolution of the Universe preceding the hot radiation-dominated FRW stage is given by the inflationary scenario of the early Universe. According to this scenario, our Universe was in an almost maximally symmetric (de Sitter, or inflationary) state during some period of time in the past. I think that the main attractive features of the inflationary scenario are the following: 1) its extreme aesthetic elegance and beauty, and 2) complete predictability of properties of the observed part of the Universe after the end of the inflationary stage (in particular, at the present time). Thus, any concrete realization of the inflationary scenario may be falsified by observations, and many of them had been falsified already.
But it is remarkable that there exist a large class of the so called simplest inflationary models (with one slowly rolling effective scalar field producing the inflationary stage) whose predictions, just the opposite, were confirmed by observations. This especially refers to results of a COBE satellite experiment where low multipoles of the CMB angular temperature anisotropy $`(\mathrm{\Delta }T/T)_l`$ with $`l`$ up to $`20`$ were measured, and to results of numerous recent medium- and small-angle measurements of $`\mathrm{\Delta }T/T`$ which confirm the inflationary prediction about the location and the approximate height of the so called first acoustic (Doppler) peak. So, the inflationary scenario really has a large predictive power!
Still it is clear that since any inflationary stage is not stable, but only metastable, it cannot be the very beginning of our Universe. Something was before, that was the origin of the inflationary stage. The most well known proposal, put forward long before the inflationary scenario was introduced in 1979-1982, was the “creation of the Universe from nothing” . Here nothing means literally nothing, in particular, that were no space-time before our Universe was created. This idea does not work without some inflationary state following the creation, so it was forgotten for some time and was revived only after the development of the inflationary scenario. In that case the creation is mathematically described by the $`S_4`$ (de Sitter) instanton. In the papers the creation of a closed FRW universe was considered, however, it was recently shown that an open FRW universe may be produced “from nothing”, too, using approximately the same (though already singular) instanton .
However, at the same moment the idea of “creation from nothing” was renewed, it was pointed that this is not the only possibility to create an inflationary stage . Let me present an incomplete list of other alternatives.
1. Quasi-classical motion of space-time from a generic inhomogeneous anisotropic singularity to the de Sitter attractor solution.
2. Decay of less symmetric, higher curvature self-consistent solutions of gravity equations with all quantum corrections included (e.g., the Bondi-Nariai solution (13)).
3. Stochastic drift from a singularity with the Planckian value of curvature along a sequence of de Sitter-like solutions (this is what actually occurs in the so called eternal chaotic inflation ).
4. Quantum nucleation of our Universe from some other “Super-universe”, in particular, even from some asymptotically flat space-time (the latter possibility includes “creation of the Universe in a laboratory”, see ).
5. Creation of the Universe from a higher-dimensional space-time.
Evidently, many more possibilities remain not mentioned. It seems that they are all indistinguishable from observations. That is why, in order to tackle this great ambiguity, a completely different principle of “creation of the Universe from anything” was put forward in . Namely, it states that:
“local” observations cannot help distinguish between different ways of formation of an inflationary stage.
By “local” I mean all observations inside the presently observed Universe, and even all observations made along our future world line in arbitrary remote future. “Creation from anything” intrinsically includes all ways of creating the de Sitter (inflationary) stage, with the “creation from nothing” being only one (and therefore, scarcely probable) way among them.
It is amusing that the mathematical description of “creation from anything” is based on the same $`S_4`$ instanton as “creation from nothing”, but now written in a static, “thermal” form:
$$ds^2=(1H^2r^2)d\tau ^2+\frac{dr^2}{1H^2r^2}+r^2d\mathrm{\Omega }^2,$$
(15)
where $`\tau `$ is periodic with the period $`2\pi /H`$ – the inverse Gibbons-Hawking temperature (I assume here $`\mathrm{\Lambda }=const=3H^2`$ for simplicity).
Now, using the thermal interpretation of the $`S_4`$ instanton, we may ascribe the total entropy
$$S(\mathrm{entropy})=|S|(\mathrm{action})=\frac{\pi }{GH^2}1$$
(16)
to the Universe at the inflationary stage. This entropy just reflects the absence of knowledge of a given observer about a space-time structure beyond the de Sitter horizon and about a way how this de Sitter stage was formed. Since $`\sqrt{G}H<10^5`$ at the end of an inflationary stage, $`S>10^{10}`$ there.
Of course, this principle (as all principles introduced by hand) may be a little bit extreme. I cannot exclude the possibility that we shall be able to get some knowledge about a pre-inflationary history of our Universe. Then a value of the entropy of the Universe at the end of an inflationary stage will be less than that given by Eq. (16). |
no-problem/9912/quant-ph9912064.html | ar5iv | text | # Two-photon Franson-type experiments and local realism
## Abstract
The two-photon interferometric experiment proposed by Franson \[Phys. Rev. Lett. 62, 2205 (1989)\] is often treated as a “Bell test of local realism”. However, it has been suggested that this is incorrect due to the $`50\%`$ postselection performed even in the ideal gedanken version of the experiment. Here we present a simple local hidden variable model of the experiment that successfully explains the results obtained in usual realizations of the experiment, even with perfect detectors. Furthermore, we also show that there is *no* such model if the switching of the local phase settings is done at a rate determined by the *internal geometry of the interferometers*.
>
The two-particle interferometer introduced by Franson has been used in many two-photon interferometric experiments that reveal complementarity between single and two-photon interference. The experiments cannot be described using standard methods involving classical electromagnetic fields . However, the original paper was entitled *Bell Inequality for Position and Time*, and many subsequent papers claimed that the experiment constitutes a “Bell test of local realism involving time and energy”. Some authors were more skeptical that a true, unambiguous test of a Bell inequality was possible with these experiments, even in principle, since even the ideal gedanken model of the experiment requires a post-selection procedure in which $`50\%`$ of the events are discarded when computing the correlation functions . If all events are taken into account the Bell inequalities are not violated. Thus, a local hidden-variable (LHV) model is not ruled out, but even so, no LHV model for the experiment has yet been constructed .
The situation is further obscured by similar claims concerning certain other two-photon polarization experiments where the problem of discarding $`50\%`$ of the events also appears . This was initially treated on equal footing with the problems of Franson-type experiments, but a recent analysis in reestablishes the possibility of violating local realism. Unfortunately, that analysis cannot be adapted to the Franson experiment.
Our aim is to resolve this uncertainty. First, we shall construct a simple local realistic model for the usual operational realization of the experiment. Second, we shall prove that under the additional condition that the random changes of the state of the local interferometers are at a rate dictated by the *internal geometry* of the interferometers, *no* local hidden variable model exists for the perfect gedanken version of this type of experiment. Even then, the usual Bell inequality will be inadequate.
Let us briefly describe the idea behind the Franson-type experiments (Fig. 1). The source yields photon pairs, correlated to within their coherence times, and the two photons are fed into two identical unbalanced Mach-Zehnder interferometers. The difference of the optical paths in those interferometers, $`\mathrm{\Delta }L`$, satisfies the relation $`\mathrm{\Delta }LcT_{\text{coh}}`$, where $`c`$ is the speed of light and $`T_{\text{coh}}`$ is the coherence time of the photons. Such optical path differences prohibit any single-photon interference, so the single-photon probabilities are $`P(l|\varphi )=P(m|\psi )=\frac{1}{2}`$ (see Fig. 1). For the $`50\%`$ two-photon events that are coincident, one cannot distinguish between events where both photons take the long path and events where both take the short; hence, two-photon interference occurs:
$$P(l;m(\text{coinc.})|\varphi ,\psi )=\frac{1}{8}\left(1+lm\mathrm{cos}(\varphi +\psi )\right).$$
(1)
For the other half of the two-photon events, one photon takes its short path and the other takes its long path, so that the registration times differ by $`\mathrm{\Delta }L/c`$; there is consequently no interference because the events are distinguishable. One has $`P(l_\text{L};m_\text{E}|\varphi ,\psi )=P(l_\text{E};m_\text{L}|\varphi ,\psi )=\frac{1}{16}`$, where E denotes the *earlier* count, and L denotes the *later* count. For future reference, we note that the local phase settings appearing in these formulas are those present when a photon in the long path is passing through the phase shifter, i.e., the phase setting at the actual detection time $`t_\text{d}`$, minus the time $`t_{\text{ret}}`$ it takes light to reach the detector from the location of the phase-shifter by the optical paths available within the interferometer.
Initially, the experiment is assumed to be performed in the following way. The usual locality condition is imposed, i.e., the local phase setting at one side does not affect the measurement result at the other side. Experimentally, this is enforced by switching the local phase settings on the time-scale $`D/c`$, where $`D`$ is the source–interferometer distance. We assume that $`D\mathrm{\Delta }L`$ . The two experimenters (one at each side) record the $`\pm 1`$ counts, the detection times, and the appropriate values of the local phase settings. After the experiment is completed they perform a post-detection analysis on their recorded data, rejecting all pairs of events whose registration times differ by $`\mathrm{\Delta }L/c`$. We now present a LHV model for the Franson experiment, valid in this experimental situation.
There are some general features that a LHV model of the experiment should have. The *emission time* should be one of the variables, because if the beamsplitters of, say, the right interferometer were removed, the photons would be detected solely by the detector $`+1`$, and the detection time $`t_\text{E}`$ would indicate the moment of emission. In this case, for any local setting of the phase $`\varphi `$, the detections behind the left interferometer would either be coincident with the detections on the right side at $`t_E`$ (we shall call this an *early* detection), or delayed at $`t_\text{L}=t_\text{E}+\mathrm{\Delta }L/c`$ (a *late* detection). This must be determined by the LHV model. Half of the events on the left side are early (E) and half are late (L). With the right interferometer in place, 1/4 of the events are early on the left and late on the right (EL), 1/4 are late on the left and early on the right (LE), and 1/2 are coincident. These coincident events must then consist of equal parts early-early (EE) and late-late (LL) events; no such distinction exists in the quantum description.
In our model, the hidden variables are chosen to be an angular coordinate $`\theta [0,2\pi ]`$ and an additional coordinate $`r[0,1]`$. The ensemble of hidden variables is chosen as that of a uniform distribution in this rectangle in $`(\theta ,r)`$-space; each pair of particles is then described by a definite point $`(\theta ,r)`$ in the rectangle, defined at the source at the moment of emission. At the left detector station, the measurement result is decided by the hidden variables $`(\theta ,r)`$ and the local setting $`\varphi `$ of the apparatus. When a photon arrives at the detection station, if the interferometer works properly the variable $`\theta `$ is shifted by the current setting of the local phase shifter (i.e. $`\theta ^{}=\theta \varphi `$), and the result is read off Fig. 2. At the right detector station, a similar procedure is followed . In this case, the shift is to the value $`\theta ^{\prime \prime }=\theta +\psi `$, and the result is obtained in Fig. 3 in the same manner as before.
The single-particle detection probabilities straightforwardly follow the quantum predictions, because in both Figs. 2 and 3, the total areas corresponding to $`+1_\text{E}`$, $`1_\text{E}`$, $`+1_\text{L}`$, and $`1_\text{L}`$ are all equal. The particle is equally likely to arrive early or late, and equally likely to go to the $`+1`$ or $`1`$ output port of the interferometer. The coincidence probabilities are determined by interposing the two figures with the proper shifts.
For example, the probability of having $`l=+1_\text{E}`$ and $`m=1_\text{E}`$ simultaneously is the area of the set indicated in Fig. 4 divided by $`2\pi `$ (the total area is $`2\pi `$ whereas the total probability is 1). The net coincidence probability is
$`P(+1;1(\text{coinc.})|\varphi ,\psi )`$ (2)
$`=P(+1_\text{E};1_\text{E}|\varphi ,\psi )+P(+1_\text{L};1_\text{L}|\varphi ,\psi )`$ (3)
$`={\displaystyle \frac{2}{2\pi }}{\displaystyle _0^{\varphi +\psi }}{\displaystyle \frac{\pi }{8}}\mathrm{sin}(\theta )𝑑\theta ={\displaystyle \frac{1}{8}}\left(1\mathrm{cos}(\varphi +\psi )\right).`$ (4)
It is easy to verify that this model also gives the correct prediction for the other detection events.
Somewhat remarkably, the above construction implies that *the Franson experiment does not and cannot violate local realism if one disregards the fact that the unbalanced Mach-Zehnder interferometers are extended objects.* The reason that this construction is possible is that the $`50\%`$ post-selection procedure discussed above may yield *an ensemble of detected pairs that depends on the phase settings* (rendering the Bell inequality useless ). However, we shall now show that if the phase switching is performed at the time-scale $`\mathrm{\Delta }L/c`$, typical for retardations within the interferometers, there is *no* LHV description of the experiment. In particular, we will describe an experimental procedure that allows us to post-select an unchanging part of the LHV ensemble, thus reenabling the Bell inequality on this part of the ensemble.
Let us look at one interferometer as an extended object to establish what would take place if local realism were to hold. In the interferometer, the decision of a detection to occur early (at $`t_\text{E}`$) or late (delayed by $`\mathrm{\Delta }L/c`$) cannot be made later than the time $`t_\text{E}`$. This decision is based on the local variables and the *properly retarded* phase setting. No phase setting after $`t_\text{E}-t_{\text{ret}}`$ can causally affect this E/L choice. The choice $`\pm 1`$ is also based on the local variables and the properly retarded phase setting at the interferometer in question, but this choice may be made as late as the detection time $`t_\text{d}`$ ($`t_\text{d}=t_\text{E}`$ for early events or $`t_\text{d}=t_\text{L}=t_\text{E}+\mathrm{\Delta }L/c`$ for late). Therefore, in the case of a *late* detection, the choice E/L and the choice $`\pm 1`$ can be made at different times ($`t_\text{E}`$ and $`t_\text{L}`$, respectively) *based on possibly different phase settings*.
Looking at only one interferometer, it is not possible to discern early detections from late ones, so an experimenter at that interferometer knows only the result $`\pm 1`$, the detection time $`t_\text{d}`$, and two *possibly different* phase settings at $`t_\text{d}-\mathrm{\Delta }L/c-t_{\text{ret}}`$ and $`t_\text{d}-t_{\text{ret}}`$. She also knows that for the events that are late, the later of these two phase settings cannot causally have affected the E/L decision, so the hypothetical late subensemble does not depend on the phase setting at $`t_\text{d}-t_{\text{ret}}`$ but only on the phase setting at the earlier time $`t_\text{d}-\mathrm{\Delta }L/c-t_{\text{ret}}`$. By rejecting events where the phase setting at $`t_\text{d}-\mathrm{\Delta }L/c-t_{\text{ret}}`$ does not have a certain value ($`\varphi _0`$, say), she ensures that the late subensemble does not change at all. To allow for settings other than $`\varphi _0`$ at the later decision time, a device which switches fast (on the time-scale $`\mathrm{\Delta }L/c`$) and randomly between phase settings is needed.
Thus, in the modified full experiment both experimenters should use fast devices that randomly switch between the phase settings $`\varphi _0`$, $`\varphi _1`$, …, $`\varphi _N`$ on the left side and $`\psi _0`$, $`\psi _1`$, …, $`\psi _N`$ on the right. They record the appropriate data and reject (a) pairs of events whose registration times differ by $`\mathrm{\Delta }L/c`$ and (b) pairs of events which do not have the feature that *the phase setting at* $`t_\text{d}-\mathrm{\Delta }L/c-t_{\text{ret}}`$ *was $`\varphi _0`$ on the left and $`\psi _0`$ on the right*. The latter event rejection ensures that the hypothetical LL subensemble within the remaining data is independent of the phase settings at $`t_\text{d}-t_{\text{ret}}`$. Then, if local realism holds, the Bell-CHSH inequality applies to this LL subensemble,
$`|E_{\text{L}\text{L}}(\varphi _1,\psi _1)+E_{\text{L}\text{L}}(\varphi _2,\psi _1)|`$ (5)
$`+|E_{\text{L}\text{L}}(\varphi _2,\psi _2)-E_{\text{L}\text{L}}(\varphi _1,\psi _2)|\le 2,`$ (6)
where the phases are taken at $`t_\text{d}-t_{\text{ret}}`$, and $`E_{\text{L}\text{L}}(\varphi ,\psi )`$ denotes the Bell-type *conditional* correlation function on the remaining LL subensemble. This is valid only because each of the correlation functions above is an average on the same ensemble. Had the ensemble depended on the phase settings at $`t_\text{d}-t_{\text{ret}}`$, the bound would have been higher.
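As an illustration of the post-detection selection described above, the following sketch spells out the two rejection rules in code. The record fields (the detection time and the two retarded phase settings carried by each event) are hypothetical names introduced only for this illustration; the two rules themselves come from the text.

```python
# Sketch of the post-detection selection: (a) reject pairs whose registration
# times differ by dL/c; (b) keep only pairs whose phase setting at
# t_d - dL/c - t_ret was phi0 on the left and psi0 on the right.
def select_pairs(left_events, right_events, dL_over_c, phi0, psi0, tol=1e-12):
    kept = []
    for L, R in zip(left_events, right_events):
        # rule (a): drop non-coincident (EL or LE) pairs
        if abs(abs(L["t_d"] - R["t_d"]) - dL_over_c) < tol:
            continue
        # rule (b): fix the earlier phase setting to (phi0, psi0)
        if L["phase_early"] != phi0 or R["phase_early"] != psi0:
            continue
        kept.append((L, R))
    return kept
```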
Indeed, the remaining EE subensemble may still depend on the phase setting at $`t_\text{d}-t_{\text{ret}}`$ even after this selection, and we only have
$$|E_{\text{E}\text{E}}(\varphi ,\psi )|\le 1.$$
(7)
Experimentally, this “noise” in form of EE events cannot be distinguished from the LL events. Of all events that survive the described selection, again half are EE and half are LL, so that
$$E_{\text{coinc.}}(\varphi ,\psi )=\frac{1}{2}E_{\text{L}\text{L}}(\varphi ,\psi )+\frac{1}{2}E_{\text{E}\text{E}}(\varphi ,\psi ).$$
(8)
Thus, a modified Bell-CHSH inequality valid for all the coincident events is implied by (6)–(8), namely
$`|E_{\text{coinc.}}(\varphi _1,\psi _1)+E_{\text{coinc.}}(\varphi _2,\psi _1)|`$ (9)
$`+|E_{\text{coinc.}}(\varphi _2,\psi _2)-E_{\text{coinc.}}(\varphi _1,\psi _2)|\le \frac{1}{2}(2+4)=3.`$ (10)
Unfortunately, this inequality is not violated by the conditional quantum correlation function $`E_{\text{coinc.}}^{\text{QM}}(\varphi ,\psi )=\mathrm{cos}(\varphi +\psi )`$ which yields a maximum of $`2\sqrt{2}`$. However, a violation may be obtained by a “chained” extension of the Bell-CHSH inequality (see Ref. ):
$`|E_{\text{L}\text{L}}(\varphi _1,\psi _1)+E_{\text{L}\text{L}}(\varphi _2,\psi _1)|`$ (11)
$`+|E_{\text{L}\text{L}}(\varphi _2,\psi _2)+E_{\text{L}\text{L}}(\varphi _3,\psi _2)|`$ (12)
$`+|E_{\text{L}\text{L}}(\varphi _3,\psi _3)-E_{\text{L}\text{L}}(\varphi _1,\psi _3)|\le 4.`$ (13)
If local realism holds, (7), (8), and (13) yield
$`|E_{\text{coinc.}}(\varphi _1,\psi _1)+E_{\text{coinc.}}(\varphi _2,\psi _1)|`$ (14)
$`+|E_{\text{coinc.}}(\varphi _2,\psi _2)+E_{\text{coinc.}}(\varphi _3,\psi _2)|`$ (15)
$`+|E_{\text{coinc.}}(\varphi _3,\psi _3)-E_{\text{coinc.}}(\varphi _1,\psi _3)|\le \frac{1}{2}(4+6)=5.`$ (16)
This inequality *is* violated by quantum predictions, e.g., at $`\varphi _1=0`$, $`\varphi _2=\pi /3`$, $`\varphi _3=2\pi /3`$, $`\psi _1=-\pi /6`$, $`\psi _2=-\pi /2`$, and $`\psi _3=-5\pi /6`$ we obtain
$$5\mathrm{cos}(\pi /6)-\mathrm{cos}(5\pi /6)=6\mathrm{cos}(\pi /6)\approx 5.20>5.$$
(17)
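The number quoted in Eq. (17) can be checked directly. The short sketch below evaluates the left-hand side of (16) for the listed settings with $`E_{\text{coinc.}}^{\text{QM}}(\varphi ,\psi )=\mathrm{cos}(\varphi +\psi )`$, and also prints the visibility needed for a violation.

```python
# Check of the quantum violation in Eq. (17) and of the visibility threshold.
import numpy as np

E = lambda phi, psi: np.cos(phi + psi)          # quantum coincidence correlation
phi1, phi2, phi3 = 0.0, np.pi/3, 2*np.pi/3
psi1, psi2, psi3 = -np.pi/6, -np.pi/2, -5*np.pi/6

S = (abs(E(phi1, psi1) + E(phi2, psi1))
     + abs(E(phi2, psi2) + E(phi3, psi2))
     + abs(E(phi3, psi3) - E(phi1, psi3)))
print(S, 6*np.cos(np.pi/6))     # both ~5.196, above the local-realistic bound 5
print(5 / (6*np.cos(np.pi/6)))  # visibility needed for a violation, ~0.96
```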
In conclusion, to obtain a violation of local realism in an experiment one needs random fast switching and a filtering of the hypothetical late-late subensemble so that this ensemble does not depend on the phase settings . Even then, the standard Bell inequalities are not sensitive enough to show a violation of local realism in the experiment, because their bound is raised by the “noise” introduced by the early-early subensemble. However, a “chained Bell inequality” may be used, which is violated even with this “noise” included.
The reported violations of local realism from Franson experiments have to be reexamined. While the results formally violate the standard Bell-CHSH inequality, the inequality is not applicable. The inequality (16) *is* applicable, but when using it, one should note that it is violated only if the visibility is more than $`5/5.2\approx 96\%`$. This is significantly higher than the usual $`71\%`$ bound discussed in the reported experiments.
It has been proposed that entangled photons can be used to perform quantum cryptography ; specifically, the Franson-type experiment has been discussed in this context . In such schemes security checks can be performed by testing whether the signals violate the Bell inequalities. It remains a subtle question if the link to local realism is important for this kind of security check; if so, the Bell-CHSH inequality is not appropriate for the Franson setup.
Sven Aerts acknowledges a grant by the Flemish Institute for the Advancement of Scientific-Technological Research in the Industry (IWT). Jan-Åke Larsson has received partial support from the Swedish Natural Science Research Council. Marek Żukowski was supported by the Flemish-Polish Scientific Collaboration Program No. 007, by UG Program BW/5400-5-0264-9, and also acknowledges discussions with E. Santos, H. Weinfurter and A. Zeilinger.
# Nondifferentiable Dynamic: Two Examples
## I Introduction
It is possible that some discrete mathematical objects, which cannot be continuous functions, are nevertheless physical degrees of freedom in Nature. These can be the signature of the metric, the dimensionality, the topology of space, and so on. The time evolution of such variables is a big problem for classical and quantum gravity.
The change of the metric signature in classical gravity is ordinarily connected with the presence of surface $`\delta `$-like matter (see for example, , ). Certainly, we have the question: is this matter exotic or ordinary, i.e. can such conditions be realized in Nature? In quantum gravity the change of metric signature is the result of integrating in the path integral over the Euclidean and Lorentzian metrics. The difficulties connected with this problem are easy to see in the vier-bein formalism
$$ds_{(5)}^2=\eta _{ab}e^ae^b$$
(1)
here $`\eta _{ab}=(-,+,+,+)`$, $`e^a=h_\mu ^adx^\mu `$, $`a=0,1,2,3`$ is the vier-bein index, $`\mu `$ is the spacetime index. In the classical regime only the tetrad components $`h_\mu ^a`$ are the dynamical variables, and varying with respect to $`h_\mu ^a`$ leads to the Einstein equations. But we cannot vary with respect to $`\eta _{ab}`$ and therefore we do not have the corresponding equations. This allows us to say that the difficulties connected with the signature change are connected with the fact that $`\eta _{ab}`$ are nondynamical variables. In quantum gravity $`\eta _{ab}`$ become dynamical quantities. We see that $`\eta _{ab}`$ are discrete variables and in fact an integration over $`\eta _{ab}`$ should be a summation.
In this paper we would like to show that in quantum gravity there can exist some discrete (nondifferentiable) physical degrees of freedom<sup>*</sup><sup>*</sup>*here we consider the case with the components of $`\eta _{ab}`$ which also can be the dynamical variables. In particular the signature change can happen not on the boundary between different regions with Euclidean and Lorentzian signatures, but instead there can take place a fluctuation (“quantum trembling”) between $`\eta _{ab}=+1`$ and $`\eta _{ab}=-1`$.
## II Fluctuation without the change of the sign of metric determinant
Let us consider the 5D metric
$$ds^2=-\sigma \mathrm{\Delta }(r)dt^2+dr^2+a(r)d\mathrm{\Omega }^2+\sigma \frac{r_1^2}{\mathrm{\Delta }(r)}(d\chi -\omega (r)dt)^2$$
(2)
here $`\chi `$ is the 5<sup>th</sup> extra coordinate; $`r,\theta ,\phi `$ are the $`3D`$ polar coordinates; $`t`$ is the time; $`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta d\phi ^2`$ is the metric on the $`S^2`$ sphere; $`\sigma =\pm 1`$ describes the interchange of metric signature: $`(-,+,+,+,+)\leftrightarrow (+,+,+,+,-)`$. The functions $`\mathrm{\Delta }(r),a(r)`$ are even functions; this means that the 3D part of the metric (2) is a wormhole-like 3D space. The 5D vacuum Einstein equations give us
$`{\displaystyle \frac{\mathrm{\Delta }^{\prime \prime }}{\mathrm{\Delta }}}-{\displaystyle \frac{\mathrm{\Delta }^{\prime 2}}{\mathrm{\Delta }^2}}+{\displaystyle \frac{a^{\prime }\mathrm{\Delta }^{\prime }}{a\mathrm{\Delta }}}-{\displaystyle \frac{r_0^2}{\mathrm{\Delta }^2}}\omega ^{\prime 2}`$ $`=`$ $`0,`$ (3)
$`\omega ^{\prime \prime }-2\omega ^{\prime }{\displaystyle \frac{\mathrm{\Delta }^{\prime }}{\mathrm{\Delta }}}+\omega ^{\prime }{\displaystyle \frac{a^{\prime }}{a}}`$ $`=`$ $`0,`$ (4)
$`{\displaystyle \frac{\mathrm{\Delta }^{\prime 2}}{\mathrm{\Delta }^2}}+{\displaystyle \frac{4}{a}}-{\displaystyle \frac{a^{\prime 2}}{a^2}}-{\displaystyle \frac{r_0^2}{\mathrm{\Delta }^2}}\omega ^{\prime 2}`$ $`=`$ $`0,`$ (5)
$`a^{\prime \prime }-2`$ $`=`$ $`0.`$ (6)
with the following solution
$`a`$ $`=`$ $`r_0^2+r^2,`$ (7)
$`\mathrm{\Delta }`$ $`=`$ $`{\displaystyle \frac{2r_0}{q}}{\displaystyle \frac{r^2+r_0^2}{r^2-r_0^2}},`$ (8)
$`\omega `$ $`=`$ $`{\displaystyle \frac{4r_0^2}{r_1q}}{\displaystyle \frac{r}{r^2-r_0^2}}.`$ (9)
here $`r_0>0`$ and $`q`$ are some constants. We see that the equations (3)-(5) do not depend on $`\sigma `$. This is the most important point for understanding how a quantum fluctuation of the metric signature occurs. We have one solution for two metrics with different signature, and the classical dynamical equations (3)-(5) cannot distinguish them. But the quantum paradigm tells us: that which is not forbidden is permitted. Following this rule we can say that in this situation there should exist fluctuations (“quantum trembling”) between the two signatures.
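As a cross-check, the solution (7)-(9) can be inserted back into the field equations. The following minimal sympy sketch verifies Eqs. (4) and (6); the remaining equations can be checked in the same way.

```python
# Symbolic check that the functions (7)-(9) satisfy Eqs. (4) and (6).
import sympy as sp

r, r0, r1, q = sp.symbols('r r0 r1 q', positive=True)
a     = r0**2 + r**2
Delta = (2*r0/q) * (r**2 + r0**2) / (r**2 - r0**2)
omega = (4*r0**2/(r1*q)) * r / (r**2 - r0**2)

eq4 = sp.diff(omega, r, 2) - 2*sp.diff(omega, r)*sp.diff(Delta, r)/Delta \
      + sp.diff(omega, r)*sp.diff(a, r)/a
eq6 = sp.diff(a, r, 2) - 2

print(sp.simplify(eq4), sp.simplify(eq6))   # both 0
```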
Let $`\eta _1`$ be a quantum state with the $`(-,+,+,+,+)`$ signature and $`\eta _2`$ one with the $`(+,+,+,+,-)`$ signature. Then we can assume that
$$\eta _1=\frac{1}{\sqrt{2}}\left(\genfrac{}{}{0pt}{}{1}{0}\right),\eta _2=\frac{1}{\sqrt{2}}\left(\genfrac{}{}{0pt}{}{0}{1}\right)$$
(10)
as the probabilities for both signatures $`(\pm ,+,+,+,\mp )`$ should be equal. The eigenstates $`\eta _{1,2}`$ describe the states with $`(\mp ,+,+,+,\pm )`$ accordingly.
We see that the eigenstates (10) are the same as the eigenstates of z-component of the spin
$`\left({\displaystyle \frac{\hbar }{2}}\sigma _3\right)\zeta `$ $`=`$ $`\pm {\displaystyle \frac{\hbar }{2}}\zeta `$ (11)
$`\zeta _1={\displaystyle \frac{1}{\sqrt{2}}}\left({\displaystyle \genfrac{}{}{0pt}{}{1}{0}}\right),`$ $`\zeta _2={\displaystyle \frac{1}{\sqrt{2}}}\left({\displaystyle \genfrac{}{}{0pt}{}{0}{1}}\right)`$ (12)
here $`\frac{\hbar }{2}\sigma _3`$ is the operator of the $`z`$-component of the spin, $`\zeta _{1,2}`$ are its eigenstates and $`\pm \frac{\hbar }{2}`$ are its eigenvalues.
This allows us to presume that such “quantum trembling” between two signatures exists and has a physical interpretation in the following sense. In it was proposed a model of a composite wormhole consisting of a 5D throat (which is the solution (7)-(9)) and two Reissner-Nordström black holes attached to the 5D throat on the event horizon, see Fig.1.
In Ref. Wheeler assumed that the geometrical origin of spin probably is connected with a quantum fluctuation orientability $``$ nonorientabilityi.e. we have a “two-valuedness”: “If however both spaces<sup>§</sup><sup>§</sup>§orientable and non-orientable are permissible, then a space with one wormhole has one classical two-valuedness associated with it, and one with $`n`$ wormholes has $`n`$-fold duplicity. If it should turn out that there are $`2^n`$ inequivalent ways to get to a geometry with $`n`$ wormholes, and if it should make sense to assign $`2^n`$ distinct probability amplitudes to the same macroscopic field configuration, then one would be in possession of a non-classical two valuedness with as many spin-like degrees of freedom as there are wormholes. $`\mathrm{}`$ A number of options shows up which is qualitatively of the same order as the number of degrees of freedom of a spinor field, when one goes to the virtual foam-like space of quantum geometrodynamics. It is difficult to say anything more specific about the reasonableness or unreasonableness of this conceivable ”correlation of spin with parity” until more is known about the formalism of quantum geometrodynamic”. Here we offer the above-mentioned “unclassical two-valuedness” connected with “quantum trembling” between two classical solutions with the different metric signature as a model of an inner geometrical structure of spin in spirit of the Wheeler idea “spin = two-valuedness”.
It is very interesting that our composite WH in some approximation is close to a string attached to two D-branes. Let us factorize the 5D throat in the manner represented in Fig.2, i.e. all $`S^2`$ spheres in the left side of the picture are contracted to points on the right side, and the two event horizons are contracted to the two points of attachment of the string to the branes.
As we saw above, “quantum trembling” between signatures leads to the appearance of the spin-like structure, i.e. to fermion degrees of freedom. But it is well known that these degrees of freedom are connected with Grassmannian coordinates. It is possible that this means that the quantum fluctuations between two metric signatures lead to the appearance of a superspace with ordinary and Grassmannian coordinates. In this case the throat of the composite WH + fluctuation between two signatures after the above-mentioned factorization is equivalent to a superstring attached to two branes, see Fig.2. The whole composite WH is equivalent to the D-brane (superstring + two branes). This allows us to assume that the superstring possibly has an inner structure and in this sense it is an approximation for the composite WH. In this connection we can mention the quotation from Ref.: “Discrete degrees of freedom often manifest themselves as fermions in the quantum formalism. It is also conceivable that the continuum theories at the basis of our considerations will have to include string- and D-brane degrees of freedom …”.
## III Fluctuation with the change of the sign of metric determinant
In the previous section we considered the case of a quantum fluctuation between the two signatures $`(-,+,+,+,+)\leftrightarrow (+,+,+,+,-)`$ without a change of the sign of the metric determinant. Here we assume that there can be a situation in which a fluctuation between the Euclidean and Lorentzian metrics is possible. Evidently this can only be a cosmological solution, in contrast with the previous spherically symmetric case. This idea differs from the initial Hawking idea about a change of the metric signature on the boundary between Euclidean and Lorentzian regions: here the Universe contains a region where a quantum fluctuation (“trembling”) between different metric signatures takes place. It can occur in the very Early Universe at the Planck scale.
For example, we examine a vacuum 5D Universe with the metric
$`ds_{(5)}^2=\sigma dt^2+b(t)\left(d\xi +\mathrm{cos}\theta d\phi \right)^2+a(t)d\mathrm{\Omega }_2^2+`$ (13)
$`r_0^2e^{2\psi (t)}\left[d\chi -\omega (t)\left(d\xi +\mathrm{cos}\theta d\phi \right)\right]^2`$ (14)
here $`\sigma =\pm 1`$ for the Euclidean and Lorentzian signatures respectively. The 3D space metric $`dl^2=b(t)\left(d\xi +\mathrm{cos}\theta d\phi \right)^2+a(t)d\mathrm{\Omega }_2^2`$ describes the Hopf bundle with the $`S^1`$ fibre over the $`S^2`$ base. In the 5-bein formalism we have
$$ds_{(5)}^2=\eta _{\overline{A}\overline{B}}e^{\overline{A}}e^{\overline{B}}$$
(15)
here $`\overline{A},\overline{B}`$ are the 5-bein indexes and
$`\eta _{\overline{A}\overline{B}}`$ $`=`$ $`(\pm 1,+1,+1,+1,+1),`$ (16)
$`e^{\overline{0}}`$ $`=`$ $`dt,`$ (17)
$`e^{\overline{1}}`$ $`=`$ $`\sqrt{b}\left(d\xi +\mathrm{cos}\theta d\phi \right),`$ (18)
$`e^{\overline{2}}`$ $`=`$ $`\sqrt{a}d\theta ,`$ (19)
$`e^{\overline{3}}`$ $`=`$ $`\sqrt{a}\mathrm{sin}\theta d\phi ,`$ (20)
$`e^{\overline{5}}`$ $`=`$ $`r_0e^\psi \left[d\chi -\omega (t)\left(d\xi +\mathrm{cos}\theta d\phi \right)\right]`$ (21)
According to the following theorem , :
Let $`G`$ be a structural group of the principal bundle. Then there is a one-to-one correspondence between the $`G`$-invariant metrics
$$ds^2=\phi (x^\alpha )\mathrm{\Sigma }^a\mathrm{\Sigma }_a+g_{\mu \nu }(x^\alpha )dx^\mu dx^\nu $$
(22)
on the total space $`𝒳`$ and the triples $`(g_{\mu \nu },A_\mu ^a,\phi )`$. Here $`g_{\mu \nu }`$ is the 4D Einstein pseudo-Riemannian metric on the base; $`A_\mu ^a`$ are the gauge fields of the group $`G`$ (the nondiagonal components of the multidimensional metric); $`\phi \gamma _{ab}`$ is the symmetric metric on the fibre ($`\mathrm{\Sigma }^a=\sigma ^a+A_\mu ^a(x^\alpha )dx^\mu `$, $`\mathrm{\Sigma }_a=\gamma _{ab}\mathrm{\Sigma }^b`$, $`\gamma _{ab}=\delta _{ab}`$; $`a=5,\dots ,\mathrm{dim}G`$ is the index on the fibre and $`\mu =0,1,2,3`$ is the index on the base).
we have the electromagnetic potential
$$A=\omega (t)\left(d\xi +\mathrm{cos}\theta d\phi \right)=\frac{\omega }{\sqrt{b}}e^{\overline{1}}$$
(23)
For this potential the Maxwell tensor is
$$F=dA=\frac{\dot{\omega }}{\sqrt{b}}e^{\overline{0}}\wedge e^{\overline{1}}-\frac{\omega }{a}e^{\overline{2}}\wedge e^{\overline{3}}$$
(24)
Therefore we have the electrical field
$$E_{\overline{1}}=F_{\overline{0}\overline{1}}=\frac{\dot{\omega }}{\sqrt{b}}$$
(25)
and the magnetic field
$$H_{\overline{1}}=\frac{1}{2}ϵ_{1\overline{j}\overline{k}}F^{\overline{j}\overline{k}}=\frac{\omega }{a}$$
(26)
Let us write down the vacuum 5D Einstein equations
$`G_{\overline{0}\overline{0}}\propto 2{\displaystyle \frac{\dot{b}\dot{\psi }}{b}}+4{\displaystyle \frac{\dot{a}\dot{\psi }}{a}}+2{\displaystyle \frac{\dot{a}\dot{b}}{ab}}+{\displaystyle \frac{\dot{a}^2}{a^2}}+\sigma \left({\displaystyle \frac{b}{a^2}}-{\displaystyle \frac{4}{a}}\right)+r_0^2e^{2\psi }\left(\sigma H_{\overline{1}}^2-E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (27)
$`G_{\overline{1}\overline{1}}\propto 4\ddot{\psi }+4\dot{\psi }^2+4{\displaystyle \frac{\ddot{a}}{a}}+4{\displaystyle \frac{\dot{a}\dot{\psi }}{a}}+\sigma \left(3{\displaystyle \frac{b}{a^2}}-{\displaystyle \frac{4}{a}}\right)-{\displaystyle \frac{\dot{a}^2}{a^2}}+r_0^2e^{2\psi }\left(\sigma H_{\overline{1}}^2-E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (28)
$`G_{\overline{2}\overline{2}}=G_{\overline{3}\overline{3}}`$ (29)
$`\propto 4\ddot{\psi }+4\dot{\psi }^2+2{\displaystyle \frac{\ddot{b}}{b}}+2{\displaystyle \frac{\dot{b}\dot{\psi }}{b}}-{\displaystyle \frac{\dot{b}^2}{b^2}}+2{\displaystyle \frac{\ddot{a}}{a}}+2{\displaystyle \frac{\dot{a}\dot{\psi }}{a}}+{\displaystyle \frac{\dot{a}\dot{b}}{ab}}-{\displaystyle \frac{\dot{a}^2}{a^2}}-\sigma {\displaystyle \frac{b}{a^2}}-r_0^2e^{2\psi }\left(\sigma H_{\overline{1}}^2-E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (30)
$`R_{\overline{5}\overline{5}}\propto \ddot{\psi }+\dot{\psi }^2+{\displaystyle \frac{\dot{a}\dot{\psi }}{a}}+{\displaystyle \frac{\dot{b}\dot{\psi }}{2b}}+{\displaystyle \frac{r_0^2}{2}}e^{2\psi }\left(\sigma H_{\overline{1}}^2+E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (31)
$`R_{\overline{2}\overline{5}}\propto \ddot{\omega }+\dot{\omega }\left({\displaystyle \frac{\dot{a}}{a}}-{\displaystyle \frac{\dot{b}}{2b}}+3\dot{\psi }\right)-\sigma {\displaystyle \frac{b}{a^2}}\omega `$ $`=`$ $`0`$ (32)
where $`G_{\overline{A}\overline{B}}=R_{\overline{A}\overline{B}}-\frac{1}{2}\eta _{\overline{A}\overline{B}}R`$ is the Einstein tensor.
Now we can formulate our basic assumption: under some conditions, in one region (for example, in the very Early Universe), a quantum fluctuation between the Euclidean and Lorentzian metric signatures can exist. This means that in the classical equations (27)-(32) there arises a quantum fluctuating quantity $`\sigma `$ defining the metric signature. In other words, we have two copies of the classical equations: one with $`\sigma =+1`$ and another with $`\sigma =-1`$. The equation (27) is invariant relative to the $`\sigma =\pm 1`$ exchange. Let us consider the remaining equations with $`\sigma `$. The basic question arising in this situation is: what is the probability for each equation with $`\sigma =+1`$ and $`\sigma =-1`$ in the system of equations (28)-(32)?
As in Ref. we will define this probability starting from the algorithmic complexity (AC) of each equation. What is the AC and what is its physical interpretation? In the 1960s Kolmogorov defined the notion of probability from the algorithmic point of view . His basic idea is very simple: the probability of the appearance of some object depends on its AC, and the AC is defined as the minimal length of an algorithm describing the given object on some universal computer (a Turing machine, for example). Simply speaking: the simpler, the more probable.
The key word for such definition of the probability is the word “minimal”. In this case the length of the algorithm is determined uniquely. The exact definition is
The algorithmic complexity $`K(x|y)`$ of the object $`x`$ given the object $`y`$ is the minimal length of the “program” $`P`$, written as a sequence of zeros and ones, which allows us to construct $`x`$ having $`y`$:
$$K(x|y)=\underset{A(P,y)=x}{\mathrm{min}}l(P)$$
(33)
where $`l(P)`$ is length of the program $`P`$; $`A(P,y)`$ is the algorithm calculating object $`x`$, using the program $`P`$, when the object $`y`$ is given.
In this connection we can recall that ’t Hooft in Ref. has proposed to investigate the Universe as a certain computer: “The finiteness of entropy of a black hole implies that the number of bits information that can be stored there is finite and determinated by the area of its horizon. This gave us the idea that Nature at the Planck scale is an information processing machine like a computer, or more precisely, a cellular automaton”.
Now we can presuppose that the fluctuations of the metric signature occur as fluctuations between the algorithms (the Einstein equations with different metric signature)
$$\begin{array}{ccc}\sigma =+1& & \sigma =-1\\ & & \\ G_{\overline{0}\overline{0}}^+& \longleftrightarrow & G_{\overline{0}\overline{0}}^{-}\\ G_{\overline{1}\overline{1}}^+& \longleftrightarrow & G_{\overline{1}\overline{1}}^{-}\\ G_{\overline{2}\overline{2}}^+& \longleftrightarrow & G_{\overline{2}\overline{2}}^{-}\\ G_{\overline{3}\overline{3}}^+& \longleftrightarrow & G_{\overline{3}\overline{3}}^{-}\\ R_{\overline{5}\overline{5}}^+& \longleftrightarrow & R_{\overline{5}\overline{5}}^{-}\end{array}$$
(34)
The signs $`\pm `$ denote the belonging of the appropriate equation to the Euclidean or Lorentzian mode. The expression (34) designates that the appearance of the quantum magnitude $`\sigma `$ leads to a quantum fluctuation $`R_{\overline{A}\overline{B}}^+\leftrightarrow R_{\overline{A}\overline{B}}^{-}`$ or $`G_{\overline{A}\overline{B}}^+\leftrightarrow G_{\overline{A}\overline{B}}^{-}`$. The question is: how can we calculate a probability for each $`R_{\overline{A}\overline{B}}^\pm `$ ($`G_{\overline{A}\overline{B}}^\pm `$) equation? Our assumption for these calculations is that these probabilities are connected with the AC of each equation.
### A Fluctuation $`G_{\overline{2}\overline{5}}^+\leftrightarrow G_{\overline{2}\overline{5}}^{-}`$.
The $`R_{\overline{2}\overline{5}}`$ equation in the Euclidean mode is
$$\ddot{\omega }+\dot{\omega }\left(\frac{\dot{a}}{a}-\frac{\dot{b}}{2b}+3\dot{\psi }\right)-\frac{b}{a^2}\omega =0$$
(35)
and in the Lorentzian mode is
$$\ddot{\omega }+\dot{\omega }\left(\frac{\dot{a}}{a}-\frac{\dot{b}}{2b}+3\dot{\psi }\right)+\frac{b}{a^2}\omega =0$$
(36)
Let us consider the $`\psi =0`$ case (below we will see that it is in agreement with the $`R_{\overline{5}\overline{5}}`$ equation). It is easy to see that the first case can be deduced from the instanton condition
$$E_1^2=H_1^2\quad \text{or}\quad \frac{\omega }{a}=\pm \frac{\dot{\omega }}{\sqrt{b}}$$
(37)
The second equation (36) does not admit such a reduction from the instanton condition (37) to the field equation. It is a well-known fact that instantons can exist only in Euclidean space. This allows us to say that the Euclidean equation (35) is simpler from the algorithmic point of view than the Lorentzian equation (36). In a first rough approximation we can suppose that the probability of the Euclidean mode is $`p_{25}^+=1`$ and consequently for the Lorentzian mode $`p_{25}^{-}=0`$. Strictly speaking the exact definition for each $`p_{ab}^\pm `$ probability should be
$$p_{ab}^\pm =\frac{e^{-K_{ab}^\pm }}{e^{-K_{ab}^+}+e^{-K_{ab}^{-}}}$$
(38)
here $`K_{ab}^\pm `$ is the AC for the $`R_{ab}^\pm =0`$ equation. If $`K_{25}^+\ll K_{25}^{-}`$ then we have $`p_{25}^+=1`$ and $`p_{25}^{-}=0`$.
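A toy illustration of Eq. (38) is given below: whichever mode has the algorithmically simpler equation (smaller $`K`$) dominates, and its probability tends to one as the complexities separate. The numerical values of $`K`$ in the sketch are arbitrary and used only for illustration.

```python
# Toy illustration of Eq. (38): the mode with smaller complexity K dominates.
import math

def p_plus(K_plus, K_minus):
    return math.exp(-K_plus) / (math.exp(-K_plus) + math.exp(-K_minus))

for K_plus, K_minus in [(10, 10), (10, 15), (10, 50)]:
    print(K_plus, K_minus, p_plus(K_plus, K_minus))   # 0.5, ~0.993, ~1.0
```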
### B Fluctuation $`R_{\overline{5}\overline{5}}^+\leftrightarrow R_{\overline{5}\overline{5}}^{-}`$.
The $`R_{\overline{5}\overline{5}}`$ equation in the Euclidean mode is
$$\ddot{\psi }+\dot{\psi }^2+\frac{\dot{a}}{a}\dot{\psi }+\frac{\dot{b}}{b}\dot{\psi }+\frac{r_0^2}{2}e^{2\psi }\left(H_{\overline{1}}^2+E_{\overline{1}}^2\right)=0,$$
(39)
and in the Lorentzian mode
$$\ddot{\psi }+\dot{\psi }^2+\frac{\dot{a}}{a}\dot{\psi }+\frac{\dot{b}}{b}\dot{\psi }+\frac{r_0^2}{2}e^{2\psi }\left(-H_{\overline{1}}^2+E_{\overline{1}}^2\right)=0,$$
(40)
It is easy to see that the second equation (40) (Lorentzian mode) is much simpler, as it has the trivial solution
$$\psi =0$$
(41)
provided that we have the instanton condition
$$H_{\overline{1}}^2=E_{\overline{1}}^2.$$
(42)
In contrast with the previous subsection we see that in this case the Lorentzian mode is preferable. Likewise, in contrast to the previous case, we can suppose that in a first rough approximation the probability of the Euclidean mode of this equation is $`p_{55}^+=0`$ and consequently for the Lorentzian mode $`p_{55}^{-}=1`$.
### C Fluctuation $`G_{\overline{1}\overline{1}}^+\leftrightarrow G_{\overline{1}\overline{1}}^{-}`$ and $`G_{\overline{2}\overline{2}}^+\leftrightarrow G_{\overline{2}\overline{2}}^{-}`$
Taking into account (41) we can write these equations in the following form
$`4{\displaystyle \frac{\ddot{a}}{a}}+\sigma \left(3{\displaystyle \frac{b}{a^2}}-{\displaystyle \frac{4}{a}}\right)-{\displaystyle \frac{\dot{a}^2}{a^2}}+r_0^2e^{2\psi }\left(\sigma H_{\overline{1}}^2-E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (43)
$`2{\displaystyle \frac{\ddot{b}}{b}}-{\displaystyle \frac{\dot{b}^2}{b^2}}+2{\displaystyle \frac{\ddot{a}}{a}}+{\displaystyle \frac{\dot{a}\dot{b}}{ab}}-{\displaystyle \frac{\dot{a}^2}{a^2}}-\sigma {\displaystyle \frac{b}{a^2}}-r_0^2e^{2\psi }\left(\sigma H_{\overline{1}}^2-E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (44)
In the Euclidean mode ($`\sigma =+1`$) with the instanton condition (42) we can have $`b=a`$ (an isotropic Universe) and we have only one equation
$$4\frac{\ddot{a}}{a}-\frac{\dot{a}^2}{a^2}-\frac{1}{a}=0.$$
(45)
In the Lorentzian mode ($`\sigma =-1`$) $`b\ne a`$ (this is the case of a nonisotropic Universe) and we have two equations
$`4{\displaystyle \frac{\ddot{a}}{a}}-\left(3{\displaystyle \frac{b}{a^2}}-{\displaystyle \frac{4}{a}}\right)-{\displaystyle \frac{\dot{a}^2}{a^2}}-r_0^2e^{2\psi }\left(H_{\overline{1}}^2+E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (46)
$`2{\displaystyle \frac{\ddot{b}}{b}}-{\displaystyle \frac{\dot{b}^2}{b^2}}+2{\displaystyle \frac{\ddot{a}}{a}}+{\displaystyle \frac{\dot{a}\dot{b}}{ab}}-{\displaystyle \frac{\dot{a}^2}{a^2}}+{\displaystyle \frac{b}{a^2}}+r_0^2e^{2\psi }\left(H_{\overline{1}}^2+E_{\overline{1}}^2\right)`$ $`=`$ $`0,`$ (47)
We see that in the Lorentzian mode we have a nonisotropic Universe as a consequence of the presence of the magnetic field $`H_1=\omega /a`$.
Certainly the single equation in the Euclidean mode with the instanton condition (42) is simpler than the two equations in the Lorentzian mode. Our previous arguments concerning the connection between the probability and the Kolmogorov algorithmic complexity permit us to assume that in a first rough approximation the probability for (45) in the Euclidean mode is $`p_{11}^+=1`$ and consequently $`p_{11}^{-}=0`$.
### D Fluctuation $`G_{\overline{0}\overline{0}}^+\leftrightarrow G_{\overline{0}\overline{0}}^{-}`$
Let us write the equation $`G_{\overline{0}\overline{0}}^\pm =0`$ in the following form
$$2\frac{\dot{b}\dot{\psi }}{b}+4\frac{\dot{a}\dot{\psi }}{a}+2\frac{\dot{a}\dot{b}}{ab}+\frac{\dot{a}^2}{a^2}+\sigma \left(-\frac{4}{a}+\frac{b}{a^2}\right)+r_0^2e^{2\psi }\left(\sigma H_{\overline{1}}^2-E_{\overline{1}}^2\right)=0$$
(48)
Under the conditions $`\psi =0`$, the instanton condition, and $`b=a`$, this equation in the Euclidean mode is
$$\frac{\dot{a}^2}{a^2}-\frac{1}{a}=0$$
(49)
and in the Lorentzian mode
$$3\frac{\dot{a}^2}{a^2}+3\frac{1}{a}-r_0^2e^{2\psi }\left(H_{\overline{1}}^2+E_{\overline{1}}^2\right)=0.$$
(50)
Certainly the first Euclidean equation is simpler than the second Lorentzian one. In a first rough approximation this allows us to put $`p_{00}^+=1`$ and $`p_{00}^{-}=0`$.
### E Mixed system of the equations
As the probabilities for the equations (34) are only $`p=0,1`$, we can write the mixed equation system for the Universe fluctuating between the Euclidean and Lorentzian modes
$`{\displaystyle \frac{\dot{a}^2}{a^2}}-{\displaystyle \frac{1}{a}}`$ $`=`$ $`0,`$ (51)
$`\dot{\omega }`$ $`=`$ $`\pm {\displaystyle \frac{\omega }{\sqrt{a}}},`$ (52)
$`4{\displaystyle \frac{\ddot{a}}{a}}-{\displaystyle \frac{\dot{a}^2}{a^2}}-{\displaystyle \frac{1}{a}}=0.`$ (53)
here $`b=a`$, $`\psi =0`$ and the instanton condition (42) is applied. It is easy to find the following solution
$`a`$ $`=`$ $`{\displaystyle \frac{t^2}{4}},`$ (54)
$`\omega `$ $`=`$ $`t^2.`$ (55)
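It is straightforward to verify this solution against the mixed system; a minimal sympy sketch (using the ‘+’ branch of Eq. (52)) is given below.

```python
# Check that a(t) = t^2/4 and omega(t) = t^2 solve the mixed system (51)-(53).
import sympy as sp

t = sp.symbols('t', positive=True)
a, w = t**2/4, t**2

eq51 = sp.diff(a, t)**2/a**2 - 1/a
eq52 = sp.diff(w, t) - w/sp.sqrt(a)          # '+' branch of Eq. (52)
eq53 = 4*sp.diff(a, t, 2)/a - sp.diff(a, t)**2/a**2 - 1/a

print(sp.simplify(eq51), sp.simplify(eq52), sp.simplify(eq53))   # all 0
```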
### F The mixed origin of the Universe
Hawking offers the following model of the quantum birth of Universe: at first appears a small piece of an Euclidean space ($`R^4`$, $`S^4`$ or another smooth non-singular Euclidean space) then a Lorentzian Universe starts from a boundary of the initial Euclidean piece. In this scenario there is a hypersurface between Euclidean and Lorentzian spaces, see Fig.3.
In this section we assume that at first there is a quantum Universe fluctuated between Euclidean and Lorentzian modes and later happens a quantum transition to the Lorentzian mode.
The 4D metric part of the MD metric with (54) is
$$ds_{(4)}^2=d\tau ^2+\tau ^2d\mathrm{\Omega }_3^2$$
(56)
where $`\tau `$ is the Euclidean time and $`d\mathrm{\Omega }_3^2=\frac{1}{4}[(d\xi +\mathrm{cos}\theta d\phi )^2+(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2)]`$ is the metric of the 3D unit sphere $`S^3`$. The (56) metric is the metric for the flat $`R^4`$ Euclidean space.
Now we can assume that the fluctuating (between Euclidean and Lorentzian modes), non-singular, multidimensional, empty Universe is born from “Nothing” as a piece ($`\tau \lesssim \tau _{Pl}`$) of the $`R^4\times S^1`$ Euclidean $`\leftrightarrow `$ Lorentzian space. Then at some moment a quantum transition to one mode (Lorentzian) takes place and simultaneously (or later) the $`G_{55}`$ metric component becomes a non-dynamical variable. All this gives us the 4D Lorentzian Universe, see Fig.4.
Thus, the piece of the solution for $`\tau \lesssim \tau _{Pl}`$ can be interpreted as the quantum birth of the Universe with a fluctuating signature of the metric. Roughly speaking, the 4D Universe (the base of the principal bundle) is in the Euclidean mode but the 5<sup>th</sup> extra dimension (the fibre of the principal bundle) is in the Lorentzian mode. We also have the following interesting result: the fluctuation of the metric signature leads to freezing of the part of the MD metric connected with the 5<sup>th</sup> dimension.
## IV Conclusions
In this paper we considered two examples of nondifferentiable dynamic: the interchange of the sign between two components of the multidimensional metric and the quantum fluctuation between Euclidean and Lorentzian metrics. The existence of such a nondifferentiable dynamic can lead to interesting physical consequences: the appearance of a spin-like structure on the throat of a composite wormhole (with a possible connection to a superstring attached to two D-branes), the freezing of the physical degrees of freedom connected with the metric on the extra dimensions, and the quantum birth of a non-singular flat Universe with matter (the gauge field given by the non-diagonal components of the MD metric) and with a quantum fluctuation of the metric signature between Euclidean and Lorentzian modes.
It is necessary to note that all these possibilities occur in a vacuum, in the spirit of the Einstein idea that the right-hand side of the gravitational equations should be zero.
The Planck length paradigm says us that on this level the various unusual<sup>\*†</sup><sup>\*†</sup>\*†from the viewpoint of the non-Planckian physic phenomena can occur. In the application to our case this permits us to say that the discrete (nondifferentiable) quantities can fluctuate on this level<sup>\*‡</sup><sup>\*‡</sup>\*‡we can name the dynamic of these quantities as the nondifferentiable dynamic.. It is possible that the physical phenomena connected with the nondifferentiable dynamic can play very important role in the very Early Universe or on the level of the spacetime foam.
The basic idea presented by ’t Hooft in Ref. can be understood as saying that the fundamental states at the Planck level are classical states and that the quantization should then be stochastic: “… In our theory, quantum states are not the primary degrees of freedom. The primary degrees of freedom are deterministic states …”. It is possible that the idea presented here has some connection with the ’t Hooft stochastic quantization model. Actually, in the first case (section II) we have the quantum fluctuation between two classical states (between the two metric signatures $`\eta _{ab}=(\pm ,+,+,+,\mp )`$), and in the second one we have the quantum fluctuating quantity $`\sigma `$ in the classical Einstein equations, which leads to the fluctuation between Euclidean and Lorentzian modes with the stochastic definition of probability according to equation (38).
## V Acknowledgments
This work is supported by a Georg Forster Research Fellowship from the Alexander von Humboldt Foundation. I would like to thank H.-J. Schmidt for the invitation to Potsdam Universität for research and D. Singleton for discussion.
Comment on “Demonstration of the Casimir Force in the 0.6 to 6 $`\mu `$m Range”
In a recent Letter , Lamoreaux reports a measurement of the Casimir force for distances in the 0.6 to 6 $`\mu `$m range. The force has been measured between a flat and a spherical plate. Both plates are coated with a layer of Cu, covered with an additional 0.5 $`\mu `$m thick layer of Au. The author compares his experimental data to theoretical predictions and reports an agreement at 5% between theory and experiment if a pure Cu surface is assumed. Since his theoretical evaluation gives very different values for the Casimir force between Au surfaces and Cu surfaces, there results a net discrepancy between expected and experimentally observed values.
We have recently recalculated the Casimir force between metallic mirrors and obtained results differing significantly from , in particular for Au mirrors. Details about the evaluation procedure, the interpolation and extrapolation of optical data, and the numerical integration techniques are given in . Here, we restrict our attention on the Au/Cu problem underlined by Lamoreaux.
The upper graph of figure 1 shows the imaginary part of the dielectric constant $`\epsilon ^{\prime \prime }(\omega )`$ as a function of frequency $`\omega `$ for Au and Cu. All optical data are taken from . At low frequencies they are extrapolated by a Drude model which is consistent with present theoretical knowledge of optical properties of metals and, at the same time, fits quite nicely higher frequency optical data. Since the optical response functions are very similar for Au and Cu, the Casimir forces evaluated from these functions are expected to be nearly equal.
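For illustration, a low-frequency Drude extrapolation of the type used here can be sketched as follows; the plasma frequency and relaxation rate quoted in the code are typical literature values for Au, given only as an example and not necessarily the parameters used in our evaluation.

```python
# Illustrative Drude extrapolation: eps''(w) = wp^2 * gamma / (w * (w^2 + gamma^2)).
# wp ~ 9 eV and gamma ~ 0.035 eV are typical literature values for gold,
# used here purely as an example.
def eps_imag_drude(omega_eV, omega_p_eV=9.0, gamma_eV=0.035):
    return omega_p_eV**2 * gamma_eV / (omega_eV * (omega_eV**2 + gamma_eV**2))

for w in [0.01, 0.1, 1.0]:               # photon energies in eV
    print(w, eps_imag_drude(w))
```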
In the experiment, the Casimir force is measured in the plane-sphere geometry. Theoretically it is evaluated by using the proximity force theorem. We do not discuss here the validity of this approximation but focus our attention on the effect of finite conductivity. We calculate the reduction factor $`\eta `$ (notation of ; notation $`\eta _E`$ in ) of the force in the plane-sphere geometry as the reduction factor of the energy evaluated in the plane-plane configuration. The frequency dependent reflection coefficients are derived from the dielectric constant, using causality relations, and $`\eta `$ is then deduced through numerical integrations.
The lower graph of figure 1 shows $`\eta `$ for Au and Cu with, as expected, equal values at better than 1% in the range of distances studied in the experiment. This contradicts theoretical values obtained by Lamoreaux. For Au at $`0.6\mu `$m our value $`\eta =0.87`$ exceeds by 12% the value $`\eta =0.78`$ given in , while at the same distance the values for Cu are compatible within 2%.
This result clears up the Au/Cu discrepancy pointed out in . Besides this specific difficulty, more work is needed, on both the experimental and theoretical side, to reach an accurate agreement between theoretical expectation and experimental measurements of Casimir force (see and references therein).
Astrid Lambrecht and Serge Reynaud
Laboratoire Kastler Brossel
Campus Jussieu, case 74
75252 Paris Cedex 05, France
PACS numbers: 12.20 Fv, 07.07 Mp, 03.70 +k
# Enabling Concepts for a Dual Spacecraft Formation-Flying Optical Interferometer for NASA’s ST3 Mission
## 1 Introduction
The last 15 years has seen considerable progress in the conception and development of ideas for multi-spacecraft optical interferometers. Stachnik, Melroy, and Arnold (1984) first laid out the conceptual framework for an orbiting Michelson interferometer, and the following year the European Space Agency devoted a colloquium to spacecraft arrays of this type. Particular emphasis in a number of studies (Johnston & Nock 1990; DeCou 1991) was placed on the choice of orbits to minimize fuel usage and provide maximal UV coverage. DeCou (1991) described a family of orbits near geostationary which were particularly efficient in this respect.
In the early 1990’s a coherent effort began at the NASA/Caltech Jet Propulsion Laboratory to develop a consistent and detailed design for a three-spacecraft Michelson interferometer known initially as the Separated Spacecraft Interferometer (SSI; Kulkarni 1994), and then as the New Millenium Interferometer (NMI; McGuire & Colavita 1996; Blackwood et al 1998) because of its alignment with NASA’s New Millenium Technology program, in which it was scheduled as Deep Space 3 (DS3).
However, funding constraints eliminated the original three-spacecraft baseline design in late 1998 and a de-scoped version involving somewhat reduced capability and requiring only two spacecraft, was adopted. The new mission, known as Space Technology 3 (ST3) due to re-alignment of the parent NASA program, has moved into a prototype construction phase, with launch now set for 2005.
In this report we describe the primary enabling concepts for the dual spacecraft system, which combine specific choices of array geometry with a novel fixed optical delay line capable of supporting a continuously variable interferometer baseline from 40 to 200 m. The next section describes the choice of geometry, followed by a section describing the overall optical layout, and the fixed delay line. A related paper in this volume (Lay et al.) describes in detail the operation of the interferometer system.
## 2 Observing Geometry
### 2.1 Original dual-spacecraft concept
Before the initial three-spacecraft configuration for DS3 was adopted as the working design, a variety of different configurations were considered which gave various levels of technology demonstration with respect to a formation–flying multiple spacecraft interferometer. One of these proposed early configurations was in fact a dual–spacecraft system (Folkner 1996). The basic geometry of this configuration is shown in Figure 1. Here the collector spacecraft (which acts simply as a moving relay mirror) travels along a parabolic trajectory with the combiner spacecraft at the focus of the parabola, which we choose as the origin in this plot. The combiner spacecraft then carries a fixed optical delay line which compensates for the additional pathlength that the collector spacecraft produces. This is indicated schematically by showing the fixed delay line as if it were reflecting off another relay mirror at the surface of the reference parabola, thus ensuring equal delay in the two arms of the interferometer.
For the pictured geometry, the collector spacecraft position $`(x,y)`$ must satisfy:
$$\sqrt{B^2+y^2}=\tau +y$$
(1)
where $`x`$ coordinate is defined as the projected baseline $`B`$, and the total fixed delay carried by the combiner spacecraft is $`\tau `$. In the case of Fig. 1 the $`y`$-position of the collector spacecraft was always negative with respect to the combiner for simplicity in the relay optics. Equation (1) then determines the required collector spacecraft position for a given projected baseline
$$y=\frac{B^2}{2\tau }\left[1-\frac{\tau ^2}{B^2}\right].$$
(2)
For the configuration of Fig. 1, the fixed delay is $`\tau =100`$ m, and the maximum baseline (at $`y=0`$) is then also 100 m.
The difficulty with this approach is the requirement that the combiner spacecraft must carry a 100 m fixed delay line in a very compact configuration, of order 1-2 m in overall length. This amount of delay is not easily achievable in a broad-band system (450-1000 nm) as was planned for DS3. Approaches involving $`50`$ reflections between opposing spherical or flat mirrors typically produce too much wavefront distortion, absorption, and scattering losses to be useful for a white light interferometer. Alternatives such as the use of optical fiber also do not afford the broadband single-mode operation required for a delay line.
### 2.2 Modified approach
Figure 2 shows a modified approach to the two spacecraft system in which a much shorter fixed delay line can be utilized. Here the spacecraft configuration entails a collector spacecraft position which moves along the reference parabola above the combiner spacecraft with respect to the source direction. Referring to equation (2) above, $`y=0`$ when $`B=\tau `$. When $`B`$ exceeds the fixed delay $`\tau `$, the collector spacecraft $`y`$-value then becomes positive. For $`B>>\tau `$,
$$y\approx \frac{B^2}{2\tau };$$
(3)
thus the interspacecraft distance $`D=\sqrt{y^2+B^2}`$ grows quadratically with baseline.
For the de-scoped NASA mission ST3, preliminary design considerations indicated that a fixed delay line of $`20\text{m}`$ stored delay was achievable within the constraints of spacecraft size and instrument visibility budget. In a later section we provide details of the fixed delay line design. Using $`\tau =20`$ m, and the additional constraint of $`D\lesssim 1\text{km}`$ imposed by formation–flying requirements, ST3 is able to achieve a maximum interferometer baseline of about 200 m.
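A quick check of these numbers, using Eq. (2) with the 20 m fixed delay, is sketched below.

```python
# Collector-spacecraft position and inter-spacecraft distance versus projected
# baseline B for the modified geometry with a 20 m fixed delay.
import numpy as np

tau = 20.0                                       # fixed delay [m]
for B in [40.0, 100.0, 200.0]:                   # projected baselines [m]
    y = (B**2 / (2*tau)) * (1 - tau**2 / B**2)   # Eq. (2)
    D = np.hypot(B, y)                           # inter-spacecraft distance
    print(B, y, D)        # B = 200 m gives D close to the ~1 km limit
```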
## 3 Optical Design
Figure 3 indicates schematically the optical design for ST3 in the adopted dual spacecraft configuration. The optical train is almost completely planar throughout the system, and employs an athermalized ultra-stable composite optical bench. In the combiner spacecraft (which will function as a standalone fixed-baseline interferometer) a pair of outboard siderostats feed into an afocal gregorian compressor with a 1 arcmin fieldstop at the internal focus.
The 12 cm incoming beams are then compressed to 3 cm and fed into the delay lines, one fixed and one movable. After this the beams enter the beam combiner. An outer 0.5 cm annular portion of each beam is stripped off for guiding, and the central 2 cm portion of the beam is used for fringe tracking (using a single-element avalanche-photodiode detector in one of the combined beams). The other combined 2 cm beam is dispersed in a prism and integrated coherently on an 80 channel CCD fringe spectrometer.
### 3.1 Fixed delay line
Perspective and schematic views of the fixed delay line are shown in Fig. 4. The design employs 3 nested cat’s eye retroreflectors, two of which are in a Cassegrain configuration and the third a Newtonian. As noted in the plot, the optics are very slow, giving large depth of focus and minimal impact on wavefront distortion. Three of the 13 reflections occur at foci and have little wavefront effect. However, due to the large magnification of the system, focal plane flats must be sized generously to match field of view requirements.
###### Acknowledgements.
We thank M. Shao and M. Colavita for their invaluable suggestions & support of this work. The research described here was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
# Experimental Techniques
Course given at the VII Mexican Workshop on Particles and Fields, Merida, Yucatán, Mexico, November 10-17, 1999. Proceedings to be published by AIP.
## Introduction
In this course we will not try to reproduce standard text books about detectors (see for example jurgen:fernow ; jurgen:kleinknecht ) and descriptions of interaction with matter (a good summary can be found in jurgen:pdg ), covering in great details all aspects of experimental high energy physics. Due to the time restrictions ($`3\times 50\text{min}`$ were assigned by the organizers for this course) we will rather discuss examples on the use of some detector families. The selection is highly biased, since the author decided to use examples he knows best, e.g. he either worked on some of the detectors directly, or they were part of an experiment he participated in.
In a particle physics experiment detectors of various kinds are placed downstream (some even upstream) of the fixed target or surrounding a collision point of colliding beams. In general different detectors and electronic apparatus are required to perform the different tasks most experiments require: tracking, momentum analysis, particle identification, neutral particle detection, triggering, and data acquisition. Clearly a very important part is also the analysis of data, which comes in two parts: the event reconstruction, using all the detector information available (which requires for example a good alignment and calibration of all parts), and the final physics analysis. All these pieces cannot live by themselves; to perform a successful experiment requires that all components, be it hardware or software, work together to reach the final goal: a good and significant physics result.
Usually resources (both money and person-power) are limited when designing, building and operating an experiment. Careful consideration is necessary to decide, if some fancy or expensive detector is really necessary to obtain the physics goals or if a simpler, less expensive version would be sufficient, and the free resources can be applied to more essential parts of the experiments. Sometimes there are also political constraints that make a clear technical decision more complicated. All these arguments hold for the hardware as well as for the software part of the experiment.
In the following we will discuss generically MWPCs, the properties of a silicon microstrip array used in the SELEX experiment, and a longer part about RICH detectors used in WA89, SELEX, and CKM.
## Multiwire Proportional and Drift Chambers
This section should be seen as an introduction to energy loss in matter and its use for particle detection, with these devices as examples.
Long time ago Geiger invented his counter: A charged particle will ionize gas, and the liberated electrons drift under the influence of an electric field to a thin counting wire. Close to the wire the field is strong enough to accelerate the electrons to ionize again further gas atoms or molecules (gas multiplication). In typical applications multiplication gains of $`10^5`$ or more can be obtained. The electrons drift to the wire, producing there a fast signal which is usually not resolved by the preamplifier, but the backwards drifting ions induce an additional, slower, signal, which can be easily detected.
Without any further consideration this will actually lead to a spark, usually something not welcome (but in the 60’s “spark chambers” where used for particle detection), since in the multiplication avalanche not only ions, but also excited atoms or molecules are produced, which will de-excite with the emission of a (UV)-photon, which can again ionize. Two tricks already used by Geiger help to avoid this: 1) A sufficiently large resistor is included in the HV line, so that the current drawn will lead to a voltage drop. 2) The addition of so-called quencher gases, usually some alcohol, to the detector gas, who will absorb the UV-photons without being ionized.
In the 60’s several groups, most famous the group around Charpak at CERN, started to develop counters later known as MWPC: Instead of one wire, a lot of wires are stretch parallel, with the electrons drifting to the nearest wire. This allows construction of larger area detectors, something necessary in the time when people tried to develop electronic detectors to replace the bubble chambers. The resolution of the detector is given by the wire distance, and space points can be obtained be putting MWPCs under different angles. In practice the wire distance is limited to about $`1\text{mm}`$ in small size chambers, and even more in larger areas for two reasons: The wires have to be stretched, supported by a strong frame, and, even stretched, electrostatic deflection has to be taken into account.
In the mid-60’s, a new idea came up, first realized in Heidelberg by Heintze and Walenta jurgen:Heintze : If the time a particle passes the counting gas is known, e.g. by using for example an scintillator somewhere in the experiment, the drift time of the electrons from the point of ionization to the wire contains also space information. Putting the wires further apart, and forming with cathodes a homogeneous electric field (with exception close to the wire), a constant drift velocity in the order of several $`\text{cm/}\mu \text{sec}`$ is observed. The resolution, limited by the diffusion of the electrons, obtained with drift chambers can be well below $`100\mu \text{m}`$, even with drifts of 10’s of cm. The advantage is clearly the reduced number of wires and readout channels, with the additional cost of the need of measuring the drift time with some sort of TDC.
The energy loss of moderately charged particles (other than electrons) in matter is primarily ionization. The mean rate of energy loss is given by the Bethe-Bloch equation:
$$\frac{dE}{dx}=Kz^2\frac{Z}{A}\frac{1}{\beta ^2}\left[\frac{1}{2}\mathrm{ln}\frac{2m_ec^2\beta ^2\gamma ^2T_{\mathrm{max}}}{I^2}\beta ^2\frac{\delta }{2}\right]$$
(1)
Here $`K`$ is some constant, $`ze`$ the charge of the particle, $`Z`$ and $`A`$ the atomic number and charge of the medium, $`T_{\mathrm{max}}`$ the maximum energy of a free electron after one collision, $`I`$ the mean excitation energy of the medium, and $`\delta `$ is a correction factor; the other symbols have there usual obvious meaning. $`dx`$ is measured here in units of $`\text{g}/\text{cm}^2`$. The Bethe-Bloch formula only describes the mean energy loss; for finite path lengths, there are significant fluctuations in the actual energy loss. The distribution is skewed towards high values, described by the Landau distribution. Only for a thick layer the distribution is nearly Gaussian.
As seen from eq. 1, for $`\beta \gamma 3`$ the energy loss has a so-called “relativistic raise”. If the momentum of the particle is known, this can be used to identify the particle. The problem is that one has to sample the energy loss several times to be able to extract the average loss (Landau!). This is explicitly done in so-called “jet chambers”. An most up-to-date example is the OPAL central jet chamber jurgen:opala ; jurgen:opalb ; jurgen:opalc , where a normal track gets measured at 159 points, using $`4\text{m}`$ long wires spaced by $`1\text{cm}`$. The 3-dim space information ($`r`$: wire number, $`\varphi `$: drift time, $`z`$: charge division) gets used to measure the momentum (a magnetic field is present) and the total charge information helps to identify the particle. Another application is a TPC (Time Projection Chamber); the whole drift volume contains only gas with parallel electrical and magnetic field (to reduce the diffusion), and the electrons drift to the end plates, where wire or pads are used to obtain space information.
## Silicon Microstrip Detectors
In the 1980’s, silicon microstrip detectors became used heavily in HEP. They are absolutely necessary to measure properties of particles containing charm and beauty quarks. Examples for very successful experiments using this kind of detectors include E691 at Fermilab, WA82 at CERN, and, in colliders, CDF, the 4 LEP experiments (Aleph, DELPHI, L3, OPAL), and the HERA experiments. Today there are a lot of experiments using silicon microstrips, with channel counts up to 1 million or more.
The detector allows one to measure the one-dimensional position of a passing charged track with a precision down to a few $`\mu \text{m}`$. Newer devices, the so-called pixel detectors, measure a two-dimensional position. The detector uses as basic detection device a pn-junction, shown in fig. 1 left,
a diode which is operated in blocking direction with a sufficiently high voltage so that the entire device is depleted, e.g. there are no free electron-hole pairs (also called charge carriers). In one device several (up to several thousand) of these pn-junctions are operated, arranged in parallel strips. Should a charged particle pass through the detector (see fig. 1 right), new electron-hole pair are created and one of the carrier types will drift towards the nearest strip. In Silicon, the energy loss $`dE/dx3.8\text{MeV/cm}`$, and the energy needed to create one electron-hole pair is $`3.6\text{eV}`$<sup>1</sup><sup>1</sup>1The band gap in Silicon is only $`1.1\text{eV}`$, but Silicon is an indirect semiconductor., so in a typically $`300\mu \text{m}`$ thick detector about $`310^4`$ pairs will be created.
The construction of the detector itself seems to be under control today. There are several companies available which will produce the silicon detector with a well understood process. The smallest strip distance used today is $`10\mu \text{m}`$, so that the structure is actually much simpler than the achieved sub-micron structures in todays semiconductor chips. The real challenge in these detectors is the readout: Imagine a $`5\text{cm}\times 5\text{cm}`$ detector with $`10\mu \text{m}`$ strip distance: 5000 strips with there small signals have to be readout. Every single strip needs a preamplifier, and some kind of signal detection like a discriminator, otherwise noise will overwhelm the data acquisition. To reduce the number of cables (anyway, how to have a cable every $`10\mu \text{m}`$?) it would be nice to chain several channels together, at best even all 5000. The chips should then be clever enough only to send a strip number to the data acquisition, e.g. the signal gets digitized and zero suppressed already at the detector.
A system like this, called SVX jurgen:lblsvx , was developed about 10 years ago by LBL for collider experiments (CDF), and also used in WA89 jurgen:wa89silicon and SELEX jurgen:prakash ; jurgen:selexsilicon . The basic layout of the SVX system is shown in fig. 2.
The current is integrated onto a capacitor as long as $`R_a`$ is closed. The charge is then transfered via a sequence of switch operations and compared to a pre-stored charge. If the signal charge is bigger, the channel offers its number to the readout. Up to 64 chips with 128 channels each can be chained for readout. Since this chip was developed for collider, the fact that the chip is integrating is not very important, since a clear cycle can be performed before every collision. In fixed target operation it is not known when an interaction will happen, so in general the chip will integrate several interactions until a clear cycle is performed, which has to be closely coupled with the trigger of the experiment since during a clear the detector is not sensitive. Depending on the beam rate, the number of interactions and the sensitivity of the experiment to out-of-time tracks, the ratio of integration and clear time has to be optimized for the experiment.
The layout of a typical fixed target vertex detector is shown in fig. 4.
Tracks originating from the targets are transversing the silicon planes oriented in 4 different orientations (rotated by $`45^{}`$) to allow the reconstruction of tracks in space. They eventually get fit to form a vertex, and the obtained resolution is shown in fig. 4. At high momentum the resolution is limited by the strip distance, but at lower momentum multiple scattering becomes more and more important. Nevertheless, the fit takes all error contributions correctly into account, as seen from a constant $`\chi ^2=1`$ for all momenta. This is another lesson to learn: more detectors is not always good.
Another nice example is the silicon drift detector. The device developed for ALICE was presented in this workshop by E. Crescio jurgen:sildrift .
## Ring Imaging Cherenkov Counters
### Introduction
Even though the basic idea of determining the velocity of charged particles via measuring the Cherenkov angle was proposed in 1960 jurgen:ArtR , and in 1977 a first prototype was successfully operated jurgen:desrich , it was only during the last decade that Ring Imaging Cherenkov (RICH) Detectors were successfully used in experiments. A very useful collection of review articles and detailed descriptions can be found in the proceedings of three international workshops on this type of detectors, which were held in 1993 (Bari, Italy) jurgen:bari , 1995 (Uppsala, Sweden) jurgen:upsalla , and 1998 (Ein Gedi, Israel) jurgen:israelproc , respectively.
Charged particles with a velocity $`v`$ larger than the speed of light in a medium with refractive index $`n`$ will emit Cherenkov radiation under an angle $`\theta `$, given by jurgen:cherenkov
$$\mathrm{cos}\theta =\frac{1}{\beta n}$$
(2)
with $`\beta =v/c`$, $`c`$ being the speed of light in vacuum. The number of photons $`N`$ emitted per energy interval $`dE`$ and path length $`dl`$ is given by jurgen:franck
$$\frac{d^2N}{dEdl}=\frac{\alpha }{\mathrm{}c}\left(1\frac{1}{(\beta n)^2}\right)=\frac{\alpha }{\mathrm{}c}\mathrm{sin}^2\theta $$
(3)
or, expressed for a wavelength interval $`d\lambda `$,
$$\frac{d^2N}{d\lambda dl}=\frac{2\pi \alpha }{\lambda ^2}\mathrm{sin}^2\theta $$
(4)
By measuring the Cherenkov angle $`\theta `$ one can in principle determine the velocity of the particle, which will, together with the momentum $`p`$ obtained via a magnetic spectrometer, lead to the determination of the mass and therefor to the identification of the particle<sup>2</sup><sup>2</sup>2For this reason Cherenkov detectors are usually described under the chapter “particle identification” in particle detectors books..
Neglecting multiple scattering and energy loss in the medium, all the Cherenkov light (in one plane) is parallel, and can therefor be focused (for small $`\theta `$) with a spherical mirror (radius $`R`$) onto a point, as shown in fig. 5.
Since the emission is symmetrical in the azimuthal angle around the particle trajectory, this leads to a ring of radius $`r`$ in the focus, which is itself a sphere with radius $`R/2`$. The radius $`r`$ is given by
$$r=\frac{R}{2}\mathrm{tan}\theta $$
(5)
The dependence of the ring radius on the momentum for different particles is shown in fig. 6
Since the number of photons is $`\lambda ^2`$, most of the light is emitted in the VUV range. To fulfill equation 2, the refractive index has to be $`n>1`$, so there will be no Cherenkov radiation in the x-ray region. Also it is very important to remember that $`n`$ is a function of the wavelength ($`n=n(\lambda )`$, chromatic dispersion) and most materials have a absorption line in the VUV region, where $`n\mathrm{}`$, as shown in fig. 7, using Neon as example.
Since usually the wavelength of the emitted photon is not measured, this leads to a smearing of the measured ring radius, and one has to match carefully the wavelength ranges which one wishes to use: Lower wavelengths gives more photons, but larger chromatic dispersion.
A very useful formula is obtained by integrating eq. 4 over $`\lambda `$ (or $`E`$), taking into account all efficiencies etc., obtaining a formula for the number of detected photons $`N_{\mathrm{ph}}`$ jurgen:desrich :
$$N_{\mathrm{ph}}=N_0L\mathrm{sin}^2\theta $$
(6)
where $`N_0`$ is an overall performance measure (quality factor) of the detector, containing all the details (sensitive wavelength range, efficiencies), and $`L`$ is the path length of the particle within the radiator. A “ very good” RICH detector has $`N_0=100\text{cm}^1`$, which gives typically around $`10`$ to $`15`$ detected photons ($`N_{\mathrm{ph}}`$) per $`\beta =1`$ ring.
The usual construction of a RICH detector is to use a radiator length of $`L=R/2`$, e.g. equal to the focal length; but any other configuration, like folding the light path with additional (flat) mirrors is possible.
All the presented arguments and the drawing in fig. 5 only work for small $`\theta `$, which is always fulfilled in gases, since $`n`$ only differs little from $`1`$. Also important is the fact, that, should the particles not pass through the common center of curvature of mirror(s) and focal spheres, the ring gets deformed to an ellipse or, in more extreme cases, to a hyperbola. If the photon detector is able to resolve this, and the resolution is needed for the measurement, these deviations from a perfect circle have to be taken into account in determining the velocity $`\beta `$. In general this effect can be neglected, and all parallel particles (with the same $`\beta `$) will give the same ring in the focal surface, due to the fact that all emitted Cherenkov light is parallel. The position of the ring center is determined by the angle of the tracks, not by their position.
In the following, we will describe two RICH detectors used in experiments, and a new application for RICH detectors for a new, proposed experiment. The author works or worked on all of them, so the selection is clearly biased. Even so, we feel that they are good examples for the use of this kind of detectors.
### The CERN Omega-RICH
In the middle of the 1980’s, first attempts were made to apply the prototype results obtained by Séguinot and Ypsilantis jurgen:desrich to experiments in a larger scale. One of these attempts was performed at the CERN Omega facility in the West Hall. Experiments WA69 and WA82 tried to use this detector for there analysis, but only succeeded partly. An overview about this history can be found in jurgen:wa89bari . When in 1987 a new experiment, later named WA89, was proposed jurgen:wa89loi ; jurgen:wa89proposal , an important part was a necessary upgrade of this detector for the use by this new experiment. Two main parts where changed: New photon detectors using TMAE as photo sensitive component, and new mirrors to perform the focusing. Details about the detector can be found in jurgen:wa89bari ; jurgen:doktorarbeit ; jurgen:wa89wire ; jurgen:wa89upsalla ; jurgen:wa89israel .
As seen in the overall layout of the detector (fig. 8), a RICH detector is basically a simple device: a big box, some mirrors at the end, and photon-sensitive detectors at the entrance.
The real challenge is to combine all the parameters together to obtain a perfect match for the overall system.
The size of the radiator box and the photon detector is given by the angular distribution of tracks which have to be identified at the location of the detector. Since usually this detector is placed behind a magnetic spectrometer, and the momentum spectrum of the interesting tracks depends on the physics goals of the experiment, the surfaces to cover have to be determined for every setup and experiment, usually with Monte Carlo simulations during the design phase of the experiment jurgen:doktorarbeit . In the case of WA89, the mirror surface needed was about $`1\text{m}\times 1.5\text{m}`$, much smaller than the $`4\text{m}\times 6\text{m}`$ covered by the original Omega-RICH. It was therefor decided to replace only the central mirrors with smaller (as seen in fig. 8), higher surface quality mirrors to obtain better resolution.
The detector surface was calculated to be $`1.6\text{m}\times 0.8\text{m}`$, with a spatial resolution of a few millimeters for every detected photon. The pixel size could therefor not be much bigger than also a few millimeters, leading to about $`100000`$ pixels in the detector plane. The solution was to build drift chamber (TPC) modules, shown in fig. 9,
covering an area of $`35\text{cm}\times 80\text{cm}`$, and approximating the focal sphere with a polygon of 5 modules. After passing a $`3\text{mm}`$ thick quartz window (to reduce absorption), the photons hit TMAE (see fig. 11) molecules,
converting via photo-effect into an single electron. TMAE is present with a concentration of about $`0.1\text{\%}`$ with the driftgas, which is otherwise pure ethan. The use of a quartz window together with TMAE as photo-sensitive gas leads that the detector is only sensitive in a small wavelength range between $`165\text{nm}<\lambda <230\text{nm}`$, as demonstrated in fig. 11. TMAE has a very low vapor pressure, so that at ambient temperatures the molecule is saturated within a gas. To obtain a short enough conversion length of around $`1\text{cm}`$ (otherwise the conversion would occur to far away from the focal plane and lead to an addition contribution to the resolution), the drift gas (ethane) is led through a bubbler, containing TMAE liquid at $`30^{}\text{C}`$. This means that everything after the bubbler, e.g. the whole detector including radiator box, had to be heated to $`40^{}\text{C}`$ to avoid condensation. Other unpleasant properties of TMAE include a high reactivity with Oxygen, producing highly electro-negative oxides, which will attach an electron easily, change the drift velocity by a factor of several thousand, leading to a loss of electrons. Since the signal is a single electron (photo-effect!) this is catastrophic. The counting gas had an Oxygen contents of $`<1\text{ppm}`$.
Mostly due to the presence of TMAE, the operation of this detector was not trivial. All parameters were monitored electronically, and hardware limits on some critical parameters (like temperature, Oxygen content of Ethane) lead to a automatic shutdown of the detector, waiting for an expert to arrive in the experimental hall.
Once the electron was released, it was drifting under the influence of an electric field of $`1\text{kV/cm}`$ (drift velocity $`5.4\text{cm/}\mu \text{m}`$) upwards or downwards over maximal $`40\text{cm}`$ towards $`6\text{cm}`$ long counting wires (gold-coated tungsten, $`15\mu \text{m}`$ diameter), spaced by $`2.54\text{mm}`$. The two-dimensional spatial information about the conversion point of the photon is obtained with the position of the wire and the drift time of the electron. In total, 1280 wires were used in the detector. An additional complication was that the charged particles itself where passing through the chambers, leaving a $`dE/dx`$ signal of several hundred electrons, which is to be compared to the single electron which is our signal. This leads to increased requirements for the wire chambers (sensitive, e.g. sufficient multiplication, to single electrons, but no sparking with several hundred electrons) and to the preamplifier electronic (not too much dead time after a big pulse).
The overall resolution allowed the separation of pions and kaons up to a momentum of about $`100\text{GeV/}c`$, which was exactly the design goal. This lead to a good number of physics results jurgen:wa94a ; jurgen:wa89a ; jurgen:wa89b ; jurgen:wa89c ; jurgen:wa94b ; jurgen:wa89d ; jurgen:wa89e ; jurgen:wa89f ; jurgen:wa89g ; jurgen:wa89h , which would not have been possible to obtain without the RICH detector.
### The SELEX Phototube RICH Detector
At Fermilab an new hyperon beam experiment, called SELEX, was proposed in 1987 jurgen:selexproposal . The key elements to perform a successful charmed-baryon experiments are 1) a high resolution silicon vertex detector and 2) a extremely good particle identification system based on RICH. During the following years, a prototype for the SELEX RICH was constructed and tested successfully jurgen:protos , based in some part on experience gained by our Russian collaborators jurgen:sphinx ; jurgen:sphinxnew . The real detector was constructed in 1993-1996, ready for the SELEX data taking period from July 1996 to September 1997. First results for the final detector were reported in jurgen:elba , and publications from this year contain all details and performance descriptions of this detector jurgen:bigrichpaper ; jurgen:israel .
A layout of the vessel is shown in fig. 12.
The radiator gas is Neon at atmospheric pressure and room temperature (see fig. 7), filled into the vessel with a nice gas-system jurgen:gassystem : First the vessel is flushed for about 1 day with $`\mathrm{CO}_2`$ (a cheap gas). After this the gas (mostly $`\mathrm{CO}_2`$ and little air) is pumped in a closed system over a cold trap running at liquid Nitrogen temperature, freezing out $`\mathrm{CO}_2`$ and the remaining water vapor. At the same time Neon gets filled into the vessel to keep the pressure constant. This part of the procedure takes about 1/2 day, and the vessel contains afterwards only Neon and about $`100\text{ppm}`$ of Oxygen which is removed by pumping the gas over a filter of activated charcoal for a few hours, ending with an Oxygen contents of $`<10\text{ppm}`$ in the radiator. After this all valves were closed and the vessels sits there for the whole data taking of more than 1 year at a slight ($`1\text{psi}`$) overpressure<sup>3</sup><sup>3</sup>3Actually the closed detector still sits untouched in the PC4 pit at Fermilab. We did not open it yet..
The mirror array at the end of the vessel is made of $`11\text{mm}`$ low expansion glass, polished to an average radius of $`R=(1982\pm 5)\text{cm}`$, coated with Aluminum and a thin overcoating of $`\mathrm{MgF}_2`$, which gives $`>85\text{\%}`$ reflectivity at $`155\text{nm}`$. The quality of the mirrors was measured with the Ronchi technique jurgen:ronchi to assure a sufficient surface quality of the mirrors. The total mirror array covers $`2\text{m}\times 1\text{m}`$ and consists of 16 hexagonally shaped segments. The mirrors are fixed with a 3-point mount consisting of a double-differential screw and a ball bearing to a low mass honeycomb panel. The mirrors are mounted on one sphere, and were aligned by sweeping a laser beam coming from the center of curvature over the mirrors.
The photo detector is a hexagonally closed packed $`89\times 32`$ array of 2848 half-inch photomultipliers. A side view is shown in fig. 13.
In a $`3\text{in.}`$ thick aluminum plate holes are drilled from both sides, a $`2\text{in.}`$ deep straight hole holds the photomultiplier, and a conical hole on the radiator side holds aluminized mylar Winston cones, which form on the radiator side hexagons, leading to a total coverage of the surface. The 2848 holes are individually sealed with small quartz windows. For the central region of the array, a mixture of Hamamatsu R760 and FEU60 tubes were used, in the outside rows only FEU60 tubes are present. The nearly 9000 cables (signal, hv, ground) are routed to the bottom (hv) or top, where the signal cables are connected to preamp-discriminator-ecl-driver hybrid chips and finally readout via standard latch modules<sup>4</sup><sup>4</sup>4Since the phototubes are detecting single photons, no ADCs are necessary..
A single event display of the detector is shown in fig. 15,
demonstrating the clear multitrack capability and the low noise of this detector. To analyze an event, the ring center is predicted via the known track parameters, and a likelihood analysis jurgen:likeli for different hypothesis (the momentum is known!) is performed to identify the particle. The final performance for this detector is shown in fig. 15. The detector is nearly $`100\text{\%}`$ efficient, even below the proton threshold the efficiency is above $`90\text{\%}`$. In the SELEX offline analysis, the RICH is one of the first cuts applied to extract physics results. SELEX presented already several results at conferences jurgen:selexa ; jurgen:selexb ; jurgen:selexc ; jurgen:selexd ; jurgen:selexe ; jurgen:selexf , and one paper is submitted for publication jurgen:selexg .
### Two RICHes for the CKM Experiment
Last year a new experiment called CKM jurgen:ckmproposal was proposed at Fermilab. The goal of the experiment is to measure the branching ratio for $`K^+\pi ^+\nu \overline{\nu }`$ to an accuracy of $`10\text{\%}`$ (SM prediction is $`10^{10}`$) to measure the CKM matrix element $`V_{td}`$. To withstand the high expected physics background, the experiment will use, in addition to a conventional magnetic spectrometer, a velocity spectrometer consisting of two phototube RICH detectors, one to measure the incoming $`K^+`$, the second the outgoing $`\pi ^+`$. The design of the detectors is based on the SELEX RICH. The HEP group in San Luis Potosí is involved in the design, construction, and testing of parts of these detectors.
## Acknowledgement
The author wishes to thank the organizers for the opportunity to present this course. This work was partly financed by FAI-UASLP and CONACyT. Figures 12, 13, and 15 are reprinted from Nuclear Instruments and Methods A431, J. Engelfried et al., The SELEX Phototube RICH Detector, pages 53-69, 1999, with permission from Elsevier Science. |
no-problem/9912/hep-ph9912217.html | ar5iv | text | # RESONANT SINGLE CHARGINO AND NEUTRALINO VERSUS FERMION-ANTIFERMION PRODUCTION AT THE LINEAR COLLIDER 11footnote 1Presented at the 2^{𝑛𝑑} ECFA/DESY study on Linear Colliders, Frascati, November 1998 (Alternative Theories working group).
hep-ph/9912217
CERN-TH/99-375
## Abstract
We study single superparticle productions at the linear collider, putting particular emphasis on resonant processes. We find that there exists a wide region of model parameters where single chargino and neutralino productions dominate over R-violating fermion-antifermion final states. For certain values of $`\mu `$ and $`M_2`$, it is possible to produce even the heavier charginos and neutralinos at significant rates, amplifying the total cross section and obtaining interesting chains of cascade decays. Effects from initial-state radiation are also included.
Although the main working tool for supersymmetry searches has been the Minimal Supersymmetric Standard Model, the most general $`SU(3)_c\times SU(2)_L\times U(1)_Y`$ invariant superpotential with the minimal field content also contains the terms
$$W=\lambda _{ijk}L_iL_j\overline{E}_k+\lambda _{ijk}^{}L_iQ_j\overline{D}_k+\lambda _{ijk}^{\prime \prime }\overline{U}_i\overline{D}_j\overline{D}_k$$
(1)
where $`L`$ $`(Q)`$ are the left-handed lepton (quark) superfields while $`\overline{E},\overline{D},`$ and $`\overline{U}`$ are the corresponding right-handed fields. If both lepton- and baryon-number violating operators were present at the same time in the low energy Lagrangian, they would lead to unacceptably fast proton decay; to avoid this, a symmetry that forbids the terms in (1), R-parity , has been invoked. However, it has been shown that there exist symmetries which allow the violation of only a subset of these operators , resulting in a very rich phenomenology : single superparticle productions are allowed, while for couplings $`\stackrel{>}{}10^6`$, the lightest supersymmetric particle decays inside the detector . In both cases, the standard missing energy signature is substituted by multilepton and/or multijet events.
There are three basic categories of new signals:
$``$ Pair superparticle productions and subsequent decays via R-violating operators. Such processes are favoured for small R-violating couplings.
$``$ For reasonably large R-violating couplings, single superparticle productions may occur. In this case, the mass reach can be considerably larger than for MSSM processes at the same machine.
$``$ Virtual effects, from sparticle exchanges. These provide the optimal signals for a very heavy superparticle spectrum.
Here, we put particular emphasis on resonant scalar-neutrino production and its subsequent decay to either sfermions or a single chargino or neutralino . In particular, we study the processes
$`e^+e^{}(\stackrel{~}{\nu })^{}f\overline{f}^{}\mathrm{and}e^+e^{}(\stackrel{~}{\nu })^{}\{\begin{array}{cc}\mathrm{}_i^\pm \stackrel{~}{\chi }^{}\hfill & \\ \nu _i\stackrel{~}{\chi }^0\hfill & \end{array}`$ (4)
and identify for which regions of the supersymmetric parameter space each channel is expected to dominate.
For a collider operating in the $`e^+e^{}`$ mode, the only couplings that involve two electrons are $`L_1L_2\overline{E}_1`$ and $`L_1L_3\overline{E}_1`$ (remember that from $`SU(2)`$ invariance, the two lepton doublets cannot have the same flavour) <sup>2</sup><sup>2</sup>2 In $`22`$ single superparticle productions one may generically probe only a subset of operators. All 45 operators can be simultaneously probed by going to a 3-body final state, for instance in rare $`Z^0`$ decays; however, this process is phase-space suppressed and was found to be more relevant for hadron colliders .. The bounds for these couplings (Table 1) scale proportionally to the superparticle masses and therefore, for a heavy sparticle spectrum, the couplings can be quite large.
Close to the resonance (where the $`t`$\- and $`u`$\- channel exchanges can be neglected in comparison to the $`s`$-channel pole), the cross section productions can be approximated by a Breit-Wigner formula. For instance, for single neutralino production,
$`\sigma `$ $`=`$ $`{\displaystyle \frac{8\pi s}{m_{\stackrel{~}{\nu }}^2}}{\displaystyle \frac{\mathrm{\Gamma }(\stackrel{~}{\nu }f\overline{f})\mathrm{\Gamma }(\stackrel{~}{\nu }\nu \stackrel{~}{\chi }_0)}{(sm_{\stackrel{~}{\nu }}^2)^2+m_{\stackrel{~}{\nu }}^2\mathrm{\Gamma }_{\mathrm{total}}^2}}\left[{\displaystyle \frac{sm_{\stackrel{~}{\chi }^0}^2}{m_{\stackrel{~}{\nu }}^2m_{\stackrel{~}{\chi }^0}^2}}\right]^2`$ (5)
$``$ $`{\displaystyle \frac{8\pi }{m_{\stackrel{~}{\nu }}^2}}B(\stackrel{~}{\nu }f\overline{f})B(\stackrel{~}{\nu }\nu \stackrel{~}{\chi }^0)\text{, as }sm_{\stackrel{~}{\nu }}^2`$
Similar expressions arise for the other processes. The resonant cross sections can thus be deduced by the appropriate branching fractions.
Ignoring contributions to the vertices of the MSSM from mass terms, the latter are given by the following formulas:
$`\mathrm{\Gamma }(\stackrel{~}{\nu }\nu \chi _i^0)={\displaystyle \frac{g^2}{32\pi }}(N_{i2}\mathrm{tan}\theta _WN_{i1})^2m_{\stackrel{~}{\nu }}\left(1{\displaystyle \frac{m_{\chi _i^0}^2}{m_{\stackrel{~}{\nu }}^2}}\right)^2`$
$`\mathrm{\Gamma }(\stackrel{~}{\nu }\mathrm{}^{}\chi _i^\pm )={\displaystyle \frac{g^2V_{j1}^2}{16\pi }}m_{\stackrel{~}{\nu }}(1{\displaystyle \frac{m_{\chi _i^\pm }^2}{m_{\stackrel{~}{\nu }}^2}})^2,\mathrm{\Gamma }(\stackrel{~}{\nu }f\overline{f})={\displaystyle \frac{\lambda _{ijk}^2}{16\pi }}m_{\stackrel{~}{\nu }}`$
In the above, $`\lambda _{ijk}`$ is the appropriate R-parity violating Yukawa coupling generating the decay $`\stackrel{~}{\nu }f\overline{f}`$,while $`V_{i1}`$ and $`N_{i1}`$, $`N_{i2}`$ are the relevant matrix elements in the mixing matrix for charginos and neutralinos respectively.
Conclusively, whether single chargino and neutralino final states will dominate over the resonant fermion-antifermion productions depends on (i) the SUSY parameter space and (ii) the strength of $`\lambda `$. This is indicated in Figs. 1,2 where the branching ratio of the sneutrino decay to fermions is presented for the regions of the supersymmetry parameter space that are interesting for LEP (Fig. 1) and LC (Fig. 2) <sup>3</sup><sup>3</sup>3For squark decays, analogous results have been presented in .. Here, the U(1) gaugino mass $`M_1`$ is determined from the SU(2) gaugino mass $`M_2`$ by the unification relation $`M_1=(5/3)\mathrm{tan}^2\theta _WM_2`$. For lower values of $`M_2,\mu `$, a larger number of charginos and neutralinos can be produced at the final state, while the phase space suppression for their production is small. The picture starts changing as we pass to larger $`M_2,\mu `$ and this is indicated in the increase of the sneutrino decay rate to fermions. However, we can see that for a wide range of $`M_2`$ and $`\mu `$ the production of charginos and neutralinos at the LC tends to dominate. Moreover, there exist bands of the parameter space where the production of the heavier charginos and neutralinos may occur at a significant level. This is shown in Table 2, where we present the branching ratios for the production of each chargino and neutralino separately.
Subsequently, the charginos and neutralinos will decay to an R-parity even final state, with the possibility of an interesting chain of cascade decays with multi-lepton events and explicit lepton-number violation at the final state. The lightest neutralino decays via
$`\stackrel{~}{\chi }_1^0`$ $``$ $`\{(e^\pm ,l_i^{},\nu _e),(e^\pm ,e^{},\nu _i)\}`$
For the charginos and the heavier neutralinos, there exist two possible decay modes: The first is the cascade decay via the lightest neutralino and the second the direct decay via the R-violating coupling(s), as discussed in . For instance, for the lighter chargino we have the channels
$`\stackrel{~}{\chi }_1^{}\stackrel{~}{\chi }_1^0+(W^{})^{}\stackrel{~}{\chi }_1^0+f\overline{f}^{}`$
where $`f\overline{f}^{}`$ are the decay fermions of the (virtual) W-boson, or
$$\stackrel{~}{\chi }_1^{}e^{}e^+l_i^{},\stackrel{~}{\chi }_1^{}\nu _e\nu _ie^{}$$
(6)
In the first case of (6) the total signal could be even more distinct since it involves four leptons at the final state (three being in the same semi-plane) without any missing energy, unlike the cascade chargino decay which always involves neutrinos at the final state <sup>4</sup><sup>4</sup>4 Which of the two processes will appear, clearly depends on (i) the strength of the R-parity violating operator: the strongest the operator the larger the decay rate for a direct decay of the chargino. (ii) the relative mass of chargino-neutralino: if the mass gap between the two states is very small, then the cascade decay is suppressed by phase space. . It turns out however that the charginos as well as the heavier neutralinos dominantly decay to $`\chi _1^0`$ and fermions for a wide region of the parameter space.
In all cases, the signals should be clearly visible at an $`e^+e^{}`$ collider, provided the cross-section is sufficiently large; the latter mainly depends on how large the unknown coupling $`\lambda `$ will be. We study the relevant cross section, at and away from the resonance. Ignoring contributions to the vertices of the MSSM from mass terms, we have two channels present ($`s`$ and $`t`$) for chargino production and all three ($`s`$, $`t`$ and $`u`$) for neutralino production. For the $`s`$-channel diagram we take into account the contribution due to the decay width of the scalar neutrino.
In Fig. 3, we show the cross sections for single chargino and neutralino productions, including effects from initial state radiation (ISR), for two different sneutrino masses. To illustrate the effects from the production of many charginos and neutralinos we chose a point of the parameter space where all several of these states are produced. Indeed, for the choice of parameters tha appears in the figure, the chargino masses are 201.9 and 273.1 GeV respectively, while the neutralino masses are 127.8, 192.9, 217.5 and 272.1 respectively.
As expected, the effect of initial state radiation is to lower the peak but widen the resonance. For instance, in our example we find that, for $`\sqrt{s}=500`$ GeV (where all charginos and neutralinos may be produced) and $`m_{\stackrel{~}{\nu }}=450`$ GeV, IR enhances the cross section by almost an order of magnitude. Actually, in this example, the heavier charginos and neutralinos may arise with large cross sections. Indeed, the partial cross sections that we find for the four neutralinos (from the lighter to the heavier), for $`\sqrt{s}=500`$ and $`m_{\stackrel{~}{\nu }}=450`$ GeV, are: 1.03 pb, 0.22 pb, 0.15 pb, 1.9 pb while for the charginos 1.8pb and 2.9 pb respectively.
From the above discussion, we conclude that single chargino and neutralino productions arise with significant cross sections and provide an interesting possibility for looking for R-violating supersymmetry at the Linear Collider.
Acknowledgement: I would like to thank P. Morawitz for collaborating at the first stages of this work and A. Belyaev, for his help with PAW. |
no-problem/9912/cond-mat9912035.html | ar5iv | text | # Power-law Distribution of Family Names in Japanese Societies
## Abstract
We study the frequency distribution of family names. From a common data base, we count the number of people who share the same family name. This is the size of the family. We find that (i) the total number of different family names in a society scales as a power-law of the population, (ii) the total number of family names of the same size decreases as the size increases with a power-law and (iii) the relation between size and rank of a family name also shows a power-law. These scaling properties are found to be consistent for five different regional communities in Japan.
Scaling laws have been playing an important role in science for the past several decades . Diverse systems in nature have been found to exhibit a scaling law and self-similarity without a fine tuning of external parameters —known as self-organized-criticality. A simple model proposed by Bak et. al. shows that the minimal ingredients of these scaling behaviours is to have a large number of degrees of freedom and nonlinear interactions between them. Human societies also show complexity which meets the above features of self-organized criticality. In this context, many of human activities including word freqeuncy , traffic flow , economics , population growth , city growth , internet , citation frequency and war distribution have been reported to show scaling behaviour.
Here, we study frequency distribution of family names in Japanese societies. We define a family as a group of people who share the same family name, i.e., the different families are identified by their family names. We also define the size of a family, $`s`$, as a number of people in that family. We rank families by their size from the biggest family to the smallest family; For example, the biggest family in town of Fuso is “Senda” with family size $`s(Senda)=296`$, so its rank is $`r(Senda)=1`$. The second biggest is “Kondo” with size $`s(Kondo)=229`$, and rank $`r(Kondo)=2`$, and so on . In this way we measure the size and the rank of all families.
We analyze the telephone directories of five regional communities in Japan: town of Haruhi, town of Fuso, city of Inazawa, city of Kasugai and 1/3 of the city of Nagoya. The directories were published in 1998 by the communications company “NTT”. The total number of customers $`S`$ appeared in these directories are $`1634`$, $`7775`$, $`23365`$, $`65988`$ and $`177267`$, respectively. First, we count the number of different family names, $`N`$, appeared in the directories. In Fig. 1 we plot $`N`$ versus $`S`$ and we find that
$$NS^\chi ,$$
(1)
with an exponent $`\chi =0.65\pm 0.03`$.
Next, we investigate the scaling properties of two different quantities: (i) the distribution of the family size $`n(s)`$ which is the number of families of the same size $`s`$, and (ii) the relation between size and rank of a family, i.e., $`s(r)`$ which forms the so-called Zipf’s plot . The two quantities are complementary in a sense that $`n(s)`$ mainly focuses on the scaling property of the smaller size family while $`s(r)`$ highlights the scaling property of the bigger size family. We find the power-law scalings for both the quantities which are consistent for all five regions investigated.
We measure the distribution $`n(s)`$ for each town which is shown in Fig. 2a in double logarithmic scale. It shows a nice power-law behaviour with same exponent for all five different communities. We suggest the following scaling form for $`n(s)`$ :
$$n(s)=Af(\frac{s}{s^{}}),$$
(2)
where the scaling function $`f(x)`$ behaves as $`fx^\tau `$ for $`x1`$ and $`f=1`$ for $`x1`$. Here $`s^{}`$ is a characteristic family size at which $`n`$ becomes one, i.e. $`n(s^{})=1`$, which in turn gives $`A=1`$. In Fig. 2b we try to collapse data using the scaling form of Eq. (2) with an additional scaling law $`s^{}S^\alpha `$ and the scaling exponent $`\alpha =0.37\pm 0.03`$. A linear fit of the collapsed scaling function yields $`\tau =1.75\pm 0.05`$.
From the normalization condition,
$$_1^s_{}n(s)𝑑s=N,$$
(3)
and the scaling form for $`n(s)`$ \[Eq. (2)\] we obtain a relation, $`s^{}N^{1/\tau }`$. This scaling, combined with our finding $`NS^\chi `$, gives
$$\alpha =\frac{\chi }{\tau }.$$
(4)
This scaling relation is well consistent with the exponents measured within error bars.
In Fig. 3a we plot the family size $`s`$ versus rank $`r`$ in double logarithmic scale. Each curve shows a crossover behaviour from one power-law regime with exponent $`\varphi _I=0.67\pm 0.03`$, to another steeper power-law decay with exponent $`\varphi _{II}=1.33\pm 0.03`$ at the characteristic rank $`r^{}`$ which also scales as $`r^{}S^\alpha ^{}`$. We propose the following scaling form for $`s(r)`$;
$$s(r)=r^{}g(\frac{r}{r^{}}),$$
(5)
where the scaling function $`g`$ behaves as $`gx^{\varphi _I}`$ for $`x1`$ and $`gx^{\varphi _{II}}`$ for $`x1`$. In Fig. 3b we try to collapse the data using the scaling form of Eq. (5) and the best fit is obtained when $`\alpha ^{}=0.5\pm 0.05`$.
Two quantities, $`n(s)`$ and $`r(s)`$, are related by an integral equation ;
$$r(s)=_s^{\mathrm{}}n(s^{})𝑑s^{}s^{1\tau }.$$
(6)
By inverting the relation we obtain a scaling relation, $`s(r)r^{\frac{1}{1\tau }}`$. This relation gives the exponent
$$\varphi _{II}=\frac{1}{\tau 1},$$
(7)
because the scaling exponent $`\tau `$ is measured for small $`s`$, i.e. for $`r>r^{}`$. Note that the Eq. (7) is well satisfied by our results. The fact that the crossover points $`r^{}`$ scales as $`S^{0.5}`$ suggests that the sampling of the population is random so that relative deviation of the probability decreases as $`S^{0.5}`$ as number of data points $`S`$ increases.
To test the role of the communities on the observed scaling behaviours we randomly select a population and repeat our analysis for the extracted data set. Figure 4 shows the distributions for the randomly chosen population $`S=2189,6566,19696,59089`$ and $`177267`$ out of the biggest data for $`1/3`$ of city of Nagaya. It shows very close scaling behaviours as Figs. 1 to 3. This experiment suggests that the families are distributed randomly in the town without spatial correlation. Such scaling universality in the family structure of contemporary societies could be explained as a result that the time scale characterizing the migration of pupulation in a community is much shorter than the time scale asscoiated with the reproduction of a famiy name.
The scaling exponents $`\tau =1.75`$, $`\varphi _I=0.67`$ and $`\varphi _{II}=1.33`$ are different from the Zipf’s result on word frequency where the exponents are $`\tau =2.0`$ and $`\varphi =1.0`$. The power-law relation between $`N`$ and $`S`$ and it’s exponent $`\chi =0.65`$ observed in family name distribution seem to be nontrivial. One may expect this scaling law breaks if the number of available family names in a society is too small compared to the population. Cohen et. al. found that this situation occurred in the words frequency distribution — for very large $`S`$, $`N(S)`$ approaches a plateau. They found that the exponent $`\chi `$ for the number of different words in a text is also a function of length of the text. This is true also for the societies where the family names are strictly inherited from fathers to sons without any creation of new family names. In fact, the expectation number of sons per parents is one under the stationary constant population. Then the survival probability $`P(t)`$ of a family name after $`t`$ generations decreases as $`P(t)t^{0.5}`$. As a result, after many generations, only a few family names will dominate the whole population in the society. This is the situation in countries where the creation of new family names has been strictly restricted for many generations such as in Korea. The total number of family names in Korea is about $`250`$ while the total population is about $`50`$ millions. On the contrary Japan has most rich family names in the world whose total number of family names is about $`132,000`$ and the population is about $`125`$ millions. The creation of a new family name in Japan is also very rare. However, historically the most of Japanese family names were created about $`120`$ years ago . The short history of family names may cause to preserve the diversity and the scaling properties of family names as it was at the creation.
In summary, we have investigated the distribution of Japanese family names for five different regional communities in Japan. From the our empirical investigation, the power-law relation between total number of different family names and total population appeared in a telephone directory with the exponent $`\chi =0.65`$. Also we have found that the name-variety-size distribution shows nice power-law scaling with the exponent $`\tau =1.75`$ and the cutoff exponent, $`\alpha =0.37`$. These scaling properties are consistent for five regional communities and randomly generated societies with with different populations. In a size-rank distribution of family name we have obtained a crossover behaviour from one exponent, $`\varphi _I=0.67`$ to another exponent $`\varphi _{II}=1.33`$ at the crossover point $`r^{}S^\alpha ^{}`$ with $`\alpha ^{}=0.5`$. This result is consistent even if the specific family names of higher rank in one community is different from those in other communities. We have also derived scaling relations between these exponents.
Acknowdgements
We thank I. Grosse, P.Ch. Ivanov and S. Havlin for helpful discussions. |
no-problem/9912/cond-mat9912353.html | ar5iv | text | # Boolean derivatives and computation of cellular automata
## 1 Introduction
This work is based on the concept of Boolean derivatives, introduced in the context of cellular automata by G. Vichniac . A cellular automaton (CA) is defined on a lattice by an interaction range (for instance on a two dimensional square lattice, with nearest and next to nearest neighbors interactions), and by an updating rule that gives the future value (state) of a lattice variable given its present state and the state of its neighbors. The rule is applied in parallel on all the sites of the lattice, and can be either deterministic or probabilistic. From a computational point of view, the simplest case for the rule is a Boolean deterministic function, and, if not otherwise specified, we shall refer to this situation in the following. Cellular automata are often studied from a numerical point of view. Generally large lattices and long time simulations are required, and this originates the problem of developing efficient algorithms for the simulations of cellular automata on general purpose computers and sometimes on dedicated machines. For the first hardware resources, a technique that allows an efficient use of the memory and CPU is the Multi Site Coding (MSC) technique . This technique implies that the rule is coded only using bitwise operations. Although standard bitwise expressions (canonical forms) are easy to generate given a Boolean function, the minimization of the number of required operations is believed to be a np-complete problem . In Section 2 we recall the basic definitions and in the following section we introduce the Boolean derivatives that will lead to the Taylor and MacLaurin series for a Boolean function. They are more compact expressions than the standard canonical conjunctive and disjunctive forms. In Section 4 this technique is farther developed for the particular case of totalistic cellular automata, for which the future state of a cell depends only on the total number of neighbors that are in a certain state, regardless of their position. The symmetries of the problem allow the saving of a large number of Boolean operations. In Section 5 the results are applied to some models that appeared in literature. Finally, conclusions are drawn in the last section.
## 2 General definitions
Our fundamental set is $`B_1=\{0,1\}`$. This is called the Boolean set. Higher dimensional Boolean sets are indicated as $`B_n=\{0,1\}^n`$. By $`_{n,m}`$ we denote the set of functions $`f:\{0,1\}^n\{0,1\}^m`$. $`_n`$ stands for $`_{n,1}`$.
To an element $`a=(a_1,\mathrm{},a_n)B_n`$ corresponds a number $`a[0,2^n)`$:
$$a=\underset{i=1}{\overset{n}{}}a_i2^{i1};$$
and to each number $`a[0,2^n)`$ corresponds a $`n`$-tuple $`(a_1,\mathrm{},a_n)B_n`$:
$$a_i=a/2^{i1}mod2,$$
where $`a`$ stands for the integer part of $`a`$. An integer number is thus an ordered set of Boolean numbers (bits). In order to emphasize its Boolean structure we shall refer to these sets with the name of bitarray, whose dimension is that of the space in which it is defined. We introduce a partial ordering between bitarrays saying that $`ab`$ if, for all $`i`$, $`a_ib_i`$, where $`0<1`$. We can thus substitute the expression $`a[0,2^{n1})`$ by $`a2^{n1}1`$ or simply $`aB_n`$. A Boolean function $`f`$ is called monotone if $`ab`$ implies $`f(a)f(b)`$.
This mapping between numbers and Boolean $`n`$-ple corresponds to the representation of integer numbers in computers, the integer division by a power of two being equivalent to a right shift (in FORTRAN $`a/2^{i1}\mathrm{𝚂𝙷𝙸𝙵𝚃𝚁}(𝚊,𝚒\mathrm{𝟷})`$), and the modulo two operation to take the leftmost (less important) bit.
Let us introduce the most common Boolean operations. If applied to a bitarray, they will act bit by bit. There are $`2^{2^n}`$ Boolean functions in $`_n`$. With $`n=1`$ the most important function is the negation (NOT), indicated by the symbol $`\neg `$, or by a bar over the argument. With $`n=2`$ there are 16 functions. The ones that exist on (almost) all computers are the AND, OR and XOR operations. The AND (symbol $``$) gives one only if both the arguments are one (it is equivalent to a multiplication of the bits considered as integer numbers), the OR (symbol $``$) gives one if either one or the other argument is one, while the XOR (symbol $``$) corresponds to the sum modulo two. Notice that the XOR operation accounts both for the sum and for the subtraction or negation ($`aa=0`$ and $`a1=\neg a`$). The AND has higher priority than the OR and XOR operations. The negation has the highest priority. The OR and the XOR operations are distributive with respect to the AND. The XOR operation can be expressed by the NOT, AND and OR: $`ab=\neg aba\neg b`$. Equivalently the OR can be expressed by the AND and XOR operations: $`ab=abab`$. If two Boolean quantities $`a`$ and $`b`$ cannot be one at once, both the expressions $`ab`$ and $`ab`$ give the same result. In the following we shall emphasize this condition by writing $`a+b`$, and indeed in this case the usual sum operation can be used. On certain computers (namely on the CRAY), the logical and numerical unities can work in parallel. By mixing Boolean and arithmetic operations it is possible to speed up the actual calculations .
Often in the literature the conditional negation is indicated by $`a^b`$ with the meaning that $`a^0=\neg a`$ and $`a^1=a`$. This is equivalent to $`\neg (ab)`$ or to $`ab1`$. In this work we prefer to assign a different meaning to exponentiation, more similar to the usual one. We define $`a^0=1`$ and $`a^1=a`$. With this definition the expression $`a^b`$ is equivalent to $`(a1)b1`$ or to $`a\neg b`$. When applied to bitarrays, the exponentiation is performed bit by bit and the results are afterwards ANDed together,
$$a^b=\underset{i=1}{\overset{n}{}}a_i^{b_i}=\underset{i=1}{\overset{n}{}}a_i\neg b_i.$$
(1)
We note that $`a^b`$ is equal to one if and only if $`ab`$.
A Boolean function $`f_n:xB_nf(x)`$ is defined by giving its results for all the possible ($`2^n`$) configurations of its arguments (truth table). It is possible to obtain an expression for the $`f`$ only containing the AND, OR and NOT operations from this table. It is sufficient to give $`f^1(0)`$ or $`f^1(1)`$. The two canonical forms are
$$\begin{array}{cc}\hfill f(x)& =\underset{af^1(1)}{}\underset{i=1}{\overset{n}{}}\neg (x_ia_i);\text{disjunctive form, and}\hfill \\ \hfill f(x)& =\underset{af^1(0)}{}\underset{i=1}{\overset{n}{}}x_ia_i;\text{conjunctive form.}\hfill \end{array}$$
These canonical forms are the standard starting points for the problem of minimizing the number of required operations given a function. It is possible to demonstrate that the minimal expression for a monotone function only contains the AND and OR operations; the expression is unique and easy to compute.
As the NOT and the OR can be expressed by the AND and XOR operations, any function $`f`$ can be given in terms of the latter two operations (Ring Sum Expansion, RSE) . This form is identified by a Boolean vector $`f_i;iB_n`$
$$f(x)=\underset{iB_n}{}f_ix^i.$$
(2)
Since the number of different Boolean vectors $`f_i`$ and of functions $`f_n`$ is equal, the RSE is unique.
## 3 Boolean derivatives
Following Vichniac , we define the derivative of a Boolean function $`f_n`$ with respect to its $`i`$-th argument $`x_i`$ as
$$\frac{f}{x_i}|_x=f(x_1,\mathrm{},x_i,\mathrm{},x_n)f(x_1,\mathrm{},\neg x_i,\mathrm{},x_n).$$
This (first order) derivative expresses the dependence of the function by its $`i`$-th argument: $`f/x_i`$ is one if $`f`$ changes when changing $`x_i`$, given the configuration $`x_1,\mathrm{},x_{i1},x_{i+1},\mathrm{},x_n`$. If the derivative of $`f`$ with respect to $`x_i`$ is one regardless of the other arguments, than the rule changes its value whenever $`x_i`$ does. In Ref. a rule that shows this behavior is called a toggle rule.
This definition is consistent with the common expectations: the derivative of the identity function is one, and the derivative of a constant (0 or 1) is zero. Moreover, the derivative is linear with respect to the XOR operations, and it follows the standard rule for the derivative of a product,
$$\frac{(fg)}{x}=\frac{f}{x}gf\frac{g}{x}.$$
We can extend the definition to higher order derivatives. For example, a second order derivative with respect to $`x_i`$ and $`x_j`$ is defined as
$$\begin{array}{cc}\hfill \frac{^2f}{x_ix_j}|_x=& f(x_1,\mathrm{},x_i,\mathrm{},x_j,\mathrm{},x_n)f(x_1,\mathrm{},\neg x_i,\mathrm{},x_j,\mathrm{},x_n)\hfill \\ & f(x_1,\mathrm{},x_i,\mathrm{},\neg x_j,\mathrm{},x_n)f(x_1,\mathrm{},\neg x_i,\mathrm{},\neg x_j,\mathrm{},x_n).\hfill \end{array}$$
Note that the definition is consistent with the usual chain rule for derivatives, i.e.,
$$\frac{^2f}{xy}=\frac{}{y}\left(\frac{f}{x}\right).$$
(3)
A second order derivative with respect to the same argument is identically zero.
We introduce a more compact definition for the Boolean derivatives. Indicating with $`x`$ the bitarray of the arguments $`(x_1,\mathrm{},x_n)`$ and with $`\delta `$ a (constant) bitarray of the same dimension, we define
$$_\delta f(x)=\underset{\alpha \delta }{}f(x\alpha ).$$
It is immediate to verify that $`_\delta f(x)`$ is equal to the partial derivative in $`x`$ of $`f`$ with respect to the variables that correspond to nonzero bits in $`\delta `$. For instance, indicating with $`\delta ^{(i)}`$ a bitarray in which only the $`i`$-th bit is set to one (i.e., $`\delta _i^{(i)}=\delta _{i,j}`$, where the latter is the usual Kronecker symbol), we have
$$_{\delta ^{(i)}}f(x)=\frac{f}{x_i}|_x.$$
We can now state our most important result. For a Boolean function the Taylor expansion is always finite. Let us start with a perturbation on only one variable. If $`y=x\delta ^{(i)}`$, from the definition (3) of the derivative we get
$$f(y)=f(x)_{\delta ^{(i)}}f(x).$$
Generalizing
$$f(x\delta )=\underset{\alpha \delta }{}_\alpha f(x),$$
with $`_0f(x)=f(x)`$.
Using our definition of the exponentiation (1), we can substitute the XOR over $`\alpha \delta `$ with a XOR over $`\alpha B_n`$. We need a test function that gives one if $`\alpha \delta `$ and zero otherwise, and from the consideration after Eq. (1) this can be expressed as $`a^b`$. Finally we obtain
$$f(x\delta )=\underset{\alpha B_n}{}\delta ^\alpha _\alpha f(x).$$
As a noticeable consequence, we can expand a function starting from 0 (the bitarray that has all the bits to zero), obtaining the MacLaurin series
$$f(x)=\underset{\alpha B_n}{}x^\alpha f_\alpha ,$$
(4)
where
$$f_\alpha =_\alpha f(0);$$
(5)
Which is the ring sum expansion of the function $`f`$, Eq. (2).
Let us explicitly write down the formula (4) for an elementary CA, whose evolution rule depends on the cell itself ($`y`$) and on its nearest neighbors ($`x`$ and $`w`$). Locally
$$y^{}=f(x,y,w)$$
where the prime indicates the future value of the cell. The MacLaurin expansion of $`f`$ is given by
$$\begin{array}{cc}\hfill y^{}=& f(0,0,0)\hfill \\ & x\frac{f}{x}|_{0,0,0}y\frac{f}{y}|_{0,0,0}z\frac{f}{z}|_{0,0,0}\hfill \\ & xy\frac{f^2}{xy}|_{0,0,0}xz\frac{f^2}{xz}|_{0,0,0}yz\frac{f^2}{yz}|_{0,0,0}\hfill \\ & xyz\frac{f^3}{xyz}|_{0,0,0}.\hfill \end{array}$$
The first order derivatives of all the elementary CA can be found in Ref. . Higher order derivatives can be obtained by using the chain rule (3). Otherwise, the array of derivatives $`f_i`$ in zero can be obtained from the truth table $`f(j)`$ via the matrix $`_{i,j}`$
$$f_i=\underset{jB_n}{}_{i,j}f(j);$$
where
$$_{i,j}=\left(\genfrac{}{}{0pt}{}{j}{i}\right)mod2.$$
The matrix $``$ can be recursively generated considering that
$$\begin{array}{cc}\hfill _{i,0}& =1;\hfill \\ \hfill _{0,j}& =0(j>0);\hfill \\ \hfill _{i,j}& =_{i1,j}_{i1,j1}(i,j>0);\hfill \end{array}$$
(6)
To show an application of the MacLaurin expansion, let us examine the expression normally used to select between two random bits $`a`$ and $`b`$ according to a third one ($`r`$),
$$f(r)=ra\neg rb\text{(4 operations)}.$$
We only consider the explicit dependence of the function on $`r`$. To write down the ring sum expansion of $`f(r)`$ we need
$$\begin{array}{cc}\hfill f(0)& =b;\hfill \\ \hfill _1f(0)& =f(0)f(1)=ba.\hfill \end{array}$$
The RSE for $`f(r)`$ is
$$f(r)=br(ab)\text{(3 operations)}.$$
We consider it a good result to save one operation out of four in such a widely used and (apparently) simple expression. Other examples can be found in section 5.
## 4 Totalistic rules
The power of the algorithm is particularly evident when applied to totalistic CAs. The transition rule for these automata depends on the sum of the cell values in the neighborhood,
$$T^{(n)}=\underset{i=1}{\overset{n}{}}x_i.$$
Any totalistic evolution rule can be written as
$$f(x_1,\mathrm{},x_n)=f\left(T^{(n)}\right)=\underset{k=0}{\overset{9}{}}r_k\chi _k^{(n)}$$
(7)
where $`\chi _k^{(n)}`$ is one if $`T^{(n)}=k`$ and zero otherwise (totalistic characteristic functions). Only one term contributes in the sums of equations (7) so that we can use the arithmetic summation. The quantities $`r_k`$ take the value zero or one and define the automaton rule. Probabilistic CAs may be implemented by allowing the coefficients $`r_k`$ of equation (7) to assume the values zero and one with probabilities $`p_k`$ (see also the last example of Section 5).
A totalistic function $`f`$ is completely symmetrical with respect to its arguments . This implies that the derivatives of $`f`$ of same order are all functionally equals. In particular, as the derivatives of the MacLaurin expansion (4) are calculated in zero, they are actually equals, and thus can be factorized. This leads to
$$f(x_1,\mathrm{},x_n)=f\left(T^{(n)}\right)=f_0f_1\xi _1^{(n)}f_2\xi _2^{(n)}\mathrm{}f_n\xi _n^{(n)};$$
(8)
where the $`f_i`$ represents the derivative of order $`i`$ of $`f`$ in 0 (5), and the $`\xi _i^{(n)}`$ are the homogeneous polynomes of degree $`i`$ in the variables $`x_1,\mathrm{},x_n`$ (using the AND for the multiplication and the XOR for the sum)
$$\begin{array}{cc}\hfill \xi _1^{(n)}& =x_1x_2\mathrm{}x_n,\hfill \\ \hfill \xi _2^{(n)}& =x_1x_2x_1x_3\mathrm{}x_{n1}x_n,\hfill \\ & \mathrm{}\hfill \\ \hfill \xi _n^{(n)}& =x_1x_2\mathrm{}x_n.\hfill \end{array}$$
The functions $`\xi _i`$ satisfy some recurrence relations. The first one is based on the idempotent property of the AND operation ($`aa=a`$) and the nullpotent property of the XOR operation ($`aa=0`$)
$$\begin{array}{cc}\hfill \xi _1& :\text{ irreducible;}\hfill \\ \hfill \xi _2& :\text{ irreducible;}\hfill \\ \hfill \xi _3& =\xi _2\xi _1;\hfill \\ \hfill \xi _4& :\text{ irreducible;}\hfill \\ \hfill \xi _5& =\xi _4\xi _1;\hfill \\ \hfill \xi _6& =\xi _4\xi _2;\hfill \\ \hfill \xi _7& =\xi _4\xi _3=\xi _4\xi _2\xi _1;\hfill \\ \hfill \xi _8& :\text{ irreducible;}\hfill \\ & \mathrm{}.\hfill \end{array}$$
The second property is based on the separation of the variables in two groups (bisection). Let us call $`X`$ the group of the variables $`(x_1,\mathrm{},x_n)`$, with $`L`$ we indicate the left part of $`X`$ up to some index $`j`$, and with $`R`$ the right part of $`X`$
$$\begin{array}{cc}\hfill L& =(x_1,\mathrm{},x_j)\hfill \\ \hfill R& =(x_{j+1},\mathrm{},x_n).\hfill \end{array}$$
We have
$$\begin{array}{cc}\hfill \xi _i(X)=& \xi _i(L)\xi _{i1}(L)\xi _1(R)\xi _{i2}(L)\xi _2(R)\mathrm{}\hfill \\ & \xi _1(L)\xi _{i1}(R)\xi _i(R).\hfill \end{array}$$
As an example, let us explicitly calculate the $`\xi _i`$ for eight variables. We bisect homogeneously the set $`X=(x_1,\mathrm{},x_8)`$ first into $`L`$, $`R`$, and then into $`LL`$, $`LR`$, $`RL`$, $`RR`$. We have
$$\{\begin{array}{cc}\xi _1(LL)\hfill & =x_1\oplus x_2,\hfill \\ \xi _1(LR)\hfill & =x_3\oplus x_4,\hfill \\ \xi _1(RL)\hfill & =x_5\oplus x_6,\hfill \\ \xi _1(RR)\hfill & =x_7\oplus x_8;\hfill \\ \xi _2(LL)\hfill & =x_1x_2,\hfill \\ \xi _2(LR)\hfill & =x_3x_4,\hfill \\ \xi _2(RL)\hfill & =x_5x_6,\hfill \\ \xi _2(RR)\hfill & =x_7x_8;\hfill \end{array}$$
$$\{\begin{array}{cc}\xi _1(L)\hfill & =\xi _1(LL)\oplus \xi _1(LR),\hfill \\ \xi _1(R)\hfill & =\xi _1(RL)\oplus \xi _1(RR);\hfill \\ \xi _2(L)\hfill & =\xi _2(LL)\oplus \xi _1(LL)\xi _1(LR)\oplus \xi _2(LR),\hfill \\ \xi _2(R)\hfill & =\xi _2(RL)\oplus \xi _1(RL)\xi _1(RR)\oplus \xi _2(RR);\hfill \\ \xi _3(L)\hfill & =\xi _2(L)\xi _1(L),\hfill \\ \xi _3(R)\hfill & =\xi _2(R)\xi _1(R);\hfill \\ \xi _4(L)\hfill & =\xi _2(LL)\xi _2(LR),\hfill \\ \xi _4(R)\hfill & =\xi _2(RL)\xi _2(RR);\hfill \end{array}$$
$$\{\begin{array}{cc}\xi _1^{(8)}\hfill & =\xi _1(L)\oplus \xi _1(R);\hfill \\ \xi _2^{(8)}\hfill & =\xi _2(L)\oplus \xi _1(L)\xi _1(R)\oplus \xi _2(R);\hfill \\ \xi _3^{(8)}\hfill & =\xi _2\xi _1;\hfill \\ \xi _4^{(8)}\hfill & =\xi _4(L)\oplus \xi _3(L)\xi _1(R)\oplus \xi _2(L)\xi _2(R)\oplus \xi _1(L)\xi _3(R)\oplus \xi _4(R);\hfill \\ \xi _5^{(8)}\hfill & =\xi _4\xi _1;\hfill \\ \xi _6^{(8)}\hfill & =\xi _4\xi _2;\hfill \\ \xi _7^{(8)}\hfill & =\xi _4\xi _3;\hfill \\ \xi _8^{(8)}\hfill & =\xi _4(L)\xi _4(R).\hfill \end{array}$$
where $`\xi _k^{(8)}=\xi _k(X)`$. Taking into account the common patterns that appear in the expressions of $`\xi _2^{(8)}`$ and $`\xi _4^{(8)}`$, we only need 34 operations to build up all the $`\xi _i^{(8)}`$.
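The bisection scheme can be transcribed directly into word-parallel code. A sketch (ours; variable names are not from the paper), checked against the fact that on a bit column with $`m`$ ones $`\xi _i`$ equals $`C(m,i)`$ mod 2:

```python
from math import comb
import random

def xi8(x1, x2, x3, x4, x5, x6, x7, x8):
    # pairs
    s_LL, p_LL = x1 ^ x2, x1 & x2
    s_LR, p_LR = x3 ^ x4, x3 & x4
    s_RL, p_RL = x5 ^ x6, x5 & x6
    s_RR, p_RR = x7 ^ x8, x7 & x8
    # quadruples L = (x1..x4) and R = (x5..x8)
    xi1_L = s_LL ^ s_LR
    xi2_L = p_LL ^ (s_LL & s_LR) ^ p_LR
    xi3_L = xi2_L & xi1_L
    xi4_L = p_LL & p_LR
    xi1_R = s_RL ^ s_RR
    xi2_R = p_RL ^ (s_RL & s_RR) ^ p_RR
    xi3_R = xi2_R & xi1_R
    xi4_R = p_RL & p_RR
    # all eight variables
    xi1 = xi1_L ^ xi1_R
    xi2 = xi2_L ^ (xi1_L & xi1_R) ^ xi2_R
    xi3 = xi2 & xi1
    xi4 = xi4_L ^ (xi3_L & xi1_R) ^ (xi2_L & xi2_R) ^ (xi1_L & xi3_R) ^ xi4_R
    return xi1, xi2, xi3, xi4, xi4 & xi1, xi4 & xi2, xi4 & xi3, xi4_L & xi4_R

xs = [random.getrandbits(64) for _ in range(8)]
out = xi8(*xs)
for bit in range(64):
    m = sum((x >> bit) & 1 for x in xs)              # ones in this bit column
    assert all(((v >> bit) & 1) == comb(m, i) % 2 for i, v in enumerate(out, 1))
print("xi_1 .. xi_8 verified")
```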
The extension of the calculations to 9 variables, only adds a small number (16) of operations
$$\begin{array}{cc}\hfill \xi _1^{(9)}& =\xi _1^{(8)}\oplus x_9,\hfill \\ \hfill \xi _i^{(9)}& =\xi _i^{(8)}\oplus \xi _{i-1}^{(8)}x_9\qquad (2\le i\le 8),\hfill \\ \hfill \xi _9^{(9)}& =\xi _8^{(8)}x_9;\hfill \end{array}$$
even though a careful bisection of the set of variables implies fewer (39) operations.
A kind of normalization condition on the $`\xi _i`$ is given by
$$\underset{i=1}{\overset{n}{\bigvee }}x_i=\underset{i=1}{\overset{n}{\bigoplus }}\xi _i^{(n)},$$
(9)
and can be used to save operations in building an expression containing a XOR of $`\xi _i`$. Another useful relation is
$$\underset{i=1}{\overset{n-1}{\bigvee }}(x_i\oplus x_{i+1})=\underset{i=1}{\overset{n-1}{\bigoplus }}\xi _i^{(n)}.$$
(10)
We now have to build up the derivatives (in zero) of a totalistic function $`f`$, Eq. (8). There are only $`n+1`$ independent derivatives $`f_i`$, $`(i=0,\mathrm{},n)`$, as all the derivatives of the same order $`i`$ are equal. We have
$$f_i=\underset{T=0}{\overset{i}{\bigoplus }}\mathcal{M}_{i,T}f(T),$$
where the matrix $`\mathcal{M}`$ is defined in Eq. (6).
For completeness, we report the expressions for the $`\chi _k^{(8)}`$ and $`\chi _k^{(9)}`$,
$$\begin{array}{cc}\hfill \chi _1^{(8)}& =\xi _1\oplus \xi _3\oplus \xi _5\oplus \xi _7,\hfill \\ \hfill \chi _2^{(8)}& =\xi _2\oplus \xi _3\oplus \xi _6\oplus \xi _7,\hfill \\ \hfill \chi _3^{(8)}& =\xi _3\oplus \xi _7,\hfill \\ \hfill \chi _4^{(8)}& =\xi _4\oplus \xi _5\oplus \xi _6\oplus \xi _7,\hfill \\ \hfill \chi _5^{(8)}& =\xi _5\oplus \xi _7,\hfill \\ \hfill \chi _6^{(8)}& =\xi _6\oplus \xi _7,\hfill \\ \hfill \chi _7^{(8)}& =\xi _7,\hfill \\ \hfill \chi _8^{(8)}& =\xi _8;\hfill \\ \hfill \chi _1^{(9)}& =\chi _1^{(8)}\oplus \xi _9,\hfill \\ \hfill \chi _2^{(9)}& =\chi _2^{(8)},\mathrm{},\chi _8^{(9)}=\chi _8^{(8)},\hfill \\ \hfill \chi _9^{(9)}& =\xi _9.\hfill \end{array}$$
(11)
Obviously, the $`\chi _k^{(9)}`$ are only formally similar to the $`\chi _k^{(8)}`$ and they are calculated with nine variables.
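A compact consistency check (ours) of these expressions: on inputs with $`m`$ ones, $`\xi _i`$ takes the value $`C(m,i)`$ mod 2, so each $`\chi _k^{(8)}`$ must reduce to the indicator of $`m=k`$:

```python
from math import comb

for m in range(9):
    xi = [comb(m, i) % 2 for i in range(9)]
    chi = {
        1: xi[1] ^ xi[3] ^ xi[5] ^ xi[7],
        2: xi[2] ^ xi[3] ^ xi[6] ^ xi[7],
        3: xi[3] ^ xi[7],
        4: xi[4] ^ xi[5] ^ xi[6] ^ xi[7],
        5: xi[5] ^ xi[7],
        6: xi[6] ^ xi[7],
        7: xi[7],
        8: xi[8],
    }
    for k, val in chi.items():
        assert val == (1 if m == k else 0)
print("chi_k^(8) expressions verified")
```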
We note that the normalization condition on the $`\chi _k^{(n)}`$ is
$$\underset{k=0}{\overset{n}{\sum }}\chi _k^{(n)}=1;$$
(12)
from which $`\chi _0^{(n)}`$ can be obtained.
Putting all the stuff together, we need a maximum of 1024 operations for a generic rule with eight arguments, and 2304 operations for a generic rule with nine arguments (if all the operations are explicitly developed); 50 (resp. 57) operations for a generic totalistic rule with eight (resp. nine) arguments using directly the $`\xi _i`$ of Eq. (8), and 73 (resp. 82) using the $`h_k`$ of Eq. (7). These numbers should be compared with the $`\sim 3000`$ operations of the standard disjunctive form for a function of eight arguments whose truth table is half filled with ones, and with the $`\sim 600`$ operations required for a totalistic rule with nine arguments as described in Ref. .
We can see that the Taylor expansion of a Boolean function allows a big saving if the function itself depends symmetrically on the variables (i.e., it is a totalistic function). Sometimes a function depends in a totalistic way only on part of the variables (see e.g., the Life rule in the following section). After rearranging the indices,
$$f(x_1,\mathrm{},x_i,x_{i+1},\mathrm{},x_n)=f^{}(g(x_1,\mathrm{},x_i),x_{i+1},\mathrm{},x_n),$$
where $`g(x_1,\mathrm{},x_i)`$ is a totalistic function. If this happens, we have
$$f(\mathrm{},x_j,\mathrm{},x_k,\mathrm{})=f(\mathrm{},x_k,\mathrm{},x_j,\mathrm{})\qquad (j,k\le i),\forall x\in B_n.$$
From a computational point of view, $`f`$ depends symmetrically on $`x_j`$ and $`x_k`$ if
$$\underset{i\in B_n}{\bigvee }\left(f_i\oplus f_{i\oplus \delta }\right)=0,$$
(13)
where $`\delta =\delta ^{(j)}\oplus \delta ^{(k)}`$. Symmetries among more than two variables can be obtained via the transitive property.
## 5 Some applications
In this section we shall apply the algorithm to some problems, chosen among those that have appeared in the literature. Some of them were originally investigated with efficiency in mind, so they can be assumed to have been studied carefully with the aim of reducing the number of required operations.
The first example is the totalistic two-dimensional CA M46789 . The future value $`c^{}`$ of a cell $`c`$ is determined by the value most prevalent in its Moore neighborhood (nearest and next to nearest neighbors, nine variables), with a twist in case of a marginal majority or minority. In terms of Eq. (7) the rule is defined as
$$r_k=\{\begin{array}{cc}1\hfill & \text{if }k=4,6,7,8,9\text{;}\hfill \\ 0\hfill & \text{otherwise.}\hfill \end{array}$$
The twist in the majority provides a kind of frustration that simulates a mobile interface according to the Allen-Cahn equation .
From the general expression (8), we get the simplified expression
$$c^{}=\xi _4\left[1\oplus \xi _1\left(1\oplus \xi _2\right)\right]\oplus \xi _8,$$
for a total of 39 operations.
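The compact expression can be checked against the rule table in the same way (a sketch, ours): with $`m`$ live cells among the nine, $`\xi _i=C(m,i)`$ mod 2 and the formula must give one exactly for $`m=4,6,7,8,9`$:

```python
from math import comb

for m in range(10):
    xi = [comb(m, i) % 2 for i in range(10)]
    c_new = (xi[4] & (1 ^ (xi[1] & (1 ^ xi[2])))) ^ xi[8]
    assert c_new == (1 if m in (4, 6, 7, 8, 9) else 0)
print("M46789 expression verified")
```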
The second model is the game of Life . This well known game has recently been shown to be a good toy model for the problem of self-organized criticality . The evolution rule for Life depends separately on the sum of the eight nearest and next-to-nearest neighbors (outer Moore neighborhood), and on the central cell itself.
The evolution rule can be expressed saying that a dead (zero) cell can become alive (one) only if it is surrounded by three alive cells, while survival only occurs if the alive cell is surrounded by two or three alive cells. Developing first the rule for the central cell $`c`$, we get
$$c^{}=\chi _3^{(8)}\oplus \chi _2^{(8)}c,$$
where $`c^{}`$ represents the updated value of the central cell, and the $`\chi _k^{(8)}`$ are calculated on the outer Moore neighborhood.
The substitution of the expressions for the $`\chi _k^{(8)}`$ (11) and simplification gives
$$c^{}=\xi _2\left[c\oplus \left(1\oplus c\right)\xi _1\right]\left(1\oplus \xi _4\right).$$
As $`a\oplus \neg a\,b=a\vee b`$ we have
$$c^{}=\xi _2\left(1\oplus \xi _4\right)\left(c\vee \xi _1\right),$$
which requires only 33 operations, to be compared with the $`\sim 170`$ reported in Ref. .
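The same kind of check (ours) confirms that the expression reproduces Conway's rule (birth on 3 neighbours, survival on 2 or 3):

```python
from math import comb

for m in range(9):                       # live cells among the eight neighbours
    xi = [comb(m, i) % 2 for i in range(9)]
    for c in (0, 1):
        c_new = xi[2] & (1 ^ xi[4]) & (c | xi[1])
        assert c_new == (1 if (m == 3 or (c == 1 and m == 2)) else 0)
print("Life expression verified")
```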
We can also apply the method to non-totalistic rules. Let us examine the Kohring rule for an FHP gas with obstacles. The collision rules are the same as in the original FHP model, together with a set of four-body collisions. Let us label the six directions in a counterclockwise way with indices ranging from one to six. Operations on the indices are understood modulo six. All the Boolean quantities are actually elements of some array of integer words: we do not consider here the spatial indices. If the Boolean variable $`x_i`$ is one, there is an incoming particle on the site from direction $`i`$. Collisions occur for
$$(x_{j+1},\mathrm{},x_{j+6})=\{\begin{array}{cc}(1,0,0,1,0,0),\hfill & \text{two particles collisions;}\hfill \\ (1,0,1,0,1,0),\hfill & \text{three particles collisions;}\hfill \\ (1,1,0,1,1,0),\hfill & \text{four particles collisions.}\hfill \end{array}$$
The index $`j=0,\mathrm{},5`$ provides for all the rotational invariant configurations.
After the application of the collision rule, the variable $`y_i`$ equal to one means that there is a particle outgoing from the site with direction $`i+3`$. If the particles travel unperturbed, the updating rule is just $`y_i=x_i`$. On each lattice site there is an additional bit, indicated by $`a`$, to code the local conservation of angular momentum. If $`a`$ is one and a collision takes place, then all the particles on the site turn counterclockwise by $`\pi /3`$ (i.e., $`y_i=x_{i-1}`$), otherwise the rotation is clockwise ($`y_i=x_{i+1}`$). The $`a`$ bit is reversed each time a collision takes place. Finally, a bit $`b=0`$ (1) indicates the presence (absence) of an obstacle. The meaning of $`b`$ is reversed from that in Ref. for convenience. In the case $`b=0`$ no collision occurs, but the velocities of the particles are reversed, i.e., $`y_i=x_{i+3}`$. For further details about the implementation we refer to Kohring’s work .
First we want to obtain the expression for a bit $`c`$ that indicates the occurrence of a collision. The two cases of zero and six particles can be included among the collisions without any consequence. There are no symmetries among the variables in the truth table of the collisions (see Eq. (13)), so it is preferable to divide them into two groups: two- and four-particle collisions in one group, and three-particle collisions in the other. The first group is characterized by symmetries between $`x_i`$ and $`x_{i+3}`$. Introducing the auxiliary variables $`w_i=x_i\oplus x_{i+3}`$ (only three of them are really needed), we get
$$c(0,2,4,6)=\chi _0^{(3)}(w_1,w_2,w_3);$$
where $`c(i,j,\mathrm{})`$ indicates the contribution to the collision bit by the $`i,j,\mathrm{}`$ particles collisions, and the $`\chi _k^{(3)}`$ are the totalistic characteristic functions for three arguments. From Eq. (12), (9) and (11) we obtain
$$\neg c(0,2,4,6)=w_1\vee w_2\vee w_3.$$
Three particles collisions occur when $`(x_1,x_3,x_5)`$ or $`(x_2,x_4,x_6)`$ are all zero or one, that is
$$\begin{array}{cc}\hfill \neg c& (0,3,6)=\hfill \\ & =\left[\chi _1^{(3)}(x_1,x_3,x_5)+\chi _2^{(3)}(x_1,x_3,x_5)\right]\vee \left[\chi _1^{(3)}(x_2,x_4,x_6)+\chi _2^{(3)}(x_2,x_4,x_6)\right]\hfill \\ & =\left[\xi _1(x_1,x_3,x_5)\oplus \xi _2(x_1,x_3,x_5)\right]\vee \left[\xi _1(x_2,x_4,x_6)\oplus \xi _2(x_2,x_4,x_6)\right].\hfill \end{array}$$
Using the property (10) we get
$$\begin{array}{cc}\hfill \neg c(0,3,6)& =(x_1\oplus x_3)\vee (x_3\oplus x_5)\vee (x_2\oplus x_4)\vee (x_4\oplus x_6);\hfill \\ \hfill c& =\neg \left[\neg c(0,2,4,6)\wedge \neg c(0,3,6)\right].\hfill \end{array}$$
This expression for the collision bit is equal to that found in Ref. .
The expression for the $`y_i`$ can be thought of as a function of $`a,b,c`$. Developing the expression we get
$$y_i=x_{i+3}\oplus b(x_{i+3}\oplus x_i\oplus z_i),$$
where
$$z_i=\left[x_i\oplus x_{i+1}\oplus (x_{i+1}\oplus x_{i-1})a\right]c.$$
Finally, we notice that
$$z_i\oplus z_{i+3}=\left[w_i\oplus w_{i+1}\oplus (w_{i+1}\oplus w_{i-1})a\right]c;$$
but when $`c=1`$ the $`w_i`$ are all equal (all zero for two- and four-particle collisions, all one for three-particle collisions), so that the bracket vanishes and $`z_i=z_{i+3}`$. We need one more operation to reverse the angular momentum bit in case of a collision, $`a^{}=c\oplus a`$. Taking into account the common patterns in the expressions of $`c`$, $`z_i`$ and $`y_i`$, we only need 35 operations to update all the six velocities, and 14 arrays (six for the old values of the particles, six for the new values, one for the angular momentum and one for the collision bits). For comparison, in Ref. the algorithm needs 74 operations and 16 arrays.
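For concreteness, the full collision step can be sketched in word-parallel form as follows (our transcription of the expressions above, with 0-based direction indices; this is not Kohring's actual code):

```python
MASK = (1 << 64) - 1          # word width is an arbitrary choice here

def fhp_step(x, a, b):
    """x[0..5]: incoming particles; a: chirality bit; b = 1 where no obstacle."""
    w = [x[i] ^ x[(i + 3) % 6] for i in range(3)]
    not_c246 = w[0] | w[1] | w[2]
    not_c036 = (x[0] ^ x[2]) | (x[2] ^ x[4]) | (x[1] ^ x[3]) | (x[3] ^ x[5])
    c = ~(not_c246 & not_c036) & MASK                 # collision bit
    z = [((x[i] ^ x[(i + 1) % 6]) ^ ((x[(i + 1) % 6] ^ x[(i - 1) % 6]) & a)) & c
         for i in range(3)]
    z = z + z                                          # z_{i+3} = z_i
    y = [x[(i + 3) % 6] ^ (b & (x[(i + 3) % 6] ^ x[i] ^ z[i])) for i in range(6)]
    return y, c ^ a                                    # new velocities, new a bit
```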
Incidentally, we observe that only six arrays are really needed to store the configuration, without the need for a separate translation phase. Indeed, the rule only implies a (possible) exchange of the particles among the planes (the RAP1 machine is based on this consideration). The translation of the particles can be taken into account by a logical shift of the origin of the arrays. The procedure is still vectorizable, but the mapping between the lattice and the arrays of integer words becomes more complex, so it may not be worth the effort except perhaps for dedicated hardware.
The last example involves a probabilistic totalistic CA for the simulation of the Ising model . The rule depends in a totalistic way on the outer Von Neumann neighborhood (the north, east, south and west neighbors). The rule can be expressed as
$$c^{}=\underset{k=0}{\overset{4}{\sum }}r_k\chi _k^{(4)},$$
where the $`r_k`$ are random bits equal to one with predefined probabilities $`p_k=p(r_k)`$. Building the $`\chi _k^{(4)}`$ from the $`\xi _i`$ as in Eq. (11), we get the quoted result of 22 Boolean operations and four arithmetic summations. Writing down the RSE (2), we have
$$c^{}=r_0\oplus \underset{i=1}{\overset{4}{\bigoplus }}s_i\xi _i,$$
where the $`s_i`$ are random bits with probability $`p(s_i)`$, obtained as
$$\begin{array}{cc}\hfill s_1& =r_0\oplus r_1,\hfill \\ \hfill s_2& =r_0\oplus r_2,\hfill \\ \hfill s_3& =r_0\oplus r_1\oplus r_2\oplus r_3,\hfill \\ \hfill s_4& =r_0\oplus r_4;\hfill \end{array}$$
and considering that $`p(a\oplus b)=p(a)+p(b)-2p(a)p(b)`$.
Using the approach described above, we need nine operations to build up the $`\xi _i`$ and eight operations for $`c^{}`$.
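The recombination into the $`s_i`$ can be checked by enumerating all rule vectors $`(r_0,\mathrm{},r_4)`$ and all neighbour sums $`m`$ (a sketch, ours):

```python
from itertools import product
from math import comb

for r in product((0, 1), repeat=5):
    s = (r[0] ^ r[1], r[0] ^ r[2], r[0] ^ r[1] ^ r[2] ^ r[3], r[0] ^ r[4])
    for m in range(5):
        xi = [comb(m, i) % 2 for i in range(5)]
        lhs = r[0] ^ (s[0] & xi[1]) ^ (s[1] & xi[2]) ^ (s[2] & xi[3]) ^ (s[3] & xi[4])
        assert lhs == r[m]        # sum_k r_k chi_k picks out exactly k = m
print("RSE form of the rule verified")
```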
## 6 Conclusions
In this work we extend and complete the notion of the derivatives of a Boolean function already introduced in Ref. . We are thus able to derive the Taylor and MacLaurin series of a Boolean function. The latter represent the ring sum expansion for a Boolean function, which is more compact than the canonical conjunctive and disjunctive forms. Moreover, for totalistic functions (i.e., for functions completely symmetric in their arguments) very compact expressions are found. These ideas have wide applications in the development of faster algorithms, in particular for cellular automata simulations, and in the design of digital circuitry. As examples of physical applications, we analyze already published optimized algorithms, involving both deterministic and stochastic automata. We found that our procedure generally leads to more compact expressions. We think that the Boolean derivative is not limited to the minimization of Boolean functions. Work is in progress about the connection between Boolean derivatives and the chaotic properties of cellular automata, possibly leading to the definition of Lyapunov exponents for discrete systems.
## acknowledgements
We are grateful to G. Vichniac, R. Livi and S. Ruffo for fruitful discussions. We also acknowledge the Institute for Scientific Interchange Foundation (Torino, Italy) where this work was first started in the framework of the workshop Complexity and Evolution. |
no-problem/9912/cond-mat9912441.html | ar5iv | text | # Quasiballistic correction to the density of states in three-dimensional metal
## Abstract
We study the exchange correction to the density of states in the three-dimensional metal near the Fermi energy. In the ballistic limit, when the distance to the Fermi level exceeds the inverse transport relaxation time $`1/\tau `$, we find the correction linear in the distance from the Fermi level. By a large parameter $`ϵ_\mathrm{F}\tau `$ this ballistic correction exceeds the diffusive correction obtained earlier.
The zero-bias tunneling anomaly in disordered metals has been studied extensively both experimentally and theoretically. The explanation of this phenomenon has been given on the basis of the interaction induced correction to the single-particle density of states (DOS). For the three-dimensional case the leading order exchange correction is given by:
$$\frac{\delta \nu _{\mathrm{diff}}}{\nu _0}=A\frac{\lambda }{\left(ϵ_\mathrm{F}\tau \right)^2}\sqrt{|\epsilon |\tau }.$$
(1)
Here $`\nu _0=m^2v_\mathrm{F}/\pi ^2`$ is the DOS at the Fermi level in the non-interacting metal ($`m`$ and $`v_\mathrm{F}`$ being the mass of the electron and the Fermi velocity correspondingly, $`\mathrm{}=1`$), $`A=3^{3/2}/(8\sqrt{2})\approx 0.459`$ is a numerical constant, and $`\lambda `$ is the unitless interaction strength. In the realistic cases $`\lambda \sim 1`$. The subscript attached to the correction to DOS $`\delta \nu _{\mathrm{diff}}`$ emphasizes that the above result is valid in the diffusive limit, i.e. when the distance to the Fermi level $`|\epsilon |`$ is much smaller than the inverse transport scattering time $`1/\tau `$. The maximum value that the correction can reach can therefore be estimated as $`\delta \nu /\nu \sim (ϵ_\mathrm{F}\tau )^{-2}\ll 1`$.
In this work we evaluate the exchange correction to DOS in three-dimensional metal with short-range impurities. The use of the concrete form of disorder allows us to go beyond the universal diffusive regime. Our result in the ballistic regime ($`|\epsilon |1/\tau `$) is:
$$\frac{\delta \nu _{\mathrm{ball}}}{\nu _0}=B\frac{\lambda }{\left(ϵ_\mathrm{F}\tau \right)^2}|\epsilon |\tau .$$
(2)
Here $`B=\pi /16\approx 0.196`$ is a numerical constant, $`\lambda =\stackrel{~}{V}(\omega =0,𝒒=0)\nu _0`$, where $`\stackrel{~}{V}(\omega ,𝒒)`$ is the Fourier transform of the screened electron-electron interaction potential. This result is valid up until energies of the order of the Fermi energy. The maximum value reached by this correction can be estimated as $`\delta \nu /\nu \sim (ϵ_\mathrm{F}\tau )^{-1}`$. Thus it exceeds the diffusive correction by a large parameter $`ϵ_\mathrm{F}\tau `$. We conclude therefore that the ballistic correction produces a larger suppression of the DOS at the Fermi level than the diffusive correction in the case of short-range impurities.
The singular diffusive correction to the tunneling DOS is observed in many experiments. However at larger energies it crosses over into less singular correction, behaving approximately as the absolute value of distance to the Fermi level. This behavior is consistent with our prediction (2) for the ballistic regime. The details of the crossover between (1) and (2) are given below.
To derive Eq. (2) we follow the guidelines of Refs. and . The correction to one particle DOS is related to the exchange correction to the retarded Green function
$$\delta \nu (\epsilon )=-\frac{2}{\pi }\int \frac{d𝒑}{(2\pi )^3}\mathrm{Im}\delta G^R(\epsilon ,𝒑)$$
(3)
The latter is calculated in the first order of the perturbation theory in the electron-electron interaction $`\stackrel{~}{V}(\omega ,𝒒)`$
$$\begin{array}{c}\delta G^R(\epsilon ,𝒑)=i\left[G^R(\epsilon ,𝒑)\right]^2\int \frac{d𝒒}{(2\pi )^3}\int \frac{d\omega }{2\pi }\hfill \\ \\ \times \left[\mathrm{\Gamma }(\omega ,𝒒)^2-1\right]G^A(\epsilon -\omega ,𝒑-𝒒)\stackrel{~}{V}(\omega ,𝒒).\hfill \end{array}$$
(4)
Here $`G^R`$ and $`G^A`$ are the retarded and advanced Green functions respectively, $`G^{R,A}(𝒑,\epsilon )=1/\left(\epsilon -𝒑^2/2m\pm i/2\tau \right)`$.
$$u(𝒓)=\underset{i}{}u_0\delta (𝒓𝒓_i),$$
(5)
where the locations of impurities $`𝒓_i`$ are scattered randomly with average density $`n_i`$. In the ladder approximation the vertex is then given by
$$\begin{array}{c}\mathrm{\Gamma }(\omega ,𝒒)=\theta \left[\epsilon (\epsilon -\omega )\right]+\frac{\theta (\epsilon )\theta (\omega -\epsilon )}{1-\zeta (\omega ,𝒒)}\hfill \\ \\ +\frac{\theta (-\epsilon )\theta (\epsilon -\omega )}{1-\zeta ^{*}(\omega ,𝒒)},\hfill \end{array}$$
(6)
where
$$\begin{array}{c}\zeta (\omega ,𝒒)=n_i|u_0|^2\int \frac{d𝒑}{(2\pi )^3}G^R(𝒑+𝒒,\epsilon )G^A(𝒑,\epsilon -\omega )\hfill \\ \\ =\frac{i}{2qv_\mathrm{F}\tau }\mathrm{ln}\frac{\omega +qv_\mathrm{F}+i/\tau }{\omega -qv_\mathrm{F}+i/\tau },\hfill \end{array}$$
(7)
and $`1/\tau =\pi \nu _0n_i|u_0|^2`$. The latter expression is valid for any $`\omega ,\epsilon ,qv_\mathrm{F}\ll ϵ_\mathrm{F}`$. In the diffusive case ($`\omega ,\epsilon ,qv_\mathrm{F}\ll 1/\tau `$) the last expression takes the form $`\zeta =1+i\omega \tau -(qv_\mathrm{F}\tau )^2/3`$ and the vertex part has the expected diffusive pole.
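The closed form for $`\zeta `$ and its diffusive limit are easy to check numerically; a minimal sketch (ours, with $`v_\mathrm{F}=\tau =1`$):

```python
import numpy as np

def zeta(omega, q, vF=1.0, tau=1.0):
    return 1j / (2 * q * vF * tau) * np.log((omega + q * vF + 1j / tau) /
                                            (omega - q * vF + 1j / tau))

omega, q = 1e-4, 2e-3                       # both much smaller than 1/tau
print(zeta(omega, q))                       # exact expression (7)
print(1 + 1j * omega - q**2 / 3)            # diffusive form; agrees closely
```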
The screened electron-electron interaction $`\stackrel{~}{V}(\omega ,𝒒)=4\pi e^2/\left[q^2+4\pi e^2\mathrm{\Pi }(\omega ,𝒒)\right]`$, where the polarization operator derived in the random phase approximation
$$\mathrm{\Pi }(\omega ,𝒒)=\nu _0\left[1+\alpha i\omega \tau \zeta /(1\zeta )\right].$$
(8)
Here parameter $`\alpha `$ is equal to 1 or 0 depending on whether the retardation of the interaction is to be taken into account or not. This parameter is introduced for comparison to the earlier results.
After some transformations the expression for the correction to DOS is obtained from Eq. (4)
$$\frac{\delta \nu }{\nu _0}=\frac{\lambda }{(ϵ_\mathrm{F}\tau )^2}f\left(|\epsilon |\tau \right),$$
(9)
where the interaction constant $`\lambda =1`$ and
$$\begin{array}{c}f(\gamma )=\frac{1}{8\pi }\int _0^\gamma 𝑑\gamma ^{}\mathrm{Im}\int _0^{\mathrm{}}\frac{x^2dx}{x^2-(\gamma ^{}+i)^2}\hfill \\ \\ \times \frac{2\zeta -\zeta ^2}{(1-\zeta )\left[1-\zeta (1-\alpha i\gamma ^{})\right]},\hfill \\ \\ \zeta =\frac{i}{2x}\mathrm{ln}\frac{\gamma ^{}+x+i}{\gamma ^{}-x+i}.\hfill \end{array}$$
(10)
The integrals in (10) cannot be evaluated in terms of elementary functions. Nevertheless the asymptotic expressions for the diffusive ($`\gamma \ll 1`$) and ballistic ($`\gamma \gg 1`$) cases are easily obtained:
$$f(\gamma )=\{\begin{array}{cc}A\sqrt{\gamma },\hfill & \gamma \ll 1\hfill \\ B\gamma ,\hfill & \gamma \gg 1.\hfill \end{array}$$
(11)
They lead to Eqs. (1) and (2). The constant $`A`$ in this formula differs for the cases of instantaneous ($`\alpha =0`$) and retarded ($`\alpha =1`$) interactions. For the former case we obtain $`A=3^{3/2}/16\sqrt{2}`$, for the latter $`A=3^{3/2}/8\sqrt{2}`$. This agrees with the previous results. The ballistic constant $`B=\pi /16`$ is the same for both instantaneous and retarded cases.
The crossover between diffusive and ballistic regimes can be described numerically. To this end we represent the correction to DOS in the intermediate region as follows
$$\delta \nu =C\sqrt{\delta \nu _{\mathrm{diff}}^2+\delta \nu _{\mathrm{ball}}^2},$$
(12)
where the crossover function $`C1`$ is shown in Fig. 1.
The crossover function can be effectively fitted with polynomials
$$C(\xi )=\underset{n}{\sum }C_n\xi ^n,\qquad \xi \equiv \epsilon \tau /\left(1+\epsilon \tau \right).$$
(13)
The fit with a $`7`$th degree polynomial with coefficients $`C_0=0.9999`$, $`C_1=-0.2610`$, $`C_2=0.9114`$, $`C_3=-5.6062`$, $`C_4=17.9270`$, $`C_5=-28.7905`$, $`C_6=22.8587`$, and $`C_7=-7.0387`$ guarantees an error not exceeding $`0.14\%`$.
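A small sketch (ours) of how the interpolation (12)-(13) can be evaluated with the coefficients quoted above; the common prefactor $`\lambda /(ϵ_\mathrm{F}\tau )^2`$ is left out:

```python
import numpy as np

A, B = 3**1.5 / (8 * 2**0.5), np.pi / 16        # constants of Eqs. (1) and (2)
C_COEF = [0.9999, -0.2610, 0.9114, -5.6062,
          17.9270, -28.7905, 22.8587, -7.0387]

def f_interp(gamma):
    """Interpolated f(|eps| tau) of Eq. (12), without the lambda/(eps_F tau)^2 prefactor."""
    xi = gamma / (1.0 + gamma)
    C = sum(c * xi**n for n, c in enumerate(C_COEF))
    return C * np.hypot(A * np.sqrt(gamma), B * gamma)

for g in (0.01, 0.1, 1.0, 10.0):
    print(g, f_interp(g))
```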
In conclusion we evaluated the exchange correction to the tunneling density of states in a three-dimensional metal in the ballistic regime ($`1/\tau \ll \epsilon \ll ϵ_\mathrm{F}`$). The obtained correction is proportional to the distance to the Fermi level $`\epsilon `$ and exceeds the diffusive correction by the large parameter $`ϵ_\mathrm{F}\tau `$. The crossover between the diffusive and ballistic limits is also studied.
The author is grateful to Boris Shklovskii and Alexander Rudin for valuable discussions. This work is supported by NSF grant DMR-9616880. |
no-problem/9912/solv-int9912012.html | ar5iv | text | # Whitham-Toda hierarchy in the Laplacian growth problem11footnote 1Talk given at the Workshop NEEDS 99 (Crete, Greece, June 1999)
(October 1999)
The Laplacian growth problem in the limit of zero surface tension is proved to be equivalent to finding a particular solution to the dispersionless Toda lattice hierarchy. The hierarchical times are harmonic moments of the growing domain. The Laplacian growth equation itself is the quasiclassical version of the string equation that selects the solution to the hierarchy.
The Laplacian growth problem is one of the central problems in the theory of pattern formation. It has many different faces and a lot of important applications. In general words, this is about dynamics of moving front (interface) between two different phases. In many cases the dynamics is governed by a scalar field that obeys the Laplace equation; that is why this class of growth problems is called Laplacian. Here we shall confine ourselves to the two-dimensional (2D) case only. To be definite, we shall speak about two incompressible fluids with different viscosities on the plane. In practice, the 2D geometry is realized in the narrow gap between two plates. In this version, this is known as the Saffman-Taylor problem or viscous fingering in the Hele-Shaw cell. For a review, see .
We shall mostly concentrate on the external radial problem for it turns out to be the simplest case in the frame of the suggested approach. Let the exterior of a simply connected domain on the plane be occupied by a viscous fluid (oil) while the interior be occupied by a fluid with small viscosity (water). The oil/water interface is assumed to be a simple analytic curve. Other versions such as internal radial problem, wedge or channel geometry are briefly discussed at the end of the paper. Basically, they allow for the same approach.
Let $`p(x,y)`$ be the pressure, then $`p`$ is constant in the water domain. We set it equal to zero. In the case of zero surface tension $`p`$ is a continuous function across the interface, so $`p=0`$ on the interface. In the oil domain the gradient of $`p`$ is proportional to local velocity $`\stackrel{}{V}=(V_x,V_y)`$ of the fluid (Darcy’s law):
$$\stackrel{}{V}=-\kappa \text{grad}p,$$
(1)
where $`\kappa `$ is called the filtration coefficient<sup>2</sup><sup>2</sup>2This coefficient is inversely proportional to the viscosity, so the Darcy law is formally valid in the water, too.. In particular, this law holds on the interface thus governing its dynamics:
$$V_n=-\kappa \frac{\partial p}{\partial n}.$$
(2)
Here $`V_n`$ is the component of the velocity normal to the interface and $`\partial p/\partial n`$ is the normal derivative. Since the fluid is incompressible ($`\text{div}\stackrel{}{V}=0`$), the Darcy law implies that the potential $`\mathrm{\Phi }(x,y)=-\kappa p(x,y)`$ is a harmonic function in the exterior (oil) domain:
$$\mathrm{\Delta }\mathrm{\Phi }(x,y)=0,$$
(3)
where $`\mathrm{\Delta }=_x^2+_y^2`$ is the Laplace operator on the plane. The asymptotic behaviour of the function $`\mathrm{\Phi }`$ very far away from the interface (at infinity) is determined by the physical condition that there is a sink with constant capacity $`q`$ placed at infinity. This means that
$$\oint _\gamma (\stackrel{}{V},\stackrel{}{n})𝑑l=q,$$
where $`\gamma `$ is any closed contour encircling the water domain, $`\stackrel{}{n}`$ is the unit vector normal to $`\gamma `$. So, we require $`\mathrm{\Phi }=0`$ on the interface and $`\mathrm{\Phi }=\frac{q}{2\pi }\text{log}|z|`$ as $`|z|\mathrm{}`$. The goal is to describe the interface motion subject to the local dynamical law $`V_n=\mathrm{\Phi }/n`$.
An effective tool for dealing with this problem is the time-dependent conformal mapping technique (see e.g. ). Passing to the complex coordinates $`z=x+iy`$, $`\overline{z}=x-iy`$ on the physical plane, we bring into play a conformal map from a reference domain on the mathematical plane $`w`$ to the growing domain on the physical plane. By the Riemann mapping theorem, such a map does exist and, under some conditions, is unique. More precisely, let $`z=\lambda (w)`$ be the univalent conformal map from the exterior of the unit circle to the exterior of the interface (i.e., to the oil domain) such that $`\mathrm{}`$ is mapped to $`\mathrm{}`$ and the derivative $`\lambda ^{}(\mathrm{})`$ is a positive real number $`r`$. Under these conditions the map is known to be unique. The Laurent expansion of the $`\lambda (w)`$ around $`\mathrm{}`$ has then the following general form:
$$\lambda (w)=rw+\underset{j=0}{\overset{\mathrm{}}{\sum }}u_jw^{-j}.$$
(4)
If the interface moves, the conformal map $`z(w,t)=\lambda (w,t)`$ becomes time-dependent. The interface itself is the image of the unit circle $`|w|=1`$: as $`w=e^{i\varphi }`$, $`0\varphi 2\pi `$, sweeps over the unit circle, $`z=\lambda (e^{i\varphi },t)`$ sweeps over the interface at the moment $`t`$.
Having defined the conformal map, we immediately see that the real part of the logarithm of the inverse conformal map, $`w(z)`$, provides the solution to the Laplace equation in the oil domain with the required asymptotics: $`\mathrm{\Phi }(x,y)=\frac{q}{2\pi }\text{Re}\text{log}w(z)`$. Let us introduce the complex velocity $`V=V_x-iV_y`$. The obvious formula
$$\frac{q}{2\pi }\frac{\partial \text{log}w(z,t)}{\partial z}=\frac{\partial \mathrm{\Phi }}{\partial x}-i\frac{\partial \mathrm{\Phi }}{\partial y}$$
allows one to represent the Darcy law (1) as follows:
$$V(z)=\frac{q}{2\pi }\frac{\partial \text{log}w}{\partial z},$$
(5)
where the derivative is taken at constant $`t`$.
In terms of the time-dependent conformal map, the Darcy law is equivalent to the following relation referred to as the Laplacian growth equation (LGE):
$$\text{Im}\left(\frac{\partial z}{\partial \varphi }\frac{\partial \overline{z}}{\partial t}\right)=\frac{q}{2\pi }.$$
(6)
It first appeared in 1945 in the works on the mathematical theory of oil production. From now on we set $`q=\pi `$ without loss of generality. (This amounts to a proper rescaling of $`t`$.) Introducing the Poisson bracket notation
$$\{f,g\}=w\frac{\partial f}{\partial w}\frac{\partial g}{\partial t}-w\frac{\partial g}{\partial w}\frac{\partial f}{\partial t},$$
(7)
for functions $`f=f(w,t)`$, $`g=g(w,t)`$ of $`w`$, $`t`$, we rewrite the LGE in the suggestive form<sup>3</sup><sup>3</sup>3Given a Laurent series $`f(z)=_jf_jz^j`$, we set $`\overline{f}(z)=_j\overline{f}_jz^j`$, so $`z(w)`$ and $`\overline{z}(w^1)`$ are complex conjugate only if $`|w|=1`$.
$$\{z(w,t),\overline{z}(w^{-1},t)\}=1.$$
(8)
The LGE thus means that the transformation from $`\text{log}w,t`$ to $`z,\overline{z}`$ is canonical.
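As an illustration (ours), canonicity can be checked symbolically for a simple exact solution, the growing 'ellipse' map $`z=r(t)w+u(t)/w`$ with $`u/r`$ fixed and $`r^2-u^2=t`$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
w = sp.symbols('w')
k = sp.Rational(1, 3)                      # any constant |k| < 1 works
r = sp.sqrt(t / (1 - k**2))
u = k * r
z = r * w + u / w                          # z(w, t)
zb = r / w + u * w                         # zbar(1/w, t) for real r, u

bracket = w * sp.diff(z, w) * sp.diff(zb, t) - w * sp.diff(zb, w) * sp.diff(z, t)
print(sp.simplify(bracket))                # -> 1, i.e. the LGE (8) holds
```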
For a technical simplicity, we assume that the point $`z=0`$ lies in the water domain. The interface dynamics given by the Darcy law (or, equivalently, by the LGE) implies that if a point $`(x,y)`$ is in the water (interior) domain at the initial moment, then it remains there for all values of time. In particular, our assumption that the point $`z=0`$ belongs to the water domain means that zeros of the function $`\lambda (w)`$ are inside the unit circle for any $`t`$.
The Laplacian growth is a particular case of the 2D inverse potential problem. The shape of the interface can be characterized by the harmonic moments $`C_k`$ of the oil domain and the area $`C_0`$ of the water domain:
$$C_k=\int _{\text{exterior}}z^{-k}𝑑x𝑑y,\qquad k\ge 1;\qquad C_0=\int _{\text{interior}}𝑑x𝑑y.$$
(9)
(The integrals at $`k=1,2`$ are assumed to be properly regularized.) A remarkable result of Richardson shows that the LGE (8) implies conservation of the harmonic moments $`C_k`$ when the interface moves, $`dC_k/dt=0`$, while area of the water domain grows linearly in time: $`C_0=\pi t`$. Therefore, the problem can be posed as follows: to find the shape of the domain as a function of its area provided all the harmonic moments of the exterior are kept fixed. Since the harmonic moments are coefficients in the expansion of the Coulomb potential created by a homogeneously distributed charge in the oil domain, this is just a specification of the inverse potential problem. We remind that to know the shape of the domain is the same as to know the coefficients $`r`$, $`u_j`$ of the conformal map (4).
A good deal of hints that the LGE has much to do with integrable systems have been known for quite a long time -. However, its status in the realm of integrability was obscure until very recently. The nature of the LGE and its relation to integrability are clarified in the work . The idea is to treat the LGE not as a dynamical equation but as a constraint in a bigger integrable hierarchy. The latter turns out to be an infinite Whitham hierarchy of the type first introduced in . This hierarchy is a multi-dimensional generalization of integrable hierarchies of hydrodinamic type . It naturally incorporates the general inverse potential problem as well. Namely, the coefficients of the conformal map as functions of all the harmonic moments are given by a particular solution to the dispersionless Toda lattice hierarchy (see for a detailed study of the latter). The hierarchical evolution times are just the harmonic moments and their complex conjugate. In the Laplacian growth all of them but the area $`C_0`$ are frozen. Making them alive, one moves over the space of initial data for the LGE, and recovers the Whitham hierarchy.
For a more convenient formulation of the result, let us rescale the harmonic moments and introduce the new notation for them:
$$t\equiv t_0=\frac{C_0}{\pi },\qquad t_k=\frac{C_k}{\pi k},\qquad \overline{t}_k=\frac{\overline{C}_k}{\pi k},\qquad k\ge 1.$$
(10)
The symbols $`(f(w))_\pm `$ below mean a truncated Laurent series, where only terms with positive (negative) powers of $`w`$ are kept, $`(f(w))_0`$ is a constant part ($`w^0`$) of the series.
###### Theorem 1
The conformal map (4) obeys the following differential equations with respect to the harmonic moments:
$$\frac{\partial \lambda (w)}{\partial t_j}=\{H_j,\lambda (w)\},\qquad \frac{\partial \lambda (w)}{\partial \overline{t}_j}=\{\overline{H}_j,\lambda (w)\},$$
(11)
$$\frac{\partial \overline{\lambda }(w^{-1})}{\partial t_j}=\{H_j,\overline{\lambda }(w^{-1})\},\qquad \frac{\partial \overline{\lambda }(w^{-1})}{\partial \overline{t}_j}=\{\overline{H}_j,\overline{\lambda }(w^{-1})\},$$
(12)
where the Poisson bracket is defined in (7), and
$$H_j(w)=\left(\lambda ^j(w)\right)_++\frac{1}{2}\left(\lambda ^j(w)\right)_0,$$
(13)
$$\overline{H}_j(w)=\left(\overline{\lambda }^j(w^{-1})\right)_{-}+\frac{1}{2}\left(\overline{\lambda }^j(w^{-1})\right)_0.$$
(14)
The proof is sketched in and . These are the Lax-Sato equations for the dispersionless Toda lattice hierarchy of non-linear differential equations, with the $`\lambda (w)`$ and $`\overline{\lambda }(w^1)`$ being the Lax functions. On comparing coefficients in front of powers of $`w`$ in (11), (12), one obtains an infinite set of non-linear differential equations for the coefficients of the comformal map. Altogether, they form the hierarchy. The particular solution that solves the inverse potential problem is selected by the constraint (8), where $`z(w,t)=\lambda (w,t)`$, $`\overline{z}(w,t)=\overline{\lambda }(w^1,t)`$, and all $`t_k`$ are fixed. This constraint is known as (a quasiclassical version of) the string equation. Surprisingly, this very constraint is the key ingredient of the integrable structures in 2D gravity coupled with $`c=1`$ matter . The mathematical theory of the dispersionless hierarchies constrained by string equations was developed in and extended to the Toda case in .
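A small sketch (ours) of how the truncations (13) are formed in practice for a finite Laurent map, using symbolic coefficients:

```python
import sympy as sp

w, r, u0, u1 = sp.symbols('w r u0 u1')
lam = r * w + u0 + u1 / w                  # a truncated map of the form (4)

def H(j):
    """(lambda^j)_+ plus half of the constant term, as in Eq. (13)."""
    expanded = sp.expand(lam**j)
    positive_part = sum(expanded.coeff(w, k) * w**k for k in range(1, j + 1))
    return positive_part + sp.Rational(1, 2) * expanded.coeff(w, 0)

print(H(1))    # r*w + u0/2
print(H(2))    # r**2*w**2 + 2*r*u0*w + u0**2/2 + r*u1
```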
The Lax-Sato equations imply the existence of the prepotential function $`F(t,t_k,\overline{t}_k)`$. This function solves the inverse potential problem in the following sense. Let $`C_k`$, $`k1`$, be the complimentary set of harmonic moments, i.e., the moments of the interior of the domain:
$$C_k=\int _{\text{interior}}z^k𝑑x𝑑y.$$
(15)
Were we able to find $`C_k`$ from a given set $`C_0`$, $`C_k`$ (i.e., given $`t`$, $`t_k`$), this would yield the complete solution, alternative but equivalent to knowing the conformal map (4). Indeed, using the contour integral representation of the harmonic moments, it is easy to see that the generating function of all the harmonic moments, obtained as an analytic continuation of the Laurent series
$$\mu (\lambda )=\frac{1}{\pi }\underset{k𝐙}{\sum }C_k\lambda ^k=\underset{k=1}{\overset{\mathrm{}}{\sum }}kt_k\lambda ^k+t+\frac{1}{\pi }\underset{k=1}{\overset{\mathrm{}}{\sum }}C_k\lambda ^{-k},$$
(16)
would allow us to restore the interface curve via the equation $`|z|^2=\mu (z)`$ ($`S(z)=\mu (z)/z`$ is what is called Schwarz function of the curve ). The function $`F`$ does the job.
###### Theorem 2
() There exists a real function $`F`$ of the (rescaled) harmonic moments $`t`$, $`t_k`$, $`\overline{t}_k`$ such that
$$C_k=\pi \frac{\partial F}{\partial t_k},\qquad \overline{C}_k=\pi \frac{\partial F}{\partial \overline{t}_k},\qquad k\ge 1.$$
(17)
In particular, this implies the symmetry of derivatives of the inner harmonic moments with respect to the outer ones:
$$\frac{\partial C_k}{\partial t_j}=\frac{\partial C_j}{\partial t_k},\qquad \frac{\partial C_k}{\partial \overline{t}_j}=\frac{\partial \overline{C}_j}{\partial t_k}.$$
(18)
For general Whitham hierarchies, the function $`F`$ was introduced in . In the dispersionless Toda case, it is the dispersionless limit of logarithm of the $`\tau `$-function . This function obeys the dispersionless limit of the Hirota equation (a leading term of the differential Fay identity , ):
$$(z-\zeta )\mathrm{exp}\left(\underset{n,m\ge 1}{\sum }\frac{F_{nm}}{nm}z^{-n}\zeta ^{-m}\right)=z\mathrm{exp}\left(-\underset{k\ge 1}{\sum }\frac{F_{0k}}{k}z^{-k}\right)-\zeta \mathrm{exp}\left(-\underset{k\ge 1}{\sum }\frac{F_{0k}}{k}\zeta ^{-k}\right)$$
(19)
This is an infinite set of relations between the second derivatives $`F_{nm}=_{t_n}_{t_m}F,F_{0m}=_t_{t_m}F`$. The equations appear while expanding the both sides of (19) in powers of $`z`$ and $`\zeta `$.
To complete the identification with the objects of the theory of Whitham hierarchies, we just mention that the function $`\mu (\lambda )`$ (16) is the quasiclasical limit of the Orlov-Shulman operator for the 2D Toda lattice. This function obeys the conditions $`\{\lambda ,\mu \}=\lambda `$, $`\{\overline{\lambda },\overline{\mu }\}=\overline{\lambda }`$. The string equation can be then written as a relation
$$\overline{\lambda }=\lambda ^{-1}\mu $$
(20)
between the Laurent series, together with the reality condition $`\mu (\lambda )=\overline{\mu }(\overline{\lambda })`$. Written in this form, it admits generalizations which are also relevant to the Laplacian growth problem, and which we are going to discuss now.
Consider the Laplacian growth problem for domains symmetric under the $`𝐙_n`$-transformations $`ze^{2\pi il/n}z`$, $`l=1,\mathrm{},n1`$, where $`n`$ is a positive integer. In this case $`C_m=0`$ unless $`m=0(\text{mod}n)`$, i.e., only the moments $`C_{kn}`$ are non-zero. The conformal map from the exterior of the unit circle to the exterior of such a domain is given by $`z(w)=(\lambda (w^n))^{1/n}`$, where $`\lambda (w)`$ has the same general structure as in (4). Equivalently, the problem may be posed as the Laplacian growth in the wedge (sector) domain restricted by the rays $`\text{arg}z=0`$ and $`\text{arg}z=2\pi /n`$ with the periodic condition on the boundary. In the latter case, the conformal map is
$$z(w)=(\lambda (w))^{1/n}.$$
(21)
The LGE in terms of the conformal map $`z(w,t)`$ has the same form (8) if the time is rescaled as $`tt/n`$. In terms of the $`\lambda (w,t)`$, one has
$$\{\lambda ,\overline{\lambda }\}=n^2(\lambda \overline{\lambda })^{\frac{n-1}{n}}.$$
(22)
Let us introduce the following notation for the non-zero harmonic moments:
$$t=\frac{C_0}{n\pi },\qquad t_k=\frac{C_{kn}}{\pi k},\qquad \overline{t}_k=\frac{\overline{C}_{kn}}{\pi k},\qquad k\ge 1$$
(23)
(cf. (10)). The more general string equation (22) amounts to the relation
$$\overline{\lambda }=n^n\lambda ^{-1}\mu ^n$$
(24)
between the Lax and Orlov-Shulman functions (note that this means $`|z|^2=n\mu (z)`$). This relation is familiar from . It is consistent with the hierarchy and selects a solution that, as $`n>1`$, is different from the one selected by (20). The Lax-Sato equations (11) – (14) for the Lax function $`\lambda `$ and the Hirota equation (19) hold true as they stand provided the times are redefined as in (23). The solution to the Laplacian growth problem in the channel geometry (in the Hele-Shaw cell) can be obtained from the above formulas as a somewhat tricky limit $`n\mathrm{}`$. It is unclear, however, if the existing finite-dimensional solutions of the LGE \[4-6\] will survive after these transformations. We should also mention that inclusion of the “no-flux” boundary conditions at the walls of the wedge, which state that $`\mathrm{log}z/\mathrm{log}w`$ is real, imposes an extra symmetry in the solution and doubles the number of wedges. We will elucidate these questions in the near future.
At last we point out that the above formulas make sense for negative integer $`n`$, too. In particular, the case $`n=1`$ describes the internal radial problem when oil is inside while water is outside. In this case the internal and external harmonic moments are interchanged in their role: the internal moments (15) together with the $`C_0`$ become independent variables (cf. (23)) while the external moments are found as derivatives of the prepotential function according to (17).
These results were reported at the XIII NEEDS Workshop (Crete, June 1999). We are very grateful to the organizers for the invitation and for the opportunity to share these results in such a fruitful and nice atmosphere. We are indebted to Paul Wiegmann for collaboration in and many stimulating discussions. The work of A.Z. was partially supported by RFBR grant 98-01-00344. |
no-problem/9912/hep-ex9912004.html | ar5iv | text | # 1 Introduction
## 1 Introduction
The KEK B-Factory, KEKB, is an asymmetric energy collider with 8 GeV electron and 3.5 GeV positron beams built in the 3 km long TRISTAN tunnel of KEK (Figure 1). One of the physics goals of this facility is to make a detailed study of the B-meson, in particular to observe possible CP violation effects in its decay. The energy difference between the electron and positron beams gives a boost to the produced B-meson pairs. This makes it possible to measure time dependent features of B-meson decay where a large CP asymmetry may show up as predicted by A. Carter and I. Sanda . The KEKB project was first discussed at KEK in 1987 and the first conceptual design of the accelerator was worked out by K. Oide in February 1988 . It was later modified to the present design with the equal circumference of two rings in 1989 . The project was approved by the Japanese government in April 1994.
## 2 Accelerator design
The biggest challenge of the project is how to produce the more than 10 million B-meson pairs required to observe CP asymmetry, or how to reach a luminosity greater than 10<sup>34</sup>cm<sup>-2</sup>sec<sup>-1</sup>.
This high luminosity is to be achieved by the collision of beams stored in two different synchrotrons with equal circumference. We store as many beam bunches as possible in the synchrotrons in which one particular beam bunch in one ring collides with a particular bunch in the other ring. The single bunch current and therefore the single bunch luminosity is a moderate one which can be easily realized by established technology. However, the total luminosity which is a sum of many single bunch luminosity can be made quite high . The designed total beam currents are 2.6 A for the positron (LER) and 1.1 A for the electron (HER) rings.
A big concern was how to overcome beam instabilities which may show up when storing a large number of bunches and a high beam currents in the synchrotrons. There are instabilities due to single-beam collective effects, impedance from various beamline elements, ion trapping, photo-electron effects, and beam-beam effects. These instabilities are suppressed by carefully designing accelerator components such as RF cavities with HOM damping and the smooth surface of accelerator elements as well as by the implementation of powerful feedback systems .
The choice of a finite beam crossing angle in the horizontal plane at the beam collision point, 22 mrad, is a notable feature of the accelerator design. This scheme eliminates parastic collisions and make it much easier to manipulate two beams of different energies around the collision point. It hopefully leads to smaller beam induced backgrounds. The positron beam enters the solenoid field volume parallel to the field axis, while the incoming electron beam orbit is at an angle of 22 mrad from the field axis. The integrated field strength along the beam path is locally cancelled by a pair of additional superconducting solenoids with smaller radius placed in the detector solenoid. Extensive simulation studies indicate that the instability due to the finite angle collision of beams can be avoided by choosing the proper sets of betatron tunes for the beams . A crab cavity is being developed to rotate the beams to provide head-on collision in case the finite angle crossing causes a serious problem .
Many wiggler magnets are used to reduce the damping time of the positron beam. The LINAC was upgraded in energy from 2.5 GeV to 8.5 GeV to make it possible to inject electron and positron beam directly into the storage rings . The beam optics of the storage rings remains unchanged both at the time of beam injection and beam storage. The two rings are equipped with more than 1000 beam position monitors to facilitate quick beam diagnostics.
## 3 Accelerator commissioning
The upgrading of the LINAC was completed successfully in July 1998. The KEKB synchrotrons received the first beam from the LINAC in December 1998. The commissioning continued until mid April 1999 without BELLE. Although there have been several interruptions by unexpected accidents, we have succeeded in storing both electron and positron beams of more than 500 mA. The integrated stored current reached 100 A-hour for the positron beam and 70 A-hour for the electron beam . BELLE was rolled onto the beamline in May 1999.
The collision of the beams was confirmed on June 1 by observing hadronic events with the BELLE detector. Although interrupted again by an accident for two weeks, the operation of the accelerator continued until August 4th. Part of the operation time was dedicated to physics runs. The total integrated luminosity accumulated by the BELLE detector was 25 pb<sup>-1</sup>. The highest luminosity averaged over one hour reached 2.5 pb<sup>-1</sup>. The highest peak luminosity, 2.9$`\times `$10<sup>32</sup> cm<sup>-2</sup>sec<sup>-1</sup> (Figure 1), was recorded on the last day of the summer run.
The parameters with which the storage rings produced the highest peak luminosity are summarized in Table 1.
Several remarks are in order here. 1) All accelerator components, such as newly developed RF cavities, ARES, single cell superconducting RF cavities, superconducting quadrupole magnets at the IR, and many wiggler magnets, have functioned as designed. 2) The single bunch currents stored are close to the design values. 3) The number of bunches stored is about one fifth of the design values. For the electron ring the limitation is mainly due to a fast ion trapping effect, while for the positron ring no explanation has yet been determined. 4) The beam sizes were measured by the synchrotron light using an interferometer. The vertical and horizontal beam sizes were about 3.3 $`\mu `$m and 190 $`\mu `$m for the electron beam at 100 mA and about 3.6 $`\mu `$m and 190 $`\mu `$m for the positron at 200 mA. These are larger than the designed values by a factor of 2. 5) The beam blows up vertically as the amount of stored beam currents increase. This is suppressed by a feed-back system quite effectively for the electron beam but not for the positron beam. 6) The beams blow up vertically to about 6 $`\mu `$m when the beams collide at 100 mA (HER) and 200 mA (LER).
Although the luminosity achieved is only 3% of the design value and we have many things to be understood, we believe that the basic design concept, especially the choice of the finite angle crossing scheme, is not wrong. The factor of 30 improvement will be achieved by a careful tuning of the accelerator parameters.
## 4 BELLE commissioning
BELLE is a solenoid spectrometer with the capability of precise energy and momentum measurement and of good particle identification, especially $`\pi `$-K separation. The field strength of the solenoid is 1.5 Tesla with the field uniformity better than 4% in the central tracking volume. The detector is arranged asymmetrically along the direction of the boost. The detector has been built by the collaboration of 51 institutions from ten countries and one region .
The construction began in April 1994 and was completed in December 1998. Data taking began in June 1999 with all sub-detectors assembled. Data taking was made mostly on the peak of the Upsilon(4s) resonance after its confirmation by an energy scan. The integrated luminosity recorded by the detector was only 25pb<sup>-1</sup> by August 4th. Therefore, no physics results can be presented in this report but rather we report on the detector performance obtained by using about 60,000 hadronic events. Although we suffered from beam induced backgrounds, data taking was successful. The typical trigger rate was about 300 Hz at the beam currents of 200 mA for LER and 100 mA for HER. The analysis of the data has proceeded smoothly using software developed in the last five years.
### 4.1 Vertexing and tracking
The BELLE vertex detector (SVD) is a three-layer double-sided silicon strip detector and the tracker is a standard multi-cell drift chamber. The SVD covers polar angles between 20° and 140° in the lab frame. The impact parameter resolutions are measured to be 35 $`\mu `$m and 40 $`\mu `$m in r-$`\varphi `$ and z coordinates, respectively. The momentum resolution is about 0.5% at 1 GeV. The following examples illustrate how the vertexing and tracking work.
First, we show a candidate event where one B-meson decays to J/$`\psi `$ and K<sup>+</sup> and the other B-meson decays to a state including a $`\mu `$<sup>-</sup>. Two vertices are separated by 357 $`\mu `$m along the z direction, which is substantially larger than the vertex resolution. It should be noted that the $`\mu `$<sup>±</sup> and K<sup>+</sup> are positively identified (Figure 2).
Another example of the BELLE vertexing performance is the measurement of the lifetime of D-meson. In this study, we have chosen D$``$K$`\pi `$ samples with momentum larger than 2.4 GeV. The distance between the beam center and the vertex point in the plane transverse to the beam direction is plotted in the figure 3 together with an invariant mass distribution for D$``$K$`\pi `$. The lifetime measured is consistent with the known value.
The next examples are the invariant mass distributions for Ks$``$$`\pi `$<sup>+</sup>$`\pi `$<sup>-</sup> and J/$`\psi `$$``$$`l`$<sup>+</sup>$`l`$<sup>-</sup> observed in hadronic events (Figure 4). They give an idea of the performance of tracking.
### 4.2 $`\pi `$-K separation
The BELLE detector is equipped with three different types of detectors for charged K-meson identification: The aerogel Cherenkov counters, the time of flight counters and the drift chamber. We calculate the K-meson likelihood for all charged tracks by combining the information from the three particle id devices. The K-meson identification is made by selecting tracks with high K-likeliness. To show how this method works, we present a plot of invariant mass made for two tracks with opposite charges for all possible combination in hadronic events. The histogram is the invariant mass assuming two tracks to be K-mesons, while the black circles are those requiring both tracks to have K-meson probabilities greater than 70%. One clearly sees a peak corresponding to the $`\varphi `$-meson with a background suppression of more than a factor of 100 (Figure 5). It has been confirmed that this method is also quite effective for the identification of charged K-mesons in the decay process D$``$K$`\pi `$.
### 4.3 The calorimeter
The Belle electromagnetic calorimeter is made of finely segmented CsI crystals of about 30 cm in length. The calibration of the calorimeter has been performed by using cosmic rays and Bhabha events. We present an invariant mass spectrum formed from pairs of energy clusters greater than 50 MeV in figure 6. One sees clear peaks for $`\pi ^0`$ and $`\eta `$ mesons.
## 5 The beam background
In spite of the choice of the finite angle beam crossing scheme at the interaction point, we have suffered from beam induced backgrounds. Despite the backgrounds, data taking was possible with the largest stored beam current in collision mode, about 200 mA for the positron beam and 100 mA for the electron beam.
The most serious background was due to synchrotron radiation with a critical energy of a few keV from the electron beam generated at magnets upstream of the detector. These photons were not expected from simulation nor were they observed by the dedicated beam background detector before BELLE rolling-in. They could be eliminated by adjusting the steering of the electron beam orbit. However, we noticed this too late and this damaged the inner layer of the silicon detector.
We suffered also from synchrotron radiation with a critical energy around 30 keV. This background is presumably generated by the outgoing electron beam in the superconducting quadrupole magnet which generates a fan that hits the aluminum part of the beam pipe and then scatters back into the detector. This synchrotron light will be suppressed by replacing the beam pipe with an appropriate shape made of copper.
The BELLE detector was not free from over-focused beam particles, especially from the electron beam, which hit the detector. This backgrounds will be suppressed by improving the vacuum and by installing shielding around the beam pipe.
## 6 Summary
The construction and the commissioning of the KEKB accelerator and the detector has been successful. It was shown that the BELLE detector functions as designed. The time spent on commissioning before the summer shut-down was so short that we could achieve only 3% of design luminosity. The luminosity will be improved by carefully tuning the beam parameters. During the summer break in KEKB operation, more RF cavities are installed and more shielding around the interaction region is added and the damaged silicon detectors are replaced. KEKB operation will resume in October and will continue for 10 months. We hope that we will have information on CP violation in B-meson decay by the end of next run.
Discussion
Richard Taylor (SLAC): You mentioned that the data analysis is under control. Will this still be the case when the luminosity increase by a factor of 30?
Takasaki: We are prepared for much higher rate and we hope it will be OK. |
no-problem/9912/astro-ph9912142.html | ar5iv | text | # Radio galaxies with a ‘double-double’ morphology: II - The evolution of double-double radio galaxies and implications for the alignment effect in FRII sources
## 1 Introduction
Double-double radio galaxies (DDRGs) are defined as consisting of two unequally sized, two-sided, double-lobed, edge-brightened (type FRII) radio sources (see also the preceding paper Schoenmakers et al. 1999a, hereafter paper I). The radio cores of the two structures coincide. This gives them the appearance of two independent FRII-type sources the smaller of which is placed inside the larger structure. This morphology distinguishes the DDRGs clearly from ‘normal’ FRII objects. Seven DDRGs in which the inner and outer structures are well aligned have been presented in paper I. In all these objects the radio axes of the two structures form angles of less than 7°. The projected linear sizes of the outer source structure of all these objects are greater than 700 kpc. Although it is too early for final conclusions, we believe that a large linear size of the outer source structure is an intrinsic property of the DDRGs as a sub-class of the FRII radio source population and not a selection effect. The so-called X-shaped radio sources (e.g. Leahy & Williams 1984) may be further examples of DDRGs but they differ from the sources presented in paper I in that the radio axes of the two structures are not aligned.
In most of the outer lobes of the DDRGs presented in paper I no radio hot spots are detected. For the exception to this rule, B 1834+620, only one hot spot is detected (Schoenmakers et al. 1999b, hereafter paper III). This may indicate that the outer source structure of DDRGs is no longer supplied with energy from the AGN via jets.
The main aim of this paper is to show that the observed properties of the DDRGs are consistent with a model in which the jet production mechanism in the AGN in the centre of the host galaxy of these objects is interrupted and then restarted. During the first phase of activity the outer radio structure is inflated while after the disruption the inner structure is created. The formation of the inner structure of the aligned DDRGs is only possible if some material from the outside of the outer cocoon has penetrated the cocoon boundary and forms a rather smooth density distribution within the region of the outer cocoon. We show that this material is most likely provided by the remains of warm, dense clouds which form one phase in the Inter Galactic Medium (IGM) surrounding the DDRGs and are slowly dispersed by the effects induced by the cocoon material streaming along their surface.
In Sect 2 we briefly review the properties of the gaseous environment of FRII radio sources. Section 3.1 discusses a possible mechanism for the restarting of the jet flow and its implications for X-shaped radio sources. A brief review of the analytical model used for analyzing the radio properties of the DDRGs is given in Sect. 3.2. This model is then applied to the outer, Sect. 3.3, and the inner source structure, Sect. 3.4. In this Sect. we also show that some material from the outside of the outer cocoon must have penetrated into this region. Section 4 reviews possible mechanisms for achieving this contamination. Several implications that this contamination of the cocoon has for observable properties of the DDRGs will be discussed in Sect. 5. The optical and UV continuum emission of the hosts of FRII sources at high redshift ($`z>0.6`$) has been found to be aligned with the radio source axis (for a review see McCarthy 1993). In Sect. 6 the implications for the alignment effect in FRII sources of the most likely contamination process, the shredding by the bow shock of warm, dense clouds embedded in the otherwise hot IGM is investigated.
Throughout this paper we assume $`H_o=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_o=0.5`$.
## 2 The environments of FRIIs
The radio structures of radio galaxies are embedded in the Inter Stellar Medium (ISM) of their host galaxies and, in the case of large linear sizes, the IGM in between galaxies. Extended optical line emission observed to be associated with the hosts of powerful radio galaxies at low redshift implies the existence of warm gas ($`T_{cl}\approx 10^4`$ K) on scales of 10 kpc. Baum & Heckman (1989) find that the density of this gas is at least 0.1 cm<sup>-3</sup>. This assumes a volume filling factor for this material, $`f_{cl}`$, close to unity. This density is a lower limit since for a ‘clumpy’ distribution of the line emitting material, i.e. smaller filling factors, higher densities are required. The gas densities and the corresponding filling factors of the gas clouds providing the extended optical emission observed in FRIIs are difficult to constrain because of the poorly constrained properties of the ionizing radiation which illuminates the gas in radio galaxies and which originates in the central AGN and/or shocks in the hot IGM. The gas density can in some cases be directly inferred from the flux ratios of density-dependent emission lines, e.g. \[S ii\] 6717, 6732. For example, van Breugel et al. (1985) find gas densities of $`300`$ cm<sup>-3</sup> for the line-emitting clouds in the environment of the radio source 3C 277.3 and Heckman et al. (1989) find comparable densities for the emission line regions of galaxies at low redshift associated with cooling flows.
At high redshift ($`z\gtrsim 0.6`$) the extended optical and UV emission, line and continuum, of the host galaxies of FRII radio sources is often aligned with the axis of the radio structure (for a review see McCarthy 1993). These structures can extend over several 100 kpc. Assuming that the aligned optical line emission in radio-loud quasars is caused by the ionisation of warm gas by the central AGN, the ionizing radiation of which can be to some extent constrained from X-ray and UV observations, Heckman et al. (1991) find densities for the warm material of typically 100 cm<sup>-3</sup> while the volume filling factors, $`f_{cl}`$, are estimated at $`10^{-8}`$ to $`10^{-7}`$. These low values imply the existence of warm, dense clouds embedded in, and in pressure equilibrium with, the hot, X-ray emitting phase of the IGM. The existence of such clouds also in low redshift radio galaxies is consistent with the observations (Baum & Heckman 1989), if the volume filling factor of these clouds is roughly 10<sup>-3</sup> on scales of 10 kpc and lower on larger scales in these objects. Note, however, the different distances from the center of the host galaxy at which the line emission is observed in objects at low and at high redshifts. It is not clear whether the properties of the environments of FRIIs at low and at high redshift are similar in this way.
Since all currently known DDRGs are large radio sources with linear sizes $`\gtrsim 700`$ kpc, the properties of the IGM rather than those of the ISM determine the evolution and appearance of the outer source structures of these objects. The density of the hot IGM ($`T_x\approx 10^7`$ K) at distances on a scale of 100 kpc can in principle be inferred from its thermal bremsstrahlung emission in X-rays. On the properties of the IGM on scales of Mpc, which are comparable to the size of the outer structures of the DDRGs, we have no observational constraints in these sources and we therefore have to estimate these quantities. FRII radio sources at low redshift are often found in poor groups of galaxies (e.g. Prestage & Peacock 1988) and X-ray observations suggest that the density distribution of the gas in such environments is well described by a King (1972) profile with $`n_o=10^{-2}`$ cm<sup>-3</sup>, $`a_o=10`$ kpc and $`\beta _{King}=0.5`$ (Willott et al. 1999, based on X-ray data presented by Mulchaey & Zabludoff 1998). We will assume in the following sections that the outer lobes of the DDRGs are embedded in a hot IGM following such a density profile.
The observation of warm gas at large distances from the host galaxies of radio sources implies that the surrounding IGM is a two-phase medium. For the evolution of the large scale radio structure of FRIIs only the hot ($`T_x\approx 10^7`$ K) phase is important. This can be seen as follows.
The expansion of the large scale structure of FRII radio galaxies is confined by the ram pressure of the material surrounding it. The warm clouds will therefore be dynamically unimportant for the expansion of the bow shock and cocoon of FRII sources as long as their volume filling factor in the IGM is smaller than $`\left(n_x/n_{cl}\right)^{1/2}`$, where $`n_x`$ is the density of the hot phase of the IGM and $`n_{cl}`$ is the density of the warm clouds (Begelman & Cioffi 1989). For the warm clouds to be dynamically stable they must be in pressure equilibrium with the hot gas and from this, assuming ideal gas conditions, we find $`f_{cl}<\left(T_{cl}/T_x\right)^{1/2}\approx 0.03`$. This will hold in virtually every radio galaxy (but see for example McCarthy, van Breugel & Kapahi 1991). The expansion velocity of the outer cocoon in DDRGs is therefore determined by the density distribution of the hot IGM alone.
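As a quick numerical illustration of this criterion, the following minimal sketch evaluates the limiting filling factor for the fiducial temperatures quoted above:

```python
import math

# Fiducial temperatures of the two phases of the IGM quoted in the text.
T_cl = 1e4   # warm clouds [K]
T_x = 1e7    # hot phase [K]

# Pressure equilibrium of an ideal gas gives n_x / n_cl = T_cl / T_x, so the
# Begelman & Cioffi (1989) criterion f_cl < (n_x / n_cl)^(1/2) becomes:
f_cl_max = math.sqrt(T_cl / T_x)
print(f"clouds are dynamically unimportant for f_cl < {f_cl_max:.2f}")  # ~0.03
```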
## 3 The evolution of DDRGs
In this section we present a model for the evolution of the peculiar radio structures of DDRGs. This evolution is crucially influenced by the properties of the source environment outlined above. In this model we take the double-double appearance of the DDRGs and the absence of hot spots in most of the outer cocoons as evidence that the jet activity in these sources first inflates the outer or ‘old’ cocoon, is then stopped by some mechanism disrupting the jet production process in the central AGN and finally restarts causing the formation of the inner or ‘new’ cocoon.
Of the seven aligned DDRGs presented in paper I we exclude 3C 445 and 3C 219 from the analysis described in the following. In the case of 3C 445 sufficiently accurate radio flux measurements of the inner structure are not available. The exceptionally high asymmetry of the inner source structure of 3C 219 sets it apart from the other sources considered here. This suggests that a process different from the one described below is causing the formation of the double-double structure in this object (see also paper I).
### 3.1 Restarting of the jet flow
The high degree of symmetry of the inner source structures of the remaining five aligned DDRGs about the cores of the sources suggests that the jet flow must have been restarted on both sides of the sources simultaneously. The conclusions derived from the model presented here do not depend on the exact mechanism(s) that cause the disruption of the old jets and we are therefore unable to constrain them. For a brief discussion of the possibilities we refer the reader to paper I. Here we only present some further remarks on one possible scenario, the recent infall of a large amount of gas onto the AGN.
If the restarting of the jets is caused by the infall of a large mass of gas onto the AGN, there is no reason why the angular momentum vector of the new material should have a direction similar to that of the material in the original disk defining the direction of jet propagation. This may imply that in general the direction of the new jets deviates considerably from the old radio axis. In this case the new jets will quickly propagate through a part of the old cocoon and then inflate a new cocoon within the same environment as the old jets. The radio emission of the old cocoon will fade quickly, as will be described in Sect. 3.3, but for some time four radio lobes will be observable; two with hot spots at their ends and two without. This is reminiscent of the so-called X-shaped or winged radio galaxies (e.g. Leahy & Williams 1984). In this scenario the aligned DDRGs of paper I are simply those sources in which the angular momentum vector of the infalling material is not very different from that of the existing accretion disk. Dennett-Thorpe et al. (1998) note that winged radio sources have radio luminosities close to the FRI/FRII break ($`5\times 10^{25}`$ W Hz<sup>-1</sup>, Fanaroff & Riley 1974). In paper I it is shown that this is also the case for the DDRGs with the exception of B 1834+620.
Whether the rate at which the black holes in AGN accrete and in the process launch jets is limited by the availability of fuel or otherwise is unclear. In any case it is unlikely that the efficiency of the jet production mechanism will decrease if an additional gas supply becomes available to the accretion disk and other parameters like the mass and spin of the black hole do not change. If the spin of the central black hole is responsible for the jet production (e.g. Blandford & Znajek 1977) then we expect that the new jets forming after disruption of the jet flow will have the same power as the old jets.
### 3.2 The evolution of FRII radio sources
To investigate the radio properties of the DDRGs we will use the dynamical model for FRII sources by Kaiser & Alexander (1997, hereafter KA) with the extension by Kaiser, Dennett-Thorpe & Alexander (1997, hereafter KDA) which allows the calculation of the radio luminosity of these objects as a function of their physical, linear size, $`D`$.
The observation that the optical identifications of the DDRGs are extended and the absence of broad lines in the optical spectra of the host galaxies (paper I) strongly suggest that the radio axes of the DDRGs lie close to the plane of the sky. We note that the sources 3C 445 and 3C 219 are broad-line objects and thus possibly oriented differently. Since we do not take these two sources into account here, we will assume that the projected linear sizes of the outer source structures are indeed equal to their physical sizes. The inner source structures are very closely aligned with the outer structures. Although this may be a projection effect and the radio axes of the inner and outer structure may form a rather large angle, it is unlikely that all the DDRGs found until now conspire to produce the apparent very close alignment of the two parts of the source. We therefore assume in this paper that our viewing angle of the inner source structures is also very close to 90°. In any case, all DDRGs are radio galaxies and orientation unification schemes suggest that the smallest viewing angle for a radio galaxy is 45° (e.g. Barthel 1989), which implies a maximum error in the physical linear sizes of a factor of $`\sqrt{2}`$.
The models of KA and KDA are based on the assumption of a constant energy transport rate, $`Q_o`$, from the core of the radio galaxy via the jets to the cocoon. The jets end in strong shocks which can be identified with the radio hot spots and the jet material subsequently inflates the radio cocoon. The expansion of the cocoon is supersonic with respect to the surrounding material and therefore drives a bow shock into this gas (Scheuer 1974). The jets are confined by the pressure in the cocoon. Falle (1991) showed that the expansion of the bow shock should be self-similar and the model presented in KA predicts self-similar growth of the cocoon as well. This is supported by observations (e.g. Leahy & Williams 1984).
The model of KA requires the density distribution external to the radio cocoon to be modeled by a power law, $`n_x=n_o(r/a_o)^{-\beta }`$, where $`n_o`$ is the density at a distance $`r`$ of one core radius, $`a_o`$, from the centre of the radio galaxy. This power law is a good approximation to a King (1972) profile with central density $`n_o`$ and $`\beta _{King}=\beta /3`$ outside a few core radii. With this assumption the age of a radio source of linear size $`D`$ is given by
$$t=\left(\frac{D}{2c_1}\right)^{\frac{5-\beta }{3}}\left(\frac{m_pn_oa_o^\beta }{Q_o}\right)^{\frac{1}{3}},$$
(1)
where $`m_p`$ is the mass of a proton and $`c_1`$ is a dimensionless constant (see KA). For the pressure within the cocoon, $`p_c`$, KA find
$$p_c\propto t^{(-4-\beta )/(5-\beta )}.$$
(2)
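As an illustration of how Eqs. (1) and (2) are used below, the following sketch evaluates the source age and the pressure scaling for one hypothetical parameter set; the jet power, linear size and the value of the dimensionless constant $`c_1`$ (which depends on the cocoon geometry, see KA) are assumptions chosen purely for this example.

```python
# Physical constants and unit conversions (SI).
m_p = 1.67e-27       # proton mass [kg]
kpc = 3.086e19       # [m]
Myr = 3.156e13       # [s]

# Hypothetical parameters for illustration only.
D = 1000.0 * kpc     # total linear size of the outer structure [m]
Q_o = 1e38           # jet power [W]
n_o = 1e-2 * 1e6     # central density: 10^-2 cm^-3 converted to m^-3
a_o = 10.0 * kpc     # core radius [m]
beta = 1.5           # exponent of the external density power law
c_1 = 1.0            # dimensionless constant of the KA model, taken to be of order unity here

# Eq. (1): age of a source of linear size D.
t = (D / (2.0 * c_1))**((5.0 - beta) / 3.0) * (m_p * n_o * a_o**beta / Q_o)**(1.0 / 3.0)
print(f"age t ~ {t / Myr:.0f} Myr")   # of order 10^8 yr for these particular numbers

# Eq. (2): the cocoon pressure falls off as a power law of the age.
print(f"p_c proportional to t^{(-4.0 - beta) / (5.0 - beta):.2f}")
```

The absolute age scales only weakly with the assumed jet power and density normalisation (both enter with the power 1/3), so the order of magnitude is robust against moderate changes of these illustrative values.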
KDA develop a more sophisticated model of the cocoon which follows the evolution of the population of relativistic electrons responsible for the radio emission via the synchrotron process in the different parts of the cocoon under the influence of energy loss processes. These include adiabatic expansion, synchrotron radiation and inverse Compton scattering of the cosmic microwave background radiation.
The model of KDA for the radio luminosity as a function of linear size or age of the radio source also depends on the aspect ratio of the cocoon, $`R`$, the Lorentz factor of the bulk velocity of the jet material, $`\gamma _j`$, and the index of the energy distribution of the relativistic electrons at the time of their injection into the cocoon, $`p`$. Available low frequency radio maps indicate that $`R=3`$, the median value found by Leahy & Williams (1984) in their sample of FRII sources, is reasonable for all the sources discussed here (see paper I). The aspect ratio of the cocoon is determined by the ratio of the pressure in the hot spot region and that within the cocoon, $`p_h/p_c`$ (see KA). Kaiser & Alexander (1998) show that the assumption of ram pressure confinement of the cocoon perpendicular to the jet axis overpredicts the value of $`p_h/p_c`$ and we use their empirical fitting formulae instead. From this for $`R=3`$ we find $`p_h/p_c\approx 8`$ if $`\beta =1.5`$ and $`p_h/p_c\approx 21.4`$ if $`\beta =0`$.
We assume only mildly relativistic flow in the jet, $`\gamma _j=2`$, and for this Heavens & Drury (1988) show that $`p=2.14`$ at the jet shock. We set the ratios of specific heats of the jet material and of the IGM surrounding the source to $`5/3`$ while those of the cocoon material and of the energy density of the completely tangled magnetic field within the cocoon are set to $`4/3`$. We also follow KDA in assuming that the power law initially describing the energy distribution of the relativistic electrons in the cocoon extends to thermal energies and that the jet consists entirely of pair plasma. The inclusion of protons in the jets changes the absolute values of the quantities calculated in the following sections but has no influence on any of the conclusions of this paper.
### 3.3 The evolution of the old cocoon
The absence of hot spots in some of the outer cocoons of the DDRGs implies that in these cases the old jets are no longer active. Therefore the model discussed in the previous section is not directly applicable to DDRGs since it assumes a constant supply of energy to the cocoon by the jets. For simplicity we assume that the jet power, $`Q_o`$, drops instantaneously to zero once the jet production mechanism is disrupted. The last jet material accelerated by the AGN just before the interruption occurs takes a time $`t_t`$ to reach the hot spots in the old cocoon. Only after the last jet material has passed through the old hot spots will the evolution of the old cocoon start to deviate from the prediction of the models of KA and KDA. Because of the relativistic bulk speeds in extragalactic jets, we set $`t_t\approx D/2c`$, where $`D`$ is the total linear size of the outer cocoons and $`c`$ is the speed of light.
We will assume that the evolution of the pressure of the material in the old cocoons, and therefore also that of the energy density of the magnetic field in this region, during the time we are interested in is given by the power law derived by KA (Eq. 2), even after the last jet material has reached the old cocoon. The information that the jets have ceased to supply the old cocoon with energy will travel through the old cocoon at the local sound speed. After roughly one sound crossing time the entire cocoon will continue to grow but now this expansion is adiabatic since there is no further energy input into the old cocoon. In the following we will justify this assumption by showing that the sound crossing time for each of the sources discussed here by far exceeds the time elapsed from the moment the last jet material reached the old cocoon until the time at which the source is observed. Because the energy supply to the old cocoons has stopped, the pressure in the old cocoons will in reality decrease faster than assumed here, and this analysis therefore formally represents only an upper limit.
The assumption about the pressure in the old cocoon also implies that the evolution of the overall size of the cocoons during this time is indistinguishable from that of cocoons which are still supplied with energy by their jets. The total age of the source is thus given by Eq. (1).
Once the last jet material has passed through the jet shocks at the end of the old jets, the hot spots in the old cocoon will start to disperse and blend into the rest of the cocoon material at the local sound speed, $`c_s`$. Using the model of KA we find that $`c_s`$ is typically of order $`0.5c`$ in the hot spot region. For an upper limit of the hot spot radius of 10 kpc we find that the hot spot will disappear roughly within $`7\times 10^4`$ yr; a fraction of the time it takes the material in the jets of giant radio sources to travel from the core to the hot spots. We can therefore neglect this time and we will assume that the hot spots of the old cocoon disperse instantaneously once the last jet material has passed through them.
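The dispersal time quoted above follows from a one-line estimate (a sketch using the hot spot radius and sound speed assumed in the text):

```python
kpc = 3.086e19     # [m]
yr = 3.156e7       # [s]
c = 3.0e8          # speed of light [m/s]

r_hs = 10.0 * kpc  # upper limit on the hot spot radius [m]
c_s = 0.5 * c      # local sound speed in the hot spot region [m/s]

print(f"hot spot dispersal time ~ {r_hs / c_s / yr:.1e} yr")  # ~7e4 yr
```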
The relativistic electrons and possibly positrons responsible for the synchrotron emission observed in the cocoons of FRII radio sources are probably accelerated in the strong shocks at the end of the jets (e.g. Hargrave & Ryle 1974). Clearly this acceleration process stops once the last jet material reaches the old cocoons. The relativistic particles that were accelerated until this time will lose their energy because of the energy loss processes discussed by KDA. This model allows one to calculate the radio emission of parts of the cocoon identified by their injection time into the cocoon and then add up these various contributions by integrating over all injection times. In our case, the integration is stopped at the injection time of the last jet material. With the assumptions made above this allows us to determine, for a given jet power, $`Q_o`$, by what factor $`\mathrm{\Delta }`$ the radio luminosity of the old cocoons has dropped during the time interval $`t_d`$ from the moment the last jet material is injected into the old cocoons until the time at which we observe the source. A lower limit for the jet power, $`Q_{o,min}`$, is given by the case in which no dimming has taken place and we observe the source at a time when the last jet material has not yet reached the old hot spots. Lower values for $`Q_o`$ are not possible since these would imply a source luminosity below that observed even without any dimming.
For the assumptions given in Sect. 3.2 results for the minimum jet power, $`Q_{o,min}`$, and the corresponding maximum age of the outer cocoon, $`t_{o,max}`$, for the external density profile of poor groups are given in Tab. 1. Note that for B 1240+389 $`Q_{o,min}`$ is less than $`10^{37}`$ W which is roughly the dividing line between low power FRI-type sources and the more powerful FRIIs (Rawlings & Saunders 1991, KA). However, it is very likely that the old cocoon of B 1240+389 has dimmed and that the power of its old jets therefore was greater than $`10^{37}`$ W (see Section 3.4).
### 3.4 The evolution of the new cocoon
The inner source structure of all the DDRGs discussed here is fairly symmetrical about the core of the respective source. The difference of the lengths of the inner radio lobes on the two sides of a given source (paper I) is comparable with typical values found for radio galaxies without double-double structure (McCarthy, van Breugel & Kapahi 1991). This implies a density distribution in their surroundings as smooth as that of sources embedded in the unperturbed IGM. If the material in this region responsible for the development of jet shocks in the inner source structures were clumpy, we would expect the new jets to increase in luminosity if they encounter a dense clump of gas; the propagation of the hot spots is correspondingly slowed down. In all the DDRGs discussed here hot spots are detected on both sides of the inner source structure at similar distances from the core of the source and the ratio of the radio luminosity of the two sides (paper I) is also within the limits found for FRII radio galaxies without double-double morphology (McCarthy et al. 1991). The ratio of the luminosities for B 1450+333 ($`\approx 6`$) is somewhat higher than in the other sources. Note, however, that in this source the lengths of the two sides of the inner source structure are almost identical. The armlength ratio is 1.06. We therefore consider it likely that the environment of the inner source structures of DDRGs can also be modeled by a power law density distribution similar to what is assumed for ‘normal’ FRIIs (see KA).
For all five aligned DDRGs the direction of the new jets is within 7° of the old jet axis. This implies that in each object the inner structure is expanding in a region occupied by the material of the old cocoons. The new jets all end in hot spots and the gas surrounding the inner source structures must therefore be dense enough to cause the formation of strong jet shocks.
Clarke & Burns (1991) present numerical simulations of a restarting jet. They find that their simulated jet develops a shock within the region of the old cocoon indicating that the gas within the old cocoon is dense enough to prevent the new jet becoming ‘quasi-ballistic’, i.e. with only a negligible pressure discontinuity (shock) at its end. The shock they observe in their simulations may be strong enough to cause the inner source structure of the aligned DDRGs discussed here.
Although Clarke & Burns find that mechanical instabilities along the cocoon boundary begin to grow after the jet is ‘switched’ off, the gas causing the formation of a jet shock is mainly that transported by the old jet during its activity. They find the resulting density contrast of the gas in the cocoon and the uniform density the old jet expanded into to be roughly $`1/40`$. The model of FRII sources used here assumes that no mixing of material across the contact discontinuity delineating the cocoon of these objects takes place. For this case the only material within the old cocoons of the DDRGs is the jet material which has passed through the jet shock. The density in the old cocoon in the case of a uniform distribution is given by the model as
$$n_c=\frac{Q_o\left(t_o-t_d\right)}{\left(\gamma _j-1\right)m_pc^2V_c},$$
(3)
where $`t_o`$ is the total age of the source given by Eq. (1), $`t_d`$ is the ‘dimming time’, i.e. the time elapsed between the arrival of the last jet material at the old cocoon and the time of observation, and $`V_c`$ is the volume of the old cocoon. In order to be able to compare the density given by Eq. (3) with the density of the unperturbed IGM we have assumed that all the particles within the old cocoon are protons. This is contrary to our assumption that the jets consist of electrons and positrons only. However, since all model predictions for the evolution of the inner source structure depend only on the mass density and not on the particle density in the old cocoon, we can use Eq. (3) for purposes of comparison in its given form. If we assume that the old cocoon expanded in a uniform density environment ($`n_o\approx 10^{-2}`$ cm<sup>-3</sup>, $`\beta =0`$) as in the numerical simulations of Clarke & Burns (1991), then we find from Eq. (3) that our model predicts density contrasts of the gas in the cocoon and the ambient medium comparable to those found in the simulations for short lengths of the old cocoon (of order 1 kpc). This is consistent with the numerical simulations since they only extend to short life times equivalent to short lengths of the jet.
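To illustrate the order of magnitude implied by Eq. (3) for a large DDRG, the sketch below evaluates it for a hypothetical parameter set; the jet power, age, dimming time and the simple cylindrical cocoon geometry are assumptions of this example, not fitted values for any particular source.

```python
import math

# Constants and unit conversions (SI).
m_p = 1.67e-27     # proton mass [kg]
c = 3.0e8          # speed of light [m/s]
kpc = 3.086e19     # [m]
Myr = 3.156e13     # [s]

# Hypothetical parameters for illustration only.
Q_o = 1e38         # power of the old jets [W]
t_o = 100.0 * Myr  # total source age [s]
t_d = 5.0 * Myr    # dimming time [s]
gamma_j = 2.0      # bulk Lorentz factor of the jet material
D = 1000.0 * kpc   # length of the outer cocoon [m]
R = 3.0            # aspect ratio; taken here as lobe length / lobe width
V_c = math.pi * (D / (4.0 * R))**2 * D   # volume of a simple cylindrical cocoon [m^3]

# Eq. (3): mean particle density of the old jet material spread over the cocoon,
# counting all particles as protons for comparison purposes (as in the text).
n_c = Q_o * (t_o - t_d) / ((gamma_j - 1.0) * m_p * c**2 * V_c)
print(f"n_c ~ {n_c * 1e-6:.1e} cm^-3")
```

For these particular numbers the density comes out several orders of magnitude below both the ambient IGM density and the values required for the environments of the inner lobes derived later in this section, in line with the argument made below that the old jet material alone cannot fill the outer cocoon.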
Clarke & Burns (1991) also point out that the formation of relatively strong shocks at the end of the inner jets should be accompanied by the formation of a bow shock of similar strength within the old cocoon material in front of the hot spots. Since the old cocoon volume is filled with magnetic fields this bow shock could be visible due to synchrotron emission if electrons are accelerated to relativistic velocities by the bow shock. In none of the sources discussed here has a distinct bow shock around the inner source structure been observed. This may indicate that the acceleration of particles to relativistic velocities at this bow shock is not efficient. Note here that the strength of the bow shock around the inner source decreases away from the hot spots. This implies that although the shock ending the inner jet is strong enough to accelerate particles ‘lighting up’ the cocoon of the inner sources, the associated bow shock may be too weak along most of its length to produce enough relativistic particles for it to be detectable. However, the non-detection of any emission from this bow shock is somewhat puzzling.
In the case of short life times of the old jets before the jet flow is interrupted and of uniform density distributions of the external gas we showed above that the gas in the region of the old cocoon may indeed be dense enough to force the formation of significant shocks at the end of the new jets. However, for the profile of the density in the environment used here and for the considerably larger life times of the old jets derived in the previous section, Eq. (3) predicts much smaller density contrasts. We will show in the following that the density in the region of the old cocoon created by the material transported by the old jets alone is insufficient for the formation of strong shocks at the end of the new jets in DDRGs.
Numerical simulations predict the presence of backflow in the cocoon (e.g. Norman et al. 1982) which implies that most of the material within the cocoon will ‘pile up’ towards the core of the source. This suggests that the new jets are propagating along a negative density gradient, the exact shape of which is difficult to determine. However, an upper limit for the density in front of the hot spots of the new source is given by the assumption that all the jet material which has passed through the old jets during their life time is uniformly distributed over the volume of the old cocoon. If, within the confines of the inner source, the same amount of material is arranged in such a way that it forms a monotonically decreasing density profile, the density in front of the hot spots of the inner source will always be less than in this limiting case. For calculation of the density in the region of the old cocoon in this limiting case we can therefore use Eq. (3). The solid lines in Fig. 1 show the results as a function of the power of the old jets. We did not consider jet powers above a value for which the radio luminosity at 178 MHz of the respective source would reach $`2\times 10^{27}`$ W Hz<sup>-1</sup> sr<sup>-1</sup>, if no dimming of the emission had taken place. This is the luminosity of the most luminous source with a linear size greater than 700 kpc, 3C 292, in the sample of Laing, Riley & Longair (1983) which includes the most luminous radio sources in the observable universe at any redshift.
Analogous to the analysis of the old cocoon, we can use the model of KDA to determine the density of a uniform environment, $`n_{oi}`$, required to produce the observed linear sizes and radio fluxes of the inner sources for a given jet power. We use the same source parameters as for the old cocoon except for the exponent of the external density, which is $`\beta _i=0`$ in a uniform environment (note that the exponent of the power law density distribution surrounding the inner source structure, $`\beta _i`$, is in general not equal to $`\beta `$, the equivalent exponent of the material the outer cocoon is embedded in). The dashed lines in Fig. 1 show $`n_{oi}`$ as a function of the power of the inner jets. The cut-off at high jet powers for these lines in the figure is given by the limit that the inner source cannot expand faster than the speed of light. It is now also possible to calculate the age of the inner source, $`t_i`$, as a function of the jet power of the inner source. This is shown as the dashed lines in Fig. 2.
The time during which the AGN was quiescent, $`t_{off}`$, is given by
$$t_{off}=t_t+t_d-t_i,$$
(4)
where $`t_t`$ is the time it takes the last jet material to reach the end of the old cocoon and $`t_d`$ is the time during which the old cocoon must have dimmed for a given power of the old jets in order to explain the currently observed properties of the outer radio lobes. The solid lines in Fig. 2 represent the sum of $`t_t`$ and $`t_d`$ as a function of the power of the old jet. The turn over in these curves is caused by a change in the dominant energy loss process for the relativistic electrons in the old cocoon. For lower jet powers this is the inverse Compton scattering of the CMBR while at higher $`Q_o`$ losses due to synchrotron radiation become more important. The maximum dimming time, $`t_{d,max}`$, for each source is given by the peak of the solid curves in Fig. 2 minus $`t_t`$. They are summarized in Tab. 1.
$`t_{off}`$ must always be greater than or equal to zero and this implies that $`t_i\le t_t+t_{d,max}`$. This condition defines a minimum jet power for the new jets which is shown as the cut-off of the dashed lines in Fig. 1 at low jet powers for two of the DDRGs. For the other three sources this cut-off is below $`10^{37}`$ W which is roughly the jet power where the transition from FRII to FRI-type sources occurs (Rawlings & Saunders 1991, KA).
From Fig. 1 it is clear that in the cases of B 0925+420, B 1240+389 and 4C 26.35 the properties of the inner source structure could be explained by the material transported by the old jet during its life time alone. B 1240+389 and 4C 26.35 are also the sources with the smallest outer structures in our sample for which Eq. (3) predicts the highest densities in the old cocoon. Note, however, that for all three objects this requires the somewhat favorable assumption that this material is distributed uniformly. It also implies that even if the power of the new jets in B 1240+389 and 4C 26.35 is lower by a factor of 5 to 10 than that of the old jets, the current expansion velocity of the inner structure is very close to the speed of light, which is unlikely. We therefore conclude that the old cocoons of the DDRGs cannot be filled solely with the old jet material but that they must be ‘dirty’ in the sense that some other material must have penetrated the cocoon boundary from the outside.
If the power of the new jets in B 1450+333 is equal to that of the old jets, then the observations of this source are consistent with the old cocoon still being supplied with energy. This would imply the existence of hot spots in the old cocoon of this source. The observational evidence is unclear, but a rather diffuse hot spot may have been detected in the southern radio lobe (paper I). In this scenario we find $`t_{off}\approx 10^6`$ years for B 1450+333. Assuming that the power of the inner jets, $`Q_o`$, is equal to that of the old jets as suggested in Sect. 3.1 and that $`t_{off}\approx 1`$ Myr for all sources, we can calculate the $`Q_o`$ required for each source from the observed properties of the inner and outer source structure. Tab. 2 lists the results for the assumption of a uniform density in the region of the old cocoon along with the required densities and resulting source ages. The assumption of $`t_{off}=1`$ Myr gives plausible results for $`Q_o`$ but other values for $`t_{off}`$ may be reasonable as well. B 1834+620 shows a radio hot spot at one end of its old cocoon but not at the other (paper III). This implies that the radio emission of the old cocoon in this source has not dimmed significantly, and the small value of $`t_d`$ listed in Tab. 2 for the assumption of $`t_{off}=10^6`$ years for this source is, within the limitations of the model, consistent with this.
The sound crossing time for the old cocoon, $`t_{sc}`$, is also given in Tab. 2. For all five sources the time during which the radio emission of the cocoon has dimmed is of order a few percent of the sound crossing time. Our assumption that most of the old cocoon continues to dynamically evolve during $`t_d`$ as if the jets were still supplying it with energy is therefore justified.
## 4 ‘Contaminating’ the old cocoon
Several possibilities for the contamination of the cocoon with material from the outside exist. In this section we consider entrainment of material into the old cocoon across the contact discontinuity by Kelvin-Helmholtz and/or Rayleigh-Taylor instabilities, the replacement of the old cocoon by the surrounding IGM by buoyancy and the disruption and dispersion by the bow shock of warm, dense clouds embedded in the IGM.
### 4.1 Entrainment across the contact discontinuity
Numerical simulations indicate that the contact discontinuity delineating the cocoon is stable against hydrodynamic instabilities if the backflow within the cocoon is supersonic with respect to the ambient medium (Norman et al. 1982). In the simulations the backflow is found to be initially supersonic whenever the bulk velocities in the jets inflating the cocoon are highly supersonic with respect to the speed of sound in the unperturbed IGM. For the five DDRGs and the assumed isothermal density profile of poor groups of galaxies we find that even if the power of the old jets is close to their lower limit, $`Q_{o,min}`$, their Mach numbers just before they stop supplying energy to the old cocoon are equal to or greater than about 5.
In FRII sources which do not show large off-axis gas flow within their cocoon the backflow of gas must eventually be decelerated to velocities below the external sound speed (Norman et al. 1982). This may lead to the development of fluid instabilities along the contact discontinuity of the inner cocoon. This is also seen in numerical simulations. However, most simulations are confined to a two dimensional treatment of the problem and include reflective boundary conditions at the limits of the computational grid where we would expect the instabilities to be strongest. The predictions of numerical simulations in this flow region should therefore be treated with caution. A simple analytical estimate of the time scale for the growth of Kelvin-Helmholtz and Rayleigh-Taylor instabilities of length scale $`l`$ at the boundary of two fluids with large density contrast $`\chi `$ and relative velocity $`v_{rel}`$ is given by $`t_{KH}\approx \sqrt{\chi }l/v_{rel}`$ and $`t_{RT}\approx \sqrt{l/g}`$, where $`g`$ is the acceleration of one fluid with respect to the other (Chandrasekhar 1961). To replace large parts of the cocoon of FRII sources large-scale instabilities would be most efficient. However, for $`l`$ comparable to the cocoon size the instability growth is slow, and the mixing of material from the ambient IGM with the gas in the cocoon on small scales, say $`l\approx 1`$ kpc, may be sufficient if it proceeds fast enough. We have mentioned already that the relative velocity for the two gas streams for fluid instabilities to grow must be of the order of or smaller than the sound speed in the unshocked IGM, which for $`T_x\approx 10^7`$ K is about 370 km s<sup>-1</sup>. From the previous section we note that $`\chi \approx 10^4`$ for the outer cocoon at a few 100 kpc from the centre of the assumed density distribution of the unshocked IGM. From this we find that $`t_{KH}\approx 2\times 10^8`$ years which is longer than the life time of most of the outer cocoons in the sources discussed here. Kaiser & Alexander (1999) show that there should be a backflow not only within the cocoon but also in the shocked IGM in between bow shock and contact discontinuity. Both flows show similar velocities and decelerate on similar length scales. This implies that $`g`$, the acceleration of the flow within the cocoon with respect to that of the shocked IGM, is small and $`t_{RT}\gg t_{KH}`$. Entrainment of dense material from the IGM into the old cocoon across the contact discontinuity even in the regions of the cocoon where the backflow velocity is not supersonic with respect to the unshocked IGM should therefore be rather inefficient. This makes it unlikely that the additional material in the old cocoons has been entrained.
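A minimal numerical version of the Kelvin-Helmholtz estimate used above (the length scale, density contrast and relative velocity are the fiducial values from the text):

```python
import math

kpc = 3.086e19   # [m]
yr = 3.156e7     # [s]

l = 1.0 * kpc    # length scale of the instability [m]
chi = 1e4        # density contrast between IGM and cocoon material
v_rel = 370e3    # relative velocity ~ sound speed of the unshocked IGM [m/s]

t_KH = math.sqrt(chi) * l / v_rel   # growth time estimate (Chandrasekhar 1961)
print(f"t_KH ~ {t_KH / yr:.1e} yr")  # a few 10^8 yr, longer than most outer-cocoon lifetimes
```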
### 4.2 Replacement of the old cocoon by the IGM
During most of the life time of FRII sources the cocoon will be overpressured with respect to the surrounding IGM. The expansion of the cocoon will be supersonic and therefore it will drive a strong bow shock into the IGM. However, the pressure within the cocoon decreases with time and will eventually become comparable to the pressure of the ambient medium and the bow shock will vanish. If the IGM in the vicinity of the cocoon is isothermal with a density distribution close to a King (1972) profile, this will first occur close to the centre of the radio source. Once the cocoon is no longer protected by its bow shock over its entire length, buoyancy will set in and the denser IGM will push the lighter material filling the cocoon outwards thereby starting to replace it in the inner regions of the cocoon. Assuming the minimum jet power for each source given in Tab. 1, the model of KA predicts that for a temperature of $`10^7`$ K for the IGM the pressure in the old cocoons of all of the DDRGs discussed here, except in that of B 1834+620, should be slightly lower than about half the value of the ambient pressure close to the core of the respective source. In the case of B 1834+620 the old cocoon should still be overpressured with respect to the IGM over its entire length. The low value for the pressure in the old cocoon in the other sources implies that their cocoons may be in direct contact with the unshocked IGM close to the core. However, this estimate depends strongly on the assumed temperature of the IGM and also on whether this material really is isothermal. If the power of the old jets is higher than their minimum value used here, then their old cocoons will still be overpressured and they will still be surrounded by a bow shock over their entire lengths.
The replacement of the cocoon material by the IGM due to buoyancy proceeds at the sound speed of the IGM and is therefore a slow process. For the temperature of the IGM assumed here, the relevant sound speed is of order 370 km s<sup>-1</sup>. At this speed we find that the replacement of a cylindrical volume of 100 kpc length and the width of the old cocoon by the IGM takes of order 100 Myr. This is close to the maximum ages of the old cocoons (see Tab. 1). This implies that there is enough time for the IGM to replace a considerable fraction of the volume of the old cocoon only if the replacement started early in the evolution of the radio source. Of course, a higher temperature of the IGM causing a higher sound speed in this material would shorten the relevant time scales. Although in principle this mechanism may therefore be fast enough, we present further arguments against the replacement of large fractions of the cocoon material by the IGM in the following.
It is not straightforward to determine the density distribution in the region of the old cocoon resulting from such a buoyant replacement of the old cocoon material. We can, however, make two very crude approximations. If the IGM has had enough time to replace the old cocoon material and settle into an equilibrium configuration, its density distribution may closely resemble that of the unperturbed IGM. In this case the inner sources in the aligned DDRGs are embedded in essentially the same environment as the old cocoons with $`n_{oi}=10^{-2}`$ cm<sup>-3</sup>, $`a_o=10`$ kpc and $`\beta _i=1.5`$. The dotted lines in Fig. 1 show the central density $`\rho _{oi}`$ required to explain the properties of the inner source structures for the case of $`\beta _i=1.5`$ assuming that $`a_o=10`$ kpc. The density required to explain the observations is well below the value assumed for this approximation. If the entire material of the IGM contained in a sphere of radius 400 kpc, approximately the linear size of one half of the cocoon of the inner source structure of B 0925+420, centered on the core of the radio source is distributed uniformly over the same volume by the replacement process, the density in this region is found to be $`10^{-4}`$ cm<sup>-3</sup>. Again this is much higher than the density required for the inner source structures in the case of a uniform density profile within the region of the old cocoon (dashed lines in Fig. 1). Both approximations indicate that the replacement of the old cocoon material with the denser IGM can not explain the properties of the inner source structures.
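The second of these two approximations can be checked with a short calculation; the sketch below averages the power-law approximation to the poor-group profile of Sect. 2 over a 400 kpc sphere (the analytic form of the average follows directly from that profile, and the power law is only valid outside a few core radii, but the integral is dominated by large radii anyway).

```python
# Power-law approximation to the poor-group density profile assumed in Sect. 2.
n_o = 1e-2          # central density [cm^-3]
a_o = 10.0          # core radius [kpc]
beta = 1.5          # exponent of the power law

R_sphere = 400.0    # radius of the sphere considered [kpc]

# Mean density inside a sphere of radius R for n(r) = n_o (r / a_o)^(-beta):
# <n> = 3 n_o (a_o / R)^beta / (3 - beta).
n_mean = 3.0 * n_o * (a_o / R_sphere)**beta / (3.0 - beta)
print(f"mean IGM density within {R_sphere:.0f} kpc ~ {n_mean:.1e} cm^-3")  # ~1e-4 cm^-3
```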
If the replacement of the cocoon material takes place in combination with mixing of this gas with the IGM this may result in the low densities required for environments of the inner source structures. However, as we have shown in the previous section, the time scales for the fluid instabilities that could be responsible for mixing of the two fluids are prohibitively long.
Kaiser & Alexander (1998) discuss the ‘pinching’ of the cocoon at its centre by a steep pressure gradient within the cocoon. This gradient is caused by steep gradients in the density distribution of the surrounding IGM. This effect could also be responsible for the replacement of the old cocoon material but analogous to the replacement due to buoyancy it is not clear how the replacing IGM is diluted sufficiently to give rise to the observed properties of the inner sources.
Taking all the arguments above together, it seems unlikely that the environments of the inner sources of the DDRGs are created by the replacement of the old cocoon material by the denser IGM.
### 4.3 Dispersion of warm clouds
As we have seen in Sect. 2, optical observations of FRII radio galaxies suggest the existence of warm, dense clouds embedded in the hot gas of the IGM in the vicinity of the host galaxy. They can potentially provide the additional material needed to explain the observed properties of the inner source structures in the aligned DDRGs. This is only possible if they can pass through the contact discontinuity into the old cocoon and if most of their material is subsequently dispersed over the volume of the old cocoon.
When the bow shock of an FRII radio source encounters one of these warm clouds embedded in the hot IGM, it drives a shock into it. The cloud is small and will therefore quickly re-establish pressure equilibrium with its surroundings. This implies that the Mach number of the shock within the cloud can be set equal to the Mach number of the bow shock within the hot IGM, $`M_b`$ (McKee & Cowie 1975). Note that because of its shape $`M_b`$ is not uniform along the length of the bow shock. In the following $`M_b`$ therefore refers to an ‘average’ Mach number of a given bow shock. Because of its large density the cloud is not efficiently accelerated by the passage of the shock and is therefore quickly overtaken by the contact discontinuity and passes through this surface into the cocoon.
It has been suggested that after the cloud has passed through the bow shock it will collapse and possibly form stars if radiative cooling is efficient (Rees 1989, Begelman & Cioffi 1989, Foster & Boss 1996). Because of the low temperature of the cloud material the bow shock of FRII sources will be radiative and the compression of the cloud is an almost isothermal process. For strong shock conditions this implies $`n_{cl}^{}=M_b^2n_{cl}`$, where $`n_{cl}`$ is the pre-shock density of the cloud (Rees 1989). We will use dashed variables for quantities describing the cloud after the shock has passed through it. The temperature of the warm cloud will initially increase by a factor of $`\approx 30`$ for $`M_b=10`$. The cloud material then cools radiatively back to a temperature of $`10^4`$ K within $`10^3`$–$`10^4`$ years (Sutherland & Dopita 1993). During this phase the optical line emission of the clouds will be enhanced because of the passage of the bow shock. Once the temperature within the cloud reaches $`10^4`$ K, cooling via radiation becomes inefficient and the subsequent evolution of the cloud will be adiabatic. In this regime the numerical simulations predict the shredding of the cloud due to growing Kelvin-Helmholtz and Rayleigh-Taylor instabilities on its surface within a few ‘cloud crushing times’ (Klein, McKee & Colella 1994)
$$t_{cc}=\frac{\sqrt{\chi ^{}}r_{cl}^{}}{v_b},$$
(5)
where $`\chi ^{}`$ is the ratio of the density of the warm cloud and that of its surroundings, $`r_{cl}^{}`$ is the radius of the cloud and $`v_b`$ is the velocity of the bow shock with respect to the unshocked IGM. Here we have made use of the fact that the warm cloud is not very effectively accelerated by the passage of the bow shock and that $`v_b`$ is therefore similar to $`v_{rel}`$, the relative velocity of the shocked hot IGM with respect to the cloud.
Here and in the following we assume that the density of the cloud is uniform over its volume. This is a simplification and the cloud will be denser close to its centre than further out. In this case the core of the cloud may collapse under the influence of the radiative shock within the cloud and form stars. Such shock-induced star formation may play an important role in the explanation of the alignment effect in FRII sources at $`z>0.6`$ (see Sect. 6). However, most of the gas in the cloud will become adiabatic before it can collapse to form stars and this is the material which will subsequently be spread out over the volume of the old cocoon. Supernovae and stellar winds of the stars formed in the core of the cloud will contribute to the dispersion of the outer cloud regions as well. The exact details of the evolution of the warm cloud can probably only be determined with the help of numerical simulations taking into account radiative cooling and feedback from star formation.
Immediately after passing through the bow shock the warm cloud is surrounded by the shocked gas of the hot IGM. For the hot IGM the bow shock is adiabatic and for strong shock conditions $`\chi ^{}`$ is therefore equal to $`\left(M_b^2/4\right)\left(T_x/T_{cl}\right)`$, where $`M_b`$ is the local Mach number of the bow shock. This implies the shredding of the cloud within a few $`t_{cc}\approx M_b\sqrt{T_x/T_{cl}}r_{cl}^{}/(2c_x)`$. The acceleration of the cloud by the shocked IGM outwards, away from the contact discontinuity, proceeds on at least similar if not longer time scales (Klein et al. 1994). Consider now a ‘typical’ FRII source with a total linear size of $`D=400`$ kpc embedded in a density distribution appropriate for a poor group. According to Eq. (1) this source has an age of roughly $`2\times 10^7`$ years if the aspect ratio of its cocoon, $`R`$, is equal to 3. From this we find the advance speed of the hot spots for this source $`v_b\approx 0.03c`$. If we assume that the shape of the cocoon is cylindrical the expansion speed perpendicular to the jet axis is $`5\times 10^{-3}c`$ thereby implying a Mach number of the bow shock in this direction of 3.8. Assuming $`r_{cl}^{}=10`$ pc (e.g. Osterbrock 1989) we find that for this cloud in these conditions $`t_{cc}\approx 1.6\times 10^6`$ years. From Kaiser & Alexander (1999) we note that the stand-off distance between the bow shock and the contact discontinuity is roughly $`5\times 10^{-3}D`$. If the cloud is not accelerated at all by the passage of the bow shock, it will take the contact discontinuity roughly $`10^6`$ years to overtake the cloud. The cloud is therefore just able to reach the safety of the cocoon before it is dispersed within the layer of shocked IGM. Since the shape of the cocoons of FRII sources is not cylindrical, the conditions for clouds closer to the hot spot should be more favorable than estimated here because the Mach number of the bow shock will always be higher than in the above scenario and also the different direction of expansion of the cocoon causes the contact discontinuity to overtake the cloud earlier. We therefore conclude that ‘average’ clouds are not disrupted or accelerated efficiently by the bow shock and the shocked IGM.
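The chain of estimates in this paragraph can be reproduced with the following sketch; the use of the self-similar growth $`Dt^{3/(5-\beta )}`$ implied by Eq. (1) to convert the mean into an instantaneous expansion speed, and the simple cylindrical lobe geometry with length-to-width ratio $`R`$, are simplifying assumptions of this example.

```python
import math

kpc = 3.086e19        # [m]
pc = 3.086e16         # [m]
yr = 3.156e7          # [s]
c = 3.0e8             # speed of light [m/s]

# 'Typical' FRII source considered in the text.
D = 400.0 * kpc       # total linear size [m]
t_o = 2e7 * yr        # age from Eq. (1) for this size [s]
R = 3.0               # aspect ratio of the cocoon (lobe length / lobe width assumed here)
beta = 1.5            # exponent of the external density profile

c_x = 370e3           # sound speed of the hot, unshocked IGM [m/s]
T_x, T_cl = 1e7, 1e4  # temperatures of the hot and warm phases [K]
r_cl = 10.0 * pc      # radius of a warm cloud [m]

# Self-similar growth, D proportional to t^(3/(5-beta)), gives the instantaneous hot spot speed.
v_b = (3.0 / (5.0 - beta)) * (D / 2.0) / t_o
# For the assumed cylindrical cocoon the sideways expansion is slower by a factor 2R.
v_perp = v_b / (2.0 * R)
M_b = v_perp / c_x    # Mach number of the bow shock perpendicular to the jet axis

# Cloud crushing time, Eq. (5), with chi' = (M_b^2 / 4)(T_x / T_cl) and v ~ M_b c_x.
t_cc = M_b * math.sqrt(T_x / T_cl) * r_cl / (2.0 * c_x)

# Time for the contact discontinuity to overtake an unaccelerated cloud,
# assuming a stand-off distance of ~5e-3 D (Kaiser & Alexander 1999).
t_overtake = 5e-3 * D / v_perp

print(f"v_b ~ {v_b / c:.2f} c, v_perp ~ {v_perp / c:.1e} c, M_b ~ {M_b:.1f}")
print(f"t_cc ~ {t_cc / yr:.1e} yr, overtake time ~ {t_overtake / yr:.1e} yr")
```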
The pressure within the cocoon is the same as that in the shocked layer of gas between bow shock and contact discontinuity but the density is much lower. For an estimate we use Eq. (3) with the same parameters as above. Again we assume that the warm clouds supplying the material for the environment of the inner source structures are not displaced by the passage of the bow shock. Therefore to determine the disruption time scale of these clouds we have to consider the properties of the bow shock around the outer source structure at the time when it had a linear size of typically 400 kpc. We find $`n_c\approx 1\times 10^{-7}`$ cm<sup>-3</sup>, which implies $`t_{cc}\approx 5\times 10^7`$ years. A warm cloud is therefore stable for a long time within the cocoon during which its outer layers may be ionized by the UV emission of the central AGN and/or the emission of the shocked hot phase of the IGM (see Sect. 6). This material may then produce the observed line emission.
Roughly a time $`t_{cc}`$ after an ensemble of clouds have passed through the bow shock they are finally broken up into smaller fragments and their material is spread out over large fractions of the cocoon volume. It is not straightforward to estimate the size of the volume over which the cloud material is dispersed because this is determined by the mixing of the cloud material with the gas already present in the cocoon. While the break-up of the warm clouds is mainly driven by the fluid instabilities on large scales, mixing proceeds on much smaller scales which are difficult to resolve in numerical simulations (Klein et al. 1994). As was mentioned above, instabilities grow faster on smaller scales and so significant fractions of the gas mass of the clouds may be already mixed with the other cocoon material by the time the clouds finally break up on larger scales after a time $`t_{cc}`$. A rough estimate for the volume over which the cloud material is spread may be obtained by calculating the volume ‘swept-out’ by a given cloud during one cloud crushing time. Assuming that the cloud expands adiabatically in pressure equilibrium with its surroundings in the cocoon we find from Eq. (2) that the cloud cross sectional area, $`\sigma _{cl}`$, increases in proportion to $`t^{2(4+\beta )/[3\mathrm{\Gamma }_{cl}(5-\beta )]}`$. If we again assume that the velocity of the cloud relative to its surroundings is roughly equal to the advance speed of the hot spot we find for the volume swept out by the cloud $`\int \sigma _{cl}v_bdt`$. If the cloud material is distributed over this swept-out volume this corresponds to an increase in the cloud volume of a factor of $`\approx 3\times 10^5`$ for the cloud and source parameters introduced above. Of course this estimate is very simplistic and neglects many of the complicated processes involved. However, we note that since the cloud disruption is a continuous process the density of the gas in the cocoon responsible for the disruption of newly incoming clouds is already enhanced by the shredding of clouds which were earlier overtaken by the expansion of the cocoon. This means that the time scales for the cloud dispersal and mixing of cloud material with the cocoon gas may be shorter than estimated above. From this we see that the volume filling factor of the former cloud material may increase substantially when compared to that of the intact clouds. In the following we will make the approximation that the material of all clouds is spread out uniformly over the entire volume of the cocoon, i.e. the filling factor of the former cloud material is unity after the disruption of the clouds. The resulting density within the cocoon is approximately $`f_{cl}^{}n_{cl}^{}`$. This then provides the environment for the inner source structures of the aligned DDRGs.
In Sect. 3.4 we have calculated the uniform density within the region of the old cocoon required to explain the properties of the inner source structure for the assumption that $`t_{off}10^6`$ years and that the jet power of the old and new jets is the same. If this density profile is provided by the shredding of line-emitting clouds we can estimate their initial volume filling factor, $`f_{cl}^{}`$, within the cocoon before they are dispersed. The results which are given in Tab. 2 agree very well with the filling factors derived from observations of the line emission (Heckman et al. 1991). Note that because we assume a filling factor of unity of the former cloud material the calculated values of $`f_{cl}^{}`$ are only lower limits since some of the clouds will not be dispersed completely. Of course this is also true if the cloud cores collapse and form stars.
The distribution of warm, line-emitting clouds within the hot IGM must be quite smooth to explain the symmetry of the inner source structures. In some FRII sources observations indicate a rather clumpy distribution of these clouds (e.g. Johnson, Leahy & Garrington 1995). A concentration of a large number of warm, dense clouds in the IGM in the path of one of the jets may even become dynamically important for the evolution of this jet (McCarthy et al. 1991). The volume filling factor of the clouds must be locally increased in such regions so that $`f_{cl}^{}\gtrsim \sqrt{T_{cl}/T_x}\approx 0.03`$. However, the distribution of clouds in these sources is observed at a time when they are still intact within the cocoon. The time available for the clouds to disrupt and for the cloud material to smooth out over the volume of the old cocoon in DDRGs is probably sufficient to provide a more homogeneous environment for the inner source structures than suggested by the observations of the intact clouds.
## 5 Implications of cloud-shredding for aligned DDRGs and other large FRII sources
The dispersion of warm, dense clouds by the bow shock of the outer cocoon of aligned DDRGs can explain the formation of the observed peculiar structures of these objects. There are some further implications of this model which we will discuss in this section.
### 5.1 Internal depolarisation by the cloud material
Any thermal material, like the warm, dense clouds inside the outer cocoon prior to their dispersion, threaded by magnetic field will contribute to the depolarisation of the radio synchrotron emission of the cocoon material by Faraday rotation. In some FRII sources a clear spatial association between the line emitting clouds and the depolarisation is observed (e.g. Pedelty et al. 1989). As long as the clouds are still intact their contribution to the depolarisation can not be distinguished from any depolarisation occurring outside the radio cocoon. However, in DDRGs and other large radio galaxies the cloud material should be spread out over most of the volume of the cocoon and in this case the depolarisation should show the characteristics of internal depolarisation. Garrington & Conway (1991) find no evidence for the presence of thermal material in the cocoons of a sample of 47 FRII radio galaxies with a maximum linear size of 625 kpc. They give an upper limit of $`5\times 10^{-3}`$ cm<sup>-3</sup> $`\mu `$G for the product of the strength of the magnetic field along the line of sight in the cocoon, $`B_c`$, and the number density of electrons, $`n_c`$, in this region. From the model of KA we find the pressures in the old cocoons of the DDRGs to be of order $`10^{-13}`$ erg cm<sup>-3</sup>. Assuming equipartition between the energy density of the magnetic field and that of the relativistic particles in the cocoon we find $`B\approx 3`$ $`\mu `$G. The upper limit of Garrington & Conway (1991) then corresponds to densities of roughly $`2\times 10^{-3}`$ cm<sup>-3</sup> for the cocoon material. The densities within the old cocoons of the DDRGs necessary to explain the properties of the inner source structures are about three orders of magnitude lower than this (see Tab. 2). The internal depolarisation of the radio emission of large radio galaxies caused by the material of the dispersed warm clouds should therefore be unobservable. However, the model presented here predicts that as more and more of the warm clouds are shredded within the cocoons, the radio depolarisation observed towards the cocoon of FRII radio sources should decrease with increasing linear size. This effect may be masked by the additional decrease in depolarisation caused by the fact that for large FRII sources a larger fraction of the radio lobes is surrounded by material of low density at large distances from the core which causes less depolarisation than the material closer in (e.g. Strom & Jägers 1988). However, the shredding of warm clouds should lead to a decrease in the number of ‘clumpy’ regions of depolarisation, possibly associated with optical line emission, in large FRIIs and this difference may be detectable in high resolution depolarisation maps of sources of different size.
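A compact numerical version of this argument (a sketch; the equipartition field strength and the observational limit are the values quoted above):

```python
# Upper limit on the product n_e * B along the line of sight in the cocoon
# from Garrington & Conway (1991).
neB_limit = 5e-3    # [cm^-3 microGauss]

# Equipartition field strength estimated above for the old cocoons.
B_c = 3.0           # [microGauss]

n_e_limit = neB_limit / B_c
print(f"internal density limit ~ {n_e_limit:.1e} cm^-3")  # ~2e-3 cm^-3

# The densities required in the old cocoons (Tab. 2) are roughly three orders of
# magnitude lower, so internal depolarisation by the dispersed cloud material
# should indeed be unobservable.
```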
### 5.2 The large size of DDRGs
For the cloud in the example presented in the previous section the final dispersion takes place when the outer cocoon has a linear size of 670 kpc. If the shredding of line emitting clouds is the mechanism which contaminates the cocoon of FRII radio sources then, because of the long time it takes to disrupt these clouds, we expect to see large aligned DDRGs only. Indeed, all aligned DDRGs mentioned in paper I have large linear sizes ($`700`$ kpc).
If the jet flow in younger sources is disrupted without significantly changing the direction of the jets, the new jets will pass through the old cocoons at the bulk velocity of the jet material without developing a strong jet shock because the warm clouds are still intact and can not influence the dynamics of the new jets. Those sources will most likely follow a evolution much like that predicted by Clarke & Burns (1991), in which a restarted jet travels unhampered through the old cocoon without a clear trace of its doing so. Because of the relativistic velocities of the jet material, the time it takes the new jets to reach the edge of the old cocoon is about the light travel time along the cocoon. Since the old hotspots will fade extremely fast after the energy input has stopped, such sources would only be noticed during the short time their radio lobes do not show any hot spots.
It is unclear how such a source will evolve further once the jet has reached the old cocoon boundary again. During the time that the jet was off, the old cocoon will have continued to expand and it will probably have a lower length-to-width ratio, since mainly the forward, jet-driven motion will have slowed down and not the sideways expansion. The new hotspot, since it is driven by the restarted jet, will advance much faster than the head of the old cocoon and it will probably form a protrusion at the head of the old lobe (see also the simulations of Clarke & Burns 1991). There are a number of radio sources which clearly show protrusions in one or both radio lobes (e.g. 3C 79, Spangler, Myers & Pogge 1984; 3C 132, Neff, Roberts & Hutchings 1995). These may therefore well be radio galaxies with restarted jets.
### 5.3 The velocity of the inner lobes
Since the density in the old cocoon is lower than that of the unperturbed IGM, it is expected that the inner structures advance much faster than the outer structures would have at a similar distance from the core.
Finding direct evidence for such high advance velocities, for instance through spectral ageing studies, is difficult and often ambiguous (e.g Alexander & Leahy 1987). If the inner structures indeed are the result of a restarted jet, then the fact that we still detect the outer radio lobes in the DDRGs already indicates that the inner lobes must be advancing relatively fast. Radio lobes which are not being refuelled anymore are expected to fade away quickly, in a few times $`10^7`$ yr, at most. Since the radio powers of the outer structures are quite normal for sources of their size (see paper I), the length of time elapsed since the disconnection of the outer lobes from the jet flow must be relatively short compared to their age. Hence, the inner structures must have grown relatively fast to their currently observed size, certainly faster than the outer lobes would have advanced at the time they had a similar size.
In the case of B 1834+620 we have been able to constrain the advance velocity and age of the inner structure (see paper III). We find that the velocity must lie within the range $`0.2-0.3c`$, depending only on the orientation of the source. This is much higher than what is usually found in powerful radio galaxies ($`0.01-0.1c`$; e.g. Alexander & Leahy 1987, Scheuer 1995). The age is constrained to the range between 2.6 and 5.8 Myr, which is in good agreement with the prediction of the model presented in this paper (see Tab. 2). Also, we have estimated an ambient density of the inner lobes of $`8\times 10^{-7}`$ cm<sup>-3</sup>, three orders of magnitude below what is generally found around the lobes of radio galaxies (e.g. Alexander & Leahy 1987), but in reasonable agreement with the prediction from the model in this paper (see Tab. 2).
## 6 Implications for the alignment effect
The alignment of the UV and optical line emission with the radio axis in FRII sources at high redshift, $`z0.6`$, is usually explained by ionisation of the warm clouds by either the radiation from the AGN (McCarthy 1993 and references therein) or by the emission of gas shocked and heated by shocks (e.g. Dopita & Sutherland 1995). Aligned optical continuum emission is partly caused by the nebular continuum emitted by the ionized clouds (Dickson et al. 1995) and may include contributions by scattering of the AGN emission by the cloud material (Tadhunter et al. 1987) and shock induced star formation (Rees 1989, Begelman & Cioffi 1989). It is interesting to note that all of the explanations mentioned above postulate the existence of a two phase IGM; warm ($`T_{cl}10^4`$ K), dense clouds embedded in a hot ($`T_x10^7`$ K), less dense background.
The model for the shredding of the warm, dense clouds presented here is consistent with all of the explanations for the alignment effect. The cores of the clouds may very well collapse after the compression of the cloud by the bow shock while the outer cloud regions are stable for a significant fraction of the life time of the radio source. The material in this region can be ionized but will also scatter the light of the AGN. The hot phase of the IGM may provide some of the ionizing radiation after passing through the bow shock. The following considerations show that the model is consistent with the observed properties of the alignment effect.
We have shown in Sect. 4.3 that the properties of the warm, line-emitting clouds can change significantly when they pass through the bow shock of an FRII source. Most optical emission from FRII sources is observed in regions overlapping with the radio cocoon which is the basis of the alignment effect. All physical quantities derived from such observations for the warm clouds therefore apply to their post-shock state within the cocoon of the FRII source. This has important implications for their confinement in the hot IGM before they are shocked.
To be stable the pre-shock cloud must be in pressure equilibrium with the hot IGM. For the typical temperatures involved this implies $`n_{cl}/n_x10^3`$, where $`n_x`$ is the density of the hot IGM. The densities of these clouds derived from the observed line emission coming from regions overlapping with the radio cocoon would then imply much higher densities of the hot IGM than are derived from X-ray observations of this material (e.g. Fabian et al. 1987). However, because the clouds, for which these observations have been made, may have been compressed by the bow shock their pre-shock densities are much lower than those indicated by the observations. For $`M_b10`$ we find that the pre-shock clouds are stable within the density profile of the hot IGM in poor groups of galaxies assumed above out to roughly 50 kpc from the centre of the group. This explains the stability of the warm clouds in their hot environment without the need to invoke additional contributions to the thermal pressure of the hot IGM by cooling flows or other processes.
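The factor of $`10^3`$ quoted above follows directly from pressure balance between the two phases, using the temperatures introduced in the previous section:

$$n_{cl}k_BT_{cl}=n_xk_BT_x\quad \Rightarrow \quad \frac{n_{cl}}{n_x}=\frac{T_x}{T_{cl}}\approx \frac{10^7\,\mathrm{K}}{10^4\,\mathrm{K}}=10^3.$$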
In some radio galaxies Ly$`\alpha `$ emission is detected well beyond the radio cocoon but still aligned with the radio axis forming the so called Ly$`\alpha `$ halo (Chambers, Miley & van Breugel 1990, McCarthy et al. 1990, Eales et al. 1993, van Ojik et al. 1996, Pentericci et al. 1997). The source of ionizing photons for the Ly$`\alpha `$ emission cannot be due to recently formed stars and must be caused by the illumination of warm material by the obscured AGN or by the shocked hot phase of the IGM (e.g. Meisenheimer & Hippelein 1992). However, the warm clouds causing the observed emission of the Ly$`\alpha `$ halos must still be in their pre-shock state. This can explain observed differences in velocity spread and ionisation parameter between the material in the halo and that of the region of the cocoon.
Velocities of the warm clouds in regions overlapping with the cocoon and in the more extended halo can be inferred from their Doppler-broadened emission lines. In the Ly$`\alpha `$ halos typical velocities are of order 500 km s<sup>-1</sup> which is comparable to the sound speeds within the hot IGM and may indicate some large scale movement like rotation (van Ojik et al. 1996). For the line-emitting regions within the radio cocoons velocities of order 1000 km s<sup>-1</sup> are measured (McCarthy 1993 and references therein). These somewhat higher velocities of the clouds within the cocoon may be caused by the acceleration of the clouds by the passage of the bow shock and the continuing momentum transfer from the material in the cocoon to the clouds. The velocities within the cocoon should be similar to the advance speed of the hot spots (Norman et al. 1982). These are typically a few times 1000 km s<sup>-1</sup> which agrees well with the observations. This may indicate that the clouds observed in regions overlapping with the cocoon are indeed located within the cocoon.
Furthermore the ratio of ionizing photons arriving at a warm cloud to the number density of electrons available to absorb these photons within the cloud, the ionisation parameter, $`U`$, will also change when the cloud passes through the bow shock (e.g. Lacy et al. 1998). The warm clouds in the cocoon will have been compressed by the bow shock but the ionizing flux from the core is roughly the same for clouds within the cocoon and those in front of it. This will lead to a smaller value of $`U`$ for the clouds within the cocoon compared to those in front of it. This can in principle be tested observationally using line ratios like $`[`$O iii$`]`$5007 / $`[`$O ii$`]`$3727.
All models of the dynamical evolution of the radio cocoons of FRII sources predict a correlation of linear size with age of the source. The exact age of a source of a given linear size depends of course on the power of its jets and also on the density of the surrounding material. However, larger sources will in general be older than smaller sources. Once the line-emitting clouds within the cocoon are completely disrupted, the resulting temperatures (a few $`10^8`$ K) and densities in this region imply a drastic decrease in the line emission of these clouds. In the picture of the slow dispersion of the line-emitting clouds within the radio cocoon sketched above we therefore expect the strongest aligned emission in small FRII sources while in larger sources the effect should be weaker and may vanish in large sources. The same is true for the aligned optical and UV continuum emission because the nebular continuum emission (Dickson et al. 1995) of the warm clouds will decrease as the cloud material starts to be spread out over the volume of the cocoon. If the cores of the warm clouds collapse and form stars, their emission will also fade because of the aging of the stellar population. Best, Longair & Röttgering (1996) find in a sample of 8 FRII radio galaxies, all at a redshift $`z1`$ and of similar radio luminosity, that the alignment effect depends on the linear size and therefore presumably on the age of the radio source. Sources with larger linear sizes show a weaker alignment effect than smaller objects as predicted by our model.
## 7 Conclusions
Based on the observed properties of aligned DDRGs we have developed a model for their evolution. The symmetry of the inner source structure strongly suggests that the interruption of the jet flow takes place in the central AGN. One possible physical process is the infall of large masses of gas onto the accretion disk, possibly caused by a (repeated) interaction with a companion galaxy (see paper I). This scenario may explain the existence of so-called X-shaped or winged FRII radio galaxies, if similar processes have taken place in these sources and the angular momentum vector of the infalling gas is very different to that of the pre-existing accretion disk. These objects may then constitute the ‘misaligned’ DDRGs.
We extended the model of the radio luminosity evolution of FRII sources of KDA to allow for sources in which the jets have stopped supplying the cocoon with energy. We used this model to investigate the evolution of the outer and the inner source structure. This analysis suggests that the density in the outer cocoon created by the old jets is insufficient to explain the observed properties of the inner source structure. We argue that the most likely contamination of the old cocoon with additional gas is the slow dispersion by the effects of the bow shock surrounding the old cocoon of warm, dense clouds embedded in the otherwise hot IGM. The observation of optical and UV emission aligned with the radio axis in many FRII sources at high redshift implies that the clouds must survive for a long time in the cocoon. They are, however, eventually destroyed and provide the environment for inner source structures in DDRGs. The long survival times for these clouds derived here are consistent with the observation that the currently known aligned DDRGs are all large ($``$ 700 kpc). Also, it is consistent with the observation that in $`z1`$ 3CR radio galaxies the strength of the alignment effect anti-correlates with the linear size of the radio sources. The lower limits for the volume filling factors for the warm, dense clouds derived from the density requirements of the inner source structures are in good agreement with those obtained from optical observations.
The problem of the confinement of the warm clouds within the hot IGM previously pointed out by various authors (e.g. Fabian et al. 1987) can be resolved by taking into account the effects of the bow shocks on these clouds. The properties of the clouds are derived from optical observations of clouds which may already be located inside the radio cocoon. This implies that they have been compressed by the bow shock leading to an increased density and pressure for the cloud material compared to pre-shock clouds.
Polarization measurements of the radio emission from the cocoons of FRII sources indicate a spatial association of peaks in the depolarisation and concentrations of line-emitting material. Once the cloud material has been spread out over the volume of the radio cocoon, its depolarisation signature will be too weak to be detected.
The available data on the currently known DDRGs have not allowed us to put strong constraints on the model. Only in the case of the source B 1834+620 have we been able to limit the age and to estimate the ambient density of the inner source. These values are in rough agreement with the predictions from the model.
Many of the physical quantities we have derived for the DDRGs discussed here are model dependent. The shape of the density distribution within the region of the old cocoon in particular is unknown. Moreover, we have assumed a density profile for the environment of the outer source structures which may be typical for low redshift FRII sources but in the absence of X-ray data we do not have the means to test the validity of this assumption. The numerical values of the derived source parameters should therefore be treated with caution. However, we arrived at the main conclusions of this paper, namely that the cocoons of aligned DDRGs and possibly other FRII sources contain material additional to that supplied by their jets, using only limiting cases of the models. This material is most likely the remains of shredded line-emitting clouds. In view of the observational evidence we believe that the contamination of the cocoons of FRII sources by warm, dense clouds located in the IGM is an important process in the evolution of these objects.
## 8 Acknowledgments
The authors would like to thank P. N. Best, A. G. de Bruyn, M. Lacy, H. van der Laan and M. D. Lehnert for many stimulating discussions and suggestions. We also thank the referee, J. P. Leahy, for his comments on the manuscript. This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX– CT96–086 of its TMR programme. |
## 1 Introduction
Zero-point fluctuations of quantum fields give rise to forces, which are regarded as manifestations of the Casimir effect (for reviews, see e.g. refs.). From the theoretical viewpoint, one of the most daunting aspects of the evaluation of Casimir energies, even for highly symmetrical boundaries, is its sheer difficulty. Many mathematical methods have been developed, but even the simplest ones demand considerable effort.
At the core of several of these techniques one finds uniform asymptotic expansions —also called Debye expansions— of Bessel or Ricatti-Bessel functions appearing in integrals over momentum-like variables. This fruitful method was used as early as —at least— the time of ref., and has been repeatedly revisited in a huge number of articles, often in the framework of other regularization schemes (see e.g. ref. and refs. therein). However reliable, the whole Debye expansion technique is a time-consuming process and the search for computational alternatives might be of interest . This is, precisely, one of the motivations of the present letter. Our purpose is to take further the exploitation of summation theorems for Bessel functions, started in ref. for cases with spherical surfaces, and apply it to a problem with a cylindrical boundary.
We are considering a material cylinder of radius $`a`$, infinitely long, placed along the $`z`$-axis, with permittivity and permeability $`\epsilon _1,\mu _1`$, surrounded by a medium with permittivity and permeability $`\epsilon _2,\mu _2`$. For such surfaces, a special situation is the case where the light velocities in both media —interior (1) and exterior (2)— are the same, i.e.,
$$\epsilon _1\mu _1=\epsilon _2\mu _2=c^{-2},$$
(1)
where $`c`$ is the common light-velocity. Since any variation in $`\epsilon `$ affects $`\mu `$, this is called dielectric-diamagnetic case, as opposed to the purely dielectric one, in which $`\mu _1=\mu _2=1`$ but the velocity has to change. Dielectric-diamagnetic conditions are often desirable as they cause the frequency equations to simplify and some divergences to cancel out. In a QCD context, $`\epsilon `$ and $`\mu `$ refer to colour permitivitty and permeability (see and refs. therein). Illustrations of the dependence of the interquark potential on the boundary conditions for a string model have been provided in ref..
In refs. , and the regularized Casimir energy per lateral unit-length for an infinite dielectric-diamagnetic cylinder has been studied. Up to the order of
$$\xi ^2=\left(\frac{\epsilon _1-\epsilon _2}{\epsilon _1+\epsilon _2}\right)^2,$$
(2)
the energy has been shown to vanish within all the tested degrees of numerical accuracy. The next contribution, which is of the order of $`\xi ^4`$, has been found —to our knowledge, for the first time— in ref..
In our setup, medium 2 will be pure vacuum and medium 1 a very tenuous dielectric, which means $`\epsilon _2=\mu _2=1`$ and $`\epsilon _1-1\ll 1`$. As a result, the $`\xi ^2`$ parameter, defined by eq.(2), is a small number. According to ref., the eigenfrequencies $`\omega `$ coming from the Maxwell equations for this problem are given by the zeros of some equations of the type $`f_n(k_z,\omega ,a)=0`$, $`n\in \text{Z}`$ —eqs.(2.3)-(2.5) in ref.—. Further, in cases where the relation (1) holds, $`f_n`$ takes the form
$$f_n(k_z,\omega ,a)=a^2c^2\lambda ^6\frac{(\epsilon _1+\epsilon _2)^2}{4\epsilon _1\epsilon _2}\left[\xi ^2𝒫_n^2(\lambda a)+\frac{4}{\pi ^2(\lambda a)^2}\right],\qquad 𝒫_n(x)\equiv \left(J_nH_n\right)^{\prime }(x),$$
(3)
where $`\xi ^2`$ is given by (2), $`J_n`$, $`H_n`$ are Bessel and Hankel functions, and
$$\omega =c\sqrt{\lambda ^2+k_z^2}.$$
(4)
Every $`\lambda `$ belongs to the eigenfrequency set of the projected two-dimensional problem —say $`\mathrm{\Lambda }`$—, while $`-\infty <k_z<\infty `$, i.e., the values of $`k_z`$ are continuous without any restriction. Before regularizing, the Casimir energy per unit-length ($`\mathcal{E}_C`$) is given by the mode sum
$$\mathcal{E}_C=\frac{1}{2}\hbar \sum _{n,m}\int _{-\infty }^{\infty }\frac{dk_z}{2\pi }\,\omega _{n,m,k_z},\qquad \omega _{n,m,k_z}=c\sqrt{\lambda _{n,m}^2+k_z^2}.$$
(5)
The $`n`$-index is the angular momentum number, while $`m`$ describes the remaining degree of freedom, i.e., labels the different $`\lambda `$-values at a given $`n`$.
The present work is organized as follows. In sec. 2 we follow ref. and evaluate the energy density $`g^{(2)}`$ (in momentum space) up to the order of $`\xi ^2`$, by a modified Bessel function summation theorem, and resorting to the properties of Meijer $`G`$ functions. Then, we show that the integration of $`g^{(2)}`$ yields a vanishing result. Sec. 3 is devoted to an alternative approach based on a zeta function prescription for the initial mode sum like in refs.. Apart from proving to be easier, this technique paves the way to the numerical calculation of higher order contributions. Our conclusions are given in sec. 4.
## 2 Density method
We begin by reviewing the procedure used in ref. and obtaining an expression for the Casimir energy. The mode sum is first represented, as usual in these cases, by a contour integral
$$E_C=\frac{\hbar c}{2}\int _{-\infty }^{\infty }\frac{dk_z}{2\pi }\sum _{n=-\infty }^{\infty }\frac{1}{2\pi i}\frac{1}{2}\oint _C\sqrt{\lambda ^2+k_z^2}\,d_\lambda \mathrm{ln}\left[\frac{f_n(k_z,\omega ,a)}{f_{n,as}(k_z,\omega )}\right],$$
(6)
where the integration contour $`C`$ consists of a straight line parallel to, and just to the right of, the imaginary axis, $`(-i\infty ,+i\infty )`$ closed by a semicircle of an infinitely large radius in the right half-plane. The branch line of the function $`\phi (\lambda )=\sqrt{\lambda ^2+k_z^2}`$ is chosen to run between $`-i|k_z|`$ and $`i|k_z|`$ on the imaginary axis. In terms of $`y=\text{Im}\lambda `$ we have
$$\phi (iy)=\{\begin{array}{ccc}i\sqrt{y^2-k_z^2},\hfill & & y>k_z,\hfill \\ \pm \sqrt{k_z^2-y^2},\hfill & & |y|<k_z,\hfill \\ -i\sqrt{y^2-k_z^2},\hfill & & y<-k_z.\hfill \end{array}$$
(7)
Noting that the argument of the logarithm is an even function of $`iy`$, (6) reduces to
$$E_C=\frac{\hbar c}{2\pi ^2}\sum _{n=-\infty }^{\infty }\int _0^{\infty }dk_z\int _{k_z}^{\infty }\sqrt{y^2-k_z^2}\,d_y\mathrm{ln}\left[1-\xi ^2\bigl(y\,\partial _y(I_n(ay)K_n(ay))\bigr)^2\right],$$
(8)
where we have expressed (3) explicitly on the imaginary axis. Integrating with respect to $`k_z`$ we obtain the Casimir energy per unit length ($`\mathcal{E}_C`$) as an integral over $`y`$, namely
$$\mathcal{E}_C=\frac{\hbar c}{4\pi }\sum _{n=-\infty }^{\infty }\int _0^{\infty }dy\,y^2\,\partial _y\mathrm{ln}\left[1-\xi ^2\bigl(y\,\partial _y(I_n(ay)K_n(ay))\bigr)^2\right].$$
(9)
In the following subsections, we use the approach of ref. to evaluate this expression.
### 2.1 Density calculation to the order of $`\xi ^2`$
Having integrated $`k_z`$ out, the integrand in expression (9) may be interpreted as the density of Casimir energy with respect to the parameter $`y=-i\lambda `$. This density may be evaluated by expanding in terms of $`\xi ^2`$, the first contribution being
$$g^{(2)}(y)\equiv \frac{\hbar c\xi ^2}{4\pi }\sum _{n=-\infty }^{\infty }y^2\,\partial _y\left[y\,\partial _y(I_n(ay)K_n(ay))\right]^2.$$
(10)
We shall now show that $`g^{(2)}`$ can be calculated explicitly by a variant of the method shown in ref. . Specifying the identity 8.530.2 of ref. to Hankel solutions $`Z_n=H_n^{(1)}H_n`$, and choosing the $`\nu `$ parameter equal to zero, we obtain the summation theorem
$$H_0(mR(\rho ,r,\phi ))=\underset{n=\mathrm{}}{\overset{\mathrm{}}{}}J_n(m\rho )H_n(mr)e^{in\phi },R(\rho ,r,\phi )\sqrt{\rho ^2+r^22\rho r\mathrm{cos}\phi }.$$
(11)
Performing the change $`m\to im`$, and selecting the special case $`\rho =r`$, it becomes
$$K_0(mR(r,\phi ))=\sum _{n=-\infty }^{\infty }I_n(mr)K_n(mr)e^{in\phi },\qquad R(r,\phi )\equiv r\sqrt{2(1-\mathrm{cos}\phi )}=2r\left|\mathrm{sin}\left(\phi /2\right)\right|.$$
(12)
Differentiating with respect to $`m`$, using the property $`K_0^{\prime }(z)=-K_1(z)`$ together with the fact that $`K_{-n}=K_n`$, and setting $`m=1`$ afterwards, we have
$$-R(r,\phi )K_1(R(r,\phi ))=\sum _{n=-\infty }^{\infty }r(I_nK_n)^{\prime }(r)e^{in\phi }.$$
(13)
Recalling the orthogonality of the imaginary exponential functions, we arrive at
$$\frac{1}{2\pi }\int _0^{2\pi }d\phi \left[R(r,\phi )K_1(R(r,\phi ))\right]^2=\sum _{n=-\infty }^{\infty }\left[r(I_nK_n)^{\prime }(r)\right]^2\equiv F(r),$$
(14)
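Identities of this kind are easy to spot-check numerically. The sketch below compares the two sides of eq. (14) with mpmath; the test radius and the truncation order of the sum are arbitrary choices made only for illustration.

```python
from mpmath import mp, besseli, besselk, diff, quad, sin, pi

# Numerical spot-check of the summation theorem in eq. (14).
mp.dps = 25
r = mp.mpf("1.3")     # arbitrary test radius
nmax = 40             # arbitrary truncation order

lhs = sum((r * diff(lambda t: besseli(n, t) * besselk(n, t), r)) ** 2
          for n in range(-nmax, nmax + 1))

def integrand(phi):
    R = 2 * r * abs(sin(phi / 2))
    return (R * besselk(1, R)) ** 2

rhs = quad(integrand, [0, pi, 2 * pi]) / (2 * pi)
print(lhs, rhs)   # the two values should agree to many digits
```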
In order to proceed, we rename $`r`$ into $`ay`$, and do the variable change $`u\equiv \left|\mathrm{sin}\left(\phi /2\right)\right|`$. After differentiation with respect to $`y`$, we realize that the sum in (10) is given by
$$g^{(2)}(y)=\frac{\hbar c\xi ^2}{4\pi }\sum _{n=-\infty }^{\infty }y^2\,\partial _y\left[y\,\partial _y(I_n(ay)K_n(ay))\right]^2=\frac{2\hbar c\xi ^2}{\pi ^2}\int _0^1du\frac{u^2}{\sqrt{1-u^2}}\,y^2\,\partial _y(ayK_1(2ayu))^2,$$
(15)
Thus, we have turned the problem of calculating an infinite angular-momentum summation into the evaluation of a definite integral of a transcendental function. Next, by writing the product of Bessel functions appearing in Eq.(15) in terms of the Meijer $`G`$ function , $`g^{(2)}`$ can be evaluated explicitly. (See Appendix for the definition and some simple properties of this function.) First, we write
$$u^2a^2y^2K_1^2(2ayu)=u^2a^2y^2\frac{\sqrt{\pi }}{2}G_{13}^{30}(\frac{1}{2};-1,0,1;4a^2y^2u^2)=\frac{\sqrt{\pi }}{8}G_{13}^{30}(\frac{3}{2};0,1,2;4a^2y^2u^2)$$
(16)
where, in the last step, we have made use of the identity (48). Differentiating (16) with respect to $`y`$ and substituting in (15) we obtain
$$g^{(2)}(y)=\frac{2\mathrm{}c\xi ^2}{\pi ^{3/2}}_0^1𝑑u\frac{y^3a^2u^2}{\sqrt{1u^2}}G_{13}^{30}(\frac{1}{2};0,0,1;4a^2y^2u^2).$$
(17)
Using eq.$`\mathrm{5.5.2}(5)`$ in , and some straightforward manipulation one gets:
$$g^{(2)}(y)=\frac{\mathrm{}c\xi ^2}{8\pi a}G_{24}^{31}(1,2;\frac{3}{2},\frac{3}{2},\frac{5}{2},\frac{1}{2};4a^2y^2).$$
(18)
### 2.2 Calculation of the energy to order $`\xi ^2.`$
We now turn to the question of deriving the Casimir energy. There are two possibilities:
1. To perform the $`u`$ integration in (15) after integrating over the $`y`$ variable. Unfortunately, this turns out to be divergent. Considering the integral
$$\frac{2}{\pi ^2}\int _0^{\infty }y^2dy\int _0^{\frac{\pi }{2}}du\,\partial _y\left[y\mathrm{sin}uK_1(2y\mathrm{sin}u)\right]^2$$
(19)
and interchanging the order of integration, one arrives at
$$\int _0^{\infty }dy\,y^2\,\partial _y\left[y\mathrm{sin}uK_1(2y\mathrm{sin}u)\right]^2=-\frac{1}{12\mathrm{sin}^2u}.$$
(20)
If we now try to do the $`u`$-integration, the integral diverges. This shows that some further regularization is called for. In fact, in sec. 3 we will go through the same sort of calculation, but with the advantage of having applied zeta function regularization from the outset.
2. Direct integration of $`g^{(2)}(y)`$. We use the following identity from ref. (Vol 1, page 215).
$$\begin{array}{c}_0^{\mathrm{}}𝑑yy^aK_\nu \left(2\sqrt{y}\right)G_{pq}^{mn}(a_1,\mathrm{},a_p;b_1,\mathrm{},b_q;xy)\\ =\frac{1}{2}G_{p+2,q}^{m,n+2}(a\frac{\nu }{2},a+\frac{\nu }{2},a_1,\mathrm{},a_p;b_1,\mathrm{},b_q;x)\end{array}$$
(21)
In order to take advantage of this formula, we note that $`K_{\frac{1}{2}}(y)=\sqrt{{\displaystyle \frac{2}{\pi y}}}e^y.`$ Changing to a variable $`t=\frac{\sigma _{r}^{}{}_{}{}^{2}y^2}{4}`$ and inserting $`K_{\frac{1}{2}}`$, we can cast the energy per unit-length into the form
$$_C^{(2)}\xi ^2=_0^{\mathrm{}}g^{(2)}(y)𝑑y=\underset{\sigma _r0}{lim}\frac{\mathrm{}c\xi ^2}{2a^2\pi \sqrt{\pi }\sigma _r}_0^{\mathrm{}}𝑑tt^{\frac{1}{4}}K_{\frac{1}{2}}\left(2\sqrt{t}\right)G_{24}^{31}(1,2;\frac{3}{2},\frac{3}{2},\frac{5}{2},\frac{1}{2};\frac{16t^2}{\sigma _{r}^{}{}_{}{}^{2}}).$$
(22)
Although we have inserted $`K_{\frac{1}{2}}`$, as a mere technicality to help calculate the energy, one can think of using it as an exponential regulator<sup>3</sup><sup>3</sup>3Actually, it is not difficult to show that applying an exponential regulator in the form $`e^{\sigma _r\omega }`$ to (6), before carrying the $`k_z`$ integration, yields the same result as the one we derive. (see ref.). However the convergence of the integral shows that the density we have derived is already regularized in some sense. One can now use (21) to get:
$$_{𝒞}^{}{}_{}{}^{(2)}=\frac{\mathrm{}c}{4a^2\pi \sqrt{\pi }\sigma _r}G_{44}^{33}(0,\frac{1}{2},1,2;\frac{3}{2},\frac{3}{2},\frac{5}{2},\frac{1}{2};\frac{16}{\sigma _{r}^{}{}_{}{}^{2}}).$$
(23)
In order to check the asymptotics as $`\sigma _r0`$, we use the property (49), together with the asymptotics (again from ref.) $`G(x)=𝒪(|x|^\beta )\text{ as }x0,`$ for $`pq`$, and $`\beta =\text{max}\{\text{Re}b_h\}`$ for $`h=1,\mathrm{},m`$. In our case we simply have
$$_{𝒞}^{}{}_{}{}^{(2)}\underset{\sigma _r0}{lim}\frac{1}{\sigma _r}𝒪(|\sigma _r^2|)=\underset{\sigma _r0}{lim}𝒪(\sigma _r)=0.$$
(24)
Thus, the $`\xi ^2`$-term is shown to vanish, confirming the conclusions of refs. , and , without recourse to numerical evaluations.
## 3 Complete zeta function regularization
In this section we will take a different approach, based on the application of the complete zeta function method (see e.g. refs.) to the initial mode sum (5). The use of zeta functions for regularizing such sort of sums dates from the time of refs.. In the version we shall now apply, the regularized value of the Casimir energy per unit-length is
$$\mathcal{E}_C=\underset{s\to -1}{lim}\frac{1}{2}\hbar c\,\zeta _{\mathrm{\Omega }(D=3)}(s)$$
(25)
where the zeta function $`\zeta _{\mathrm{\Omega }(D=3)}`$ for the whole set of $`\omega `$-modes in the three-dimensional problem —say $`\mathrm{\Omega }`$— is given by
$$\zeta _{\mathrm{\Omega }(D=3)}(s)=\sum _{n,m}\int _{-\infty }^{\infty }\frac{dk_z}{2\pi }\left(\frac{\omega _{n,m,k_z}}{c}\right)^{-s}.$$
(26)
First, one assumes that $`s`$ is large enough for this function to make sense, with the final aim of setting $`s=-1`$ at the end (usually, one introduces in (25) an arbitrary mass scale, but, in this problem, it turns out to be unnecessary). Taking into account (5), we write
$$\zeta _{\mathrm{\Omega }(D=3)}(s)=\sum _{n,m}\int _{-\infty }^{\infty }\frac{dk_z}{2\pi }\left[\lambda _{n,m}^2+k_z^2\right]^{-s/2}=\frac{1}{2\pi }\text{B}(\frac{s-1}{2},\frac{1}{2})\sum _{n,m}\lambda _{n,m}^{-(s-1)}.$$
(27)
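The $`k_z`$ integration behind the last step is the standard Euler integral; explicitly, with $`t=k_z/\lambda `$,

$$\int _{-\infty }^{\infty }\frac{dk_z}{2\pi }\left(\lambda ^2+k_z^2\right)^{-s/2}=\frac{\lambda ^{1-s}}{\pi }\int _0^{\infty }(1+t^2)^{-s/2}dt=\frac{\lambda ^{1-s}}{2\pi }\,\text{B}\left(\frac{1}{2},\frac{s-1}{2}\right),$$

valid for $`\text{Re}\,s`$ large enough.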
Let’s consider the zeta function for the projected two-dimensional problem, i.e., for the $`\mathrm{\Lambda }`$ eigenmode set:
$$\zeta _{\mathrm{\Lambda }(D=2)}(\sigma )=\sum _{n,m}\lambda _{n,m}^{-\sigma }=\sum _{n=0}^{\infty }d_n\zeta _n(\sigma ),\qquad \{\begin{array}{ccc}d_0=1,\hfill & & \\ d_n=2,\hfill & \text{ for }n\ge 1,\hfill & \end{array}$$
(28)
where $`\zeta _n(\sigma )`$ stands for the $`n`$th partial-wave zeta function
$$\zeta _n(\sigma )=\sum _{m=1}^{\infty }\lambda _{n,m}^{-\sigma }.$$
(29)
Bearing this in mind, we put (27) as
$$\begin{array}{ccc}\zeta _{\mathrm{\Omega }(D=3)}(s)\hfill & =\hfill & \frac{1}{2\pi }\text{B}(\frac{s-1}{2},\frac{1}{2})\zeta _{\mathrm{\Lambda }(D=2)}(s-1)\hfill \\ & =\hfill & \frac{1}{2\pi }\left[\frac{\zeta _{\mathrm{\Lambda }(D=2)}(-2)}{s+1}+\left(\mathrm{ln}(2)-\frac{1}{2}\right)\zeta _{\mathrm{\Lambda }(D=2)}(-2)+\zeta _{\mathrm{\Lambda }(D=2)}^{\prime }(-2)+𝒪(s+1)\right],\hfill \end{array}$$
(30)
where an expansion around $`s=-1`$ has taken place. This was the method applied in ref.
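For completeness, the second line of eq. (30) follows from setting $`s=-1+\epsilon `$ and expanding the Beta function,

$$\text{B}\left(\frac{s-1}{2},\frac{1}{2}\right)=\frac{\mathrm{\Gamma }\left(-1+\frac{\epsilon }{2}\right)\mathrm{\Gamma }\left(\frac{1}{2}\right)}{\mathrm{\Gamma }\left(-\frac{1}{2}+\frac{\epsilon }{2}\right)}=\frac{1}{\epsilon }+\mathrm{ln}2-\frac{1}{2}+𝒪(\epsilon ),$$

and multiplying by $`\zeta _{\mathrm{\Lambda }(D=2)}(-2+\epsilon )=\zeta _{\mathrm{\Lambda }(D=2)}(-2)+\epsilon \,\zeta _{\mathrm{\Lambda }(D=2)}^{\prime }(-2)+𝒪(\epsilon ^2)`$.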
Eqs. (28), (29) hold for $`\text{Re}\sigma >1`$, but they will have to be analytically continued to the neighbourhood of $`\sigma =-2`$ (note that $`\sigma =s-1`$). Such an analytic continuation is carried out by the contour integration method of refs.. To begin with, one takes
$$a^{-\sigma }\zeta _n(\sigma )=\frac{\sigma }{2\pi i}\oint _Cdu\,u^{-\sigma -1}\mathrm{ln}\left[f_n(u)\right],\text{ for }\text{Re}\sigma >1,$$
(31)
where $`f_n(u)f_n(k_z,\omega ,a)`$ with $`u\lambda a`$, and $`C`$ is a circuit in the complex $`u`$-plane enclosing all the positive zeros of $`f_n(u)`$. In the desired limit this contour will be semicircular, with the straight parts along the imaginary axis, and adequately avoiding the origin. The first step (see e.g.) is to examine the asymptotic behaviour $`f_{n,\text{as}}(u)`$ of $`f_n(u)`$ for $`|u|\mathrm{}`$. If $`f_{n,\text{as}}(u)`$ has no roots inside of $`C`$, we leave the eq.(31) unchanged by setting
$$a^{-\sigma }\zeta _n(\sigma )=\frac{\sigma }{2\pi i}\oint _Cdu\,u^{-\sigma -1}\mathrm{ln}\left[\frac{f_n(u)}{f_{n,\text{as}}(u)}\right].$$
(32)
Going back to (3), we observe that, for large $`x`$, $`𝒫_n^2(x)=𝒪\left(x^{-4}\right)`$. Then, one can write
$$f_n(u)=f_{n,\text{as}}(u)\left[1+\xi ^2\frac{\pi ^2u^2}{4}𝒫_n^2(u)\right],\qquad f_{n,\text{as}}(u)=\frac{(\epsilon _1+\epsilon _2)^2}{\epsilon _1\epsilon _2}\frac{c^2}{\pi ^2a^4}u^4.$$
(33)
Therefore, eq.(32) translates into
$$a^{-\sigma }\zeta _n(\sigma )=\frac{\sigma }{2\pi i}\oint _Cdu\,u^{-\sigma -1}\mathrm{ln}\left[1+\xi ^2\frac{\pi ^2}{4}u^2𝒫_n^2(u)\right].$$
(34)
After realizing that only the vertical parts of $`C`$ —where $`u=e^{\pm i\pi /2}y`$— are actually contributing to the integration, eq.(34) yields
$$\begin{array}{c}\zeta _n(\sigma )=a^\sigma \frac{\sigma }{\pi }\mathrm{sin}\left(\frac{\pi \sigma }{2}\right)\int _0^{\infty }dy\,y^{-\sigma -1}\mathrm{ln}\left\{1-\xi ^2\left[y\left(I_nK_n\right)^{\prime }(y)\right]^2\right\},\text{ for }-1<\text{Re}\sigma <0,\end{array}$$
(35)
where we have used $`𝒫_n^2(\pm iy)={\displaystyle \frac{4}{\pi ^2}}\left[\left(I_nK_n\right)^{}(y)\right]^2`$, being $`I_n`$, $`K_n`$ the corresponding modified Bessel functions (note that this $`y`$ is dimensionless). All this has validity near $`\sigma =1`$, but we still need some further work in order to reach the neighbourhood of $`\sigma =2`$.
### 3.1 Calculation to the order of $`\xi ^2`$
Let $`\mathcal{E}_C={\displaystyle \sum _{p\ge 1}}\mathcal{E}_C^{(2p)}\xi ^{2p},`$ and analogously for the involved zeta functions. Then,
$$\begin{array}{ccc}\hfill \zeta _n(\sigma )& =& -a^\sigma \frac{\sigma }{\pi }\mathrm{sin}\left(\frac{\pi \sigma }{2}\right)\sum _{p\ge 1}\frac{1}{p}\xi ^{2p}A_n^{(2p)}(\sigma ),\hfill \\ \hfill A_n^{(2p)}(\sigma )& =& \int _0^{\infty }dy\,y^{-\sigma -1}\left[y(I_nK_n)^{\prime }(y)\right]^{2p},\text{ for }-1<\text{Re}\sigma <0.\hfill \end{array}$$
(36)
Note that we are commuting a $`\xi `$-expansion with a process of analytic extension which sidesteps $`\sigma `$-poles (i.e., $`s`$-poles). Yet, since the $`\xi `$-dependence has no problematic traits, this should be correct, and we write
$$\begin{array}{ccc}\hfill \zeta _{\mathrm{\Lambda }(D=2)}(\sigma )& =& \sum _{n=0}^{\infty }d_n\zeta _n(\sigma )=\sum _{p\ge 1}\zeta _{\mathrm{\Lambda }(D=2)}^{(2p)}(\sigma )\xi ^{2p},\hfill \\ \hfill \zeta _{\mathrm{\Lambda }(D=2)}^{(2p)}(\sigma )& =& -\frac{1}{p}a^\sigma \frac{\sigma }{\pi }\mathrm{sin}\left(\frac{\pi \sigma }{2}\right)\sum _{n=0}^{\infty }d_nA_n^{(2p)}(\sigma ).\hfill \end{array}$$
(37)
If we just want to keep the terms up to order $`\xi ^2`$ in $`\mathcal{E}_C`$, it will be enough to maintain the $`p=1`$ contribution, which can be rewritten in the way
$$\zeta _{\mathrm{\Lambda }(D=2)}^{(2)}(\sigma )=-a^\sigma \frac{\sigma }{\pi }\mathrm{sin}\left(\frac{\pi \sigma }{2}\right)\int _0^{\infty }dy\,y^{-\sigma -1}F(y),\qquad F(y)\equiv \sum _{n=-\infty }^{\infty }\left[y(I_nK_n)^{\prime }(y)\right]^2,$$
(38)
where we have also taken into account (28) and the fact that $`\zeta _n(\sigma )=\zeta _n(\sigma )`$. An integral representation of the $`F(y)`$ is already available in eq.(14). From there, we proceed as in the derivation of eq.(15), i.e., we do the variable change $`u|\mathrm{sin}(\phi /2)|`$ and find
$$F(y)=\frac{8y^2}{\pi }\int _0^1\frac{du}{\sqrt{1-u^2}}u^2K_1^2(2uy).$$
(39)
With this, we go back to eq. (38) and focus on the integral
$$\mathcal{I}(\sigma )\equiv \int _0^{\infty }dy\,y^{-\sigma -1}F(y)=\frac{8}{\pi }\int _0^1\frac{du}{\sqrt{1-u^2}}u^2\int _0^{\infty }dy\,y^{1-\sigma }K_1^2(2uy).$$
(40)
The $`y`$-integration is evaluated with the help of formula 6.576.4 in ref. Then, the remaining $`u`$-integral is immediate using formula 3.251.1 in the same book. As a result,
$$\mathcal{I}(\sigma )=\frac{1}{2\sqrt{\pi }}\frac{\mathrm{\Gamma }\left(\frac{4-\sigma }{2}\right)\mathrm{\Gamma }^2\left(\frac{2-\sigma }{2}\right)\mathrm{\Gamma }\left(\frac{1+\sigma }{2}\right)\mathrm{\Gamma }\left(-\frac{\sigma }{2}\right)}{\mathrm{\Gamma }(2-\sigma )\mathrm{\Gamma }\left(\frac{\sigma +2}{2}\right)},$$
(41)
which has a zero of order one at $`\sigma =-2`$ by virtue of the singularity of $`\mathrm{\Gamma }\left(\frac{\sigma +2}{2}\right)`$. Putting it into eq.(38) and expanding near $`\sigma =-2`$, we find
$$\zeta _{\mathrm{\Lambda }(D=2)}^{(2)}(\sigma )=-a^\sigma \frac{\sigma }{\pi }\mathrm{sin}\left(\frac{\pi \sigma }{2}\right)\mathcal{I}(\sigma )=\frac{1}{a^2}\left[\frac{1}{6}(\sigma +2)^2+𝒪\left((\sigma +2)^3\right)\right],$$
(42)
which provides the desired analytic extension to $`\text{Re}\sigma =-2`$. The crucial point is that it has a zero of order two at $`\sigma =-2`$ and, therefore, $`\zeta _{\mathrm{\Lambda }(D=2)}^{(2)}(-2)=0`$ and $`\zeta _{\mathrm{\Lambda }(D=2)}^{(2)\prime }(-2)=0`$. This, together with eqs. (25) and (30), leads to $`\mathcal{E}_C=0+𝒪\left(\xi ^4\right)`$, i.e.,
$$\mathcal{E}_C^{(2)}=0,$$
(43)
which was numerically found in ref. <sup>4</sup><sup>4</sup>4Incidentally, the comments made by one the authors of the present letter led to the correct numerical figure in ref.. (see also ).
### 3.2 Higher-order corrections in $`\xi ^2`$
In order to know new corrections in $`\xi ^2`$, one has to keep the next $`\xi ^2`$-terms in eqs. (36), (37). However, unlike $`A_n^{(2)}(\sigma )`$, the $`A_n^{(2p)}(\sigma )`$ integrals with $`p\ge 2`$ are already finite at $`\sigma =-2`$, because $`\left[y(I_nK_n)^{\prime }(y)\right]^{2p}\to {\displaystyle \frac{1}{(2y)^{2p}}}`$ as $`y\to \infty `$. Thus, the restriction to $`\text{Re}\sigma >-1`$ in (36) is caused only by the presence of $`p=1`$, while, for $`p\ge 2`$, it suffices to numerically evaluate all the necessary $`A_n^{(2p)}(\sigma )`$’s at $`\sigma =-2`$, i.e.,
$$\zeta _{\mathrm{\Lambda }(D=2)}^{(2p)\prime }(-2)=-\frac{1}{pa^2}\sum _{n=0}^{\infty }d_nA_n^{(2p)}(-2).$$
(44)
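One possible way of carrying out this numerical evaluation is sketched below; this is not the authors' original code, and the working precision and truncation order are arbitrary choices.

```python
from mpmath import mp, besseli, besselk, diff, quad, inf

mp.dps = 20

def X(n, y):
    # y (I_n K_n)'(y)
    return y * diff(lambda t: besseli(n, t) * besselk(n, t), y)

def A(n, p):
    # A_n^{(2p)}(-2) = int_0^inf dy  y [y (I_n K_n)'(y)]^(2p)
    return quad(lambda y: y * X(n, y) ** (2 * p), [0, inf])

def series_sum(p, nmax):
    return A(0, p) + 2 * sum(A(n, p) for n in range(1, nmax + 1))   # d_n = 2 for n >= 1

print(series_sum(2, 30))   # approaches the value ~0.191 quoted below for p = 2
```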
A posteriori, we have verified that each term decreases quickly enough with $`n`$ and that the $`n`$-summation has a numerically acceptable behaviour. Then, the finiteness of each of these sums confirms that $`\zeta _{\mathrm{\Lambda }(D=2)}^{(2p)}(-2)=0`$, and the $`\xi ^{2p}`$-contributions to $`\mathcal{E}_C`$ are
$$\begin{array}{ccc}\hfill \mathcal{E}_C^{(2p)}\xi ^{2p}& =& \hbar c\frac{1}{2}\zeta _{\mathrm{\Omega }(D=3)}^{(2p)}(-1)\xi ^{2p}=\hbar c\frac{1}{4\pi }\zeta _{\mathrm{\Lambda }(D=2)}^{(2p)\prime }(-2)\xi ^{2p}\hfill \\ & =& -\hbar c\frac{\xi ^{2p}}{4\pi pa^2}\sum _{n=0}^{\infty }d_nA_n^{(2p)}(-2),\text{for }p\ge 2\text{,}\hfill \end{array}$$
(45)
where the meaning of $`\zeta _{\mathrm{\Omega }(D=3)}^{(2p)}(s)`$ is obvious. When $`p=2`$, including all the $`n`$-values up to $`n_{\text{max}}=120`$, we have found $`{\displaystyle \sum _{n\ge 0}}d_nA_n^{(4)}(-2)\simeq 0.19108`$. This and formula (45) yield
$$\mathcal{E}_C^{(4)}\xi ^4=-0.0076028\frac{\hbar c}{a^2}\xi ^4,$$
(46)
in agreement with ref. . As remarked there, the negative sign means that the involved Casimir forces are attractive. Physical implications concerning the flux tube model for confinement have been discussed in that work.
In fact, the higher $`p`$, the fewer terms are needed in the $`n`$-series for obtaining reliable figures. For $`p>2`$ we have found many contributions, but we just list the first ones:
| $`p`$ | 3 | 4 | 5 | 6 | 7 | $`\mathrm{\dots }`$ |
| --- | --- | --- | --- | --- | --- | --- |
| $`\mathcal{E}_C^{(2p)}a^2/(\hbar c)`$ | $`-0.0022637`$ | $`-0.0010807`$ | $`-0.0006202`$ | $`-0.0003972`$ | $`-0.0002737`$ | $`\mathrm{\dots }`$ |
As argued in refs. or , the special value $`\xi ^2=1`$ should reproduce the perfectly-conducting case $`\mathcal{E}_{\text{C (p.c.)}}a^2/(\hbar c)=-0.01356\mathrm{\dots }`$ . Taking all the contributions up to $`p=7`$, we obtain $`\mathcal{E}_Ca^2/(\hbar c)\simeq -0.01224`$, with a 10% relative error. This is not too surprising, as the $`\xi ^2`$-expansion comes from a logarithmic series, and a slow numerical convergence at $`\xi ^2=1`$ is to be expected. Including all the terms up to $`p=200`$ we have found $`\mathcal{E}_Ca^2/(\hbar c)\simeq -0.01354`$, with a 0.15% relative error.
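As a quick arithmetic check of the 10% figure, one can simply add up the contributions listed above:

```python
# Contributions to E_C a^2/(hbar c) for p = 2 ... 7, as listed above
terms = [-0.0076028, -0.0022637, -0.0010807, -0.0006202, -0.0003972, -0.0002737]
total = sum(terms)
print(total)              # ~ -0.01224
print(total / -0.01356)   # ~ 0.90, i.e. about 10% short of the perfectly conducting value
```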
## 4 Conclusions
The ultimate consequences of any result about the Casimir effect are not easy to foresee, as the domain of applicability of this concept has been expanding beyond what could be considered ‘traditional’ areas of field theory. For instance, we have recent examples of these ideas in spacetime evolution and quantum cosmology . Proposals have even been made about possible ways of extracting work from the vacuum energy .
In the present letter we have confirmed the expectation that the $`\xi ^2`$-contribution to the Casimir energy for a dilute-dielectric cylinder, infinitely long, and under the condition of light-velocity conservation, would have to vanish. Numerically speaking, this had been noticed with very high accuracy in several articles, starting with ref., but in the present letter we have been able to derive it as an exact result (eqs.(24) and (43)). Another new aspect lies in applying the method developed in , i.e., the use of summation theorems for infinite series of Bessel functions, which has spared us the handling of Debye expansions (see also the application of this method in ref. and , but this time in connection with the problem of refs.).
Moreover, by a numerical evaluation, and within the complete zeta function regularization framework, we have reobtained the $`\xi ^4`$-contribution calculated in ref., (our eq. (46)), which is negative. This constitutes the first deviation from zero and shows that, although at a higher order, the Casimir energy of this system would tend to contract the cylinder. Even higher corrections in $`\xi ^2`$ have also been found (table in sec. 3).
Our spectral zeta function has been constructed like in ref.. Other variants of the zeta function procedure, which differ from ours at some particular steps, are also in circulation (e.g. ref. or ). We regard them as slightly different formulations of one common underlying principle. In particular, ref. illustrates the advantages of dealing with the total zeta function as a whole object, rather than a series of partial-wave zeta functions.
## Appendix A Appendix: The Meijer $`G`$ function
Here we state some facts about the Meijer $`G`$ function, which is defined by the integral
$$G_{pq}^{mn}(a_{\text{list}},b_{\text{list}},x)=\frac{1}{2\pi i}\int _Lds\,\frac{\prod _{j=1}^m\mathrm{\Gamma }(b_j-s)\prod _{j=1}^n\mathrm{\Gamma }(1-a_j+s)}{\prod _{j=m+1}^q\mathrm{\Gamma }(1-b_j+s)\prod _{j=n+1}^p\mathrm{\Gamma }(a_j-s)}x^s.$$
(47)
The different integration paths $`L`$ can be found, for example, in , as most of the other properties we use. By simple variable changes one may prove numerous identities such as:
$$x^nG_{pq}^{mn}(a_{\text{list}},b_{\text{list}},x)=G_{pq}^{mn}(a_{\text{list}}+n,b_{\text{list}}+n,x)$$
(48)
and
$$G_{pq}^{mn}(a_{\text{list}},b_{\text{list}},\frac{1}{x})=G_{qp}^{nm}(1-b_{\text{list}},1-a_{\text{list}},x)$$
(49)
which we have used in secs. $`2.1,2.2`$.
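As an aside, representations like the first step of eq. (16) can be checked directly, since mpmath implements the Meijer $`G`$ function. The sketch below uses an arbitrary test point; for coinciding parameters (here the $`b`$-list $`1,-1,0`$) mpmath evaluates the degenerate case by an internal limit.

```python
from mpmath import mp, mpf, meijerg, besselk, sqrt, pi

# Spot-check of K_1(z)^2 written as a Meijer G function, cf. eq. (16).
mp.dps = 25
z = mpf("0.7")   # arbitrary test point

lhs = besselk(1, z) ** 2
rhs = sqrt(pi) / 2 * meijerg([[], [mpf(1) / 2]], [[1, -1, 0], []], z ** 2)
print(lhs, rhs)   # should agree
```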
Acknowledgements
I.K. wishes to thank J. Feinberg, A. Mann and M. Revzen for their remarks. A.R. thanks K.A. Milton and V.V. Nesterenko for discussions. |
# A Robust Filter for the BeppoSAX Gamma Ray Burst Monitor Triggers
## The Gamma Ray Burst Monitor
The GRBM frontera97 ; amati97 ; feroci97 ; costa98 is a gamma-ray detection system onboard the BeppoSAX satellite. It is the secondary function of the anticoincidence shields of the PDS experiment. It is composed of four $`\sim `$1100 cm<sup>2</sup> slabs of CsI(Na) scintillators operating in the 40-700 keV range. Each detector provides time series of the detected counts in the above energy range, with 1 s resolution as a continuous housekeeping and with better than 8 ms resolution upon trigger. An onboard trigger is active whenever a statistically significant counting excess is simultaneously detected in at least two of the four detectors.
## Aims, Method and Limits
Besides being gamma-ray detection devices, the scintillating crystals composing the GRBM are sensitive to highly ionizing particles that, leaving a large amount of energy ($`\sim `$GeV) in just one shot, result in a phosphorescence phenomenon, with a consequent detection by the electronics of a large number of counts in a few tens of ms, i.e. a spike in the counting rate. When the same particle subsequently crosses two detectors, it causes the onboard logic to trigger on the event, which is electronically indistinguishable from a cosmic gamma-ray transient (no particle anticoincidence is available to the GRBM). The number of spikes that trigger the onboard electronics (originating false triggers) is of the order of 9-10/day. Therefore, the aim of this work is to provide a software filter that allows a ‘safe’ first-order discrimination of the ‘instrumental’ triggers from the ‘cosmic-origin’ triggers. This will allow the reduction of the huge number of onboard triggers recorded so far (of the order of 10,000 in the first three years of BeppoSAX operations), and the application of a more refined program to the resulting sample of triggers. A first drawback of the ‘roughness’ of our filter is that it does not include criteria to separate gamma-ray bursts from solar flares and from soft gamma repeater events. This task will be carried out by the ‘second order’ filter.
Our software filter is based on the automatic on-ground analysis of the high resolution time series, according to criteria established on the basis of the known detector/electronics behaviour and an extended study of the GRBM time series. Usually, an eye inspection of a GRBM light curve is sufficient to discriminate cosmic gamma-ray events from spurious events. However, when an archival search for real GRB is carried out this becomes not viable anymore, and an automatic filter is needed. We therefore set-up an IDL-based software code that implements a number of discrimination criteria to the GRBM light curves. This is a first approach to the problem and we did not make use of all the information that the GRBM provides for each onboard trigger. In fact, also 1-s resolution data are available in two energy ranges (40-700 and $`>`$700 keV) and time-averaged energy spectra but they are not used in the analysis presented here.
The criteria implemented in the program are based on the knowledge of the instrument operational principles and on experience on the observed light curves. It is likely that particular cases exist that were not taken into account in our code. At this time, the following parameters of the individual events are computed for each of the four GRBM detection units: duration, rise time, simultaneity, shape, full width at half maximum. The basic criterium of the code is to assign a ‘score’ to each of these parameters, based on their comparison between pairs of detectors, with the goal of having the smallest score to the most-likely-cosmic events. The individual scores for the different parameters then combine together to give a total score accounting for all the measured event characteristics. For the final score, the higher is the value the smaller is the probability of being a cosmic event.
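The actual filter is implemented in IDL and its thresholds and weights are not given in the text. The Python sketch below only illustrates the kind of per-peak parameters and pair-wise scoring described above; every numerical cut is an invented placeholder, not the real GRBM criterion.

```python
import numpy as np

def peak_parameters(counts, dt=0.0078):
    """Crude duration, rise time, FWHM and peak time (s) of the main excess."""
    bkg = float(np.median(counts))
    excess = np.clip(counts - bkg, 0.0, None)
    ipk = int(np.argmax(excess))
    significant = excess > 3.0 * np.sqrt(max(bkg, 1.0))
    half_max = excess > 0.5 * excess[ipk] if excess[ipk] > 0 else significant
    first_sig = int(np.argmax(significant)) if significant.any() else ipk
    return {"duration": significant.sum() * dt,
            "rise_time": max(ipk - first_sig, 0) * dt,
            "fwhm": half_max.sum() * dt,
            "t_peak": ipk * dt}

def pair_score(p1, p2):
    """Lower score = more burst-like; compares the same event in two detectors."""
    score = 0.0
    if min(p1["duration"], p2["duration"]) < 0.064:      # spike-like: very short
        score += 5.0
    if min(p1["rise_time"], p2["rise_time"]) < 0.008:    # spike-like: one-bin rise
        score += 5.0
    score += abs(p1["t_peak"] - p2["t_peak"]) / 0.064    # lack of simultaneity
    score += abs(p1["fwhm"] - p2["fwhm"]) / max(p1["fwhm"], p2["fwhm"], 0.008)
    return score

def classify(total_score):
    return "Burst" if total_score < 3.0 else ("Doubt" if total_score < 8.0 else "Spike")
```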
## Calibration of the filter
In order to test and calibrate the efficiency of our software filter, we set-up a sample of selected events whose origin was known by different methods (typically BATSE or IPN events). To these a number of eye-screened false triggers were added. In Figure 1 we present the results of the analysis carried out with our filter on such a sample of events. The x-axis reports the score given to each event by the filter. The texture classifies the events, based on the comparison with data from other satellites (BATSE, Ulysses, etc.). The sample includes several short GRBs in order to test the ability of the software to distinguish between them and spikes, but many long GRBs are also included. Thus, we can define 3 score classes (Burst, Spike, Doubt), with the confidences given in Table 1 (the sum of every row is 100%), computed assuming multinomial distribution. The software works on single peaks even during the evolution within a multiple-peaked event. Therefore, the results reported in the Table should be intended as applicable to individual peaks in each light curve. Examples of events belonging to these three categories are presented in Figure 2.
There are 3 additional special classes in the plot: No-End, No-Class, No-Trig. No-End: the signal does not return to noise level before the end of the light curve (GRBM high time resolution data have a maximum coverage of 106 s). This can be due to long bursts, or to a variable background (see an example in feroci97 ). No-Class: the signal has not been analyzed. This can happen for weak events if the software is not able to estimate the duration of the signal. No-Trig: the software is unable to find other than noise in the light curve, possibly due to very weak signals.
## Application to the GRBM database
The software filter has been applied to a fraction of the BeppoSAX GRBM data archive, covering more than 3 years of elapsed time, but consisting of about 603 days of satellite observing time. The goal was to test our software on a homogeneous sample of onboard triggers. A noticeable by-product of this operation is an estimate of the GRB detection efficiency. As stated above, the filter operates on individual peaks. The result of the analysis gives: 440 peaks from cosmic events, 510 doubt cases, 4648 spikes, 58 No End, 86 No Class. If we apply the filter efficiency that was defined in the previous section, then the number of cosmic peaks becomes (367$`\pm `$26). They belong to about 340 individual events, of which only $`\sim `$180 have been post-facto verified to be real cosmic events.
## Summary and Conclusions
The results of the analysis on an unselected portion ($`\sim `$40%) of the BeppoSAX/GRBM data archive presented in this paper allow us to draw the following conclusions:
- A number of $`\sim `$180 cosmic events detected by the BeppoSAX/GRBM were identified (including a small number of Solar Flares and events from Soft Gamma Repeaters), over a net exposure time of $`\sim `$545 days, leading to an estimate of the GRBM efficiency of triggering on cosmic events of $`\sim `$0.33/day (additional events are detected, but without an onboard trigger).
- The automatic filter has a $`\sim `$50% efficiency in selecting real events out of false triggers (i.e., any event selected by the program has a $`\sim `$50% probability of being of cosmic origin). This reduced efficiency is likely due to the incompleteness of the sample on which the code was calibrated.
- The automatic filter has a $`>`$90% efficiency in discarding false triggers (i.e., any event not selected by the program has a less than 10% probability of being a real event, based on the software calibration). So, the filter reduces the number of light curves to be analyzed to about 10% of the total, with an expected efficiency of more than 90%.
## 1 Introduction
Extra charged ($`W^{\prime }`$) and extra neutral ($`Z^{\prime }`$) gauge bosons are predicted by many extensions of the Standard Model (SM). The masses and the couplings of the extra gauge bosons to fermions depend on the free parameters of the theory extending the SM. They must be determined by experiment. Hence, the observation of extra gauge bosons would teach us an important lesson on physics beyond the SM. Therefore, the search for these particles is included in the research program of every future collider.
Recent collider data are consistent with the SM. In particular, they suggest that extra gauge bosons predicted in Grand Unified Theories (GUT’s) must be heavier than $``$500 GeV . We therefore expect that extra gauge bosons would manifest themselves as (small) deviations of some observables from the SM predictions at future linear $`e^+e^{}`$ colliders. At LHC, they would show up through peaks in the invariant mass distribution of their decay products. Two cases are possible. If the SM is confirmed at future colliders, impressive improvements on the exclusion limits on extra gauge bosons can be achieved, see e.g. for a review and further references. For extra gauge bosons with masses considerably smaller than their search limit in the same reaction, some information on their masses and couplings can be obtained. See again for a review and further references.
In most models, the reactions $`e^+e^{-}\to (\gamma ,Z,Z^{\prime })\to f\overline{f}`$ and $`pp\to Z^{\prime }(W^{\prime })X`$ are most sensitive to a $`Z^{\prime }`$ or $`W^{\prime }`$. We suppose here that a signal of extra gauge bosons is detected in one of these reactions and its mass and some of its couplings are measured.
In earlier contributions , we investigated the potential of the reaction $`e^+e^{}\nu \overline{\nu }\gamma `$ in obtaining search limits for a $`W^{}`$ appearing in different models. For some models, these limits can compete with those from the LHC. In contrast to the limits from hadron colliders, the limits from $`e^+e^{}`$ collisions are independent of unknown details of structure functions and quark mixing. This is a good process at an $`e^+e^{}`$ collider for putting bounds on a $`W^{}`$.
In this contribution, we show where the reaction $`e^+e^{}\nu \overline{\nu }\gamma `$ can contribute complementary information to the coupling measurements of extra gauge bosons. A more detailed paper is forthcoming.
## 2 Assumptions and Calculation
We show constraints on the couplings $`L_\nu (Z^{}),R_\nu (Z^{})`$ of extra neutral gauge bosons to neutrinos and on the couplings $`L_l(W^{}),R_l(W^{})`$ of extra charged gauge bosons to leptons. The couplings $`L_f(V)`$ and $`R_f(V)`$ of the left- and right-handed fermions $`f_{L,R}`$ to the vector boson $`V`$ are defined by the following interaction lagrangian,
$$\mathcal{L}=V_\mu \overline{f}_L\gamma ^\mu f_LL_f(V)+V_\mu \overline{f}_R\gamma ^\mu f_RR_f(V).$$
(1)
Two-dimensional constraints (95% confidence) correspond to
$$\chi ^2=\sum _i\left(\frac{O_i(SM)-O_i(SM+Z^{\prime }+W^{\prime })}{\mathrm{\Delta }O_i}\right)^2=5.99.$$
(2)
In equation (2), $`O_i(SM)`$ is the expectation for the observable $`O_i`$ by the SM, $`O_i(SM+Z^{}+W^{})`$ is the prediction of the extension of the SM and $`\mathrm{\Delta }O_i`$ is the expected experimental error. The index $`i`$ numbers different observables as, for example, the total cross section $`\sigma _T=\sigma `$ and $`A_{LR}`$.
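For reference, the value 5.99 is simply the 95% quantile of a $`\chi ^2`$ distribution with two degrees of freedom; a minimal sketch of eq. (2):

```python
from scipy.stats import chi2

print(chi2.ppf(0.95, df=2))        # 5.99...

def chi_square(points):
    """points: iterable of (O_SM, O_model, delta_O) tuples, cf. eq. (2)."""
    return sum(((o_sm - o_model) / d_o) ** 2 for o_sm, o_model, d_o in points)
```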
The physical input for our numerics is $`M_W=80.33GeV,M_Z=91.187GeV,\mathrm{sin}^2\theta _W=0.23124,\alpha =1/128`$ and $`\mathrm{\Gamma }_Z=2.49GeV`$.
Let $`E_\gamma `$ and $`\theta _\gamma `$ denote the photon’s energy and angle in the $`e^+e^{}`$ center-of-mass, respectively. We have restricted the range of $`E_\gamma `$ and $`\theta _\gamma `$ as follows:
$$10GeV<E_\gamma <\frac{\sqrt{s}}{2}(1-M_Z^2/s)-6\mathrm{\Gamma }_Z,$$
(3)
$$10^o<\theta _\gamma <170^o,$$
(4)
so that the photon may be detected cleanly. The upper bound on $`E_\gamma `$ ensures that the photons from the radiative return to the SM $`Z`$ resonance are rejected. As well, the angular cut eliminates the collinear singularity arising when the photon is emitted parallel to the beam.
The most dangerous background to the reaction $`e^+e^{}\nu \overline{\nu }\gamma `$ comes from radiative Bhabha scattering. It can be eliminated by a cut on the transverse momentum of the photon,
$$P_{T,\gamma }>\sqrt{s}\mathrm{sin}\theta _\gamma \mathrm{sin}\theta _v/(\mathrm{sin}\theta _\gamma +\mathrm{sin}\theta _v),$$
(5)
where $`\theta _v`$ is the minimum angle for veto detectors. We take $`\theta _v=25`$ mrad.
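A minimal implementation of the photon selection of eqs. (3)-(5) might look as follows; the form used for the upper energy bound is one possible reading of eq. (3) and should be treated as an assumption.

```python
import math

def photon_passes_cuts(E_gamma, theta_gamma, sqrt_s,
                       M_Z=91.187, Gamma_Z=2.49, theta_v=0.025):
    """Photon selection of eqs. (3)-(5); energies in GeV, angles in radians."""
    # Upper bound rejecting the radiative return to the Z (assumed margin of 6*Gamma_Z)
    E_max = 0.5 * sqrt_s * (1.0 - M_Z ** 2 / sqrt_s ** 2) - 6.0 * Gamma_Z
    if not (10.0 < E_gamma < E_max):
        return False
    if not (math.radians(10.0) < theta_gamma < math.radians(170.0)):
        return False
    s_g, s_v = math.sin(theta_gamma), math.sin(theta_v)
    pT_min = sqrt_s * s_g * s_v / (s_g + s_v)          # eq. (5)
    return E_gamma * math.sin(theta_gamma) > pT_min    # photon p_T = E_gamma sin(theta)
```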
The unpolarized cross section is composed of polarized cross sections as $`\sigma =\sigma _{LL}+\sigma _{LR}+\sigma _{RL}+\sigma _{RR}=\sigma _L+\sigma _R`$. Note that $`\sigma _{LL}=\sigma _{RR}\ne 0`$ only in models where the $`W^{\prime }`$ has non-zero couplings to left- and right-handed fermions simultaneously. Even in this case, we have $`\sigma _{LL},\sigma _{RR}\ll [\sigma (SM)-\sigma (SM+Z^{\prime }+W^{\prime })]`$ due to present experimental bounds on extra gauge bosons and because $`\sigma _{LL}`$ and $`\sigma _{RR}`$ do not interfere with the SM.
We consider polarization asymmetries involving polarized electrons only, $`P^{-}=90\%`$, $`A_{LR}=(\sigma _L-\sigma _R)/(\sigma _L+\sigma _R)`$, and polarization asymmetries involving both polarized beams, $`P^{-}=90\%`$, $`P^+=60\%`$, $`A_{LR}=(\sigma _{LR}-\sigma _{RL})/(\sigma _{LR}+\sigma _{RL})`$ and devote half of the luminosity to each polarization combination. The case of two polarized beams corresponds to an effective polarization $`P_{eff}=(P^{-}+P^+)/(1+P^{-}P^+)=97.4\%`$ of the electron beam. Note that the use of the effective polarization is exact for $`\sigma _{LL}=\sigma _{RR}=0`$. If this is not the case, it remains a good approximation.
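The quoted effective polarization is easy to verify:

```python
P_minus, P_plus = 0.90, 0.60
P_eff = (P_minus + P_plus) / (1.0 + P_minus * P_plus)
print(P_eff)   # 0.974
```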
We assume systematic errors of 1% for both observables $`\sigma `$ and $`A_{LR}`$, although we know that this assumption is very optimistic for $`\sigma `$.
We have performed two independent calculations. In the first calculation, the squared matrix element was obtained in the conventional trace formalism. The phase space of the neutrinos was integrated out analytically. This results in (rather long) analytical formulas for the doubly differential cross section $`d^2\sigma /(dE_\gamma d\mathrm{cos}\theta _\gamma )`$. All analytical integrals were checked numerically to a precision of at least 10 digits. Artificial singularities in the physical phase space appeared from the partial fraction decomposition of the first integration. These singularities were proven to cancel analytically after the second integration and an appropriate Taylor expansion. All algebraic manipulations were carried out with the algebraic manipulation program form . The remaining one (two) integrations to obtain simple distributions (total cross sections) were done using an adaptive Simpson integration routine.
In a second calculation, the helicity amplitudes were calculated by spinor techniques. They were squared numerically and summed over the helicities of the final states. The result was integrated using Monte Carlo techniques.
The total cross sections from both calculations agree within the Monte Carlo errors. For the SM, they agree with the predictions of COMPHEP .
## 3 Extra Neutral Gauge Bosons
In figures 1 and 2, we assume that there is a signal from a $`Z^{}`$ and no signal from a $`W^{}`$. This can happen in models where the $`W^{}`$ has purely right-handed couplings and the right-handed neutrino is heavy. If there is a signal for a $`W^{}`$, a similar analysis could be done including the known $`W^{}`$ parameters. The experimental errors of these parameters would then enlarge the errors of the measurements shown in the next two figures. However, it would leave our main conclusions unchanged.
We find that the process $`e^+e^{-}\rightarrow \nu \overline{\nu }\gamma `$ can give some constraints on the couplings of the $`Z^{\prime }`$ to SM neutrinos below the $`Z^{\prime }`$ resonance.
In figures 1 and 2, we assume a heavy repetition of the SM Z boson. We see that we can get some constraints even in the case where the $`Z^{}`$ is considerably heavier than the center-of-mass energy. The region which cannot be resolved by the observables is between the two lines and contains the couplings of the assumed model. This choice also shows our normalization of the couplings. For the cases where only one bounding line is shown, the second line is outside the figure. $`R_\nu (Z^{})`$ and $`L_\nu (Z^{})`$ are mainly constrained by the interference of the $`Z^{}`$ contributions with the SM contributions. Mainly the $`Z^{}`$ coupling to left-handed neutrinos can be constrained. This makes the constraints especially simple.
Figure 1 illustrates the effect of different observables and different experimental parameters on these constraints. The total cross section gives the strongest constraint. The constraint from $`A_{LR}`$ is shown for one and two polarized beams. The constraints from energy and angular distributions with 10 equal size bins give no improvement. The constraint for a low luminosity linear collider is also shown in figure 1. A systematic error of 1% relaxes the constraints considerably and dilutes the advantage of high luminosity. We see that small systematic errors and a high luminosity collider are highly desired for the proposed model measurement.
Figure 2 shows the possible constraints on $`R_\nu (Z^{})`$ and $`L_\nu (Z^{})`$ including a systematic error of 1% for three representative $`Z^{}`$ masses. The constraints become much stronger for smaller $`Z^{}`$ masses. So far, we assumed that the $`Z^{}e^+e^{}`$ couplings $`R_e(Z^{})`$ and $`L_e(Z^{})`$ are precisely known. However, they must be measured (with errors) by experiment. Figure 4b of reference illustrates such a measurement for a collider with conventional luminosity. The errors from reference are taken to estimate its influence on the $`R_\nu (Z^{})`$, $`L_\nu (Z^{})`$ constraint. Our input for the errors of the $`Z^{}e^+e^{}`$ couplings for $`M_Z^{}=1.0TeV`$ and $`0.75TeV`$ are obtained from those for $`1.5TeV`$ by the scaling relation (2.63) in . We see that the uncertain knowledge of the $`Z^{}e^+e^{}`$ couplings leads to weaker constraints on $`R_\nu (Z^{})`$ and $`L_\nu (Z^{})`$. However, figure 2 shows that this effect is only important for a very heavy $`Z^{}`$ where its couplings are already weakly constrained. The influence of the errors of the $`Z^{}e^+e^{}`$ coupling measurement is negligible for a relatively light $`Z^{}`$.
Finally, we mention that there is no sign ambiguity in the measurement of $`R_\nu (Z^{})`$ and $`L_\nu (Z^{})`$, if the signs of the $`Z^{}e^+e^{}`$ couplings are known. Remember that the $`Z^{}e^+e^{}`$ couplings have a two-fold sign ambiguity if measured in the reaction $`e^+e^{}f\overline{f}`$ alone. If the sign ambiguity in the $`Z^{}e^+e^{}`$ couplings is removed , i.e. by measurements of the reaction $`e^+e^{}W^+W^{}`$ below the $`Z^{}`$ resonance or by measurements at the $`Z^{}`$ resonance, it disappears also in the proposed constraint of $`R_\nu (Z^{})`$ and $`L_\nu (Z^{})`$.
## 4 Extra Charged Gauge Bosons
In figures 3 to 6, we assume that there is no signal from a $`Z^{}`$ but a signal from a $`W^{}`$. This can happen in models where the $`W^{}`$ is considerably lighter than the $`Z^{}`$. We recognize that this particular scenario is unlikely in the context of the models we consider. For instance, in the UUM model, the W and Z masses are approximately equal and there would most likely be a signal observed for the $`Z^{}`$ in addition to the $`W^{}`$. So it should be understood that our results for the case of a $`W^{}`$ only represent an estimate of the reach of this process in constraining $`W^{}`$ couplings, rather than precision limits in the context of a fuller understanding of the physics realized in nature. We use this simple scenario in order to indicate sensitivity to various parameters, such as the observables used and the luminosity. In the case of a $`Z^{}`$ signal, the knowledge about this particle should be included in the following analysis. Again, the experimental errors of the measurements of the $`Z^{}`$ parameters would enlarge the errors of the $`W^{}`$ measurements but not change our main conclusions.
We find that the process $`e^+e^{-}\rightarrow \nu \overline{\nu }\gamma `$ can give constraints on the couplings $`L_l(W^{\prime })`$ and $`R_l(W^{\prime })`$ for $`W^{\prime }`$ masses considerably larger than the center-of-mass energy.
Figure 3 is similar to figure 1 but it shows the constraint on the $`W^{}`$ couplings. For illustration, we assume that there exists a $`W^{}`$ with couplings as the SM W but with a mass of $`1.5TeV`$. We get some constraints on the couplings even in this case. The constraints from energy and angular distributions give no improvement for the considered model. The constraint from $`A_{LR}`$ is complementary to that from the total cross section. It is shown for one and two polarized beams. It turns out that $`\sigma `$ and $`A_{LR}`$ together give the best constraint on the coupling. The constraints on the $`W^{}`$ couplings always have a two-fold sign ambiguity, i.e. nothing is changed by a simultaneous change of the sign of $`L_l(W^{})`$ and $`R_l(W^{})`$. This ambiguity is obvious from the amplitude where $`W^{}`$ couplings always enter in products of pairs.
In figure 4, we show constraints on the $`W^{}`$ couplings from $`\sigma `$ and $`A_{LR}`$ combined. We have two well separated regions for high luminosity and no systematic error (compare figure 3). These two regions come closer together for low luminosity and no systematic error. We are left with one large region after the inclusion of a systematic error of 1%. As in the case of extra neutral gauge bosons, a small systematic error and a high luminosity are necessary for a coupling measurement.
In figure 5, we show how the constraints on the $`W^{\prime }`$ couplings vary for different $`W^{\prime }`$ masses. The constraint for $`M_{W^{\prime }}=1.5TeV`$ is identical to that shown in figure 4. We see that the constraint on the $`W^{\prime }`$ couplings improves dramatically for lower $`W^{\prime }`$ masses.
Figure 6 illustrates the possible discrimination between different models. We see that a $`W^{}`$ with SM couplings ($`W_L^{}`$) can be separated from the SM. A $`W^{}`$ with pure right-handed couplings ($`W_R^{}`$) with a strength of the left-handed coupling of the SM $`W`$ cannot be distinguished from the SM case.
From the amplitude of the process $`e^+e^{}\nu \overline{\nu }\gamma `$, it can be seen that the constraints shown in figs. 3 to 6 are, to good approximation, valid for the combinations $`L_l(W^{})/M_W^{}`$ and $`R_l(W^{})/M_W^{}`$, and not for the couplings and the mass separately. We have fixed so far the $`W^{}`$ mass for illustration. If a $`W^{}`$ is found with a mass different from our assumptions, the constraint on its couplings can then be obtained by the appropriate scaling of our results as long as $`M_W^{}\sqrt{s}`$.
We considered model independent bounds on the couplings of extra gauge bosons neglecting the existence of other extra gauge bosons, i.e. of the $`W^{}`$ in the case of $`Z^{}`$ constraints or of the $`Z^{}`$ in the case of $`W^{}`$ constraints. However, in the general case, extra neutral and extra charged gauge bosons simultaneously influence the observables.
In figure 7, we assume that the left-right-symmetric model is true. See reference for conventions. In the first case, we take $`M_W^{}=0.75TeV`$. It follows that $`M_Z^{}=0.90(1.27)TeV`$ for $`\kappa =1`$ and $`\rho =1(2)`$. We show the constraints on the couplings of the $`W^{}`$ for $`\rho =1`$ obtained by two different fitting strategies: first ignoring the $`Z^{}`$ completely, and second taking the $`Z^{}`$ into account exactly without any errors. We see that the two curves are quite close. The reason is that the reaction $`e^+e^{}\nu \overline{\nu }\gamma `$ is not very sensitive to such a $`Z^{}`$.
The more realistic case that the $`Z^{}`$ parameters are measured with some errors would produce a constraint between the two curves of figure 7. The case $`\rho =2`$ predicts a heavier $`Z^{}`$. This would produce two curves, which differ even less from each other than those for $`\rho =1`$. To demonstrate how the constraints change for a larger signal, we repeated the same procedure with $`M_W^{}=550GeV`$. This number (and the mass of the associated $`Z^{}`$) are at the edge of the present exclusion limit . However, the constraints get only a minor improvement.
Figure 8 is similar to figure 7, but now we assume that the Un-Unified model is true. See again reference for conventions. We consider two different mass scenarios, $`M_W^{}=M_Z^{}=0.75TeV`$ and $`M_W^{}=M_Z^{}=0.55TeV`$. We show the constraints on the couplings of the $`W^{}`$ obtained by the same two fitting strategies as demonstrated in figure 7. Already for masses of $`0.75TeV`$, the two curves differ much more than was the case for the LR model. For masses of $`0.55TeV`$, the wrong fitting strategy gives always a $`\chi ^2`$ above 15, i.e. no allowed region in our case. This shows that such a light $`Z^{}`$ cannot be ignored in the fitting procedure.
Other reactions, such as $`e^+e^{-}\rightarrow f\overline{f}`$ or hadron collisions, are more sensitive to a $`Z^{\prime }`$ than $`e^+e^{-}\rightarrow \nu \overline{\nu }\gamma `$. In almost all cases, they will detect a $`Z^{\prime }`$ signal in the cases where the $`Z^{\prime }`$ contribution is relevant for a $`W^{\prime }`$ constraint by $`e^+e^{-}\rightarrow \nu \overline{\nu }\gamma `$. We conclude from figures 7 and 8 that this information from other experiments is needed for a reliable constraint on the $`W^{\prime }`$ coupling.
## 5 Conclusions
We found that the reaction $`e^+e^{-}\rightarrow \nu \overline{\nu }\gamma `$ can give interesting constraints on the couplings of extra gauge bosons to leptons. The total cross section and polarization asymmetries are the most sensitive observables for this measurement. The assumed universal systematic error of 1% seems to be very optimistic for the total cross section. Even this small systematic error almost dilutes the advantages gained from the high luminosity option of the collider. A polarized electron beam is crucial. A polarized positron beam is not essential but gives a quantitative improvement of the coupling constraints.
# Melting of Charge/Orbital Ordered States in Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>: Temperature and Magnetic Field Dependent Optical Studies
## I INTRODUCTION
Doped manganites, with chemical formula R<sub>1-x</sub>A<sub>x</sub>MnO<sub>3</sub> \[R= La, Nd, Pr, and A= Ca, Sr, Ba\], have attracted lots of attention due to their exotic transport and magnetic properties, such as colossal magnetoresistance. The coexistence of ferromagnetism and metallicity, for the samples near $`x`$ $``$ 0.3, had been explained by the double exchange model. However, it was found that the double exchange interaction alone cannot explain the colossal magnetoresistance. Additional mechanisms were proposed. Among them, two scenarios attracted most of attention: the polaron due to the Jahn-Teller distortion of Mn<sup>3+</sup> ion and the orbital fluctuation.
On the other hand, some manganite samples with small bandwidths near $`x\sim `$ 1/2 show intriguing charge ordering phenomena, i.e. real space orderings of the Mn<sup>3+</sup> and the Mn<sup>4+</sup> ions. For manganites, the charge ordering is usually accompanied by orbital and antiferromagnetic ordering. For example, charge ordering in Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> leads to the d$`_{3x^2-r^2}`$ (d$`_{3y^2-r^2}`$) orbital ordering and the CE-type antiferromagnetic spin ordering at a low temperature. Moreover, it was found that some charge ordered states could be changed into ferromagnetic metallic states at a higher temperature and/or under a high magnetic field. The transitions from charge/orbital ordered insulator to ferromagnetic metal are usually called "melting" of charge/orbital ordered states.
There have been numerous optical investigations which tried to understand basic mechanisms of colossal magnetoresistance. However, only a few works have been reported for optical responses of the charge/orbital ordered state. Recently, Okimoto et al. reported the magnetic field dependent optical conductivity for a charge/orbital ordered manganite, Pr<sub>0.6</sub>Ca<sub>0.4</sub>MnO<sub>3</sub>, and that the optical responses under the magnetic field could be understood qualitatively in terms of an insulator-metal transition. However, their measured spectral region was rather limited (i.e., from mid-infrared to visible), so details of the insulator-metal transition could not be addressed.
In this paper, we will report optical properties of Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>. To get clear understanding on the insulator-metal transitions due to the melting of the charge/orbital ordered states, optical spectra were taken by varying either temperature ($`T`$) or magnetic field ($`H`$). Our experimental data will be analyzed in terms of the polaron scenario. The changes of the optical response due to the melting of the charge/orbital ordered states will be explained in terms of the percolation model.
## II EXPERIMENTAL
Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> single crystal was grown by the floating zone method. Details of sample growth and characterization were reported elsewhere. The $`T`$-dependent resistivity was measured by the four-probe method and the magnetoresistance was obtained using the 20 T superconducting magnet. For optical measurements, the crystal was polished with diamond paste down to 0.3 $`\mu `$m. To remove surface damage due to the polishing process, we carefully annealed the sample again in an O<sub>2</sub> atmosphere at 1000 $`^{\circ }`$C just before optical measurements.
Near-normal-incidence reflectivity spectra were measured from 5 meV to 30 eV. A Fourier transform spectrophotometer was used for 5 meV – 0.8 eV, and a grating monochromator was used for 0.6 – 7.0 eV. Above 6 eV, we used the synchrotron radiation from the Normal Incidence Monochromator beam line at the Pohang Light Source. After the spectra were taken, the gold normalization technique was used to subtract surface scattering effects. In the frequency region of 5 meV – 4 eV, the $`T`$-dependent reflectivity spectra were taken using the liquid-He cooled cryostat. In the same frequency region, the $`H`$-dependent reflectivity spectra were taken with spectrophotometers at the National High Magnetic Field Laboratory.
The Kramers-Kronig analyses were used to obtain $`T`$\- and $`H`$-dependent optical conductivity spectra $`\sigma (\omega )`$. For these analyses, the room temperature reflectivity spectrum in the frequency region of 4 – 30 eV was smoothly connected. Then, the reflectivity at 30 eV was extended up to 40 eV, above which $`\omega ^{-4}`$ dependence was assumed. In the low frequency region, the reflectivity spectrum below 5 meV was extrapolated to be a constant for an insulating state or using the Hagen-Rubens relation for a metallic state. To check the phase errors due to the extrapolations in the Kramers-Kronig analyses, we also independently measured optical constants in the frequency region of 1.5 – 5 eV using spectroscopic ellipsometry. It was found that the data from the spectroscopic ellipsometry measurements agreed quite well with the Kramers-Kronig analysis results, demonstrating the validity of our extrapolations.
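A rough sketch of this procedure is given below (Python). It pads a measured reflectivity curve with the constant (or Hagen-Rubens-like) low-frequency and $`\omega ^{-4}`$ high-frequency extrapolations described above, and evaluates the Kramers-Kronig phase integral with a crude principal-value sum. The grid sizes, the sign convention and the helper names are illustrative assumptions, not the actual analysis code used here.

```python
import numpy as np

def extend_reflectivity(w, R, w_max=40.0, n_pad=200, metallic=False):
    """Pad measured R(w) (w in eV, numpy arrays) with the extrapolations above:
    constant (insulator) or Hagen-Rubens-like rise towards 1 (metal) below the data,
    and an omega^-4 tail from the highest point up to w_max."""
    w, R = np.asarray(w, float), np.asarray(R, float)
    w_lo = np.linspace(0.001, w[0], n_pad, endpoint=False)
    R_lo = (1.0 - (1.0 - R[0]) * np.sqrt(w_lo / w[0])) if metallic else R[0] * np.ones_like(w_lo)
    w_hi = np.linspace(w[-1], w_max, n_pad)[1:]
    R_hi = R[-1] * (w[-1] / w_hi) ** 4
    return np.concatenate([w_lo, w, w_hi]), np.concatenate([R_lo, R, R_hi])

def kk_phase(w, R):
    """theta(w) = -(w/pi) P-integral of ln R(w')/(w'^2 - w^2) dw' (one common convention);
    crude principal value obtained by dropping the singular grid point."""
    w, R = np.asarray(w, float), np.asarray(R, float)
    lnR, theta = np.log(R), np.zeros_like(w)
    for i, wi in enumerate(w):
        denom = w ** 2 - wi ** 2
        denom[i] = np.inf
        theta[i] = -(wi / np.pi) * np.trapz(lnR / denom, w)
    return theta   # complex reflectance: sqrt(R) * exp(1j * theta)
```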
## III DATA AND RESULTS
### A dc resistivity
Figure 1(a) shows the $`T`$-dependent dc resistivity curve of Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> which was taken with $`H`$ = 0 T. With decreasing $`T`$, the dc resistivity value slightly decreases near the ferromagnetic ordering temperature $`T_C`$ ($`\sim `$ 250 K), but it increases abruptly near the charge ordering temperature $`T_{CO}`$ ($`\sim `$ 150 K). The dc resistivity value at 4.2 K is estimated to be around 200 $`\mathrm{\Omega }`$ cm. This large value of the dc resistivity is known to originate from real space charge/orbital ordering. With increasing $`T`$, the dc resistivity value smoothly decreases initially and then experiences an abrupt decrease to $`\sim `$ 0.6 $`m\mathrm{\Omega }`$ cm near 170 K. The dc resistivity values for the heating run are larger than those for the cooling run, suggesting that the melting of the charge/orbital ordered states has the nature of a first order phase transition. Above 170 K, the dc resistivity values are nearly the same as those for the cooling run. Note that no apparent hysteresis can be observed near $`T_C`$.
Figure 1(b) shows the $`H`$-dependent dc resistivity curve of Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> which was taken at 4.2 K. With increasing $`H`$, the dc resistivity value slowly decreases initially, but it suddenly decreases to $`\sim `$ 0.2 $`m\mathrm{\Omega }`$ cm near 13 T. Above 13 T, the dc resistivity value doesn’t change at all within our experimental errors. With decreasing $`H`$, the dc value does not change down to 7.5 T and starts to increase abruptly near 7.5 T. The dc resistivity curve shows a very strong hysteresis below 13 T: the dc resistivity values for the field-decreasing run are much smaller than those for the field-increasing run. Note that the dc resistivity value ($`\sim `$ 0.2 $`m\mathrm{\Omega }`$ cm) for the ferromagnetic metal state at $`H=`$ 17 T is lower than that ($`\sim `$ 0.6 $`m\mathrm{\Omega }`$ cm) for the same state at 170 K.
### B Temperature dependent optical conductivity spectra
The $`T`$-dependent $`\sigma (\omega )`$ of Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> are shown in Fig. 2(a). At room temperature (i.e. $`T>T_C`$), there are two broad peaks near 1.0 and 4.0 eV. When entering into the ferromagnetic metallic state (i.e. $`T<T_C`$), the broad 1.0 eV peak shifts to a lower energy, which accompanies large spectral weight changes. In addition, there is a small decrease of the spectral weight near 3.0 eV. Interestingly, even at a highly metallic state near 170 K, optical conductivity decreases below 0.5 eV and shows the Drude-like behavior below 0.1 eV. When entering into the charge/orbital ordered state (i.e. $`T<T_{CO}`$), the spectral weights move in the opposite direction: namely, from a low to a high energy region. The optical conductivity spectrum at this charge/orbital ordered state shows an opening of an optical gap, whose value is estimated to be about 0.1 eV. \[This value is in reasonable agreement with the value obtained from recent photoemission experiments.\] In addition, the spectral weights near 3.0 eV are restored approximately to the values at $`T>T_C`$.
The far-infrared $`\sigma (\omega )`$ are displayed in Fig. 2(b). Above $`T_C`$, there are three optical phonon peaks, which are known as the external, the bending, and the stretching modes of the cubic perovskite. In the temperature region of $`T_{CO}<T<T_C`$, the phonon features are screened and $`\sigma (\omega )`$ increase significantly. \[The solid circle represents the dc conductivity value at 170 K.\] Note that the Drude-like absorption behavior is not so clear. Below $`T_{CO}`$, $`\sigma (\omega )`$ decrease quite drastically. At this low temperature, the bending and the stretching modes are split and the corresponding phonon frequencies move to higher energies. Such changes in the phonon spectra can be understood in terms of the strong lattice distortion due to the charge/orbital ordering.
### C Magnetic field dependent optical conductivity spectra
The $`H`$-dependent $`\sigma (\omega )`$, which were taken at 4.2 K, are shown in Fig. 3(a). \[Note that the spectra were measured with increasing $`H`$.\] At 0 T, the optical spectra are nearly the same as those at 15 K, displayed in Fig. 2(a). With increasing $`H`$, the spectral weights near 1.2 and 2.7 eV are transferred to lower energy regions. The gap values seem to decrease and finally disappear above 13 T. Note that the $`H`$-dependent spectral weight changes are similar to the $`T`$-dependent spectral weight changes near $`T_{CO}`$. However, the spectra in the ferromagnetic metallic state of 17 T clearly show a Drude-like absorption feature, which is somewhat different from $`\sigma (\omega )`$ of the ferromagnetic metallic state at 170 K, displayed in Fig. 2(a).
The far-infrared $`\sigma (\omega )`$ under various $`H`$ are displayed in Fig. 3(b). At 0 T, the low temperature phonons can be seen clearly. With increasing $`H`$, the phonon peaks become screened and the Drude peak seems to appear above 13 T. \[The solid circle represents the dc conductivity value at 17 T.\] Note that the Drude peak becomes clear and appears below 0.04 eV.
## IV DISCUSSIONS
### A A schematic diagram of optical transitions
Interpretations of $`\sigma (\omega )`$ of perovskite manganites have been quite different among experimental groups. However, a correct interpretation is essential to understand the physics of the colossal magnetoresistance and the charge/orbital ordering phenomena. Recently, we proposed a schematic diagram of $`\sigma (\omega )`$ based on the polaron scenario, which seems to explain most features of optical transitions in colossal magnetoresistance manganites observed by numerous groups. We want to extend the schematic diagram to include the charge/orbital ordered state.
Figure 4 shows our proposed schematic diagram of optical transitions in Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>: (a) $`T>T_C`$, (b) $`T_{CO}<T<T_C`$, and (c) $`T<T_{CO}`$. Above $`T_C`$, there are four main contributions for $`\sigma (\omega )`$ below 5 eV: (i) $`\sigma _I(\omega )`$ due to a small polaron absorption below 1.0 eV, (ii) $`\sigma _{II}(\omega )`$ due to an inter-orbital transition between the Jahn-Teller split levels of the Mn<sup>3+</sup> ions near 1.5 eV, (iii) $`\sigma _{HS}(\omega )`$ due to an optical transition between the Hund’s rule split bands near 3.0 eV, and (iv) $`\sigma _{CT}(\omega )`$ due to a charge transfer transition from the O 2p band to the Mn 3d band near 4.5 eV. The optical transition between the Hund’s rule split bands represents the $`e_g^{\uparrow }(t_{2g}^{\uparrow })\rightarrow e_g^{\uparrow }(t_{2g}^{\downarrow })`$ and $`e_g^{\downarrow }(t_{2g}^{\downarrow })\rightarrow e_g^{\downarrow }(t_{2g}^{\uparrow })`$ transitions. \[This notation indicates the transition occurs between two e<sub>g</sub> bands with the same spin but different t<sub>2g</sub> spin background.\] There have been numerous optical reports which support our assignments of the small polaron peak, the optical transition between Hund’s rule split bands, and charge transfer peaks. On the other hand, the existence of the peak near 1.5 eV was observed by many workers, but there remains some controversy about its origin. We think that the most probable candidate is the inter-orbital transition at the same Mn<sup>3+</sup> site. Although this transition is prohibited by the selection rule for an Mn atom, this transition could become possible due to the local lattice distortion of MnO<sub>6</sub> octahedra and the strong hybridization between Mn 3d and O 2p orbitals.
For the ferromagnetic metallic region of $`T_{CO}<T<T_C`$, the small polaron peak will change into coherent and incoherent absorptions of a large polaron. The coherent absorption will appear as Drude-like optical conductivity spectra $`\sigma _D(\omega )`$ and the incoherent one as an asymmetric mid-infrared peak. The increase of electron screening and the decrease of lattice distortion in the metallic state will decrease the 1.5 eV peak somewhat. The optical transition between the Hund’s rule split bands will decrease, since all of the t<sub>2g</sub> spins will be aligned in the ferromagnetic state. And, the charge transfer peak remains nearly $`T`$-independent.
Below $`T_{CO}`$, the coherent absorption of the free carrier will disappear due to the charge/orbital ordering. And, its spectrum will be similar to that for $`T>T_C`$. However, there seem to be three minor but important differences. First, the optical gap due to charge/orbital ordering should appear. Second, the absorption peak due to the polaron hopping should decrease since such a hopping requires more energy in the antiferromagnetic ordered state. Third, the 1.5 eV peak should become stronger, since lattice distortion becomes larger in the charge/orbital ordered state.
### B Temperature dependent spectral weight changes
For the quantitative analysis of $`T`$-dependent electronic structure, we analyzed $`\sigma (\omega )`$ in terms of five peaks, discussed in Section IV. A:
$$\sigma (\omega )=\sigma _D(\omega )+\sigma _I(\omega )+\sigma _{II}(\omega )+\sigma _{HS}(\omega )+\sigma _{CT}(\omega )\text{ .}$$
(1)
For $`\sigma _{HS}(\omega )`$ and $`\sigma _{CT}(\omega )`$, the Lorentzian functions were used. The simple Drude formula was used for $`\sigma _D(\omega )`$. After subtracting the Drude and high frequency peaks in $`\sigma (\omega )`$, we obtained the $`T`$-dependent midgap component $`\sigma _{ms}(\omega )`$\[$`=\sigma _I(\omega )+\sigma _{II}(\omega )`$\]. Note that the polaron absorption and the inter-orbital transition between Mn<sup>3+</sup> sites are assigned as Peak I and Peak II, respectively.
Peak I and II were fitted with two Gaussian functions, as shown in Fig. 5. The solid circles represent the experimental $`\sigma (\omega )`$ after subtracting $`\sigma _{HS}(\omega )`$, $`\sigma _{CT}(\omega )`$, and $`\sigma _D(\omega )`$. The fitting results with the Gaussian functions could explain the experimental data quite well. Using the integration of each Gaussian function, we derived the $`T`$-dependent optical strengths, $`S_I`$ and $`S_{II}`$, for Peak I and Peak II, respectively. We also obtained the strength of the Drude weight $`S_D`$ by integrating the corresponding Drude peak.
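A minimal version of this fitting step is sketched below (Python, using scipy). The initial-guess values in `p0` are arbitrary starting points, not the fitted parameters of this work; the integrated strength of each Gaussian is simply its area (amplitude times width times $`\sqrt{2\pi }`$).

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(w, A1, w1, s1, A2, w2, s2):
    return (A1 * np.exp(-(w - w1) ** 2 / (2 * s1 ** 2)) +
            A2 * np.exp(-(w - w2) ** 2 / (2 * s2 ** 2)))

def peak_strengths(omega, sigma_ms, p0=(1.0e3, 0.8, 0.3, 1.0e3, 1.5, 0.4)):
    """Fit Peak I + Peak II to the midgap conductivity and return the integrated
    strengths S_I and S_II (areas of the two Gaussians)."""
    (A1, w1, s1, A2, w2, s2), _ = curve_fit(two_gaussians, omega, sigma_ms, p0=p0)
    S_I = A1 * abs(s1) * np.sqrt(2 * np.pi)
    S_II = A2 * abs(s2) * np.sqrt(2 * np.pi)
    return S_I, S_II
```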
The $`T`$-dependences of $`S_I`$, $`S_{II}`$, and $`S_D`$ are displayed in Figs. 6(a), (b), and (c), respectively. The total spectral weights due to polaron absorption, $`S_{tot}`$($`=S_I+S_D`$), are also plotted in Fig. 6(d). \[All units are $`\mathrm{\Omega }^1`$cm<sup>-1</sup>eV.\] With decreasing $`T`$, $`S_I`$ starts to increase below $`T_C`$ and abruptly decreases below $`T_{CO}`$. The $`T`$-dependence of $`S_{II}`$ is nearly opposite to that of $`S_I`$. The $`T`$-dependence of $`S_D`$ is similar to $`S_I`$, but becomes zero for $`T<T_{CO}`$. And, $`S_{tot}`$ starts to increase below $`T_C`$ and abruptly decreases below $`T_{CO}`$. When the sample becomes ferromagnetic below $`T_C`$, it becomes metallic. Then, the polaron hopping contribution $`S_I`$ and the free carrier contribution $`S_D`$ should increase due to the alignment of t<sub>2g</sub> spins. Due to the increase of metallicity, the lattice distortion of the MnO<sub>6</sub> octahedron will become reduced, resulting in decrease of $`S_{II}`$. When the sample becomes antiferromagnetic below $`T_{CO}`$, the polaron hopping requires more energy and $`S_I`$ should decrease. The reduction of the metallicity makes $`S_{II}`$ increase and $`S_D`$ become zero very rapidly. These temperature dependences are explained in the schematic diagram in Fig. 4.
According to the polaron picture, the $`T`$-dependence of $`S_{tot}`$ can be explained more quantitatively. In (La,Pr)<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub>, whose ground state is a 3-dimensional ferromagnetic metal, it was found that $`S_{tot}`$ could be scaled with the $`T`$-dependent double exchange bandwidth $`\gamma _{DE}`$ $`(T)`$:
$$\gamma _{DE}=<\mathrm{cos}(\theta _{ij}/2)>,$$
(2)
where $`\theta _{ij}`$ is the relative angle of neighboring spins and $`<>`$ represents thermal average in the double exchange model. This scaling behavior was explained in a model by Röder et al. where the double exchange and the Jahn-Teller polaron Hamiltonian were taken into account. The dashed line shows $`\gamma _{DE}`$ $`(T)`$ for the 3-dimensional ferromagnet. Above $`T_{CO}`$, the agreement between $`S_{tot}(T)`$ and $`\gamma _{DE}`$ $`(T)`$ is quite good.
However, $`S_{tot}(T)`$ deviates from $`\gamma _{DE}`$ $`(T)`$ below $`T_{CO}`$. This deviation might be explained by a strong suppression of polaron absorption due to the CE-type antiferromagnetic ordering at the low temperature region. In the CE-type configuration, the e<sub>g</sub> conduction electrons are allowed to hop along the ferromagnetically aligned zigzag chains forming an effective 1-dimensional ferromagnet. In the 3-dimensional ferromagnet above $`T_{CO}`$, the polaron hopping is allowed to six neighboring Mn sites with parallel spins. But, in the 1-dimensional ferromagnetic chain, the polaron hopping is allowed only along the zigzag chain. Therefore, the transition from the 3-dimensional ferromagnet to the 1-dimensional zigzag chain will strongly suppress the polaron absorption near $`T_{CO}`$.
### C Magnetic field dependent spectral weight changes
With the fitting process used in Section IV. B, we obtained $`H`$-dependent changes of $`S_I`$, $`S_{II}`$, $`S_D`$, and $`S_{tot}`$, at 4.2 K. Figure 7 shows the results of such fittings, and all of the optical strengths show strong hysteresis behaviors. During the field-increasing run, $`S_I`$ increases near 13 T and becomes nearly saturated above 14 T. During the field-decreasing run, $`S_I`$ remains nearly the same down to 8 T and then abruptly decreases. Note that the value of $`S_I`$ at $`H=`$ 0 T after the completion of one cycle is larger than the initial value of $`S_I`$. The $`H`$-dependence of $`S_{II}`$ is nearly opposite to that of $`S_I`$, but the $`H`$-dependence of $`S_{tot}`$ is similar to that of $`S_I`$. Contrary to rather smooth changes of $`S_I`$, $`S_{II}`$, and $`S_{tot}`$, the change of $`S_D`$ is rather abrupt: the value of $`S_D`$ becomes nearly zero below 13 T for field-increasing run and below 7 T for field-decreasing run. Qualitatively, the $`H`$-dependences of $`S_I`$, $`S_{II}`$, and $`S_D`$, are quite similar to the $`T`$-dependences of the corresponding strengths below $`T_{CO}`$. These $`H`$-dependences can be explained using the schematic diagram in Fig. 4.
Values of $`S_{tot}`$ at 4.2 K with various values of $`H`$ are shown in Fig. 6(d). The open circle, the asterisk, the solid triangle, and the solid square represent values of $`S_{tot}`$ at 0, 12, 13, and 17 T, respectively. With increasing $`H`$, $`S_{tot}`$ increases. At 17 T, it finally reaches the value predicted by Eq. (2) for the 3-dimensional case. This result indirectly supports the fact that the charge/orbital melted state might be a 3-dimensional ferromagnetic metal. Note that the value of $`S_{tot}`$ ($`T=`$ 170 K, $`H=`$ 0 T) is about 20 % smaller than that of $`S_{tot}`$ ($`T=`$ 4.2 K, $`H=`$ 17 T). This experimental fact agrees with recent magnetostriction measurements and transmission electron microscopy work showing that there exists local charge/orbital ordering even above $`T_{CO}`$.
### D Behavior of dielectric constants
To get a better understanding of the insulator-metal transition in Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>, we looked into the real part of the dielectric constant, $`\epsilon _1`$. The $`T`$\- and the $`H`$-dependent $`\epsilon _1`$ spectra are shown in Fig. 8(a) and (b), respectively. In the insulating state at 15 K, $`\epsilon _1`$ is positive and $`d\epsilon _1/d\omega \simeq 0`$. With increasing $`T`$, $`\epsilon _1`$ increases slightly. Above 170 K, it decreases abruptly and $`d\epsilon _1/d\omega >0`$, which is consistent with the typical response of a metal. Note that the change in $`\epsilon _1`$ is rather abrupt near the insulator-metal boundary. Such interesting behaviors of $`\epsilon _1`$ can be observed more clearly in the $`H`$-dependence. Up to the insulator-metal phase boundary ($`\sim `$ 13 T), $`\epsilon _1`$ increases and then abruptly decreases. The changes of $`\epsilon _1`$ and $`\sigma `$ at 100 cm<sup>-1</sup> under various $`H`$ are shown in Fig. 9(a) and (b), respectively. These figures also show strong hysteresis behaviors. The solid circles and the solid squares represent data during the field-increasing and the field-decreasing runs, respectively. It is clear that the abrupt change in $`\epsilon _1`$ occurs near the insulator-metal transition. Note that $`\epsilon _1`$ becomes large as the transition region is approached both from the insulating and from the metallic sides.
The divergence of $`\epsilon _1`$ near the insulator-metal transition has appeared in numerous models. According to the Herzfeld criterion, valence electrons are considered to be localized around nuclei and contribute to atomic polarizability. Near the insulator-metal transition, the polarizability diverges, so $`\epsilon _1`$ should also diverge. Above the transition, the restoring force of the valence electron vanishes, resulting in free carriers. Another is the Anderson localization model. The polarizability of a medium is proportional to square of localization length. Since the localization length diverges near the insulator-metal transition, $`\epsilon _1`$ should diverge. \[However, it is clear that the insulator-metal transition in Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> is not induced by disorder.\]
Note that both of the above microscopic models deal with $`\epsilon _1`$ mainly in dc limit, so our infrared data cannot be well explained. And, it was found that VO<sub>2</sub> films experienced an insulator-metal transition of the first order nature around 70 C, and that their mid-infrared properties could be explained by a composite medium model which takes into account the evolution of domain growth during the first order phase transition. In the composite medium model, the increase of $`\epsilon _1`$ near the insulator-metal transition can be interpreted as a dielectric anomaly related to percolation. Therefore, we decided to apply the effective medium approximation (EMA), which is a composite medium model predicting a percolation transition.
### E Percolative phase transition
In EMA, it is assumed that individual grains, either metallic or insulating, are considered to be embedded in a uniform background, i.e., an ”effective medium” which has average properties of the mixture. A self-consistent condition such that the total depolarization field inside the inhomogeneous medium is equal to zero leads to a quadratic equation for an effective dielectric constant $`\stackrel{~}{\epsilon }_{eff}`$,
$$f_i\frac{\stackrel{~}{\epsilon }_i-\stackrel{~}{\epsilon }_{eff}}{\stackrel{~}{\epsilon }_i+2\stackrel{~}{\epsilon }_{eff}}+f_m\frac{\stackrel{~}{\epsilon }_m-\stackrel{~}{\epsilon }_{eff}}{\stackrel{~}{\epsilon }_m+2\stackrel{~}{\epsilon }_{eff}}=0\text{ ,}$$
(3)
where $`\stackrel{~}{\epsilon }_i`$ and $`\stackrel{~}{\epsilon }_m`$ represent the complex dielectric constants of the insulating and the metallic Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> phases, respectively. And $`f_i`$ and $`f_m`$ ($`=1-f_i`$) represent volume fractions of the insulating and the metallic domains, respectively. In EMA, the percolation transition occurs at $`f_m=1/3`$.
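Equation (3) is a quadratic in $`\stackrel{~}{\epsilon }_{eff}`$ and can be solved directly; a short Python sketch is given below. The example phase dielectric constants in the last lines are made-up numbers for illustration only (in the analysis described next they are replaced by the measured spectra at 0 T and 17 T), and the physical root is taken to be the one with a non-negative imaginary part.

```python
import numpy as np

def ema_epsilon(eps_i, eps_m, f_m):
    """Effective complex dielectric constant from the Bruggeman EMA, Eq. (3),
    for a metallic volume fraction f_m. The quadratic 2*e^2 - b*e - eps_i*eps_m = 0
    follows from clearing denominators; keep the root with Im(e) >= 0."""
    eps_i, eps_m = complex(eps_i), complex(eps_m)
    b = (2.0 - 3.0 * f_m) * eps_i + (3.0 * f_m - 1.0) * eps_m
    disc = np.sqrt(b * b + 8.0 * eps_i * eps_m)
    roots = [(b + disc) / 4.0, (b - disc) / 4.0]
    return max(roots, key=lambda e: e.imag)

# Illustrative (hypothetical) phase dielectric constants:
for f_m in (0.2, 0.33, 0.5, 0.8):
    print(f_m, ema_epsilon(10 + 0.1j, -200 + 300j, f_m))
```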
To apply EMA, we assumed that $`\stackrel{~}{\epsilon }_i`$ and $`\stackrel{~}{\epsilon }_m`$ could be represented by the experimental complex dielectric constants at 0 T and 17 T, respectively. And, we evaluated $`\stackrel{~}{\epsilon }_{eff}`$ for various values of $`f_m`$. The predictions of EMA are shown in Fig. 8(c). If $`f_m<1/3`$ (i.e. in the insulating side), $`\epsilon _{eff}`$ increases when the insulator-metal transition is approached. If $`f_m`$ becomes larger than 1/3, the low frequency value of $`\epsilon _{eff}`$ suddenly becomes negative. By comparing with Fig. 8(a) and (b), it is clear that the EMA results can explain the $`T`$-dependent and the $`H`$-dependent $`\epsilon _1`$ quite well. It should be noted that the percolation model can also explain the increase of $`\epsilon _1`$ near the insulator-metal transition. Near the percolation, effective capacitive coupling between the metallic clusters increase due to an increase of effective area and the decrease of spacing between the metallic clusters. This increase of coupling results in the increase of $`\epsilon _1`$ near the percolation transition. Therefore, it can be argued that the insulator-metal transition in Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> occurs through a percolative phase transition.
Recently, there have been lots of studies on the phase separations in doped manganites. Our picture of the percolative phase transition agrees with such phase separation. The EMA calculation in Fig. 8(c) shows that the metallic domain can exist in the insulating states, i.e., below 150 K without $`H`$, or below 13 T at 4.2 K. And it also shows that the insulating domain can exist in the metallic states of Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>. The origin of the phase separations in doped manganites remains controversial. Some workers argue that the phase separation comes from the electronic origin, and some workers argue that it comes from sample inhomogeneity. Further studies are required to solve this issue clearly.
## V SUMMARY
We reported the temperature and the magnetic field dependent optical conductivity spectra of a charge/orbital ordered manganite, Nd<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>. With variation of the temperature and the magnetic field, large spectral weight changes were observed up to 4.0 eV. These spectral weight changes were discussed within the polaron picture and in terms of local charge/orbital ordering. Moreover, using the analyses of dielectric constants, we showed that the melting of the charge/orbital ordered states occurs through percolation of ferromagnetic metal domains and that the optical conductivity should be explained by a two-phase coexistence picture of the charge/orbital ordered insulator and ferromagnetic metal domains.
###### Acknowledgements.
We acknowledge Professor J.-G. Park and Dr. K. H. Kim for discussion. We also thank Dr. H. C. Kim and Dr. H.-C. Ri for help in magnetoresistance measurements. This work was supported by the Korea Science and Engineering Foundation through the CSCMR at Seoul National University and by the Ministry of Education through the Basic Science Research Institute Program No. BSRI-98-2416. The work by Y. M. was supported by a Grant-in-Aid for Science Research from the Ministry of Education, Science, Sports and Culture, and from PRESTO, JST. Part of this work was performed at the National High Magnetic Field Laboratory, which is supported by NSF Cooperative Agreement No. DMR-9016241 and by the State of Florida.
# COMMENT ON A TONOMURA EXPERIMENT: LOCALITY OF THE VECTOR POTENTIAL
## 1 An unconventional testable claim
De Broglie has tersely stated that his universal formula
$$P^i\equiv \mu U^i-eA^i=\hbar k^i$$
(1)
relating the canonical $`4`$-momentum $`P^i`$ of a point charge of charge $`e`$, rest mass $`\mu `$, and $`4`$-velocity $`U^i`$ to the $`4`$-frequency $`k^i`$ of the associated wave, selects uniquely the electromagnetic gauge. The point is: in the absence of external electromagnetic sources, adding to the $`4`$-potential $`A^i`$ an arbitrary $`4`$-gradient would entail indefiniteness of the $`4`$-frequency -a cardiac arrhythmia of the electron, so to speak. This is not observed -and is denied by crystal diffraction unequivocally displaying (in standard notation) the formula
$$𝐩\equiv \mathrm{𝐦𝐯}=\hbar 𝐤$$
(2)
What then of gauge invariance of the Dirac equation? Adding to the canonical $`4`$-momentum operator $`i\hbar \partial _i-eA_i`$ an arbitrary $`4`$-gradient can be compensated by subtracting this same $`4`$-gradient from the wave function’s phase. All right -this is like cashing a cheque. An invariance law of a differential equation need not subsist in its solutions, which imply integration conditions. What de Broglie means is that in the expression of a free electron’s $`4`$-momentum the $`4`$-potential is identically zero: $`A^i\equiv 0`$ in the absence of electromagnetic sources. This is unquestionable.
###### Corollary 1
Any sort of electron interference experiment performed in presence of a toroidal magnet displays the curlless vector potential $`𝐀(r)`$ as expressed in the source adhering gauge -this being tantamount to a measurement of the vector potential.
So we claim (a big step forward !) that : A locally observable effect underlies the A.B. effect.
## 2 Proof via a Tonomura experiment
Tonomura has combined an ’electron biprism interference’ with an Aharonov-Bohm one. A very perfect toroidal magnet of trapped flux $`\mathrm{\Phi }`$ quantized in $`h/2e`$ units placed downstream of a ’biprism’ has its axis z orthogonal to the planes displaying ’normally’ the Fresnel fringes. A registering film, placed ’normally’ after the magnet, displays outside and inside its circular shadow straight Fresnel fringes which are either identical to each other or black-to-white exchanged, depending on the flux $`\mathrm{\Phi }`$ being an even or an odd multiple of $`h/2e`$. The fact is that the magnet’s shadow, now termed the black ring, is ’geometric’ in style, showing no circular fringes, and thus no explicit A.B. effect. This precludes any observable interference between the external and internal fringes -which can be tested.
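The even/odd alternation quoted above follows from the relative Aharonov-Bohm phase $`e\mathrm{\Phi }/\hbar `$ between paths passing on either side of the trapped flux; the short numerical check below (Python) shows that a flux of $`n`$ times $`h/2e`$ gives $`\mathrm{cos}(n\pi )=\pm 1`$, i.e. identical or black-to-white exchanged fringes. It is only a sketch of this standard fact, not part of the experiment itself.

```python
import numpy as np

h = 6.62607015e-34     # Planck constant, J s
e = 1.602176634e-19    # elementary charge, C

def ab_phase(flux_Wb):
    """Relative Aharonov-Bohm phase e*Phi/hbar between paths enclosing the flux."""
    return 2.0 * np.pi * flux_Wb * e / h

for n in range(4):                       # trapped flux Phi = n * h/(2e)
    delta = ab_phase(n * h / (2.0 * e))
    print(n, round(np.cos(delta)))       # +1: identical fringes, -1: black/white exchanged
```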
Obturating the inside of the black ring should not affect in any way the external fringes which, displayed as parallel straight lines, are identical to those existing in absence of the magnet -because the magnet’s influence is asymptotically zero. Slipping transversally the magnet out of the picture will just uncover the genuine Fresnel fringes. Similarly, for the reasons stated, obturating the outside of the black ring should not affect in any way the internal fringes -a very crucial test of locality !
How the outside and inside fringes are linked together is hidden by the black ring. Between them exists a phase shift amounting to a multiple of $`h/e`$, due to the addition of $`e𝐀`$ to the two kinetic momenta $`mv`$, combined with the obliquity of the two interfering $`𝐤`$ vectors with respect to the axis $`z`$. The hidden curved fringes can be recovered by placing the registering film just before the magnet, thus wiping off the black ring. The fringe pattern displayed will not be the genuine Fresnel one, but one where curved fringes now connect the external and internal straight Tonomura ones -a very strong proof of local physicality of $`𝐀`$ ahead of electron impact!
## 3 Direct measurement of the vector potential
If the vector potential $`𝐀`$ is a locally measurable magnitude a precise measurement of a curlless vector potential is possible. Rather than a spatially extended ’electron biprism’ one should then use as interference generator a small diffracting crystal.
Performed inside the curlless vector potential $`𝐀(𝐫)`$ generated by a toroidal magnet, crystal diffraction will evidence, instead of formula (2), the formula
$$\hbar 𝐤=\mathrm{𝐦𝐯}-\mathrm{𝐞𝐀}$$
(3)
yielding a measurement of $`𝐀`$ expressed in the source adhering gauge.
The maximal and neater effect will obtain if the magnet’s center coincides with that of the crystal and its axis with that of the gun. Then turning the magnet around its center will modify the intensity along the circular rings.
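To gauge the size of the effect implied by formula (3), the estimate below (Python) uses entirely hypothetical numbers: 100 keV electrons, one trapped flux quantum $`h/2e`$, and a 1 $`\mu `$m distance from the magnet axis, with the curl-free potential taken as $`A=\mathrm{\Phi }/(2\pi r)`$ for a path threading the torus. It is an order-of-magnitude sketch, not a prediction for the actual experiment.

```python
import numpy as np

hbar = 1.054571817e-34; e = 1.602176634e-19; m_e = 9.1093837015e-31

E_kin = 100e3 * e                          # 100 keV electrons (non-relativistic estimate)
p = np.sqrt(2.0 * m_e * E_kin)             # kinetic momentum m*v
Phi = 6.62607015e-34 / (2.0 * e)           # one trapped flux quantum, h/2e
r = 1.0e-6                                 # assumed distance from the magnet axis, 1 micron
A = Phi / (2.0 * np.pi * r)                # curl-free vector potential, Phi/(2*pi*r)
print("relative wavevector shift e*A/p =", e * A / p)   # of order 1e-7 for these numbers
```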
## 4 Resurrection of the potentials ’assassinated’ by Heaviside
Electromagnetic gauge invariance states: Forces, linear or angular, depend on the fields, not the potentials. All right, this is very true.
But the integrals of forces -over space, energies, or over time, momenta (linear or angular; the 6-component angular momentum including the boost) do depend on the potentials. As interaction energies and linear or angular momenta of bound systems are measurable magnitudes, the attached potentials also are -with expressions selected as integration conditions .
This is well known but underestimated in the case of Einstein’s energy-mass equivalence : the electrostatic mass defect of a bound system is part of its total mass.
By relativistic covariance it follows that action-reaction (linear or angular) also selects the source adhering gauge as an integration condition; an example is afforded by the Wheeler-Feynman electrodynamics.
Electromagnetically induced inertia thus emerges as a general concept to be discussed elsewhere.
De Broglie has stated that both the Einstein $`W=c^2m`$ energy-mass and the Planck $`W=h\nu `$ energy-frequency equivalences select the electromagnetic gauge ; covariant expressions of these statements have been produced . |
# An insider’s guide to quantum causal histories
## 1 Introduction
A quantum theory of gravity is expected to also be a satisfactory theory of quantum cosmology. In turn, a quantum theory of cosmology would only be acceptable if it admits a description fully from within the universe itself. We may translate this into the requirement that, in a satisfactory quantum theory of gravity, the physical observables must refer to observations made inside the universe.
This has immediate consequences. Consider, for example, the familiar construction of the 3-geometry wavefunction that is used in the Wheeler-DeWitt equation. This describes the quantum state of a spatial slice. However, in a causal spacetime, only very special observers, such as an observer at the final singularity, can have access to an entire slice. Since there are no observers outside the universe, this wavefunction is not an observable quantity.
In , I argued that what is required to construct a cosmological theory is internal observables, corresponding to observations made inside the universe. Internal observations contain only partial information about the universe, that which is in the causal past of an observer at the corresponding spacetime region. It then becomes desirable to set up a framework for quantum gravity in which all physical observables are internal. To see how to do this we begin by understanding the effect the requirement that all observables are internal has on the structure of observables in classical general relativity.
## 2 Classical Internal observables and their algebra
Let us start by considering only the causal structure of the universe. To proceed it is convenient to approximate the causal structure of spacetime by picking out a discrete set of events. This gives us an approximate description of the causal structure of the spacetime in terms of a causal set . This is a set of events $`p,q,r,\mathrm{\dots }`$, ordered by a causal preceding relation $`p\preceq q`$, which is transitive ($`p\preceq q`$ and $`q\preceq r`$ imply that $`p\preceq r`$), is locally finite (given $`p\preceq q`$, the intersection of the past of $`q`$ and the future of $`p`$ contains a finite number of events), and has no closed timelike loops (if $`p\preceq q`$ and $`q\preceq p`$, then $`p=q`$).
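For concreteness, a minimal computational sketch of such a causal set is given below (Python); the class name, the four-event "diamond" example and the brute-force transitive closure are illustrative choices only.

```python
from itertools import product

class CausalSet:
    """Finite causal set: events plus a relation 'p precedes q', closed under transitivity."""
    def __init__(self, events, relations):
        self.events = set(events)
        self.rel = set(relations)               # pairs (p, q) with p strictly preceding q
        changed = True
        while changed:                          # brute-force transitive closure
            changed = False
            for (p, q), (r, s) in product(list(self.rel), repeat=2):
                if q == r and (p, s) not in self.rel:
                    self.rel.add((p, s))
                    changed = True

    def prec(self, p, q):
        """Reflexive causal order: p precedes or equals q."""
        return p == q or (p, q) in self.rel

    def is_causal(self):
        """No closed timelike loops: p and q precede each other only when p = q."""
        return all(not ((p, q) in self.rel and (q, p) in self.rel)
                   for p, q in product(self.events, repeat=2) if p != q)

# One initial event, two acausal events, one final event:
C = CausalSet("pqrs", [("p", "q"), ("p", "r"), ("q", "s"), ("r", "s")])
print(C.is_causal(), C.prec("p", "s"))   # True True
```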
In such a causal set universe, the internal observables are functors from the causal set to the category of sets<sup>1</sup><sup>1</sup>1 A category is a collection of objects, and arrows between the objects. There should be a unit, and the composition of arrows should be associative. Thus, a causal set is a category (partial order) whose objects are the events, and arrows are the causal relations. $`\mathrm{𝐒𝐞𝐭}`$ is the category whose objects are sets and whose arrows are maps between sets. A functor can be thought of as a “function” from one category to another that turns the objects of the first into objects of the second, while preserving the properties of the arrows of the first into the second.. Details can be found in ; here we will simply discuss examples. A prime example of an internal observable then is the one describing causal past. This is the functor
$$\text{Past}:𝒞\rightarrow \mathrm{𝐒𝐞𝐭},$$
(1)
that outputs events that have occurred. It has components at each event $`p`$ which are the causal past of $`p`$: $`\text{Past}(p)=\{r\in 𝒞:r\preceq p\}`$. Further, the functor contains not only all these sets, but the maps between them: $`\text{Past}(p)\hookrightarrow \text{Past}(q)`$, whenever $`p\preceq q`$.
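Continuing the sketch above, the components of Past and the inclusion property can be computed directly; this is only an illustration of the definition, with hypothetical helper names.

```python
def past(C, p):
    """Component of the observable Past at event p: all events r with r preceding p."""
    return {r for r in C.events if C.prec(r, p)}

def past_is_functorial(C):
    """Whenever p precedes q, the component Past(p) must sit inside Past(q)."""
    return all(past(C, p) <= past(C, q)
               for p in C.events for q in C.events if C.prec(p, q))

print(past(C, "s"))             # the whole diamond: {'p', 'q', 'r', 's'}
print(past_is_functorial(C))    # True
```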
The internal observable Past can be thought of as a varying set, having components at each event which are sets, all tied together by inclusion functions. Thus, the causal structure of the universe is built into the observable. These are fundamentally different than standard set-like observables. This can be seen by considering the algebra of internal observables.
The algebra of these internal observables is not the familiar Boolean algebra obeyed by standard (fixed time), set-like observables; it is called a Heyting algebra. As a result a theory with internal observables is fundamentally different from a theory describing a system external to the observers. It has a different logical structure. Just as the Boolean algebra obeyed by set-like observables means that physical propositions obey boolean logic, physical propositions in a theory with internal observables obey intuitionistic logic. Its characteristic feature is that, for some statement $`a`$, $`\neg \neg a\ne a`$. Thus, the requirement that physical observables must refer to measurements made by observers in the universe has as a consequence the fact that the very logic that observers use to describe what they see must be modified. It should take into account the fact that any single observer is only able to know a subset of the true facts about their universe.
An immediate application of the Heyting algebra is in coding the causal topology. It is easy to check that in a universe with an initial event, which causally precedes all the others, $`\neg \text{Future}(p)=\mathrm{\varnothing }`$ for all $`p\in 𝒞`$. In a universe with a final event, in the future of all others, $`\neg \text{Past}(p)=\mathrm{\varnothing }`$ for all $`p\in 𝒞`$. And for a universe with both, both of these internal observables have all their components empty. As a result, as described in , the existence of horizons and topology change can be deduced from the Heyting complements.
## 3 The framework of Quantum Causal Histories
A quantum cosmological theory should involve only internal observables; thus it should have an algebra of observables whose classical limit is a Heyting algebra of the kind we just discussed. But we know that the $`\hbar \rightarrow 0`$ limit of the projection operators on an ordinary Hilbert space is a Boolean algebra. Therefore we need to look for a formulation of quantum cosmology that is not based on the usual single Hilbert space formalism. One possible way to proceed is to “quantize” the causal structure by attaching Hilbert spaces to the events of a causal set. These can be thought of as elementary Planck-scale systems that interact and evolve by rules that give rise to a discrete causal history. An example of such a theory is the causal evolution of spin networks , as we will see in the next section.
### 3.1 Hilbert spaces on the events
Consider a causal set $`𝒞`$. This is a “spacetime” graph, with nodes which represent events, and directed edges coding the causal ordering of the nodes. Let us interpret the events as elementary quantum mechanical systems, which are to be encountered at Planck scale. Thus, we may attach a Hilbert space to each node of the causal set graph, representing the elementary system that we encounter at that node. Since we are building a theory with a fundamental discreteness in the causal relations between these systems, thus assuming there is a lowest scale, it is reasonable to expect that these Hilbert spaces are finite-dimensional. We have, therefore, built a causal network of finite-dimensional Hilbert spaces.
By the standard rules of quantum mechanics, the combined state space of a set of events that are acausal to each other is the tensor product of the individual Hilbert spaces.
When may we expect that there is unitary evolution in this quantum causal history? It is not difficult to check that this is only possible between acausal sets of events $`a`$ and $`b`$ that form a complete pair $`a\preceq b`$, that is, every event in $`b`$ is to the future of some event in $`a`$ and every event in $`a`$ is to the past of some event in $`b`$ (for example, $`H_a=H_1\otimes H_2`$ and $`H_b=H_3\otimes H_4\otimes H_5`$).
Any information that reaches $`b`$ has come through $`a`$, and there is no event in the future of $`a`$ which is not related to $`b`$. Information is therefore conserved from $`a`$ to $`b`$ when a unitary evolution map relates the two Hilbert spaces.
It should be emphasized that this is local unitary evolution, in the sense that the complete pair $`a`$ and $`b`$, in general, are localised spacelike regions in the universe. In the special case when the causal set admits a global foliation into a set of antichains (maximal sets of events in the causal set that are all acausal to each other), there is a linear sequence of unitary evolution operators (this may be compared to quantum field theory on a globally hyperbolic spacetime).
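The complete-pair condition itself is easy to check algorithmically; the sketch below (Python, continuing the causal-set example above) tests whether two acausal sets of events satisfy it.

```python
def is_acausal(C, s):
    """All events in s are pairwise causally unrelated."""
    return all(not C.prec(p, q) for p in s for q in s if p != q)

def is_complete_pair(C, a, b):
    """Every event of b lies to the future of some event of a, and every event of a
    lies to the past of some event of b (both sets being acausal)."""
    if not (is_acausal(C, a) and is_acausal(C, b)):
        return False
    return (all(any(C.prec(p, q) for p in a) for q in b) and
            all(any(C.prec(p, q) for q in b) for p in a))

print(is_complete_pair(C, {"q", "r"}, {"s"}))   # True for the diamond causal set above
```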
### 3.2 Hilbert spaces on the edges
Closer inspection of the above model reveals that the unitary evolution operators do not, in general, respect local causality. As an example, consider events 1 and 2 evolving to events 3 and 4, where 1 and 2 (and likewise 3 and 4) are acausal to each other and there is no causal relation from event 2 to event 3, with evolution operator $`E:H_1\otimes H_2\rightarrow H_3\otimes H_4`$. Next, select some state $`|\psi _2\rangle \in H_2`$. This can be extended to $`|\psi _1\rangle |\psi _2\rangle `$ in $`H_1\otimes H_2`$ and then evolved by $`E`$ to some $`|\psi ^{}\rangle \in H_3\otimes H_4`$. Finally, $`H_4`$ can be traced out to leave a density matrix $`\rho _3=\text{Tr}_{H_4}|\psi ^{}\rangle \langle \psi ^{}|`$ in $`H_3`$. That is, system 3 “knows about” $`|\psi _2\rangle `$, even though there is no causal link from 2 to 3. This is a violation of the causal relations of the underlying causal set.
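This violation can be seen numerically in a toy example: the sketch below (Python) takes qubit-sized Hilbert spaces and a randomly chosen unitary $`E`$, and shows that the reduced density matrix of system 3 changes when only $`|\psi _2\rangle `$ is changed. The dimension, the fixed extension $`|\psi _1\rangle `$ and the random unitary are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2                                        # qubit-sized event Hilbert spaces, for illustration

def random_unitary(n):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # Haar-like random unitary

E = random_unitary(d * d)                    # a generic E : H1 (x) H2 -> H3 (x) H4
psi1 = np.array([1.0, 0.0])                  # fixed extension on H1

def rho3(psi2):
    psi = E @ np.kron(psi1, psi2)            # evolve |psi1>|psi2>
    psi = psi.reshape(d, d)                  # indices (system 3, system 4)
    return psi @ psi.conj().T                # partial trace over H4

print(np.round(rho3(np.array([1.0, 0.0])), 3))
print(np.round(rho3(np.array([0.0, 1.0])), 3))   # different: system 3 depends on |psi_2>
```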
There is a straightforward solution to this problem. Instead of attaching the Hilbert spaces to the events of the causal set, let us attach them to the causal relations. We again take tensor products of Hilbert spaces on edges that are acausal to each other. Any unitary operator in a history where the Hilbert spaces are on the edges can be decomposed to a product of unitary operators that live on the nodes of the causal set, going from the composite Hilbert space on the incoming edges to that node to the composite Hilbert space on the outgoing ones, all of which respect the causal structure of $`𝒞`$. Therefore, in a QCH with the Hilbert spaces on the causal relations and the operators on the events, the quantum evolution strictly respects the underlying causal set.
We may also note that promoting the nodes of the causal set to evolution operators is consistent with the intuition that an event in the causal set denotes change, and so is most naturally represented by an operator. In addition, since only spacelike separated Hilbert spaces are tensored together, there is no single Hilbert space, or wavefunction, for the entire universe.
## 4 Examples of quantum causal histories
We will now give specific examples of QCH models. To do so, we need to identify the Hilbert spaces and the complete pairs that are related by the unitary evolution operators. The first of our two examples is causal spin network evolution, a model of quantum spatial geometry evolving causally. The second example considers identical individual Hilbert spaces which are two-dimensional, and the resulting QCH is a quantum computer.
### 4.1 Causal evolution of spin networks
Spin networks were originally defined by Penrose as trivalent graphs with their edges labelled by representations of $`SU(2)`$ . From such abstract labelled graphs, Penrose was able to recover directions (angles) in 3-dimensional Euclidean space. Later, in loop quantum gravity, spin networks were shown to provide the basis states for spatial quantum geometry . The quantum area and volume operators, expanded in the spin network basis, were shown to have a discrete spectrum, with their eigenvalues depending on the labels of the spin network present in the region of space whose area, or volume, is being measured.
In , spin network graphs were used as a model of quantum spatial geometry evolving causally. This means that the nodes of the spin network graph are the events in a causal set.
A causal spin network history is a quantum causal history. To see this, we need to observe that it has the following features. The Hilbert spaces are the spaces of intertwiners. An intertwiner labels a node of a spin network. It is a map from the tensor product of the representations labelling the edges incoming to that node to the tensor product of those labelling the outgoing edges. The possible intertwiners for a node in the spin network form a vector space, the so-called “space of intertwiners”. (For a trivalent node of an $`SU(2)`$ spin network, the intertwiner is unique; for a four-valent node, the intertwiner space is finite-dimensional; for higher valence, continuous parameters enter.)
The intertwiner spaces of spacelike separated nodes in the causal history are to be tensored together, with the representations on any connecting edges summed over: $`V_{ijkmno}=\bigoplus _lV_{ijkl}\otimes V_{lmno}`$ when $`l`$ labels the shared edge.
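To make the four-valent case mentioned above quantitative: for $`SU(2)`$ the dimension of the intertwiner space can be counted by the standard recoupling argument, splitting the node into two trivalent ones and counting the admissible labels of the virtual internal edge. The sketch below is my own illustration of that count, with spins in the usual half-integer convention.

```python
# Counting dim Inv(V_j1 (x) V_j2 (x) V_j3 (x) V_j4) for SU(2): sum over the
# admissible virtual-edge labels l appearing in both j1 (x) j2 and j3 (x) j4.
from fractions import Fraction as F

def couplings(j1, j2):
    """Spins appearing in the decomposition of V_j1 (x) V_j2."""
    j, out = abs(j1 - j2), []
    while j <= j1 + j2:
        out.append(j)
        j += 1
    return out

def dim_intertwiner(j1, j2, j3, j4):
    return len(set(couplings(j1, j2)) & set(couplings(j3, j4)))

print(dim_intertwiner(F(1, 2), F(1, 2), F(1, 2), F(1, 2)))  # 2
print(dim_intertwiner(1, 1, 1, 1))                          # 3
```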
In , it was shown that, if the spin networks were restricted to be of valence $`n`$, a small set of generating evolution operators can be identified. These are the 1-skeletons of the $`n`$-dimensional Pachner moves for piecewise linear triangulations . For example, in 2+1, the set of elementary moves is shown below; in each pair the top and bottom configurations are exchanged.
[Diagram of the elementary moves omitted in this extraction.]
Given an initial 3-valent spin network, to be thought of as modeling a quantum “spatial slice”, the causal history is built by repeated application of the above moves. They change the spin network locally, thus producing a discrete analogue of multifingered time evolution. Thus, the amplitude to go from a given initial spin network $`\mathrm{\Gamma }_1`$ to a final one $`\mathrm{\Gamma }_2`$ can be expressed as the product of the amplitudes for the Pachner moves that occur in a spacetime history extrapolating between the two spin networks, summed over the possible extrapolating histories:
$$A_{\mathrm{\Gamma }_1\to \mathrm{\Gamma }_2}=\sum _{\text{histories }\mathrm{\Gamma }_1\to \mathrm{\Gamma }_2}\prod _{\begin{array}{c}\text{moves in a}\\ \text{given history}\end{array}}A_{\text{move}}.$$
(2)
Explicit expressions for the amplitudes $`A_{\text{move}}`$ for the elementary moves have so far only been given for a simple causal model in .
By construction, the Pachner moves are always moves between complete pairs (they are homs of the spin network graph). Thus, they can be consistently promoted to unitary operators.
This completes the identification of causal spin network histories as a QCH model. The individual Hilbert spaces are the intertwiner spaces, which are to be tensored when they are spacelike separated in the history. The local unitary operators are the Pachner moves.
Having performed these identifications, we may note that several more models of the same type have been explored in the literature. They are graphs evolving under the above causal moves, but with different sets of labels. Trivalent graphs labelled by ratios of integers give rise to Sen’s string networks . $`q`$-deformed spin networks, that is, spin networks labelled by representations of $`SU_q(2)`$, have a finite list of labels that may appear on an edge . In fact, spin networks can be constructed that are labelled by representations of any compact group , as well as supersymmetric extensions , and all give rise to quantum causal histories when evolved causally.
### 4.2 Quantum computers
Possibly the simplest choice of individual Hilbert spaces in a QCH is to require that they are all 2-dimensional: $`𝐂^2`$. Having done so, one cannot help noting that these spaces are qubits and the history is a (very large) quantum computer! (For related work, see ). A choice of local unitary operators is a choice of quantum gates in a quantum computer, and the underlying causal set plays the role of the computer’s circuit.
Given how hard the task of finding explicit expressions for suitable QCH evolution operators is, this model provides the opportunity to use the quantum gates of quantum computing to model quantum spacetime evolution. It is possible that there is a relationship between the conditions required for a quantum computer to run for a long time and those required for a quantum spacetime to have a classical limit.
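The identification can be made concrete with a toy example. The three-qubit layout and the choice of CNOT gates below are mine and purely illustrative: $`𝐂^2`$ spaces sit on the causal relations, two-qubit gates sit on the events, and the pattern of gates plays the role of the causal set.

```python
# Toy qubit QCH (my own minimal example): two-qubit gates as the local
# unitary operators on the events of a small "circuit" causal set.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def apply_gate(state, gate, targets, n):
    """Apply a 2-qubit gate to the given pair of qubits of an n-qubit state."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, targets, [0, 1])           # control first, target second
    psi = (gate @ psi.reshape(4, -1)).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, [0, 1], targets).reshape(-1)

state = np.zeros(8, dtype=complex)
state[0b100] = 1.0                            # |1>|0>|0>, qubit 0 most significant
state = apply_gate(state, CNOT, [0, 1], 3)    # event: gate acting on qubits (0, 1)
state = apply_gate(state, CNOT, [1, 2], 3)    # event: gate acting on qubits (1, 2)
print(np.nonzero(state)[0])                   # [7], i.e. the state |1>|1>|1>
```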
## 5 What a quantum causal history looks like from the inside
We may now briefly return to internal observables and outline how we expect them to appear in a QCH. Consider a QCH with Hilbert spaces labelling the causal relations, and let us interpret them in a way that will help us set up internal observables.
Given some event $`p`$ in the causal set, let $`q`$ and $`r`$ be two events in the future of $`p`$. In the QCH on this causal set, there are two Hilbert spaces for the two causal relations, $`H_{pq}`$ and $`H_{pr}`$ respectively. We will interpret the first as “the state space of $`p`$ as seen by $`q`$”, and the second as “the state space of $`p`$ as seen by $`r`$”. The relation between the two should depend on the causal relation between $`q`$ and $`r`$. Thus, if $`S(p)`$ is the set of causal relations that start at $`p`$, there is a Hilbert space for every element of $`S(p)`$, describing how an observer at the end of that particular causal relation sees $`p`$.
We may then define a generalised Hilbert space for $`p`$ to be a functor $`H_p:𝒞\to \text{Hilb}`$, which has as its elements the individual “viewpoint” Hilbert spaces, linked together by consistency maps that transform from the viewpoint of one observer to the viewpoint of another. A standard (not observer-dependent) description is recovered when these consistency maps are identities. Then $`H_p`$ becomes a standard state space for $`p`$.
It is on such generalised Hilbert spaces that the quantum internal observables are expected to act. A quantum internal observable should be a generalized operator, by which is meant an operator on each of the components of a generalized Hilbert space, related by the consistency maps. Details of this construction will appear elsewhere.
## 6 Conclusions: General Relativity as the low energy limit of quantum gravity
In the above examples we have seen that the requirement that all observables are internal has non-trivial consequences for the structure of both classical and quantum cosmological theories. One should not forget, however, that any Planck scale quantum cosmological theory will have to have general relativity as its low energy limit. We have not discussed this aspect of quantum gravity here, but progress on methods to obtain the low energy limit is needed in order to bring the developments described here to a conclusion. Work is in progress currently on methods to coarse grain and renormalize quantum causal histories, which will be reported elsewhere.
## Acknowledgments
I am grateful to John Baez, Louis Crane, Sameer Gupta, Eli Hawkins, Louis Kauffman, Renate Loll, Carlo Rovelli and Lee Smolin for discussions. I especially want to thank Chris Isham for explaining to me the use of functorial methods in quantum theory in , and for many discussions. I would also like to thank Abhay Ashtekar for hospitality at the Center for Gravitational Physics and Geometry during the course of this work.
This work was supported by NSF grants PHY/9423950 and PHY/9514240, and a gift from the Jesse Phillips Foundation.
## Introduction
The Laser Interferometer Space Antenna (LISA) LISA is a gravitational wave (GW) observatory in the low-frequency band, which is currently accessible only through non-dedicated (and low sensitivity) experiments based on the technique of Doppler tracking of interplanetary spacecraft EW ; BVI99 . As of this writing, LISA is identified as an ESA Cornerstone mission in the Horizon 2000-plus program, but is presently studied by both ESA and NASA with a view to a joint mission with launch date 2008-2010. The instrument has an optimal sensitivity in the milli-Hz frequency range, $`h_{\mathrm{rms}}\simeq 3\times 10^{-22}`$ for $`f\simeq 1`$ mHz, covering the band from $`10^{-5}`$ Hz to 30 mHz. It consists of a constellation of three drag-free spacecraft placed at the vertices of an ideal equilateral triangle with sides of $`5\times 10^6\mathrm{km}`$, forming a three-arm interferometer LISA ; Danz\_am99 .
The low frequency band is populated by a plethora of GW sources that are out of reach for Earth-based detectors and could be easily detectable by LISA LISA : they include guaranteed sources, such as known galactic short-period binary stars; neutron stars (NS’s) and/or low-to-intermediate mass black holes ($`10M_{\odot }`$–$`10^3M_{\odot }`$) falling into a massive companion ($`10^5M_{\odot }`$–$`10^8M_{\odot }`$); massive black hole binary systems (MBHB’s), with mass in the range $`10^5M_{\odot }`$–$`10^8M_{\odot }`$; stochastic backgrounds of primordial origin, as well as those generated by the incoherent superposition of unresolved binary systems in the Universe.
The purpose of this contribution is to discuss the impact of LISA on astronomy. Since it is impossible to cover all aspects, we will concentrate on one specific class of sources: massive black hole binary systems. We will describe how LISA works as a GW telescope – we are ultimately dealing with a new branch of observational astronomy – summarize our present understanding of the main issues, and highlight open questions that deserve further investigation.
MBHB’s are possibly the strongest sources of GW’s that LISA will be able to detect; for typical objects of mass $`10^6M_{\odot }`$ at redshift $`z\simeq 1`$, the signal-to-noise ratio (SNR) is $`\sim 10^3`$, as shown in Fig. 1. The instrument is able to detect the radiation emitted during one (or more) of the three phases of black hole coalescence (in the GW jargon: in-spiral, merger, and ring-down) for a very wide range of masses – in principle from $`1M_{\odot }`$ to $`10^9M_{\odot }`$, depending on the masses $`m_1`$ and $`m_2`$ and the source distance – see Fig. 1, possibly beyond redshift $`z\simeq 5`$, if BH’s do already exist, and are involved in catastrophic events with copious release of energy through GW’s.
LISA will be able to carry out a deep and extensive census of black hole populations in the Universe, providing an accurate demography of these objects and their environment. Compelling arguments suggest the presence of MBH’s in the nuclei of most galaxies, and they are invoked to explain a number of phenomena, in particular the activity of quasars and active galactic nuclei ZN64 ; Salpeter64 . However, the observational evidence for MBH’s comes mainly from observations of relatively nearby galaxies, whose nuclei do not show significant activity Miyoshi95 ; EG96 ; maoz . Massive black holes seem to be clustered in the mass range $`10^6M_{\odot }`$–$`10^9M_{\odot }`$ Richstoneetal98 ; at the lower edge of the BH mass spectrum, we find evidence for solar-mass BH candidates Rees98 . No information is presently available regarding BH’s with mass between $`10M_{\odot }`$ and $`10^6M_{\odot }`$, although some recent X-ray observations are interpreted as possible (but not compelling) indications of ”middleweight” BH’s CM99 ; PG99 . LISA – and Earth-based laser interferometers – will definitely show whether this gap is simply due to a ”selection” effect of present electro-magnetic observations, or indeed Nature does not provide intermediate mass black holes: an important feature of LISA is its capability of detecting BH’s with mass $`10^3M_{\odot }`$–$`10^4M_{\odot }`$, still far from coalescence at high redshift, see Fig. 1.
One of the most interesting observations would be the detection of GW’s and electro-magnetic radiation from the merger of two BH’s. We do not yet know whether a burst of electro-magnetic radiation is emitted during MBH collisions BBR . Determining where and when a MBH merger takes place, and possibly alerting astronomers in advance, is of paramount importance; this issue is directly linked to the identification of the source host galaxy: it would allow us to establish correlations between MBH’s and their environment, and use LISA observations to estimate the fundamental cosmological parameters LISA ; Schutz86 .
We have not discussed so far the rate at which we expect to detect such signals. A fair statement would probably be that, essentially, we do not know it. However, we can summarize our present knowledge as follows. For MBHB systems, the event rate depends strongly on theoretical prejudices and model assumptions; the ”canonical” value is $`\sim 1\mathrm{yr}^{-1}`$, but rates as high as $`10^3\mathrm{yr}^{-1}`$ or as low as $`10^{-2}\mathrm{yr}^{-1}`$ are consistent with theoretical models BBR ; Bleas\_am99 ; Hae ; Vecchio97 . For low-mass black holes captured by a massive one in galactic cores, we believe we have a better understanding, and current astrophysical estimates yield a rate of a few events per year up to $`z\simeq 1`$ SR ; Sigurdsson97 .
## The LISA telescope
We are dealing with a new generation of telescopes, both regarding the kind of radiation they observe (gravitational waves) and the frequency window in which they operate ($`\sim `$ mHz). It is therefore instructive to analyze the features that enable LISA to extract accurate information about GW sources.
We consider here only the in-spiral portion of the whole coalescence waveform, neglecting the merger and ring-down, both easily detectable, cfr. Fig. 1. The merger waveform is still poorly understood from the theoretical point of view; significant progress has been made using either full numerical schemes or semi-analytical approximations, but both approaches are still far from returning a satisfactory answer for GW observations (see GC\_web ; Pullin and references therein). We do however expect to gain key information by detecting GW’s emitted during the final plunge, for instance how energy and angular momentum are radiated during this extreme strong-gravity phase. The ring-down signal, on the contrary, is theoretically well known; in order to limit the level of complexity of our analysis, we do not include it in the signal that we consider here; however, future investigations should take it (as well as the final plunge, if/when available) into account, as it might change (conceivably improve) LISA performances in a number of astrophysical situations.
There are two main features that distinguish the in-spiral signals recorded by LISA from the ones that we expect to detect with Earth-based interferometers: (i) they last for months-to-centuries (depending on the masses) in the instrument observational band, and therefore are not burst-signals; in fact, the (Newtonian) time to coalescence is $`\tau \simeq 1.2\times 10^7\left(f_0/10^{-4}\mathrm{Hz}\right)^{-8/3}\left[m(1+z)/10^6M_{\odot }\right]^{-5/3}\left(\eta /0.25\right)^{-1}\mathrm{sec}`$; here $`m=m_1+m_2`$ is the total mass, and $`\eta =\mu /m`$ is the symmetric mass ratio, where $`\mu =m_1m_2/m`$ is the reduced mass; (ii) the structure of the waveform is in general much more complex; in fact, we can expect to detect black holes that are fast spinning and live on highly elliptical orbits, in particular for the extreme mass ratio case, $`\eta \ll 1`$ HB95 . As an example, in LIGO observations one will likely monitor no more than 10 cycles of precession of the orbital plane and the spins, whereas in the LISA band, for a typical observation time of 1 year, they could be as many as $`\sim 1000`$, see Table 1.
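For orientation, the time-to-coalescence scaling quoted in (i) is easy to wrap in a helper function; the evaluation below (an equal-mass $`10^6M_{\odot }`$ binary at $`z=1`$, with $`f_0`$ taken as the observed GW frequency when the signal is picked up) is purely illustrative.

```python
# Newtonian time to coalescence, as quoted in the text (approximate scaling).
def tau_coalescence(f0_hz, m_total_msun, z, eta=0.25):
    """Seconds to merger when the observed GW frequency is f0_hz."""
    return (1.2e7
            * (f0_hz / 1e-4) ** (-8.0 / 3.0)
            * (m_total_msun * (1.0 + z) / 1e6) ** (-5.0 / 3.0)
            * (eta / 0.25) ** (-1.0))

# Equal-mass 10^6 Msun binary at z = 1, picked up at f0 = 10^-4 Hz:
print(tau_coalescence(1e-4, 1e6, 1.0) / 3.15e7, "yr")   # roughly 0.1 yr
```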
A useful figure, for both detection and parameter estimation, is also the number of wave cycles recorded by LISA: during the final year of in-spiral, they range from $`\sim 10^3`$ (for $`m_1\simeq m_2`$) to $`\sim 10^5`$ (for $`\eta \ll 1`$).
In general, 17 parameters describe the waveform. No analysis has been carried out so far dealing with such a general situation. Here, we will introduce some simplifying assumptions, while retaining most of the key physical ingredients. The main limitation of our approach derives from considering circular orbits; this is probably quite realistic for binary systems of two MBH’s which have undergone a common evolution inside a galactic core, but is almost certainly violated for solar mass compact objects and/or low mass BH’s orbiting a massive one HB95 . We do, however, take into account spins; in this case we assume that either the masses of the BH’s are roughly equal, or one of the BH’s has a negligible spin (which still describes a wide range of astrophysical situations): the binary system undergoes the so-called simple precession ACST94 , where the orbital angular momentum $`𝐋`$ and the total spin $`𝐒=𝐒_\mathrm{𝟏}+𝐒_\mathrm{𝟐}`$ are locked together, and precess around the (almost) constant direction of the total angular momentum $`𝐉=𝐒+𝐋`$. We also use the 1.5 post-Newtonian approximation of the GW phase BDIWW . As a consequence of this chain of approximations, the number of parameters describing the signal drastically reduces, from 17 to 11.
It is useful now to review some of the instrumental features, in order to understand how LISA works as GW observatory:
(i) LISA is an all-sky monitor, and one gets all-sky surveys for free. During the observation time, however, LISA changes location and orientation. The LISA orbital motion is rather peculiar – the barycentre of the instrument is inserted in a heliocentric orbit, trailing the Earth by $`20^{\circ }`$; the detector plane is tilted by $`60^{\circ }`$ with respect to the Ecliptic and the instrument counter-rotates around the normal to the detector plane with the same 1-yr period – and is conceived in order to keep the configuration as stable as possible during the mission, as well as to give optimal coverage of the sky. It also turns out to be a key factor in reconstructing the source location in the sky.
(ii) The sources are distinguished in the data stream by the different structure and time evolution of the signals at the detector output; the recorded in-spiral signal reads:
$$h_\alpha (t)=A_{\mathrm{gw}}(t)A_\mathrm{p}^\alpha (t)\mathrm{cos}[\varphi _{\mathrm{gw}}(t)+\phi _\mathrm{p}^\alpha (t)+\varphi _\mathrm{D}(t)]$$
(1)
where $`A_\mathrm{p}(t)`$ and $`\phi _\mathrm{p}(t)`$ are the time-varying polarization amplitude and phase, respectively, and $`\varphi _\mathrm{D}(t)`$ is the Doppler phase shift induced by the motion of the detector around the Sun; an example of in-spiral signal at the output of LISA is given in Fig. 2. The signal is therefore amplitude and phase modulated by the motion of the LISA centre-of-mass around the Sun, the change of orientation of the detector arms, and of the binary orbital plane. All these effects encode information about some of the source parameters.
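In the usual approximation of a detector on a circular 1 AU orbit, the Doppler term takes the standard form $`\varphi _\mathrm{D}(t)=2\pi f(t)(R/c)\mathrm{sin}\theta _S\mathrm{cos}(2\pi t/\mathrm{yr}-\varphi _S)`$, with $`(\theta _S,\varphi _S)`$ the ecliptic source position. The sketch below (my own, with an illustrative monochromatic frequency and source position) simply evaluates it to show that the modulation amounts to a few radians at mHz frequencies.

```python
# Illustrative sketch (not from the paper): Doppler phase induced by the
# detector's orbital motion around the Sun, assuming a circular 1 AU orbit.
import numpy as np

AU_LIGHT_SECONDS = 499.0        # R/c for R = 1 AU
YEAR = 3.156e7                  # seconds

def doppler_phase(t, f_gw, theta_S, phi_S):
    """phi_D(t) = 2 pi f (R/c) sin(theta_S) cos(2 pi t / yr - phi_S)."""
    return (2.0 * np.pi * f_gw * AU_LIGHT_SECONDS
            * np.sin(theta_S) * np.cos(2.0 * np.pi * t / YEAR - phi_S))

t = np.linspace(0.0, YEAR, 1000)
print(doppler_phase(t, 1e-3, np.pi / 3, 0.0).max())   # a few radians at 1 mHz
```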
(iii) There is only one LISA detector currently planned; correlations and/or time-of-flight measurements are not possible; they would be highly desirable in order to improve the estimation of the source parameters, in particular the source location and distance; however, as the gravitational wavelength is $`\lambda _{\mathrm{gw}}\simeq 2(f/1\mathrm{mHz})^{-1}`$ AU, a second detector would have to be placed at several AU from the first one in order to provide useful information on the position of a source in the sky. However, LISA is a three-arms instrument; Cutler Cutler98 has shown that the outputs from each arm can be combined in such a way as to form a pair of data sets, $`\alpha =1,2`$ in Eq. (1), whose noise is uncorrelated at all frequencies, and that are equivalent to the data streams recorded by two co-located interferometers, rotated by $`\pi /4`$ one with respect to the other.
Indeed, there will be two data streams available to extract all source parameters. Correlations between the parameters are inevitable, and conspire to degrade the accuracy of the parameter measurements. It should also be clear that for LISA the measurement errors depend crucially on the actual value of the source parameters, and one therefore needs to explore a very large parameter space to give a fair description of the instrument performances.
## Surveys of massive black holes
We have discussed in the Introduction the sensitivity of LISA: there is little doubt that such an interferometer will be able to survey a fairly large fraction of the BH populations in the Universe. We would like to stress that in the present discussion, we assume to be able to monitor the whole final year of in-spiral. This is a key and delicate point which affects the capability of surveying sources at increasingly higher $`z`$ and/or with larger $`m`$, and of measuring the parameters precisely: in fact, at some frequency (between $`10^{-4}`$ Hz and $`10^{-5}`$ Hz) the instrumental noise will completely dominate the signal, allowing us to pick up only the very final portion of the in-spiral (say a few days), or even preventing the detection; the redshifted radiation simply falls outside the observational band, cfr. Fig. 1. It is clear that the higher the redshift, the lower the typical mass for which LISA reaches the optimal sensitivity. Super-massive black holes of mass $`10^9M_{\odot }`$ might be observable, by detecting ring-down signals at low redshifts ($`z\stackrel{<}{\sim }0.1`$), if the sensitivity window extends to $`10^{-5}`$ Hz.
Several analyses have been carried out so far dealing with the accuracy of the parameter measurements with LISA Cutler98 ; VC98 ; Vecchio99 ; Sintes\_am99 ; HM99 ; however, they have been mainly focussed on investigations of the instrument angular resolution; moreover, spin effects have been either ignored or explored for a very limited portion of the total parameter range. Here we will try to give a more comprehensive description of the performances of LISA as a GW observatory. The accuracy of the parameter measurements is very sensitive to the actual source parameter values; it is therefore almost impossible to give typical figures for LISA as a GW telescope that can be applied to a wide range of binary systems. We discuss in some detail the case of an equal-mass MBHB, with $`m_1=m_2=10^6M_{\odot }`$, and give some general criteria to extend these results to other parameter values. It turns out that the source location and orientation with respect to the detector play a key role. We have therefore performed Monte-Carlo simulations, where we fix the source distance and the physical parameters, and vary randomly the ”geometrical” parameters, $`\widehat{𝐍}`$, $`\widehat{𝐉}`$ and $`\widehat{𝐒}`$.
We compute the estimated mean squared errors associated to the parameter measurements by means of the so-called variance-covariance matrix CF94 ; NV98 .
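The machinery behind this estimate can be sketched numerically: build the Fisher matrix from inner products of the waveform derivatives with respect to the parameters, invert it, and read the expected 1-sigma errors off the diagonal of the inverse. The toy example below (a monochromatic signal in white noise, with made-up numbers and a deliberately rescaled parametrization) is mine; it is not the LISA response nor the post-Newtonian waveform used for the results quoted next.

```python
# Toy Fisher-matrix (inverse variance-covariance) estimate for a
# monochromatic signal A*cos(2*pi*f*t + phi) in white noise with one-sided
# PSD Sn: Gamma_ij = (2/Sn) * sum_k dh_i(t_k) dh_j(t_k) * dt.
import numpy as np

dt, T, Sn = 10.0, 3.15e7, 1e-40          # sampling step, 1 yr, noise PSD (1/Hz)
t = np.arange(0.0, T, dt)

def h(params):
    lnA, cycles, phi = params            # dimensionless parametrization
    return np.exp(lnA) * np.cos(2.0 * np.pi * cycles * t / T + phi)

def fisher(params, eps=1e-6):
    derivs = []
    for i in range(len(params)):
        dp = np.zeros(len(params)); dp[i] = eps
        derivs.append((h(params + dp) - h(params - dp)) / (2.0 * eps))
    return np.array([[2.0 / Sn * np.sum(a * b) * dt for b in derivs]
                     for a in derivs])

p = np.array([np.log(1e-21), 1e-3 * T, 0.3])   # A = 1e-21, f = 1 mHz, phase 0.3
sigma = np.sqrt(np.diag(np.linalg.inv(fisher(p))))
print("dA/A =", sigma[0], " df [Hz] =", sigma[1] / T, " dphi =", sigma[2])
```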
The main results are presented in Figs. 3 and 4, and can be summarized as follows. The angular resolution is $`\mathrm{\Delta }\mathrm{\Omega }_N\sim 10^{-5}`$ srad; however, depending on the location and orientation of the source it varies over a wide range of values, $`10\mathrm{arcmin}^2\stackrel{<}{\sim }\mathrm{\Delta }\mathrm{\Omega }_N\stackrel{<}{\sim }3\mathrm{deg}^2`$. Typically, large spins and misalignment angles – the angle between $`\widehat{𝐋}`$ and $`\widehat{𝐒}`$ – allow us to measure the source location more precisely; for a small region of these parameters, the ”error-box” in the sky could possibly be only a fraction of $`\mathrm{arcmin}^2`$. The distance is usually measured with an error $`0.1\%\stackrel{<}{\sim }\mathrm{\Delta }D/D\stackrel{<}{\sim }1\%`$. The timing accuracy is very high, and the instant of coalescence can be identified to within $`\sim 10`$ sec. Masses and spins can be measured very precisely; typically, the errors affecting the determination of the chirp and reduced mass are $`\mathrm{\Delta }ℳ/ℳ\sim 10^{-5}`$ and $`\mathrm{\Delta }\mu /\mu \sim 10^{-4}`$, respectively; the so-called spin-orbit parameter $`\beta `$ can be determined with an error $`\mathrm{\Delta }\beta \sim 10^{-3}`$. There is one general rule that can be derived from this analysis: if BH’s are highly spinning and the misalignment angle is large, the parameter determination improves. This is due to the fact that the parameters leave peculiar fingerprints on the recorded signal, cfr. Fig. 2: in particular, $`A_\mathrm{p}`$ and $`\phi _\mathrm{p}`$ undergo strong modulations, which carry information not only on the position of the source and the orientation of the angular momenta, but also on the physical parameters, such as the masses. This is an effect which is similar – although the physics behind it is different – to the one that takes place when spins are not present, but one considers not only radiation emitted at twice the orbital frequency, but also at other harmonics Sintes\_am99 (notice that in Figs. 3 and 4, for the case $`S=0`$, we report results obtained considering only the dominant harmonic; we refer the reader to Sintes\_am99 for more details).
We can now ask how these results change by selecting different source parameters. MBHB’s with $`m_1\simeq m_2\simeq 10^7M_{\odot }`$ would typically be observed with larger errors, by a factor $`\sim 10`$, than the ones reported here. If we fix $`m_1`$ and vary $`m_2`$, the measurement accuracy is fairly constant – within, say, a factor $`\sim 2`$ – as long as $`m_2/m_1\stackrel{>}{\sim }0.1`$; then it starts degrading: this is due to a rather complex competition between several effects, in particular the SNR and the number of wave/precession cycles VCS ; AV .
MBHB’s will be visible several months before the final coalescence. This will allow us to pick up the signal when the binary system is still far from merging, and refine the source parameter measurements as the source proceeds toward the deadly plunge VCS : for a limited region of the parameter space, it could be possible to determine the source location in the sky with enough precision to have a realistic chance of observing the same field with other telescopes. |