# Metric Fluctuations in Brane Worlds
## 1 Introduction
Recently a realization of four-dimensional gravity on a brane in five-dimensional spacetime has been discussed in a number of papers. Randall and Sundrum have shown that the longitudinal components of the metric fluctuations satisfy a quantum-mechanical equation with a potential that includes an attractive delta function. As a result one has a normalizable zero mode, which has been interpreted as gravity localized on the brane.
However, in order to speak about localized gravity one has to demonstrate that not only the longitudinal but also the transverse components of the metric are confined to the brane. In this note we point out that the transverse components of the metric fluctuations satisfy an equation with a potential that includes a repulsive delta function. There is a zero mode solution which is not localized on the brane; this indicates an instability or, in other words, that the effective theory is actually not four-dimensional but five-dimensional. It therefore seems that the original proposal does not lead to a realization of four-dimensional gravity on a brane in five-dimensional spacetime. Perhaps a modification of this proposal using matter fields could lead to the trapping of gravity to the brane.
## 2 Metric perturbation
The action has the form
$$I=\frac{1}{2}\int d^Dx\sqrt{|g|}\left(R+2\mathrm{\Lambda }\right)+I_{Brane},$$
(1)
The RS solution is \[1-3\]
$`ds^2`$ $`=`$ $`{\displaystyle \frac{\eta _{MN}dx^Mdx^N}{(k\sum _i|z^i|+1)^2}},`$ (2)
$`M,N`$ $`=`$ $`0,\dots ,3+n,\quad i=1,\dots ,n,\quad z^i=x^{i+3},\quad \mu ,\nu =0,1,2,3,`$
where $`D=n+4`$ and $`\eta ^{MN}`$ is the Minkowski metric with signature $`(+,-,-,-,\dots )`$. The metric (2) has singularities at $`z^i=0`$ for any $`i=1,\dots ,n`$, which correspond to intersecting $`(n+2)`$-branes. Their intersection at $`z^i=0`$ for all $`i=1,\dots ,n`$ is a 3-brane, which bears the standard model fields and corresponds to the observable 4-dimensional universe. The propagation of the longitudinal $`(\mu ,\nu )`$-components of the metric perturbation in the background (2) was studied in earlier papers; these perturbations are bounded in the vicinity of the 3-brane. In this note we consider the perturbation of the transverse ($`(i,j)`$ and $`(i,\mu )`$) components.
We write the solution in the form \[1-3\]
$`g_{MN}^0`$ $`=`$ $`H^{-2}\eta _{MN},`$ (3)
$`H`$ $`=`$ $`k{\displaystyle \underset{i=1}{\overset{n}{}}}|z^i|+1,`$ (4)
where
$$k^2=-\frac{2\mathrm{\Lambda }}{n(n+2)(n+3)},$$
(5)
and $`\mathrm{\Lambda }`$ is the bulk cosmological constant.
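As a quick numerical illustration (ours, not from the original), equations (4) and (5) can be tabulated directly; we take the bulk cosmological constant negative (anti-de Sitter bulk) and use its magnitude, and the value of $`\mathrm{\Lambda }`$ below is purely illustrative:

```python
import math

def warp_k(bulk_lambda, n):
    # k from eq. (5); assumes an AdS (negative) bulk constant, so |Lambda| is used
    return math.sqrt(2.0 * abs(bulk_lambda) / (n * (n + 2) * (n + 3)))

def H(z, k):
    # warp function of eq. (4): H = k * sum_i |z^i| + 1, z = (z^1, ..., z^n)
    return k * sum(abs(zi) for zi in z) + 1.0

k = warp_k(-6.0, 1)          # one extra dimension
print(H((0.0,), k), H((2.0,), k))   # H = 1 on the brane, grows away from it
```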
Let us consider perturbations, which are parametrized by $`h_{MN}`$ in the following way
$`g_{MN}=H^{-2}\left(\eta _{MN}+h_{MN}\right)=H^{-2}\stackrel{~}{g}_{MN}`$ (6)
and fix the gauge
$$h_{MN}\eta ^{MN}=0,\qquad \eta ^{KM}\partial _Kh_{MN}=0.$$
(7)
The Einstein tensor $`G_{MN}=R_{MN}-\frac{1}{2}g_{MN}R`$ is
$$G_{MN}=\stackrel{~}{G}_{MN}+(D-2)\left[\frac{\stackrel{~}{\mathrm{\nabla }}_M\stackrel{~}{\mathrm{\nabla }}_NH}{H}+\stackrel{~}{g}_{MN}\stackrel{~}{g}^{KL}\left\{-\frac{\stackrel{~}{\mathrm{\nabla }}_K\stackrel{~}{\mathrm{\nabla }}_LH}{H}+(D-1)\frac{\stackrel{~}{\mathrm{\nabla }}_KH\stackrel{~}{\mathrm{\nabla }}_LH}{2H^2}\right\}\right],$$
(8)
where all objects bearing a tilde are calculated using the metric $`\stackrel{~}{g}_{MN}`$, whose inverse to first order is
$$\stackrel{~}{g}^{MN}=\eta ^{MN}-h^{MN}=\eta ^{MN}-\eta ^{MK}h_{KL}\eta ^{LN},$$
(9)
We compute all objects up to the first order in $`h_{MN}`$. We obtain
$`G_{MN}`$ $`=`$ $`\stackrel{~}{G}_{MN}+(D-2){\displaystyle \frac{\partial _M\partial _NH-\stackrel{~}{\mathrm{\Gamma }}_{MN}^K\partial _KH}{H}}`$
$`+`$ $`(D-2)\stackrel{~}{g}_{MN}\stackrel{~}{g}^{KL}\left\{-{\displaystyle \frac{\partial _K\partial _LH-\stackrel{~}{\mathrm{\Gamma }}_{KL}^I\partial _IH}{H}}+(D-1){\displaystyle \frac{\partial _KH\partial _LH}{2H^2}}\right\},`$ (10)
where
$$\stackrel{~}{\mathrm{\Gamma }}_{MN}^K=\frac{\eta ^{KL}}{2}\left(\partial _Nh_{LM}+\partial _Mh_{LN}-\partial _Lh_{MN}\right).$$
(11)
Using the gauge conditions (7)
$$\eta ^{KL}\stackrel{~}{\mathrm{\Gamma }}_{KL}^I=0$$
(12)
we get
$$G_{MN}=-\frac{1}{2}\Box h_{MN}+(D-2)\left[\frac{\partial _M\partial _NH-\stackrel{~}{\mathrm{\Gamma }}_{MN}^K\partial _KH}{H}+\stackrel{~}{g}_{MN}\stackrel{~}{g}^{KL}\left\{-\frac{\partial _K\partial _LH}{H}+(D-1)\frac{\partial _KH\partial _LH}{2H^2}\right\}\right],$$
(13)
where $`\Box =\eta ^{MN}\partial _M\partial _N`$. Now by using the Einstein equations
$$G_{MN}=\mathrm{\Lambda }g_{MN}+T_{MN}^{branes}$$
(14)
we can identify the $`\partial \partial H`$-terms with $`T^{branes}`$, because they contribute only on the brane surfaces, and using (4), (5) we identify the $`(\partial H)^2`$-term with $`\mathrm{\Lambda }g_{MN}`$. Therefore one gets
$$-\frac{1}{2}\Box h_{MN}-(D-2)\frac{\partial _KH}{H}\stackrel{~}{\mathrm{\Gamma }}_{MN}^K-(D-2)(D-1)\eta _{MN}h^{KL}\frac{\partial _KH\partial _LH}{2H^2}=0.$$
(15)
Finally we obtain the wave equation for metric perturbation in the form
$$\Box h_{MN}+(D-2)\frac{\partial _LH}{H}\eta ^{KL}\left(\partial _Nh_{KM}+\partial _Mh_{KN}-\partial _Kh_{MN}\right)+(D-2)(D-1)\eta _{MN}h^{KL}\frac{\partial _KH\partial _LH}{H^2}=0.$$
(16)
## 3 Propagation of metric perturbations
The function $`H`$ does not depend on $`x^\mu `$, which allows us to rewrite equation (16) in the following form:
$`\Box h_{ij}+(D-2){\displaystyle \frac{\partial _mH}{H}}\eta ^{mn}\left(\partial _jh_{ni}+\partial _ih_{nj}-\partial _nh_{ij}\right)+(D-2)(D-1)\eta _{ij}h^{mn}{\displaystyle \frac{\partial _mH\partial _nH}{H^2}}=0,`$ (17)
$`\Box h_{i\mu }+(D-2){\displaystyle \frac{\partial _mH}{H}}\eta ^{mn}\left(\partial _\mu h_{ni}+\partial _ih_{n\mu }-\partial _nh_{i\mu }\right)=0,`$ (18)
$`\Box h_{\mu \nu }+(D-2){\displaystyle \frac{\partial _mH}{H}}\eta ^{mn}\left(\partial _\nu h_{n\mu }+\partial _\mu h_{n\nu }-\partial _nh_{\mu \nu }\right)+(D-2)(D-1)\eta _{\mu \nu }h^{mn}{\displaystyle \frac{\partial _mH\partial _nH}{H^2}}=0.`$ (19)
To solve the system one can solve equation (17) to find $`h_{ij}`$, then substitute $`h_{ij}`$ into equation (18) to find $`h_{i\mu }`$ and finally substitute $`h_{i\mu }`$ into equation (19) to find $`h_{\mu \nu }`$.
If $`h_{i\mu }=0`$ and $`h_{mn}=0`$, then equation (19) coincides with the wave equation derived previously for the longitudinal polarization of the perturbation,
$$\left(\Box -(D-2)\frac{\partial _mH}{H}\eta ^{mn}\partial _n\right)h_{\mu \nu }=0.$$
(20)
This equation can be transformed into a wave equation whose potential includes an attractive delta function,
$`\left({\displaystyle \frac{\Box }{2}}+V^{(-)}(z)\right)\widehat{h}_{\mu \nu }=0,`$ (21)
$`V^{(-)}(z)={\displaystyle \frac{n(n+2)(n+4)k^2}{8H^2}}-{\displaystyle \frac{(n+2)k}{2H}}{\displaystyle \sum _j}\delta (z^j),`$
where $`\widehat{h}=H^{-(n+2)/2}h`$. There is a bound state, which corresponds to the localized four-dimensional gravity. The zero-mass state corresponds to $`\widehat{h}=cH^{-(n+2)/2}e^{ipx}`$, $`p_\mu p^\mu =0`$, so
$$h_{\mu \nu }=c_{\mu \nu }e^{ipx},$$
(22)
where $`c_{\mu \nu }`$ is a constant polarization tensor.
## 4 Non-longitudinal polarization in 5 dimensions
In the simplest case of one extra dimension equations (17)-(19) acquire the following form
$`\left(\Box -3{\displaystyle \frac{\partial _5H}{H}}\partial _5-12\left({\displaystyle \frac{\partial _5H}{H}}\right)^2\right)h_{55}=0,`$ (23)
$`\Box h_{5\mu }-3{\displaystyle \frac{\partial _5H}{H}}\partial _\mu h_{55}=0,`$ (24)
$`\left(\Box +3{\displaystyle \frac{\partial _5H}{H}}\partial _5\right)h_{\mu \nu }-3{\displaystyle \frac{\partial _5H}{H}}\left(\partial _\mu h_{5\nu }+\partial _\nu h_{5\mu }\right)+12\eta _{\mu \nu }h_{55}\left({\displaystyle \frac{\partial _5H}{H}}\right)^2=0.`$ (25)
Equation (23) can be transformed into a wave equation whose potential includes a repulsive delta function,
$`\left({\displaystyle \frac{\Box }{2}}+V^{(+)}(z)\right)\widehat{h}_{55}=0,`$ (26)
$`V^{(+)}(z)=-{\displaystyle \frac{45k^2}{8H^2}}+{\displaystyle \frac{3k}{2}}\delta (z),`$
where $`\widehat{h}=H^{3/2}h`$ and we denote $`x^M=(x^\mu ,z)`$.
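The contrast between the two sectors can be checked numerically (an illustration we add, not from the original): for $`n=1`$ the normalizable longitudinal mode falls off as $`H^{-3/2}`$, while a constant $`h_{55}`$ gives $`\widehat{h}_{55}\propto H^{3/2}`$, whose norm integral diverges as the integration range grows:

```python
from scipy.integrate import quad

k = 1.0                                   # illustrative value of the warp constant
H = lambda z: k * abs(z) + 1.0

# longitudinal zero mode ~ H^(-3/2): the norm integral of H^(-3) converges
norm_long, _ = quad(lambda z: H(z) ** -3, -50.0, 50.0, points=[0.0])

# transverse mode: hat-h_55 ~ H^(+3/2), so the norm integrand H^(+3) diverges
small, _ = quad(lambda z: H(z) ** 3, -10.0, 10.0, points=[0.0])
large, _ = quad(lambda z: H(z) ** 3, -50.0, 50.0, points=[0.0])
print(norm_long, large / small)           # the ratio keeps growing with the range
```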
The zero-mass state corresponds to solutions $`h\propto e^{ipx}`$, $`p_\mu p^\mu =0`$. Let us set
$$h_{55}=0.$$
(27)
After the substitution of $`h_{55}`$ (27) into equation (24) we have
$$\mathrm{}h_{5\mu }=0.$$
(28)
Let us set
$$h_{5\mu }=c_\mu (z)e^{ipx},h_{\mu \nu }=\psi _{\mu \nu }(z)e^{ipx}.$$
(29)
Then, from (28) and (25) one gets
$`c_\mu ^{\prime \prime }=0,`$ (30)
$`\psi _{\mu \nu }^{\prime \prime }+3f\left(\psi _{\mu \nu }^{\prime }-i(c_\mu p_\nu +c_\nu p_\mu )\right)=0,`$ (31)
where we denote $`f(z)=\partial _5H/H`$. As a simple explicit solution we take
$$\psi _{\mu \nu }=c_{\mu \nu }+i(c_\mu p_\nu +c_\nu p_\mu )z,c_{\mu \nu }=c_{\nu \mu }=const,c_\mu =const.$$
(32)
To satisfy the gauge conditions (7) we have also to set $`c_\mu p^\mu =0`$, $`c_{\mu \nu }\eta ^{\mu \nu }=0`$, $`c_{\mu \nu }p^\nu =0`$.
Let us summarize our solution
$`h_{55}`$ $`=`$ $`0,`$
$`h_{5\mu }`$ $`=`$ $`c_\mu e^{ipx},`$
$`h_{\mu \nu }`$ $`=`$ $`\left(c_{\mu \nu }+i(c_\mu p_\nu +c_\nu p_\mu )z\right)e^{ipx},`$ (33)
$`px`$ $`=`$ $`p_\mu x^\mu ,p_\mu p^\mu =0,c_\mu p^\mu =0,c_{\mu \nu }\eta ^{\mu \nu }=0,c_{\mu \nu }p^\nu =0,`$
where $`c_\mu `$, $`c_{\mu \nu }`$ and $`p_\mu `$ are constants. We assume, of course, that one takes the real (or imaginary) part of the above expressions.
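As a consistency check added here (not part of the original), the solution (33) can be verified against equations (30) and (31) symbolically, component by component, with $`f(z)`$ left arbitrary:

```python
import sympy as sp

z = sp.symbols('z', real=True)
c_mn, c_m, c_n, p_m, p_n = sp.symbols('c_mn c_m c_n p_m p_n')
f = sp.Function('f')(z)                    # f(z) = d_5 H / H, kept arbitrary

c = c_m + 0 * z                            # constant c_mu solves (30): c'' = 0
psi = c_mn + sp.I * (c_m * p_n + c_n * p_m) * z   # component form of (32)

eq30 = sp.diff(c, z, 2)
eq31 = sp.diff(psi, z, 2) + 3 * f * (sp.diff(psi, z)
                                     - sp.I * (c_m * p_n + c_n * p_m))
print(sp.simplify(eq30), sp.simplify(eq31))   # both vanish identically
```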
To conclude, we obtain the zero mode solution (33) of the equations for the metric perturbation with non-longitudinal polarization. One has a massless vector field $`h_{5\mu }`$ on the brane. The perturbation (33) is not localized on the brane because $`h_{\mu \nu }`$ depends linearly on $`z`$. This indicates that probably the effective theory is unstable or, in other words, actually it is not four-dimensional but five-dimensional. Perhaps the coupling with matter fields could lead to the trapping of gravity to the brane.
We are grateful to I.Ya. Arefeva and B. Dragovich for useful discussions. I.V.V. is supported in part by INTAS grant 96-0698 and RFFI-99-01-00105.
# A Baseband Recorder for Radio Pulsar Observations
## 1 INTRODUCTION
A fundamental characteristic of radio astronomy is the use of phase-coherent detectors, which coherently amplify the electromagnetic radiation and preserve information about the phases of the wavefronts. An ideal radio telescope backend should take advantage of this phase coherency when recording and detecting data. An effective way to do so is to mix the telescope voltages to baseband, then Nyquist-sample the data stream; however, the resulting data volumes quickly become very large. Historically, therefore, most wide-bandwidth radio-astronomical observations have been carried out using analogue detection and recording methods, typically a multi-channel spectrometer. An exception is Very Long Baseline Interferometry (VLBI) observations, in which the voltages from different telescopes are recorded to tape, then played back and combined on a custom correlator to determine the visibility function.
In recent years, computer speeds have caught up with the data rates needed for wide-bandwidth pre-detection sampling, to the extent that computer clock rates are now within an order of magnitude of the observing frequencies used in centimetre-wavelength astronomy. These continuing advances permit commercially-available hardware to replace custom components in the construction of digital, phase-preserving baseband recorders. Such instruments are much more flexible than hardware detection systems, permitting variable filters and integration times, identification and excision of radio-frequency interference, and, if the data are stored on tape, multiple processing passes. In this paper we describe the Princeton Mark IV system, a baseband recorder developed primarily for pulsar astronomy and optimized for use with the Arecibo telescope; within the next few years we expect similar instruments to become valuable in many subfields of centimetre-wavelength astronomy, from radar ranging to spectroscopy.
## 2 DESIGN OF A BASEBAND RECORDER
One application of baseband recording is in high-precision timing and polarimetric observations of millisecond pulsars. Highly accurate pulsar timing has applications not only in the study of the pulsars themselves, but also in areas such as astrometry, time-keeping and experimental tests of cosmology and general relativity. An important obstacle to high timing precision is dispersion of the pulses during their propagation through the ionized interstellar medium. This phenomenon results in delays of lower-frequency radiation components relative to the higher frequencies, and across a typical observing bandwidth can amount to many (often hundreds or more) times the intrinsic pulse widths. The traditional pulsar timing and searching instrument has been an analogue filterbank system, in which the bandpass is subdivided into a number of channels, and the signal is detected in each channel and shifted by the predicted dispersion delay in order to align the pulse peak. This method inevitably leaves residual smearing within the channels, and the time resolution is limited to the inverse of the channel bandwidth. If, instead, the data are sampled prior to detection, a frequency-domain “chirp” filter may be applied to remove completely the predicted effects of dispersion and align the pulse with no smearing. Timing precision is therefore greatly improved with this technique.
This “coherent dedispersion” method was pioneered more than two decades ago \[Hankins & Rickett 1975\], but until recently, the data storage and processing limitations discussed above resulted in mostly narrowband, hardware-based implementations, with special-purpose chips performing the convolution of the data stream with the chirp function (e.g., Stinebring et al. 1992). While large, multi-channel hardware dedispersion instruments are now in use (e.g., Backer et al. 1997), these systems record the data only after convolution and detection and hence do not permit reprocessing or interference excision. Baseband recording coupled with software dedispersion therefore offers a unique flexibility for the analysis of pulsar signals, and various different implementations have been presented in the literature \[Jenet et al. 1997, Wietfeldt et al. 1998\] and used for timing and single-pulse studies.
The design considerations for a pre-detection digital recorder include desired observing bandwidth, quantization resolution, recording medium, processing capability and cost and availability of components. In continuum applications, a wide bandwidth is needed for high signal-to-noise ratio; however the sampling and data rates scale linearly with bandwidth. To keep the data rate to a manageable level, it is therefore necessary to accept coarsely-sampled data; some techniques for optimizing signal quality in the case of 2-bit sampling will be discussed below. A further constraint on the feasible data rate is the speed of the recording medium, typically hard disk or magnetic tape. Wider observing bandwidths also require more computing power to process: the number of operations required may increase more rapidly than linearly if, for instance, Fourier Transforms are used in processing. The optimal balance between the different system components will depend on the goals determined for a particular instrument. For instance, the system described in Jenet et al. (1997) uses a custom VLSI chip to provide 2-bit sampling across 50 MHz of bandwidth, and records data to a high-speed tape recorder. Data processing is then carried out on supercomputers. Another implementation is the instrument described by Wietfeldt et al. (1998), in which a 16 MHz bandpass is quantized at 2 bits, and the data stream written to an adapted VLBI S2 recorder. Upon playback, the data may be analysed by a workstation or faster computer.
The Princeton Mark IV pulsar instrument was designed for use with the 8 MHz-wide 430 MHz line feed of the Arecibo radio telescope. The goal was to provide routinely usable baseband sampling and recording across the full 8-MHz bandwidth using inexpensive hardware, commercially available recording media and an affordable dedicated processor. It provides somewhat greater flexibility than the systems discussed above, allowing 2-bit sampling across 10 MHz or 4-bit sampling across 5 MHz of bandwidth. Early prototypes of the system have been discussed in Shrauner et al. (1996) and Shrauner (1997). The final version, with a design based on the second prototype, is currently installed at the upgraded Arecibo Observatory.
## 3 MARK IV HARDWARE AND SOFTWARE IMPLEMENTATION
The Mark IV instrument accepts intermediate-frequency (IF) signals of bandwidth $`B`$ (either 5 or 10 MHz) centred at 30 MHz. Adjustable attenuators regulate the signal strength, and the voltages are then mixed to baseband with quadrature local oscillators (LOs) at 30 MHz, producing, for each of two orthogonal polarizations, a real and an imaginary signal with passband 0 to $`B/2`$. These four signals are low-pass-filtered with a suppression in excess of 60 dB at $`B/2`$, then 4-bit digitized at rate $`B`$, in accordance with the Nyquist theorem. Shift registers and multiplexers pack the samples such that all 4 bits are retained for the 5 MHz bandpass, while only the 2 most significant are kept for the 10 MHz bandpass. Thus the overall data rate is 10 MB/s regardless of bandwidth. This flow of data is piped through a DMA card into a SPARC-20 workstation and then onto a combination of hard disks and Digital Linear Tapes (DLTs) for off-line processing. The digitizer/packer board is clocked by a 20 MHz signal which is tied to the observatory time standard. The data timestamp is generated by a 10-second tick tied to the same external clock. The status of an injected noise signal may be monitored for later use in calibration. A block diagram of the system is shown in Figure 1.
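The constant output rate follows from simple arithmetic over the four baseband streams (real and imaginary for each of two polarizations); a sketch of the bookkeeping, added here for illustration:

```python
def data_rate_mb_per_s(bandwidth_mhz, bits):
    # four baseband streams, each sampled at rate B (MHz) with the given bit depth
    bits_per_s = 4 * bandwidth_mhz * 1e6 * bits
    return bits_per_s / 8 / 1e6            # bytes per second, in MB/s

print(data_rate_mb_per_s(10, 2))           # 2-bit sampling across 10 MHz
print(data_rate_mb_per_s(5, 4))            # 4-bit sampling across 5 MHz
```

Both modes land on the same 10 MB/s, which is why the downstream DMA and recording path can be fixed-rate.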
In such a recording system hardware-related systematic errors may arise from the baseband mixers, from the low-pass filters, or from the amplifiers and attenuators in the signal path; care has been taken to minimize these errors. To verify the orthogonality of the baseband quadrature signals, a comb of harmonics of 414 kHz was used as an input signal. The data streams from the four input channels were separately Fourier-transformed and the phase differences between the real and imaginary components of the left and right channels were calculated at each harmonic. The results are plotted in Figure 2. The means cluster around $`90^{\circ }`$, as they should, with only slight variations across either bandpass. The absence of some of the harmonics in the 10 MHz bandwidth plots (panels (a) and (b)) is due to the low intrinsic amplitude of the test signal at these frequencies.
The filters were selected for flat amplitude response with frequency; for other applications, constant response in phase may be more appropriate. Measurements indicate that the total phase rotation across the bandpass of one of the 5-MHz filters amounts to some 2.5 turns of phase. This extra shift could be incorporated into the chirp function; however, it is negligible relative to the thousands of turns of phase typically induced by dispersion. The amplitude response of the entire system is nearly flat as a function of frequency out to the knee of the filters.
### 3.1 Signal processing
The Mark IV system produces packed data at a rate of 10 MB/s, or 35 GB/hr. Analysis of this data stream in real time would require 8-10 Gflops. As an affordable alternative to a supercomputer, we use a 1.25 Gflop parallel processor optimized for Fast Fourier Transforms (FFTs), which form the core of the analysis. This machine, the SAM-350 from Texas Memory Systems, Inc., consists of 512 MB of fast memory, a DEC Alpha AX27 scalar processor, a parallel-processor board containing customized chips and an additional processor to handle communications. The fast memory may be accessed by the AX27, the parallel-processor board or a host workstation via an SBUS card.
Modeling the interstellar medium as a tenuous electron plasma permits the calculation of the “chirp” function used in the dedispersion analysis \[Hankins & Rickett 1975\]:
$$H(f_0+f_1)=\mathrm{exp}\left[2\pi i\frac{\mathrm{DM}}{2.41\times 10^{-10}}\frac{f_1^2}{f_0^2(f_0+f_1)}\right],$$
(1)
where $`f_0`$ is the central observing frequency in MHz, $`|f_1|\le B/2`$, where $`B`$ is the observing bandwidth, and the dispersion measure, given by $`\mathrm{DM}=\int _0^dN_e\,dz`$, is the integrated electron density along the line of sight to the pulsar, measured in pc cm<sup>-3</sup>. Coherent dedispersion is performed by transforming a segment of the baseband data to the Fourier domain, multiplying by the inverse of this chirp function and then transforming back to the time domain, with suitable overlap of successive data segments. In practice, the inverse FFT is taken in 2, 4 or 8 parts, splitting the bandpass into as many sub-bands. This permits the monitoring of potentially variable data quality (perhaps due to interference or scintillation) in different parts of the band, as well as the Faraday rotation of the linear polarization across the band.
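The FFT–multiply–inverse-FFT step can be sketched in a few lines of numpy (our simplified single-segment version: function names are ours, the overlap handling of segment edges is omitted, and the overall sign of the chirp exponent depends on mixing conventions, which a matched inverse makes irrelevant for the round trip):

```python
import numpy as np

def chirp(f0_mhz, bw_mhz, nchan, dm):
    # chirp transfer function of eq. (1); f in MHz, DM in pc cm^-3
    f1 = np.fft.fftfreq(nchan, d=1.0 / bw_mhz)     # offsets from the band centre
    phase = 2.0 * np.pi * dm / 2.41e-10 * f1**2 / (f0_mhz**2 * (f0_mhz + f1))
    return np.exp(1j * phase)

def dedisperse(voltages, f0_mhz, bw_mhz, dm):
    # multiply the spectrum by the inverse chirp and transform back
    spec = np.fft.fft(voltages)
    return np.fft.ifft(spec / chirp(f0_mhz, bw_mhz, len(voltages), dm))
```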
Four cross-products are formed from the dedispersed data stream: $`|L|^2`$, $`|R|^2`$, Re($`L^{}R`$) and Im($`L^{}R`$). These detected time series may be recorded directly and used in the analysis of single pulses. Usually, however, the data points for each sub-band and each cross-product are folded modulo the predicted pulse period and are summed into 10-second accumulated pulse profiles. The folded products are calibrated using the observed magnitude of a noise calibrator which is pulsed for one minute after a pulsar observation; the strength of the noise calibrator in Jy is known from comparisons with catalogue flux calibration sources. The Stokes parameters are readily calculated from the four recorded products; parallactic angle correction and polarimetry have been discussed elsewhere \[Stairs, Thorsett & Camilo 1999\].
Precise calculation of pulse times-of-arrival (TOAs) is essential to pulsar timing. The total-intensity folded profiles are cross-correlated with a standard template to measure the phase offset of the pulse within the profile \[Taylor 1992\]. The offset is added to the time of the first sample of a pulse period near the middle of the data set to yield an effective pulse time-of-arrival.
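The template-matching step can be sketched as follows (our simplified version: an integer-bin circular cross-correlation peak, whereas the Taylor (1992) method solves for the shift to sub-bin precision in the Fourier domain):

```python
import numpy as np

def phase_offset(profile, template):
    # lag (in turns of pulse phase) maximizing the circular cross-correlation
    xc = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)),
                      n=len(profile))
    return np.argmax(xc) / len(profile)
```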
### 3.2 Signal quality
Digital sampling is inherently a non-linear process: with less than an infinite number of bits, noise will inevitably be added to the signal. As the determination of precise times-of-arrival depends on accurate pulse profile shapes, preservation of the pulse shape is the primary concern in a coarsely-quantized pulsar observing system. The extreme case of 1-bit sampling, in which “on” and “off” values are assigned by comparing each sample to a running mean, yields the noisiest reproduced signal. Data rate considerations have motivated the choice of 2- and 4-bit sampling for the Mark IV system, an improvement on 1-bit sampling, but still coarse. This quantization will necessarily affect the observed pulse shapes and signal-to-noise ratios in statistically predictable ways.
Preserving the pulse profile in the case of 2-bit quantization requires some care. In this quantization process, the decision thresholds are $`-v_0`$, 0 and $`+v_0`$ and the values assigned to the output levels are $`-n`$, $`-1`$, $`+1`$ and $`+n`$, where $`n`$ is not necessarily an integer. For a randomly fluctuating signal, Cooper (1970) finds a recoverable signal-to-noise ratio of 0.88 using $`n=3`$ and $`v_0`$ equal to the root-mean-square voltage of the input signal, $`v_{\mathrm{rms}}`$. However, in pulsar observations there is an additional complication: the signal is dedispersed after quantization. For the straightforward choice of $`v_0=1.0v_{\mathrm{rms}}`$ and $`n=3`$, this process can result in the appearance of dips to either side of the pulse when power is shifted to the aligned peak from the neighbouring regions of the profile \[Jenet & Anderson 1998\]. This effect is most pronounced when the dispersion time across the observing bandwidth is comparable to the pulse width.
To find the quantization levels which minimize dips, we performed Monte Carlo simulations of observations with various quantization thresholds. The simulations used a high-resolution pulse profile of PSR B1534$`+`$12, dispersing, quantizing, dedispersing and accumulating 5000 of these pulses in each of 10 trials at each of several different initial signal-to-noise levels. The resulting profiles were then cross-correlated against the original profile, following the standard procedure used in pulsar timing. The uncertainty in the cross-correlation fit is therefore a measure of both the strength of the reproduced profile and its resemblance to the original. Figure 3 plots the root-mean-square cross-correlation uncertainties against initial signal-to-noise ratio for 4-bit quantization with $`v_0=0.59v_{\mathrm{rms}}`$ and evenly-spaced output levels, for 2-bit quantization with $`v_0=1.40v_{\mathrm{rms}}`$ and $`n=4`$ and for 2-bit quantization with $`v_0=1.0v_{\mathrm{rms}}`$ and $`n=3`$. It is apparent that the first two cases retain fairly good linearity across the range in question, where the strength of the individual pulses ranges from 0.5 of the system noise to 8 times the system noise, whereas the third case yields very poor reproductions of the original profile, particularly at higher signal-to-noise ratios. As the combination of $`v_0=1.40v_{\mathrm{rms}}`$ and $`n=4`$ for 2-bit quantization best eliminates dips and preserves the pulse shape, while making the final signal-to-noise ratio roughly 0.82 of the undispersed value, these parameters were adopted for all 2-bit observations.
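The adopted scheme can be written out directly (a sketch we add for illustration; the hardware packs 2-bit codes rather than floats, and the output level $`n`$ is applied when the samples are unpacked):

```python
import numpy as np

def quantize_2bit(v, v_rms, n=4.0):
    # thresholds at -v0, 0, +v0 with v0 = 1.40 * v_rms;
    # output levels -n, -1, +1, +n (adopted values: n = 4)
    v0 = 1.40 * v_rms
    mag = np.where(np.abs(v) < v0, 1.0, n)
    return np.where(v >= 0.0, mag, -mag)
```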
In practice, the quantized $`\widehat{v}_{\mathrm{rms}}`$ is estimated during data acquisition by calculating histograms of the incoming data using $`n=3`$. The attenuation of the input voltage is then adjusted until the measured $`\widehat{v}_{\mathrm{rms}}=0.71v_0`$. Thus the quantization threshold is set based on the quantized power rather than the unquantized power. Assuming a Gaussian distribution of counts, $`\widehat{v}_{\mathrm{rms}}^2`$ may be calculated analytically for any given $`v_0`$ and $`n`$:
$$\widehat{v}_{\mathrm{rms}}^2=\frac{1}{4}\left(n^2+(1n^2)P(0.5,0.5\left[\frac{v_0}{v_{\mathrm{rms}}}\right]^2)\right),$$
(2)
where $`P(a,x)`$ is the incomplete gamma function and the factor of 1/4 is an arbitrary normalization. Based on this calculation, the combination of $`n=4`$ and an input $`v_{\mathrm{rms}}`$ voltage such that $`v_0\approx 1.4\widehat{v}_{\mathrm{rms}}`$ yields very good power linearity. (Note that for $`n=3`$, $`v_0=1.4\widehat{v}_{\mathrm{rms}}`$ implies $`v_0\approx 1.5v_{\mathrm{rms}}`$.) The linearity as a function of $`v_0`$ and $`n`$ may be seen in Figure 4. The flat response for $`n=4`$ in the region around $`v_0\approx 1.4\widehat{v}_{\mathrm{rms}}`$ confirms the conclusions of the simulations.
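Equation (2) is easy to evaluate with the regularized incomplete gamma function (a sketch we add; `scipy.special.gammainc` computes $`P(a,x)`$, and the function name is ours):

```python
import numpy as np
from scipy.special import gammainc      # regularized lower incomplete gamma P(a, x)

def quantized_rms(v0_over_vrms, n):
    # eq. (2), including its arbitrary 1/4 normalization
    p = gammainc(0.5, 0.5 * v0_over_vrms ** 2)
    return np.sqrt(0.25 * (n ** 2 + (1.0 - n ** 2) * p))
```

In the limits, all samples land on the outer levels ($`\widehat{v}_{\mathrm{rms}}\to n/2`$ as $`v_0\to 0`$) or on the inner ones ($`\to 1/2`$ as $`v_0\to \mathrm{}`$), which brackets the operating point.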
Other procedures may still be necessary in order to recover the correct pulse shape. If there are very large power variations, for instance, the quantized power levels will often underestimate the true power and dips will result despite the $`n=4`$ correction. Jenet and Anderson (1998) overcome this problem in their 50-MHz baseband recorder by calculating the digitized $`\widehat{v}_{\mathrm{rms}}`$ many times over the course of the predicted pulse period and dynamically adjusting $`n`$ to preserve linearity. However, this method is not practical to compensate for very rapid fluctuations in system temperature during observations of the fastest pulsars. For example, the power in the pulsar signal may cause the telescope system temperature to fluctuate by a factor of 10 on a time scale of 1/1000 of a period, perhaps 2 $`\mu `$s. With a sample rate of 10 MHz, this would allow only 20 samples per noise-level calculation, leading to estimation errors on $`v_{\mathrm{rms}}`$ on the order of $`20^{-1/2}\approx 20\%`$, too large to accurately adjust the output levels. Jenet and Anderson (1998) also discuss a correction for “scattered power” which becomes apparent in their band after dynamically setting $`n`$. With the fixed $`n=4`$ scheme, scattered power does not appear to be a problem. It is possible that the level-setting happens to be optimized to make the strength of residual dips and the scattered power exactly equal; however, as pulse-shape distortions are not evident, it appears that the existing scheme provides an adequate balance between efficient performance and high-quality signal reproduction.
## 4 OBSERVATIONS WITH THE MARK IV INSTRUMENT
The Mark IV system and its prototypes have already been used for a variety of different observing purposes, such as studying single and giant pulses in PSR B1937+21 \[Cognard et al. 1996\], polarimetry of a large number of millisecond pulsars \[Stairs, Thorsett & Camilo 1999\], high-precision timing of millisecond pulsars, including the double–neutron-star binary PSR B1534+12 \[Stairs et al. 1998\] and, using a software-synthesized filterbank, searches for the extremely fast pulsars that may exist in globular clusters. There have also been preliminary observations using the instrument as a backend for radar studies of asteroids; the instrument is clearly proving to be as flexible as hoped. Below we address the issue of greatest concern in pulsar timing: the time resolution obtainable with the new instrument relative to that obtained with the filterbank system it supersedes. We also discuss the short-term timing stability of the instrument and some interference-excision techniques developed during the course of the first observations with the completed system.
### 4.1 Comparison with analogue filterbank
Pre-detection coherent dedispersion, as implemented in Mark IV, is expected to allow marked improvement in the precision of pulsar timing experiments relative to post-detection dedispersion systems, particularly in cases with substantial dispersion smearing within individual spectral channels. To investigate this, a series of test observations were made using the 305 m Arecibo telescope at 430 MHz. The Mark IV system collected data in parallel with the earlier Mark III system \[Stinebring et al. 1992\], which was commonly used for pulsar timing experiments before the recent Arecibo upgrade. For these observations, the Mark III system used a $`2\times 32\times 250`$ kHz analogue filter bank to detect signals across an 8 MHz passband in two polarizations. The detected signals were low pass filtered with a time constant of 100 $`\mu `$s, after which they were digitized and folded modulo the predicted topocentric pulse period.
#### 4.1.1 Expected precision of time of arrival measurements
The measurement of a pulse arrival time is made by fitting an observed pulse profile, $`p(t)`$, to a scaled, shifted high signal to noise ratio standard profile, $`s(t)`$:
$$p(t)=a+bs(t\tau )+g(t)$$
(3)
where a, b and $`\tau `$ are constants, and $`g(t)`$ represents random radiometer and background noise, and where $`0<t<P`$, with $`P`$ being the pulsar period. The quantity of greatest interest is the time shift $`\tau `$ which, when added to the integration start time (along with a mid-scan correction), gives the pulse time of arrival.
The uncertainty in the arrival time is dominated by the uncertainty in $`\tau `$, $`\sigma _\tau `$. In the limit where the pulsar is much weaker than the system noise, this uncertainty is \[Downs & Reichley 1983\]:
$$\sigma _\tau =\frac{\sigma _n/b}{\left[_0^P(s^{}(t))^2𝑑t\right]^{1/2}},$$
(4)
where $`s^{}(t)`$ is the time derivative of the standard profile and $`\sigma _n`$ is a measure of system noise. According to the radiometer equation,
$$\sigma _n\propto 1/(Bt)^{1/2}$$
(5)
where $`B`$ is the bandwidth of detected radiation and $`t`$ is the integration time. The ratio of timing precision of two observing systems, A and B, can therefore be written
$$\frac{\sigma _{\tau ,A}}{\sigma _{\tau ,B}}=\left(\beta \frac{B_Bt_B}{B_At_A}\right)^{1/2},$$
(6)
where the shape factor $`\beta `$ is defined by
$$\beta =\frac{(s_B^{}(t))^2𝑑t}{(s_A^{}(t))^2𝑑t}.$$
(7)
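Equations (6) and (7) translate directly into a few lines of numpy (our illustration; the templates and sampling interval below are hypothetical placeholders):

```python
import numpy as np

def shape_factor(s_a, s_b, dt=1.0):
    # beta of eq. (7): ratio of integrated squared derivatives of two templates
    da = np.gradient(s_a, dt)
    db = np.gradient(s_b, dt)
    return np.sum(db ** 2) / np.sum(da ** 2)

def precision_ratio(s_a, s_b, B_a, t_a, B_b, t_b):
    # sigma_tau,A / sigma_tau,B of eq. (6)
    return np.sqrt(shape_factor(s_a, s_b) * (B_b * t_b) / (B_a * t_a))
```

A smeared template has smaller derivatives, so its shape factor is below unity and the sharper system wins in expected timing precision.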
In the case of interest, system A is the Mark IV coherent dedispersion system and the profile $`s_A(t)`$ is a close representation of the intrinsic pulse profile $`s_{\mathrm{int}}(t)`$ as emitted by the pulsar. System B is the Mark III post-detection dedispersion system. The observed pulse shape $`s_B(t)`$ is the intrinsic pulse profile convolved with the effects of dispersion smearing and the detector time constant, calculated as follows.
The effect of dispersion smearing may be quantified by noting that the transmission function of the filters used by Mark III is well described by a Gaussian,
$$\varphi (f_1)\propto \mathrm{exp}[-(f_1)^2/w^2]$$
(8)
where $`f_0`$ is the centre frequency of the sub-band defined by the filter, $`f_1`$ is the frequency offset within the filter sub-band, and $`w=125`$ kHz is the filter half-width. To first order, the signal at frequency $`f=f_0+f_1`$ from a pulsar with dispersion measure DM is delayed by an amount $`t-t_0=(1/\alpha )f_1`$ relative to the centre of the sub-band, where the dispersion slope is
$$\alpha =\frac{1.205\times 10^4f_0^3}{\mathrm{DM}}\frac{\mathrm{pc}\mathrm{cm}^3}{\mathrm{MHz}^2\mathrm{s}}.$$
(9)
Thus the intrinsic pulse profile is convolved with
$$\varphi (\alpha t)=\mathrm{exp}[-(\alpha t/w)^2].$$
(10)
Low pass filtering of the detected signal has the effect of further convolving the pulse profile with
$$\mathrm{exp}(-t/t_d),$$
(11)
where $`t_d=100\mu `$s is the detector time constant.
The pulse profile expected to be observed by Mark III, $`s_B(t)`$, can be predicted by applying these convolutions to the intrinsic pulse profile as measured by Mark IV, $`s_A(t)`$. An example of a Mark IV profile shape and the same shape filtered by the Mark III system is shown in Figure 5. From such profiles, the shape factor $`\beta `$ and expected improvement in timing precision, $`\sigma _{\tau ,A}/\sigma _{\tau ,B}`$ can be derived. Values for the sources in our test observations are given in Table 1.
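These two convolutions, and the resulting shape factor of eq. (7), can be sketched numerically. The observing parameters below (DM, pulse period, intrinsic width) are invented for illustration and do not correspond to any entry of Table 1:

```python
import numpy as np

f0, dm, w = 430.0, 50.0, 0.125   # MHz, pc cm^-3 (assumed), MHz
t_d, P, n = 100e-6, 0.5, 4096    # detector time constant (s), period (s), bins
dt = P / n
t = (np.arange(n) - n // 2) * dt

alpha = 1.205e-4 * f0**3 / dm    # dispersion slope of eq. (9), MHz/s

s_A = np.exp(-0.5 * (t / 2e-3)**2)   # assumed intrinsic (Mark IV) profile

g = np.exp(-(alpha * t / w)**2)      # dispersion smearing kernel, eq. (10)
d = np.zeros(n)                      # one-sided detector response, eq. (11)
d[t >= 0] = np.exp(-t[t >= 0] / t_d)

def conv(a, k):
    """Circular convolution with a unit-area kernel centred at t = 0."""
    k = k / k.sum()
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(np.fft.ifftshift(k))))

s_B = conv(conv(s_A, g), d)          # predicted Mark III profile

# eq. (7): smearing lowers the derivative power of the profile, so beta < 1
beta = np.sum(np.gradient(s_B, dt)**2) / np.sum(np.gradient(s_A, dt)**2)
```

Since $`\beta <1`$ here, eq. (6) predicts $`\sigma _{\tau ,A}<\sigma _{\tau ,B}`$ for equal bandwidth-time products, i.e. the coherently dedispersed system times better.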
#### 4.1.2 Observations and results
For these observations, pulsars were selected for which dispersion smearing across a single channel was comparable to or larger than the intrinsic pulse width and the filter bank time constant. Sources were observed simultaneously with the Mark III and Mark IV systems, typically for 29 or 58 minutes on a given day. Mark III covered a bandwidth of $`B_B=8`$ MHz (except for one observation; see Table 1). Mark IV covered a bandwidth of $`B_A=5`$ MHz with 4-bit sampling. Integration time for Mark III, $`t_B`$, varied; see Table 1. Integration time for Mark IV was fixed at $`t_A=190`$ s. Combining these bandwidths and integration times yields the predicted ratios of timing residuals given in Table 1.
Data from each observing system was reduced by calculating pulse arrival times and fitting to a pulsar timing model with the standard tempo (http://pulsar.princeton.edu/tempo) program. The root-mean-square values of the residuals from these fits are tabulated as $`\sigma _{\tau ,A}`$ and $`\sigma _{\tau ,B}`$.
In most cases Mark IV improved the timing precision as much as, or more than, expected. There are several possible sources of improvement beyond the predicted values. (1) Mark IV has better interference removal, since individual points in the time series are examined, as described in §4.3 below. (2) Variations in pulsar flux density within individual spectral channels of the Mark III system cause non-uniform dispersion smearing, resulting in biased arrival times. (3) Small errors in the calibration of the analogue filter bank could lead to errors of several microseconds.
### 4.2 Short-term timing stability
If there are no systematic effects in a set of timing data, the root-mean-square deviation of the times-of-arrival from the predicted model should decrease as $`t_{\mathrm{int}}^{-1/2}`$, where $`t_{\mathrm{int}}`$ is the integration time for the TOAs. To test whether this holds true for Mark IV data, the millisecond pulsars PSRs B1937+21 and J1713+0747 were observed for 30 minutes with 10 MHz bandwidth at the Arecibo Observatory. TOAs were then calculated for integration times ranging from 1 s to 640 s and fitted to the pulsar timing model using the tempo program. The rms postfit residuals were calculated for each integration length; they are displayed as a function of $`t_{\mathrm{int}}`$ in Figure 6. For PSR J1713+0747, the residuals closely follow the expected slope of $`t_{\mathrm{int}}^{-1/2}`$; the apparent drop for large integration times can be explained by small-number statistics. For PSR B1937+21, the rms residuals follow the expected slope until leveling off at a deviation of approximately 100 ns, indicating that systematic effects prevent greater timing precision.
It is likely that the systematic effects limiting the timing precision of PSR B1937+21 are not due to the Mark IV instrument but rather to the variability of the pulses reaching the Earth. Although PSR B1937+21, with a spin period of 1.56 ms, is the fastest pulsar known, and is indeed one of the most stable \[Kaspi, Taylor & Ryba 1994\], its emission is subject to significant scattering by the interstellar medium. This process has the effect of convolving the pulse profile with a variable exponential tail, resulting in slightly changed observed profiles and hence an unavoidable uncertainty in the TOA calculation. The magnitude of this uncertainty can be estimated by considering the pulsar scintillation due to diffraction in the interstellar medium. The size of the scintillation features can be described by the decorrelation bandwidth, $`\mathrm{\Delta }\nu `$, and scintillation timescale, $`\mathrm{\Delta }t`$, for a given observing frequency. At 1400 MHz, these parameters for PSR B1937+21 have been found to be roughly $`\mathrm{\Delta }\nu =0.83`$ MHz and $`\mathrm{\Delta }t=400`$ s \[Rawley, Taylor & Davis 1988\], leading to a scattering timescale of $`\tau _\mathrm{s}=1/(2\pi \mathrm{\Delta }\nu )=190`$ ns. The portion of the timing uncertainties due to scattering-induced profile changes should be roughly equal to $`\sigma _{\mathrm{TOA}}=\tau _\mathrm{s}\sqrt{\mathrm{\Delta }\nu /B}`$ \[Cordes et al. 1990\], where $`B`$ is the observing bandwidth, yielding a minimum uncertainty of 55 ns for the 10 MHz bandpass of the Mark IV observations. While this is somewhat smaller than the observed lower bound on the timing uncertainties, the order of magnitude is correct, and indicates that the largest part of the remaining residuals is due to scattering-induced variations of the pulse profile rather than to instrumental systematics. 
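The quoted numbers follow directly from the two formulas above; as a quick arithmetic check (input values as given in the text):

```python
import math

dnu = 0.83e6   # decorrelation bandwidth at 1400 MHz, Hz
B = 10e6       # Mark IV observing bandwidth, Hz

tau_s = 1.0 / (2.0 * math.pi * dnu)       # scattering timescale, ~190 ns
sigma_toa = tau_s * math.sqrt(dnu / B)    # scattering-limited TOA error, ~55 ns
```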
Furthermore, the timing analysis shows that roughly halfway through the observation, the profile strength weakened considerably due to scintillation for about 10 minutes, increasing timing uncertainties for this period by roughly a factor of two. Variable data quality will also prevent the residuals from integrating down as predicted. Though it is possible that there may be some instrumental effects at the 50 ns level, deriving, for instance, from the 20 MHz clock signal, it appears from these observations that the overall timing properties of the Mark IV instrument are satisfactory.
### 4.3 Interference excision
An increasingly important problem facing radio astronomy is that of radio frequency interference. Communications satellites, television stations and other transmitters render even the protected astronomy bands in the radio spectrum vulnerable to noise. Though negotiations may bring about stronger protections, it is also useful to develop data-acquisition instruments which are capable of mitigating its effects. Baseband recording systems permit interference to be eliminated in software; the techniques discussed here could easily be adapted for applications beyond pulsar observations.
The Mark IV processing includes, optionally, two types of interference excision: narrowband and broadband. For narrowband excision, a short (typically 256-point) power spectrum is computed for every tenth FFT segment. A simple algorithm searches through the spectrum, calculating 5-point medians about every frequency bin and flagging each point which differs from its corresponding median by more than 4%, or which differs from both its nearest neighbours by more than 50%. Subsequently, points with both nearest neighbours or at least three of four nearest neighbours flagged are also flagged. A mask is produced which is used to zero the contaminated frequencies in the next set of data segments. On average, 5 to 10$`\%`$ of frequency points are zeroed in this fashion, with no apparent effect on the resulting pulse shape. Figure 7 shows a typical spectrum and resulting mask.
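A simplified version of this masking logic might look as follows; the thresholds are those quoted above, but the flag-growing step here implements only the both-nearest-neighbours rule (the three-of-four rule is omitted), and the production code surely differs in detail:

```python
import numpy as np

def narrowband_mask(spectrum, median_frac=0.04, neighbour_frac=0.50):
    """Sketch of the narrowband-excision mask: True where a channel
    should be zeroed in subsequent data segments."""
    n = len(spectrum)
    flag = np.zeros(n, dtype=bool)
    for i in range(n):
        # 5-point median about channel i (edges use a truncated window)
        lo, hi = max(0, i - 2), min(n, i + 3)
        med = np.median(spectrum[lo:hi])
        if med > 0 and abs(spectrum[i] - med) > median_frac * med:
            flag[i] = True
        else:
            left = spectrum[i - 1] if i > 0 else spectrum[i]
            right = spectrum[i + 1] if i < n - 1 else spectrum[i]
            if (abs(spectrum[i] - left) > neighbour_frac * left and
                    abs(spectrum[i] - right) > neighbour_frac * right):
                flag[i] = True
    # grow the mask: flag points whose nearest neighbours are both flagged
    grown = flag.copy()
    for i in range(1, n - 1):
        if flag[i - 1] and flag[i + 1]:
            grown[i] = True
    return grown

spec = np.ones(256)
spec[100] = 10.0          # one contaminated channel
mask = narrowband_mask(spec)
```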
Broadband noise is eliminated by searching through the packed data samples for power spikes greater than 30 times the root-mean-square noise above the median. If such spikes are found, the samples in question are set to zero. This algorithm is extremely effective at eliminating broadband noise, again with no apparent effect on the pulse profile shape. “Before” and “after” grayscale plots of an interference-contaminated observation are shown in Figure 8. The benefits of interference excision may be easily deduced from this figure: in some of the 10-second integrations in the first analysis, the pulsar is drowned out by the interference and does not appear at all, whereas in the second analysis it is always present. The baseline of the overall pulse profile is also much improved. This technique not only improves the signal-to-noise ratio, but also eliminates a large source of systematic error in the time-of-arrival fitting.
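The broadband zapping is simpler still; a hedged sketch of the rule quoted above:

```python
import numpy as np

def zap_spikes(samples, nsigma=30.0):
    """Zero samples whose power exceeds the median power by more than
    nsigma times the rms of the power."""
    power = samples.astype(float) ** 2
    threshold = np.median(power) + nsigma * np.std(power)
    out = samples.copy()
    out[power > threshold] = 0
    return out

x = np.ones(1000)
x[500] = 1000.0        # one strong broadband spike
y = zap_spikes(x)
```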
## 5 CONCLUSION
The Mark IV system provides an example of a 10 MHz baseband recording system designed for use in pulsar observations. It meets and exceeds the predicted improvements in timing precision over earlier filterbank systems, and the analysis code contains features allowing optimal profile-shape recovery for 2-bit quantization and the excision of narrowband and broadband interference. These improvements could easily be generalized for application to other types of observations.
Future designs of pre-detection digital recorders will likely take advantage of the availability of more powerful computers and faster recording media to produce instruments with wider bandwidths. The performance-to-cost ratio of computers tends to increase exponentially, doubling every 18 months. Disk storage also becomes faster and less expensive with time, while the cost of magnetic tapes tends to remain more stable; thus the temptation is to migrate toward wide-bandwidth systems with only disk storage. Naturally, for an instrument in which the data are processed directly from disk and overwritten by subsequent observations, the reprocessing capability provided by the more expensive and slower tape storage will be lost. However, there are justifiable reasons for moving to such a scheme. At frequencies above 1 GHz, for instance, pulsar signals are often so weak that more than 10 MHz observing bandwidth is required to obtain usable signal-to-noise ratios. Further, although the precision of measurements from strong, distant pulsars will be limited by scattering, as discussed in §4.2, improved signal-to-noise ratio is useful for nearly all other observations. Finally, the inherent flexibility of baseband recording systems should encourage the development of instruments which can be used for a multitude of different types of centimetre-wavelength observations.
## Acknowledgements
We thank Hal Taylor, Jay Shrauner and Phil Perillat for work in designing the Mark IV prototypes, Bob Wixted for helpful discussions, Stan Chidzik for laying out and assembling the circuit boards, Mark Krumholz, Christopher Scaffidi and Donald Priour for hardware work and Walter Brisken for SAM-350 software testing. The Arecibo Observatory, a facility of the National Astronomy and Ionosphere Center, is operated by Cornell University under a cooperative agreement with the U. S. National Science Foundation. I. H. S. received support from an NSERC 1967 fellowship. S. E. T. is an Alfred P. Sloan Research Fellow. This work was supported by the NSF and the Seaver Institute.
Figure 1: (a) The $`\mathrm{\Phi }_{\gamma \to \eta _c}^{(i)}`$ impact factor corresponds to the sum of the graphs with different gluon configurations. (b) The effective Pomeron $`\to `$ two-Odderon vertex $`W`$. Each group of three outgoing gluons is in a totally symmetric color singlet state.
DESY 99-195 ISSN 0418–9833
A New Odderon Solution in Perturbative QCD
J. Bartels<sup>a</sup> <sup>1</sup><sup>1</sup>1Supported by the TMR Network ”QCD and Deep Structure of Elementary Particles”, L.N.Lipatov<sup>b</sup> <sup>2</sup><sup>2</sup>2Supported by the CRDF and INTAS-RFBR (grants:RP1-253,95-0311) and by the Deutsche Forschungsgemeinschaft, G.P.Vacca<sup>a</sup> <sup>3</sup><sup>3</sup>3Supported by the Alexander von Humboldt Stiftung
<sup>a</sup> II. Inst. f. Theoretische Physik, Univ. Hamburg, Luruper Chaussee 149, D-22761 Hamburg
<sup>b</sup> St.Petersburg Nuclear Physics Institute, Gatchina, Orlova Roscha, 188350, Russia.
## Abstract
We present and discuss a new bound state solution of the three gluon system in perturbative QCD. It carries the quantum numbers of the odderon, has intercept at one and couples to the $`\gamma ^{}\to \eta _c`$ impact factor.
1. The unitarization of the BFKL Pomeron presents one of the major tasks in QCD. After the successful calculation of the NLO corrections to the BFKL kernel and recent progress in analyzing its properties , there are several directions in going beyond the two-gluon ladder approximation. One of them investigates multigluon compound states. After the first formulation of the BKP equations it was found that, in the large-$`N_c`$ limit, their solutions have remarkable mathematical properties , and the hamiltonian is the same as for the integrable spin chain . The existence of integrals of motion and the duality symmetry provide powerful tools in analysing the spectrum of energy eigenvalues. Another line of research investigates the transition between states with different numbers of gluons .
As the first step beyond the two-gluon system (i.e. the BFKL equation) the spectrum of the three gluon system (odderon ) has attracted much attention recently. Apart from the theoretical interest in understanding the dynamics of the n-gluon states with $`n>2`$, there is the long-standing odderon problem which provides interest from the phenomenological side. After several variational studies an eigenfunction of the integral of motion with the odderon intercept slightly below one was constructed by Janik and Wosiek (see also ) and verified by Braun et al . From the phenomenological side, a possible signature of the odderon in deep inelastic scattering at HERA has been investigated by several authors . In this context also the coupling of the odderon to the $`\gamma ^{}\to \eta _c`$ vertex has been calculated . Another piece of information relevant for the three gluon channel is the Pomeron $`\to `$ two-odderon vertex which has been obtained from an analysis of the six-gluon state . Its momentum structure coincides with the momentum dependence found in .
In this letter we present a new explicit solution to the three gluon system which carries the quantum numbers of the odderon and has intercept one. It is derived from the momentum structure found in and . This solution can be interpreted as the reggeization of a d-reggeon in QCD (the even-signature color octet reggeon which is degenerate with the odd-signature reggeized gluon), which interacts with the reggeized gluon. Our new solution can also be obtained by applying a duality transformation to the antisymmetric solution found in . From the phenomenological point of view, this new solution seems to be more important than the previous one: its intercept is higher than that of the totally symmetric odderon solution of . In the final part of this letter we shall explicitly show how our solution couples to the $`\gamma ^{}\to \eta _c`$-vertex.
2. Let us begin with the coupling of three gluons to external particles. In order to be able to apply perturbative QCD we should start from a virtual photon, $`\gamma ^{}`$, which splits into two quarks. For the elastic impact factor $`\gamma ^{}\to \gamma ^{}`$ it was shown in that in the t-channel with three gluons the bootstrap property of the gluon reduces the number of reggeized gluons to two, i.e. there is no state with three reggeized gluons. Therefore, a nonzero coupling of a three gluon t-channel to external particles needs an outgoing state whose parity is even, i.e. opposite to the photon. The easiest candidate is the $`\gamma ^{}\to \eta _c`$-vertex which has been calculated in (Fig. 1a). Its momentum structure (as a function of transverse momenta) has the following form:
$`\mathrm{\Phi }_{\gamma \to \eta _c}^{(i)}\sim g_s^3ϵ_{ij}{\displaystyle \frac{q_j}{𝒒^2}}\left({\displaystyle \underset{(123)}{\sum }}{\displaystyle \frac{(𝒌_1+𝒌_2-𝒌_3)\cdot 𝒒}{Q^2+4m_c^2+(𝒌_1+𝒌_2-𝒌_3)^2}}-{\displaystyle \frac{𝒒^2}{Q^2+4m_c^2+𝒒^2}}\right).`$ (1)
Here $`𝒒=𝒌_1+𝒌_2+𝒌_3`$, the sum extends over the cyclic permutations of (1,2,3), and we have left out an overall factor in front which is not important for our present discussion. The color structure is simply given by the symmetric structure constants $`d^{a_1a_2a_3}`$. By introducing the short hand notation
$`\phi _{-}^{(i)}(𝒌,𝒌^{})=g_s^2ϵ_{ij}{\displaystyle \frac{q_j}{𝒒^2}}{\displaystyle \frac{(𝒌-𝒌^{})\cdot (𝒌+𝒌^{})}{Q^2+4m_c^2+(𝒌-𝒌^{})^2}},`$ (2)
we rewrite (1) as
$`\mathrm{\Phi }_{\gamma \to \eta _c}^{(i)}\sim g_s\left({\displaystyle \underset{(123)}{\sum }}\phi _{-}^{(i)}(𝒌_1+𝒌_2,𝒌_3)-\phi _{-}^{(i)}(𝒌_1+𝒌_2+𝒌_3,\mathrm{𝟎})\right).`$ (3)
The function $`\phi _{-}`$ is antisymmetric under the exchange of its two arguments. It is easy to see that $`\mathrm{\Phi }`$ vanishes as one of the transverse momenta $`𝒌_i`$ goes to zero (with fixed $`𝒒`$). The full sum of the cyclic permutations is symmetric under the exchange of any pair of momenta $`(𝒌_i,𝒌_j)`$, but because of the antisymmetry of $`\phi _{-}`$ its symmetry structure is more involved and will be discussed below.
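Both properties — the antisymmetry of $`\phi _{-}`$ and the vanishing of $`\mathrm{\Phi }`$ as one $`𝒌_i`$ goes to zero — are easy to check numerically. The sketch below keeps only the scalar part of eqs. (2) and (3), dropping the common factor $`g_s^3ϵ_{ij}q_j/𝒒^2`$, with arbitrary illustrative values for $`Q^2`$ and $`m_c^2`$:

```python
import numpy as np

Q2, mc2 = 4.0, 2.25      # photon virtuality and m_c^2 (arbitrary units)

def phi(k, kp):
    """Scalar part of phi_-: (k - k').(k + k') / (Q^2 + 4 m_c^2 + (k - k')^2)."""
    d = k - kp
    return np.dot(d, k + kp) / (Q2 + 4 * mc2 + np.dot(d, d))

def Phi(k1, k2, k3):
    """Scalar part of the impact factor: cyclic sum minus the subtraction term."""
    q = k1 + k2 + k3
    cyc = [(k1, k2, k3), (k2, k3, k1), (k3, k1, k2)]
    return sum(phi(a + b, c) for a, b, c in cyc) - phi(q, np.zeros(2))

k1 = np.array([0.7, -0.3]); k2 = np.array([-0.2, 0.9]); k3 = np.array([0.4, 0.1])
anti = phi(k1, k2) + phi(k2, k1)    # antisymmetry: the two terms cancel
van = Phi(np.zeros(2), k2, k3)      # impact factor at k_1 = 0: vanishes
```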
Interestingly enough, the same momentum structure (3) has also been found in the new Pomeron $`\to `$ two-odderon vertex of (Fig. 1b). Starting from eq.(6.12) in , one first expresses the function $`W(𝒌_1,𝒌_2,𝒌_3;𝒌_4,𝒌_5,𝒌_6)`$ in terms of the function $`G(𝒌_1,𝒌_2,𝒌_3)`$ which was first introduced in and further investigated in . As an important property we note that this $`G`$-function vanishes if either $`𝒌_1`$ or $`𝒌_3`$ goes to zero. Next we introduce the function
$`\phi _{-}(𝒌_1,𝒌_2;𝒌_3,𝒌_4)=`$
$`g_s^4\left(G(𝒌_1,𝒌_2+𝒌_3,𝒌_4)-G(𝒌_2,𝒌_1+𝒌_3,𝒌_4)-G(𝒌_1,𝒌_2+𝒌_4,𝒌_3)+G(𝒌_2,𝒌_1+𝒌_4,𝒌_3)\right).`$ (4)
Then the Pomeron $`\to `$ two-odderon vertex $`W`$ takes the following form:
$`W(𝒌_1,𝒌_2,𝒌_3;𝒌_4,𝒌_5,𝒌_6)`$
$`\sim {\displaystyle \underset{(123)}{\sum }}{\displaystyle \underset{(456)}{\sum }}\phi _{-}(𝒌_1+𝒌_2,𝒌_3;𝒌_4,𝒌_5+𝒌_6)-{\displaystyle \underset{(123)}{\sum }}\phi _{-}(𝒌_1+𝒌_2,𝒌_3;\mathrm{𝟎},𝒌_4+𝒌_5+𝒌_6)`$
$`-{\displaystyle \underset{(456)}{\sum }}\phi _{-}(𝒌_1+𝒌_2+𝒌_3,\mathrm{𝟎};𝒌_4,𝒌_5+𝒌_6)+\phi _{-}(𝒌_1+𝒌_2+𝒌_3,\mathrm{𝟎};\mathrm{𝟎},𝒌_4+𝒌_5+𝒌_6)`$ (5)
Now one easily sees that the momentum structure of the three gluon systems (123) or (456) is the same as in (3). Again we have the property that $`W`$ vanishes as any of the $`𝒌_i`$ goes to zero (from (5) one sees immediately that $`W\to 0`$ when $`𝒌_i\to 0`$, with the total odderon momenta $`𝒌_1+𝒌_2+𝒌_3`$ and $`𝒌_4+𝒌_5+𝒌_6`$ being kept fixed). The color structure is given by the product of the two d-tensors: $`d^{a_1a_2a_3}d^{a_4a_5a_6}`$.
3. Starting from this momentum structure it is easy to find a solution for the three gluon system. For simplicity we return to the impact factor $`\mathrm{\Phi }_{\gamma \to \eta _c}`$ in (3). Disregarding, for the moment, the last term which serves as a subtraction constant, we consider the convolution of $`\sum _{(123)}\phi _{-}(𝒌_1+𝒌_2,𝒌_3)`$ with the kernel for the three gluon state:
$`K_{(123)}={\displaystyle \underset{(ij)}{\sum }}K_{(ij)}`$ (6)
where
$`K_{(12)}=K(𝒌_1,𝒌_2;𝒌_1^{},𝒌_2^{})`$ (7)
is the LO BFKL kernel which includes the gluon trajectory functions. We find
$`\left(K_{(123)}\otimes {\displaystyle \frac{1}{𝒌_1^2𝒌_2^2𝒌_3^2}}{\displaystyle \underset{(123)}{\sum }}\phi _{-}(𝒌_1+𝒌_2,𝒌_3)\right)(𝒌_1,𝒌_2,𝒌_3)=`$
$`{\displaystyle \underset{(123)}{\sum }}\left(K_{(12)}\otimes {\displaystyle \frac{1}{𝒌_1^2𝒌_2^2}}\phi _{-}(𝒌_1,𝒌_2)\right)(𝒌_1+𝒌_2,𝒌_3)`$ (8)
In deriving this result it is important to use the color structure $`d^{a_1a_2a_3}`$, the antisymmetry of $`\phi _{-}`$, and the bootstrap property of the BFKL kernel. The latter is a relation which guarantees that production amplitudes with the gluon quantum number in their $`t`$ channels used for the construction of the absorptive part are characterized by just a single reggeized gluon exchange (at leading and next-to-leading orders). The convolution symbol $`\otimes `$ denotes the integral over transverse momenta, and we have explicitly written the gluon propagators between $`\phi _{-}`$ and the BFKL kernel. For the moment we ignore the fact that the integral in (8) is infrared singular, since the function $`\phi _{-}`$ does not vanish as one of its arguments goes to zero. Next we replace the function $`\phi _{-}(𝒌,𝒒-𝒌)`$ by the BFKL (normalized) eigenfunction $`E^{(\nu ,n)}(𝒌,𝒒-𝒌)`$; for odd values of the conformal spin $`n`$ this function is odd under the interchange of its arguments $`𝒌`$ and $`𝒒-𝒌`$. This leads to the following definition:
$`E_3^{(\nu ,n)}(𝒌_1,𝒌_2,𝒌_3)=g_s{\displaystyle \frac{N_c}{\sqrt{N_c^2-4}}}{\displaystyle \frac{1}{\sqrt{3\chi (\nu ,n)}}}{\displaystyle \underset{(123)}{\sum }}{\displaystyle \frac{(𝒌_1+𝒌_2)^2}{𝒌_1^2𝒌_2^2}}E^{(\nu ,n)}(𝒌_1+𝒌_2,𝒌_3),`$ (9)
where
$`\chi (\nu ,n)={\displaystyle \frac{N_c\alpha _s}{\pi }}\left(2\psi (1)-\psi ({\displaystyle \frac{1+|n|}{2}}+i\nu )-\psi ({\displaystyle \frac{1+|n|}{2}}-i\nu )\right)`$ (10)
is the characteristic function of the BFKL kernel, and the global color structure is again given by $`d^{a_1a_2a_3}`$. The function $`E_3^{(\nu ,n)}`$ satisfies (8), but since $`E^{(\nu ,n)}`$ is an eigenfunction of the BFKL kernel, we can go one step further and obtain
$`K_{(123)}E_3^{(\nu ,n)}=\chi (\nu ,n)E_3^{(\nu ,n)}.`$ (11)
The leading eigenvalue for $`n=\pm 1`$, $`\nu =0`$ lies at zero, i.e. in the angular momentum plane the rightmost singularity lies at $`j=1`$. Hence this solution dominates over the totally symmetric solution of . Let us remark that in (9) we have included a normalization factor such that the norm of $`E_3^{(\nu ,n)}`$ turns out to be equal to the norm of $`E^{(\nu ,n)}`$.
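The two eigenvalues quoted above can be checked directly from eq. (10); the sketch below evaluates $`\chi (\nu =0,n)`$ in units of $`N_c\alpha _s/\pi `$, using SciPy's digamma function:

```python
from scipy.special import digamma

def chi0(n):
    """chi(nu=0, n) in units of N_c alpha_s / pi, from eq. (10):
    at nu = 0 it reduces to 2 psi(1) - 2 psi((1+|n|)/2)."""
    a = (1 + abs(n)) / 2.0
    return 2.0 * digamma(1.0) - 2.0 * digamma(a)

chi_odderon = chi0(1)   # n = +/-1: eigenvalue 0, i.e. intercept exactly 1
chi_pomeron = chi0(0)   # n = 0: 4 ln 2, the familiar BFKL pomeron value
```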
Property (8) can be viewed as the reggeization of the d-reggeon: starting from the initial condition (as given by (3)) and evolving the three gluon state with the help of the kernel (6), the identity (8) tells us that the three gluon system ”collapses” into a two-reggeon state, where one reggeon is the well-known reggeized gluon (in the antisymmetric color octet representation), the other one a d-reggeon (belonging to the symmetric color octet representation). The full state is in a color singlet, but it has odd C-parity. This situation can be compared with the three gluon state in the Pomeron channel (even C-parity) discussed in : here the initial condition is given by the $`D_{(3,0)}`$ function. The three gluons also evolve and ”collapse” into two reggeized gluons. The main difference lies in the evolution which, in the Pomeron channel, contains also a transition kernel $`2\to 3`$ gluons. Such a kernel is absent in the odderon channel.
4. For our further discussion it is convenient to switch to configuration space. We will show that the new solution (9) can also be obtained from another solution which has been found recently in . Using the Moebius invariance of the Hamiltonian for the compound states $`\mathrm{\Psi }_{m,\stackrel{~}{m}}(𝝆_0)`$ of the three reggeized gluons in LLA in the impact parameter space $`𝝆`$, we can write the ansatz for the corresponding wave function
$$f_{m,\stackrel{~}{m}}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)=\left(\frac{\rho _{23}}{\rho _{20}\rho _{30}}\right)^m\left(\frac{\rho _{23}^{}}{\rho _{20}^{}\rho _{30}^{}}\right)^{\stackrel{~}{m}}\phi _{m,\stackrel{~}{m}}(x,x^{}).$$
(12)
Here $`\rho _{kl}=\rho _k\rho _l`$, $`\rho _k=\rho _k^1+i\rho _k^2`$, and $`x=x^1+ix^2`$ is the anharmonic ratio:
$$x=\frac{\rho _{12}\rho _{30}}{\rho _{10}\rho _{32}}.$$
(13)
The quantum numbers $`m`$ and $`\stackrel{~}{m}`$ are the conformal weights of the state $`\mathrm{\Psi }_{m,\stackrel{~}{m}}(𝝆_0)`$ belonging to the basic series of the unitary representations of the Moebius group:
$$m=\frac{1}{2}-i\nu +\frac{n}{2},\stackrel{~}{m}=\frac{1}{2}-i\nu -\frac{n}{2},$$
(14)
where $`n`$ is the conformal spin, and $`d=1-2i\nu `$ is the anomalous dimension of the operator $`O_{m,\stackrel{~}{m}}(𝝆_0)`$ describing the compound state . The function $`f_{m,\stackrel{~}{m}}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)`$ is an eigenfunction of the integrals of motion $`A`$ and $`A^{}`$ , where $`A=i^3\rho _{12}\rho _{23}\rho _{31}\partial _1\partial _2\partial _3`$ with $`\partial _k=\partial /(\partial \rho _k)`$. In the $`x`$-representation the eigenvalue equation for $`A`$ takes the form
$$A_m\phi _{m,\stackrel{~}{m}}(x,x^{})=\lambda _m\phi _{m,\stackrel{~}{m}}(x,x^{}),$$
(15)
where $`A_m`$ can be written in the factorized form
$$A_m=a_{1-m}(x)a_m(x),a_m(x)=x(1-x)(i\partial )^{1+m}.$$
(16)
Note, that $`A_m`$ is the ordinary differential operator of the third order
$$A_m=i^3x(1-x)\left(x(1-x)\partial ^3+(2-m)(1-2x)\partial ^2-(2-m)(1-m)\partial \right).$$
(17)
We are looking for a solution which is annihilated by the operator $`AA^{}`$. The zero modes of the operator $`A_m`$ with $`\lambda _m=0`$ are $`1,x^m`$ and $`(1-x)^m`$. The corresponding wave function in the $`(x,x^{})`$ representation for the state symmetric under the cyclic transmutations $`𝝆_1\to 𝝆_2\to 𝝆_3\to 𝝆_1`$ of the gluon coordinates is
$$\phi _{m,\stackrel{~}{m}}^0(𝒙)=1+(-x)^m(-x^{})^{\stackrel{~}{m}}+(x-1)^m(x^{}-1)^{\stackrel{~}{m}}.$$
(18)
For even values of the conformal spin this wave function is not normalizable and does not correspond to any physical state.
However, for odd conformal spins
$$n=m-\stackrel{~}{m}=2k+1$$
(19)
the above expression vanishes at $`x\to 0,1`$ and $`\mathrm{\infty }`$
$$\phi _{m,\stackrel{~}{m}}^0(x,x^{})\to 0,$$
(20)
and is normalizable . In this case the wave function $`f_{m,\stackrel{~}{m}}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)`$ is anti-symmetric under the pair transmutations $`𝝆_i\leftrightarrow 𝝆_k`$ of the gluon coordinates. Therefore, due to the requirements of Bose symmetry of the total wave function, it describes a colourless state with the colour wave function proportional to the structure constant $`f_{a_1a_2a_3}`$ of the gauge group. This state has positive charge parity and gives a non-vanishing contribution to the structure function $`g_2`$ for the deep-inelastic scattering of a polarized electron off a polarized proton . The value of the energy turns out to be one half of the energy of the Pomeron with the corresponding $`m`$ and $`\stackrel{~}{m}`$ . The minimal value is reached for $`m=1,\stackrel{~}{m}=0`$ or $`m=0,\stackrel{~}{m}=1`$ and equals zero .
For the interactions of three (and more) gluons the Hamiltonian and the integrals of motion are invariant under duality transformations among the coordinates and momenta of the gluons . Generally this invariance leads to a degeneracy of the spectrum of these operators. Namely, two eigenfunctions $`\phi _{m,\stackrel{~}{m}}^1(x,x^{})`$ and $`\phi _{1-m,1-\stackrel{~}{m}}^2(x,x^{})`$ corresponding to the same eigenvalues of $`A`$ and $`A^{}`$ are related by the duality operator $`Q_{m,\stackrel{~}{m}}`$ :
$$Q_{m,\stackrel{~}{m}}\phi _{m,\stackrel{~}{m}}^1(x,x^{})=a_m(x)a_{\stackrel{~}{m}}(x^{})\phi _{m,\stackrel{~}{m}}^1(x,x^{})=$$
$$\left|x(1-x)\right|^2(i\partial )^{1+m}(i\partial ^{})^{1+\stackrel{~}{m}}\phi _{m,\stackrel{~}{m}}^1(x,x^{})=c\phi _{1-m,1-\stackrel{~}{m}}^2(x,x^{}),$$
(21)
where $`c`$ is an unessential constant.
Starting from the symmetric solution (18), let us use this duality transformation in order to obtain an odderon solution. Since for odd values of the conformal spin $`n`$ the duality operator changes its sign under the transformation $`x\to 1-x`$, the duality transformations lead to relations between totally symmetric and anti-symmetric wave functions $`f(𝝆_1,𝝆_2,𝝆_3;𝝆_0)`$. In the particular case $`\lambda =0`$ the anti-symmetric wave function $`\phi _{m,\stackrel{~}{m}}^0(x,x^{})`$ is given above in (18), and the symmetric wave function $`\phi _{m,\stackrel{~}{m}}^{odd}(x,x^{})`$ describing an odderon state can be obtained from the solution of the equation
$$Q_{m,\stackrel{~}{m}}\phi _{m,\stackrel{~}{m}}^{odd}(x,x^{})=c\phi _{1-m,1-\stackrel{~}{m}}^0(x,x^{}).$$
(22)
It is important to take into account that the solution $`f_{m,\stackrel{~}{m}}^{odd}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)`$ includes the propagators of the external gluons. The amputated solution $`F_{m,\stackrel{~}{m}}^{odd}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)`$ with the removed propagators can be written as follows
$$F_{m,\stackrel{~}{m}}^{odd}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)=\left|\frac{1}{\rho _{12}\rho _{23}\rho _{31}}\right|^2\left|A\right|^2f_{m,\stackrel{~}{m}}^{odd}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)=$$
$$\left|\frac{1}{\rho _{12}\rho _{23}\rho _{31}}\right|^2\left(\frac{\rho _{23}}{\rho _{20}\rho _{30}}\right)^m\left(\frac{\rho _{23}^{}}{\rho _{20}^{}\rho _{30}^{}}\right)^{\stackrel{~}{m}}\mathrm{\Phi }_{m,\stackrel{~}{m}}^{odd}(x,x^{}),$$
(23)
where $`\mathrm{\Phi }_{m,\stackrel{~}{m}}^{odd}(x,x^{})`$ is obtained to be
$$\mathrm{\Phi }_{m,\stackrel{~}{m}}^{odd}(x,x^{})=\left|Q_{m,\stackrel{~}{m}}\right|^2\phi _{m,\stackrel{~}{m}}^{odd}(x,x^{})=c\left|x(1-x)\right|^2(i\partial )^{2-m}(i\partial ^{})^{2-\stackrel{~}{m}}\phi _{1-m,1-\stackrel{~}{m}}^0(x,x^{}).$$
(24)
With the use of a Fourier transformation one can verify that
$$(i\partial )^{2-m}(i\partial ^{})^{2-\stackrel{~}{m}}\phi _{1-m,1-\stackrel{~}{m}}^0(x,x^{})=a\left(\delta ^2(x)-\delta ^2(1-x)+\frac{x^mx^{\stackrel{~}{m}}}{\left|x\right|^6}\delta ^2(\frac{1}{x})\right),$$
(25)
where $`a`$ is a constant. Therefore we obtain
$$F_{m,\stackrel{~}{m}}^{odd}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)$$
$$\sim \left|\frac{\rho _{20}\rho _{30}}{\rho _{10}^2\rho _{32}^3}\right|^2\left(\frac{\rho _{23}}{\rho _{20}\rho _{30}}\right)^m\left(\frac{\rho _{23}^{}}{\rho _{20}^{}\rho _{30}^{}}\right)^{\stackrel{~}{m}}\left(\delta ^2(x)-\delta ^2(1-x)+\frac{x^mx^{\stackrel{~}{m}}}{\left|x\right|^6}\delta ^2(\frac{1}{x})\right)$$
$$\sim \frac{E_{m\stackrel{~}{m}}(\rho _{20},\rho _{30})}{\left|\rho _{23}\right|^4}\delta ^2(\rho _{12})+\frac{E_{m\stackrel{~}{m}}(\rho _{10},\rho _{20})}{\left|\rho _{12}\right|^4}\delta ^2(\rho _{31})+\frac{E_{m\stackrel{~}{m}}(\rho _{30},\rho _{10})}{\left|\rho _{31}\right|^4}\delta ^2(\rho _{23}),$$
(26)
where
$$E_{m\stackrel{~}{m}}(𝝆_{10},𝝆_{20})=\left(\frac{\rho _{12}}{\rho _{10}\rho _{20}}\right)^m\left(\frac{\rho _{12}^{}}{\rho _{10}^{}\rho _{20}^{}}\right)^{\stackrel{~}{m}}$$
(27)
is the BFKL wave function. Using the fact that $`E_{m\stackrel{~}{m}}(𝝆_{10},𝝆_{20})`$ is an eigenfunction of the Casimir operators $`M^2=\rho _{12}^2\partial _1\partial _2`$ and $`M^{*2}`$ of the Moebius group, we can write the odderon solution (26) as follows:
$$F_{m,\stackrel{~}{m}}^{odd}(𝝆_1,𝝆_2,𝝆_3;𝝆_0)\sim \underset{i,k\ne l}{\sum }\delta ^2(𝝆_{li})\left|\mathbf{\partial }_i\right|^2\left|\mathbf{\partial }_k\right|^2E_{m\stackrel{~}{m}}(𝝆_{i0},𝝆_{k0}),$$
(28)
where the summation is performed over all gluon indices $`i,k,l=1,2,3`$ provided that $`i,k\ne l`$. After the transition to momentum space we obtain for this vertex function
$$F_{m,\stackrel{~}{m}}^{odd}(𝒌_1,𝒌_2,𝒌_3)\sim \underset{i,k\ne l}{\sum }\left|𝒌_i+𝒌_l\right|^2\left|𝒌_k\right|^2E_{m\stackrel{~}{m}}(𝒌_i+𝒌_l,𝒌_k),$$
(29)
where
$$E_{m\stackrel{~}{m}}(𝒌_1,𝒌_2)=\int \frac{d^2\rho _1d^2\rho _2}{(2\pi )^2}\mathrm{exp}\left(i\underset{r=1}{\overset{2}{\sum }}𝒌_r𝝆_r\right)E_{m\stackrel{~}{m}}(𝝆_1,𝝆_2).$$
(30)
Eq.(29) is the amputated counterpart of (9).
5. Finally let us analyze the coupling of the new odderon state represented by its eigenfunction (9) to the $`\gamma ^{}\to \eta _c`$ impact factor (3). Its knowledge will permit us to study scattering processes with odderon exchange, using the odderon Green function. The key point to note is that only the full impact factor $`\mathrm{\Phi }_{\gamma \to \eta _c}`$ in (3) has a ”good” infrared behaviour: it vanishes as any $`𝒌_i\to 0`$. Therefore, inside an integral any individual term $`\phi _{-}`$ will have infrared singularities, but they will cancel if we consider the full sum (3). These singularities are also related to the nature of the conformal invariant eigenfunction of the BFKL pomeron (27): in the momentum representation they contain $`\delta `$-like pieces , corresponding to a constant behaviour in configuration space, as one of the two coordinates $`𝝆_1`$, $`𝝆_2`$ is taken to $`\mathrm{\infty }`$. In a mathematical sense, therefore, the Pomeron eigenfunction is a distribution, and its meaning has to be understood by integrating with some test function. Depending on the space of test functions, one can expect slightly different results. The same must also be true for the action of an operator on (27), e.g. the BFKL Hamiltonian: the result will, again, be a distribution and has to be integrated with a test function. All this is not a merely mathematical observation, but it has a natural physical meaning: the space of test functions in BFKL dynamics is defined by couplings (”impact factors”) to colorless objects. In (3) the function $`\phi _{-}`$ alone does not have the normal ”good” properties of a colourless object; only the sum of all the terms in (3) defines a ”good” function.
Let us take a closer look at the scalar product of $`\mathrm{\Phi }_{\gamma \eta _c}`$ with $`E_3^{(\nu ,n)}`$, taking into account the antisymmetry properties of the two building objects, $`\phi _{}`$ and $`E^{(\nu ,n)}`$. Using the momentum structure in (3) and (9) one finds
$`\mathrm{\Phi }_{\gamma \eta _c}|E_3^{(\nu ,n)}`$ $`=`$ $`{\displaystyle \int 𝑑\mu _3\mathrm{\Phi }_{\gamma \eta _c}(\{𝒌_i\})E_3^{(\nu ,n)}(\{𝒌_i\})}`$ (31)
$`=`$ $`6{\displaystyle \int d^2𝒌\left[\phi _{}(𝒌,𝒒-𝒌)-\phi _{}(\mathrm{𝟎},𝒒)\right]\left(𝑲_LE^{(\nu ,n)}\right)(𝒌,𝒒-𝒌)},`$
where $`d\mu _3=\prod _id^2𝒌_i\delta ^{(2)}(𝒒-\sum _i𝒌_i)`$, and
$`\left(𝑲_LE^{(\nu ,n)}\right)(𝒌,𝒒-𝒌)=`$
$`{\displaystyle \frac{N_c\alpha _s}{2\pi ^2}}{\displaystyle \int d^2𝒍\left[\frac{𝒍^2}{𝒌^2(𝒍-𝒌)^2}E^{(\nu ,n)}(𝒍,𝒒-𝒍)-\frac{1}{2}\frac{𝒌^2}{𝒍^2(𝒌-𝒍)^2}E^{(\nu ,n)}(𝒌,𝒒-𝒌)\right]}.`$ (32)
One sees easily that $`𝑲_L`$ stands for ”half” of the forward BFKL kernel; adding the corresponding expression for $`𝑲_R`$ one obtains the BFKL kernel, but without the local piece. Due to the antisymmetry of $`E^{(\nu ,n)}`$, the local term in the BFKL kernel gives zero contribution. Therefore, in the first term of the integrand in (31), using the antisymmetry of $`\phi _{}`$, one could also include one half of this local BFKL-piece, such that $`𝑲_L`$ really represents ”half” of the full BFKL kernel. Ignoring all potential divergences, one might naively expect that (32) should, basically, lead to $`\chi E^{(\nu ,n)}`$, and our scalar product $`\mathrm{\Phi }|E^{(\nu ,n)}`$ equals $`\chi \phi _{}|E^{(\nu ,n)}`$. This expectation turns out to be correct, but the argument is rather subtle. First, one notices that the first and the second integrand in (32) by themselves lead to infrared divergent integrals, whereas the scalar product of $`\phi _{}`$ with $`E^{(\nu ,n)}`$ is convergent. So it is clear that in the integration we are not allowed simply to use $`𝑲_{BFKL}E^{(\nu ,n)}=\chi _{\nu ,n}E^{(\nu ,n)}`$. The divergent pieces, and possibly also finite parts, would be lost. All these complications would not be visible if $`\phi _{}`$ were a ”good” function.
In order to see the resolution to this puzzle, we calculate $`𝑲_LE^{(\nu ,n)}`$ explicitly. Rather than presenting details of this calculation we only quote the main result. In the coordinate representation we find:
$`(𝑲_LE^{(\nu ,n)})(\rho _1,\rho _2)={\displaystyle \frac{1}{2}}\chi _{\nu ,n}E^{(\nu ,n)}(\rho _1,\rho _2)+C\underset{\rho _1\to \mathrm{\infty }}{lim}E^{(\nu ,n)}(\rho _1,\rho _2),`$ (33)
where $`C`$ contains some infinite (after removing the infrared regularization) contributions and also finite pieces. In momentum space, this relation corresponds to the presence of some extra $`\delta `$ function-like pieces. It turns out that, as it should be, these extra terms give a contribution which is exactly cancelled by the second integrand in (31) (the subtraction term). It is only after these cancellations that finally we can write
$`\mathrm{\Phi }_{\gamma \eta _c}|E_3^{(\nu ,n)}=\sqrt{3\chi _{\nu ,n}}{\displaystyle \int d^2𝒌\phi _{}(𝒌,𝒒-𝒌)E^{(\nu ,n)}(𝒌,𝒒-𝒌)}.`$ (34)
Thus the matrix element of our odderon solution is similar to the corresponding matrix element of the pomeron solution: this is related to our interpretation of the new odderon as a compound state of ”f” and ”d” reggeized gluons. Note that the degeneracy between the ”f” and the ”d” gluons is exact in the large $`N_c`$ limit, and the duality can be considered as a manifestation of this symmetry. Finally we just remark that in calculating the norm of $`E_3^{(\nu ,n)}`$ the $`\delta `$-like pieces do not play any role. In fact, in place of $`\mathrm{\Phi }_{\gamma \eta _c}`$ one has the amputated odderon function given in (29) and, therefore, in place of $`\phi _{}`$ the amputated pomeron eigenfunction which has ”good” properties.
6. We have presented a new set of eigenfunctions of the odderon equation which is characterized by a spectrum with a maximum intercept at $`1`$. It is remarkable that the symmetry structure of this solution has been suggested by the impact factor $`\mathrm{\Phi }_{\gamma \eta _c}`$, to which the odderon couples, and by the Pomeron $`\to `$ two-odderon vertex $`W`$ which came out from the study of the six gluon amplitude. At the same time one can use the duality symmetry of the 3 gluon Hamiltonian to rederive this solution. This derivation shows the interconnection with solutions of different symmetry properties. Finally we have shown how to calculate the scalar product of the eigenfunction with the impact factor. This opens the possibility to calculate numerically the contribution of the new odderon states to some of those processes which have already been studied to probe the QCD odderon. Work in this direction is in progress.
Acknowledgements
J.B. and G.P.V. are grateful to M.A. Braun for very interesting discussions.
no-problem/9912/hep-th9912093.html | ar5iv | text |
## 1 Introduction
In this work we develop the Hopf algebra of renormalization to progress beyond the rainbow and chain approximations for anomalous dimensions.
Summing rainbows: In $`d`$ dimensions, the massless scalar one-loop integral with propagators to the powers $`\alpha ,\beta `$ is
$$G(\alpha ,\beta ;d):=g(\alpha )g(\beta )g(d-\alpha -\beta );g(\alpha ):=\mathrm{\Gamma }(d/2-\alpha )/\mathrm{\Gamma }(\alpha )$$
(1)
Now consider the interaction $`g\varphi ^{\dagger }\sigma \varphi `$, with a neutral scalar particle $`\sigma `$ coupled to a charged scalar $`\varphi `$, in the critical dimension, $`d_c=6`$. To find the anomalous field dimension $`\gamma `$ of $`\varphi `$, in the rainbow approximation of , one solves the consistency condition
$$1=aG(1,1+\gamma ;6)=\frac{a}{\gamma (\gamma -1)(\gamma -2)(\gamma -3)}$$
(2)
which ensures that the coupling $`a:=g^2/(4\pi )^{d_c/2}`$ cancels the insertion of the anomalous self energy. The perturbative solution of the resulting quartic is easily found:
$$\gamma _{\mathrm{rainbow}}=\frac{3-\sqrt{5+4\sqrt{1+a}}}{2}=-\frac{a}{6}+11\frac{a^2}{6^3}-206\frac{a^3}{6^5}+\cdots $$
(3)
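As a quick check (an illustration added to this text, not part of the original paper), the closed form above can be verified numerically: it satisfies the quartic consistency condition $`\gamma (\gamma -1)(\gamma -2)(\gamma -3)=a`$ and behaves as $`-a/6`$ at weak coupling.

```python
import math

def gamma_rainbow(a):
    # Closed-form root of gamma*(gamma-1)*(gamma-2)*(gamma-3) = a
    # that vanishes as a -> 0.
    return (3.0 - math.sqrt(5.0 + 4.0 * math.sqrt(1.0 + a))) / 2.0

a = 0.3
g = gamma_rainbow(a)
# The quartic consistency condition is satisfied...
residual = g * (g - 1.0) * (g - 2.0) * (g - 3.0) - a
# ...and the leading behaviour is -a/6 at small coupling.
slope = gamma_rainbow(1e-8) / 1e-8
```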
Resumming chains: At the other extreme, one may easily perform the Borel resummation of chains of self-energy insertions, within a single rainbow. Suppose that the self energy $`p^2\overline{\mathrm{\Sigma }}(a,p^2/\mu ^2)`$ is renormalized in the momentum scheme, and hence vanishes at $`p^2=\mu ^2`$. The renormalized massless propagator is $`\overline{D}=1/(p^2-p^2\overline{\mathrm{\Sigma }})`$. Then (3) is the rainbow approximation for $`\partial \overline{\mathrm{\Sigma }}/\partial \mathrm{log}(\mu ^2)`$ at $`p^2=\mu ^2`$. Following the methods of , one finds that the corresponding asymptotic series for chains is Borel resummable:
$$\gamma _{\mathrm{chain}}=-6\int _0^{\mathrm{\infty }}\frac{\mathrm{exp}(-6x/a)dx}{(x+1)(x+2)(x+3)}\sim -\frac{a}{6}+11\frac{a^2}{6^3}-170\frac{a^3}{6^5}+\cdots $$
(4)
which differs from the rainbow approximation at 3 loops, with 206 in (3) coming from the triple rainbow, while 170 in (4) comes from a chain of two self energies inside a third.
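The Borel integral can be compared numerically with the first terms of its asymptotic expansion. The sketch below (an illustration, with the signs of the alternating series written out explicitly) uses a plain Simpson rule; the integrand decays like $`\mathrm{exp}(-6x/a)`$, so a finite cutoff suffices at weak coupling.

```python
import math

def gamma_chain(a, cutoff=3.0, steps=30000):
    # Composite Simpson quadrature for
    #   gamma_chain = -6 * Int_0^inf exp(-6x/a) dx / ((x+1)(x+2)(x+3)).
    f = lambda x: math.exp(-6.0 * x / a) / ((x + 1.0) * (x + 2.0) * (x + 3.0))
    h = cutoff / steps
    s = f(0.0) + f(cutoff)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return -6.0 * s * h / 3.0

a = 0.3
exact = gamma_chain(a)
# First terms of the asymptotic series  -a/6 + 11 a^2/6^3 - 170 a^3/6^5 + ...
series = -a / 6.0 + 11.0 * a**2 / 6**3 - 170.0 * a**3 / 6**5
```

At this coupling the truncation error is of the size of the next (4-loop) term, which the assertions below allow for.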
Hopf algebra: We shall progress beyond the rainbow and chain approximations by including all possible nestings and chainings of the one-loop self-energy divergence. In other words, we consider the full Hopf algebra of undecorated rooted trees, established in and implemented in . Two figures suffice to exhibit the class of diagrams considered, and their divergence structure. The first exhibits a 12-loop example, the second exhibits its divergence structure. Due to the fact that we combine chains and rainbows, we have a full tree structure: the depth of the tree is larger than one, and there can be more than one edge attached to a vertex.
There are 4 notable features of this analysis.
1. We use the coproduct $`\mathrm{\Delta }`$ to combine the antipode $`S`$ and grading operator $`Y`$ in a star product $`S\star Y`$ whose residue delivers the contribution of each rooted tree.
2. We show that the rationality of rainbows extends to the contribution of every undecorated rooted tree, as had been inferred from examples in .
3. We confirm that a recent analysis of dimensional regularization applies at both $`d_c=4`$ and $`d_c=6`$, detecting poles of $`\mathrm{\Gamma }`$ functions that occur in even dimensions.
4. We obtain, to 30 loops, highly non-trivial alternating asymptotic series, which we resum, to high precision, by combining Padé and Borel methods.
## 2 Hopf-algebra method
Let $`t`$ be an undecorated rooted tree, denoting the divergence structure of a Feynman diagram. Then its coproduct is defined, recursively, by
$$\mathrm{\Delta }(t)=t\otimes e+(id\otimes B_+)(\mathrm{\Delta }(B_{-}(t)))$$
(5)
where $`e`$ is the empty tree, evaluating to unity, $`id`$ is the identity map, $`B_{-}`$ removes the root, giving a product of trees in general, and $`B_+`$ is the inverse of $`B_{-}`$, combining products by restoring a common root. The recursion terminates with $`\mathrm{\Delta }(e)=e\otimes e`$ and develops a highly non-trivial structure by the operation of the coproduct on products of trees
$$\mathrm{\Delta }\left(\prod _kt_k\right)=\prod _k\mathrm{\Delta }(t_k)$$
(6)
between each removal and restoration of a root. In Sweedler notation, it takes the form
$$\mathrm{\Delta }(t)=\sum _ka_k^{(1)}\otimes a_k^{(2)}=t\otimes e+e\otimes t+\sum _k^{\prime }a_k^{(1)}\otimes a_k^{(2)}$$
(7)
with single trees on the right and, in general, products on the left. The prime in the second summation indicates the absence of the empty tree. The field-theoretic role of the coproduct is clear: on the left products of subdivergences are identified; on the right these shrink to points. Subtractions are effected by the antipode, defined by the recursion
$$S(t)=-t-\sum _k^{\prime }S(a_k^{(1)})a_k^{(2)}$$
(8)
for a non-empty tree, with $`S(\prod _kt_k)=\prod _kS(t_k)`$ for products and $`S(e)=e`$.
Renormalization involves a twisted antipode, $`S_R`$. Let $`\varphi `$ denote the Feynman map that assigns a dimensionally regularized bare value $`\varphi (t)`$ to the diagram whose divergence structure is labelled by the tree $`t`$. Then we apply the recursive definition
$$S_R(t)=-R\left(\varphi (t)+\sum _k^{\prime }S_R(a_k^{(1)})\varphi (a_k^{(2)})\right)$$
(9)
with a renormalization operator $`R`$ that sets $`p^2=\mu ^2`$, in both the momentum and MS schemes, and in the MS scheme selects only the poles in $`\epsilon :=(d_c-d)/2`$.
We can use the coproduct to combine operators. Suppose that $`O_1`$ and $`O_2`$ operate on trees and their products. Then we define the star product $`O_1\star O_2`$ by
$$O_1\star O_2(t)=\sum _kO_1(a_k^{(1)})O_2(a_k^{(2)})$$
(10)
with ordinary multiplication performed after $`O_1`$ operates on the left and $`O_2`$ on the right of each term in the coproduct. By construction, $`S\star id`$ annihilates everything except the empty tree, $`e`$. The presence of $`R`$ makes $`S_R\star \varphi `$ finite and non-trivial. In particular, the renormalized Green function is simply
$$\mathrm{\Gamma }_R(t)=\underset{\epsilon \to 0}{lim}S_R\star \varphi (t)$$
(11)
whose evaluation was efficiently encoded in , using a few lines of computer algebra.
Here we present a new – and vital – formula for efficiently computing the contribution of an undecorated tree to the anomalous dimension. It is simply
$$\gamma (t)=\underset{\epsilon \to 0}{lim}\epsilon \varphi (S\star Y(t))$$
(12)
where $`Y`$ is the grading operator, with $`Y(t)=nt`$, for a tree with $`n`$ nodes. In general, $`Y`$ multiplies a product of trees by its total number of nodes. To see that this works, consider the terms in (11), in the momentum scheme, before taking the limit $`\epsilon \to 0`$. Each term has a momentum dependence $`\left(p^2\right)^{n(d-d_c)/2}`$, where $`n`$ is the number of loops (and hence nodes) of the tree on the right of the term in the Sweedler sum. If we multiply by $`n\epsilon `$, and then let $`\epsilon \to 0`$, we clearly obtain the derivative w.r.t. $`\mathrm{log}(\mu ^2/p^2)`$. Setting $`p^2=\mu ^2`$ we obtain the contribution to the anomalous dimension. Thus $`R`$ plays no role and we may replace $`S_R(t)`$ by $`lim_{R\to id}S_R(t)=\varphi (S(t))`$, where $`S`$ is the canonical antipode. Multiplication by $`n\epsilon `$ is achieved by $`\epsilon \varphi (Y(t))=n\epsilon \varphi (t)`$ on the right of the coproduct, where $`Y`$ acts only on single trees. Hence the abstract operator $`S\star Y`$ delivers the precise combination of products of trees whose bare evaluation as Feynman diagrams is guaranteed to have merely a $`1/\epsilon `$ singularity, with residue equal to the contribution to the anomalous dimension. Thus we entirely separate the combinatorics from the analysis.
## 3 Example
By way of example, we show how the 3-loop expansions of (3,4) result from (12). The combinatorics are now clear. The analysis, at first sight, seems to entail the detailed properties of $`\mathrm{\Gamma }`$ functions. However, appearances can be misleading.
In general, a dimensionally regularized bare value for an $`n`$-loop diagram, corresponding to the undecorated rooted tree $`t`$, is evaluated by the recursion
$$\varphi (t)=\frac{L(\epsilon ,n\epsilon )}{n\epsilon }\prod _k\varphi (b_k)$$
(13)
where $`b_k`$ are the branches originating from the root of $`t`$. It terminates with $`\varphi (e)=1`$. For the scalar theory with $`d_c=6`$, the master function is
$$L(\epsilon ,\delta )=\frac{a\delta }{\left(p^2\right)^\epsilon }G(1,1+\delta -\epsilon ;6-2\epsilon )=-\frac{a}{\left(p^2\right)^\epsilon }\frac{\mathrm{\Gamma }(1-\delta )\mathrm{\Gamma }(1+\delta )\mathrm{\Gamma }(2-\epsilon )}{\mathrm{\Gamma }(4-\delta -\epsilon )\mathrm{\Gamma }(1+\delta -\epsilon )}$$
(14)
Now the wonderful feature of (12) is that it depends only on the derivatives of $`L(\epsilon ,\delta )`$ w.r.t. $`\delta `$ at $`\epsilon =0`$. This reflects the fact that the anomalous dimension, unlike the Green function, is insensitive to the details of the regularization method. Thus we may, with huge savings in computation time, replace the master function by
$$L(0,\delta )=\frac{a}{(\delta -1)(\delta -2)(\delta -3)}=\sum _{n\ge 0}g_n\delta ^n=-\frac{a}{6}-11\frac{a\delta }{6^2}-85\frac{a\delta ^2}{6^3}+O(\delta ^3)$$
(15)
which establishes that the contribution of each rooted tree is rational. The residue of the anomalous dimension operator $`S\star Y`$ feels only the rational residues of $`\mathrm{\Gamma }`$ functions; it is blind to the zeta-valued derivatives that contribute to the renormalized Green function.
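The rational coefficients $`g_n`$ follow from elementary fraction arithmetic, since $`a/((\delta -1)(\delta -2)(\delta -3))=-(a/6)\prod _{k=1}^3(1-\delta /k)^{-1}`$ is a product of geometric series. A short sketch (added for illustration) confirming the values quoted in (15) and the 2-loop residue $`g_1g_0=11a^2/6^3`$:

```python
from fractions import Fraction

def g_coeffs(order):
    # Taylor coefficients (per unit coupling a) of
    #   L(0,delta) = a/((delta-1)(delta-2)(delta-3))
    #              = -(a/6) * prod_{k=1..3} 1/(1 - delta/k),
    # obtained by convolving the constant -1/6 with three geometric series.
    coeffs = [Fraction(-1, 6)] + [Fraction(0)] * order
    for k in (1, 2, 3):
        new = [Fraction(0)] * (order + 1)
        for n in range(order + 1):
            for m in range(n + 1):
                new[n] += coeffs[n - m] * Fraction(1, k) ** m
        coeffs = new
    return coeffs

g = g_coeffs(3)   # g[0] = -1/6, g[1] = -11/36, g[2] = -85/216, ...
```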
Now that the analysis has been drastically simplified, we return to the combinatorics. The double rainbow, $`t_2`$, has coproduct $`\mathrm{\Delta }(t_2)=t_2\otimes e+e\otimes t_2+t_1\otimes t_1`$ where $`t_1`$ is the single rainbow, with $`\mathrm{\Delta }(t_1)=t_1\otimes e+e\otimes t_1`$. The antipodes are $`S(t_1)=-t_1`$ and $`S(t_2)=-t_2+t_1^2`$. The star products are $`S\star Y(t_1)=t_1`$ and $`S\star Y(t_2)=2t_2-t_1^2`$. Hence the contributions to the anomalous dimensions are the residues of $`L(0,\epsilon )/\epsilon `$ and $`(L(0,2\epsilon )-L(0,\epsilon ))L(0,\epsilon )/\epsilon ^2`$, namely $`g_0=-a/6`$ and $`g_1g_0=11a^2/6^3`$.
Following this simple example, the reader should find it easy to determine the anomalous dimension contributions of the two rooted trees at 3 loops. For $`t_3`$, the triple rainbow graph, $`S\star Y`$ delivers $`3t_3-3t_1t_2+t_1^3`$, with residue $`g_2g_0^2+g_1^2g_0=-(85+11^2)a^3/6^5`$, in agreement with (3). For the other diagram, $`t_3^{\prime }`$, with a double chain in a single rainbow, it delivers $`3t_3^{\prime }-4t_1t_2+t_1^3`$ with residue $`2g_2g_0^2=-2\times 85a^3/6^5`$, in agreement with (4). The Borel resummation (4) of chains corresponds to the result $`n!g_ng_0^n`$ for a chain of $`n`$ self energies, inside a single rainbow. Writing the anomalous dimension contribution of the full Hopf algebra as the asymptotic series
$$\gamma _{\mathrm{hopf}}\sim \sum _{n>0}G_n\frac{(-a)^n}{6^{2n-1}}$$
(16)
we find that $`G_3=3\times 85+11^2=376`$.
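The combinatorics of the star product are compact enough to model directly. The following minimal Python sketch (an illustration written for this text, not the authors' Reduce code) encodes a rooted tree as the sorted tuple of its subtrees, implements the coproduct (5,6), the antipode (8) and $`S\star Y`$, and reproduces the combinations $`2t_2-t_1^2`$, $`3t_3-3t_1t_2+t_1^3`$ and $`3t_3^{\prime }-4t_1t_2+t_1^3`$ quoted above.

```python
# A tree is the sorted tuple of its subtrees, so t1 = () is the single node;
# a forest (product of trees) is a sorted tuple of trees, with e = ().
def nodes(tree):
    return 1 + sum(nodes(c) for c in tree)

def forest_nodes(forest):
    return sum(nodes(t) for t in forest)

def mul(f1, f2):
    return tuple(sorted(f1 + f2))

def coproduct(tree):
    # Delta(t) = t (x) e + (id (x) B_+) Delta(B_-(t)), eq. (5);
    # keys are (left forest, right forest) pairs, values are integer weights.
    out = {((tree,), ()): 1}
    for (left, right), c in coproduct_forest(tree).items():
        key = (left, (tuple(sorted(right)),))   # B_+ restores a common root
        out[key] = out.get(key, 0) + c
    return out

def coproduct_forest(forest):
    # Delta is multiplicative on products of trees, eq. (6).
    out = {((), ()): 1}
    for t in forest:
        nxt = {}
        for (l1, r1), c1 in out.items():
            for (l2, r2), c2 in coproduct(t).items():
                key = (mul(l1, l2), mul(r1, r2))
                nxt[key] = nxt.get(key, 0) + c1 * c2
        out = nxt
    return out

def antipode(tree):
    # S(t) = -t - sum' S(a1) a2, eq. (8); returns {forest: coefficient}.
    out = {(tree,): -1}
    for (left, right), c in coproduct(tree).items():
        if left == () or right == ():          # drop the e(x)t and t(x)e terms
            continue
        for f, cf in antipode_forest(left).items():
            out[mul(f, right)] = out.get(mul(f, right), 0) - c * cf
    return out

def antipode_forest(forest):
    out = {(): 1}
    for t in forest:
        nxt = {}
        for f1, c1 in out.items():
            for f2, c2 in antipode(t).items():
                nxt[mul(f1, f2)] = nxt.get(mul(f1, f2), 0) + c1 * c2
        out = nxt
    return out

def star_SY(tree):
    # (S * Y)(t): Y weights the right factor by its node count, eqs. (10,12).
    out = {}
    for (left, right), c in coproduct(tree).items():
        w = forest_nodes(right)
        if w:
            for f, cf in antipode_forest(left).items():
                out[mul(f, right)] = out.get(mul(f, right), 0) + c * cf * w
    return {k: v for k, v in out.items() if v}

t1 = ()          # single rainbow
t2 = (t1,)       # double rainbow
t3 = (t2,)       # triple rainbow
t3p = (t1, t1)   # two chained self-energies inside one rainbow
```

Attaching the residues generated from (15) to each monomial extends the same recursion to higher loops; memoising `coproduct` per tree would be the natural next step for performance.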
In this paper, we undertake Padé-Borel resummation of the full Hopf series (16), to 30 loops. We also resum
$$\stackrel{~}{\gamma }_{\mathrm{hopf}}\sim \sum _{n>0}\stackrel{~}{G}_n\frac{(-a)^n}{2^{2n-1}}$$
(17)
for the anomalous dimension of a fermion field with a Yukawa interaction $`g\overline{\psi }\sigma \psi `$, at $`d_c=4`$, whose rainbow approximation
$$\stackrel{~}{\gamma }_{\mathrm{rainbow}}=1-\sqrt{1+a}$$
(18)
was obtained in . At the other extreme, the Borel-resummed chain approximation
$$\stackrel{~}{\gamma }_{\mathrm{chain}}=-2\int _0^{\mathrm{\infty }}\frac{\mathrm{exp}(-2x/a)dx}{x+2}$$
(19)
is easily obtained from the Yukawa generating function, $`\stackrel{~}{L}(0,\delta )=a/(\delta -2)`$.
## 4 Results to 30 loops
At 4 loops, there are 5 undecorated Wick contractions, corresponding to 4 rooted trees, one of which has weight 2. For the scalar theory, at $`d_c=6`$, the tally is
$$G_4=4890+4711+3595+3595+3450=20241=3^2\times 13\times 173$$
(20)
Already this becomes tedious to compute by hand. Fortunately, the recursions (5,8) of the coproduct and antipode make it sublimely easy to automate the procedure (12).
At $`n`$ loops, the number of relevant Wick contractions is the Catalan number $`C_{n-1}`$, where $`C_n:=\frac{1}{n+1}\left(\genfrac{}{}{0pt}{}{2n}{n}\right)`$. At 30 loops, there are $`C_{29}=\mathrm{1\hspace{0.17em}002\hspace{0.17em}242\hspace{0.17em}216\hspace{0.17em}651\hspace{0.17em}368}`$ contractions. Symmetries reduce these to rooted trees, with weights determined recursively by $`W(t)=w(t)\prod _kW(b_k)`$ where $`b_k`$ are the branches obtained by removing the root of $`t`$. The symmetry factor of the root is $`w(t)=(\sum _jn_j)!/\prod _jn_j!`$ where $`n_j`$ is the number of branches of type $`j`$. The generating formula for $`R_n`$, the number of rooted trees with $`n`$ nodes, is $`\sum _{n>0}R_nx^n=x\prod _{n>0}(1-x^n)^{-R_n}`$ which expresses the fact that removal of roots from all trees with $`n`$ nodes produces all products of trees with a total of $`n-1`$ nodes. This gives $`R_{30}=\mathrm{354\hspace{0.17em}426\hspace{0.17em}847\hspace{0.17em}597}`$. The number of terms produced by applying the BPHZ procedure to a single tree with $`n`$ nodes is $`2^n`$.
From these enumerations, one finds – with some trepidation – that computation to 30 loops entails $`\sum _{n\le 30}2^nR_n=\mathrm{463\hspace{0.17em}020\hspace{0.17em}146\hspace{0.17em}037\hspace{0.17em}416\hspace{0.17em}130\hspace{0.17em}934}`$ subtractions, each requiring 30 terms in its Laurent expansion, with coefficients involving integers of $`O(10^{60})`$. Brute force would require processing of $`O(10^{24})`$ bits of data, which is far beyond anything contemplated by current computer science. The remedy is clear: recursion of coproduct and antipode, to compute the residues of the anomalous dimension operator $`S\star Y`$.
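These enumerations are easy to reproduce. The sketch below (added for illustration) computes $`C_{29}`$ from the binomial formula and the rooted-tree counts $`R_n`$ from the standard recurrence implied by the generating formula above, recovering the numbers just quoted.

```python
from math import comb

def catalan(n):
    # C_n = binom(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

def rooted_trees(nmax):
    # R_n, the number of rooted trees with n nodes, via the recurrence
    # implied by  sum_n R_n x^n = x * prod_n (1 - x^n)^(-R_n):
    #   n * R_{n+1} = sum_{k=1..n} ( sum_{d | k} d R_d ) R_{n-k+1}
    R = [0, 1] + [0] * (nmax - 1)
    for n in range(1, nmax):
        total = 0
        for k in range(1, n + 1):
            s = sum(d * R[d] for d in range(1, k + 1) if k % d == 0)
            total += s * R[n - k + 1]
        R[n + 1] = total // n
    return R

R = rooted_trees(30)
subtractions = sum(2**n * R[n] for n in range(1, 31))
```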
Each new coproduct or antipode refers to others with fewer loops. By storing these we easily progressed to 13 loops, extending the sequence $`G_n`$ to
$`1,11,376,20241,1427156,121639250,12007003824,1337583507153,`$
$`165328009728652,22404009743110566,3299256277254713760,`$
$`524366465815117346250,89448728780073829991976`$
For $`\stackrel{~}{G}_n`$, in the Yukawa case, we obtained the 13-loop sequence
$`1,1,4,27,248,2830,38232,593859,10401712,202601898`$
$`4342263000,101551822350,2573779506192`$
At this point, recursion of individual trees hit a ceiling imposed by memory limitations.
Beyond 13 loops, we stored only the unique combination of terms that is needed at higher loops, namely the momentum-scheme renormalized self energy. Allocating 750 megabytes of main memory to Reduce 3.7 , the time to reach 30 loops was 8 hours. Of these, more than 2 hours were spent on garbage collection, indicating the combinatoric complexity. Results for the scalar and Yukawa theories are in Tables 1 and 2. They are highly non-trivial. Factorization of $`G_{27}=2^6\times 5\times 103\times 184892457645048836717\times 69943104850621681268329469624581`$ needed significant use of Richard Crandall’s elliptic curve routine , while $`G_{29}/240`$ is a 60-digit integer that is most probably prime.
## 5 Padé-Borel resummation
We combine Padé-approximant and Borel-transformation methods. From (4) we obtain the pure chain contribution $`G_{n+1}^{\mathrm{chain}}=(2^n+(2^n-1)3^{n+1})n!`$ with, for example, $`G_4^{\mathrm{chain}}=(8+7\times 81)\times 6=3450`$ appearing in (20) as the smallest contribution of the 5 Wick contractions at 4 loops, while the pure rainbow contribution, $`4711`$, is next to largest. This is far removed from the situation at large $`n`$, where the pure rainbow term is factorially smaller than the pure chain term. At large $`n`$, we combine $`C_{n-1}\sim 4^{n-1}/\sqrt{n^3\pi }`$ Wick contractions, some of which are of order $`G_n^{\mathrm{chain}}`$, while some are far smaller. It is thus difficult to anticipate the large-$`n`$ behaviour of $`G_n`$. We adopted an empirical approach, finding that $`S_n:=12^{1-n}G_n/\mathrm{\Gamma }(n+2)`$ varies little for $`n\in [14,30]`$, as shown in the final column of Table 1. In the Yukawa case of Table 2, we found little variation in $`\stackrel{~}{S}_n:=2^{1-n}\stackrel{~}{G}_n/\mathrm{\Gamma }(n+1/2)`$.
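The chain coefficients admit an exact cross-check: term by term, the chain series $`n!g_ng_0^n`$ of section 3 must reproduce $`G_{n+1}^{\mathrm{chain}}(-a)^{n+1}/6^{2n+1}`$, with the $`g_n`$ read off from (15). A small fraction-arithmetic sketch (the sign conventions follow the alternating series (16); an illustration, not the authors' code):

```python
from fractions import Fraction

def g(n):
    # g_n per unit a: L(0,delta) = -(a/6) * sum_n c_n delta^n with
    # c_n = sum over i+j+k = n of 1 / (1^i * 2^j * 3^k).
    c = sum(Fraction(1, 2**j * 3**k)
            for j in range(n + 1) for k in range(n + 1 - j))
    return Fraction(-1, 6) * c

def factorial(n):
    f = 1
    for i in range(2, n + 1):
        f *= i
    return f

def G_chain(m):
    # Closed form for the pure chain coefficients, m >= 1:
    # G_m^chain = (2^(m-1) + (2^(m-1) - 1) * 3^m) * (m-1)!
    n = m - 1
    return (2**n + (2**n - 1) * 3**(n + 1)) * factorial(n)

# Term-by-term: n! g_n g_0^n == G_{n+1}^chain * (-1)^(n+1) / 6^(2n+1),
# per unit a^(n+1).
checks = [factorial(n) * g(n) * g(0)**n ==
          Fraction((-1)**(n + 1) * G_chain(n + 1), 6**(2 * n + 1))
          for n in range(7)]
```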
In the scalar case, at $`d_c=6`$, Padé-Borel resummation may be achieved by the Ansatz
$$\gamma _{\mathrm{hopf}}\approx -\frac{a}{12}\int _0^{\mathrm{\infty }}P(ax/3)e^{-x}x^2𝑑x$$
(21)
where $`P(y)=1+O(y)`$ is a $`[M\backslash N]`$ Padé approximant, with numerator $`1+\sum _{m=1}^Mc_my^m`$ and denominator $`1+\sum _{n=1}^Nd_ny^n`$, chosen so as to reproduce the first $`M+N+1`$ terms in the asymptotic series (16). We expect $`P(y)`$ to have singularities only in the left half-plane. In particular, a pole near $`y=-1`$ is expected, corresponding to the approximate constancy of $`S_n`$ in Table 1. We fitted the first 29 values of $`G_n`$ with a $`[14\backslash 14]`$ Padé approximant $`P(y)`$, finding a pole at $`y\approx -0.994`$. The other 13 poles have $`\mathrm{Re}y<-1`$. Moreover there is no zero with $`\mathrm{Re}y>0`$. The test-value $`G_{30}`$ is reproduced to a precision of $`5\times 10^{-16}`$.
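The Padé-Borel strategy can be illustrated on a toy model where everything is known in closed form: the Euler series $`\sum _nn!(-a)^n`$ has Borel transform $`\sum _n(-y)^n=1/(1+y)`$, so its $`[1\backslash 1]`$ Padé approximant is exact and the Borel integral $`\int _0^{\mathrm{\infty }}e^{-x}P(ax)𝑑x`$ reproduces the resummed value. The sketch below is a toy stand-in for the 29-coefficient fits described in the text; the generic Padé solver and quadrature are illustrative, not the authors' code.

```python
from fractions import Fraction
import math

def pade(coeffs, M, N):
    # [M\N] Pade approximant of sum_k coeffs[k] y^k: returns (num, den)
    # coefficient lists with den[0] = 1, from the linear system
    #   sum_{j=0..N} den[j] * coeffs[k-j] = 0  for k = M+1 .. M+N.
    A = [[coeffs[M + 1 + i - j] if M + 1 + i - j >= 0 else Fraction(0)
          for j in range(1, N + 1)] for i in range(N)]
    b = [-coeffs[M + 1 + i] for i in range(N)]
    for col in range(N):                      # tiny exact Gauss-Jordan
        piv = next(r for r in range(col, N) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(N):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] = b[r] - f * b[col]
    den = [Fraction(1)] + [b[i] / A[i][i] for i in range(N)]
    num = [sum(den[j] * coeffs[k - j] for j in range(0, min(k, N) + 1))
           for k in range(M + 1)]
    return num, den

# Borel transform of the Euler series sum_n n! (-a)^n is sum_n (-y)^n.
borel = [Fraction((-1)**n) for n in range(4)]
num, den = pade(borel, 1, 1)          # recovers exactly 1/(1+y)

def P(y):
    return (sum(float(c) * y**k for k, c in enumerate(num)) /
            sum(float(c) * y**k for k, c in enumerate(den)))

def borel_sum(a, cutoff=40.0, steps=40000):
    # Simpson rule for Integral_0^inf exp(-x) P(a x) dx.
    h = cutoff / steps
    f = lambda x: math.exp(-x) * P(a * x)
    s = f(0.0) + f(cutoff)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

value = borel_sum(0.2)   # Borel sum of the divergent series at a = 0.2
```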
In the Yukawa case, at $`d_c=4`$, we made the Ansatz
$$\stackrel{~}{\gamma }_{\mathrm{hopf}}\approx -\frac{a}{\sqrt{\pi }}\int _0^{\mathrm{\infty }}Q(ax/2)e^{-x}x^{1/2}𝑑x;Q(y):=\frac{\stackrel{~}{P}(y)}{1+y}$$
(22)
suggested by Table 2. Here we put in by hand the suspected pole at $`y=-1`$. The $`[14\backslash 14]`$ approximant to $`\stackrel{~}{P}(y)=1+O(y)`$ then has all its 14 poles at $`\mathrm{Re}y<-1`$ and no zero with $`\mathrm{Re}y>0`$. The test-value $`\stackrel{~}{G}_{30}`$ is reproduced to a reassuring precision of $`4\times 10^{-17}`$.
Table 3 compares resummation of the full Hopf results (16,17) with those from the far more restrictive chain and rainbow subsets. To test the precision of resummations (21,22), we used the star product (12) to perform the $`2.6\times 10^{21}`$ BPHZ subtractions that yield the exact 31-loop coefficients
$`G_{31}`$ $`=`$ $`2^6\times 3^3\times 5\times 139\times 2957\times 22279\times 69318820356301\times 9602299922477621`$ (23)
$`\times 144927172127490232568467`$
$`\stackrel{~}{G}_{31}`$ $`=`$ $`2^5\times 3^4\times 5\times 71\times 109\times 13224049649\times 473202021103152647613521`$ (24)
No change in the final digits of Table 3 results from using these. At the prodigious Yukawa coupling $`g=30`$, corresponding to $`a=(30/4\pi )^2\approx 5.7`$, a $`[15\backslash 15]`$ Padé approximant gives $`\stackrel{~}{\gamma }_{\mathrm{hopf}}\approx 1.85202761`$, differing by less than 1 part in $`10^8`$ from the $`[14\backslash 14]`$ result $`\stackrel{~}{\gamma }_{\mathrm{hopf}}\approx 1.85202762`$. It appears that resummation of undecorated rooted trees is under very good control, notwithstanding the combinatoric explosion apparent in (23,24).
## 6 Conclusions
As stated in the introduction, we achieved 4 goals. First, we found the Hopf-algebra construct (12) that delivers undecorated contributions to anomalous dimensions. Then we found that these are rational, with the $`\mathrm{\Gamma }`$ functions of (14) contributing only their residues, via (15). Next, we exemplified the analysis of dimensional regularization in , at two different critical dimensions, $`d_c=6`$ and $`d_c=4`$. The residues of a common set (1) of $`\mathrm{\Gamma }`$ functions determine both results. Finally, we obtained highly non-trivial results, from all combinations of rainbows and chains, to 30 loops. A priori, we had no idea how these would compare with the easily determined pure chain contributions. Tables 1 and 2 suggest that at large $`n`$ the full Hopf-algebra results exceed pure chains by factors that scale like $`n^22^n`$ and $`n^{1/2}2^n`$, respectively. Padé approximation gave 15-digit agreement with exact 30-loop results. In Table 3, we compare the Borel resummations (21,22) of the full Hopf algebra with the vastly simpler rainbow approximations (3,18) and the still rather trivial chain approximations (4,19). Even at the very large Yukawa coupling $`g=30`$ we claim 8-digit precision. Apart from large-$`N_f`$ approximations , we know of no other large-coupling analysis of anomalous dimension contributions, at spacetime dimensions $`d\ge 4`$, that progresses beyond pure rainbows or pure chains .
In conclusion: Hopf algebra tames the combinatorics of renormalization, by disentangling the iterative subtraction of primitive subdivergences from the analytical challenge of evaluating dimensionally regularized bare values for Feynman diagrams. Progress with the analytic challenge shall require the expansion of skeleton graphs in the regularization parameter $`D-4`$. After that, the Hopf algebra of decorated rooted trees provides the tool to take care of the combinatorial challenge of renormalization in general. Generalizations of the methods here to cases where decorations are different, but still analytically trivial, are conceivable. The results in are of this form. In the present case, where the combinatoric explosion is ferocious, while the analysis is routine, the automation of renormalization by Hopf algebra is a joy. How else might one resum $`2.6\times 10^{21}`$ BPHZ subtractions at 31 loops and achieve 8-digit precision at very strong coupling?
Acknowledgements: This work was undertaken during the workshop Number Theory and Physics at the ESI in November 1999, where we enjoyed discussions with Pierre Cartier, Werner Nahm, Ivan Todorov and Jean-Bernard Zuber. Past work with Alain Connes, Bob Delbourgo, John Gracey and Andrey Grozin supports the present paper.
Table 1: Scalar coefficients in (16), with $`S_n:=12^{1-n}G_n/\mathrm{\Gamma }(n+2)`$
$$\begin{array}{ccc}n\hfill & G_n\hfill & S_n\hfill \\ & & \\ 14\hfill & 16301356287284530869810308\hfill & 0.1165\hfill \\ 15\hfill & 3161258841758986060906197536\hfill & 0.1177\hfill \\ 16\hfill & 650090787950164885954804021185\hfill & 0.1186\hfill \\ 17\hfill & 141326399508139818539694443090940\hfill & 0.1194\hfill \\ 18\hfill & 32389192708918228594003667471390750\hfill & 0.1200\hfill \\ 19\hfill & 7805642594117634874205145727265669184\hfill & 0.1205\hfill \\ 20\hfill & 1973552096478862083584247237907087008846\hfill & 0.1209\hfill \\ 21\hfill & 522399387732959889862436502331522596697560\hfill & 0.1212\hfill \\ 22\hfill & 144486332652501966354908665093390779463113660\hfill & 0.1215\hfill \\ 23\hfill & 41681362292986022786933211385817840822702468640\hfill & 0.1217\hfill \\ 24\hfill & 12520661507532542738174037622803485508817145773050\hfill & 0.1218\hfill \\ 25\hfill & 3910338928202486568787314743084879349561179264255736\hfill & 0.1220\hfill \\ 26\hfill & 1267891158800355844456289086726128521948839015617187260\hfill & 0.1221\hfill \\ 27\hfill & 426237156086127437403654947366849019736474802601497417920\hfill & 0.1221\hfill \\ 28\hfill & 148382376919675149120919349602375065827367635238832722748020\hfill & 0.1222\hfill \\ 29\hfill & 53428133467243180546330391126922442419952183999220340144106320\hfill & 0.1222\hfill \\ 30\hfill & 19876558632009586773182109989526780486481329823560105761256963720\hfill & 0.1222\hfill \end{array}$$
Table 2: Yukawa coefficients in (17), with $`\stackrel{~}{S}_n:=2^{1-n}\stackrel{~}{G}_n/\mathrm{\Gamma }(n+1/2)`$
$$\begin{array}{ccc}n\hfill & \stackrel{~}{G}_n\hfill & \stackrel{~}{S}_n\hfill \\ & & \\ 14\hfill & 70282204726396\hfill & 0.3715\hfill \\ 15\hfill & 2057490936366320\hfill & 0.3750\hfill \\ 16\hfill & 64291032462761955\hfill & 0.3780\hfill \\ 17\hfill & 2136017303903513184\hfill & 0.3806\hfill \\ 18\hfill & 75197869250518812754\hfill & 0.3828\hfill \\ 19\hfill & 2796475872605709079512\hfill & 0.3848\hfill \\ 20\hfill & 109549714522464120960474\hfill & 0.3865\hfill \\ 21\hfill & 4509302910783496963256400\hfill & 0.3880\hfill \\ 22\hfill & 194584224274515194731540740\hfill & 0.3894\hfill \\ 23\hfill & 8784041120771057847338352720\hfill & 0.3906\hfill \\ 24\hfill & 414032133398397494698579333710\hfill & 0.3917\hfill \\ 25\hfill & 20340342746544244143487152873888\hfill & 0.3928\hfill \\ 26\hfill & 1039819967521866936447997028508900\hfill & 0.3937\hfill \\ 27\hfill & 55230362672853506023203822058592752\hfill & 0.3946\hfill \\ 28\hfill & 3043750896574866226650924152479935036\hfill & 0.3953\hfill \\ 29\hfill & 173814476864493583374050720641310171808\hfill & 0.3961\hfill \\ 30\hfill & 10272611586206353744425870217572111879288\hfill & 0.3968\hfill \end{array}$$
Table 3: Comparison of chain, rainbow and full Hopf contributions
$$\begin{array}{ccccccc}a\hfill & \gamma _{\mathrm{chain}}\hfill & \gamma _{\mathrm{rainbow}}\hfill & \gamma _{\mathrm{hopf}}\hfill & \stackrel{~}{\gamma }_{\mathrm{chain}}\hfill & \stackrel{~}{\gamma }_{\mathrm{rainbow}}\hfill & \stackrel{~}{\gamma }_{\mathrm{hopf}}\hfill \\ & & & & & & \\ 0.5\hfill & 0.0727579\hfill & 0.0731322\hfill & 0.0742476\hfill & 0.2245593\hfill & 0.2247449\hfill & 0.2278233\hfill \\ 1.0\hfill & 0.1301409\hfill & 0.1322419\hfill & 0.1373080\hfill & 0.4126913\hfill & 0.4142136\hfill & 0.4281423\hfill \\ 1.5\hfill & 0.1773375\hfill & 0.1825988\hfill & 0.1937609\hfill & 0.5765641\hfill & 0.5811388\hfill & 0.6118625\hfill \\ 2.0\hfill & 0.2172313\hfill & 0.2268615\hfill & 0.2455916\hfill & 0.7226572\hfill & 0.7320508\hfill & 0.7837372\hfill \\ 2.5\hfill & 0.2516214\hfill & 0.2665867\hfill & 0.2939133\hfill & 0.8549759\hfill & 0.8708287\hfill & 0.9464649\hfill \\ 3.0\hfill & 0.2817148\hfill & 0.3027756\hfill & 0.3394353\hfill & 0.9762193\hfill & 1.0000000\hfill & 1.1017856\hfill \\ 3.5\hfill & 0.3083635\hfill & 0.3361156\hfill & 0.3826462\hfill & 1.0883141\hfill & 1.1213203\hfill & 1.2509126\hfill \\ 4.0\hfill & 0.3321923\hfill & 0.3671015\hfill & 0.4239016\hfill & 1.1926947\hfill & 1.2360680\hfill & 1.3947383\hfill \\ 4.5\hfill & 0.3536734\hfill & 0.3961033\hfill & 0.4634712\hfill & 1.2904639\hfill & 1.3452079\hfill & 1.5339452\hfill \\ 5.0\hfill & 0.3731724\hfill & 0.4234058\hfill & 0.5015652\hfill & 1.3824908\hfill & 1.4494897\hfill & 1.6690711\hfill \\ 5.5\hfill & 0.3909778\hfill & 0.4492331\hfill & 0.5383523\hfill & 1.4694751\hfill & 1.5495098\hfill & 1.8005504\hfill \\ 6.0\hfill & 0.4073216\hfill & 0.4737658\hfill & 0.5739698\hfill & 1.5519895\hfill & 1.6457513\hfill & 1.9287404\hfill \end{array}$$ |
no-problem/9912/nucl-th9912075.html | ar5iv | text | # Implementing PCAC in Nonperturbative Models of Pion Production
## I Introduction
Approximate chiral symmetry is an important property of quantum chromodynamics (QCD) and should therefore be an attribute of any effective description of strong interaction processes. Yet most nonperturbative effective descriptions of pion production in few-body processes ($`NN\to \pi NN`$, $`\pi N\to \pi \pi N`$, $`\gamma N\to \pi N`$, etc.) have been based on traditional few-body approaches which are not consistent with chiral symmetry – even if they are based on chiral Lagrangians (i.e. the input functions are chirally invariant). The essential idea of a traditional few-body approach is to take as input two-body $`t`$ matrices and pion production vertices, and then use integral equations to sum the multiple-scattering series nonperturbatively. A feature of such an approach is that the input can be constructed phenomenologically without the need to specify an explicit Lagrangian for the strong interactions. Despite the large amount of physics taken into account (e.g. all possible pair-like interactions are included), the lack of approximate chiral symmetry in this approach results in a pion production amplitude that does not obey PCAC. Although this is a particularly serious problem at low energies where low-energy theorems apply, the absence of PCAC is undesirable at any energy because of its inconsistency with QCD. In this contribution we show how the gauging of equations method can be used to construct a nonperturbative few-body approach that is consistent with chiral symmetry and whose pion production amplitude obeys PCAC.
## II Traditional few-body model of $`\mathrm{𝐍𝐍}\to \pi \mathrm{𝐍𝐍}`$ without PCAC
We shall base our discussion on the example of the relativistic $`\pi NN`$ system for which a traditional few-body approach has recently been developed . For simplicity of presentation we shall treat the nucleons as distinguishable. The few-body approach to the $`\pi NN`$ system provides a nonperturbative simultaneous description of the processes $`NN\to \pi NN`$, $`NN\to NN`$, and $`\pi NN\to \pi NN`$ within the context of relativistic quantum field theory. (Throughout this paper it should be understood that all amplitudes and currents with external nucleon legs are actually operators in Dirac space and need to be sandwiched between appropriate Dirac spinors $`\overline{u}`$ and $`u`$ to obtain the corresponding physical quantities.) In this case the integral equations are four-dimensional and can be expressed symbolically as
$$𝒯=𝒱+𝒱𝒢_t𝒯$$
(1)
where, for distinguishable nucleons, $`𝒯`$, $`𝒱`$, and $`𝒢_t`$ are $`4\times 4`$ matrices written as
$$𝒯=\left(\begin{array}{cc}T_{NN}& \overline{T}_N\\ T_N& T\end{array}\right);𝒱=\left(\begin{array}{cc}V_{NN}& \overline{}\\ & G_0^{-1}\end{array}\right);𝒢_t=\left(\begin{array}{cc}D_0& 0\\ 0& G_0w^0G_0\end{array}\right).$$
(2)
The elements of matrix $`𝒯`$ are defined as follows. $`T`$ is a $`3\times 3`$ matrix whose elements $`T_{\lambda \mu }`$ are Alt-Grassberger-Sandhas (AGS) amplitudes (generalised to four dimensions) describing the process $`\pi NN\to \pi NN`$. Note that the following “subsystem-spectator” labelling convention is used: $`\lambda =1`$ or $`2`$ labels the channel where nucleon $`\lambda `$ forms a subsystem with the pion, the other nucleon being a spectator, while $`\lambda =3`$ labels the channel where the two nucleons form the subsystem with the pion being the spectator. In a similar way, $`T_{NN}`$ is the amplitude for $`NN`$ scattering while $`T_N`$ and $`\overline{T}_N`$ are $`3\times 1`$ and $`1\times 3`$ matrices whose elements $`T_{\lambda N}`$ and $`T_{N\mu }`$ describe $`NN\to \pi NN`$ and $`\pi NN\to NN`$, respectively. For simplicity of presentation we shall neglect connected diagrams that are simultaneously $`NN`$\- and $`\pi NN`$\- irreducible. Then the elements making up the kernel matrix $`𝒱`$ specified in Eq. (2) take the following form:
$$V_{NN}=V_{NN}^{\text{OPE}}-\mathrm{\Delta }$$
(3)
where $`V_{NN}^{\text{OPE}}`$ is the nucleon-nucleon one pion exchange potential and $`\mathrm{\Delta }`$ is a subtraction term that eliminates overcounting. $``$ is a $`3\times 1`$ matrix with
$$_\lambda =\underset{i=1}{\overset{2}{\sum }}\overline{\delta }_{\lambda i}F_i-B$$
(4)
where $`F_i=f_id_j^{-1}`$ consists of $`f_i`$, the vertex for $`N_i\to \pi N_i`$, and $`d_j`$, the Feynman propagator of nucleon $`j\ne i`$. The subtraction term $`B`$ in Eq. (4) likewise eliminates overcounting. $`\overline{}`$ is the $`1\times 3`$ matrix that is the time reversed version of $``$, $`G_0`$ is the $`\pi NN`$ propagator, and $``$ is the $`3\times 3`$ matrix whose $`(\lambda ,\mu )`$’th element is $`\overline{\delta }_{\lambda ,\mu }`$. Finally the propagator term $`𝒢_t`$ is a diagonal matrix consisting of the $`NN`$ propagator $`D_0`$, and the $`3\times 3`$ diagonal matrix $`w^0`$ whose diagonal elements are $`t_1d_2^{-1}`$, $`t_2d_1^{-1}`$, and $`t_3d_3^{-1}`$, with $`t_\lambda `$ being the two-body t matrix for the subsystem particles in channel $`\lambda `$ (for $`\lambda =1`$ or $`2`$, $`t_\lambda `$ is defined to be the $`\pi N`$ $`t`$ matrix with the nucleon pole term removed). The subtraction terms $`\mathrm{\Delta }`$ and $`B`$ are defined with the help of Fig. 1 as follows:
$$\mathrm{\Delta }=W_{\pi \pi }+W_{\pi N}^{\prime }+W_{NN}+X+Y^{\prime }-\overline{B}^{\prime }G_0B^{\prime }$$
(5)
where $`W_{\pi N}^{\prime }=W_{\pi N}+PW_{\pi N}P`$, $`Y^{\prime }=Y+PYP`$, and $`B^{\prime }=B+PBP`$, $`P`$ being the nucleon exchange operator.
Despite the rich amount of physics incorporated into the $`NN\pi NN`$ amplitude within this model, it becomes evident from the following discussion that this amplitude cannot satisfy PCAC.
## III New few-body model of $`\mathrm{𝐍𝐍}\to \pi \mathrm{𝐍𝐍}`$ with PCAC
### A Choosing appropriate degrees of freedom
Although we do not specify the exact Lagrangian behind our new few-body approach, we do assume that this underlying Lagrangian is chirally invariant (up to a small explicit chiral symmetry breaking term). In turn, the chiral invariance of the Lagrangian puts a strong constraint on the nature of the fields that can be used to construct a practical few-body model. For example, if one would like to follow the $`\pi NN`$ model above and have only pions and nucleons as the degrees of freedom, then the underlying chiral Lagrangian would necessarily involve an isovector pion field $`\stackrel{}{\xi }`$ that transforms into functions of itself under a chiral transformation. Unfortunately, this can happen only in a nonlinear way :
$$\stackrel{}{\xi }\to \stackrel{}{\xi }+\frac{1}{2}\stackrel{}{\theta }(1-\stackrel{}{\xi }^2)+\stackrel{}{\xi }(\stackrel{}{\theta }\cdot \stackrel{}{\xi })$$
(6)
where $`\stackrel{}{\theta }`$ is the vector of three (infinitesimally small) rotation angles. Thus one pion transforms into two pions, two pions transform into four pions, etc., a situation that would make it difficult to formulate a chirally invariant few-body description (i.e. a description whose exposed states have a restricted number of pions – in our case 0 or 1). Alternatively, we can follow the example of the linear sigma model and choose an underlying Lagrangian that involves, in addition to pions and nucleons, the isoscalar field $`\sigma `$. In this case the $`\sigma `$, together with the pion field $`\stackrel{}{\varphi }`$, transform under the chiral transformation in a linear way:
$$\stackrel{}{\varphi }\to \stackrel{}{\varphi }+\stackrel{}{\theta }\sigma ,\sigma \to \sigma -\stackrel{}{\theta }\cdot \stackrel{}{\varphi }.$$
(7)
Thus the four-component field $`\varphi \equiv (\sigma ,\stackrel{}{\varphi })`$ transforms into itself – a situation that is ideal for a few-body approach. We shall therefore adopt the latter approach and treat pions and sigma particles on an equal footing. For the few-body $`\pi NN`$ model of Sec. 2, this means mostly a formal change where the usual (isospin) three-component pion is replaced by a four-component one. There is, however, one new aspect in that terms with a three-meson vertex ($`\pi \pi \sigma `$) now need to be included. For example, one will need the new subtraction term illustrated in Fig. 2. Nevertheless, the equations of Sec. 2 retain their structure, the only essential change being an increase in the size of the matrices like $`T`$, $`T_N`$ and $`\overline{T}_N`$ to take into account the introduction of a $`\sigma NN`$ channel. With these modifications we obtain few-body $`\pi NN`$ equations that are consistent with an underlying chiral Lagrangian.
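The linearity of the transformation in Eq. (7), $`\stackrel{}{\varphi }\to \stackrel{}{\varphi }+\stackrel{}{\theta }\sigma `$ and $`\sigma \to \sigma -\stackrel{}{\theta }\cdot \stackrel{}{\varphi }`$, can be made concrete by noting that $`\sigma ^2+\stackrel{}{\varphi }^2`$ is invariant to first order in $`\stackrel{}{\theta }`$, so the four-component field rotates like a vector. A minimal numerical sketch of this (our own illustration; the field values and rotation angles are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=3)            # isovector pion field (arbitrary values)
sigma = rng.normal()                # isoscalar sigma field
theta = 1e-6 * rng.normal(size=3)   # infinitesimal rotation angles

# Linear chiral transformation of Eq. (7)
phi_p = phi + theta * sigma
sigma_p = sigma - theta @ phi

# sigma^2 + phi^2 changes only at O(theta^2)
before = sigma**2 + phi @ phi
after = sigma_p**2 + phi_p @ phi_p
assert abs(after - before) < 1e-10
```

The residual difference is exactly $`(\stackrel{}{\theta }\cdot \stackrel{}{\varphi })^2+\sigma ^2\stackrel{}{\theta }^2`$, i.e. second order in the rotation angles.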
### B Coupling the axial vector field everywhere
Now that we have achieved consistency with an underlying strong interaction chiral Lagrangian, our next goal is to construct a few-body $`\pi NN`$ model whose axial current is partially conserved. The problem at hand is analogous to the one of constructing a conserved electromagnetic current for a few-body system whose strong interactions are consistent with an underlying Lagrangian that conserves charge. In this case it is well known that exact current conservation is obtained by coupling an external electromagnetic field to all possible places in the strong interaction model. In a similar way, one would expect to obtain a partially conserved axial current by coupling an external axial vector field to all possible places in the strong interaction model. Until recently, however, what has not been known is the way to achieve this complete coupling for a few-body system whose strong interactions are described nonperturbatively by integral equations. Fortunately, the solution to this problem presented recently in the context of electromagnetic interactions , is based on a topological argument and therefore applies equally well to the present case of an axial vector field.
The basic idea is to add a vector index (indicating an external axial vector field) to all possible terms in the integral equations describing the strong interaction model. Thus, in the case of the $`\pi NN`$ system where the structure of the integral equations is given as in Eq. (1), coupling an external axial isovector field gives
$$𝒯^\mu =𝒱^\mu +𝒱^\mu 𝒢_t𝒯+𝒱𝒢_t^\mu 𝒯+𝒱𝒢_t𝒯^\mu $$
(8)
which can easily be solved to give a closed expression for $`𝒯^\mu `$:
$$𝒯^\mu =(1+𝒯𝒢_t)𝒱^\mu (1+𝒢_t𝒯)+𝒯𝒢_t^\mu 𝒯.$$
(9)
$`𝒯^\mu `$ is a matrix of transition amplitudes $`T_{NN}^\mu `$, $`T_{N\mathrm{\Delta }}^\mu `$, $`T_{Nd}^\mu `$, etc. (for the moment, we suppress the isovector index in these amplitudes). Here we are particularly interested in the transition amplitude $`T_{NN}^\mu `$ as it is closely related to the pion production amplitude we seek.
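The closed expression can be verified mechanically: whenever $`𝒯`$ solves Eq. (1), substituting Eq. (9) into the right-hand side of Eq. (8) must reproduce $`𝒯^\mu `$. A toy check with random finite-dimensional matrices standing in for the operators (purely illustrative; the actual objects are four-dimensional integral kernels, not small matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Random matrices standing in for the kernel V, propagator G_t, and their
# gauged counterparts V^mu, G_t^mu (one fixed Lorentz component).
V = 0.3 * rng.normal(size=(n, n))
G = 0.3 * rng.normal(size=(n, n))
Vmu = rng.normal(size=(n, n))
Gmu = rng.normal(size=(n, n))
I = np.eye(n)

# Eq. (1):  T = V + V G T   =>   T = (1 - V G)^{-1} V
T = np.linalg.solve(I - V @ G, V)

# Closed expression of Eq. (9)
Tmu = (I + T @ G) @ Vmu @ (I + G @ T) + T @ Gmu @ T

# It satisfies the linear relation of Eq. (8)
rhs = Vmu + Vmu @ G @ T + V @ Gmu @ T + V @ G @ Tmu
assert np.allclose(Tmu, rhs)
```

The check rests on the operator identity $`(1+TG)=(1-VG)^{-1}`$, which follows directly from Eq. (1).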
It is easy to see that the transition amplitude $`T_{NN}^\mu `$ given by Eq. (9) has the axial vector field attached everywhere in $`T_{NN}`$ except on the external nucleon legs . Including these external leg contributions then gives the complete axial vector transition current for $`NN\to NN`$:
$$j_{NN}^\mu =(\mathrm{\Gamma }_1^\mu d_1+\mathrm{\Gamma }_2^\mu d_2)T_{NN}+T_{NN}(d_1\mathrm{\Gamma }_1^\mu +d_2\mathrm{\Gamma }_2^\mu )+T_{NN}^\mu $$
(10)
where $`\mathrm{\Gamma }_i^\mu `$ is the axial vertex function of nucleon $`i`$. The input to the $`\pi NN`$ equations consists of the two-body $`t`$ matrices $`t_i`$, the pion production vertices $`f_i`$, and the single particle propagators $`d_i`$. Thus the input to Eq. (9) also includes the axial transition currents $`t_i^\mu `$, $`f_i^\mu `$, and axial vertex function $`\mathrm{\Gamma }_i^\mu `$. In order for these input quantities to be consistent with an external axial vector field being attached everywhere, they must be constructed to satisfy the Axial Ward-Takahashi (AWT) identities . This is easily achieved by restricting the form of the corresponding bare quantities . Thus, for example, $`\mathrm{\Gamma }_i^{a\mu }`$ needs to satisfy the AWT identity
$$q_\mu \mathrm{\Gamma }_i^{a\mu }(k,p)=i\left[d_i^{-1}(k)\gamma _5+\gamma _5d_i^{-1}(p)\right]t^a-if_\pi m_\pi ^2f_i^a(k,p)d_\pi (q)$$
(11)
where $`p`$ and $`k`$ are the initial and final momenta of the nucleon, $`q=k-p`$, $`f_i^a(k,p)`$ is the $`\pi NN`$ vertex for a pion of isospin component $`a`$, $`t^a`$ is an isospin $`1/2`$ matrix, $`m_\pi `$ is the mass of the pion, and $`f_\pi `$ is the pion decay constant. With the other input quantities constructed to satisfy similar AWT identities, it can be shown that the two-nucleon axial current given by Eq. (10) satisfies the AWT identity corresponding to exact PCAC:
$`q_\mu j_{NN}^{a\mu }(k_1k_2,p_1p_2)`$ (12)
$`=`$ $`i\left[(\gamma _5t^a)_1T_{NN}(k_1-q,k_2;p_1p_2)+T_{NN}(k_1k_2;p_1+q,p_2)(\gamma _5t^a)_1\right]`$ (13)
$`+`$ $`i\left[(\gamma _5t^a)_2T_{NN}(k_1,k_2-q;p_1p_2)+T_{NN}(k_1k_2;p_1,p_2+q)(\gamma _5t^a)_2\right]`$ (14)
$`-`$ $`if_\pi m_\pi ^2T_{N0}^a(k_1k_2,p_1p_2)d_\pi (q)`$ (15)
where $`T_{N0}^a`$ is the amplitude for $`\pi NN\to NN`$ and, in its time reversed form, is just the pion production amplitude that we are seeking. Note that the axial current $`j_{NN}^{a\mu }`$ contains a pion pole :
$$j_{NN}^{a\mu }(k_1k_2,p_1p_2)=\overline{j}_{NN}^{a\mu }(k_1k_2,p_1p_2)+d_\pi (q)F_\pi ^\mu (q)T_{N0}^a(k_1k_2,p_1p_2),$$
(16)
where $`\overline{j}_{NN}^{a\mu }`$ has no pion pole and $`F_\pi ^\mu `$ is the pion decay vertex function. Thus an alternative way of obtaining the pion production amplitude $`T_{N0}`$ is to take the residue of Eq. (10) at the pion pole.
### C Physical content of the new model
Not only does the pion production amplitude $`T_{N0}`$ obey exact PCAC, but it contains a very rich amount of physics that goes beyond what is included in the traditional few-body approach of Sec. 2. A few of the infinite number of new contributions are illustrated in Fig. 3.
Finally we would like to stress the practicality of our approach: PCAC and the inclusion of an infinite number of new pion production mechanisms have been achieved through a simple closed expression, Eq. (9), involving scattering amplitudes $`𝒯`$ obtained from a traditional few-body model, Eq. (1). Our method also does not depend on the model used for kernel $`𝒱`$ – rather than Eq. (2), we could equally have used the simpler case of an $`NN`$ one-pion-exchange potential.
Acknowledgement. This work was supported by a grant from the Flinders University Research Committee. |
Comment on “Percolation Properties of the 2D Heisenberg Model”
Adrian Patrascioiu
Physics Department, University of Arizona, Tucson, AZ 85721, U.S.A.
and
Erhard Seiler
Max-Planck-Institut für Physik
– Werner-Heisenberg-Institut –
Föhringer Ring 6, 80805 Munich, Germany
## Abstract
We comment on a recent paper by Allès et al .
In a recent letter Allès et al claim to show that the two-dimensional classical Heisenberg model does not have a massless phase. The paper is an attempt to refute certain arguments advanced by us in 1991 , which remain posted among the Open Problems in Mathematical Physics on the web site of the International Association of Mathematical Physics ; thus the claim of Allès et al, if correct, would be very important. We appreciate that the authors made an effort to falsify our arguments, but in fact the paper is far from establishing its claim. It shows some fundamental misunderstanding of our arguments and contains several incorrect statements.
1. To simplify the discussion, following Allès et al, we will ignore the distinction between $`\ast `$-percolation and percolation; as suggested by their choice of cluster, we will consider the model with standard nearest neighbor action at inverse temperature $`\beta `$ as a model with a Lipschitz constraint $`|s-s^{}|<ϵ`$ for neighboring spins $`s`$, $`s^{}`$ and suitable $`ϵ`$. Then from our 1991 arguments it follows rigorously that as long as the FK-clusters have finite mean size, i.e. in the massive phase, the equatorial cluster of width $`ϵ`$ must percolate. Everybody agrees that at $`\beta =2.0`$ the standard action model has a finite correlation length $`\xi `$ and in fact Allès et al even state the value of $`L/\xi `$. Therefore their finding that the equatorial cluster they considered percolates is nothing but another indication that at $`\beta =2.0`$ the model is still massive. To prove that our 1991 conjecture is false, they would have to show that for any arbitrarily small $`ϵ`$ there exists a finite $`\beta _p(ϵ)`$ such that for any $`\beta >\beta _p(ϵ)`$ the equatorial cluster $`S_ϵ`$ percolates.
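For orientation, the geometry of the equatorial set is easy to quantify in the uncorrelated limit $`\beta =0`$: for independent spins uniform on the unit sphere, $`s_z`$ is uniformly distributed on $`[-1,1]`$ (Archimedes' hat-box theorem), so the band $`|s_z|<ϵ`$ occupies exactly a fraction $`ϵ`$ of the sites, well below the square-lattice site-percolation threshold $`0.593`$ for small $`ϵ`$. Whether a thin band percolates at large $`\beta `$ is therefore entirely a question of correlations, which is precisely the point at issue. A minimal check of the uncorrelated statement (our own illustration, not from either paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N, eps = 2_000_000, 0.3

# Independent spins on the unit sphere: s_z is uniform on [-1, 1], so the
# equatorial band |s_z| < eps has site density exactly eps.
sz = rng.uniform(-1.0, 1.0, size=N)
density = float(np.mean(np.abs(sz) < eps))

assert abs(density - eps) < 2e-3
# Compare with the square-lattice site-percolation threshold ~0.593:
# an uncorrelated band this thin would not percolate.
assert density < 0.593
```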
2. In 1991 we also gave an auxiliary argument: if, contrary to our expectation, an arbitrarily thin equatorial cluster percolated at sufficiently large $`\beta `$, then the $`O(3)`$ symmetry would be broken, since then there would be a much larger equatorial cluster on which the induced $`O(2)`$ model would be in its massless (KT) phase. Allès et al suggest that this argument fails because the percolating cluster they found at $`\beta =2.0`$ is very flimsy, having a fractal dimension less than 2. To support their reasoning, they quote a result of Koma and Tasaki that for $`D<2`$ the KT phase does not exist. The claim that this percolating cluster has a fractal dimension less than 2 is in conflict with their own numbers in Tab.1, showing that this cluster has a nonvanishing density. This is of course not surprising, since for a translation invariant percolation problem in $`2D`$ the (unique) percolating cluster under rather general conditions always has a finite density .
3. In support of the fractal picture, Allès et al point out the fact that the ratio of the perimeter over the area of the cluster does not go to 0 as its size increases. Consider then a regular square lattice on which square holes of size $`L\times L`$ have been made, in such a way that a percolating subset remains. The ratio of its perimeter to its area does not vanish and there should be no doubt that on such a lattice the $`O(2)`$ model has a KT phase for any finite $`L`$.
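The counterexample can be made quantitative. Punching $`L\times L`$ holes periodically while leaving corridors one lattice spacing wide (the corridor width is our choice for illustration), the perimeter-to-area ratio of the surviving set tends to the nonzero constant 2 as the holes grow, yet the corridors plainly support a KT phase for any finite $`L`$. A few lines of counting:

```python
# Periodic pattern: one L x L hole per (L+1) x (L+1) unit cell, i.e.
# corridors of width 1 survive between the holes.
ratios = []
for L in (10, 100, 1000, 10000):
    area = (L + 1) ** 2 - L ** 2      # surviving sites per cell = 2L + 1
    perimeter = 4 * L                 # boundary length of one hole
    ratios.append(perimeter / area)   # = 4L / (2L + 1)

# The perimeter-to-area ratio tends to 2, not to 0, as the holes grow.
assert abs(ratios[-1] - 2.0) < 1e-3
assert all(r > 1.9 for r in ratios[1:])
```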
4. In view of this, we think that even on the ‘flimsy’ percolating cluster found by Allès et al, the $`O(2)`$ model does have a KT phase at low enough temperature. It would be interesting to verify this, even though our argument does not depend on the existence of such a transition on that particular percolating cluster.
# Spin and Charge Excitations in the Two-Dimensional $`t`$-$`J`$ Model: Comparison with Fermi and Bose Systems
## Abstract
Using high temperature series we calculate temperature derivatives of the spin-spin and density-density correlation functions to investigate the low energy spin and charge excitations of the two-dimensional $`t`$-$`J`$ model. We find that the temperature derivatives indicate different momentum dependences for the low energy spin and charge excitations. By comparing short distance density-density correlation functions with those of spinless fermions and hard core bosons, we find that the $`t`$-$`J`$ model results are intermediate between the two cases, being closer to those of hard core bosons. The implications of these results for superconductivity are discussed.
The nature of the ground state and low energy excitations for two-dimensional strongly correlated electrons doped slightly away from half-filling is of considerable interest for understanding the properties of high temperature superconductors. The $`t`$-$`J`$ model on a square lattice is a widely studied model used to investigate these problems. While the properties of a single hole introduced into an antiferromagnet are well understood, how to extend these results to a finite density of holes remains a subject of much current research .
For conventional metals, the electron spectral function is the simplest way to investigate the energy and momentum dependence of the single particle excitations. With high temperature series we cannot easily calculate the spectral function of the 2D $`t`$-$`J`$ model directly. From the high temperature series for the momentum distribution $`n_𝐤`$ we calculated the temperature derivative $`dn_𝐤/dT`$ which we used as a proxy for the momentum dependence of the low energy part of the spectral function. Our results for $`dn_𝐤/dT`$ showed that the low energy excitations of the 2D $`t`$-$`J`$ model are spread throughout the Brillouin zone and are in general not conventional quasiparticles. A consequence of this result is that $`dn_𝐤/dT`$ does not completely determine the momentum dependence of the low energy elementary excitations in the 2D $`t`$-$`J`$ model. To further investigate the nature of the low energy elementary excitations, we have extended our calculations to the equal time spin-spin and density-density correlation functions, $`S(𝐪)`$ and $`N(𝐪)`$ respectively, and their temperature derivatives $`dS(𝐪)/dT`$ and $`dN(𝐪)/dT`$.
We have calculated high temperature series for $`S(𝐪)`$ and $`N(𝐪)`$ of the 2D $`t`$-$`J`$ model to twelfth order in inverse temperature $`\beta =1/k_BT`$. Our calculations extend previous series calculations for the correlation functions of the $`t`$-$`J`$ model. The $`t`$-$`J`$ Hamiltonian is given by
$$H=-tP\underset{ij,\sigma }{\sum }\left(c_{i\sigma }^{\dagger }c_{j\sigma }+c_{j\sigma }^{\dagger }c_{i\sigma }\right)P+J\underset{ij}{\sum }𝐒_i\cdot 𝐒_j,$$
(1)
where the sums are over pairs of nearest neighbor sites and the projection operators $`P`$ eliminate from the Hilbert space states with doubly occupied sites. The definitions of the spin-spin and density-density correlation functions are
$`S(𝐪)={\displaystyle \underset{𝐫}{\sum }}e^{i𝐪\cdot 𝐫}\left\langle S_0^zS_𝐫^z\right\rangle `$ (2)
$`N(𝐪)={\displaystyle \underset{𝐫}{\sum }}e^{i𝐪\cdot 𝐫}\left\langle \mathrm{\Delta }n_0\mathrm{\Delta }n_𝐫\right\rangle ,`$ (3)
where $`S_𝐫^z=\frac{1}{2}\sum _{\alpha \beta }c_{𝐫\alpha }^{\dagger }\sigma _{\alpha \beta }^zc_{𝐫\beta }`$ and $`\mathrm{\Delta }n_𝐫=\sum _\sigma c_{𝐫\sigma }^{\dagger }c_{𝐫\sigma }-n`$. The series are extrapolated to $`T=0.2J`$ by Padé approximants and a ratio technique used previously for $`n_𝐤`$.
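A useful consistency check on any numerical $`S(𝐪)`$ of this form is its Brillouin-zone sum rule: averaging $`S(𝐪)`$ over $`𝐪`$ must return the on-site moment $`\left\langle (S_0^z)^2\right\rangle `$. A toy illustration of this (our own, with a spin $`\pm 1/2`$ on every site, i.e. a fully occupied lattice rather than the paper's $`n=0.8`$), estimating $`S(𝐪)`$ from one random configuration via FFT:

```python
import numpy as np

rng = np.random.default_rng(2)
Lx = 32
# Toy configuration: S^z = +-1/2 on every site of an Lx x Lx lattice.
Sz = 0.5 * rng.choice([-1.0, 1.0], size=(Lx, Lx))

# Translation-averaged estimate of S(q) = sum_r e^{iq.r} <S^z_0 S^z_r>
Sq = np.abs(np.fft.fft2(Sz)) ** 2 / Sz.size

# Sum rule: the Brillouin-zone average of S(q) is the on-site moment 1/4.
assert np.isclose(Sq.mean(), np.mean(Sz**2))
assert np.isclose(Sq.mean(), 0.25)
```

By Parseval's theorem the sum rule holds exactly for the FFT estimator, so the series result quoted later (agreement to within 0.5%) is a nontrivial check on the extrapolation rather than on the estimator.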
To interpret our results for the $`t`$-$`J`$ model we examine $`dN(𝐪)/dT`$ for the tight-binding and spinless fermion models. For these non-interacting models $`dN(𝐪)/dT`$ is given by
$$\frac{dN(𝐪)}{dT}=-g\int \frac{d𝐤}{(2\pi )^2}\left(n_𝐤\frac{dn_{𝐤+𝐪}}{dT}+n_𝐤\frac{dn_{𝐤-𝐪}}{dT}\right),$$
(4)
where $`g=2`$ for tight binding and $`g=1`$ for spinless fermions. The properties of $`dN(𝐪)/dT`$ for the non-interacting models are determined by the convolution of $`n_𝐤`$ with $`dn_{𝐤+𝐪}/dT`$. At low temperatures, due to Fermi statistics $`n_𝐤`$ is large only inside the Fermi surface and $`dn_{𝐤+𝐪}/dT`$ is negative just inside the Fermi surface, positive just outside and close to zero elsewhere. The convolution will then give a significant contribution to the integral in Eq. 4 when $`𝐪`$ is such that only one of the positive or negative parts of $`dn_{𝐤+𝐪}/dT`$ overlaps $`n_𝐤`$. This occurs for $`𝐪\approx 0`$ and $`𝐪\approx 2𝐤_F`$, as demonstrated in Fig. 1 where we plot $`dN(𝐪)/dT`$ for the tight binding and spinless fermion models. The main features of these plots are a large positive spike at $`𝐪\approx 0`$ and a smaller but more extended negative dip located at $`𝐪\approx 2𝐤_F`$. The shape of the $`2𝐤_F`$ line depends on the nature of the Fermi surface (hole like or electron like) but in both cases $`2𝐤_F`$ is a continuous curve in the Brillouin zone.
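The convolution is straightforward to evaluate on a momentum grid. The sketch below does this for a 2D tight-binding band at fixed chemical potential (our simplification: we neglect the temperature dependence of the chemical potential), with the overall sign fixed by writing $`N(𝐪)`$ for free fermions as $`(g/N)\sum _𝐤n_𝐤(1-n_{𝐤+𝐪})`$; it reproduces the positive spike at $`𝐪\approx 0`$ and the negative $`2𝐤_F`$ dip described in the text. Band parameters are illustrative, not the paper's:

```python
import numpy as np

t, mu, T, dT = 1.0, -0.5, 0.1, 0.01   # illustrative band parameters
L = 128
k = 2.0 * np.pi * np.fft.fftfreq(L)
KX, KY = np.meshgrid(k, k, indexing="ij")
eps = -2.0 * t * (np.cos(KX) + np.cos(KY))   # 2D tight-binding dispersion

def fermi(temp):
    return 1.0 / (np.exp((eps - mu) / temp) + 1.0)

nk = fermi(T)
dnk = (fermi(T + dT) - fermi(T - dT)) / (2.0 * dT)   # dn_k/dT at fixed mu

# The two terms of the convolution, sum_k n_k dn_{k -/+ q}, done via FFTs
Fn, Fd = np.fft.fft2(nk), np.fft.fft2(dnk)
c_minus = np.fft.ifft2(Fn * np.conj(Fd)).real / L**2
c_plus = np.fft.ifft2(np.conj(Fn) * Fd).real / L**2

g = 2.0                       # two spin species for tight binding
dNdT = -g * (c_plus + c_minus)

assert dNdT[0, 0] > 0         # positive spike at q ~ 0
assert dNdT.min() < 0         # negative dip along the 2k_F line
```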
For the $`t`$-$`J`$ series calculations the $`𝐪\approx 0`$ (long range) parts of the correlation functions have the least accuracy, while we expect the correlation functions at larger wavevectors (short range) to be well determined. Consequently, we concentrate in our analysis on the locations and properties of the “$`2𝐤_F`$” features in the $`t`$-$`J`$ model correlation functions. Using the non-interacting models as guides, we search for the “$`2𝐤_F`$” features in the $`t`$-$`J`$ correlation functions by looking for the largest negative values of $`dN(𝐪)/dT`$ and $`dS(𝐪)/dT`$.
Results for the $`t`$-$`J`$ model $`N(𝐪)`$ and $`S(𝐪)`$ with electron density $`n=0.8`$, $`J/t=0.4`$ and $`T=0.2J`$ are shown in Fig. 2. Our data are in good agreement with previous calculations at higher temperatures. The Brillouin zone sums of the correlation functions agree with their respective sum rules to within 0.5%. Using data at $`T=0.2J`$ and $`T=0.4J`$ we calculate $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ and $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ as approximations for the temperature derivatives at $`\overline{T}=0.3J`$.
Results for $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ are shown in Fig. 3. The peak at $`𝐪\approx 0`$ is considerably broader than for the non-interacting models, in agreement with the temperature dependence observed in previous calculations. The only part of the Brillouin zone where $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ is negative is a roughly circular region of approximate radius $`0.6\pi `$ centered on ($`\pi `$, $`\pi `$). We note that even at the lowest temperature accessible to us, the spin correlations are still peaked at ($`\pi ,\pi `$). The correlation length around ($`\pi ,\pi `$) appears to be saturating or perhaps even decreasing with lowering $`T`$ . This could be taken as an indication for incommensurate correlations at still lower temperatures, as the peak at ($`\pi `$, $`\pi `$) splits into several distinct peaks. Experimentally, for the high temperature superconducting materials, incommensurate spin-correlations only arise below about $`100`$ K .
The negative feature in $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ does not form a closed curve in the Brillouin zone. In particular, $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ remains positive from (0, 0) to ($`\pi `$, 0), with no indication of a Fermi surface in the low energy spin excitations along this line. Interpreting part of the negative feature in $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ as due to an underlying momentum distribution of itinerant spin degrees of freedom gives disconnected arcs of low energy spin excitations centered near ($`\pi /2`$, $`\pi /2`$) and extending perpendicular to the zone diagonals. These features are located near the peaks observed in $`dn_𝐤/dT`$ and $`|\nabla _𝐤n_𝐤|`$, consistent with the strongest features in $`n_𝐤`$ being due to an underlying spinon Fermi surface.
Results for $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ are shown in Fig. 4. The peak for $`𝐪\approx 0`$ is much sharper than for $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ and more like the non-interacting models. The negative feature in $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ does make a closed curve in the Brillouin zone, in contrast to $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$. The location and shape of the negative feature in $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ are similar to the $`2𝐤_F`$ line in $`dN(𝐪)/dT`$ of the spinless fermion model at the same density. The similarity extends to having the strongest negative feature in both models near ($`\pi `$, 0). In the spinless fermion model this is due to parts of the $`2𝐤_F`$ curve overlapping after being translated back into the first zone. The momentum width of the negative feature in $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ for the $`t`$-$`J`$ model is considerably broader than the $`2𝐤_F`$ line for spinless fermions and this width is not temperature dependent down to $`T=0.2J`$. Interpreting the negative feature in $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ for the $`t`$-$`J`$ model as arising from an underlying momentum distribution for the charge degrees of freedom gives low energy charge excitations smeared out over a range of momenta near $`𝐤_F`$ of spinless fermions at the same density. For $`n=0.8`$ the charge excitations are centered on the zone diagonals and away from ($`\pi `$, 0).
The low temperature momentum dependence of $`N(𝐪)`$ for the $`t`$-$`J`$ model, while in general similar to spinless fermions, has important differences from spinless fermions at large momenta. As shown in Fig. 2, $`N(𝐪)`$ for the $`t`$-$`J`$ model near ($`\pi `$, $`\pi `$) and ($`\pi `$, 0) is slightly smaller than $`0.2`$, the value of $`N(𝐪)`$ for spinless fermions when $`𝐪>2𝐤_F`$. This means that holes in the $`t`$-$`J`$ model have a greater tendency to be nearest or next-nearest neighbors than in the spinless fermion model, with the effect largest for nearest neighbors. This tendency has also been observed in exact diagonalization and Green’s function Monte Carlo calculations, in both of which the effect is much larger than observed in the series calculation. This enhancement is probably due to finite size effects, though more work is needed to fully understand the differences between the series calculations and the Green’s function Monte Carlo and exact diagonalization calculations.
Fig. 5 shows the temperature dependence of $`N(𝐪)`$ at ($`\pi `$, 0) and ($`\pi `$, $`\pi `$) for the $`t`$-$`J`$, spinless fermion and hard core boson models. All of these models have an infinite on-site repulsion which sets the overall scale for $`N(𝐪)`$. The hard core boson results shown in Figs. 5 and 6 are derived from a twelfth order high temperature series for $`N(𝐪)`$ of hard core bosons. The ground states of spinless fermions and hard core bosons are known: Fermi sea with a well defined Fermi surface for spinless fermions and a superfluid for hard core bosons. The data for the $`t`$-$`J`$ model lie between these two cases, leaving open the possibility that the $`t`$-$`J`$ model has a superfluid ground state. The real space nearest neighbor hole-hole correlation function $`\left\langle \delta _0\delta _{(1,0)}\right\rangle `$ further supports this behavior. Fig. 6 shows the temperature dependence of $`\left\langle \delta _0\delta _{(1,0)}\right\rangle `$ for the $`t`$-$`J`$, spinless fermion and hard core boson models. Again, at low temperatures the $`t`$-$`J`$ data is between the spinless fermion and hard core boson results.
The temperature dependences shown in Figs. 5 and 6 are also interesting. At high temperatures all three models have similar values, with the $`t`$-$`J`$ data very close to spinless fermions. As the temperature is lowered below $`T<1.5J`$ (for $`J/t=0.4`$) the $`t`$-$`J`$ data deviates from spinless fermions towards hard core bosons. This temperature scale is too high to be due to coherent spin fluctuations. Also, the $`t`$-$`J`$ results are only weakly dependent on $`J/t`$ for $`J/t<0.5`$ and persist to $`J/t=0`$. This shows that the enhancement of $`\left\langle \delta _0\delta _{(1,0)}\right\rangle `$ in the $`t`$-$`J`$ model relative to spinless fermions is due to the presence of two spin species, but not due to a direct spin interaction.
For larger $`J/t`$, in the phase separation parameter regime, the nearest neighbor density correlation grows rapidly as the temperature is lowered. This latter increase is clearly due to an attraction between the holes mediated by $`J`$. In contrast to this, the high temperature deviation from spinless fermions towards hard core bosons appears to be a statistical effect. This is, perhaps, analogous to the Hanbury-Brown and Twiss correlation well known for quantum particles . Thus it appears that strong on-site repulsion tends to make the charge degrees of freedom in the $`t`$-$`J`$ model similar to hard core bosons at a high temperature scale. This would suggest a superfluid ground state for the model, where the spin degrees of freedom merely help to choose the symmetry of the superfluid state. For antiferromagnetic spin correlations d-wave might be favored, while for ferromagnetic spin correlations p-wave symmetry could be favored. These ideas find some support in two-hole calculations, but require considerable further investigation.
In conclusion, from series calculations we find that $`\mathrm{\Delta }S(𝐪)/\mathrm{\Delta }T`$ and $`\mathrm{\Delta }N(𝐪)/\mathrm{\Delta }T`$ for the 2D $`t`$-$`J`$ model are quite different, giving further support to non-quasiparticle elementary excitations in 2D strongly correlated systems. Interpreting these results as due to itinerant degrees of freedom with underlying momentum distributions gives low energy spin and charge excitations near the zone diagonals, but with different momentum dependences, and absent near ($`\pi `$, 0). The charge correlations show considerable similarity with hard-core bosons, which is suggestive of a superfluid ground state.
This work was supported in part by a faculty travel grant from the Office of International Studies at The Ohio State University (WOP), the Swiss National Science Foundation (WOP), an ITP Scholar position under NSF grant PHY94-07194 (WOP), EPSRC Grant No. GR/L86852 (MUL) and by NSF grant DMR-9616574 (RRPS). We thank the ETH-Zürich (WOP) and the ITP at UCSB (WOP, RRPS) for hospitality while this work was being completed.
# Interdependence of Magnetism and Superconductivity in the Borocarbide TmNi2B2C
## Abstract
We have discovered a new antiferromagnetic phase in TmNi<sub>2</sub>B<sub>2</sub>C by neutron diffraction. The ordering vector is $`𝑸_A=(0.48,0,0)`$ and the phase appears above a critical in-plane magnetic field of 0.9 T. The field was applied in order to test the assumption that the zero-field magnetic structure at $`𝑸_F=(0.094,0.094,0)`$ would change into a $`c`$-axis ferromagnet if superconductivity were destroyed. We present theoretical calculations which show that two effects are important: A suppression of the ferromagnetic component of the RKKY exchange interaction in the superconducting phase, and a reduction of the superconducting condensation energy due to the periodic modulation of the moments at the wave vector $`𝑸_A`$.
PACS numbers: 74.70.Dd, 75.25.+z, 74.20.Fg
The interplay between magnetism and superconductivity is inherently of great interest since the two phenomena represent ordered states which are mutually exclusive in most systems. Therefore, the borocarbide intermetallic quaternaries with stoichiometry (RE)Ni<sub>2</sub>B<sub>2</sub>C have attracted great attention since the publication of their discovery in 1994 , as they exhibit coexistence of magnetism and superconductivity if the rare earth (RE) is either Dy, Ho, Er or Tm. The magnetic moments in these compounds are due to the localized $`4f`$ electrons of the rare-earth ions. The $`4f`$ and the itinerant electrons are coupled weakly by the exchange interaction, resulting in the indirect Ruderman-Kittel-Kasuya-Yoshida (RKKY) interaction between the $`4f`$-moments. Thus the RKKY interaction, which is decisive for the cooperative behavior of the magnetic electrons, depends on the state of the metallic ones.
The borocarbides have a tetragonal crystal structure with space group I4/mmm, and TmNi<sub>2</sub>B<sub>2</sub>C has a superconducting critical temperature $`T_c=11`$ K and a Néel temperature $`T_N=1.5`$ K. The crystalline electric field aligns the thulium moments along the $`c`$ axis, and the magnetic structure has a short fundamental ordering vector $`𝑸_F=(0.094,0.094,0)`$ with several higher-order odd harmonics. In the magnetic structures detected in the other systems, the moments of the rare-earth ions are confined to the basal plane, and they have short-wavelength antiferromagnetically ordered states. For example, they are commensurate with a propagation vector $`𝑸=(0,0,1)`$ for RE = Ho and Dy, or incommensurate with $`𝑸\approx (0.55,0,0)`$ for RE = Gd, Tb, Ho and Er. An especially tight coupling between magnetism and superconductivity has been clearly demonstrated in TmNi<sub>2</sub>B<sub>2</sub>C, where a magnetic field applied along the $`c`$ axis induced concurrent changes of the magnetic structure and the symmetry of the flux line lattice. Similar effects have not yet been observed in any of the other borocarbides.
One question that immediately arises, and which we believe is important for the general understanding of the interaction between superconductivity and magnetism, is why the long-period magnetic ordering found in TmNi<sub>2</sub>B<sub>2</sub>C is stable. Band-structure calculations on the normal state of non-magnetic LuNi<sub>2</sub>B<sub>2</sub>C predict a maximum in the conduction-electron susceptibility $`\chi (𝒒)`$ at $`𝒒\approx (0.6,0,0)`$ supported by Fermi-surface nesting. The RKKY interaction is proportional to the magnetic susceptibility of the electron gas, and the position of its maximum, determining the magnetic ordering vector, is expected to be nearly the same as in $`\chi (𝒒)`$. This is in agreement with the experimental findings for many of the magnetic borocarbides, but not for TmNi<sub>2</sub>B<sub>2</sub>C.
The use of the band structure of the normal state ignores that the magnetic interactions are mediated by a superconducting medium. The BCS ground state is a singlet and apart from the interband scattering the electronic susceptibility is therefore zero at $`𝒒=\mathrm{𝟎}`$ and zero temperature. It will recover its normal-state value only when $`q\gtrsim 10\xi ^{-1}`$, where $`\xi `$ is the coherence length of the superconductor, as first pointed out by Anderson and Suhl. This raises the possibility that a local maximum in the susceptibility is shifted from zero in the normal phase to a non-zero value of $`q`$ in the superconducting phase. In the case of the free electron gas, this maximum is very shallow and occurs at a relatively large $`q`$. If, however, interband effects (or umklapp processes in the free electron model) are included the situation is changed. We shall write the RKKY interaction in the superconducting phase as $`𝒥(𝒒)=I[\chi _0^s(𝒒)+\chi _u(𝒒)]`$. $`\chi _0^s(𝒒)`$ is the contribution from the intraband scattering near the Fermi surface, which is sensitive to the superconducting energy gap. We have determined this function numerically in the zero temperature limit by using a linear expansion in $`q`$ of the band electron energies near the Fermi surface. A simple fit to the result is $`\chi _0^s(q)=0.99q[q+1.5\xi ^{-1}]^{-1}\chi _0^n(0)`$, where $`\xi =\mathrm{\hbar }v_F/(\pi \mathrm{\Delta })`$ is the coherence length. The superconducting energy gap is negligible compared to the band splittings, and the contribution $`\chi _u(𝒒)`$ from the interband scattering is therefore not affected by the state of the conduction electrons. At small $`q`$ we may assume: $`\chi _u(q)=\chi _0^n(0)(\alpha -Aq^2)`$, where $`A`$ is a constant normal-state quantity, which is expected to be some few times $`(2\pi /a)^{-2}`$, where $`a`$ is the lattice parameter. The local maximum in $`𝒥(q)`$, at $`q=0`$ in the normal phase, will in the superconducting phase appear at $`q_0\simeq [(4/3)A\xi ]^{-1/3}`$.
This is the same dependence on $`\xi `$ as found by Anderson and Suhl, but the coefficient does not depend explicitly on $`k_F`$ and is somewhat smaller. With $`A(2\pi /a)^2`$ lying between 1 and 10, $`q_0`$ assumes a value between 0.084 and 0.18 times $`2\pi /a`$, which is consistent with the magnitude of the observed ordering vector, $`Q_F=0.13(2\pi /a)`$.
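As a numerical check of this estimate (a sketch, not part of the paper), one can evaluate $`q_0=[(4/3)A\xi ]^{-1/3}`$ over the quoted range of $`A`$. The lattice parameter $`a=3.5`$ Å is an assumed value, roughly appropriate for the borocarbides; $`\xi =71`$ Å is the fitted coherence length quoted later in the text:

```python
import math

a = 3.5      # basal-plane lattice parameter in angstrom (assumed value)
xi = 71.0    # coherence length in angstrom, the fitted value quoted in the text

def q0(alpha):
    """q0 = [(4/3) A xi]^(-1/3) in units of 2*pi/a, with A = alpha*(2*pi/a)^(-2)."""
    A = alpha * (2 * math.pi / a) ** -2
    return ((4.0 / 3.0) * A * xi) ** (-1.0 / 3.0) / (2 * math.pi / a)

print(q0(1.0), q0(10.0))   # approximately 0.18 and 0.084, in units of 2*pi/a
```

With these assumed inputs the two limits of the quoted range, 0.084 and 0.18 times $`2\pi /a`$, are reproduced.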
The existence of a local maximum in the RKKY interaction is not sufficient for stabilizing a magnetic structure with that wavelength. The total free energy of the entire system, including the condensation energy of the superconducting electrons, has to be minimal. The presence of a magnetic modulation at $`𝑸`$ may influence and weaken the superconductivity, for instance through a reduction of the density of states at the Fermi surface due to the superzone energy gaps created by the periodic modulation of the moments. This effect depends strongly on $`𝑸`$ and may be an important factor in the competition between alternative magnetic structures.
In this letter we report the results of neutron diffraction studies on TmNi<sub>2</sub>B<sub>2</sub>C augmented by theoretical calculations of the competition between the magnetic and superconducting states. With the objective of suppressing superconductivity while the $`c`$-axis components of the magnetic moments are still ordered, a magnetic field was applied in the basal plane. This allows a study of the magnetic state as it would be in the absence of superconductivity. The experimental results are described in detail below. Briefly we found that the system does not form a ferromagnetic state when superconductivity is suppressed, as one might expect from the arguments presented above, but instead it relaxes into an incommensurate antiferromagnetic state with $`𝑸=𝑸_A=(0.482,0,0)`$, resembling the magnetic ordering most commonly found in the borocarbides.
The experiments were performed on a $`2\times 2\times 0.2`$ mm<sup>3</sup> single crystal of TmNi<sub>2</sub>B<sub>2</sub>C grown by a high-temperature flux method and isotopically enriched with <sup>11</sup>B to enhance the neutron transmission. The sample was mounted on a copper rod in a dilution refrigerator insert for the low-temperature measurements at $`T<1.7`$ K, and on a standard sample stick for measurements above 1.7 K. In both cases the sample was oriented with the $`a`$\- and $`b`$-axes in the scattering plane and placed in a $`1.8`$ T horizontal-field magnet aligned along the $`a`$ axis. Measurements were performed with applied fields up to the maximum of $`1.8`$ T, and at temperatures between 20 mK and 6 K. The neutron diffraction experiments were performed at the TAS7 triple-axis spectrometer on the cold neutron beam line at the DR3 research reactor at Risø National Laboratory. The measurements were carried out with $`12.75`$ meV neutrons, pyrolytic graphite (004) monochromator and (002) analyzer crystals, and in an open geometry without collimation. The effective beam divergence before and after the sample was 60 and 120 arc minutes, respectively.
Fig. 1 Normalized integrated intensities versus temperature of the field-induced magnetic peaks at $`1.2`$ T (circles), $`1.4`$ T (triangles) and $`1.8`$ T (squares). The linear fit to the $`1.8`$ T data shows the determination of $`T_N(B)`$. Inset: Scan along the $`[h\mathrm{\hspace{0.17em}0\hspace{0.17em}0}]`$ direction at $`T=100`$ mK and $`1.8`$ T, showing the field-induced satellite peaks at $`𝑸_A=(0.482,0,0)`$ around the $`(\mathrm{0\hspace{0.17em}0\hspace{0.17em}0})`$ and $`(\mathrm{2\hspace{0.17em}0\hspace{0.17em}0})`$ nuclear reflections. The peak intensity of the $`(\mathrm{2\hspace{0.17em}0\hspace{0.17em}0})`$ reflection is 800.
In the inset to Fig. 1 is shown the result of a scan along $`[\mathrm{1\hspace{0.17em}0\hspace{0.17em}0}]`$ at $`T=100`$ mK and $`B=1.8`$ T. In addition to the nuclear $`(\mathrm{2\hspace{0.17em}0\hspace{0.17em}0})`$ Bragg reflection, the figure shows satellite peaks at $`(\mathrm{0\hspace{0.17em}0\hspace{0.17em}0})+𝑸_A`$ and $`(\mathrm{2\hspace{0.17em}0\hspace{0.17em}0})-𝑸_A`$ with $`𝑸_A=(0.482,0,0)`$. These satellite peaks are not observed in zero applied field, either below or above $`T_N`$, and they do not appear until the field is larger than 0.9 T at 100 mK. Additional magnetic satellites were detected at $`(\mathrm{1\hspace{0.17em}1\hspace{0.17em}0})\pm 𝑸_A`$ and $`(\mathrm{0\hspace{0.17em}2\hspace{0.17em}0})\pm 𝑸_A`$. No higher order harmonics of the field-induced satellites were observed, and their intensities are consistent with the magnitude and direction of the magnetic moments remaining equal to $`3.8\mu _B`$ and being essentially parallel to the $`c`$ axis. At all temperatures and fields where the $`𝑸_A`$ peaks were observed, they stayed resolution limited. Likewise no field or temperature dependence of $`𝑸_A`$ was found. Quite remarkably, the field-induced peaks are only observed for the wave vector $`𝑸_A\parallel 𝑩`$ and not for $`𝑸_A\perp 𝑩`$, hence the lowering of the in-plane four-fold symmetry of the system produced by the field has a direct consequence on the direction of $`𝑸_A`$.
Fig. 2 Experimental and theoretical phase diagram of TmNi<sub>2</sub>B<sub>2</sub>C in a magnetic field along $`[\mathrm{1\hspace{0.17em}0\hspace{0.17em}0}]`$. The medium-gray area denotes the region where both the $`𝑸_A`$ and the $`𝑸_F`$ reflections are present. In the dark-gray area only the $`𝑸_A`$ reflections were observed, up to the maximum applied field of 1.8 T. The light-gray area denotes the region where the long tail of low-intensity magnetic scattering at $`𝑸_A`$ is still observed. The squares denote the measured phase boundary between the $`𝑸_A`$ phase and the paramagnetic one, $`T_N(B)`$, determined by the procedure described in the text. The open circles denote the upper critical field determined by transport measurements . The solid lines are the theoretical phase boundaries. The dashed line is the calculated Néel temperature of the $`𝑸_A`$ phase had the metal stayed in the normal state. The thin line labeled $`B_{c2}^0(T)`$ is the estimated upper critical field if the magnetic subsystem is neglected.
Concurrently with the appearance of the field-induced peaks the intensity of the zero-field magnetic reflections with scattering vector $`𝑸_F`$ decreases and finally vanishes at $`1.4`$ T and 100 mK. Between 0.9 and $`1.4`$ T the magnetic structures at $`𝑸_F`$ and $`𝑸_A`$ coexist. The length of $`𝑸_F`$ does not change for applied magnetic fields up to 0.9 T. Above 0.9 T a small reduction of $`|𝑸_F|`$ is observed, simultaneously with the appearance of the field-induced magnetic reflection at $`𝑸_A`$. The reduction is at the most $`3\%`$, just before the peaks vanish at $`1.4`$ T.
In the main body of Fig. 1 we show the temperature dependence of the integrated intensity of the field-induced magnetic reflections for three different values of the applied field. This shows that the intensity of the peaks increases with increasing field at a constant temperature. The results at the different values of the field show qualitatively the same temperature dependence, a rapid linear decrease when the temperature is above $`0.5`$ K and a cross-over into a long tail at about 2 K. At the maximum field of 1.8 T the tail extends up to a temperature of four times the zero-field value of $`T_N`$. For each value of the field, $`T_N(B)`$ is defined as the extrapolation to zero of the linear part of the integrated intensity, as shown in the figure for the case of 1.8 T. The experimental data are summarized in the phase diagram in Fig. 2.
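The linear-extrapolation definition of $`T_N(B)`$ amounts to a least-squares line through the linear part of the intensity curve, with the zero crossing taken as the transition temperature. The sketch below illustrates the procedure on synthetic data (a made-up linear intensity curve with $`T_N=2`$ K, not the measured points):

```python
def intercept_at_zero(T, I):
    """Least-squares line through (T, I); return the temperature where it
    extrapolates to zero intensity, i.e. the estimate of T_N(B)."""
    n = len(T)
    mT, mI = sum(T) / n, sum(I) / n
    slope = (sum((t - mT) * (y - mI) for t, y in zip(T, I))
             / sum((t - mT) ** 2 for t in T))
    return mT - mI / slope          # solve intercept + slope*T = 0

# Synthetic 'linear part' of an intensity curve with T_N = 2.0 K (illustrative).
T = [0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
I = [5.0 * (2.0 - t) for t in T]
print(intercept_at_zero(T, I))      # recovers T_N = 2.0 K (up to rounding)
```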
The theoretical phase diagram is calculated using the following parameters: 1) The crystal-field Hamiltonian of the Tm ions determined from experiments. 2) A phenomenological RKKY interaction $`𝒥(𝒒)`$ with two parameters $`𝒥(𝑸_F)`$ and $`𝒥(𝑸_A)`$. The normal-state value of $`𝒥(\mathrm{𝟎})\simeq 𝒥(𝑸_F)`$. 3) Abrikosov’s formula for the condensation energy of the superconducting state
$$F_s-F_n=-\{B_{c2}^0(T)-B_i\}^2[1.16\cdot 8\pi (2\kappa ^2-1)]^{-1},$$
(1)
where $`B_i`$ is the internal magnetic field, corrected for the uniform magnetization and demagnetization effects. In the present system $`B_i`$ is about 10% larger than the applied field. $`B_{c2}^0(T)`$ is the upper critical field if the coupling to the magnetic electrons is neglected. Its dependence on $`T/T_c`$ is assumed to be the same as observed in the non-magnetic Lu borocarbide. $`B_{c2}^0(T=0)`$ has been used as a fitting parameter, and its final value of 6.5 T (corresponding to $`\xi =71`$ Å) is close to that obtained when scaling the values of $`B_{c2}^0(0)`$ in the non-magnetic Lu and Y borocarbides with $`T_c`$. The other fitting parameter in Eq. (1) is the (renormalized) value of $`\kappa `$, which is found to be 6.3, close to that determined experimentally by Cho et al. 4) The coupling between the magnetic system and the superconducting electrons is described by two parameters: a) The Anderson–Suhl reduction of $`𝒥(\mathrm{𝟎})`$ in the superconducting phase at zero temperature, which is found to be close to the value of $`𝒥(\mathrm{𝟎})`$ itself. The thermally excited quasiparticles imply that the reduction is smaller at finite temperatures. This effect is included in the calculations. b) The suppression of the superconductivity, which is assumed to be due to the superzone energy gaps near the Fermi surface produced by the magnetic ordering at $`𝑸_A`$. The density of states at the Fermi surface is reduced proportionally to the sizes of the energy gaps, which are themselves proportional to the amplitude of the magnetic modulation. The reduction of the density of states causes a decrease of $`B_{c2}^0(T)`$ in Eq. (1), which is estimated to be at most 40%.
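For concreteness, the condensation energy of Eq. (1) can be evaluated with the fitted values quoted here ($`B_{c2}^0(0)=6.5`$ T, $`\kappa =6.3`$). This sketch assumes the numerical prefactor is the Abrikosov factor 1.16 times $`8\pi `$; it illustrates the quadratic vanishing of $`F_s-F_n`$ as $`B_i`$ approaches $`B_{c2}^0`$:

```python
import math

kappa = 6.3     # fitted (renormalized) Ginzburg-Landau parameter, from the text
Bc2_0 = 6.5     # fitted upper critical field B_c2^0(T=0) in tesla, from the text

def condensation_energy(B_i, Bc2=Bc2_0):
    """Evaluate F_s - F_n = -(Bc2 - B_i)^2 / [1.16 * 8*pi * (2*kappa^2 - 1)].

    Only relative magnitudes matter here; the units follow the (field)^2
    units of the numerator.
    """
    if B_i >= Bc2:
        return 0.0                  # normal phase: no condensation energy
    return -(Bc2 - B_i) ** 2 / (1.16 * 8 * math.pi * (2 * kappa ** 2 - 1))

print(condensation_energy(0.0), condensation_energy(6.0))
```

The energy gain of the superconducting state is largest in zero field and shrinks quadratically toward zero at the upper critical field, which is why a magnetically driven reduction of $`B_{c2}^0`$ directly weakens the balance against the magnetic exchange energy.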
The model calculations account reasonably well for the experimental phase boundaries in Fig. 2, and equally well for the phase diagram when the field is applied in the $`c`$ direction, as measured by Eskildsen et al. The large anisotropy between the experimental upper critical field along the $`a`$ and along the $`c`$ axis is not an internal property of the superconducting electron system, but is due to the large difference between the magnetic $`a`$\- and $`c`$-axis susceptibility of the Tm ions. Although the transformation of the $`𝑸_F`$ state into the $`c`$-axis ferromagnet did not occur, it is clear that the Anderson–Suhl mechanism plays an important role for the behavior of TmNi<sub>2</sub>B<sub>2</sub>C, see also the discussion by Kulić et al. The loss in the superconducting condensation energy at the field-induced transition to the normal state is compensated for by the gain in the RKKY exchange energy deriving from the uniform magnetization, because of the sudden increase of $`𝒥(\mathrm{𝟎})`$. This mechanism explains the large reduction of the upper critical field shown in Fig. 2, and the even larger reduction in the $`c`$-axis phase diagram. Most significantly, it explains why the $`𝑸_F`$ state is stable up to a field of 1 T applied along the $`c`$ axis, i.e. as long as the system stays superconducting. In the case of a normal behavior of $`𝒥(𝒒)`$, the $`c`$-axis field would destroy this phase almost immediately.
Counterintuitively, the short-wavelength magnetic modulation in the $`𝑸_A`$ phase affects the superconducting state much more strongly than the long-wavelength $`𝑸_F`$-modulation, but it should be noted that the wavelengths of the two structures are both much shorter than $`\xi `$. The model predicts that in a normal magnetic system the $`𝑸_A`$ phase would be the stable one at low temperatures and fields. However, the superzone energy gaps produced by the $`𝑸_A`$-modulation disturb the nesting features of the Fermi surface close to this wave vector, causing a strong suppression of the superconductivity in the Tm system. This effect is also found in measurements on, e.g., the Ho-based borocarbide, where superconductivity is suppressed when the magnetic system enters the $`𝑸=(0.55,0,0)`$ phase and is regained when the system leaves this phase at a yet lower temperature. The Er borocarbide is the only one where the $`(0.55,0,0)`$-phase does not seem to strain the superconducting properties severely, although the upper critical field shows similarities with the observations in the Tm system. The reason that the $`𝑸_F`$ phase does not appear in, for instance, the Er system may simply be that the RKKY interaction, being proportional to $`[(g-1)J]^2`$, is stronger in Er than in Tm, making the energy difference between the two different magnetic phases too large in comparison with the superconducting condensation energy.
Improvement of the present theory is planned in order to account for the inhomogeneities in the superconducting order parameter due to the flux lines. It is difficult to understand how the low-intensity long tail of the $`𝑸_A`$ reflections may extend up to a temperature which is twice the estimated transition temperature of the $`𝑸_A`$-phase in a normal system (the dashed line in Fig. 2). The phase may survive within the normal core of the flux lines, but not very far above the normal-phase transition temperature. Further experiments are planned at higher fields in order to test the theoretical predictions, e.g. that $`T_N(B)`$ coincides with the maximum in the in-plane upper critical field, to investigate in more detail the magnetic scattering in the long-tail regime, and to search for the $`𝑸_A`$-phase in the $`c`$-axis phase diagram.
In this letter we have proposed that the long wavelength magnetic structure of TmNi<sub>2</sub>B<sub>2</sub>C below 1.5 K owes its very existence to the fact that the material is superconducting. This was shown experimentally by applying an in-plane magnetic field, which in this Ising-like system primarily affects the superconducting state. We have found that the magnetic system enters a new phase at a critical field of 0.9 T where the order is at short wavelength, $`𝑸_A=(0.48,0,0)`$. We have presented theoretical calculations showing that the interplay between superconductivity and magnetism in the borocarbides is governed by two mechanisms: 1) a suppression of the ferromagnetic component of the RKKY exchange interaction in the superconducting phase, and 2) a reduction of the superconducting condensation energy from the periodic modulation of the moments at the wave vector $`𝑸_A`$.
We thank D. G. Naugle and K. D. D. Rathnayaka for sharing their data prior to publication. This work is supported by the Danish Technical Research Council and the Danish Energy Agency. P.C.C. is supported by the Director of Energy Research, Office of Basic Energy Science under contract W-7405-Eng.-82. |
no-problem/9912/cond-mat9912450.html | ar5iv | text | # Simple Models of the Protein Folding Problem
## 1 Introduction
The word “protein” originates from the Greek word proteios, which means “of the first rank”. Indeed, proteins are the building blocks and functional units of all biological systems. They play crucial roles in virtually all biological processes. Their diverse functions include enzymatic catalysis, transport and storage, coordinated motion, mechanical support, signal transduction, control and regulation, and immune response. A protein consists of a chain of amino acids whose sequence is determined by the information in DNA/RNA. Nature uses 20 amino acids to make up proteins, which differ in size and in other physical and chemical properties. The most important difference, however, as far as the determination of the structure is concerned, is their hydrophobicity, i.e., how much they dislike water. Under normal physiological conditions, an open protein chain will fold into a three-dimensional configuration to perform its function. This folded functional state of the protein is called the native state. For the single-domain globular proteins which are our focus here, the length of the chain is of the order of 100 amino acids (from $`30`$ to $`400`$). Proteins with longer chains usually form multiple domains, each of which can usually fold independently. In Fig. 1 is shown the globular protein flavodoxin, whose function is to transport electrons. Like most water-soluble single-domain globular proteins, it is very compact with a roughly rounded shape. The folded geometry of the chain is best viewed in the cartoonish ribbon diagram (Fig. 1c) of the backbone configuration (Fig. 1b). One can see that the geometry of this protein structure is several $`\alpha `$-helices sandwiching a $`\beta `$-sheet. The folded geometries of proteins, often referred to as folds, usually look far more regular than random, typically possessing secondary structures (e.g., $`\alpha `$-helices and $`\beta `$-sheets) and sometimes even having tertiary symmetries. 
(One can recognize an approximate mirror symmetry in Fig. 1c.) One of the main goals of the protein folding problem is to predict the three-dimensional folded structure for a given sequence of amino acids.
The protein folding problem is the kind of biological problem that has an immediate appeal to physicists. A protein can fold (to its native state) and unfold (to a flexible open chain) reversibly by changing the temperature, pH, or the concentration of some denaturant in solution. The study of denaturation of proteins can be traced back at least 70 years, to when Wu pointed out that denaturation was in fact the unfolding of the protein from “the regular arrangement of a rigid structure to the irregular, diffuse arrangement of the flexible open chain”. A turning point was the work of Anfinsen on the so-called “thermodynamic hypothesis” in the late 50s and early 60s. Anfinsen and later many others demonstrated that for single-domain proteins 1) the information coded in the amino acid sequence of a protein completely determines its folded structure, and 2) the native state is the global minimum of the free energy. These conclusions should be somewhat surprising to physicists, for the configurational “(free) energy landscape” of a heteropolymer of the size of a protein is typically “rough”, in the sense that there are typically many metastable states, some of which have energies very close to the global minimum. How could a protein always fold into its unique native state with the lowest energy? The answer is evolution. Indeed, random sequences of amino acids are usually “glassy” and usually cannot fold uniquely. But natural proteins are not random sequences. They are a small family of sequences, selected by nature via evolution, that have a distinct global minimum well separated from other metastable states (Fig. 2). One might ask: what are the unique and yet common properties of this special ensemble of proteinlike sequences? In other words, can one distinguish them from other sequences without the arguably impossible task of constructing the entire energy landscape? 
The answer lies at the heart of the question we introduce in the next paragraph and is the focus of this discussion.
There are about 100,000 different proteins in the human body. The number is much larger if we consider all natural proteins in the biological world. Protein structures are classified into different folds. Proteins of the same fold have the same major secondary structures in the same arrangement with the same topological connections, with some small variations typically in the loop regions. So in some sense, folds are distinct templates of protein structures. Proteins with a close evolutionary relation often have high sequence similarity and share a common fold. What is intriguing is that common folds occur even for proteins with different evolutionary origins and biological functions. The number of folds is therefore much smaller than the number of proteins. Shown in Fig. 3 is the cumulative number of solved protein domains along with the cumulative number of folds as a function of the year. It is increasingly less likely that a newly solved protein structure will take a new fold. It is estimated that the total number of folds for all natural proteins is only about 1000. Some of the frequently observed folds, or “superfolds”, are shown in Fig. 4. Among the apparent features of these folds are secondary structures, regularities, and symmetries. Therefore, as in the case of sequences, protein structures or folds are also a very special class. One might ask: Is there anything special about natural protein folds–are they merely an arbitrary outcome of evolution, or is there some fundamental reason behind their selection? Is the selection of protein structures coupled with the selection of protein sequences? We will now address these questions via a thorough study of simple models.
## 2 Simple Models and the Designability
The dominant driving force for protein folding is the so-called hydrophobic force. The 20 amino acids differ in their hydrophobicity and can be very roughly classified into two groups: hydrophobic and polar. Hydrophobic amino acids have greasy side chains made of hydrocarbons and tend to stick together in water to minimize their contact with it. Polar amino acids have polar groups (with oxygen or nitrogen) in their side chains and do not mind contact with water so much. The simplest model of protein folding is the so-called “HP lattice model”, whose structures are defined on a lattice and whose sequences take only two “amino acids”: H (hydrophobic) and P (polar) (see Fig. 5). The energy for a sequence folded into a structure is simply given by the short-range contact interactions
$$H=\sum _{i<j}e_{\nu _i\nu _j}\mathrm{\Delta }(𝐫_i-𝐫_j),$$
(1)
where $`\mathrm{\Delta }(𝐫_i-𝐫_j)=1`$ if $`𝐫_i`$ and $`𝐫_j`$ are adjoining lattice sites but $`i`$ and $`j`$ are not adjacent in position along the sequence, and $`\mathrm{\Delta }(𝐫_i-𝐫_j)=0`$ otherwise. Depending on the types of monomers in contact, the interaction energy $`e_{\nu _i\nu _j}`$ will be $`e_{\mathrm{HH}}`$, $`e_{\mathrm{HP}}`$, or $`e_{\mathrm{PP}}`$, corresponding to H-H, H-P, or P-P contacts, respectively (see Fig. 5). We choose these interaction parameters to satisfy the following physical constraints: 1) compact shapes have lower energies than any non-compact shapes; 2) H monomers are buried as much as possible, expressed by the relation $`e_{\mathrm{PP}}>e_{\mathrm{HP}}>e_{\mathrm{HH}}`$, which lowers the energy of configurations in which Hs are hidden from water; 3) different types of monomers tend to segregate, expressed by $`2e_{\mathrm{HP}}>e_{\mathrm{PP}}+e_{\mathrm{HH}}`$. Conditions 2) and 3) were derived from the analysis of the real protein data contained in the Miyazawa-Jernigan matrix of inter-residue contact energies between different types of amino acids. Since we consider only the compact structures, all of which have the same total number of contacts, we can freely shift and rescale the interaction energies, leaving only one free parameter. Throughout this section, we choose $`e_{\mathrm{HH}}=-2.3`$, $`e_{\mathrm{HP}}=-1`$ and $`e_{\mathrm{PP}}=0`$, which satisfy conditions 2) and 3) above. The results are insensitive to the value of $`e_{\mathrm{HH}}`$ as long as both these conditions are satisfied. (The analysis in Ref. on the interaction potential of amino acids arrived at a form
$$e_{\mu \nu }=h_\mu +h_\nu +c(\mu ,\nu ),$$
(2)
where $`h_\mu `$ is the hydrophobicity of the amino acid $`\mu `$ and $`c`$ is a small mixing term. The additive term, i.e. the hydrophobic force, dominates the potential. The choice of $`e_{\mathrm{HH}}=-2.3`$ in our study can be viewed as the result of a hydrophobic part $`-2`$ plus a small mixing part $`-0.3`$. Several authors have investigated the effect of the mixing contribution as a small perturbation to the additive potential.)
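To make the contact rule concrete, here is a minimal sketch (not code from the papers) that evaluates the contact energy of a 4-mer folded into a $`2\times 2`$ square on the square lattice, using the interaction parameters chosen above:

```python
# HP contact energies chosen in the text: e_HH = -2.3, e_HP = -1, e_PP = 0.
E = {('H', 'H'): -2.3, ('H', 'P'): -1.0, ('P', 'H'): -1.0, ('P', 'P'): 0.0}

def chain_energy(seq, coords):
    """Sum e over pairs that adjoin on the lattice but not along the chain."""
    total = 0.0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):        # skip chain neighbors
            (x1, y1), (x2, y2) = coords[i], coords[j]
            if abs(x1 - x2) + abs(y1 - y2) == 1:   # nearest lattice neighbors
                total += E[(seq[i], seq[j])]
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]          # 4-mer folded into a 2x2 square
print(chain_energy('HPPH', square))                # single H-H contact: -2.3
```

The only pair that adjoins on the lattice without being chain neighbors is (0, 3), so the sequence HPPH scores a single H-H contact and an energy of $`-2.3`$.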
We have studied the model (1) on a three-dimensional cubic lattice and on a two-dimensional square lattice. For the three-dimensional case, we analyze a chain composed of 27 monomers. We consider all the structures which form a compact $`3\times 3\times 3`$ cube. There are a total of 51,704 such structures unrelated by rotational, reflection, or reverse-labeling symmetries. For a given sequence, the ground state structure is found by calculating the energies of all compact structures. We completely enumerate the ground states of all $`2^{27}`$ possible sequences. We find that only $`4.75\%`$ of the sequences have unique ground states and thus are potential proteinlike sequences. We then calculate the designability of each compact structure. Specifically, we count the number of sequences $`N_S`$ that have a given compact structure $`S`$ as their unique ground state. We find that compact structures differ drastically in terms of their designability, $`N_S`$. There are structures that can be designed by an enormous number of sequences, and there are “poor” structures which can only be designed by a few or even no sequences. For example, the top structure can be designed by $`3,794`$ different sequences ($`N_S=3,794`$), while there are $`4,256`$ structures for which $`N_S=0`$. The number of structures having a given $`N_S`$ decreases monotonically (with small fluctuations) as $`N_S`$ increases (Fig. 6a). There is a long tail to the distribution. Structures contributing to the tail of the distribution have $`N_S\gg \overline{N_S}=61.7`$, where $`\overline{N_S}`$ is the average number. We call these structures “highly designable” structures. The distribution is very different from the Poisson distribution (also shown in Fig. 6a) that would result if the compact structures were statistically equivalent. For a Poisson distribution with a mean $`\overline{N_S}=61.7`$, the probability of finding even one structure with $`N_S>120`$ is $`1.76\times 10^{-6}`$.
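The enumeration procedure just described can be mimicked at toy scale. The sketch below (an illustration of the procedure, not the papers' calculation) uses 6-mers on a $`2\times 3`$ lattice: it enumerates all compact structures (Hamiltonian paths), groups them into classes unrelated by lattice rotations, reflections, and reverse labeling, and counts for each class the number of HP sequences whose unique ground state it is. The energy of a class for a given sequence is taken as the lower of its two chain labelings.

```python
from itertools import product

W, H = 3, 2                                   # 2x3 lattice, chain length 6
SITES = [(x, y) for x in range(W) for y in range(H)]
E = {('H', 'H'): -2.3, ('H', 'P'): -1.0, ('P', 'H'): -1.0, ('P', 'P'): 0.0}

def compact_chains():
    """All directed self-avoiding chains visiting every lattice site."""
    out = []
    def grow(path):
        if len(path) == len(SITES):
            out.append(tuple(path))
            return
        x, y = path[-1]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in SITES and nxt not in path:
                grow(path + [nxt])
    for start in SITES:
        grow([start])
    return out

def canonical(chain):
    """Representative under lattice rotations/reflections and reverse labeling."""
    images = []
    for c in (chain, chain[::-1]):                       # reverse labeling
        for fx, fy in product((False, True), repeat=2):  # flips; both = 180 deg
            images.append(tuple((W - 1 - x if fx else x, H - 1 - y if fy else y)
                                for x, y in c))
    return min(images)

def contacts(chain):
    """Monomer pairs adjacent on the lattice but not along the chain."""
    pos = {site: i for i, site in enumerate(chain)}
    cs = set()
    for (x, y), i in pos.items():
        for n in ((x + 1, y), (x, y + 1)):               # each lattice bond once
            if n in pos and abs(pos[n] - i) > 1:
                cs.add((min(i, pos[n]), max(i, pos[n])))
    return frozenset(cs)

reps = {}
for chain in compact_chains():
    reps.setdefault(canonical(chain), chain)             # one chain per class

designability = {key: 0 for key in reps}
unique_gs = {}                                           # sequence -> structure
for seq in product('HP', repeat=len(SITES)):
    energies = {}
    for key, rep in reps.items():
        energies[key] = min(sum(E[(seq[i], seq[j])] for i, j in contacts(c))
                            for c in (rep, rep[::-1]))
    ranked = sorted(energies.items(), key=lambda kv: kv[1])
    if ranked[0][1] < ranked[1][1] - 1e-9:               # strictly unique minimum
        designability[ranked[0][0]] += 1
        unique_gs[''.join(seq)] = ranked[0][0]

print(len(reps), sorted(designability.values()))
```

Even at this tiny size the machinery behaves as in the text: uniform sequences such as HHHHHH are degenerate over all structures, while sequences such as HPHPPH single out one structure class as a unique ground state.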
The highly designable structures are, on average, thermodynamically more stable than other structures. The stability of a structure can be characterized by the average energy gap $`\overline{\delta _S}`$, averaged over the $`N_S`$ sequences that design the structure. For a given sequence, the energy gap $`\delta _S`$ is defined as the minimum energy difference between the ground state energy and the energy of a different compact structure. We find that there is a marked correlation between $`N_S`$ and $`\overline{\delta _S}`$ (Fig. 6b). Highly designable structures have average gaps much larger than those of structures with small $`N_S`$, and there is a sudden jump in $`\overline{\delta _S}`$ for structures with $`N_S>N_S^c\approx 1,400`$. This jump is a result of two different kinds of excitations a ground state could have. One is to break an H-H bond and a P-P bond to form two H-P bonds, which has a (mixing) energy cost of $`2e_{\mathrm{HP}}-e_{\mathrm{HH}}-e_{\mathrm{PP}}=0.3`$. The other is to change the position of an H monomer from relatively buried to relatively exposed, so that the number of H-water bonds (the lattice sites outside the $`3\times 3\times 3`$ cube are occupied by water molecules) is increased. This kind of excitation has an energy cost of $`1`$. The jump in Fig. 6b indicates that the lowest excitations are of the first kind for $`N_S<N_S^c`$, but are a mixture of the first and the second kind for $`N_S>N_S^c`$.
A striking feature of the highly designable structures is that they exhibit certain geometrical regularities that are absent from random structures and are reminiscent of the secondary structures in natural proteins. In Fig. 7 is shown the most designable structure along with a typical random structure. We examined the compact structures with the 10 largest $`N_S`$ values and found that all have parallel running lines folded in a regular manner.
We have also studied the model on a 2D lattice. We take sequences of length 36 and fold them into compact $`6\times 6`$ structures on the square lattice. There are $`28,728`$ such structures unrelated by symmetries, including the reverse-labeling symmetry. In this case, we did not enumerate all $`2^{36}`$ sequences but randomly sampled them to the extent that the histogram of $`N_S`$’s reached a reliable distribution. Similar to the 3D case, the $`N_S`$’s have a very broad distribution (Fig. 8a). In this case the tail decays more like an exponential. The average gap also correlates positively with $`N_S`$ (Fig. 8b). Again similar to the 3D case, we observe that the highly designable structures in 2D also exhibit secondary structures. In the 2D $`6\times 6`$ case, as the surface-to-interior ratio approaches that of real proteins, the highly designable structures often have bundles of pleats and long strands, reminiscent of $`\alpha `$ helices and $`\beta `$ strands in real proteins; in addition, some of the highly designable structures have tertiary symmetries (Fig. 9).
To ensure that the above results are not an artifact of the HP model, we have studied model (1) with 20 amino acids . In this case the interaction energies $`e_{\nu _i\nu _j}`$, where $`\nu _i`$ can now be any one of the 20 amino acids, are taken from the Miyazawa-Jernigan matrix, an empirical potential between amino acids. For the 3D $`3\times 3\times 3`$ system and the 2D $`6\times 6`$ system, the total numbers of sequences are $`20^{27}`$ and $`20^{36}`$, respectively, which are impossible to enumerate. So we randomly sampled the sequence space. Similar to the case of the HP model, the $`N_S`$’s have a broad distribution in both the 3D and 2D cases. Furthermore, the $`N_S`$’s correlate well with the ones obtained from the HP model (see Fig. 10). Thus the highly designable structures in the HP model are also highly designable in the 20-letter model . With 20 amino acids, there are few sequences that have exactly degenerate ground states. For example, in the case of $`3\times 3\times 3`$ about $`96.7\%`$ of the sequences have unique ground states. However, many of these ground states are almost degenerate, in the sense that there are compact structures other than the ground state with energies very close to the ground state energy. If we require that for a ground state to be truly unique there should be no other states with energies within $`g_c`$ of the ground state energy, then the percentage of sequences that have unique ground states is reduced to about $`30\%`$ and $`8\%`$ for $`g_c=0.4k_BT`$ and $`g_c=0.8k_BT`$, respectively.
## 3 A Geometrical Interpretation
A number of questions arise: Among the large number of structures, why are some structures highly designable? Why does designability also guarantee thermodynamic stability? Why do highly designable structures have geometrical regularities and even symmetries? In this section we address these questions by using a geometrical formulation of the protein folding problem .
As we have mentioned before, the dominant driving force for protein folding is the hydrophobicity, i.e. the tendency for hydrophobic amino acids to hide away from water. To model only the hydrophobic force in protein folding, one can assign parameters $`h_\nu `$ to characterize the hydrophobicities of each of the 20 amino acids . Each sequence of amino acids then has an associated vector $`𝐡=(h_{\nu _1},h_{\nu _2},\mathrm{},h_{\nu _i},\mathrm{},h_{\nu _N})`$, where $`\nu _i`$ specifies the amino acid at position $`i`$ of the sequence. The energy of a sequence folded into a particular structure is taken to be the sum of the contributions from each amino acid upon burial away from water:
$$H=-\underset{i=1}{\overset{N}{\sum }}s_ih_{\nu _i},$$
(3)
where $`s_i`$ is a structure-dependent number characterizing the degree of burial of the $`i`$-th amino acid in the chain. Eq. (3) is essentially a solvation model at the residue level and can also be obtained by taking the mixing term of Eq. (2) to zero .
To simplify the discussion, let us consider only compact structures and let $`s_i`$ take only two values: 0 and 1, depending on whether the amino acid is on the surface or in the core of the structure, respectively. Therefore, each compact structure can be represented by a string $`\{s_i\}`$ of 0s and 1s: $`s_i=0`$ if the $`i`$-th amino acid is on the surface and $`s_i=1`$ if it is in the core (see Fig. 11a for an example on a lattice). Let us simplify further by using only two amino acids: $`\nu _i=\mathrm{H}`$ or P, and let $`h_\mathrm{H}=1`$ and $`h_\mathrm{P}=0`$. Thus a sequence $`\{\nu _i\}`$ is also mapped into a string $`\{\sigma _i\}`$ of 0s and 1s: $`\sigma _i=1`$ if $`\nu _i=\mathrm{H}`$ and $`\sigma _i=0`$ if $`\nu _i=\mathrm{P}`$. Let us call this model the PH (Purely Hydrophobic) model. Assuming every compact structure of a given size has the same numbers of surface and core sites, and noting that the term $`\sum _i\sigma _i^2`$ is a constant for a fixed sequence of amino acids and does not play any role in determining the relative energies of structures folded by the sequence, Eq. (3) is then equivalent to :
$$H=\underset{i=1}{\overset{N}{\sum }}(\sigma _i-s_i)^2.$$
(4)
Therefore, the energy for a sequence $`\stackrel{}{\sigma }=\{\sigma _i\}`$ folded onto a structure $`\stackrel{}{s}=\{s_i\}`$ is simply the distance squared (or the Hamming distance in the case where both $`\{\sigma _i\}`$ and $`\{s_i\}`$ are strings of 0s and 1s) between the two vectors $`\stackrel{}{\sigma }`$ and $`\stackrel{}{s}`$.
We can now formulate the designability question geometrically. We have two ensembles or spaces: one being all the sequences $`\{\stackrel{}{\sigma }\}`$ and the other all the structures $`\{\stackrel{}{s}\}`$. Both are represented by $`N`$-dimensional points or vectors where $`N`$ is the length of the chain. The points of all the sequences are trivially distributed in the $`N`$-dimensional space. In the case of the PH model, the points representing sequences are all the vertices of an $`N`$-dimensional hypercube (all possible $`2^N`$ strings of 0s and 1s of length $`N`$). On the other hand, the points representing all the structures $`\{\stackrel{}{s}\}`$ have a very different distribution in the $`N`$-dimensional space. The $`\stackrel{}{s}`$’s are constrained and correlated. For example, in the case of the PH model where $`s_i=0`$ or 1, not every string of 0s and 1s actually represents a structure. In fact, only a very small fraction of the $`2^N`$ strings of 0s and 1s correspond to structures. If we consider only compact structures where $`\sum _is_i=n_c`$ with $`n_c`$ the number of core sites, then the structure vectors $`\{\stackrel{}{s}\}`$ cover only a small fraction of the vertices of a hyperplane in the $`N`$-dimensional hypercube.
Now imagine putting all the sequences $`\{\stackrel{}{\sigma }\}`$ and all the structures $`\{\stackrel{}{s}\}`$ together in the $`N`$-dimensional space (see Fig. 12 for a schematic illustration). (In a more general case it would be simplest to picture if one normalizes $`\{\stackrel{}{h}\}`$ and $`\{\stackrel{}{s}\}`$ so that $`0\le h_i,s_i\le 1`$.) From Eq. (4), it is evident that a sequence will have a structure as its unique ground state if and only if the sequence is closer (measured by the distance defined by Eq. (4)) to the structure than to any other structures. Therefore, the set of all sequences $`\{\stackrel{}{\sigma }(\stackrel{}{s})\}`$ that uniquely design a structure $`\stackrel{}{s}`$ can be found by the following geometrical construction: Draw bisector planes between $`\stackrel{}{s}`$ and all of its neighboring structures in the $`N`$-dimensional space (see Fig. 12). The volume enclosed by these planes is called the Voronoi polytope around $`\stackrel{}{s}`$. $`\{\stackrel{}{\sigma }(\stackrel{}{s})\}`$ then consists of all sequences within the Voronoi polytope. Hence, the designabilities of structures are directly related to the distribution of the structures in the $`N`$-dimensional space. A structure closely surrounded by many neighbors will have a small Voronoi polytope and hence a low designability; while a structure far away from others will have a large Voronoi polytope and hence a high designability. Furthermore, the thermodynamic stability of a folded structure is directly related to the size of its Voronoi polytope. For a sequence $`\stackrel{}{\sigma }`$, the energy gap between the ground state and an excited state is the difference of the squared distances between $`\stackrel{}{\sigma }`$ and the two states (Eq. (4)). A larger Voronoi polytope implies, on average, a larger gap as excited states can only lie outside of the Voronoi polytope of the ground state.
Thus, this geometrical representation of the problem naturally explains the positive correlation between the thermodynamic stability and the designability.
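The Voronoi construction can be made concrete with a tiny enumeration. The sketch below (Python; the four structure strings are hypothetical stand-ins, not real lattice conformations) treats the energy of Eq. (4) as a Hamming distance and counts, for each structure string, the sequences that have it as a unique ground state:

```python
from itertools import product

# Toy version of the nearest-structure rule of Eq. (4): sequences and
# structure strings are binary strings, and the energy of a sequence folded
# onto a structure is their Hamming distance.  The four structure strings
# below are hypothetical; "1111" is chosen to lie far from the other three,
# i.e. to have a large Voronoi polytope.
structures = ["0000", "0001", "0010", "1111"]

def energy(sigma, s):
    return sum(a != b for a, b in zip(sigma, s))

designability = {s: 0 for s in structures}
for bits in product("01", repeat=4):
    sigma = "".join(bits)
    E = [energy(sigma, s) for s in structures]
    if E.count(min(E)) == 1:            # count only unique ground states
        designability[structures[E.index(min(E))]] += 1

print(designability)  # {'0000': 3, '0001': 3, '0010': 3, '1111': 5}
```

The isolated string collects 5 of the 16 sequences and the three mutually close strings 3 each; the remaining two sequences have degenerate ground states and design nothing, mirroring the Voronoi picture described above.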
As a concrete example, we have studied a 2D PH model of $`6\times 6`$ . For each compact structure, we divide the 36 sites into 16 core sites and 20 surface sites (see Fig. 11a). Among the $`28,728`$ compact structures unrelated by symmetries, there are $`119`$ that are reverse-labeling symmetric. (For a reverse-labeling symmetric structure, $`s_i=s_{N+1-i}`$.) So the total number of structures a sequence can fold onto is $`(28,728-119)\times 2+119=57,337`$, which map into $`30,408`$ distinct strings. There are cases in which two or more structures map into the same string. We call these structures degenerate structures, and a degenerate structure can not be the unique ground state for any sequence in the PH model. Out of the $`28,728`$ structures, there are $`9,141`$ nondegenerate structures (or $`18,213`$ out of $`57,337`$). A histogram for the designability of nondegenerate structures is obtained by sampling the sequence space using 19,492,200 randomly chosen sequences and is shown in Fig. 11b. The set of highly designable structures is essentially the same as the one obtained from the HP model discussed in the previous section. To further probe how structure vectors are distributed in the $`N`$-dimensional space, we measure the number of structures, $`n_\stackrel{}{s}(d)`$, at a Hamming distance $`d`$ from a given structure $`\stackrel{}{s}`$. Note that all the $`57,337`$ structures are distributed on the vertices of the hyperplane defined by $`\sum _is_i=16`$. There are a total of $`C_{36}^{16}=7,307,872,110`$ vertices in the hyperplane. If the structure vectors were distributed uniformly on these vertices, $`n_\stackrel{}{s}(d)`$ would be the same for all structures and would be: $`n^0(d)=\rho N(d)`$, where $`\rho =57,337/7,307,872,110`$ is the average density of structures on the hyperplane and $`N(d)=C_{16}^{d/2}C_{20}^{d/2}`$ is the number of vertices at distance $`d`$ from a given vertex. In Fig. 13a, $`n_\stackrel{}{s}(d)`$ is plotted for three different structures with low, intermediate, and high designabilities, respectively, along with $`n^0(d)`$. We see that a highly designable structure typically has fewer neighbors than a less designable structure, not only at the smallest $`d`$s but out to $`d`$s of order 10-12. Also, $`n_\stackrel{}{s}(d)`$ is considerably larger than $`n^0(d)`$ for small $`d`$ for structures with low designability. These results indicate that the structures are very nonuniformly distributed and are clustered: there are highly populated regions and lowly populated regions. A quantitative measure of the clustering environment around a structure is the second moment of $`n_\stackrel{}{s}(d)`$,
$$\gamma ^2(\stackrel{}{s})=\langle d^2\rangle -\langle d\rangle ^2=4\underset{ij}{\sum }s_is_jc_{ij},$$
(5)
where
$$c_{ij}=\langle s_is_j\rangle -\langle s_i\rangle \langle s_j\rangle $$
(6)
and $`\langle \cdot \rangle `$ denotes an average over all structures. In Fig. 14a we plot the designability $`N_S`$ of a structure vs. its $`\gamma `$. We see that while a larger $`N_S`$ implies a smaller $`\gamma `$, the converse is not true. This is because $`N_S`$ is very sensitive to the local environment at small $`d`$s while $`\gamma `$ is more a global measure.
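The identity in Eqs. (5)-(6) can be checked directly on a small artificial ensemble. In the sketch below (Python with NumPy) the "structures" are simply all length-4 binary strings with two core sites, which is illustrative only; the variance of the Hamming distance from a fixed string to the ensemble reproduces $`4\sum _{ij}s_is_jc_{ij}`$:

```python
import itertools
import numpy as np

# All length-4 binary strings with exactly two core (1) sites.  Real
# structure strings occupy only a sparse subset of such strings; this toy
# set merely exercises the algebra of Eqs. (5)-(6).
S = np.array([s for s in itertools.product([0, 1], repeat=4) if sum(s) == 2])

c = np.cov(S.T, bias=True)            # c_ij = <s_i s_j> - <s_i><s_j>, Eq. (6)
s0 = S[0]                             # the structure string (0, 0, 1, 1)

gamma2_formula = 4.0 * s0 @ c @ s0    # Eq. (5)
d = ((S - s0) ** 2).sum(axis=1)       # Hamming distances to the ensemble
gamma2_direct = d.var()               # <d^2> - <d>^2

assert abs(gamma2_formula - gamma2_direct) < 1e-12   # both equal 4/3 here
```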
What are the geometrical characteristics of the structures in the highly populated regions and lowly populated regions, respectively? This is something we are very interested in but know very little about. Naively, the structures in the highly populated regions are typical random structures which can be easily transformed from one to another by small local changes. On the other hand, structures in lowly populated regions are “atypical” structures which tend to be more regular and “rigid”. They have fewer neighbors so it is harder to transform them to other structures with only small rearrangements. One geometrical feature of highly designable structures is that they have more surface-to-core transitions along the backbone, i.e. there are more transitions between 0s and 1s in the structure string for a highly designable structure than average . We found a good correlation between the number of surface-core transitions in a structure string $`\stackrel{}{s}`$, $`T(\stackrel{}{s})`$, and $`\gamma (\stackrel{}{s})`$ (Fig. 13b). Thus, a necessary condition for a structure to be highly designable is to have a small $`\gamma `$ or a large $`T`$.
A great advantage of the PH model is that it is simple enough to test some ideas immediately. Two quantities often used to characterize structures are the energy spectra $`𝒩(E,\stackrel{}{s})`$ and $`𝒩(E,\stackrel{}{s},C)`$ . The first one is the energy spectrum of a given structure, $`\stackrel{}{s}`$ over all sequences, $`\{\stackrel{}{\sigma }\}`$:
$$𝒩(E,\stackrel{}{s})=\underset{\{\stackrel{}{\sigma }\}}{\sum }\delta [H(\stackrel{}{\sigma },\stackrel{}{s})-E].$$
(7)
The second one is over all sequences of a fixed composition $`C`$ (e.g. fixed numbers of H-mers and P-mers in the case of two-letter code), $`\{\stackrel{}{\sigma }\}_C`$:
$$𝒩(E,\stackrel{}{s},C)=\underset{\{\stackrel{}{\sigma }\}_C}{\sum }\delta [H(\stackrel{}{\sigma },\stackrel{}{s})-E].$$
(8)
It is easy to see that if two structure strings $`\{s_i\}`$ and $`\{s_i^{\prime }\}`$ are related by a permutation, i.e. $`s_i=s_{k_i}^{\prime }`$, for $`i=1,2,\mathrm{},N`$, where $`k_1,k_2,\mathrm{},k_N`$ is a permutation of $`1,2,\mathrm{},N`$, then $`𝒩(E,\stackrel{}{s})=𝒩(E,\stackrel{}{s}^{\prime })`$ and $`𝒩(E,\stackrel{}{s},C)=𝒩(E,\stackrel{}{s}^{\prime },C)`$. Thus all maximally compact structures have the same energy spectra Eqs. (7) and (8). Therefore, structures differ in designability not because they have different energy spectra Eqs. (7) and (8) , but because they have different neighborhoods in the structure space.
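This invariance is quick to confirm in the Hamming-distance form of the model (Eq. 4). In the check below (Python; the strings are arbitrary examples), two structure strings with equal core counts, which for binary strings is the same as being related by a permutation, give identical spectra both over all sequences and at fixed composition, while a string with a different core count does not:

```python
from itertools import product
from collections import Counter

# Energy spectra of Eqs. (7)-(8) for the Hamming-distance model of Eq. (4).
def spectrum(s, seqs):
    return Counter(sum(a != b for a, b in zip(sigma, s)) for sigma in seqs)

all_seqs = ["".join(b) for b in product("01", repeat=6)]
fixed_C = [q for q in all_seqs if q.count("1") == 3]   # fixed composition

s1, s2, s3 = "001011", "111000", "110000"   # s1, s2 permuted; s3 differs

assert spectrum(s1, all_seqs) == spectrum(s2, all_seqs)  # N(E, s), Eq. (7)
assert spectrum(s1, fixed_C) == spectrum(s2, fixed_C)    # N(E, s, C), Eq. (8)
assert spectrum(s1, fixed_C) != spectrum(s3, fixed_C)    # core count matters
```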
## 4 Folding Dynamics and Thermodynamic Stability
Will highly designable structures also fold relatively fast? This question is addressed in detail in Ref. (see also Ref. ). A quantity often used to measure how much a sequence is “proteinlike” is the $`Z`$ score,
$$Z=\frac{\mathrm{\Delta }}{\mathrm{\Gamma }},$$
(9)
where $`\mathrm{\Delta }`$ is the average energy difference between the ground state and all other states and $`\mathrm{\Gamma }`$ is the standard deviation of the energy spectrum. The $`Z`$ score was first introduced in the inverse folding problem and later used in protein design . It has been shown in the context of the Random Energy Model that the $`Z`$ score is related to $`T_f/T_g`$, where $`T_f`$ is the folding temperature and $`T_g`$ the glass transition temperature . We have found a good, negative correlation between the folding time and the $`Z`$ score of the compact structure energy spectrum . In the context of the PH model (3), for a sequence $`\stackrel{}{h}`$ and its ground state $`\stackrel{}{s}`$,
$`\mathrm{\Delta }`$ $`=`$ $`{\displaystyle \sum _i}h_i(s_i-\langle s_i\rangle ),`$ (10)
$`\mathrm{\Gamma }`$ $`=`$ $`\sqrt{{\displaystyle \sum _{ij}}h_ih_jc_{ij}},`$ (11)
where $`c_{ij}`$ is given by Eq. (6). So in principle, for every structure $`\stackrel{}{s}`$ one can maximize the $`Z`$ score with respect to $`\stackrel{}{h}`$ to get the “best” or “ideal” sequence for $`\stackrel{}{s}`$ that gives the highest $`Z`$ score, $`Z_S`$. It is, however, much easier to obtain a lower bound $`Z_S^{\prime }`$ for $`Z_S`$ by letting $`\stackrel{}{h}=\stackrel{}{s}`$: $`Z_S^{\prime }=\mathrm{\Delta }^{\prime }/\mathrm{\Gamma }^{\prime }`$ with
$`\mathrm{\Delta }^{\prime }`$ $`=`$ $`{\displaystyle \sum _i}(s_i^2-s_i\langle s_i\rangle ),`$ (12)
$`\mathrm{\Gamma }^{\prime }`$ $`=`$ $`\gamma /2,`$ (13)
where $`\gamma `$ is given by Eq. (5). In Fig. 14b, $`\mathrm{\Delta }^{\prime }`$ for all the $`6\times 6`$ compact structures is plotted against $`N_S`$ for the PH model. There is little if any correlation between $`N_S`$ and $`\mathrm{\Delta }^{\prime }`$ for the $`6\times 6`$ PH model. Thus, correlations between $`N_S`$ and $`Z^{\prime }`$ in this model come mainly from the one between $`N_S`$ and $`\mathrm{\Gamma }^{\prime }=\gamma /2`$ (Fig. 14a). So a large $`Z^{\prime }`$ is a necessary but not sufficient condition for a structure to have a large $`N_S`$.
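On the same kind of toy ensemble, the lower bound $`Z_S^{\prime }`$ takes only a few lines to evaluate (Python with NumPy; the ensemble of fixed-core-count binary strings is again illustrative rather than a set of real structures):

```python
import itertools
import numpy as np

# Lower bound Z'_S = Delta'/Gamma' of Eqs. (12)-(13), obtained by setting
# h = s, on the toy ensemble of length-4 strings with two core sites.
S = np.array([s for s in itertools.product([0, 1], repeat=4) if sum(s) == 2],
             dtype=float)
c = np.cov(S.T, bias=True)            # c_ij of Eq. (6)
mean_s = S.mean(axis=0)

s = S[0]                              # the structure string (0, 0, 1, 1)
delta_p = np.sum(s**2 - s * mean_s)   # Delta', Eq. (12)
gamma = np.sqrt(4.0 * s @ c @ s)      # gamma of Eq. (5)
gamma_p = gamma / 2.0                 # Gamma', Eq. (13)
z_p = delta_p / gamma_p               # the bound Z'_S

assert abs(delta_p - 1.0) < 1e-12
assert abs(z_p - np.sqrt(3.0)) < 1e-12
```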
## 5 Summary
We have demonstrated with simple models that structures are very different in terms of their designability and that high designability leads to thermodynamic stability, “proteinlike” structural motifs, and foldability. Highly designable structures emerge because of an asymmetry between the sequence and the structure ensembles. Our results are rather robust and have been demonstrated recently in larger lattice models and in off-lattice models . A broad distribution of designability has also been found in RNA secondary structures . However, the set of all sequences designing a good structure, instead of forming a compact “Voronoi polytope” as in proteins, forms a “neutral network” percolating the entire space . It would be interesting to study the similarities and differences of the two systems. Finally, our picture indicates that the properties of the proteinlike sequences are intimately coupled to those of the proteinlike (i.e., the highly designable) structures; the picture unifies various aspects of the two special ensembles. It also suggests that understanding the emergence and properties of the highly designable structures is a key to the protein folding problem.
This work was done in collaboration with Hao Li, Ned Wingreen, Régis Mélin, Robert Helling, and Jonathan Miller. I am grateful to Jeannie Chen for her critical reading of the manuscript. |
## 1. Introduction
X-ray illumination of an accretion disk surface is a problem of general astrophysical interest. Since the X-ray heating of the disk atmosphere changes energy and ionization balances there, the spectra emitted by X-ray illuminated accretion disks in any wavelength may be quite different from those resulting from non-illuminated disks. It has been known for many years (e.g., Basko, Sunyaev, & Titarchuk 1974) that X-ray illumination leads to formation of a hot (and often completely ionized) X-ray “skin” above the illuminated material. Unfortunately, due to numerical difficulties, most previous studies of X-ray illumination had to rely on a constant density assumption for the illuminated gas (e.g., Ross & Fabian 1993; $`\dot{\mathrm{Z}}`$ycki et al. 1994; Matt, Fabian & Ross 1993; 1996; Ross, Fabian & Brandt 1996), in which case the completely ionized skin forms only for very large ionization parameters (e.g., Ross, Fabian & Young 1999).
Recently, Nayakshin, Kazanas & Kallman (1999; hereafter paper I) have shown that if the assumption of the constant density is relaxed, then the temperature and ionization structure of the illuminated material is determined by the thermal ionization instability, and that the X-ray heated skin always forms on the top of the disk (see also Raymond 1993; Ko & Kallman 1994; Ró$`\dot{\mathrm{z}}`$ańska & Czerny 1996). Due to the presence of the hot skin, the resulting reflected spectra are quite different from those obtained with the constant density assumption.
Unfortunately, computations similar to those reported in paper I are rather numerically involved and time consuming and thus are not readily performed. In this Letter, we present an approximate expression for the Thomson depth of the hot skin which allows one to qualitatively understand effects of the thermal instability on the reflected spectra. As an example, we apply our approximate expression to the observed X-ray Baldwin effect (e.g., Nandra et al. 1997b) in the geometry of a cold accretion disk illuminated by an X-ray source situated at some height above the black hole.
## 2. Thomson Depth of the Skin
We assume that X-rays are incident on the surface of an accretion disk whose structure is given by the standard accretion disk theory (e.g., SS73). Let us define the hot skin as the region of the gas where the temperature significantly exceeds the effective temperature (see, e.g., Fig. 3b below). As explained in paper I, even Fe atoms are completely ionized in the X-ray skin if the X-ray flux, $`F_\mathrm{x}`$, is comparable with or larger than the disk intrinsic flux, $`F_d`$, and if the illuminating X-rays have relatively hard spectra, i.e., photon index $`\mathrm{\Gamma }\stackrel{<}{}2`$. Under those conditions, the Compton temperature, $`T_c`$, is close to a few keV. Therefore, we can neglect all the atomic processes and only consider Compton scattering and bremsstrahlung emission in our analytical study of the X-ray skin.
The Thomson depth of the hot layer, $`\tau _1`$, is obtained by integrating $`d\tau _1=\sigma _Tn_e(z)dz`$ from $`z=z_b`$ to infinity, where $`n_e(z)`$ is the electron density; $`z`$ is the vertical coordinate; $`z_b\stackrel{>}{}H`$ gives the location of the bottom of the ionized skin; and $`H`$ is the disk scale height. Note that a simpler estimate of $`\tau _1`$ can be obtained if one assumes that the gas temperature is equal to the Compton temperature and that the density follows a Gaussian law (see Kallman & White 1989).
In our calculations of $`\tau _1`$, we will neglect the reprocessing of the radiation as it penetrates through the skin. This limits the applicability of our results to $`\tau _1\stackrel{<}{}1`$ if the angle, $`\theta `$, that the X-ray radiation makes with the normal to the disk is not too large, i.e., if $`\mu _i\equiv \mathrm{cos}\theta `$ is not too small. In the case when $`\mu _i\ll 1`$, the spectral reprocessing cannot be neglected for $`\tau _1\stackrel{>}{}\mu _i`$. For simplicity, we will assume $`\mu _i\stackrel{<}{}1`$ and postpone the treatment of large incident angles to a future publication.
If $`P_{\mathrm{bot}}`$ is the gas pressure at $`z=z_b`$, then we can define the dimensionless gas pressure as $`P_{*}\equiv P/P_{\mathrm{bot}}`$, where $`P=(\rho /\mu )kT`$ and $`\rho `$ is the gas density, and also the dimensionless gas temperature as $`T_{*}\equiv T/T_c`$ ($`T_c`$ is the Compton temperature, see below). The Compton-heated skin then has $`P_{*}<1`$, whereas the colder material below the skin has $`P_{*}>1`$. As shown in paper I, the incident X-rays do not affect the hydrostatic balance in the ionized skin because the main source of opacity is Compton scattering (see Fig. 5b in paper I and also Sincell & Krolik 1997). With this, one can re-write the hydrostatic balance equation in terms of dimensionless variables as
$$\frac{dP_{*}}{dx}=-2\frac{P_{*}}{T_{*}}\frac{x-\zeta }{\lambda ^2},$$
(1)
where $`x\equiv z/H`$, $`\zeta `$ is the ratio of the disk midplane radiation pressure to the total pressure, $`\zeta =P_{\mathrm{rad}}(0)/(P_{\mathrm{rad}}(0)+P_{\mathrm{gas}}(0))`$, and $`\lambda `$ is the scale-height of the skin in units of $`H`$: $`\lambda ^2=2kT_cR^3/(GM\mu H^2)`$. Expressing the electron density as $`n_e=\rho /\mu _e`$, one arrives at
$$\tau _1=\frac{\mu }{\mu _e}\frac{cP_{\mathrm{bot}}}{F_\mathrm{x}}\frac{l_x}{\theta _c}\lambda \int _{y_b}^{\infty }\frac{P_{*}}{T_{*}}dy\equiv \tau _0W(y_b),$$
(2)
where $`l_x`$ is the compactness parameter of the illuminating X-rays (see, e.g., equation 20 in paper I); $`\theta _c\equiv kT_c/m_ec^2`$; we also defined $`y\equiv (x-\zeta )/\lambda `$ and $`y_b=(z_b/H-\zeta )/\lambda `$; finally, $`\tau _0`$ is the expression preceding the integral sign. The integral in equation (2) is designated $`W(y_b)`$ and often (see below) turns out to be of order unity. Krolik, McKee & Tarter (1981, §IV2b; KMT hereafter) showed that for a completely ionized gas the energy balance equation can be written as
$$T_{*}^2+T_{*}^{1/2}\mathrm{\Xi }_{*}^{-1}-T_{*}=0,$$
(3)
where $`\mathrm{\Xi }_{*}\equiv \mathrm{\Xi }/\mathrm{\Xi }_{ic}`$ is the pressure ionization parameter normalized by the “inverse Compton” ionization parameter $`\mathrm{\Xi }_{ic}`$, which is given by equation (4.5) of KMT, re-written in order to take into account the difference in definitions of $`\mathrm{\Xi }=F_\mathrm{x}/cP`$ (the one used here) and $`\mathrm{\Xi }=F_{ion}/n_HkT`$ (KMT; $`F_{ion}`$ is the X-ray flux between 1 and 10<sup>3</sup> Ry): $`\mathrm{\Xi }_{ic}=0.47T_8^{-3/2}`$, where $`T_8`$ is the Compton temperature in units of $`10^8`$ Kelvin.
The solutions presented in paper I possess the property that the transition from the Compton-heated to cooler layers occurs at the upper bend of the S-curve (e.g., point (c) in Fig. 1 of paper I). At that point, $`dT/d\mathrm{\Xi }=\infty `$. Using this condition, one can show that the transition happens at $`T_{*}=1/3`$, where $`\mathrm{\Xi }=\frac{3^{3/2}}{2}\mathrm{\Xi }_{ic}`$. Therefore,
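Both statements can be checked numerically: on the ionized branch Eq. (3) gives $`1/\mathrm{\Xi }_{*}=T_{*}^{1/2}-T_{*}^{3/2}`$, whose maximum marks the upper bend, and the numerical coefficient of equation (4) then follows from $`\mathrm{\Xi }_{ic}=0.47T_8^{-3/2}`$. A quick check (Python with NumPy):

```python
import numpy as np

# Energy balance (Eq. 3) on the ionized branch: 1/Xi_* = T_*^(1/2) - T_*^(3/2).
# The upper bend of the S-curve (dT/dXi -> infinity) is the maximum of 1/Xi_*.
T = np.linspace(1e-4, 1.0, 400_001)
inv_xi = np.sqrt(T) - T**1.5
i = inv_xi.argmax()

assert abs(T[i] - 1.0 / 3.0) < 1e-3                 # turning point T_* = 1/3
assert abs(1.0 / inv_xi[i] - 3.0**1.5 / 2.0) < 1e-3  # Xi_* = 3*sqrt(3)/2

# Coefficient in Eq. (4): P_bot = (2/(3*sqrt(3))) * Xi_ic^{-1} * F_x/c, with
# Xi_ic = 0.47 * T8^{-3/2} and T8 the Compton temperature in 1e8 K.
k_keV_per_K = 8.617e-8                              # Boltzmann constant
T8 = 1.0 / (k_keV_per_K * 1e8)                      # T8 for kT_c = 1 keV
coef = (2.0 / (3.0 * np.sqrt(3.0))) / 0.47 * T8**1.5
assert abs(coef - 3.2e-2) < 1e-3                    # the 3.2e-2 of Eq. (4)
```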
$`P_{\mathrm{bot}}={\displaystyle \frac{2}{3\sqrt{3}}}\mathrm{\Xi }_{ic}^{-1}{\displaystyle \frac{F_\mathrm{x}}{c}}=3.2\times 10^{-2}T_1^{3/2}{\displaystyle \frac{F_\mathrm{x}}{c}}`$ (4)
$`\tau _1\simeq 6.9T_1{\displaystyle \frac{F_\mathrm{x}}{F_d}}G^{1/2}(r)\dot{m}(1-f)`$ (5)
$`G(r)\equiv {\displaystyle \frac{2^{16}}{27}}\left[1-(3/r)^{1/2}\right]^2r^{-3},`$
where $`T_1=kT_c/1`$ keV; $`r=R/R_s`$, $`R_s=2GM/c^2`$ is the Schwarzschild radius; $`\dot{m}`$ is the dimensionless accretion rate ($`\dot{m}=1`$ corresponds to the Eddington luminosity for the accretion disk); and $`0\le f<1`$ is the coronal dissipation parameter (e.g., Svensson & Zdziarski 1994). The maximum of the function $`G(r)`$ occurs at $`r=16/3`$, where it is equal to unity. We also approximated $`\mu _e=m_p`$, $`\mu =m_p/2`$, and $`W(y_b)\simeq W(0)=1.23`$ for the following reason. If the vertical pressure profile is given by $`P(z)=P(0)\mathrm{exp}[-(z/H)^2]`$ for the gas-dominated disk, the location of the temperature discontinuity is
$$z_b=H\mathrm{ln}^{1/2}\left[\frac{P(0)}{P_{\mathrm{bot}}}\right]$$
(6)
The function $`\mathrm{ln}^{1/2}(t)`$ is a very slow, monotonically increasing one: for example, its value is $`1.52`$ for $`t=10`$ and $`3.72`$ for $`t=10^6`$. Therefore, for most realistic situations, $`z_b`$ is not very much larger than $`H`$. As is easy to check, $`\lambda \gg 1`$ for gas-dominated disks, and thus $`y_b\ll 1`$. For radiation-pressure dominated disks, $`z_b`$ is nearly equal to $`H`$ (see §3.4 in paper I), and hence $`y_b=(1-\zeta )/\lambda \ll 1`$ in this case as well. To summarize this statement in words, the vertical extent of the ionized skin is always large enough that the exact location of the inner boundary is unimportant, for either gas- or radiation-dominated disks.
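The quoted value $`W(0)=1.23`$ can be reproduced by integrating equation (1) together with the ionized-branch energy balance of equation (3), taking $`\mathrm{\Xi }_{*}`$ inversely proportional to the dimensionless pressure and anchored at $`3\sqrt{3}/2`$ at the bottom of the skin. The sketch below (Python, plain explicit Euler stepping, so only a rough check) lands close to that value:

```python
SQRT3 = 3 ** 0.5

def t_star(p_star):
    # Invert the ionized-branch energy balance (Eq. 3):
    # T*^(1/2) - T*^(3/2) = (2/(3*sqrt(3))) * P*, with T* in [1/3, 1];
    # the left-hand side decreases monotonically with T* on this branch.
    target = p_star * 2.0 / (3.0 * SQRT3)
    lo, hi = 1.0 / 3.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if mid ** 0.5 - mid ** 1.5 > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Explicit Euler integration of the hydrostatic balance (Eq. 1) upward from
# the bottom of the skin (y = 0, P* = 1, Xi* = 3*sqrt(3)/2, Xi* ~ 1/P*),
# accumulating the integral W = int (P*/T*) dy that appears in Eq. (2).
dy, y, P, W = 1e-4, 0.0, 1.0, 0.0
while y < 6.0 and P > 1e-12:
    T = t_star(P)
    W += (P / T) * dy
    P -= 2.0 * (P / T) * y * dy
    y += dy
# W evaluates close to the quoted W(0) = 1.23
```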
A cautionary note is in place. So far we have assumed that the spectrum of the ionizing radiation does not change with depth into the illuminated material. This is true only for optically thin situations, i.e., when $`\tau _1\stackrel{<}{}1`$, but for larger optical depths the Compton temperature is smaller than the one corresponding to $`\tau _1=0`$. The decrease in the local value of $`T_1`$ dictates a corresponding decrease in the value of $`P_{\mathrm{bot}}`$ according to equation (4), and therefore the transition to the cold equilibrium solution will always happen at a smaller $`\tau _1`$ than equation (5) predicts. To reflect this fact, Kallman & White (1989) introduced $`\tau _{\mathrm{Th}\mathrm{crit}}\simeq 1`$–$`10`$, defined to be the maximum of $`\tau _1`$. Note that the behavior of $`\tau _\mathrm{h}`$ with Compton temperature, X-ray flux and radius given by equation (5) coincides with that obtained by these authors ($`T_{\mathrm{IC8}}`$ on the second line of their equation should be in power $`+1`$ rather than $`-1/2`$ \[T. Kallman 1999, private communication\]).
With the understanding that for $`\tau _1\stackrel{>}{}1`$ our analysis may not be expected to be very accurate, we choose to fix $`\tau _{\mathrm{Th}\mathrm{crit}}`$ at a value of 3. This number is motivated only by convenience: in paper I, due to the large volume of calculations, we limited the Thomson depth of the illuminated gas that can be strongly ionized to $`\tau _T=3`$. This choice did not lead to any spurious results because the reprocessing features (from the cold material below the skin) become negligible already at $`\tau _T=2`$ (see paper I). In this paper, we use the following simple modification to the Thomson depth of the hot skin:
$$\tau _\mathrm{h}=\frac{\tau _1}{1+\tau _1/3}.$$
(7)
We can now compare our results with the values of $`\tau _\mathrm{h}`$ numerically calculated in paper I. There, we conducted two types of tests. For radiation-dominated disks, we used the fully self-consistent formalism, and thus we can directly compare the results of §5 in paper I with the results of equations (5) and (7). For the purposes of isolating the dependence of the reflected spectra on one parameter at a time and yet covering a large parameter space, we also conducted several tests where we artificially varied the “gravity parameter $`A`$” (see §4 in paper I). In order to compare the results of these latter tests, one should use the same approach that led us to equation (5), and also use the same gravity law as used in paper I (note that in our current formulation, $`A=4\theta _cl_x^{-1}\lambda ^{-2}`$). This yields $`\tau _1=0.73T_1(l_x/A)^{1/2}`$. Further, since $`\lambda `$ is now expressed as $`\lambda ^2=4\theta _cA^{-1}l_x^{-1}`$, the value of $`y_b`$ is not necessarily small, and thus $`W(y_b)`$ needs to be evaluated exactly. Finally, to locate the position of the lower boundary of the illuminated layer (which occurs almost at the same height as the temperature discontinuity – see Fig. 5d in paper I for an example), we must substitute for $`P_{\mathrm{bot}}`$ in equation (6) the value of the gas pressure at Thomson depth $`\tau _{\mathrm{max}}=4`$, as used in paper I, which is approximately equal to $`\tau _{\mathrm{max}}AF_\mathrm{x}/c=4AF_\mathrm{x}/c`$.
The results of such a comparison are summarized in Figure (1), where we show $`\tau _\mathrm{h}`$ obtained in this paper versus that obtained numerically in paper I. The value of the Compton temperature (i.e., $`T_1`$ in eq. 5) is taken to be the temperature of the first zone in the temperature profiles shown in the corresponding Figures in paper I. We note that the deviation of our approximate expression from “exact” results is less than $`20`$% even though we covered a wide range of physical conditions, i.e., strong to weak illumination limits, different indices of the incident X-ray radiation, and the disk itself is either gas or radiation dominated.
## 3. “Lamp Post” Model
As an application of our methods, we choose to analyze the model where the X-ray source is located at some height $`h_x`$ above the black hole (the “lamp post model” hereafter). Iwasawa et al. (1996) reported observations of iron line variability in the Seyfert galaxy MCG-6-30-15, and pointed out several problems connected with the theoretical interpretation of these observations. Reynolds & Begelman (1999) showed that the accretion flow within the innermost stable radius may be optically thick and thus produce fluorescent iron line emission in addition to such emission from the disk itself. These authors argued that the line emission from within the innermost stable radius may be important for the interpretation of the observations of Iwasawa et al. (see also Dabrowski et al. 1997). Reynolds et al. (1999) studied the response of the iron line profiles to changes in the X-ray flux (iron line reverberation). In this paper we will not discuss the region within the innermost stable orbit, for the reason that the properties of the accretion flow there are not well constrained. In particular, the hydrostatic and energy balance equations do not necessarily apply, since the hydrostatic and thermal time scales can be longer than the in-fall time.
We will assume a non-rotating black hole and that all the X-rays are produced within the central source. We also neglect all relativistic effects in the present treatment. The projected X-ray flux impinging on the disk is
$$F_\mathrm{x}=\frac{L_xh_x}{4\pi (R^2+h_x^2)^{3/2}}.$$
(8)
Let us define $`\eta _x`$ as the ratio of the total X-ray luminosity to the integrated disk luminosity of the source, i.e., $`\eta _x\equiv L_x/L_d`$. Using equation (5), one obtains
$$\tau _1\simeq 27.2\frac{h_x}{R_s}\eta _xT_1\left[1+(h_x/R)^2\right]^{-3/2}r^{-3/2}\dot{m}$$
(9)
(one still has to use equation (7) to get the final answer for $`\tau _\mathrm{h}`$). For illustration, we choose a value of $`h_x=6R_s`$. For $`\mathrm{\Gamma }=1.9`$, a typical value for Seyfert galaxies, the Compton temperature is $`kT_x\simeq 7.6`$ keV. As discussed in paper I (§4.6), the Compton temperature at the surface of the disk depends on the cosine of the X-ray incidence angle, $`\mu _i`$, and the ratio $`F_\mathrm{x}/F_\mathrm{d}`$ approximately as:
$$T_c\simeq T_x\left[1+\mu _i\sqrt{3}\frac{F_\mathrm{x}+F_\mathrm{d}}{F_\mathrm{x}}\right]^{-1}.$$
(10)
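The two scalings just introduced can be evaluated numerically. The sketch below is purely illustrative: the parameter values are invented, the disk flux is set equal to the local X-ray flux for simplicity, and the bracket in equation (10) is taken to the power $`-1`$, so that a strong disk flux cools the skin:

```python
import math

def f_x(R, h_x, L_x=1.0):
    """Projected X-ray flux on the disk from a point source at height h_x (eq. 8)."""
    return L_x * h_x / (4.0 * math.pi * (R**2 + h_x**2) ** 1.5)

def t_c(T_x, mu_i, F_x, F_d):
    """Surface Compton temperature (eq. 10), taking the bracket to the power -1."""
    return T_x / (1.0 + mu_i * math.sqrt(3.0) * (F_x + F_d) / F_x)

# Illustrative values: h_x = 6 R_s and T_x = 7.6 keV (Gamma = 1.9).
h_x, T_x = 6.0, 7.6
for r in (10.0, 30.0, 100.0):            # radius in units of R_s
    fx = f_x(r, h_x)
    # For illustration only, take the local disk flux F_d equal to F_x.
    print(r, fx, t_c(T_x, mu_i=0.5, F_x=fx, F_d=fx))
```

For $`F_\mathrm{x}=F_\mathrm{d}`$ and $`\mu _i=0.5`$ the bracket equals $`1+\sqrt{3}`$, so the surface temperature is reduced to roughly $`T_x/2.7`$, qualitatively consistent with the cooler skin found below for the X-ray weak case.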
Figure (2) shows the Thomson depth of the skin as a function of radius for several values of $`\dot{m}`$. The parameters in Figure (2) are chosen to be: (a) $`\mathrm{\Gamma }=1.9`$, $`\eta _x=1`$ (“X-ray strong” case); (b) $`\mathrm{\Gamma }=1.9`$, $`\eta _x=0.1`$ (“X-ray weak” case), and (c) $`\mathrm{\Gamma }=2.3`$, $`\eta _x=1`$. Figure (3) shows the angle-averaged reflected spectra and temperature profiles of the hot layer for the three cases just considered and the accretion rate equal to the Eddington value ($`\dot{m}=1`$) for $`r=10`$. The details of numerical methods with which these spectra were obtained are described in paper I.
If the Compton temperature $`T_c\gtrsim 1`$ keV, the illuminated gas is nearly completely ionized, and the line is emitted almost exclusively from the material below the skin. For this reason, the local emissivity of the line turns out to be negligible when $`\tau _\mathrm{h}\gtrsim 1`$ (see paper I). Thus, the strength of the iron line will decrease with $`\dot{m}`$ for the X-ray strong case (Fig. 2a) when $`L_x\gtrsim `$ a few percent of the Eddington value. The skin is thickest at small radii, and therefore the broad iron line component will decrease first. The narrow line component (emitted farther away from the black hole) will also decrease with X-ray luminosity, but considerably more slowly than the broad line. In addition, if some of the line comes from a putative distant obscuring torus (Krolik, Madau & Życki 1994), or the disk has a concave geometry (Blackman 1999), then the narrow component will decrease even more slowly. Thus, our qualitative description of the behavior of the iron line EW with luminosity for X-ray strong sources is consistent with Figure (3) of Nandra et al. (1997b), suggesting an explanation for the X-ray Baldwin effect.
The X-ray weak case (Figs. 2b & 3b) is different in two respects. Firstly, the Thomson depth of the hot skin is smaller at a given accretion rate than in the X-ray strong case. Most importantly, however, the skin is not “that hot”. Namely, the skin temperature is only $`\sim 0.3`$ keV in the inner disk, and some of the iron ions are not completely ionized. For that reason, the iron line centroid energy turns out to be close to 6.7 keV, and a very deep absorption edge appears. This edge is in fact much stronger than the one resulting from neutral material. Hence it is possible that this relatively cold X-ray heated skin can be unambiguously detected in spectra of real AGN.
Similarly, a soft incident X-ray spectrum leads to a relatively cool skin because $`T_x\simeq 1`$ keV only (see Figs. 2c & 3c). As in the case $`\eta _x=0.1`$, a strong absorption edge is observed. In addition, the 6.7 keV Fe line is stronger, with an EW of 65 eV. Therefore, such a skin may also be detectable if it exists.
Comparing the values of $`\tau _\mathrm{h}`$ resulting from the analytical formulae with those seen in Fig. (3b), one notes that the deviations are as large as $`\sim `$50%. These relatively large deviations exemplify the fact that our equations (5) and (7) are good approximations only for cases with strong X-ray flux and hard incident spectra (i.e., $`F_\mathrm{x}\gtrsim F_d`$ and $`\mathrm{\Gamma }\lesssim 2`$). When one of these two conditions is not satisfied (as is the case for all three cases presented in Figs. 2 & 3), the Compton heated layer is cool enough for atomic processes to provide additional sources of heating and cooling beyond the Compton and bremsstrahlung processes, so that the energy equation (3) is not obeyed. However, our results can still be used as an order of magnitude estimate of $`\tau _\mathrm{h}`$ in the latter cases.
## 4. Summary
In this Letter, we derived an approximate expression for the Thomson depth of the hot, completely ionized skin on top of an accretion disk illuminated by X-rays. Our results are only weakly dependent on the a priori unknown $`\alpha `$-viscosity parameter (because it enters only through the boundary conditions). This allows us to reduce the uncertainty in documenting the predictions of different accretion theories with respect to the iron line profiles and the strength of the X-ray reflection hump. Using the “lamp post model” geometry as an example, we showed that the inner part of an accretion disk may have a Thomson-thick ionized skin. Under certain conditions ($`F_\mathrm{x}\gtrsim F_d`$ and $`\mathrm{\Gamma }\lesssim 2`$), the X-ray heated skin may act as a perfect mirror for photons with energies below $`\sim 30`$ keV. The physical cause of this is that Compton scattering in the skin prevents X-rays from penetrating to the deep cold layers that are capable of producing fluorescent line emission as well as other signatures of atomic physics.
We note that because the ionized skin is thickest in the inner disk, the observed absence or deficit of the relativistically broadened line and other reprocessing features in some systems (e.g., Życki, Done & Smith 1997, 1998), which was interpreted as possible evidence for a disruption of the cold disk at small radii, may also mean that the “cold” disk is still present down to the innermost radius, but that the skin effectively shields it from the illuminating X-ray flux. It is interesting, however, that the presence of the skin becomes apparent in systems that have $`F_\mathrm{x}\lesssim F_d`$ or $`\mathrm{\Gamma }\gtrsim 2`$, because the Fe atoms are not completely stripped of their electrons and thus produce strong ionized edges and lines. We believe these predictions should be testable observationally with current X-ray missions such as Chandra, Astro-E and XMM. Also note that for a patchy corona model of accretion disks (e.g., Haardt, Maraschi & Ghisellini 1994), the Thomson depth of the hot skin is always larger than the one found here, since the ratio $`F_\mathrm{x}/F_d`$ is larger. Finally, the presence of the ionized skin is important not only for the X-rays, but for other wavelengths as well (e.g., in studies of the Lyman edge of accretion disks, or of the correlation of optical/UV light curves with X-rays).
A shortcoming of this paper is that we have neglected the changes in the ionizing continuum due to scattering in the skin, which, strictly speaking, limits the applicability of our results to optically thin situations ($`\tau _\mathrm{h}\lesssim 1`$) and to incident angles not too far from normal. (A crude “fix” to the problem of small incident angles is to use $`cI_x`$, where $`I_x`$ is the X-ray intensity integrated over the $`\varphi `$ angle, in place of $`F_\mathrm{x}`$ in equation (5).) We plan to address this issue in future work.
The author acknowledges support from NAS/NRC Associateship and many useful discussions with D. Kazanas, T. Kallman and M. Bautista. The anonymous referee is thanked for several constructive suggestions. |
# Discovery of a red and blue shifted iron disk line in the galactic jet source GRO J1655-40
## 1 Introduction
The X-ray nova GRO J1655-40, discovered with BATSE (Zhang et al. 1994), is one of only two definite Galactic superluminal jet sources, the other being GRS 1915+105 (Mirabel & Rodriguez 1994). These two sources are also transient, whereas SS 433, with a jet velocity of 0.26 c, is persistent. Radio images of the source showed condensations moving in opposite directions (Tingay et al. 1995). The apparent superluminal motion implies that the emitting plasma has a velocity close to c, in fact 0.92 c (Hjellming & Rupen 1995). Initially, X-ray outbursts were separated by $`\sim `$120 d (Zhang et al. 1995), and on 1996, April 25 an outburst began that lasted 16 months, as shown by the RXTE ASM (Sobczak et al. 1999). By the time of this outburst the strong radio activity had ceased, although radio emission was detected again on 1996, May 29 (Hunstead & Campbell-Wilson 1996). The lack of radio detection after that date despite regular monitoring (Tingay 1999) implies that the jets had ceased to exist.
The optical observation of 1996, March provided a mass for the central object of $`7.02\pm 0.22`$ M$`_{\odot }`$ and an inclination angle of $`69.5\pm 0.08^{\circ }`$ (Orosz & Bailyn 1997). An inclination of $`67.2\pm 3.5^{\circ }`$ was found by van der Hooft et al. (1998). GRO J1655-40 is thus generally accepted to be a black hole binary, the only Galactic jet source whose black hole nature is supported by a mass determination.
Several observations were made with ASCA, the first on 1994, August 23; on 1997, Feb 26–28 an observation lasting one orbital cycle was made, during which the RXTE observation discussed here also took place. Iron absorption line features were found at $`\sim `$6.6 keV and $`\sim `$7.7 keV when the source was less bright (0.27–0.57 Crab), and at $`\sim `$7 keV and $`\sim `$8 keV when the source was brighter (2.2 Crab); these were identified with the $`\mathrm{K}_\alpha `$ and $`\mathrm{K}_\beta `$ lines of Helium-like and Hydrogen-like iron respectively. In the observation of 1997, Feb 26–28, a broad absorption feature at $`\sim `$6.8 keV was seen, thought to be a blend of the He-like and H-like lines (Yamaoka et al. 1999). Sobczak et al. (1999) found evidence for an iron edge at 8 keV. X-ray dipping has been observed in GRO J1655-40, and many short and deep dips similar to those in Cygnus X-1 (Bałucińska-Church et al. 1999) are seen.
## 2 Analysis and Results
The observation of GRO J1655-40 analysed here took place on 1997, Feb 26, lasting 14,600 s with an exposure of 7.6 ks. Data from the PCA instrument in “Standard 2” mode are presented. Data were screened by selecting for an angular distance of less than 0.02$`^{\circ }`$. The PCA consists of 5 Proportional Counter Units (PCU0 – 4); spectra were extracted for each PCU separately, but only units 0, 1 and 4 were used because of the consistently higher values of $`\chi ^2`$ found in fitting data on the Crab Nebula with PCUs 2 and 3 (Sobczak et al. 1999). Figure 1 shows the light curve from all 5 PCUs of the PCA with a binning of 16 s. Strong dipping is seen at 8.5 ks. Four spectra were selected at different times during the observation, avoiding the dips; these are indicated by arrows in Fig. 1, and are labelled spectrum 1 to spectrum 4. The third of these (at $`\sim `$9 ks) follows a dip, and may have slightly reduced intensity. Results from 3 further spectra not presented here gave similar results.
Spectra were accumulated over times averaging 140 s, equivalent to $`\sim `$340,000 counts per spectrum. Data were used between 3.5 – 25 keV. Primitive channels were regrouped in pairs between channels 30 and 39 (13 – 16 keV) and in groups of four above channel 40, and systematic errors of 1% were added. Background subtraction was carried out using the standard background models, and instrument responses from 1998, Nov 28 were used. Spectra from each time interval were extracted for PCUs 0, 1 and 4, and these were fitted simultaneously with a variable normalization allowed between the PCUs. A number of spectral models were investigated. Simple one-component models were not able to fit the spectra, and a good fit to the continuum was obtained with a two-component model consisting of a disk blackbody plus a power law component. The luminosity (0.1 – 100 keV) was $`9.7\times 10^{37}`$ erg s<sup>-1</sup>, with the disk blackbody constituting 89% of the total and the power law 11%.
However, for the continuum-only model the $`\chi ^2`$ per degree of freedom (dof) was poor, typically 130/91, and positive residuals could be seen in the spectra, as shown in the example of Fig. 2 (spectrum 3). These data are re-plotted in the form of ratios of the data to the best-fit model for each of the 4 datasets in Fig. 3. Strong line features at $`\sim `$5.8 keV and $`\sim `$7.3 keV can be seen in all of the spectra. Note that in the ratio plots, the lower energy part of the line is reduced compared with the higher energy part because of the decreasing continuum, so that the line centre appears to be at a higher energy than its true value, which is shown in the residual plots. The 4 spectra were re-fitted with 2 Gaussian lines added to the model. There was a marked improvement in the quality of fit, with an average value of $`\chi ^2`$/dof of 70/85. Results for all 4 spectra are shown in Table 1, where values of $`\chi ^2`$ are compared with those for the continuum model alone. F-tests showed that the addition of 2 lines was significant at $`>>`$99.9% confidence. Fig. 4 shows the spectrum of Fig. 2 with the 2 lines added to the model. Equivalent widths (EW) were derived for each Gaussian component, treating the red and blue wings as separate lines; the red wing had a mean EW of 70 eV and the blue wing a mean EW of 160 eV.
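As a schematic illustration of the two-Gaussian modelling (this uses synthetic data with an invented noise level and a zero continuum, not the actual response-folded PCA fit), one can verify that two Gaussian lines at the quoted energies are easily recovered by a least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(E, n1, e1, s1, n2, e2, s2):
    """Sum of two Gaussian emission lines on a zero continuum."""
    g = lambda n, e0, s: n * np.exp(-0.5 * ((E - e0) / s) ** 2)
    return g(n1, e1, s1) + g(n2, e2, s2)

E = np.linspace(3.5, 12.0, 200)            # energy grid, keV
truth = (0.7, 5.85, 0.3, 1.6, 7.32, 0.4)   # red and blue wings (invented widths)
rng = np.random.default_rng(0)
data = two_gaussians(E, *truth) + rng.normal(0.0, 0.01, E.size)

popt, _ = curve_fit(two_gaussians, E, data, p0=(1, 5.8, 0.3, 1, 7.3, 0.4))
print("fitted centroids:", popt[1], popt[4])   # close to 5.85 and 7.32 keV
```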
Absorption features can also be seen in the spectra, for example at $`\sim `$8.9 keV in spectrum 1 (lowest panel). This feature can be seen in all 4 spectra, and there is some evidence for small changes in its energy. Spectrum 3 also apparently has an absorption line or edge at 8.2 keV, and this may indicate that the data are not completely out of the dip. To investigate this point further, dip spectra were examined in relatively shallow dipping, in which the spectrum is not modified in a major way by absorption. A spectrum was selected in the intensity band 6100–6900 count s<sup>-1</sup> from Fig. 1, and the continuum was fitted with an appropriate model with partial covering of the disk blackbody plus power law. In this case, the ratio plot showed an even stronger depression at $`\sim `$8 keV than that of spectrum 3 in Fig. 3.
Although we have detected clear absorption features in the spectra, we can also ask whether the spectra may be modelled by the addition of a postulated, but undetected, absorption feature at an energy between the red and blue emission peaks; i.e. by modelling the valley between the emission peaks without requiring there to be emission. The continuum-only model leaves positive residuals at $`\sim `$5.8 and $`\sim `$7.3 keV, as can be seen in Fig. 2 and Fig. 3. Since the continuum is well fitted (as shown by the $`\chi ^2`$/dof for the best-fit models), these positive residuals are in no way model dependent, and so are strong evidence for the disk line. Thus the positive residuals cannot be removed by adding an absorption line, and spectral fitting tests confirm that this is the case.
Fitting the spectra with two emission lines was the obvious way of investigating the energies of the emission features. The fact that the feature energies are consistent with red and blue shifted iron disk line emission is strong evidence that the features are iron disk line emission (the radio jets having ceased to exist), and makes it unlikely that these features lie coincidentally at these energies and have another origin. The intensity of the blue wing of a disk line is, however, expected to be higher than that of the red wing (Fabian et al. 1989; Laor 1991), whereas we find the blue wing intensity to be generally rather less than that of the red wing. The absorption features detected with ASCA and in the present work at energies above 6.6 keV can modify the observed broad disk line considerably, and we require absorption at about the energy of the blue wing for our results to be consistent with a disk line. Consequently, we have carried out fitting with a model containing the Laor disk line model in ‘Xspec’ added to the disk blackbody plus power law continuum components, plus an absorption line. Stable, free fitting results were obtained for all four spectra with this model, and results are shown in Table 2. Values of the restframe energy varied between 6.4 keV and 6.8 keV with a mean of 6.56 keV. The energy of the absorption line was well constrained, since a large residual at a well-defined energy would result from the blue wing of the Laor model if the absorption line were omitted. Line energies varied between 7.0 and 7.3 keV. $`\chi ^2`$/dof values were similar to those obtained with two emission lines, varying between 60/84 and 87/84. The inner radius of the disk line emission region r<sub>1</sub> was found to be $`\sim `$10r<sub>S</sub>; $`\mathrm{r}_2`$ was $`\gtrsim `$50r<sub>S</sub>, but was poorly constrained.
We have thus shown that it is possible to fit the spectra with a model based on the assumption that the blue disk line wing appears relatively weak because of absorption at $`\sim `$7 keV. It can be argued that the Laor model is preferred because it contains the correct line shape, which two emission lines do not. On the other hand, the two emission line model is better able to determine the red and blue energies, as there is no complicating extra absorption component in the model; however, the line energies must then be interpreted as an average over the inner disk. Finally, we note that we do not require absorption at exactly the energies derived above to reduce the observed flux of the blue wing; we are not attempting to fit all of the absorption features in the spectrum, and various absorption features at 7 – 8 keV would be capable of reducing the blue wing flux.
Finally, we have tested whether a reflection component can be detected in GRO J1655-40, i.e. a component spectrally different from the incident power law (see Ross, Fabian & Young 1999). To do this, we added the ‘pexriv’ component in Xspec to our best-fit models (Magdziarz & Zdziarski 1995), although this model may be inaccurate for high values of $`\xi `$ (Ross et al. 1999). This was done both for the two emission line models and for the Laor disk line plus absorption line models. Our results for the disk line energies and for the absorption line indicate an ionized medium, and so we have allowed the ionization parameter $`\xi `$ in the reflection model to vary between 500 and $`10^4`$. The reflection component normalization and power law index were tied to the values of the power law. We note, however, that without the reflection component, for both the two emission line and the Laor model fitting, $`\chi ^2`$/dof was already acceptable. Fitting with a reflection component added showed that there was no reduction in $`\chi ^2`$. The upper limit on the flux of the pexriv reflection component was 1% of the total flux at 7 keV. For $`\xi `$ $`<`$ $`10^4`$, the actual contribution of a reflection component would be less than 1%. We thus conclude that we have not detected a reflection component. If a reflection component exists at a flux level of $`\lesssim `$1%, it would not be possible for the edge in this component to modify in any significant way the values we have derived for the red and blue wing energies, widths and EWs. In this source the blackbody strongly dominates the spectrum with 90% of the luminosity (see Fig. 5), unlike in Cygnus X-1 in the Low State (e.g. Bałucińska-Church et al. 1995), where the power law strongly dominates. The reflection component in pexriv is a fraction of the underlying power law of the order of 10%, and thus we expect the contribution of a reflection component to be small.
## 3 Discussion
We have presented evidence for a broad emission feature in the X-ray spectrum of GRO J1655-40 having red and blue wings, which we have modelled both by two Gaussian lines and by the Laor model plus an absorption line. Using the first model, the line components have high significance as shown by F-tests, and the average line energies obtained by fitting two emission lines to all 4 datasets analysed are $`5.85\pm 0.08`$ keV and $`7.32\pm 0.13`$ keV. Given the inability to fit the spectra without emission lines, and given these line energies, it is likely that the emission is iron emission. From the fact that the radio emission had switched off before the observation discussed here, the lines almost certainly originate in the disk.
We have also fitted the spectra with a model consisting of continuum components plus a Laor disk line plus an absorption line, and conclude that the lower-than-expected blue wing intensity can be explained by absorption at $`\sim `$7 keV reducing the observed flux of the blue wing. This can be either iron $`\mathrm{K}_\alpha `$ or $`\mathrm{K}_\beta `$ absorption. The results for this model give somewhat smaller restframe energies than derived from the two emission line model, the mean value being 6.56$`\pm `$0.14 keV compared with 6.88$`\pm `$0.12 keV.
Firstly, we compare our results with those obtained using ASCA. In the observations of 1994 – 1996, the energies at which iron absorption lines were detected were $`\sim `$6.6 keV and $`\sim `$7.7 keV for the source at lower intensity, and $`\sim `$7 keV and $`\sim `$8 keV when brighter (Ueda et al. 1998). In the RXTE data, however, we see evidence for an absorption feature at $`\sim `$9 keV, and for an absorption feature at 8 keV in spectrum 3. In the ASCA observation made on 1997, Feb 26–28, which included the RXTE observation, the total spectrum containing all data showed broad absorption at 6.7 keV, thought to be a blend of the He-like and H-like iron $`\mathrm{K}_\alpha `$ lines (Yamaoka et al. 1999). This energy corresponds approximately to the position of the neck between the red and blue peaks that we detect. The ASCA spectra do show some evidence for emission, however (i.e. positive residuals), particularly when the source is not very bright (Fig. 3 of Ueda et al. 1998). It is not clear at this stage why emission features were not seen more clearly with ASCA; one possibility is that the emission may vary with time, and the ASCA spectra were integrated over relatively long periods, leading to smearing of the emission.
Using our results from fitting two emission lines, the relativistic Doppler formula can be used with the mean energies of the red and blue wings, E<sub>1</sub> = 5.85 keV and E<sub>2</sub> = 7.32 keV, to solve for the velocity $`\beta `$ = v/c and the restframe energy of the disk line. This gives a mean restframe energy of $`6.88\pm 0.12`$ keV and a mean $`\beta `$ = $`0.33\pm 0.02`$ for an inclination angle of $`70^{\circ }`$. Averaged over the four spectra, the mean of $`\mathrm{E}_1`$ and $`\mathrm{E}_2`$ is 6.58$`\pm 0.10`$ keV, so that the redshift is 0.3 keV, i.e. z = 0.046. This is partly gravitational redshift and partly the transverse Doppler effect. A gravitational redshift of 0.046 would be produced by emission at $`\sim `$12r<sub>S</sub>.
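The inversion just described can be checked numerically. The sketch below is illustrative: it assumes the symmetric special-relativistic Doppler relations $`E_{red,blue}=E_0\sqrt{1-\beta ^2}/(1\pm \beta \mu )`$ with a line-of-sight velocity projection $`\mu =\mathrm{cos}70^{\circ }`$ (the value that reproduces the quoted numbers; the exact projection factor depends on the disk geometry and is an assumption here), and it also checks the gravitational redshift estimate:

```python
import math

E1, E2 = 5.85, 7.32                  # red and blue wing energies, keV
mu = math.cos(math.radians(70.0))    # assumed line-of-sight projection of v

# E_{red,blue} = E0*sqrt(1-beta^2)/(1 +/- beta*mu): two equations, two unknowns.
beta_mu = (E2 - E1) / (E2 + E1)
beta = beta_mu / mu
E0 = math.sqrt(E1 * E2 * (1.0 - beta_mu**2) / (1.0 - beta**2))
print(beta, E0)                      # ~0.33 and ~6.88 keV

# Pure gravitational redshift at r = 12 Schwarzschild radii:
z_grav = 1.0 / math.sqrt(1.0 - 1.0 / 12.0) - 1.0
print(z_grav)                        # ~0.045, close to the quoted z = 0.046
```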
Results from the Laor model give a mean restframe energy of 6.56$`\pm 0.14`$ keV. We conclude that the restframe energy lies between 6.4 and 7.0 keV, indicating Fe $`\mathrm{K}_\alpha `$ emission. The exact ionization state is not clear; the mean of 6.56 keV implies Fe XXII. The Laor model fitting further provided the inner and outer radii of the disk line emission region: r<sub>1</sub> $`\sim `$10r<sub>S</sub> and r<sub>2</sub> $`\gtrsim `$50r<sub>S</sub>. It might be thought that emission from different radii would tend to smear out the wings; however, simulations show that this does not take place for emission between 10 – 100r<sub>S</sub>, or even for emission between 10 – 200r<sub>S</sub>. The inner radius of the emission region is more important, and if we allow emission from 1 – 10r<sub>S</sub>, the wings are smeared out because of the strong emission from the inner radii and the changing energy shifts. However, it is likely that the disk inside 10r<sub>S</sub> is totally ionized by the X-ray emission, so that no lines are produced there and large-scale smearing does not occur.
The ASCA observation of 1997, February, spanning one orbital cycle, detected an absorption line at $`\sim `$6.8 keV at all orbital phases, showing that the absorbing material was not confined to part of the accretion disk structure (Yamaoka et al. 1999), as it is in LMXB. Moreover, the line energies seen generally with ASCA were H-like or He-like, showing that the absorbing plasma was highly ionized. Our observation of highly broadened iron emission clearly shows the line to originate in the inner, highly ionized disk. The location of the absorber is, however, not so clear.
Further observations and detailed analysis are clearly needed to explain the observed spectral features, which are complex, the absorption features in particular appearing to change with source intensity. GRO J1655-40 offers probably the best opportunity of studying disk lines strongly affected by gravitational and Doppler effects, because of the high inclination at which it is seen. Our detection of the red and blue shifted wings at energies of 5.8 and 7.3 keV is direct evidence for the black hole, since a splitting as wide as this cannot be produced by a neutron star, and is the first detection of a red and blue shifted disk line in a Galactic source.
## 4 Acknowledgments
We would like to thank Prof. Hajime Inoue for valuable discussions on the relation between the emission features reported here and absorption features discovered with ASCA. |
# Black Hole Entropy from Horizon Conformal Field Theory
## 1 SOME QUESTIONS
More than 25 years have now passed since the discovery by Bekenstein and Hawking that black holes are thermal objects, with characteristic temperatures and entropies. But while black hole thermodynamics is by now well established, the underlying statistical mechanical explanation remains profoundly mysterious. Recent partial successes in string theory and the “quantum geometry” program have only added to the problem: we now have several competing microscopic pictures of the same phenomena, with no clear way to understand why they give identical results.
The mystery is deepened when we recall that the original analysis of Bekenstein and Hawking needed none of the details of quantum gravity, relying only on semiclassical results that had no obvious connection with microscopic degrees of freedom. This is the problem of “universality”: why should such profoundly different approaches all give the same answers for black hole temperature and entropy?
I will orient this presentation around a few fundamental questions. At first sight, some of these questions—although not their answers—are obvious, while others may seem more obscure. I hope to show that these are “right” questions, in that they lead toward a plausible solution to the problem of universality in black hole thermodynamics. The solution I suggest is certainly not proven, however, and perhaps at this stage the questions are as important as the answers.
The questions are these:
* Why do black holes have entropy?
* Can black hole horizons be treated as boundaries?
* Why do different approaches to quantum gravity yield the same black hole entropy?
* Can classical symmetries control the density of quantum states?
* Can two-dimensional conformal field theory be relevant to realistic (3+1)-dimensional gravity?
### 1.1 Why do black holes have entropy?
Our starting point is the Bekenstein-Hawking entropy
$$S=\frac{A}{4\hbar G}$$
(1)
for a black hole of horizon area $`A`$. It is possible, of course, that black holes are fundamentally unlike any other thermodynamic systems, and that black hole entropy is unrelated to any microscopic degrees of freedom. But if we reject such a radical proposal, then even knowing nothing about quantum gravity, we can make some reasonable guesses.
First, the underlying microscopic degrees of freedom must be quantum mechanical, since $`S`$ depends on Planck’s constant. Second, they must be, in some sense, gravitational, since $`S`$ depends on Newton’s constant. It is thus reasonable to suppose that they are quantum gravitational, though this conclusion is not quite necessary—the relevant degrees of freedom could conceivably be those of a quantum field theory in a classical gravitational background. Third, the dependence of $`S`$ on the horizon area suggests (though again does not prove) that the degrees of freedom responsible for black hole entropy live on or very near the horizon.
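To set the scale of equation (1): restoring factors of $`c`$ and $`k_B`$ gives $`S=k_BAc^3/(4\hbar G)`$, and the short sketch below (an illustrative aside using standard values of the constants, not part of the original argument) evaluates this for a solar-mass Schwarzschild black hole:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg

r_s = 2.0 * G * M_sun / c**2          # Schwarzschild radius, ~2.95 km
A = 4.0 * math.pi * r_s**2            # horizon area
l_p2 = hbar * G / c**3                # Planck length squared
S = A / (4.0 * l_p2)                  # entropy in units of k_B
print(r_s, S)                         # ~2.95e3 m, ~1e77
```

The result, $`S/k_B\sim 10^{77}`$ for one solar mass, makes the enormous density of microstates to be accounted for explicit.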
At the same time, we know that the relevant degrees of freedom cannot be the ordinary “graviton” degrees of freedom one expects in quantum gravity. As Bañados, Teitelboim, and Zanelli showed , black holes exist in (2+1)-dimensional general relativity, and exhibit the usual thermodynamic behavior. But in 2+1 dimensions there are no gravitons, and the ordinary “bulk” degrees of freedom are finite in number . Indeed, on a spatially compact (2+1)-dimensional manifold, the bulk gravitational degrees of freedom are far too few to account for black hole entropy, and we can obtain the Bekenstein-Hawking entropy from quantum gravity only if we admit extra “boundary” degrees of freedom . Such boundary excitations appear naturally in the Chern-Simons formulation of (2+1)-dimensional gravity as “would-be gauge degrees of freedom,” that is, excitations that would normally be unphysical, pure gauge configurations but that become physical in the presence of a boundary .
In the (2+1)-dimensional theory, there are two obvious candidates for a “boundary,” spatial infinity and the black hole horizon. The degrees of freedom at spatial infinity are naturally described by a Liouville theory , and it is not obvious that there are enough of them to account for black hole entropy. However, an elegant conformal field theory argument due to Strominger leads to the correct Bekenstein-Hawking formula. Unfortunately, though, the theory at infinity cannot distinguish between a black hole and, for example, a star of the same mass, and cannot easily attribute separate entropies to distinct horizons in multi-black hole spacetimes. For that, we need a “boundary” associated with each horizon. This leads to the next question:
### 1.2 Can black hole horizons be treated as boundaries?
A black hole horizon is not, of course, a true physical boundary. A freely falling observer can cross a horizon without seeing anything special happen; she certainly doesn’t fall off the edge of the universe. To understand the role of a horizon as a boundary, one must think more carefully about the meaning of a black hole in quantum gravity.
Suppose we wish to ask a question about a black hole: for example, what is the spectrum of Hawking radiation? In semiclassical gravity, this is easy—we merely choose a black hole metric as a background and do quantum field theoretical calculations in that fixed curved spacetime.
In full quantum gravity, though, life is not so simple. The metric is now a quantum field, and the uncertainty principle forbids us from fixing its value. We can at most fix “half the degrees of freedom” of the geometry. How, then, do we know whether there is a black hole?
The answer is that we can restrict the metric to one in which a horizon is present. A horizon is a codimension one hypersurface, and we need only “half the degrees of freedom” to fix its properties. This does not, of course, make the horizon a physical boundary, but it makes it a place at which “boundary conditions” are set. In a path integral formulation, for instance, we can impose the existence of a horizon by splitting the manifold into two pieces along some hypersurface and performing separate path integrals over the fields on each piece, with fields restricted at the “boundary” by the requirement that it be a horizon. This kind of split path integral has been studied in detail in 2+1 dimensions , where it can be shown that the usual boundary degrees of freedom appear. It seems at least plausible that the same should be true in higher dimensions.
(I should confess that I have been sweeping a rather difficult problem under the rug. It is not clear what kind of horizon one should choose, or what the appropriate boundary conditions should be. Recent work by Ashtekar and collaborators on “isolated horizons” is probably relevant , but these results have not yet been applied to this question.)
### 1.3 Why do different approaches to quantum gravity yield the same entropy?
We next turn to the problem posed at the beginning of this work, that of universality. Ten years ago, if someone had asked for a statistical mechanical explanation of black hole entropy, the best answer would have been, “We don’t know.” Today we suffer an embarrassment of riches—we have several explanations from string and D-brane theory , another from the “quantum geometry” program , yet another from Sakharov-style induced gravity . None of these is yet completely satisfactory, but all give the right functional dependence and the right order of magnitude for the entropy. And all agree with the original semiclassical results that were obtained without any assumptions about quantum gravitational microstates.
In one sense, this agreement is not surprising: any quantum theory of gravity had better give back the semiclassical results in an appropriate limit. But the quantity we are investigating, the entropy, is a measure of the density of states, about as quantum mechanical a quantity as one could hope to find. Merely pointing to the semiclassical results does not explain why the density of states behaves as it does.
This problem has not yet been solved. But perhaps the most plausible direction in which to look for a solution is suggested by Strominger’s recent work . Regardless of the details of the degrees of freedom, any quantum theory of gravity will inherit from classical general relativity a symmetry group, the group of diffeomorphisms. While the commutators may receive quantum corrections of order $`\mathrm{\hbar }`$, we expect the fundamental structure to remain. So perhaps the classical structure of the group of diffeomorphisms is sufficient to govern the gross behavior of the density of quantum states.
### 1.4 Can classical symmetries control the density of quantum states?
Symmetries determine many properties of a quantum theory, but one does not ordinarily think of the density of states as being one of these properties. In one large set of examples, however, the two-dimensional conformal field theories, the symmetry group does precisely that.
Consider a conformal field theory on the complex plane. The fundamental symmetries are the holomorphic diffeomorphisms $`zz+ϵf(z)`$. If one takes a basis $`f_n(z)=z^n`$ of holomorphic functions and considers the corresponding algebra of generators $`L_n`$ of diffeomorphisms, it is easy to show that
$$[L_m,L_n]=(m-n)L_{m+n}+\frac{c}{12}m(m^2-1)\delta _{m+n}$$
(2)
where the constant $`c`$, the central charge, is a measure of the conformal anomaly. The algebra (2) is known as the Virasoro algebra.
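The first term on the right is the classical (Witt) part of the algebra, and it can be checked directly by representing $`L_n`$ as the vector field $`-z^{n+1}d/dz`$ acting on functions of $`z`$. The following is my own illustrative sketch; the central $`c`$-term is invisible at this classical, single-function level:

```python
import sympy as sp

z = sp.symbols('z')
phi = sp.Function('phi')

def L(n, f):
    # Classical generator L_n = -z^(n+1) d/dz acting on a function f(z)
    return -z ** (n + 1) * sp.diff(f, z)

def commutator(m, n, f):
    # [L_m, L_n] f = L_m(L_n f) - L_n(L_m f)
    return sp.simplify(L(m, L(n, f)) - L(n, L(m, f)))

# Verify [L_m, L_n] = (m - n) L_{m+n} on a generic test function
f = phi(z)
for m in range(-2, 3):
    for n in range(-2, 3):
        residue = commutator(m, n, f) - (m - n) * L(m + n, f)
        assert sp.simplify(residue) == 0
print("Witt algebra [L_m, L_n] = (m - n) L_{m+n} verified")
```

Only the $`(m-n)L_{m+n}`$ structure is tested here; the $`c/12`$ term arises from operator ordering or, classically, from boundary terms.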
As Cardy first showed , the central charge $`c`$ is nearly enough to determine the asymptotic behavior of the density of states. Let $`\mathrm{\Delta }_0`$ be the lowest eigenvalue of the Virasoro generator $`L_0`$, that is, the “energy” of the ground state, and denote by $`\rho (\mathrm{\Delta })`$ the density of eigenstates of $`L_0`$ with eigenvalue $`\mathrm{\Delta }`$. Then for large $`\mathrm{\Delta }`$,
$$\rho (\mathrm{\Delta })\sim \mathrm{exp}\left\{2\pi \sqrt{\frac{c_{\text{eff}}\mathrm{\Delta }}{6}}\right\}$$
(3)
where
$$c_{\text{eff}}=c-24\mathrm{\Delta }_0.$$
(4)
A careful derivation of the Cardy formula (3)–(4) is given in reference . The proof involves a simple use of duality:
Consider a conformal field theory on a cylinder; then analytically continue to imaginary time, and compactify time to form a torus. In addition to the familiar “small” diffeomorphisms, the torus admits “large” diffeomorphisms, one of which is an exchange of the two circumferences. Under such an exchange, the angular and time coordinates are swapped. Since the theory is conformal, we can always rescale to normalize the angular circumference to have length 1. The exchange of circumferences is then a map from $`t`$ to $`1/t`$, or equivalently from energy $`E`$ to $`1/E`$. The high energy states are thus connected by symmetry to low energy states, and one can obtain their gross properties—in particular, the behavior of the density of high-$`E`$ states—from knowledge of low-$`E`$ states. The dependence of the Cardy formula on the central charge appears because the theory is not really quite scale-invariant: $`c`$ is a conformal anomaly, and the rescaling of the angular circumference introduces factors of $`c`$.
The central charge of a conformal field theory ordinarily arises from operator ordering in the quantization. But as Brown and Henneaux have stressed , a central charge can appear classically for a theory on a manifold with boundary. In the presence of a boundary, the generators of diffeomorphisms typically acquire extra “surface” terms, which are required in order that functional derivatives and Poisson brackets be well-defined . These surface terms are determined only up to the addition of constants, but the constants may not be completely removable; instead, they can appear as central terms in the Poisson algebra of the generators.
The canonical example of such a classical central charge is (2+1)-dimensional gravity with a negative cosmological constant $`\mathrm{\Lambda }=-1/\ell ^2`$ . For configurations that are asymptotically anti-de Sitter, the algebra of diffeomorphisms acquires a surface term at spatial infinity, and the induced algebra at the boundary becomes a pair of Virasoro algebras with central charges
$$c=\overline{c}=3\ell /2G.$$
(5)
Strominger’s key observation was that if one takes the eigenvalues of $`L_0`$ and $`\overline{L}_0`$ that correspond to a black hole,
$$M=(L_0+\overline{L}_0)/\ell ,J=L_0-\overline{L}_0$$
(6)
and assumes $`\mathrm{\Delta }_0=0`$, the Cardy formula (3)–(4) gives the standard Bekenstein-Hawking entropy (1) for the (2+1)-dimensional black hole.
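This counting can be verified symbolically. In the sketch below (my own check, not part of the original argument) the standard BTZ relations $`M=(r_+^2+r_{-}^2)/(8G\ell ^2)`$ and $`J=r_+r_{-}/(4G\ell )`$ are taken as inputs; combining them with equations (3), (5), and (6) reproduces $`S=2\pi r_+/4G`$:

```python
import sympy as sp

r_p, r_m, ell, G = sp.symbols('r_p r_m ell G', positive=True)

# Standard BTZ relations between (M, J) and the horizon radii r_+, r_-
# (assumed inputs, not derived in the text above)
M = (r_p**2 + r_m**2) / (8 * G * ell**2)
J = r_p * r_m / (4 * G * ell)

# Invert eq. (6): M = (L0 + L0bar)/ell,  J = L0 - L0bar
L0 = (ell * M + J) / 2
L0bar = (ell * M - J) / 2

# Brown-Henneaux central charge, eq. (5)
c = 3 * ell / (2 * G)

# Cardy formula (3) with Delta_0 = 0, one copy per Virasoro sector
S_cardy = 2 * sp.pi * sp.sqrt(c * L0 / 6) + 2 * sp.pi * sp.sqrt(c * L0bar / 6)

# Bekenstein-Hawking entropy: horizon circumference 2*pi*r_+ over 4G
S_BH = 2 * sp.pi * r_p / (4 * G)

# Check agreement at arbitrary parameter values with r_+ > r_-
for vals in [{r_p: 3, r_m: 1, ell: 2, G: sp.Rational(1, 2)},
             {r_p: 5, r_m: 2, ell: 1, G: 1}]:
    assert sp.simplify(S_cardy.subs(vals) - S_BH.subs(vals)) == 0
print("Cardy formula reproduces S = 2*pi*r_+/(4G) for the BTZ black hole")
```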
Strominger’s derivation is an elegant “existence proof” for the idea that black hole entropy can be determined by classical symmetries. As a general argument, though, it has two obvious limitations. First, it uses features peculiar to 2+1 dimensions. In particular, the relevant Virasoro algebras have a natural geometrical meaning: they are the symmetries of the two-dimensional boundary of three-dimensional adS space. While many of the black holes in string theory have near-horizon geometries that resemble the (2+1)-dimensional black hole , others do not, so this limitation is a serious one.
Second, since these Virasoro algebras appear at spatial infinity, they cannot in themselves detect the details of the interior geometry. For example, multi-black hole solutions ought to have a distinct entropy attributed to each horizon, but an asymptotic algebra at infinity can only determine the total entropy. Indeed, the classical central charge at infinity cannot tell whether the configuration is a black hole or a star.
We are thus led to look for generalizations of Strominger’s approach to higher dimensions and to boundary terms at individual black hole horizons. It seems natural to start by looking for higher-dimensional generalizations of the Cardy formula. Unfortunately, no such generalizations are known, and the derivation of equations (3)–(4) does not naturally extend to more than two dimensions. We are thus led to our next question:
### 1.5 Can two-dimensional conformal field theory be relevant to realistic (3+1)-dimensional gravity?
We have one example in which symmetries can control the density of quantum states: conformal field theory in two dimensions. For this fact to be relevant to realistic (3+1)-dimensional gravity, we must argue that some two-dimensional submanifold of spacetime plays a special role in black hole thermodynamics. This is in fact the case: in the semiclassical approach to black hole thermodynamics in any dimension, all of the interesting physics takes place in the “$`r`$$`t`$ plane.”
Let us consider for simplicity an $`n`$-dimensional Schwarzschild black hole, analytically continued to “Euclidean” signature. (The generalization to more complicated black holes is fairly straightforward.) Near the horizon, the metric takes the form
$$ds^2\approx \frac{r_+}{r-r_+}dr^2+\frac{r-r_+}{r_+}d\tau ^2+r_+^2d\mathrm{\Omega }^2$$
(7)
where the horizon is located at $`r=r_+`$. It is well known that the Hawking temperature can be obtained by demanding that there be no conical singularity in the $`r`$$`\tau `$ plane. Indeed, if we transform to new coordinates
$$R=2\sqrt{r_+(r-r_+)},\widehat{\tau }=\frac{\tau }{2r_+}$$
(8)
the metric (7) becomes
$$ds^2\approx dR^2+R^2d\widehat{\tau }^2+r_+^2d\mathrm{\Omega }^2.$$
(9)
Smoothness at $`R=0`$ then requires that $`\widehat{\tau }`$ be an ordinary angular coordinate with period $`2\pi `$, i.e., that $`\tau `$ have a period $`\beta =4\pi r_+`$, the correct inverse Hawking temperature. Note that this argument depended only on the geometry of the $`r`$$`\tau `$ plane near the horizon, and not on the angular coordinates.
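The algebra of this coordinate change is easy to verify symbolically (a quick check added here for concreteness):

```python
import sympy as sp

r, tau, r_p = sp.symbols('r tau r_p', positive=True)

# Coordinate change of eq. (8)
R = 2 * sp.sqrt(r_p * (r - r_p))
tau_hat = tau / (2 * r_p)

# Pull dR^2 + R^2 dtau_hat^2 back to the (r, tau) coordinates
g_rr = sp.simplify(sp.diff(R, r) ** 2)                   # coefficient of dr^2
g_tt = sp.simplify(R ** 2 * sp.diff(tau_hat, tau) ** 2)  # coefficient of dtau^2

# Compare with the near-horizon metric (7)
assert sp.simplify(g_rr - r_p / (r - r_p)) == 0
assert sp.simplify(g_tt - (r - r_p) / r_p) == 0

# Smoothness at R = 0 requires tau_hat to have period 2*pi,
# i.e. tau has period beta = 4*pi*r_+
beta = sp.solve(sp.Eq(tau_hat, 2 * sp.pi), tau)[0]
assert sp.simplify(beta - 4 * sp.pi * r_p) == 0
print("coordinate change (8) maps (7) to (9); beta = 4*pi*r_+")
```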
What is less well known is that the semiclassical computation of the entropy also depends only on the near-horizon geometry of the $`r`$–$`\tau `$ plane. If one chooses boundary conditions appropriate for the microcanonical ensemble, the classical action of general relativity reduces to the Einstein-Hilbert action for a “cylinder” $`\mathrm{\Delta }_ϵ\times S^{n-2}`$, where $`\mathrm{\Delta }_ϵ`$ is a disk of radius $`ϵ`$ around the point $`r=r_+`$ in the $`r`$–$`\tau `$ plane . It may be shown that the action factorizes, becoming
$$\underset{ϵ\to 0}{lim}I=\frac{1}{8\pi G}\chi (\mathrm{\Delta }_ϵ)\times \text{Vol}(S^{n-2})$$
(10)
where $`\chi (\mathrm{\Delta }_ϵ)`$ is the Euler characteristic of $`\mathrm{\Delta }_ϵ`$.
In the standard Euclidean path integral approach to black hole thermodynamics, the action $`I/\mathrm{\hbar }`$, evaluated with appropriate boundary conditions, gives the leading order contribution to the entropy. It is evident from equation (10) that the “transverse” factor $`\text{Vol}(S^{n-2})`$ is needed to obtain the correct entropy—it gives the factor of area in equation (1). But this term is also nondynamical, merely providing a fixed multiplicative factor; it is the topology of the $`r`$–$`\tau `$ plane that distinguishes the black hole configuration from any other.
To better understand the relevant conformal field theory, it is useful to combine the coordinates $`R`$ and $`\widehat{\tau }`$ in equation (9) into a complex coordinate $`z=Re^{i\widehat{\tau }}`$:
$$z\sim \mathrm{exp}\left\{\frac{1}{2r_+}\left[i\tau +r_+\mathrm{ln}\left(\frac{r}{r_+}-1\right)\right]\right\}.$$
(11)
Continuing back to Lorentzian signature, we see, perhaps not surprisingly, that “holomorphic” and “antiholomorphic” functions correspond to functions of $`t\pm r_{*}`$, where $`r_{*}`$ is the usual “tortoise coordinate.”
These results suggest two possible strategies for further investigating the statistical mechanics of black hole entropy. We can try to dimensionally reduce general relativity to two dimensions in the vicinity of a black hole horizon, and see whether we can identify a conformal field theory and determine the central charge. Alternatively, we can look at the Poisson algebra of the generators of diffeomorphisms and see whether an appropriate subgroup of transformations of the $`r`$–$`t`$ plane acquires a central charge.
The first of these strategies has been explored by Solodukhin , who has shown that the near-horizon dimensional reduction of general relativity leads to a Liouville theory with a calculable central charge. While there is still some uncertainty about the proper choice of the eigenvalue $`\mathrm{\Delta }`$ in the Cardy formula, it seems likely that the symmetries yield the correct Bekenstein-Hawking entropy. The second strategy is my next topic.
## 2 ENTROPY IN ANY DIMENSION
The arguments of the preceding section have led us to a possible strategy for understanding the universal nature of the Bekenstein-Hawking entropy. We begin with classical general relativity on a manifold with boundary, imposing boundary conditions to ensure that the boundary is a black hole horizon. We next investigate the classical Poisson algebra of the generators of diffeomorphisms on this manifold, concentrating in particular on the subalgebra of diffeomorphisms of the $`r`$$`t`$ plane. We expect this subalgebra to acquire a classical central extension, with a computable central charge and with some eigenvalue $`\mathrm{\Delta }`$ of $`L_0`$ associated with the black hole. We then see whether these values of $`c`$ and $`\mathrm{\Delta }`$ yield, via the Cardy formula, the correct asymptotic behavior of the density of states. If they do, we will have demonstrated that the Bekenstein-Hawking entropy is indeed governed by symmetries, independent of the finer details of quantum gravity.
The exploration of this strategy was begun in references and . While conclusive answers have not yet been obtained, the results so far suggest that the program may succeed. I will now briefly summarize the progress.
### 2.1 Central terms
In classical general relativity, the “gauge transformations” have generators $`H[\xi ]`$ that can be written as integrals of the canonical constraints over a spacelike hypersurface. For a spatially closed manifold, these generators obey the standard Poisson algebra $`\{H[\xi _1],H[\xi _2]\}=H[\{\xi _1,\xi _2\}]`$, where the bracket $`\{\xi _1,\xi _2\}`$ is the usual Lie bracket for vector fields, or the closely related “surface deformation” bracket of the canonical algebra . As DeWitt and Regge and Teitelboim noted, however, the presence of a boundary can alter the generators $`H[\xi ]`$: in order for their functional derivatives and Poisson brackets to be well-defined, one must add surface terms $`J[\xi ]`$, whose exact form depends on the choice of boundary conditions.
The presence of such surface terms can, in turn, affect the algebra of constraints. If one writes $`H[\xi ]+J[\xi ]=L[\xi ]`$, one generically finds an algebra of the form
$$\{L[\xi _1],L[\xi _2]\}=L[\{\xi _1,\xi _2\}]+K[\xi _1,\xi _2]$$
(12)
where the central term $`K[\xi _1,\xi _2]`$ depends on the metric only through its fixed boundary values. As Brown and Henneaux pointed out , one can evaluate this central extension by looking at the surface terms in the variation $`\delta _{\xi _1}H[\xi _2]`$, where $`\delta _{\xi _1}`$ means the variation corresponding to a diffeomorphism parametrized by the vector $`\xi _1`$. Indeed, since such a variation is generated by $`L[\xi _1]`$, it is clearly related to the brackets (12). A general expression for these brackets, derived using Wald’s covariant “Noether charge” formalism , is given in reference .
### 2.2 Boundary conditions
To apply these techniques to a black hole, we must next decide how to impose boundary conditions that imply the presence of a horizon. This is perhaps the most delicate aspect of the program, and it is not yet fully understood.
One possible procedure is to look at the functional form of the metric of a genuine black hole in some chosen coordinate system near its horizon, and to restrict oneself to metrics that approach this form near the boundary. This seems fairly straightforward, but it may be too coordinate-dependent, and it is not obvious how fast one should require metric components to approach their boundary values. A second procedure is to impose the existence of a local Killing horizon in the vicinity of the boundary. This has the advantage of covariance, but it is probably too restrictive for a dynamical black hole, and it is again unclear how fast the geometry should approach the desired boundary values.
Each of these choices of boundary conditions leads to the constraint algebra (12), with a nonvanishing central term $`K[\xi _1,\xi _2]`$. Moreover, when restricted to diffeomorphisms of the $`r`$$`t`$ plane, this algebra reduces to something that is almost a Virasoro algebra. In the covariant phase space approach of reference , for example, one finds a central term of the form
$$K[\xi _1,\xi _2]\sim \oint _{\mathcal{H}}\widehat{ϵ}\left(D\xi _1D^2\xi _2-D\xi _2D^2\xi _1\right)$$
(13)
where $`\mathcal{H}`$ is a constant-time cross section of the horizon, $`\widehat{ϵ}`$ is the induced volume element on $`\mathcal{H}`$, and $`D=\partial /\partial v`$ is a “time” derivative, that is, a derivative along the orbit of the Killing vector at the horizon.
If this expression included an integration over the time parameter $`v`$, equation (13) would be precisely the ordinary central term for a Virasoro algebra, and Fourier decomposition would reproduce equation (2). Unfortunately, no such $`v`$ integration is present. This mismatch of integrations was first noted by Cadoni and Mignemi in the study of symmetry algebras in (1+1)-dimensional asymptotically anti-de Sitter spacetimes. They suggested that one could define new “time-averaged” generators, which would then obey the usual Virasoro algebra. Alternatively, in more than 1+1 dimensions one can add extra “angular” dependence to the $`r`$$`t`$ diffeomorphisms in such a way that a conventional Virasoro algebra is recovered. It was argued in reference that such angular dependence may be needed to have a well-defined Hamiltonian. However, there is clearly more to be understood.
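To see how an expression of this form is related to the cocycle in (2), one can Fourier-decompose the diffeomorphisms as $`\xi _k=e^{ikv}`$ and supply the missing $`v`$ integration by hand. The following sketch (my own illustration, with overall normalizations ignored) shows that the integrated bilinear grows as $`m^3\delta _{m+n}`$:

```python
import sympy as sp

v = sp.symbols('v', real=True)

def xi(k):
    # Fourier mode xi_k = exp(i k v) of a diffeomorphism parameter
    return sp.exp(sp.I * k * v)

def integrand(k1, k2):
    # The bilinear structure of the central term (13):
    # D(xi_1) D^2(xi_2) - D(xi_2) D^2(xi_1)
    x1, x2 = xi(k1), xi(k2)
    return (sp.diff(x1, v) * sp.diff(x2, v, 2)
            - sp.diff(x2, v) * sp.diff(x1, v, 2))

# With an extra integration over one period of v, only k1 + k2 = 0
# survives, and the result grows as m^3
for m in range(1, 4):
    val = sp.integrate(integrand(m, -m), (v, 0, 2 * sp.pi))
    assert sp.simplify(val + 4 * sp.pi * sp.I * m**3) == 0

# Modes with k1 + k2 != 0 integrate to zero
assert sp.simplify(sp.integrate(integrand(2, 1), (v, 0, 2 * sp.pi))) == 0
print("central term ~ m^3 delta_{m+n}, as in the Virasoro algebra")
```

The linear-in-$`m`$ piece of the full $`m(m^2-1)`$ cocycle corresponds to a constant shift of $`L_0`$ and is convention dependent, so only the cubic growth is significant here.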
### 2.3 Entropy and the Cardy formula
Let us set aside this problem for the moment, and suppose that we have included a suitable angular dependence or an extra integration over $`v`$ in order to obtain a standard Virasoro algebra. The resulting central charge can then be computed as in reference , and the application of the Cardy formula (3)–(4) is straightforward. One finds that both the boundary conditions of reference and those of reference yield the standard Bekenstein-Hawking entropy (1).
Although this analysis was carried out for ordinary black holes in general relativity, it can be easily extended to a number of other interesting problems. The same type of argument yields the correct entropy for cosmological horizons , and probably for “Misner strings” in Taub-NUT and Taub-bolt spaces . The analysis can be easily applied to dilaton gravity as well, where it again gives the correct entropy.
Several problems remain. The first two, which are probably related, are to find the right general boundary conditions and to understand the extra integration required in equation (13).
The third is conceptually more difficult. The approach to black hole entropy described here is nondynamical: by imposing boundary conditions at a horizon, we are fixing the characteristics of that horizon, effectively forbidding such processes as Hawking radiation. This is not a terrible thing if we are only interested in equilibrium thermodynamics; the Bekenstein-Hawking entropy, for example, is presumably “really” the entropy of a black hole in equilibrium with its Hawking radiation. To fully understand the quantum dynamics of a black hole, however, one needs a more general approach, in which boundary conditions are strong enough to determine that a black hole is present, but flexible enough to allow that black hole to evolve with time. I leave this as a challenge for the future.
## Abstract
We present new sets of chemical yields from low- and intermediate-mass stars with $`0.8M_{\odot }\le M\le M_{\mathrm{up}}\simeq 5M_{\odot }`$, and three choices of the metallicity, $`Z=0.02`$, $`Z=0.008`$, and $`Z=0.004`$ (Marigo 2000, in preparation). These are then compared with the yields calculated by other authors on the basis of different model prescriptions, and with basic observational constraints that should be reproduced.
## 1 Surface chemical abundances
In this work we predict the changes in the surface abundances of several elements (H, <sup>3</sup>He, <sup>4</sup>He, <sup>12</sup>C, <sup>13</sup>C, <sup>14</sup>N, <sup>15</sup>N, <sup>16</sup>O, <sup>17</sup>O, and <sup>18</sup>O), for a dense grid of stellar models with initial masses from $`0.8M_{\odot }`$ to $`M_{\mathrm{up}}=5M_{\odot }`$, and three choices of the original composition, i.e. $`[Y=0.273,Z=0.019]`$, $`[Y=0.250,Z=0.008]`$, $`[Y=0.240,Z=0.004]`$.
Various processes may concur to alter the surface chemical composition of a star, namely: i) the first dredge-up occurring at the base of the RGB; ii) the second dredge-up taking place during the E-AGB only for stars with $`M>(3.5–4.0)M_{\odot }`$; iii) the third dredge-up experienced by stars with $`M>(1.2–1.5)M_{\odot }`$ during the TP-AGB phase; and iv) hot-bottom burning in the most massive AGB stars with $`M>(3.5–4.0)M_{\odot }`$.
Predictions for first and second dredge-up are taken from Padova stellar models with overshooting (Girardi et al. 2000), whereas for the TP-AGB phase the results of synthetic calculations are adopted (Marigo et al. 1996, 1998, 1999). The reader should refer to these works for all the details.
## 2 Stellar yields
Yields from low- and intermediate-mass stars are determined by the wind contributions during the RGB and AGB phases. In these calculations mass loss is described by Reimers’ prescription ($`\eta =0.45`$) for the RGB phase, and by the Vassiliadis & Wood (1993) formalism for the AGB phase. Yields for the elements under consideration are shown in Fig. 1 as a function of the initial stellar mass, for three choices of the metallicity. Major positive contributions correspond to <sup>4</sup>He, <sup>12</sup>C, and <sup>14</sup>N. Complete sets of stellar yields, distinguishing both the secondary and primary components of the CNO contributions, will be available in Marigo (2000, in preparation).
## 3 Comparison with previous synthetic AGB models
Chemical yields crucially depend on the adopted mass-loss and nucleosynthesis prescriptions. In Fig. 2 a comparison is made between different synthetic AGB models - namely: Renzini & Voli 1981 (RV81), van den Hoek & Groenewegen 1997 (HG97), and this work (MAR99) - on the basis of some key observables. All these constraints are satisfactorily reproduced by both the calibrated HG97 and MAR99 models, whereas the RV81 results are quite discrepant. Specifically, the uncalibrated RV81 model predicts too few faint and too many bright carbon stars, together with a sizeable excess of massive white dwarfs (with $`M>0.7M_{\odot }`$). This can be explained by considering the lower efficiency of both the third dredge-up and mass loss (Reimers’ law with $`\eta =0.333–0.666`$) adopted by RV81.
The effect of different mass-loss prescriptions is also clear from Fig. 3. The most massive AGB stars are expected to suffer a huge number of thermal pulses (hence dredge-up episodes) in the RV81 model ($`\sim 10^4`$), about two orders of magnitude more than in the MAR99 model ($`\sim 10^2`$, left panel). As a consequence, the duration of the TP-AGB phase for these models is affected in the same direction, being much longer in RV81 than in MAR99 (middle panel of Fig. 3).
A final remark concerns the process of hot-bottom burning suffered by the most massive AGB stars ($`M>(3.5–4)M_{\odot }`$). Its treatment determines the temperature at the base of the convective envelope and the related nucleosynthesis, as well as the luminosity evolution of these stars. In this respect, it is worth noticing that the break-down of the $`M_\mathrm{c}`$–$`L`$ relation (first pointed out by Blöcker & Schönberner 1991) is included by MAR99 (see Marigo 1998), whereas in RV81 and HG97 this overluminosity effect is not taken into account (right panel of Fig. 3).
## 4 Yields from single stellar populations
In order to compare stellar yields with different values of $`M_{\mathrm{up}}`$ (i.e. $`8M_{\odot }`$ for the RV81 and HG97 classical models, and $`5M_{\odot }`$ for the MAR99 model with overshooting), we calculate the quantities
$$y_k^{\mathrm{lims}}=\frac{\int _{0.8}^{M_{\mathrm{up}}}mp_k(m)\varphi (m)dm}{\int _{0.8}^{100}m\varphi (m)dm}$$
(1)
where $`p_k(m)`$ is the fractional yield of the element $`k`$ produced by a star of initial mass $`m`$. These quantities express the relative chemical contribution from low- and intermediate-mass stars belonging to a given simple stellar population. They are shown in Fig. 4 as a function of the metallicity for the three sets considered here.
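A small numerical sketch of eq. (1) follows; the Salpeter IMF slope and the toy fractional yield $`p_k(m)`$ below are illustrative assumptions, not the actual inputs of the RV81, HG97 or MAR99 models:

```python
def salpeter_imf(m, alpha=2.35):
    # Unnormalized Salpeter (1955) IMF, phi(m) ~ m^(-alpha).
    # An illustrative assumption, not the IMF of the compared models.
    return m ** (-alpha)

def trapezoid(f, a, b, n=4000):
    # Composite trapezoidal rule, stdlib only
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def population_yield(p_k, m_up=5.0, m_low=0.8, m_max=100.0):
    # Eq. (1): IMF-weighted contribution of low/intermediate-mass stars,
    # normalized to the total mass of the 0.8-100 Msun population
    num = trapezoid(lambda m: m * p_k(m) * salpeter_imf(m), m_low, m_up)
    den = trapezoid(lambda m: m * salpeter_imf(m), m_low, m_max)
    return num / den

# Toy fractional yield rising linearly with initial mass (purely illustrative)
y_toy = population_yield(lambda m: 2e-3 * (m - 0.8))
print(f"toy y_k^lims = {y_toy:.3e}")
```

With $`p_k=1`$ the same routine returns the mass fraction of the population locked in the 0.8–5 $`M_{\odot }`$ range (about 0.58 for a Salpeter IMF), which is a convenient sanity check.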
Differences show up both in the metallicity trends and in the absolute values of $`y_k^{\mathrm{lims}}`$. Compared to previous calculations, the MAR99 yields show a pronounced dependence on the metallicity, i.e. yields increase with decreasing $`Z`$. Conversely, the RV81 and HG97 sets present quite weak trends with $`Z`$.
The metallicity dependence can be explained as follows. On the one hand, AGB lifetimes of low-mass stars increase at decreasing metallicities, as mass-loss rates are expected to be lower. This fact leads to a larger number of dredge-up episodes. Moreover, both the onset and the efficiency of the third dredge-up are favoured at lower metallicities. These factors concur to produce a greater enrichment in carbon. On the other hand, hot-bottom burning in more massive AGB stars becomes more efficient at lower metallicities, leading to a greater enrichment in nitrogen. The combination of all these factors favours higher positive yields of helium at lower $`Z`$.
As far as the individual elemental species are concerned, we note:
* MAR99 yields of <sup>4</sup>He are larger than those by HG97, due to the earlier activation of the third dredge-up and, likely, to a greater efficiency/duration of hot-bottom burning in our models. Predictions by RV81 show no significant trend with $`Z`$, and higher positive yields (due to the quite low mass-loss rates adopted).
* MAR99 yields of <sup>12</sup>C are systematically higher than those of RV81 and HG97 because of the earlier onset (and, on average, greater efficiency than in RV81) of the third dredge-up.
* The dominant contribution to the yields of <sup>14</sup>N comes from hot-bottom burning in the most massive AGB stars. Differences in the results reflect different efficiencies of nuclear reactions and AGB lifetimes. In particular, according to MAR99 the production of <sup>14</sup>N, mainly of primary synthesis, is favoured at lower $`Z`$.
## 1. Introduction
It has long been known that in the solar vicinity there are several kinematic groups of stars that share the same space motions as well-known open clusters. These stellar kinematic groups (SKG), called moving groups (MG) or superclusters (SC), are kinematically coherent groups of stars that share a common origin (the evaporation of an open cluster, the remnants of a star formation region, or a juxtaposition of several small star formation bursts at different epochs in adjacent cells of the velocity field). The youngest and best documented SKG are: the Hyades supercluster (600 Myr, Eggen 1992b), the Ursa Major group (Sirius supercluster) (300 Myr, Eggen 1992a, 1998; Soderblom & Mayor 1993), the Local Association or Pleiades moving group (20 to 150 Myr, Eggen 1992c), the IC 2391 supercluster (35-55 Myr, Eggen 1995), and the Castor moving group (200 Myr, Barrado y Navascués 1998). Since Olin Eggen introduced the concept of moving group and the idea that stars can maintain a kinematic signature over long periods of time, their existence has been rather controversial. However, recent studies (Dehnen 1998; Chereul et al. 1999; Asiain et al. 1999; Skuljan et al. 1999) using astrometric data taken from Hipparcos and different procedures to detect MG not only confirm the existence of the classical young MG, but also detect finer structures in space velocity and age. Well-known members of these moving groups are mainly early-type stars, and few studies have been centered on late-type stars. We have compiled (Montes et al. 1999, 2000) a sample of late-type stars (single and spectroscopic binaries), including previously established members and possible new candidates of these five young SKG.
In order to better understand the origin of these young MG, and to be able to identify late-type stars that are members of the classical and the recently identified MG and substructures, one also needs to study the kinematic properties of nearby young open clusters and star forming regions. With this aim I have calculated the galactic space-velocity components (U, V, W), using the most recent data available in the literature (including astrometric data from the Hipparcos Catalogue), of the nearby young open clusters (Robichon et al. 1999), OB associations (de Zeeuw et al. 1999), T associations, and other associations of young stars such as TW Hya (Webb et al. 1999). The position of these different young structures in the UV and WV diagrams is compared with the position of the classical MG, as well as with the position and associated velocity dispersion of the new substructures recently found by Chereul et al. (1999) and Asiain et al. (1999). Finally, the possible relation of all these young star concentrations with the young flattened and inclined Galactic structure known as the Gould Belt has also been analysed.
## 2. Young Open Clusters and OB associations
Mean astrometric parameters of nearby young open clusters (d$`<`$ 500 pc) have been taken from Robichon et al. (1999), from Perryman (1998) for the Hyades, from Scholz et al. (1999) for IC 348, and from Platais et al. (1998) for a Car. These mean parallaxes and proper motions have been computed using Hipparcos data (ESA, 1997). For other young open clusters with no Hipparcos data, parameters have been taken from different sources (Palouš et al. 1977). Other clusters (such as $`\delta `$ Lyr (Eggen 1968, 1972, 1983) and NGC 1039 (Eggen 1983a)) cited in the literature as probably associated with some SKG have also been included here.
Table 1 (available at http://www.ucm.es/info/Astrof/oclusters_uvw_tab.html) lists all the open clusters included in this study in order of increasing age. I give the name, age (Myr), distance (pc), metallicity, \[Fe/H\], coordinates (FK5 1950.0), and the calculated U, V, W components with their associated errors in km/s. Age, distance, and metallicity have been taken from different sources, as given at the end of the table. Galactic space-velocity components (U, V, W) in a right-handed coordinate system (positive in the directions of the Galactic center, Galactic rotation, and the North Galactic Pole, respectively) and their associated errors have been calculated using the procedures given by Johnson & Soderblom (1987).
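As a concrete illustration of that procedure, here is a minimal, stdlib-only sketch. The transformation matrix and the constant $`k=4.74057`$ are the Johnson & Soderblom (1987) values for B1950 equatorial coordinates; the function signature and input units are my own choices for this example:

```python
import math

# Equatorial (B1950) -> Galactic transformation matrix from
# Johnson & Soderblom (1987). With it, U is positive toward the
# Galactic center, V toward Galactic rotation, W toward the NGP,
# matching the convention quoted in the text.
T = [[-0.06699, -0.87276, -0.48354],
     [ 0.49273, -0.45035,  0.74458],
     [-0.86760, -0.18837,  0.46020]]

K = 4.74057  # km/s equivalent of 1 arcsec/yr at 1 pc

def uvw(ra_deg, dec_deg, plx_arcsec, pm_ra_arcsec, pm_dec_arcsec, rv_kms):
    """Galactic space velocities (U, V, W) in km/s.

    pm_ra_arcsec must be mu_alpha * cos(delta); the units assumed here
    (degrees, arcsec, arcsec/yr, km/s) are choices for this sketch.
    """
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    ca, sa, cd, sd = math.cos(ra), math.sin(ra), math.cos(dec), math.sin(dec)
    # Coordinate matrix built from the star's position
    A = [[ca * cd, -sa, -ca * sd],
         [sa * cd,  ca, -sa * sd],
         [sd,      0.0,       cd]]
    B = [[sum(T[i][k] * A[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    vec = [rv_kms,
           K * pm_ra_arcsec / plx_arcsec,
           K * pm_dec_arcsec / plx_arcsec]
    return tuple(sum(B[i][j] * vec[j] for j in range(3)) for i in range(3))

# Sanity check: a star seen toward the B1950 Galactic center, receding at
# 10 km/s with no proper motion, should have (U, V, W) close to (10, 0, 0)
U, V, W = uvw(265.610, -28.917, 0.1, 0.0, 0.0, 10.0)
print(f"U = {U:+.2f}, V = {V:+.2f}, W = {W:+.2f} km/s")
```

The errors quoted in Table 1 would follow from propagating the observational uncertainties through the same matrices, as in Johnson & Soderblom (1987).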
For nearby OB associations we adopt the space velocities (U, V, W) calculated by de Zeeuw et al. (1999) using mean astrometric parameters from Hipparcos (ESA, 1997).
## 3. Young stellar kinematic groups
The (U, V) velocity components of the five young stellar kinematic groups mentioned above are plotted in Fig. 1 (left panel). All these groups fall inside the boundaries (black dashed line in Fig. 1) that determine the young disk population as defined by Eggen (1984, 1989). The different inner structures found by Chereul et al. (1999, C99 hereafter) and Asiain et al. (1999, A99 hereafter) in these MG, as well as possible new MG identified by these authors, are plotted in figures 1 to 3. The dashed ellipse plotted for each structure represents the associated velocity dispersion. The position of the clusters and OB associations in the UV diagram is plotted in Fig. 1 (left panel). The dotted ellipse plotted around each open cluster position represents the associated errors in U and V. Each cluster and OB association is identified by its name in figures 1 to 3.
### Local Association
Several clusters and associations have been suggested as members of the Local Association (LA): the southern concentrations NGC 2516, IC 2602, Upper Centaurus Lupus (UCL) and Upper Scorpius (US) (Eggen 1983a), and the northern concentrations $`\alpha `$ Per, Pleiades, NGC 1039, and $`\delta `$ Lyr (Eggen 1983b). In Fig. 1 (right panel) we can see that the U, V velocities of all these open clusters are close to the U, V velocities of the LA, except for NGC 1039, whose velocities fall even outside the young disk boundaries. Other clusters that can also be associated with this MG are IC 4665 and a Car. The OB associations UCL and Cas-Tau 1, and the structure associated with Centaurus Lupus by C99, have velocity components close to the LA, but US and Cep OB6 have somewhat different U, V velocities. A99 found four substructures (B1, B2, B3 and B4) of different ages associated with the LA. As can be seen in Fig. 1 (right panel), these structures fall well around the central position of the LA. B1 and B2 are the youngest structures and seem to be related with IC 2602, IC 4665, UCL, Cas-Tau 1 and TW Hya. B3 is associated with the Pleiades cluster, and the oldest structure, B4, could be related with NGC 2516. The two structures (1 and 2) found by C99 also have velocity components close to the LA.
### IC 2391
In Fig. 2 (right panel) it can be seen that the velocity components of the IC 2391 open cluster, as well as those of the substructures associated by C99 and A99 with the IC 2391 supercluster, are above the (U, V) position given for this supercluster. Close in velocity space to the IC 2391 open cluster and these substructures there is (see Fig. 2, right panel) a large concentration of open clusters: NGC 2264, NGC 2232, NGC 2547, IC 348, NGC 2532, Col 140, NGC 1901, Col 121. Other clusters, such as NGC 2422, NGC 2451 and Tr 10 (and the Tr 10 OB association), are not in the mentioned concentration but could be related to the IC 2391 supercluster.
### Castor moving group
Fig. 3 (left panel) is centered on the (U, V) velocities of the Castor moving group. The structure associated with Centaurus Crux by C99 has a very large velocity dispersion and falls in this region together with the OB associations Lower Centaurus Crux (LCC), Cas-Tau 2 and Col 121. The velocity components of the groups c1, c3 and C1 found by A99 are closer to the Cas-Tau 2 association than to the Coma Berenices open cluster (see Fig. 3, left panel).
### Ursa Major group
The open cluster NGC 7091 was suggested by Eggen as possibly related to the Ursa Major group, but as can be seen in Fig. 3 (right panel) its (U, V) components are very far from the center of the group. Another cluster not related to the group is IC 4756. Four substructures (a1, a2, a3, a4) have been found by A99 in this region. a1 and a2 have a lower U component than the Ursa Major group and are close to the OB association Cep OB2. a3 and a4 have a very low U component and are closer (see Fig. 3, right panel) to the new supercluster identified by C99. The substructures 1 and 2 of C99 have (U, V) components very close to the Ursa Major group.
### Hyades supercluster
The Hyades and Praesepe open clusters have velocity components very close to the (U, V) components of the Hyades supercluster (see Fig. 2, left panel). Eggen (1996) considered the NGC 1901 cluster a component of the Hyades supercluster; however, as can be seen in Fig. 2 (left panel), the (U, V) components of the open cluster, and of the substructure associated with NGC 1901 by C99, are very different from the supercluster velocity components. It is at (U, V) velocities intermediate between the supercluster and NGC 1901 that C99 found the substructures Hyades 1, 2 and 3, and A99 found the substructures d1, d2 and D.
## 4. The Gould Belt
The Gould Belt (Gould 1879; Pöppel 1997) is a ring-like, expanding, flattened and inclined Galactic structure outlined by OB associations, molecular gas, star-forming regions and young galactic clusters. Recently, Guillout et al. (1998a, 1998b) discovered a galactic-latitude enhancement of X-ray active stars consistent with the Gould Belt. Detailed analysis of the surface density, distance and X-ray luminosity distributions led these authors to conclude that these stars are distributed in a disk disrupted near the Sun (the Gould disk). Wichmann et al. (1997) have also found that the spatial distribution, perpendicular to the galactic plane, of the ROSAT-discovered WTTS stars near Lupus is centered on the Gould Belt. Many of the young star concentrations analysed here are associated with the Gould Belt, and therefore this galactic structure could also be the origin of the young moving groups (for a detailed analysis see Montes 2000, in preparation).
### Acknowledgments.
This work has been supported by the Universidad Complutense de Madrid and the Spanish Dirección General de Enseñanza Superior e Investigación Científica (DGESIC) under grant PB97-0259.
## References
Asiain R., Figueras F., Torra J., Chen B., 1999, A&A 341, 427
Barrado y Navascués D., 1998, A&A 339, 831
Chereul E., Creze M., Bienayme O., 1999, A&AS 135, 5
de Zeeuw P.T., et al., 1999, AJ 117, 354
Dehnen W., 1998, AJ 115, 2384
Eggen O.J., 1968, ApJ 152, 77; Eggen O.J., 1972, ApJ 173, 63
Eggen O.J., 1983a, MNRAS 204, 391; Eggen O.J., 1983b, MNRAS 204, 377
Eggen O.J., 1984, ApJS 55, 597; Eggen O.J., 1989, PASP 101, 366
Eggen O.J., 1992a, AJ 104, 1493; Eggen O.J., 1992b, AJ 104, 1482; Eggen O.J., 1992c, AJ 103, 1302
Eggen O.J., 1995, AJ 110, 2862; Eggen O.J., 1996, AJ 111, 1615
Eggen O.J., 1998, AJ 116, 782
ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
Gould B.A., 1879, Uranometria Argentina, ed. P.E. Corni, Buenos Aires, p. 354
Guillout P., et al., 1998a, A&A 334, 540; 1998b, A&A 337, 113
Johnson D.R.H., Soderblom D.R., 1987, AJ 93, 864
Montes D., Latorre A., Fernández-Figueroa M.J., 1999, ASP Conf. Ser., ”Stellar clusters and associations”, R. Pallavicini, et al. eds. (in press), (http://www.ucm.es/info/Astrof/ltyskg.html)
Montes D., Fernández-Figueroa M.J., E. De Castro, M. Cornide, A. Latorre, 2000, (these Proceedings)
Palouš J., Ruprecht J., Dluzhnevskaia O.B., Piskunov T., 1977, A&A 61, 27
Perryman M.A.C., et al., 1998, A&A 331, 81
Platais I., et al. 1998, AJ 116, 2423
Pöppel W., 1997, Fundamental Cosmic Phys. Vol 18, 1
Robichon N., Arenou F., Mermilliod J.-C, Turon C., 1999, A&A 345, 471
Scholz R.-D., et al., 1999, A&AS 137, 305
Skuljan J., Hearnshaw J.B., Cottrell P.L., 1999, MNRAS 308, 731
Soderblom D.R., Mayor M., 1993, AJ 105, 226
Webb et al., 1999, ApJ 512, L63
Wichmann et al., 1997, A&A 326, 211 |
# Galaxy population properties in the rich clusters MS0839.8+2938, 1224.7+2007 and 1231.3+1542

Based on observations with the Canada-France-Hawaii Telescope, which is operated by the NRC of Canada, CNRS of France, and the University of Hawaii
## 1 Introduction and data
This paper continues the discussions of galaxy populations of rich clusters of intermediate redshift, based on the CNOC1 project database. The project and basic data have been described by Yee, Ellingson and Carlberg (1996), and similar cluster investigations have been published by Abraham et al (1996) and Morris et al (1998) for the clusters A2390 (z=0.23) and MS1621.5+2640 (z=0.42) (papers 1 and 2). Further discussion of the whole cluster ensemble has been carried out by Balogh et al (1999). We also discuss VLA radio imaging data for the clusters 0839+29 and 1224+20, which were obtained in conjunction with the CNOC1 program. The VLA data were obtained with the C configuration at wavelengths 20cm and 6cm. Some details of the VLA observations are given in paper 1.
The clusters selected for this paper were chosen from the CNOC database because of their sample size, redshift comparison with those of papers 1 and 2, and the radio imaging data. The investigation follows the same lines as papers 1 and 2: discussion of the spatial distribution of spectral properties and colours, particularly with projected distance from the central galaxy, and fitting of models that indicate the stellar content and star-formation history of galaxies in the cluster. We also compare the properties of different clusters.
The data are described by Yee, Ellingson, and Carlberg (1996), and papers 1 and 2 contain full introductory material on the galaxy population studies: the reader is referred to those publications for details. Our treatment of the data is the same as in papers 1 and 2. As in paper 2, we have used the GISSEL96 models for comparison with the measurements from the data. Table 1 summarizes the data and properties of the three clusters.
The D4000 index is measured in the same way as in the other cluster papers (Hamilton 1985). Balogh et al (1999) use a narrower definition in their measure, which may be less prone to systematic effects of redshift, passband, and reddening. We have compared measures made with the ‘standard’ passbands and two reduced passbands of Balogh (private communication). For the clusters in this paper, the mean ratio of narrow to standard passband measures ranges from 1.00 to 1.06 for an intermediate bandpass, and from 0.85 to 0.90 for a very narrow one. We do not think this indicates any significant systematic effect in retaining the original Hamilton passbands, which have the advantage of enabling direct comparison with the two other CNOC cluster papers. As in the other papers, the same measures are made on the model spectra as on the observed ones.
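For reference, a Hamilton-style 4000 Å break index is the ratio of the mean flux density redward of the break to that blueward of it. A minimal sketch, assuming the spectrum is tabulated as $`F_\nu `$ on a rest-frame wavelength grid (the 3750–3950 / 4050–4250 Å windows are the standard Hamilton passbands; a narrower definition simply substitutes narrower windows in the same way):

```python
def d4000(wave, fnu, blue=(3750.0, 3950.0), red=(4050.0, 4250.0)):
    """4000 A break index: mean F_nu in the red passband divided by
    mean F_nu in the blue passband.  `wave` in Angstrom (rest frame),
    `fnu` in arbitrary F_nu units, both plain sequences."""
    def mean_in(band):
        vals = [f for w, f in zip(wave, fnu) if band[0] <= w <= band[1]]
        return sum(vals) / len(vals)
    return mean_in(red) / mean_in(blue)

# Sanity checks: a flat-F_nu spectrum gives D4000 = 1; a spectrum whose
# red side is twice the blue side gives D4000 = 2.
wave = [3700.0 + 10.0 * i for i in range(60)]
flat = [1.0] * len(wave)
broken = [2.0 if w >= 4000.0 else 1.0 for w in wave]
print(d4000(wave, flat))    # 1.0
print(d4000(wave, broken))  # 2.0
```

In practice the same routine would be run on both the observed and the model spectra, as described in the text.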
We did look for systematic differences with Balogh et al’s results by counting galaxies in the boxes they define in the D4000/H$`\delta `$ plane. Balogh et al count galaxies in these boxes in order to compare their results with those of Barger et al (1996). The boundaries chosen pass through the centre of the dense distribution of galaxies, so the numbers are very sensitive to the exact placement of the boundaries. Using the Hamilton D4000 measure, we can closely match the counts of Balogh et al with the D4000 boundary moved from 1.45 to $`\sim `$1.65, which is also close to the variation introduced by the bandpass choice, as noted above. In view of the sensitivity of the number counts to the chosen boundaries, and the relatively small numbers of galaxies in our clusters, we do not think such number counts are particularly useful for this paper. With the above D4000 normalisation, they are, however, the same as those of Balogh et al for the whole CNOC cluster sample, within the Poisson scatter expected for the number counts in the individual clusters in this paper.
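The sensitivity of such counts to boundary placement can be checked directly by recounting with a shifted D4000 cut. A hedged sketch with invented data points (the D4000 boundary values 1.45 and 1.65 are from the text; the H$`\delta `$ cut of 5.0 and the sample points are purely illustrative):

```python
def count_boxes(points, d4000_cut, hdelta_cut):
    """Count galaxies in the four quadrants of the D4000/H-delta plane
    defined by the two cut values.  `points` is a list of
    (d4000, hdelta) pairs; returns a dict of quadrant counts."""
    boxes = {"old_quiet": 0, "old_active": 0,
             "young_quiet": 0, "young_active": 0}
    for d, h in points:
        age = "old" if d > d4000_cut else "young"
        act = "active" if h > hdelta_cut else "quiet"
        boxes[age + "_" + act] += 1
    return boxes

pts = [(1.5, 2.0), (1.6, 6.0), (1.8, 1.0), (1.3, 7.0)]
print(count_boxes(pts, 1.45, 5.0))
print(count_boxes(pts, 1.65, 5.0))  # same points, redistributed by the cut
```

With points clustered near the boundary, a 0.2 shift in the cut moves galaxies between boxes, which is exactly the sensitivity discussed above.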
The H$`\delta `$ measures have also been the subject of discussion. Abraham et al (1996) used a combination of two different measures intended to account for the changing contamination of the continuum bands by different stellar populations. Morris et al (1998) noted that since the same measures are made on model and observed spectra, and the conclusions are unaffected, only one of the H$`\delta `$ measures was used. However, a correction of 1.7 Å was applied to account for the different resolution of the model and observed spectra. Most recently, Balogh et al have claimed that the sampling has little effect on the H$`\delta `$ measure, and omitted this correction. Rather than add to this debate, in this paper we have followed Morris et al, to enable direct comparison with their results.
Balogh et al (1999) have found that the formal uncertainties in some line indices underestimate the true scatter of repeated measurements. For comparison with the cluster papers 1 and 2, we have not applied any correction factors to the errors we display. The mean scale factors for these error bars derived by Balogh are 2.0, 1.4, and 1.4 for D4000, H$`\delta `$, and \[O II\], respectively.
For each cluster we discuss membership and define a sample of field galaxies from the same dataset. There are a few galaxies which lie at the edge of the clusters and may not (yet) be bound members; these are designated near-field galaxies, as in paper 2. We use the same definition of these as Balogh et al, although they stand out clearly to the eye in the redshift-radius plots. Galaxy colours from the direct images and spectral indices (D4000, H$`\delta `$, and \[O II\]) from the spectra are plotted against clustercentric distance. The galaxies are grouped in radius bins and the measured quantities of each subset compared with the same measures on stellar population models for galaxies. We present mean properties of various subsets of the galaxies, and mean spectra for several of these subsets. We also show the spatial distribution of some subsets of the galaxy populations.
We discuss the inferred stellar populations and galaxy evolution for the clusters and compare the results for all the CNOC clusters thus analysed.
## 2 0839+29
This cluster (along with 1224+20) has been discussed as a cooling flow cluster on the basis of extended H$`\alpha `$ emission, by Donahue, Stocke, and Gioia (1992), and from X-ray imaging by Nesci, Perola, and Wolter (1995).
The membership of the cluster from the sample spectra is illustrated in figure 5. Figure 5 shows the colour gradient with magnitude and radius, and from this we adopt the red sequence of members at g-R$`>`$0.75. The blue and red galaxies fall within the same cusp envelope in Figure 5 (which seems to be poorly sampled at radii above $`\sim `$400”), so the colour cut is not particularly significant. We have labelled three galaxies that lie outside a symmetrical cusp envelope but very close to the cluster as ‘near-field’. These are all blue, as was found for similar galaxies in 1621+26.
The cluster is relatively compact within the large size of the fields sampled (see Table 1). Only two galaxies with the cluster redshift are seen further than 463” from the central galaxy, compared with 1400” for A2390, at a similar redshift, and 700” for the higher redshift 1621+26 in papers 1 and 2. Carlberg, Yee, and Ellingson (1997) compare the dynamical parameters of the CNOC clusters, and 0839+29 has the lowest virial radius of all, except for the binary cluster 0906+11.
The spatial distributions of some subsets of the galaxies are shown in Figure 5. The field galaxies used for comparison lie in the redshift range 0.17 to 0.22. There are 25 of these and their redshift range is small enough that no colour corrections are needed for model comparisions. In any case, in this paper we do not show any color models.
As in papers 1 and 2, we choose the colour, D4000, H$`\delta `$, and \[O II\] spectral measures for discussion and modelling of the stellar populations. Figure 5 shows these plotted with (log) radius, and compared with the non-member sample. The measurements and error estimates were made using the same algorithm as in papers 1 and 2. Table 2 shows mean properties of various subsets of the galaxies. This table also includes indices of other lines that are sensitive to stellar population, as well as mean values for extreme \[O II\] and H$`\delta `$ as defined. Such quantities are shown for all three clusters as different ways of summarizing the data.
There is a group of 15 galaxies with redshift near 0.217 that appear to be clustered. They are located close to each other in the sky, centred about 100” from the centre of 0839+29. Table 2 also shows their properties: they seem to have active star-formation and a young stellar population. There is a group of 4 galaxies that lie at an intermediate redshift between the main and this small cluster, that has even younger populations. These appear as a group in the sky, but some 200” to the other side of the main cluster centre. Thus, they are not obviously involved with either cluster, but may be a small group on their own.
In Figure 5 we have plotted the measurements with log radius: the cD galaxy is arbitrarily placed at 0.41, since it lies at zero radius. The plot shows an apparent gradient of D4000 with radius in the cluster, particularly beyond 250”. In all the data the \[O II\] emission line is not seen below redshift $`\sim `$0.2 because of the spectral bandpass, but other field samples and those above z$`\sim `$0.2 show no trend across the cluster redshifts that might affect our comparison with cluster members. The suggested smaller cluster at z=0.217 does stand out in its \[O II\] behaviour. The colour plot shows little change in the red sequence, and there are a few H$`\delta `$-strong galaxies seen in the central part of the cluster. The majority of the cluster population seems to have evolved passively for some time, and there is little accretion happening. The three near-field galaxies do stand out as younger and forming stars. One of them has a Seyfert spectrum, whose H$`\delta `$ emission is off-scale in the plot. We note that the spectral passband prevents \[O II\] measures in the lower redshift field galaxies.
Figure 5 shows mean spectra of the major subgroups. These are derived from the galaxies with best signal in each group, but we have checked that their mean measured properties are still representative of the whole subgroup. These may be compared with similar mean spectra from papers 1 and 2, and the other clusters in this paper, but have no remarkable properties in that context.
We have divided the members into three radial distance groups of roughly equal population, with boundaries close to radii where the measured quantities change. Figure 5 shows plots of measured quantities for these subsets, and for the field sample. The diagrams also show models computed from GISSEL96, as in paper 2. The upper curve is for a 1 Gyr starburst followed by passive evolution, and the lower curve is for exponentially decreasing star-formation with a timescale of 4 Gyr. Both models are for solar abundance, and the H$`\delta `$ index is corrected for spectral resolution as in paper 2. Paper 2 also discusses in more detail the effects of different abundances and initial mass function.
These plots show there are several blue galaxies with fairly strong H$`\delta `$ in the central region (or projected on to it, as their high velocity dispersion suggests). This is unusual in the central parts of these rich clusters. There are also a few galaxies with ongoing star-formation in the outer regions. As noted from the other plots above, there is an old population seen at all radii among the members.
In Figure 5 and Figure 5 we show the variation of D4000 and H$`\delta `$ with age together with the histograms of these quantities in all three clusters. At the redshift of 0839+29 we expect the oldest populations to be some 12 Gyr old. The bulk of the cluster population seems to be significantly younger than this, and from figure 5 we see little radial gradient. The outer cluster has no old populations. The field galaxy population is similar to the outer cluster members. The inner cluster has several galaxies with higher H$`\delta `$ index than the field or the models, as seen in 1621+26. This may result from a truncated IMF in the inner cluster galaxy populations, as noted in paper 2.
We find 4 galaxy coincidences with radio sources in the VLA images; these identifications are summarized in Table 3. The cD galaxy is the strongest source. The second radio source is the Seyfert galaxy, a disk galaxy with a bright nucleus. The next source is a red galaxy within the central cluster, with a fairly young stellar population; it is asymmetrical and possibly interacting. The fourth galaxy has no spectrum but is clearly interacting and is probably a foreground galaxy, judging by its size and brightness.
In order to investigate the galaxy environments of cluster members, we created a table of the distance and magnitude of galaxies within 100 pixels ($`\sim `$33 arcsec) of each, from the imaging catalogue of the field, which extends to magnitude 24 and fainter. Plots from these numbers do not reveal any connection between spectra and apparent small groups. The most clustered members do not have unusual spectral measures or active star-formation.
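The companion search described here amounts to counting catalogue sources within a fixed projected radius of each member. A brute-force sketch (the 100-pixel radius is from the text; the toy positions are invented, and a KD-tree would be used for a large catalogue):

```python
def companions(members, catalogue, radius):
    """For each (x, y) member position, count catalogue sources within
    `radius` (same pixel units), excluding the member itself
    (zero-distance matches are skipped)."""
    counts = []
    for mx, my in members:
        n = sum(1 for cx, cy in catalogue
                if 0.0 < (cx - mx) ** 2 + (cy - my) ** 2 <= radius ** 2)
        counts.append(n)
    return counts

members = [(500.0, 500.0)]
catalogue = [(500.0, 500.0), (530.0, 500.0), (500.0, 590.0), (900.0, 900.0)]
print(companions(members, catalogue, 100.0))  # [2]
```

The distance-magnitude table in the text is a straightforward extension: store `(distance, magnitude)` pairs instead of a bare count.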
## 3 1224+20
The data for this cluster are published by Abraham et al (1998). This cluster has been discussed as an example of velocity bias by Carlberg (1994), and Fahlman et al (1994) also find a weak lensing mass that exceeds the virial mass by a factor of more than 2. Lewis et al (1999) report an X-ray mass estimate which is consistent with the dynamical one. Carlberg, Yee and Ellingson (1997) show from the CNOC database that the cluster radius is lower than average for the group, and the velocity dispersion close to average at 798 km s<sup>-1</sup>. In addition to the CNOC database, we also have VLA maps at 20cm and 6cm of the cluster.
The CNOC spectroscopic database is relatively small for this cluster, with 75 spectra, of which 30 are considered members. Among the sample there is a very clear redshift separation of members and field galaxies, but the sample does not show the usual central velocity spread among the members. The colour plots in Figure 5 suggest a red sequence colour cut-off at g-R=0.95; however, with this cut, the blue member galaxies do not show the usual larger scatter than the red members in the redshift-radius distribution (see Figure 5). It may be the lack of central velocity spread among the CNOC members that causes the virial mass discrepancies noted above. The densest grouping of cluster members occurs some 80 arcsec away from the cD galaxy nominally regarded as the cluster centre. This dense group does not have a distinct redshift, and the low velocity outliers do not form a spatial group, so there is no obvious simple substructure. Figure 5 shows the distribution of galaxy subsets in the sky.
The radial gradients and field comparisons of the measured quantities are shown in Figure 5. The cluster members generally have considerably more evolved stellar populations than the field. Some of the blue members do have younger populations, but do not show much star-formation. The values of D4000, however, are unusually low for an evolved population, with an average for red members of 1.7, compared with values well over 2 for other CNOC clusters (see figure 5). The measures were made with the same code as all other spectra, and are small with any definition of the D4000 index and redshift, so this suggests that the cluster may have formed fairly recently, and that star-formation turned off not very long ago.
The mean spectra of representative subsets are shown in Figure 5. The low D4000 break is also seen here. Table 4 gives the mean measured values. We looked at the off-centre compact group (‘A’) separately, but they do not show any interesting difference that suggests they are a subgroup, as also noted above.
Figure 5 shows model measures and the data in two radial bins, and the field. There is little radial gradient in D4000 which is also consistent with recent cluster-wide evolution in this quantity shortly after star-formation ceases. The field galaxies in this region have very blue colours, presumably indicating young stellar populations. The member and field galaxy colours are consistent with this suggestion. There is little or no colour gradient in the red sequence within the cluster.
In view of the virial peculiarities noted for the cluster, the low velocity dispersion about the cD galaxy, and the dense grouping of members off the centre, we investigated the consequences of adopting this dense region as the cluster centre. The redshift-radius relationship does look more normal: the velocity spread is highest at this centre, although still low. This centre also shifts the two low-D4000 (young population) galaxies out of the central 100” radius bin, which is also more normal. The second brightest member galaxy in our sample (18.6, compared with 18.0 for the cD) lies near this new centre, at +91, -47 in arcsec from the cD. However, its redshift of 0.3198 is far from the cluster mean, and it has an old population and red colour. The dynamical situation of this cluster needs more detailed investigation.
Among the radio sources in the field, only four correspond with galaxies in the image. The brightest two sources (15 - 20 mJy at 20 cm) are large foreground galaxies for which we have no spectrum, or hence redshift. The next source has flux 2.6 mJy at 20 cm, and is a 16mag foreground galaxy at redshift 0.05. The last source identified has 20cm flux 0.4 mJy, and is measured at 3.8” from the cD galaxy, which we adopt as the identification. Thus, there is only one weak cluster source which is the cD.
Searching the photometric catalogue for faint companions, we find the crowded region at radius 65” - 85” also stands out down to 24th magnitude. In addition, there are few faint galaxies inside this radius. This suggests that the region around the cD galaxy (if it is the cluster central region) has been cleared of faint galaxies or has enough extinction to hide background galaxies.
## 4 1231+15
This cluster has a larger radius and lower velocity dispersion than the other two. The sample includes three masks in the central field, and one each to the north and south, so that the central region is well sampled. The red sequence cutoff is easily defined at g-R=0.7, as there is a wide gap with no galaxies of colours between 0.62 and 0.76 (Figure 5). The red sequence colour is normal for its redshift among CNOC clusters, and has a clear gradient with radius and magnitude. The redshift-radius plot (Figure 5) shows a well virialised distribution of galaxies, both red and blue, with three near-field galaxies (in the radius range 250-300 arcsec), which are red. These do not form a close physical group, in spite of their very similar redshifts. The cD galaxy has a velocity some 250 km s<sup>-1</sup> lower than the cluster mean. The galaxy distribution on the sky is asymmetrical about the cD (Figure 5), suggesting that the cluster may have formed by the merger of two subclusters. We investigated this by splitting the galaxies into two groups, as shown in the diagram. The redshift-radius plots are very similar for the two, but group 1 has younger population indicators. Table 4 shows differences in this sense in all the mean indices, each at about the 1$`\sigma `$ level.
The plots of measured properties with (log) radius are in Figure 5. The cluster is characterised by high values of D4000 (along with other line measures indicating old or high abundance populations). This indicator is higher for group 1 than group 2. The radial gradient of spectral features seen in Figure 5 is unusual in showing undulations, and this too may be explained by a different population (and hence history) of galaxies in the two main groups. Figure 5 shows mean spectra of representative subsets.
Figures 5, 5, and 5 show the comparison with models. This cluster has a significant number of high-H$`\delta `$ red galaxies, and high D4000 values, particularly in the mid-range of radius. This could arise from a higher abundance in the stellar populations, possibly because the cluster formed from galaxies that had long periods of star formation before the cluster assembled. There are similar galaxies in the field, as well as some (still the majority) that have ongoing star-formation.
Looking for faint companions to cluster galaxies, we find that many of the 82 members have several galaxies within 5 arcsec, and 13 of them look like compact groups in the sky. Most of these are unremarkable red galaxies, but there are three with blue colours and/or active star-formation. Of those, the bluest is in a populous group of 10, while the other two have 3-4 very close companions. In these cases at least it seems possible that the stellar populations are affected by local interactions within the groups. This cluster has a higher density of faint galaxies than the other two: the average numbers of galaxies to the catalogue limits are 45 for 1231+15, 20 for 1224+20, and 15 for 0839+29. Comparing the magnitude distributions, the 0839+29 catalogue is less complete to 24 mag, while that of 1231+15 is the most complete. The distributions of slit masks across the three clusters are similar (Table 1), though less centrally concentrated in 0839+29. However, the groups around the star-forming member galaxies in 1231+15 are not at the faint limit and would be visible in either of the other two catalogues.
## 5 Comparison and discussion
We have presented the same plots and model comparisons for the three clusters. We have not done this in the same detail as in papers 1 and 2, focussing principally on H$`\delta `$ and D4000 in the model discussions. Other quantities are shown in the data plots and tables.
The colours of the red sequences for the clusters and those of papers 1 and 2 (A2390 and 1621+26) follow a progression with redshift, except for 0839+29 which is redder by some 0.15 in g-R. This may indicate the presence of some dust in this cluster, since the line indices do not correspond with high abundance or age. This cluster may contain populations of different ages or abundance, as the D4000 distribution appears bimodal (see figure 5). This is seen in 1621+26, but in 1621 the main population is old/high abundance while in 0839+29 it is young/low abundance.
The cluster 1224+20 has remarkably low D4000, suggesting that its stellar population is either of low abundance or that the cluster itself has formed, and hence stopped star-formation, only recently. As noted, the discussions of velocity suggest that the cluster may also be in an early stage of its dynamical evolution. The highest concentration of member galaxies and the largest velocity spread are found some distance from the cD galaxy. The lack of star-forming members and strong Balmer absorbers also suggests that there is little current infall, since infall is regarded as truncating or restarting star-formation. The lack of neighbouring redshifts among the field sample is also consistent with this scenario.
The cluster 1231+15 has a wide spread of age/abundance, and a complex gradient of D4000 with radius. As we have discussed, there are suggestions of a recent merging of major subclusters to form this cluster. The relatively high fraction of star-forming and H$`\delta `$ strong galaxies in this cluster may also result from this.
We are grateful to David Schade and Simon Morris for help with the CNOC database and with the GISSEL models.
## References
Abraham R.G. et al 1996, ApJ, 471, 694 (paper 1)
Abraham R.G., Yee H.K.C., Ellingson E., Carlberg R.G., Gravel P., 1998, ApJS, 116, 231
Balogh M.L., Morris S.L., Yee H.K.C., Carlberg R.G., Ellingson E., ApJ (in press: astro-ph 9906470)
Barger A.J. et al, 1998, ApJ, 501, 5223.
Carlberg R.G., Yee H.K.C., and Ellingson E., 1997, ApJ, 478, 462
Carlberg R.G., 1994, ApJ, 434, L51
Donahue M., Stocke J.T., Gioia I.M., 1992, ApJ, 385, 49
Fahlman G., Kaiser N., Squires G., and Woods D., 1994, ApJ, 437, 56
Hamilton D., 1985, ApJ, 297, 371
Lewis A.D., Ellingson E., Morris S.L., Carlberg R.G., 1999, ApJ in press (astro-ph 9901062)
Morris S.L., et al, 1998, ApJ, 507, 84 (paper 2)
Nesci R., Perola G.C., Wolter A., 1995, A&A, 299, 34
Yee H.K.C., Ellingson E., and Carlberg R.G., 1996, ApJS, 102, 289
Captions to Figures |
# Black Hole Emergence in Supernovae
## Introduction
Theory suggests that the compact object formed in a core-collapse supernova can be either a neutron star or a black hole, depending on the character of the progenitor and the details of the explosion Fryer-here . The presence of several radio pulsars in sites of known supernovae provides substantial observational evidence that neutron stars are indeed created in supernovae, but similar evidence for a black hole - supernova connection is still mostly unavailable (see Israelian99 for recent indirect evidence).
A newly formed black hole in a supernova can be identified directly if it imposes an observable effect on the continuous emission of light that follows the explosion - the light curve. In particular, if some material from the bottom of the expanding envelope remains gravitationally bound to the black hole, it will gradually fall back onto it, generating an accretion luminosity ZCSW98 . The black hole can be said to “emerge” in the supernova light curve if and when this luminosity becomes comparable to the other sources that power the light curve.
## Black Hole Emergence in the Light Curve
Since the material which remains bound to the black hole following a supernova is outflowing in an overall expansion, the accretion rate must decrease in time. The expansion will also cause pressure forces to become unimportant eventually, and the accretion will then proceed as dust-like, following a power-law decline in time according to $`\dot{M}\propto t^{5/3}`$ CSW96 . As shown in ZCSW98 , the accretion flow and the radiation field proceed as a sequence of quasi-steady states, and the accretion luminosity can therefore be estimated according to the formula of Blondin Blondin86 for stationary, spherical, hypercritical accretion onto a black hole ($`L\propto \dot{M}^{5/6}`$). The accretion luminosity then takes the form ZCSW98 ; ZSC98 ; BSZ99 :
$$L_{acc}(t)=L_{acc,0}\,t^{25/18},$$
(1)
where $`L_{acc,0}`$ depends on the kinetic energy, density and composition of the accreting material at the onset of dust-like flow.
Heating by decays of radioactive elements synthesized in the explosion may provide a significant source of luminosity in the late-time light curve. The time dependence of radioactive heating rate for an isotope $`X`$ may be estimated as Woosleyal89
$$Q_X(t)=M_X\epsilon _Xf_{X,\gamma }(t)\text{e}^{t/\tau _X},$$
(2)
where $`M_X`$ is the total mass of the isotope $`X`$ in the envelope, $`\tau _X`$ is the isotope’s life time, and $`\epsilon _X`$ is the initial energy generation rate per unit mass. The factor $`f_{X,\gamma }(t)`$ reflects that not all $`\gamma `$-rays emitted in the decays are efficiently trapped in the envelope (and so do not contribute to the UVOIR luminosity).
Since the accretion luminosity decreases as a power law in time while the radioactive heating declines exponentially, the accretion luminosity must eventually become the dominant source in the light curve, assuming that spherical accretion persists. Furthermore, the non-exponential character of the accretion luminosity should be readily distinguishable in observations, announcing that the black hole has “emerged” in the light curve.
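The emergence time is the root of $`L_{acc}(t)Q(t)=0`$, where the power-law index $`25/18`$ follows from combining $`L\propto \dot{M}^{5/6}`$ with $`\dot{M}\propto t^{5/3}`$ (since $`5/6\times 5/3=25/18`$). A sketch of solving for this crossing by bisection, with purely illustrative normalisations (the numbers below are not taken from the models cited here):

```python
import math

def emergence_time(L0, t0, Q0, tau, t_lo, t_hi):
    """Bisect for the time (in the same units as t0 and tau) at which
    the accretion luminosity L0*(t/t0)**(-25/18) equals the radioactive
    heating Q0*exp(-t/tau).  Assumes accretion is below heating at t_lo
    and above it at t_hi."""
    def diff(t):
        return L0 * (t / t0) ** (-25.0 / 18.0) - Q0 * math.exp(-t / tau)
    for _ in range(200):
        mid = 0.5 * (t_lo + t_hi)
        if diff(mid) > 0.0:   # accretion already dominates: crossing is earlier
            t_hi = mid
        else:
            t_lo = mid
    return 0.5 * (t_lo + t_hi)

# Illustrative numbers: heating initially 1000x brighter than accretion,
# fading with an ~87 yr mean life (44Ti-like); accretion normalised at 1 yr.
t_cross = emergence_time(L0=1.0, t0=1.0, Q0=1000.0, tau=87.0,
                         t_lo=1.0, t_hi=5000.0)
print(round(t_cross, 1))
```

Because the exponential always falls below any power law at late times, a crossing is guaranteed; the bisection only needs a bracket wide enough to contain it.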
## Realistic Supernovae
The typical amount of radioactive elements observed in type II supernovae suggests that an observation of black hole emergence in the light curve will usually be impractical. For example, luminosity due to accretion onto a hypothetical black hole in SN1987A would become comparable to the heating rate due to positron emission in <sup>44</sup>Ti decays only $`900`$ years after the explosion. At this time the luminosity will have dropped to only $`10^{32}\text{ergs s}^{-1}`$ ZCSW98 .
An important exception is expected in the case of higher mass progenitors, $`M_{\ast }=25`$–$`40M_{\mathrm{\odot }}`$. Explosions of such stars are likely to involve significant early fallback even while the explosion is still proceeding. The survey of Woosley and Weaver WoosWeav95 suggests that, in general, larger mass stars leave behind larger remnants and expel a smaller amount of radioactive isotopes (since these are synthesized in the deepest layers of the envelope, and a significant fraction is advected back onto the collapsed core). Clearly, for such an explosion, there is likely to be a larger reservoir of bound material for late-time accretion, so that, combined with the low background of radioactive isotopes, an actual detection of black hole emergence may become feasible. We have recently conducted a numerical investigation of the expected emergence of a black hole in such supernovae BSZ99 . This investigation was carried out with the spherical, fully relativistic radiation-hydrodynamics code described in ZCSW98 , modified to include a variable chemical composition with a detailed photon opacity table, and to account for radioactive heating.
### Black Hole Emergence in “Radioactive-Free” Supernovae
The most favorable case for identifying black hole emergence in a supernova would be a low-to-medium energy ($`1.3\times 10^{51}`$ ergs) explosion of a progenitor star with a mass of $`35`$–$`40M_{\mathrm{\odot }}`$, where the ejected envelope is expected to be practically free of radioactive isotopes WoosWeav95 . For such supernovae, the black hole should emerge within a few tens of days after the explosion. As an example, we show in Fig. 2 the calculated light curve of such an explosion, based on the theoretical model S35A of WoosWeav95 ($`M_{\ast }=35M_{\mathrm{\odot }}`$, $`M_{BH}=7.5M_{\mathrm{\odot }}`$). The luminosity at emergence is $`10^{37}\text{ergs s}^{-1}`$, after which the light curve clearly follows a power-law decline in time. If such a supernova were observed, it would offer an explicit opportunity to confirm the presence of a newly formed black hole ZCSW98 .
### SN1997D
While such an ideal candidate is not available at present, SN1997D may provide a marginally observable case for identifying the emergence of a black hole. Discovered on January 14, 1997 in the galaxy NGC 1536, SN1997D is the most sub-luminous type II supernova ever recorded. Through an analysis of the light curve and spectra, Turatto et al. Turatto98 suggested that the supernova was a low energy explosion, $`4\times 10^{50}`$ ergs, of a $`26M_{\mathrm{\odot }}`$ star. The observed late-time light curve (up to 416 days after the explosion) is consistent with only $`0.002M_{\mathrm{\odot }}`$ of <sup>56</sup>Co in the ejected envelope, much lower than the $`0.1M_{\mathrm{\odot }}`$ typical of most type II supernovae ($`0.075M_{\mathrm{\odot }}`$ in SN1987A). In a preliminary investigation, Zampieri et al. ZSC98 pointed out that the $`3M_{\mathrm{\odot }}`$ black hole (predicted for this model) may emerge in SN1997D as early as $`3`$ years after the explosion, with an accretion luminosity ranging from $`10^{35}`$ to as much as $`5\times 10^{36}\text{ergs s}^{-1}`$.
Our calculated light curve for SN1997D based on the best-fit post-explosion model of Turatto98 is shown in Fig. 2. The earlier part of the light curve is in good agreement with the observed data, while at a later time we find that the black hole emerges about 1050 days after the explosion, which corresponds to late 1999 – now! Figure 2 compares the heating due to the isotopes <sup>56</sup>Co, <sup>57</sup>Co and <sup>44</sup>Ti to the accretion luminosity. Note that radioactive heating (especially <sup>44</sup>Ti) is never negligible with respect to the accretion luminosity, so the total luminosity does not fall off as an exact power law. Nonetheless, the presence of the black hole could still be inferred by attempting to decompose the total light curve.
In this calculation, the total luminosity at emergence is about $`7\times 10^{35}\text{ergs s}^{-1}`$. However, this luminosity is dependent on the finer details of the initial profile, which are difficult to constrain from the early light curve. Considering these uncertainties and those regarding the abundances of <sup>57</sup>Co and <sup>44</sup>Ti, we find through revised analytical estimates (which account for $`\gamma `$-ray transparency and Eddington-rate limits on the accretion flow) that, while the time of emergence is fairly well determined, the plausible range for the luminosity at emergence is $`0.5`$–$`2\times 10^{36}\text{ergs s}^{-1}`$. One example, where the luminosity at emergence is $`1.4\times 10^{36}\text{ergs s}^{-1}`$, is also shown in Fig. 2. In this case, the accretion luminosity is sufficiently high that the contribution of radioactive heating does not cause any significant deviation from a power-law decay. We estimate that such a luminosity is still marginally detectable ($`m_v\simeq 29`$) with the HST STIS camera, so that, if observed, SN1997D could provide the first direct observational evidence of black hole formation in a supernova within the next year.
# Magnetotransport in the low carrier density ferromagnet EuB<sub>6</sub>
## Abstract
We present a magnetotransport study of the low–carrier density ferromagnet EuB<sub>6</sub>. This semimetallic compound, which undergoes two ferromagnetic transitions at $`T_l`$$`=`$ 15.3 K and $`T_c`$$`=`$ 12.5 K, exhibits close to $`T_l`$ a colossal magnetoresistivity (CMR). We quantitatively compare our data to recent theoretical work , which however fails to explain our observations. We attribute this disagreement with theory to the unique type of magnetic polaron formation in EuB<sub>6</sub>.
Recently, critical fluctuations have been proposed by Majumdar and Littlewood as the mechanism causing the colossal magnetoresistivity (CMR) in a number of non–manganite materials, such as the pyrochlores or chalcogenide spinels . The authors of Ref. argued that, because in a ferromagnetic metal close to its critical point the dominant magnetic fluctuations are those with a wave vector $`q`$$``$ 0, the contributions from these fluctuations to the resistivity $`\rho `$ should grow as the Fermi wave vector $`k_f`$ and the carrier density $`n`$ decrease. For sufficiently small $`n`$, like in ferromagnetic semimetals, a major part of the zero–field resistivity close to $`T_c`$ would be caused by magnetic fluctuations. Suppressing these in magnetic fields should generate the CMR in such materials.
The resistivity of a low–carrier density system might also be affected by magnetic polarons. But magnetic polarons disappear (i.e. delocalize) if the magnetically correlated regions overlap, implying that critical magnetic scattering should dominate the resistivity for $`k_f\xi (T)\gtrsim 1`$ ($`\xi (T)`$: magnetic correlation length). In this regime of dominant critical scattering and in the clean limit, i.e. $`k_f\lambda \gg 1`$, the low–field magnetoresistivity is quantitatively predicted to be $`(\rho (T,B)-\rho _0)/\rho _0=\mathrm{\Delta }\rho /\rho =C(M/M_{sat})^2`$, with $`C\propto (k_f\xi _0)^{-2}`$ ($`\lambda `$: mean free path; $`M`$: magnetization; $`\xi _0`$: magnetic lattice spacing). Then, for a free electron gas, a relationship between the magnetoresistive coefficient and the carrier density, $`C\propto (n^{2/3}\xi _0^2)^{-1}`$, emerges as the central result of Ref. , with a proportionality constant of $`1/38`$.
To test this prediction we performed a detailed study of the magnetoresistive properties of the divalent cubic hexaboride EuB<sub>6</sub> . This semimetal, with a carrier density of 8.8$`\times `$10<sup>-3</sup> electrons per unit cell determined from quantum oscillation experiments, undergoes two ferromagnetic transitions at $`T_c`$ = 12.5 K and $`T_l`$ = 15.3 K , derived here from the maxima in the temperature derivative of the resistivity, $`d\rho /dT`$. The effective carrier masses are slightly smaller than the free electron mass, and the Fermi surface is almost spherical. Hence, a free electron model appropriately describes this compound. The zero–field resistivity is metallic, and in the free electron approximation we find $`k_f\lambda \gg 1`$ up to room temperature . Further, with $`\xi _0`$$`=`$$`\xi _{300\mathrm{K}}`$$`=`$ 4.185 Å, in the Ornstein–Zernike approximation, $`\chi (0)\propto \xi ^2(T)`$, and with the experimentally determined dc–susceptibility $`\chi _0`$ from Ref. , the condition $`k_f\xi (T)\gtrsim 1`$ is fulfilled below 17 K. Hence, EuB<sub>6</sub> fulfills all requirements of the model of Ref. .
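The numbers in this paragraph can be cross-checked directly: the per-cell density and the lattice constant $`\xi _0`$ fix the carrier density, and, under the free-electron relation $`k_f=(3\pi ^2n)^{1/3}`$ (an assumption of this sketch), the Fermi wave vector and the predicted coefficient $`C`$ follow.

```python
import math

n_cell = 8.8e-3            # electrons per unit cell (quoted above)
xi0 = 4.185                # lattice constant in Angstrom (quoted above)
n = n_cell / xi0 ** 3      # carrier density in Angstrom^-3

# Free-electron relation (an assumption of this sketch):
k_f = (3.0 * math.pi ** 2 * n) ** (1.0 / 3.0)    # Angstrom^-1

# Predicted magnetoresistive coefficient, C ~ (38 n^(2/3) xi0^2)^(-1):
C_pred = 1.0 / (38.0 * n ** (2.0 / 3.0) * xi0 ** 2)
C_obs = 75.0

print(n)               # ~1.2e-4, as quoted in the text
print(C_pred)          # ~0.62
print(C_obs / C_pred)  # a discrepancy of more than a factor of 100
```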
Here, we present resistivity and magnetoresistivity measurements employing a standard 4–probe ac–technique on the crystal studied in Ref. , with the current and the magnetic field applied along principal axes of the crystalline unit cell. For a quantitative analysis of the magnetoresistivity we use the magnetization from Ref. .
In Fig. 1(a) and (b) we plot our raw data: the temperature ($`T`$) dependent resistivity $`\rho `$ of EuB<sub>6</sub> in fields $`B`$ up to 5 T and the normalized magnetoresistivity $`\mathrm{\Delta }\rho (B)/\rho `$ between 5.5 and 20 K, corrected for demagnetization effects. The field dependence of $`\rho `$ reveals two different magnetoresistive regimes: for small $`B`$, a rapid decrease of $`\rho (B)`$ close to $`T_l`$ occurs, while hardly any effect on $`\rho `$ is observable below $`\sim 10`$ K. The suppression of $`\rho `$ close to $`T_l`$, $`\mathrm{\Delta }\rho /\rho \simeq -0.9`$ in 2 T, is comparable in size to that of other CMR compounds . In contrast, for large fields $`\rho (B)`$ increases with $`B`$, in particular at low $`T`$. The positive magnetoresistivity represents the normal metallic contribution $`\rho _{met}`$ to $`\rho (B)`$.
We extract the magnetic scattering contribution from the total magnetoresistivity by subtracting the metallic magnetoresistivity $`\rho _{met}`$. To do so, we parametrize the high–field magnetoresistivity with $`\rho _{met}`$ = $`\rho _0+aB^x`$, $`x\simeq 2`$, and derive the magnetic part $`\rho _{mag}=\rho (B)-\rho _{met}`$. The field dependence of $`\rho _{met}`$ thus established for the data at 13 K is included in Fig. 1(b) as dashed line.
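The background subtraction just described can be sketched as a simple least-squares fit; the fixed exponent $`x=2`$ and the high-field fit window below are illustrative choices, not values taken from the measurement.

```python
import numpy as np

def magnetic_part(B, rho, B_min_fit=4.0):
    """Split rho(B) into a metallic background rho_met = rho_0 + a*B**2
    (fitted only to fields B >= B_min_fit, where the magnetic part is
    assumed negligible) and the remainder rho_mag = rho(B) - rho_met."""
    mask = B >= B_min_fit
    a, rho_0 = np.polyfit(B[mask] ** 2, rho[mask], 1)  # linear in B^2
    rho_met = rho_0 + a * B ** 2
    return rho_met, rho - rho_met
```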
At any given temperature the minimum value of $`\rho `$ as a function of $`B`$, $`\rho _{min}`$, constitutes an upper limit for the phonon contribution to $`\rho `$. We have included the values $`\rho _{min}`$ as a function of $`T`$ in Fig. 1(a) as a shaded area, illustrating that at and above $`T_c`$ phonons contribute less than 15% to the zero–field resistivity.
To examine the dependence of the magnetic part of the magnetoresistivity $`\mathrm{\Delta }\rho _{mag}/\rho `$ on the normalized magnetization $`M/M_{sat}`$ we plot the two quantities at different temperatures in Fig. 2 in a log–log plot, with the magnetic field as implicit variable. In order to compare with the model of Ref. , which is valid only in the paramagnetic phase, we restrict our analysis to temperatures $`T\geq T_l`$. As is illustrated in Fig. 2, at these temperatures all data sets collapse on a universal curve. In particular, for $`M/M_{sat}\lesssim 0.07`$ we find $`\mathrm{\Delta }\rho _{mag}/\rho =C(M/M_{sat})^2`$, with $`C`$ = 75 (solid line).
The value of $`C`$ is in striking contrast to the prediction of Ref. . With the carrier density $`n`$$`=`$ 1.2$`\times `$10<sup>-4</sup> Å<sup>-3</sup> for our EuB<sub>6</sub> crystal we compute $`C\simeq (38n^{2/3}\xi _0^2)^{-1}=0.62`$ rather than the observed 75. More generally, following Ref. we plot $`C`$ vs. $`n\xi _0^3`$ for metallic ferromagnets and manganites in Fig. 3, together with the data for EuB<sub>6</sub> of our crystal and from previous works . In the plot we include the predicted values $`C=(38n^{2/3}\xi _0^2)^{-1}`$. The data for EuB<sub>6</sub> deviate by an order of magnitude from those of the other materials, emphasizing the vastly different magnetoresistive behavior of this compound.
We believe that the unique type of magnetic polaron formation in EuB<sub>6</sub> causes the failure of the model of Ref. to account for the observed behavior. As we have proposed elsewhere , at $`T_l`$ polaron metallization via magnetic polaron overlap leads to a drop of $`\rho (T)`$. The polaron metallization is accompanied by a filamentary type of ferromagnetic ordering, which arises from the internal structure of the polarons. The bulk magnetic transition occurs at $`T_c`$. The field dependence of the resistivity close to $`T_l`$ then is mainly governed by the increase of the polaron size with magnetic field (rather than by the suppression of critical scattering, as suggested in Ref. ), causing the metallization to occur at a higher temperature, and leading to the reduction of the resistivity at $`T_l`$ in magnetic fields.
Work at the University of Michigan was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under grants 94–ER–45526 and 97–ER–2753. Work at the TU Braunschweig was supported by the Deutsche Forschungsgemeinschaft (DFG).
# Jamming and Fluctuations in Granular Drag
## Abstract
We investigate the dynamic evolution of jamming in granular media through fluctuations in the granular drag force. The successive collapse and formation of jammed states give a stick-slip nature to the fluctuations that is independent of the contact surface between the grains and the dragged object – thus implying that the stress-induced collapse is nucleated in the bulk of the granular sample. We also find that while the fluctuations are periodic at small depths, they become “stepped” at large depths, a transition which we interpret as a consequence of the long-range nature of the force chains.
Materials in granular form are composed of many solid particles that interact only through contact forces. In a granular pile, the strain resulting from the grains’ weight combines with the randomness in their packing to constrain the motion of individual grains. This leads to a “jammed” state which also characterizes a variety of other frustrated physical systems, such as dense colloidal suspensions and spin glasses . Not surprisingly, the dynamics of granular materials, while in many ways analogous to those of fluids, are in fact quite different due to this frustration of local motion . The effects of this jamming are also manifested in static properties, leading to inhomogeneous stress propagation through force chains of strained grains and arch formation . Although jamming in granular materials has previously been discussed in the context of the gravitational stress induced by the weight of the grains, it can result from any compressive stress. For example, a solid object being pulled slowly through a granular medium is resisted by local jamming, and can only advance with large scale reorganizations of the grains. The granular drag force originates in the force needed to induce such reorganizations, and thus exhibits strong fluctuations which qualitatively distinguish it from the analogous drag in fluids .
In this Letter we investigate the dynamic evolution of jamming in granular media through these fluctuations in the granular drag force. The successive collapse and formation of jammed states give a stick-slip character to the force with a power spectrum proportional to $`1/f^2`$. We find that the stick-slip process does not depend on the contact surface between the grains and the dragged object, and thus the slip events must be nucleated in the bulk of the grains opposing its motion. While the fluctuations are remarkably periodic for small depths, they undergo a transition to “stepped” motion at large depths. These results point to the importance of the long-range nature of the force chains to both the dynamics of granular media and the strength of the granular jammed state.
The experimental apparatus, shown in Fig. 1a, consists of a vertical steel cylinder of diameter $`d_c`$ inserted to a depth $`H`$ in a bed of glass spheres that moves with constant speed . The cylinder is attached to a fixed force cell , which measures the force $`F(t)`$ acting on the cylinder as a function of time. The bearings on the cylinder’s support structure allow the cylinder to advance freely only in the direction of motion, so that the force cell alone opposes the drag force from the grains. We incorporate a spring of known spring constant, $`k`$, between the cylinder and the force cell – choosing $`k`$ (between 5 and 100 N/cm) so that this spring dominates the elastic response of the cylinder and all other parts of the apparatus. We vary the speed ($`v`$) from 0.04 to 1.4 mm/s, the depth of insertion ($`H`$) from 20 to $`190`$ mm, and the cylinder diameter ($`d_c`$) from 8 to 24 mm, studying grains of diameter ($`d_g`$) 0.3, 0.5, 0.7, 0.9, and 1.1 mm . The force is recorded at 150 Hz, and the response time of the force cell and the amplifier is $`<0.2`$ ms.
Consistent with earlier results , we find that the average drag force on the cylinder is independent of $`v`$ and $`k`$, and is given by $`\overline{F}=\eta \rho gd_cH^2`$, where $`\eta `$ characterizes the grain properties (surface friction, packing fraction, etc.), $`\rho `$ is the density of the glass beads, and $`g`$ is gravitational acceleration. As shown in Fig. 2, however, $`F(t)`$ is not constant, but has large stick-slip fluctuations consisting of linear rises associated with a compression of the spring and sharp drops associated with the collapse of the jammed grains opposing the motion. The linear rises in $`F(t)`$ correspond to the development of an increasingly compressed jammed state of the grains opposing the motion. We find that, independent of the depth, the slopes of the rises are given by $`\frac{1}{v}dF/dt=k`$ for all springs with $`k<100`$ N/cm, confirming that the spring dominates the elasticity of the apparatus. This result also implies that during the rises the jammed grains opposing the cylinder’s motion do not move relative to each other or the cylinder. The power spectra, $`P(f)`$ (the squared amplitudes of the Fourier components of $`F(t)`$), are independent of both the elasticity of the apparatus and the rate of motion, so that the scaled spectra $`kvP(f)\text{ vs. }f/kv`$ collapse in the low frequency regime ($`f<10`$ Hz). This indicates that the fluctuations reflect intrinsic properties of the development and collapse of the jammed state rather than details of the measurement process. The power spectra also exhibit a distinct power law, $`P(f)\propto f^{-2}`$, over as much as two decades in frequency (Fig. 3), a phenomenon which has been reported in other stick-slip processes and is intrinsic to random sawtooth signals .
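That a sawtooth signal carries a $`1/f^2`$ spectrum is easy to confirm numerically; the sampling rate, period, and record length below are arbitrary choices for the illustration.

```python
import numpy as np

# A periodic sawtooth: linear rises (the spring winding up at rate k*v)
# terminated by sharp drops.
fs, period, n_cycles = 1000.0, 0.1, 100          # arbitrary illustration
samples = int(fs * period)                       # samples per cycle
F = np.tile(np.arange(samples) / samples, n_cycles)

power = np.abs(np.fft.rfft(F)) ** 2
freqs = np.fft.rfftfreq(F.size, d=1.0 / fs)

# All power sits at harmonics of 1/period; their envelope falls as f^-2.
bins = np.arange(1, 11) * n_cycles               # FFT bins of harmonics 1-10
slope = np.polyfit(np.log(freqs[bins]), np.log(power[bins]), 1)[0]
print(round(slope, 2))                           # close to -2
```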
During each fluctuation the force first rises to a local maximum value ($`F_{max}`$), and then drops sharply (by an amount $`\mathrm{\Delta }F`$), corresponding to a collapse of the jammed state. The force from the cylinder propagates through the medium via chains of highly strained grains, and a collapse occurs when the local interparticle forces somewhere along one of the chains exceed a local threshold. The corresponding grains then slip relative to each other, which in turn nucleates an avalanche of grain reorganization to relieve the strain. This allows the cylinder to advance relative to the granular reference frame, with a corresponding decompression of the spring and a drop in the measured force $`F(t)`$ . The interparticle forces within the force chains are largest at the cylinder’s surface, where the chains originate, and their magnitude decreases as we move away from the cylinder and the force chains bifurcate. Consequently, one might expect that the reorganization is nucleated among grains in contact with the cylinder, but we find no change either in $`\overline{F}`$ or in the fluctuations when we vary the coefficient of friction between the grains and the cylinder by a factor of 2.5 (substituting a Teflon-coated cylinder for the usual steel cylinder). As demonstrated in the inset to Fig. 3, the power spectra are also unchanged even by substituting a half-cylinder (i.e. a cylinder bisected along a plane through its axis and oriented so that the plane is normal to the grain flow) for a full cylinder of the same size, indicating that the geometric factors do not play a significant role either. These results indicate that the fluctuations are not determined by the interface between the dragged object and the medium, but rather that the failure of the jammed state is nucleated within the bulk of the medium.
In this respect, the fluctuations are rather different both from ordinary frictional stick-slip processes, which originate at a planar interface between moving objects, and from the motion of a frictional plate on top of a granular medium .
A striking feature of the data is that the fluctuations change character with depth. For $`H<H_c\simeq 80`$ mm the fluctuations are quite periodic, i.e. $`F(t)`$ increases continuously to a nearly constant value of $`F_{max}`$ and then collapses with a nearly constant drop of $`\mathrm{\Delta }F`$ (Fig. 2). As the depth increases, however, we observe a change in $`F(t)`$ to a “stepped” signal: instead of a long linear increase followed by a roughly equal sudden drop, $`F(t)`$ rises in small linear increments to increasing values of $`F_{max}`$, followed by small drops (in which $`\mathrm{\Delta }F`$ is on average smaller than the rises), until $`F_{max}`$ reaches a characteristic high value, at which point a large drop is observed. This transition from a periodic to a “stepped” regime is best quantified in Fig. 4, where we plot the depth dependence of $`\overline{\mathrm{\Delta }F}`$ and the relative standard deviation of $`\mathrm{\Delta }F`$, $`\sigma _n=\sigma _{\mathrm{\Delta }F}/\overline{\mathrm{\Delta }F}`$. In the periodic regime, $`\overline{\mathrm{\Delta }F}`$ rises due to the increase in $`\overline{F}`$. As the large uniform rises of the periodic regime are broken up by the small intermediate drops, however, $`\overline{\mathrm{\Delta }F}`$ shows a local minimum and $`\sigma _{\mathrm{\Delta }F}/\overline{\mathrm{\Delta }F}`$ increases drastically, saturating for large depths. The transition is also observed in the power spectra as shown in the upper inset to Fig. 4. For low depths the power spectra display a distinct peak characteristic of periodic fluctuations, but these peaks are suppressed for $`H>H_c`$ in correlation with the changes in $`\sigma _{\mathrm{\Delta }F}/\overline{\mathrm{\Delta }F}`$ and the qualitative character of $`F(t)`$.
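The statistic $`\sigma _n`$ can be extracted from a force record with a few lines of analysis; the synthetic signals below are illustrative, not measured data.

```python
import numpy as np

def drop_statistics(F):
    """Mean force drop and its normalized standard deviation sigma_n,
    treating every negative step in the sampled record as a drop."""
    dF = np.diff(F)
    drops = -dF[dF < 0]
    return drops.mean(), drops.std() / drops.mean()

# Periodic regime: identical sawtooth cycles give sigma_n = 0; a
# "stepped" record, mixing small and large drops, gives sigma_n of
# order unity.
F_periodic = np.tile(np.linspace(0.0, 1.0, 50), 20)
mean_drop, sigma_n = drop_statistics(F_periodic)
print(mean_drop, sigma_n)  # 1.0 0.0
```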
The transition from a periodic to a “stepped” signal is rather unexpected, since it implies qualitative changes in the failure and reorganization process as $`H`$ increases and the existence of a critical depth, $`H_c`$. An explanation for $`H_c`$ could be provided by Janssen’s law, which states that the average pressure (which correlates directly with the local failure process) should become depth independent below some critical depth in containers with finite width. This should not occur in our container, however, which has a diameter of 25 cm, much larger than $`H_c`$. Furthermore, we see no deviation in the behavior of $`\overline{F}(H)`$ from $`\overline{F}\propto H^2`$, which depends on the pressure increasing linearly with the depth (Fig. 4 lower inset).
In order to account for the observed transition, we must inspect how the force chains originating at the surface of the cylinder nucleate the reorganizations. The motion of the cylinder attempting to advance is opposed by force chains that start at the cylinder’s surface and propagate on average in the direction of the cylinder’s motion. These force chains will terminate rather differently depending on the depth at which they originate, as shown schematically in Fig. 1b. For small $`H`$, some force chains will terminate on the top surface of the granular sample and the stress can be relieved by a slight rise of the surface. Force chains originating at large depths, however, will all terminate at the container’s walls. Since the wall does not permit stress relaxation, the grains in these force chains will be more rigidly held in place. According to this picture, $`H_c`$ corresponds to the smallest depth for which all force chains terminate on the wall. When the cylinder applies stress on the medium, the force chains originating at small $`H`$ ($`H<H_c`$) reduce their strain through a microscopic upward relaxation of chains ending on the free surface. By contrast, the higher rigidity of force chains originating at $`H>H_c`$ impedes such microscopic relaxations. Thus a higher proportion of the total force applied by the cylinder will be supported by those force chains, enhancing the probability of a local slip event occurring at high depths. Such a slip event would not necessarily reorganize the grains at all depths (for example, the grains closer to the surface may not be near the threshold of reorganization), and thus the slip event might induce only a local reorganization and a small drop in $`F(t)`$. The large drops in $`F(t)`$ would occur when force chains at all depths are strained to the point where the local forces are close to the threshold for a slip event.
This scenario also explains why $`\overline{F}(H)`$ does not change at $`H_c`$, since $`\overline{F}`$ is determined by the collective collapse of the jammed structure of the system.
According to this picture, the transition is expected at smaller depths in smaller containers since the force chains would terminate on the walls sooner (see Fig. 1b). Indeed, as we show in Fig. 4, the transition does occur at a depth approximately 20 mm smaller when the measurements are performed in a container 2.5 times smaller (with a diameter of 100 mm). Furthermore, we fail to observe the periodic fluctuations in any grains with diameters 1.4 mm or larger , which is consistent with the suggested mechanism, since larger grains correspond to a smaller effective system size.
It is interesting to compare our results with those of Miller et al. who studied fluctuations in the normal stress at the bottom of a sheared granular annulus. Those fluctuations were also independent of the rate of the motion, and demonstrated the long-range nature of the vertical force chains. In the present experiments, we confirm that force chains originating from a horizontal stress are also long range through our observation of the transition at depth $`H_c`$. Since for small grains ($`d_g\leq 1.1`$ mm) $`H_c`$ has no measurable dependence on grain size, we find in agreement with Miller et al. that the nature of the force chains is not strongly dependent on grain size. Our observations also shed light on the implications of the long-range force chains for granular dynamics and the nature of jamming in granular materials. The crossover at $`H_c`$ suggests that drag fluctuations in an infinitely wide container would be periodic, but that the finite size of a real container destroys the periodicity. In other words, the finite size of a container relative to these chains reduces the strength of jammed granular states within the container. These results point to the need for a better understanding of the detailed dynamics of force chains – both how they form when stress is applied to a granular medium and how they disperse geometrically from a point source of stress – in order to gain an understanding of slow granular flows.
We gratefully acknowledge the support of the Petroleum Research Foundation administered by the ACS, the Alfred P. Sloan Foundation, and NSF grants PHYS95-31383 and DMR97-01998.
# Wavelength Doesn’t Matter: Optical vs. X-ray Luminosities of Galaxy Clusters
## 1 Introduction
Clusters of galaxies are the most massive gravitationally bound and collapsed objects in the Universe and are important systems for studies of the formation and evolution of Large-Scale Structure (LSS) in the Universe. Their large mean spatial separations ($`60`$–$`100h_{50}^{-1}`$ Mpc) make them excellent tracers for LSS. The cosmological subfield of LSS is still in its infancy, with new and exciting discoveries occurring often. As large, all-sky surveys continue to catalog the heavens, galaxy clusters will remain the most efficient tool for mapping LSS. The importance of a complete map of LSS has become evident in some recent discoveries of how the internal properties of clusters can be affected by surrounding large-scale structure (Novikov et al. 1999; Loken et al. 1999; Miller et al. 1999). Additionally, clusters are often thought to be a “fair sample” of the mass in the Universe, and therefore useful for estimating the relative fractions of baryonic and “dark” matter (White et al. 1993; White and Fabian 1995; however see Qin and Wu 2000).
Most of the largest cluster surveys presently available in the astronomical literature have been constructed at optical wavelengths using galaxy population overdensities to define the individual clusters. The best known example of this technique is the Abell (1958) and Abell, Corwin, and Olowin (1989 – hereafter ACO) optical cluster catalogs. Many researchers have discovered observational selection biases within these catalogs as a direct result of the criteria used in their creation (see e.g. Sutherland 1988; Efstathiou 1992). One of the more deserved critiques of the Abell/ACO catalogs is the inaccuracy of the richness classifications (Lumsden et al. 1992; van Haarlem et al. 1997; White, Jones and Forman 1999). Richness has long been used as a classification in nearly all optically selected cluster catalogs – i.e. Abell, APM, EDCC, PDCS etc – yet it is rather unphysical to quantify cluster properties (such as mass, size, etc) based on a simple two–dimensional galaxy count within an arbitrary radius, to an arbitrary magnitude limit, with a local and/or global field galaxy count subtracted off. This ignores a significant amount of information contained in the individual galaxies (such as intrinsic brightness and color). Also, the richness of a cluster can be severely contaminated by projection effects as one goes to higher redshift.
In addition to these problems with galaxy richness, the recent discovery of “fossil groups” and “dark clusters” further illustrates the deficiencies of richness. Both Vikhlinin et al. (1999) and Romer et al. (1999) have discovered several examples of “fossil groups”, which are the proposed relics of group formation and evolution, i.e. all the galaxies in the group have ‘cooled’ and fully merged into one large galaxy in the center of the potential well. However, the X–ray emitting gas is still ‘hot’ and appears extended. Therefore, in an optical survey these systems appear to be one galaxy while in the X–rays they appear like groups of galaxies. Clearly there is a dark–matter halo associated with such systems which would be completely missed in a classical, richness–limited survey. However, in an optical luminosity–based cluster survey, these systems would be included since the one central galaxy is typically many $`L^{\ast }`$ in brightness (see Romer et al. 1999).
## 2 The Cluster Samples
We propose here dropping the notion of cluster richness in favor of the total optical luminosity of a cluster; this can be measured either in a single passband or as a function of color, e.g. constraining the color of the light to that appropriate for elliptical galaxies. Thus we allow the individual galaxy properties to re-enter the analysis. Currently, a large portion of the sky has reasonably accurate galaxy magnitudes through the APM/COSMOS galaxy catalog. This database covers the Southern galactic cap. There is a significant amount of work being done in the Northern hemisphere (e.g. the DPOSS survey (Djorgovski et al. 1999) and the Sloan Digital Sky Survey). Soon, we will have precise magnitude and color information for most bright ($`b_j\lesssim 20.5`$) galaxies throughout the entire sky. We show in this work that future cluster identifications would be better quantified by total optical cluster luminosity than by richness.
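A minimal sketch of such a total-luminosity estimator is given below, assuming a solar absolute magnitude of about 5.3 in the $`b_j`$ band; that value, and the omission of k-corrections and background subtraction, are simplifying assumptions of the sketch, not part of the text.

```python
import math

M_SUN_BJ = 5.3   # assumed solar absolute magnitude in the b_j band

def total_luminosity(apparent_mags, distance_mpc):
    """Total cluster luminosity in solar units: sum the luminosities of
    the member galaxies rather than counting them above a threshold.
    K-corrections and field-galaxy background subtraction are
    deliberately omitted in this sketch."""
    mu = 5.0 * math.log10(distance_mpc * 1.0e6) - 5.0   # distance modulus
    return sum(10.0 ** (-0.4 * (m - mu - M_SUN_BJ)) for m in apparent_mags)
```

Unlike a richness count, a single bright central galaxy (as in a fossil group) still contributes its full weight to this statistic.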
### 2.1 The Edinburgh-Durham Cluster Catalog
The Edinburgh-Durham Cluster Catalog (Lumsden et al. 1992, hereafter EDCC) was objectively selected from the Edinburgh/Durham Southern Galaxy Catalogue (EDSGC; Collins, Nichol & Lumsden 1992, 2000), which was built from machine-scans of 60 UK Schmidt photographic survey plates. In total, the EDCC contains over 700 galaxy overdensities covering over $`1000\mathrm{deg}^2`$ of the sky centered on the South Galactic Pole. The reader is referred to Lumsden et al. (1992) for details about the EDCC. We supplemented the optically–selected EDCC catalog with X–ray information taken from the ROSAT All-Sky Survey (RASS) Bright Source Catalog (Voges et al. 1999). This was achieved through cross–correlation of the EDCC centroids with the RASS–BSC positions. In Figure 1, we show our separation analysis for our EDCC–BSC sample, which demonstrates a positive correlation (above that expected from random) for separations of less than 4 arcmins. Below 4 arcminutes, we would expect only 5.3 matching pairs at random, while we see 53 matching pairs. Therefore, we use 4 arcminutes as our matching radius, which should ensure that 90% of these 53 EDCC clusters have a true X–ray companion. We then cut the EDCC–BSC sample to those clusters with a known redshift (Collins et al. 1995) that are also coincident with an extended BSC source, thus guaranteeing minimal contamination from active galactic nuclei (AGN). These restrictions leave us with a subset of 20 EDCC–BSC clusters from the original 53 matches.
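The expected number of chance coincidences quoted above follows from simple Poisson statistics on the sky. The sketch below illustrates the calculation; only the 4 arcminute radius, the $`1000\mathrm{deg}^2`$ area and the ~700 EDCC overdensities come from the text, while the number of BSC sources in the field is an assumed value chosen to illustrate the quoted 5.3 random pairs.

```python
import math

# Expected chance EDCC-BSC coincidences within a matching radius,
# assuming both catalogs are unclustered (Poisson) on the sky.
# ASSUMPTION: 540 BSC sources in the field, chosen for illustration.
def expected_random_matches(n_clusters, n_sources, area_deg2, radius_arcmin):
    radius_deg = radius_arcmin / 60.0
    # probability that one random source lands within the radius of one cluster
    p_single = math.pi * radius_deg ** 2 / area_deg2
    return n_clusters * n_sources * p_single

print(round(expected_random_matches(700, 540, 1000.0, 4.0), 1))  # -> 5.3
```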
The X-ray luminosities of the EDCC–BSC clusters were computed as follows. First, the observed BSC count rate (counts per second) was converted to a flux (ergs per second per $`\mathrm{cm}^2`$) by integrating a thermal bremsstrahlung spectrum of $`\mathrm{T}_\mathrm{e}=5`$keV and metallicity of 0.3 solar over the ROSAT PSPC response function ($`0.1-2.4`$keV) and comparing this to the observed count rate (we corrected for absorption by the galactic neutral hydrogen using the data of Stark et al. 1990). Second, we corrected this aperture flux into a total cluster flux using a standard King profile ($`r_c=250h^{-1}`$kpc and $`\beta =0.66`$). Finally, our fluxes were converted into X-ray luminosities using $`q_o=0.5`$, and $`h_{50}=H_0/50`$ (although we note that the choice of $`q_o`$ makes little difference in our calculations).
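The last two steps of this pipeline — the King-profile aperture correction and the flux-to-luminosity conversion — can be sketched as follows. This is a minimal illustration, not the authors' code: the instrument-specific count-rate-to-flux conversion is replaced by an assumed input flux, the Mattig distance relation for $`q_o=0.5`$ is an assumption, and the example cluster is hypothetical.

```python
import numpy as np

# Aperture correction and flux-to-luminosity conversion for a cluster.
# ASSUMPTIONS: given aperture flux (no spectral conversion), finite
# outer integration radius, hypothetical example numbers.
H0 = 50.0                 # km/s/Mpc, h_50 = 1 as in the text
C_KM_S = 2.99792458e5
MPC_CM = 3.0857e24

def _trapz(y, x):
    # explicit trapezoidal rule (np.trapz was removed in NumPy 2.0)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def king_enclosed_fraction(r_aper_mpc, r_core_mpc=0.25, beta=0.66):
    """Fraction of a King-profile surface brightness inside the aperture."""
    r = np.linspace(0.0, 50.0, 200001)            # 50 Mpc stands in for infinity
    sb = (1.0 + (r / r_core_mpc) ** 2) ** (-3.0 * beta + 0.5)
    integrand = 2.0 * np.pi * r * sb
    inside = r <= r_aper_mpc
    return _trapz(integrand[inside], r[inside]) / _trapz(integrand, r)

def luminosity_distance_cm(z):
    # Mattig relation for q0 = 0.5 (assumed form)
    return (2.0 * C_KM_S / H0) * ((1.0 + z) - np.sqrt(1.0 + z)) * MPC_CM

def x_ray_luminosity(aperture_flux_cgs, z, r_aper_mpc):
    total_flux = aperture_flux_cgs / king_enclosed_fraction(r_aper_mpc)
    return 4.0 * np.pi * luminosity_distance_cm(z) ** 2 * total_flux

# Hypothetical cluster: 1e-12 erg/s/cm^2 inside a 1 Mpc aperture at z = 0.1
print("%.2e" % x_ray_luminosity(1e-12, 0.1, 1.0))
```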
The optical luminosities for these 20 EDCC clusters were calculated using photometry from the EDSGC which was calibrated to an accuracy of $`\mathrm{\Delta }m\sim 0.1`$ across the whole survey (see Collins et al. 2000). We first calculated the absolute magnitudes for all galaxies with $`b_j\le 20.5`$ within each cluster using the same methods as Lumsden et al. (1992). We use $`K(z)=4.14z-0.44z^2`$ as the $`K`$-correction suitable for the $`b_j`$ passband (Ellis 1983), and $`A(b_j)`$ as the extinction values taken from the Schlegel, Finkbeiner, and Davis (1998) reddening maps. Assuming that each galaxy lies at the distance of the cluster, we summed the individual galaxy luminosities to determine the local average background luminosity out to $`20h_{50}^{-1}`$Mpc around the cluster center. We then subtracted the local background from the total cluster luminosity as determined within $`2h_{50}^{-1}`$Mpc of the cluster center. We applied apparent and absolute magnitude constraints and found only small ($`\sim 10\%`$) variations in our results. We also calculated the background using the average luminosity within the ring from $`10h_{50}^{-1}`$Mpc to $`20h_{50}^{-1}`$Mpc, effectively excluding the optical emissions from the clusters. Again, our final results only varied by $`\sim 10\%`$. From these analyses, we conclude that our methods are robust and we measure errors on $`L_o`$ according to the variation in the choice of absolute magnitude limits (from $`-24<M_{b_j}<-21`$). We present our optical and X-ray luminosities in Table 1.
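A per-galaxy version of this recipe can be sketched as follows. The $`K`$-correction and extinction treatment follow the text; the distance relation, the solar $`b_j`$ magnitude used for the luminosity normalization, and the example galaxy are assumptions.

```python
import math

# Absolute magnitude and luminosity of one cluster galaxy, following the
# recipe in the text: K(z) = 4.14 z - 0.44 z^2 for b_j, a reddening-map
# extinction A(b_j), and the galaxy placed at the cluster redshift.
# ASSUMPTIONS beyond the text: Mattig distance relation for q0 = 0.5,
# and M_sun(b_j) = 5.3 for the solar normalization.
H0 = 50.0                 # km/s/Mpc (h_50 = 1)
C_KM_S = 2.99792458e5
M_SUN_BJ = 5.3

def k_correction(z):
    return 4.14 * z - 0.44 * z ** 2

def luminosity_distance_mpc(z):
    return (2.0 * C_KM_S / H0) * ((1.0 + z) - math.sqrt(1.0 + z))

def absolute_magnitude(b_j, z, extinction_bj):
    dist_mod = 5.0 * math.log10(luminosity_distance_mpc(z) * 1e6 / 10.0)
    return b_j - dist_mod - k_correction(z) - extinction_bj

def luminosity_solar(b_j, z, extinction_bj):
    return 10.0 ** (-0.4 * (absolute_magnitude(b_j, z, extinction_bj) - M_SUN_BJ))

# Hypothetical b_j = 18.5 galaxy at z = 0.1 with A(b_j) = 0.1; summing
# such per-galaxy luminosities inside 2 Mpc, minus the local background,
# gives the cluster L_o described in the text.
print(round(absolute_magnitude(18.5, 0.1, 0.1), 2))
```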
In this work, we are studying a unique subset of the EDCC data, i.e. only those that have a redshift and extended X-ray emission in the RASS–BSC. Therefore, the sample presented in Table 1 has a complicated selection function since we have made two flux cuts (for the EDCC & BSC), a cut on X–ray extent and a cut on richness (since the high richness clusters have measured redshifts; Nichol et al. 1992). However, this selection function should not affect our analysis and results since we are simply using this sample to study the relationship of optical–to–X–ray luminosities of these systems and are presently uninterested in the cross–comparison of these quantities between clusters.
### 2.2 The Abell/ACO Cluster sample
The second sample we have used is the Abell/ACO subset presented by Fritsch & Buchert (1999, hereafter FB), who used a sample of 78 Abell/ACO clusters to create a fundamental plane in $`L_{optical}`$, $`L_{Xray}`$ and the half-light radius $`R_{optical}`$. They determined their optical luminosities after subtracting a local galaxy/photon distribution and applying a Schechter luminosity function with $`M_{*}=-21.8`$ and $`\alpha =-1.25`$. The individual galaxy magnitudes were taken from the COSMOS galaxy catalog. The X-ray luminosities were determined from ROSAT data using a Raymond-Smith code. More details can be found in FB. Unfortunately, FB do not provide the uncertainties on their data.
### 2.3 The RASS Bright Sample
Our third cluster sample is the ROSAT All-Sky Survey (RASS) Bright Sample which contains 130 clusters constructed from the ESO Key Program. This has since become the REFLEX cluster survey (Guzzo et al. 1999; Bohringer et al. 1998). The RASS Bright Sample is an X-ray selected cluster catalog: extended X-ray sources in the RASS data were searched for corresponding over-densities in the galaxy distribution. This survey is count-rate limited in the ROSAT hard X-ray band. The RASS Bright Sample covers 2.5 steradian around the Southern Galactic Cap. We find 20 RASS clusters within a similar section of the sky as the EDSGC. However, due to the significantly different cluster selection and X-ray identification techniques between the RASS and EDCC surveys, we find only five clusters that are common to both. In other words, although the EDCC and RASS samples are in the same portion of the sky, the two samples remain nearly statistically independent. The X-ray luminosities for the RASS clusters are listed in De Grandi et al. (1999). We measured optical luminosities for the RASS clusters similarly to the EDCC clusters (see above) and present them in Table 1.
### 2.4 Galaxy Groups
Finally, we use data provided by Mahdavi et al. (1997) for poor galaxy groups. Mahdavi et al. studied 36 groups with at least five galaxy members selected from the Center for Astrophysics Redshift Survey (Ramella et al. 1995). Nine of these groups were found to have definite X-ray emission via RASS data. The optical properties were determined from Zwicky magnitudes listed in Ramella et al., and Mahdavi et al. fit Schechter luminosity functions (with $`\alpha =-1`$ and $`M_B^{*}=-20.6`$) to determine total optical luminosities to a limit of $`M_B=-18.4`$. We note that three groups in this sample are also $`R=0`$ Abell clusters (although none are in the FB data as well).
## 3 Results
In Figure 2, we plot $`log_{10}L_x`$ versus $`log_{10}L_o`$ for the Abell/ACO clusters (Fig 2a), EDCC clusters (Fig 2b), galaxy groups (Fig 2c) and RASS clusters (Fig 2d). A correlation analysis indicates that the optical and X-ray luminosities are linearly related to 4.4$`\sigma `$, 3.1$`\sigma `$, 2.1$`\sigma `$, and 1.9$`\sigma `$ significance levels respectively. \[Note: if we remove the three “outliers” in the RASS sample, the significance rises to 3.2$`\sigma `$. This suggests that these points may not be clusters, but extended AGNs\]. In all but the group sample, we also provide a robust best-fit line through the data points. The fit for the galaxy groups is given by Mahdavi et al. (1997). In all cases, the slopes (listed in Fig. 2) are very close to unity, suggesting that a simple proportionality survives the many procedural differences in the way these samples were selected and analyzed: i.e. Schechter functions were fit to some clusters, while for others, a simple sum of the individual galaxy luminosities was performed. Also, different optical wavelength windows were used, i.e. $`b_j`$ for Abell/ACO, EDCC, RASS, and $`m_{Zwicky}`$ for the galaxy groups. Different aperture sizes within which magnitudes were summed were used: a variable radius for Abell/ACO clusters, $`2.0h_{50}^{-1}`$Mpc for EDCC clusters and RASS clusters, and $`0.4h_{50}^{-1}`$Mpc for the galaxy groups.
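A minimal version of such a fit can be sketched as follows, on synthetic stand-in data rather than the samples above. The paper's robust fit differs in detail from this plain least-squares estimate; the Fisher $`z`$-transform is one standard way to express a Pearson correlation as a significance in $`\sigma `$.

```python
import numpy as np

# Best-fit slope in the log-log plane and a rough significance for an
# L_x - L_o correlation.  The 20 points are SYNTHETIC stand-ins, built
# with an assumed unit slope plus deterministic scatter.
log_lo = np.linspace(11.0, 13.0, 20)             # hypothetical log10 L_o
scatter = 0.3 * np.cos(np.arange(20.0))          # stand-in for measurement scatter
log_lx = 32.0 + log_lo + scatter                 # assumed L_x proportional to L_o

def fit_and_significance(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    n_sigma = np.arctanh(r) * np.sqrt(len(x) - 3.0)   # Fisher z-transform
    return slope, intercept, n_sigma

slope, intercept, n_sigma = fit_and_significance(log_lo, log_lx)
print(round(float(slope), 2), round(float(n_sigma), 1))
```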
At this point, it would be helpful to understand the role of the amplitude in Figure 2. However, we simply do not have enough information to constrain any more parameters than the slope. Some of the effects on the amplitude are more easily quantified, such as the solar luminosity used in the $`L_o/L_{}`$ normalization or the Hubble constant used. The other procedural differences within these samples are more difficult to ascertain and correct for, such as the counting-radius used, magnitude-limits, etc. Therefore, at this point we would like to stress the importance of the simple proportionality between optical and X-ray cluster luminosities.
## 4 Discussion
The simplest possible assumption about the assembled contents and their emitted radiation seems to produce the observed results. This need not have been true. Optical luminosity measures the total optical radiation of all the stars in the clusters–which depends on the efficiency of star formation, the initial mass function of the stars formed, and the age of the populations. The X-ray emission depends on the mass, temperature, and the (square of the) density of the hot gas. We have studied a wide range of objects from galaxy groups up through rich clusters–several orders of magnitude in mass and luminosity. The trend which forms the basis of our result continues through this mass hierarchy.
This result could be the endpoint of a cosmic conspiracy of canceling effects. However, a simpler explanation is that “averaging” is effective, even on small scales. If systems as small as galaxy groups sample the stellar and gaseous phases of baryonic matter in the Universe as well as rich clusters, and the overall efficiency and mass range of star formation is about the same, and if the radiative efficiency of bremsstrahlung radiation is again about the same on average, then our result will follow. A slightly puzzling aspect of this result is that the larger systems are known to have higher velocity dispersions–and one would expect higher gas temperatures. One would think this would lead to greater X-ray luminosity per unit mass of gas.
Besides the cosmological implications, we must also consider how these results can be immediately applied in the construction of future cluster catalogs. With a proper normalization to the $`L_x`$–$`L_o`$ relation, we can very quickly and easily estimate cluster X-ray luminosities and masses using only the galaxy magnitudes and a minimal amount of spectroscopic and X-ray information. We need not perform extensive spectroscopic or X-ray observations of an entire cluster sample in order to analyze mass functions or correlation-functions and their dependence on cluster masses and X-ray luminosities.
As an example, we plot $`M_o`$ vs $`L_o`$ for the Abell/ACO clusters in Figure 3. Here, we have used cluster virial masses ($`M_{CV}`$) as published in Girardi et al. (1998) and the richness counts from the ACO catalog. The optical luminosities presented in FB were calculated within radii that included all of the light above the background. Therefore, we use those masses corresponding to the largest radii as published in Table 3 of Girardi et al. Figure 3 shows that optical luminosity linearly traces the total mass in galaxy clusters over a wide range, whereas we see a much weaker dependence of richness on mass. Therefore, $`L_o`$ is much easier to correlate to mass than richness. Smail et al. (1997) have used more distant clusters to show that $`M/L`$ varies little over a much smaller range of mass. We cannot provide any quantitative limits on $`M/L`$ for clusters without more information on the FB optical luminosities.
In this paper, we have demonstrated that the total optical luminosity of clusters and groups of galaxies is correlated with both the X–ray luminosity and mass of these systems. This result is very important because, in the near future, there will be many new surveys of galaxies and clusters, e.g. DPOSS, REFLEX, SDSS, EIS, 2MASS etc., and we will need objective methods to a) compare these different catalogs and b) relate the cluster properties to physically meaningful quantities. Optical cluster luminosities will be easy to compute from these digital catalogs and will allow us to directly relate optical cluster catalogs with X–ray–selected catalogs (e.g. REFLEX, EMSS, SHARC). Moreover, it is hoped that the optical luminosity of a cluster is more easily related to theoretical cluster research than the galaxy richness, since we have shown that the optical luminosity is simply probing the density in baryons in the cluster. It is not apparent that galaxy richness has a similar physical motivation.
Acknowledgments. ALM thanks Carnegie Mellon University for support during his sabbatical, as well as the National Center for Supercomputing Applications for high-speed computing. We thank Brad Holden for his helpful discussions. Also, we thank Chris Collins and Stuart Lumsden for providing unrestricted access to the EDSGC database. RCN was partially funded by NASA grant NAG5-3202 during this work.
A Kochen-Specker Theorem for Imprecisely Specified Measurements
## Abstract

A recent claim that finite precision in the design of real experiments “nullifies” the impact of the Kochen-Specker theorem is shown to be unsupportable, because of the continuity of probabilities of measurement outcomes under slight changes in the experimental configuration.
The Kochen-Specker (KS) theorem is one of the major no-hidden-variables theorems of quantum mechanics. It exhibits a finite set of finite-valued observables with the following property: there is no way to associate with each observable in the set a particular one of its eigenvalues so that the eigenvalues associated with every subset of mutually commuting observables obey certain algebraic identities obeyed by the observables themselves. Such a set of observables is traditionally called uncolorable.
The physical significance of an uncolorable set of observables stems from the fact that a simultaneous measurement of a mutually commuting set must yield a set of simultaneous eigenvalues, which are constrained to obey the algebraic identities obeyed by the observables themselves. So any attempt to assign every observable in a KS uncolorable set a preexisting noncontextual value (a “hidden variable”) that is simply revealed by a measurement, will necessarily assign to at least one mutually commuting subset of observables a set of values specifying results that quantum mechanics forbids.
The term “noncontextual” emphasizes that the disagreement with quantum mechanics only arises if the value associated with each observable is required to be independent of the choice of the other mutually commuting observables with which it is measured. Contextual value assignments in full agreement with all quantum mechanical constraints can, in fact, be made. The import of the KS theorem is that there exist sets of observables — “uncolorable sets” — for which any assignment of preexisting values must be contextual if all the outcomes specified by those values are allowed by the laws of quantum mechanics. The theorem prohibits noncontextual hidden-variable theories that agree with all the quantitative predictions of quantum mechanics.
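A compact example of an uncolorable set is the Peres–Mermin square of nine two-qubit observables (one standard arrangement: $`\sigma _z\otimes 1`$, $`1\otimes \sigma _z`$, $`\sigma _z\otimes \sigma _z`$ / $`1\otimes \sigma _x`$, $`\sigma _x\otimes 1`$, $`\sigma _x\otimes \sigma _x`$ / $`\sigma _z\otimes \sigma _x`$, $`\sigma _x\otimes \sigma _z`$, $`\sigma _y\otimes \sigma _y`$). Quantum mechanics fixes the product of the three mutually commuting observables in each row to $`+1`$ and in each column to $`+1`$, except the third column, whose product is $`-1`$. A brute-force check confirms that no noncontextual assignment of $`\pm 1`$ values can satisfy all six constraints:

```python
from itertools import product

# Brute-force check that the Peres-Mermin square admits no noncontextual
# assignment of values +1/-1.  Row products must be +1; column products
# must be +1, +1, -1.
ROW_PRODUCTS = (+1, +1, +1)
COL_PRODUCTS = (+1, +1, -1)

def consistent_assignments():
    hits = []
    for v in product((+1, -1), repeat=9):
        rows = [v[0:3], v[3:6], v[6:9]]
        ok_rows = all(r[0] * r[1] * r[2] == p for r, p in zip(rows, ROW_PRODUCTS))
        ok_cols = all(rows[0][j] * rows[1][j] * rows[2][j] == COL_PRODUCTS[j]
                      for j in range(3))
        if ok_rows and ok_cols:
            hits.append(v)
    return hits

print(len(consistent_assignments()))   # 0 -- the square is uncolorable
```

The result follows from a parity argument: the product of all nine values computed row-wise is $`+1`$, while computed column-wise it would have to be $`-1`$.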
Meyer and Kent have questioned the relevance of the KS theorem to the outcomes of real imperfect laboratory experiments, by constructing some clever noncontextual assignments of eigenvalues to every observable in a dense subset of observables, whose closure contains the KS uncolorable observables. While no KS uncolorable set can be contained in such a dense colorable subset, observables in the dense colorable subset can be found arbitrarily close to every observable in any KS uncolorable set. This leads Meyer and Kent to assert that their noncontextual value assignments to dense sets of observables “nullify” the KS theorem. In support of this claim they note that observables measured in an actual experiment cannot be specified with perfect precision so, in Kent’s words, “no Kochen-Specker-like contradiction can rule out hidden variable theories indistinguishable from quantum theory by finite precision measurements….”
I show below that this plausible-sounding but not entirely sharply formulated intuition dissolves under close scrutiny. First I describe how the KS conclusion that quantum mechanics requires any assignment of pre-existing values to be contextual can be deduced directly from the data, even when one is not sure precisely which observables are actually being measured. Then I identify where the intuition of Meyer and Kent goes astray.
At first glance it is not evident that either a KS uncolorable set or a Meyer-Kent (MK) dense colorable set of observables is relevant when one cannot specify to more than a certain high precision what observables are actually being measured. As traditionally viewed, the KS theorem merely makes a point about the formal structure of quantum mechanics, telling us that there is no consistent way to interpret the theory in terms of the statistical behavior of an ensemble, in each individual member of which every observable in the theory has a unique noncontextual value waiting to be revealed by any appropriate measurement. Upon further reflection, however, there emerges a straightforward way to apply the result of the KS theorem to measurements specified with high but imperfect precision, which makes it evident that the theorem and its various descendants remain entirely relevant to imperfect experiments, while the ingenious constructions of Meyer and Kent do not.
Let us first rephrase the implications of the KS theorem in the ideal case of perfectly specified measurements. The theorem gives a finite uncolorable set of observables, each with a finite number of eigenvalues. Because the number of possible assignments of noncontextual values to observables in the set is finite, no matter what probabilities are used to associate such values with the observables, the assignment must give nonzero probability to at least one mutually commuting subset of the observables having values that disagree with the laws of quantum mechanics. So if noncontextual preexisting values existed and if one could carry out a series of ideal experiments in each of which one measured a randomly selected subset of perfectly defined mutually commuting observables from a KS uncolorable set, then a definite nonzero fraction of those measurements would produce results violating the laws of quantum mechanics.
Conversely, if an appropriately large number of such randomly selected measurements all yielded results satisfying the relevant quantum mechanical constraints, then in the absence of bizarre conspiratorial correlations between one’s random choice of which mutually commuting subset to measure and the hypothetical preexisting noncontextual values waiting to be revealed by that measurement, one would have established directly from the ideal data that there could be no preexisting noncontextual values.
In a real experiment, of course, the observables cannot be precisely specified. The actual apparatus used to measure any mutually commuting subset of an uncolorable finite set of observables will be slightly misaligned at all stages of the measuring process. Therefore the pointer readings from which one deduces their discrete simultaneous values will give slightly unreliable information about the ideal observables one was trying to measure. If the misalignment is at the limit of one’s ability to control or discern, as it will be in a well designed experiment, then one can and will label the outcomes of such a procedure with the same discrete eigenvalues used to label the gedanken outcomes of the ideal perfectly aligned measurement. The misalignment will only reveal itself through the occasional occurrence of runs with outcomes that the laws of quantum mechanics prohibit for a perfectly aligned apparatus. But although the outcomes deduced from such imperfect measurements will occasionally differ dramatically from those allowed in the ideal case, if the misalignment is very slight, the statistical distribution of outcomes will differ only slightly from the ideal case.
It is this continuity of quantum mechanical probabilities under small variations in the experimental configuration (without which quantum mechanics, or, for that matter, any other physical theory, would be quite useless) that makes the KS conclusion relevant to the imperfect case. Even though the apparatus cannot be perfectly aligned, if quantum mechanics is correct in its quantitative predictions, then the fraction of runs which violate the quantum-mechanical rules applying to the ideal observables can be made arbitrarily small in the realistic case by making the alignment sufficiently sharp. But the KS theorem tells us that if the possible results of the ideal gedanken measurements were consistent with preexisting noncontextual values, then the fraction of quantum-mechanically forbidden outcomes for the real experiments would have to approach a nonzero limit as the alignment became sharp.
By making the experimental misalignment sufficiently small, one can make the statistics of the slightly unreliable results of the randomly selected realistic measurements arbitrarily close to the statistics of the theoretical results of the randomly selected ideal measurements. Therefore a failure in the realistic case to observe sufficiently many values that contradict the constraints imposed on the data by quantum mechanics can demonstrate, just as effectively as the failure to observe any such contradictions in the ideal case, that if the measurements are revealing preexisting values, then those values must be contextual.
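The continuity being invoked here is elementary to exhibit. In the sketch below — an illustrative single-qubit example, not tied to any particular KS set — the probability of a given outcome along a measurement axis tilted by a small angle away from the intended one differs from the ideal probability only at first order in the tilt:

```python
import math

# Continuity of outcome probabilities under misalignment: a qubit
# prepared along angle theta in a plane and measured along a tilted
# axis theta_axis yields "up" with probability cos^2((theta-theta_axis)/2).
# The deviation from the ideal probability is first order in the tilt.
def p_up(theta_state, theta_axis):
    return math.cos(0.5 * (theta_state - theta_axis)) ** 2

theta = 0.7                            # hypothetical preparation angle (radians)
for eps in (1e-1, 1e-2, 1e-3):
    shift = abs(p_up(theta, eps) - p_up(theta, 0.0))
    print(eps, round(shift / eps, 3))  # ratio tends to |sin(theta)|/2
```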
Because it can be stated in terms of outcome probabilities, and because those probabilities must vary continuously with variations in the experimental apparatus, the conclusions of the KS theorem are not “nullified” by the finite precision with which actual measurements can be specified.
But what about Meyer’s and Kent’s intuition that the dense colorable MK set can be used to furnish a group of nearly ideal experiments with noncontextual values that agree with the constraints imposed by quantum mechanics on all mutually commuting ideal subsets? The crucial word here is “all”. It is impossible for a colorable set of observables in one-to-one correspondence with the observables in a KS uncolorable set, to have mutually commuting subsets that correspond to every mutually commuting subset of the KS uncolorable set. At least one of those subsets cannot be mutually commuting.
This impossibility is established by the KS theorem itself, which uses only the topology of the network of links between commuting observables in the full KS uncolorable set. This topology would be preserved by the correspondence between the nearby colorable and uncolorable sets, if the correspondence between their mutually commuting subsets were complete. Because, as Meyer and Kent explicitly show, the MK set is colorable, the correspondence cannot be complete.
Thus any finite MK colorable set in sufficiently close one-to-one correspondence with a finite KS uncolorable set, must necessarily lack the full range of mutually commuting subsets that the KS uncolorable one contains. There is therefore at least one mutually commuting subset of the KS uncolorable set for which the MK colorable set fails to provide a set of values agreeing with the constraints imposed by quantum mechanics. It is only this deficiency that makes it possible to color the observables in the MK set in a noncontextual way that satisfies the quantum mechanical constraints for all mutually commuting subsets. But this same deficiency makes the MK set useless for specifying preassigned noncontextual values agreeing with quantum mechanics for the outcomes of every one of the slightly imperfect experiments that corresponds to measuring a mutually commuting subset of observables from the ideal KS uncolorable set.
If one tries to bridge this gap in the argument by associating more than a single nearby MK colorable observable with some of the observables in the ideal uncolorable set, one sacrifices the noncontextuality of the value assignments. Nor does the MK “nullification” of the KS theorem work in a toy universe in which the physically allowed observables are restricted to be those in the dense colorable set. In such a universe measurements still could not be specified with perfect precision, and one would still have to rely on the continuity of outcome probabilities with small changes in the experimental configuration to relate the theory to actual imperfect observations. Under such conditions it would be highly convenient to introduce fictitious observables which were the limit points of physical observables in the dense colorable set, whose statistics could represent to high accuracy the statistics of physically allowed observables in their immediate vicinity. The KS theorem would hold for these fictitious limit-point observables, and would therefore apply by continuity to measurements of the nearby physical observables, for exactly the reasons I have just described in the case of conventional quantum mechanics with its continuum of observables. Indeed, I can see no grounds other than convenience (which is, of course, so enormous as to be utterly compelling) for treating the whole continuum of observables as physically real, as we actually do, rather than regarding it as a fictitious extension of some countable dense subset of observables.
So contrary to the claim of Meyer and Kent, the KS theorem is not nullified by the finite precision of real experimental setups because of the fundamental physical requirement that probabilities of outcomes of real experiments vary only slightly under slight variations in the configuration of the experimental apparatus, and because the import of the theorem can be stated in terms of whether certain outcomes never occur, or occur a definite nonzero fraction of the time in a set of randomly chosen ideal experiments.
But the elegant MK colorings of dense sets of observables make an instructive contribution to our understanding of the KS theorem, by forcing us to recognize that a principle of continuity of physical probability, obeyed by the quantum theory, plays an essential role in relating the conclusions of the theorem to real experiments. Indeed, the relationship between the MK dense colorable sets and the KS uncolorable set offers a novel perspective on why it is sensible to base physics on real (as opposed to rational) numbers, in spite of the finite precision of actual experimental arrangements.
Acknowledgment. This reexamination of the physical setting of the Kochen-Specker theorem was supported by the National Science Foundation, Grant No. PHY 9722065.
Thermodynamics of Heat Shock Response
Kristine Bourke Arnvig<sup>a</sup>, Steen Pedersen<sup>a</sup> and Kim Sneppen<sup>b</sup>
<sup>a</sup> Institute of Molecular Biology, Øster Farimagsgade 2A, 1353 Copenhagen K, Denmark.
<sup>b</sup> Nordita, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark
July 26, 1999
Abstract: Production of heat shock proteins is induced when a living cell is exposed to a rise in temperature. The heat shock response of protein DnaK synthesis in E.coli for temperature shifts $`T\to T+\mathrm{\Delta }T`$ and $`T\to T-\mathrm{\Delta }T`$ is measured as a function of the initial temperature $`T`$. We observe a reversed heat shock at low $`T`$. The magnitude of the shock increases when one increases the distance to the temperature $`T_0\approx 23^o`$, thereby mimicking the non-monotonic stability of proteins at low temperature. This suggests that stability related to hot as well as cold unfolding of proteins is directly implemented in the biological control of protein folding.
PACS numbers: 5.65.+b, 82.60-s, 87., 87.10.+e, 87.14.Ee, 87.15.-v, 87.17.-d
Chaperones direct protein folding in the living cell by binding to unfolded or misfolded proteins. The expression level of many of these catalysts of protein folding changes in response to environmental changes. In particular, when a living cell is exposed to a temperature shock the production of these proteins is transiently increased. The response is seen for all organisms, and in fact involves related proteins across all biological kingdoms. The heat shock response (HS) in E.coli involves about 40 genes that are widely dispersed over its genome. For E.coli the response is activated through the $`\sigma ^{32}`$ protein. The $`\sigma ^{32}`$ binds to RNA polymerase (RNAp) where it displaces the $`\sigma ^{70}`$ subunit and thereby changes RNAp’s affinity to a number of promoters in the E.coli genome. This induces production of the heat shock proteins. Thus if the gene for $`\sigma ^{32}`$ is removed from the E.coli genome, the HS is suppressed and the cell cannot grow above $`20^oC`$.
The HS is fast. In some cases it can be detected by a changed synthesis rate of e.g. the chaperone protein DnaK already about a minute after the temperature shift. Given that the DnaK protein in itself takes about 45 seconds to synthesize, the observed fast change in DnaK production must be very close to the physical mechanism that triggers the response. In fact we will argue for a mechanism which does not demand additional synthesis of $`\sigma ^{32}`$, in spite of the fact that DnaK is only expressed from a $`\sigma ^{32}`$ promoter, and thus postulate that changed synthesis of $`\sigma ^{32}`$ only plays a role in the later stages of the HS. To quantify the physical mechanism we measure the dependence of the HS on the initial temperature and find that the magnitude of the shock is inversely proportional to the folding stability of a typical globular protein.
The present work measures the expression of protein DnaK. Steady state levels at various temperatures and growth conditions can be found in the literature. The steady state number of DnaK in an E.coli cell varies from approx. 4000 at $`T=13.5`$ to approx. 6000 at $`37^oC`$, thereby remaining roughly proportional to the number of ribosomes. DnaK is a chaperone, and has a high affinity for hydrophobic residues, as these signal a possible misfold (for folded proteins the hydrophobic residues are typically in the interior). $`\sigma ^{32}`$ controls the expression of DnaK: $`\sigma ^{32}`$ must bind to the RNAp before this binds to the promoter for DnaK. One expects at most a few hundred $`\sigma ^{32}`$ in the cell, a number which is dynamically adjustable because the in vivo half life of $`\sigma ^{32}`$ is short (in steady state it is 0.7 minutes at $`42^o`$C and $`15`$ minutes at $`22^o`$C). The lifetime of $`\sigma ^{32}`$ is known to increase transiently during the HS.
The measurements were made on an E.coli K12 strain grown on A+B medium with a $`{}_{}{}^{3}H`$ labelled amino acid added. After the temperature shift we extracted 0.6 ml samples of the culture at subsequent times. Each sample was exposed to radioactively labelled methionine for 30 seconds, after which non-radioactive methionine was added in huge excess ($`10^5`$ fold). Methionine is an amino acid that the bacteria absorb very rapidly, and then use in protein synthesis. Protein DnaK was separated by 2-dim gel electrophoresis as described by O’Farrell, and the amount of synthesis during the 30 seconds of labelled methionine exposure was counted by radioactive activity and normalized first with respect to the $`{}_{}{}^{3}H`$ count and then with respect to total protein production. This results in an overall accuracy of about $`10\%`$. The result is a count of the differential rate of DnaK production (i.e. the fraction DnaK constitutes of total protein synthesis, relative to the same fraction before the temperature shift) as a function of time after the temperature shift. For the shift $`T\to T+\mathrm{\Delta }T`$ at time $`t=0`$ we thus record:
$$r(T,t)=\frac{\text{rate of DnaK production at time }t}{\text{rate of DnaK production at time }t=0}$$
(1)
where the denominator counts the steady-state production of DnaK at temperature $`T`$. In figure 1 we display 3 examples, all associated with temperature changes of absolute magnitude $`\mathrm{\Delta }T=7^oC`$. When changing $`T`$ from $`30^oC`$ to $`37^oC`$ one observes that $`r`$ increases to $`6`$ after a time of 0.07 generations. Later the expression rate relaxes to the normal level again, reflecting that other processes counteract the initial response. When reversing the jump, we see the opposite effect, namely a transient decrease in expression rate. Finally we also show a temperature jump at a low temperature, and here we observe the opposite effect, namely that a $`T`$ increase gives a decrease in expression rate. Here a corresponding $`T`$ decrease in fact gives an increase in expression rate (not shown). Thus the cell's response to a positive temperature jump at low temperature $`T`$ is opposite to that at high $`T`$.
In figure 2 we summarize our findings by plotting, for a number of positive shocks $`T\to T+7^oC`$, the value $`R`$ of $`r`$ at which the deviation from $`r=1`$ is largest. This value is well fitted by the following dependence on temperature $`T`$
$$ln(R(T))=(\alpha \mathrm{\Delta }T)(T-T_0)$$
(2)
where $`R(T=T_0=23^oC)=1`$ and $`\alpha \mathrm{\Delta }T=\frac{ln(R_1/R_2)}{T_1-T_2}=0.2K^{-1}`$ (i.e. $`\alpha =0.03K^{-2}`$).
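A quick numerical reading of this fit, using the quoted values $`\alpha =0.03K^{-2}`$, $`T_0=23^oC`$ and $`\mathrm{\Delta }T=7`$ K (the function name below is ours, for illustration only):

```python
import math

def induction_fold(T, alpha=0.03, dT=7.0, T0=23.0):
    """Induction fold R for a jump T -> T + dT, from the fit ln R = (alpha*dT)*(T - T0)."""
    return math.exp(alpha * dT * (T - T0))

print(round(induction_fold(23.0), 2))  # 1.0: no response at T0, by construction
print(round(induction_fold(30.0), 2))  # > 1: positive heat shock above T0
print(round(induction_fold(16.0), 2))  # < 1: the same positive jump represses DnaK below T0
```

The fitted $`R`$ for a $`30^oC\to 37^oC`$ jump comes out around 4, the same order as the factor of roughly 6 seen in figure 1.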
In order to interpret this result we first assume that the production rate of DnaK is controlled by two factors, a slowly varying factor $`C`$ that depends on composition of some other molecules in the cell, and an instantaneous chemical reaction constant $`K`$. Thus at time $`t`$ after a shift in temperature the production of DnaK in the cell is:
$$\frac{d[DnaK]}{dt}(t,T\to T+\mathrm{\Delta }T)=C(t,T\to T+\mathrm{\Delta }T)K(T+\mathrm{\Delta }T)$$
(3)
where the initial composition of molecules, $`C(t=0,T\to T+\mathrm{\Delta }T)`$, equals its equilibrium value at the temperature we changed from, i.e. $`C_{eq}(T)`$.
To lowest approximation, where we even ignore feedback from the changed DnaK level in the cell until the DnaK production rate has reached its peak value:
$$R=\frac{K(T+\mathrm{\Delta }T)}{K(T)}$$
(4)
which implies that
$$ln(R)=ln(K(T+\mathrm{\Delta }T))-ln(K(T))=\frac{dln(K)}{dT}\mathrm{\Delta }T$$
(5)
Using the linear approximation in Fig. 2
$$ln(K)\simeq const+\frac{\alpha }{2}(T-T_0)^2$$
(6)
Identifying $`K=exp(-\mathrm{\Delta }G/T)`$, the effective free energy associated with the reaction is
$$\mathrm{\Delta }G\simeq G_0-\frac{\alpha T}{2}(T-T_0)^2$$
(7)
Thus $`\mathrm{\Delta }G`$ has a maximum at $`T=T_0=23^o`$.
To interpret the fact that the HS is connected to a $`\mathrm{\Delta }G`$ that has a maximum at $`T=T_0\simeq 23^oC`$ we note that many proteins exhibit a maximum stability at $`T`$ between $`10^o`$C and $`30^o`$C . Thus $`\mathrm{\Delta }G=G(folded)-G(unfolded)`$ connected to the folded state of a protein is at a minimum at $`T_0`$. The corresponding maximum of stability is in effect the result of a complicated balance between destabilization from the entropy of polymer degrees of freedom at high $`T`$, and destabilization due to the decreased entropic contribution to hydrophobic stabilization of proteins at low $`T`$ . One should expect a similar behaviour also for some parts of a protein , and thus expect maximal binding for hydrophobic protein-protein associations around $`T_0`$. Quantitatively, the size of the $`\mathrm{\Delta }G`$ change inferred from the measured value of $`\alpha =0.03K^{-2}`$ corresponds to a change in $`G`$ of about 20-30 kT (about 15 kcal/mol) for a temperature shift of about 40-50$`{}_{}{}^{o}C`$. This matches the change observed for typical single domain proteins . Thus the HS is associated with a $`\mathrm{\Delta }G`$ change equivalent to the destabilization of a typical protein.
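The arithmetic behind this estimate can be checked directly: by eq. (7), the change of $`\mathrm{\Delta }G`$ in units of kT between $`T_0`$ and a temperature $`|T-T_0|`$ away is $`(\alpha /2)(T-T_0)^2`$. A small check (our own):

```python
alpha = 0.03  # K^-2, from the fit in eq. (2)

def dG_change_in_kT(offset):
    """|Change of Delta G| in units of kT for |T - T0| = offset, per eq. (7)."""
    return 0.5 * alpha * offset ** 2

for off in (40.0, 45.0):
    print(off, round(dG_change_in_kT(off), 1))   # 24.0 and 30.4 kT: the quoted 20-30 kT band

kT_in_kcal_per_mol = 1.987e-3 * 300.0            # R*T near 300 K, in kcal/mol
print(round(25.0 * kT_in_kcal_per_mol, 1))       # ~15 kcal/mol, as stated in the text
```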
The above picture still leaves us with the puzzle that protein binding and folding stability is at a maximum around $`T_0`$, whereas the effective $`\mathrm{\Delta }G`$ we observe has a minimum there. This can only be reconciled if the interaction we consider is inhibitory. An inhibitory binding that controls the feedback is indeed possible . To summarize our understanding, we display in figure 3 the molecular network that we believe is controlling the transient heat shock levels of DnaK in the cell. The key inhibitory control mechanism is the association of DnaK to $`\sigma ^{32}`$. DnaK binds to unfolded protein residues , and the amount of DnaK-$`\sigma ^{32}`$ association thereby monitors the cellular consequences of a shift in temperature.
Impacts of mutants: We have measured the heat shock in a strain where the $`\sigma ^{32}`$ gene is located on a high copy number plasmid. In this strain, where the synthesis rate for $`\sigma ^{32}`$ may approach that of DnaK, we find a HS that was smaller and that also remained positive down to temperature jumps from $`T`$ well below $`T_0=23^oC`$. According to fig. 3 this reflects a situation where both $`\sigma ^{32}`$ and DnaK are increased. With an increased DnaK level, one may have a situation where DnaK exceeds the amount of unfolded proteins, and the free DnaK concentration thus becomes nearly independent of the overall state of proteins in the cell. Also the huge increase in $`\sigma ^{32}`$ supply may decrease the possibility for the sink to act effectively. Thereby other effects, e.g. the temperature dependence of the binding $`\sigma ^{32}RNAp`$ versus the binding $`\sigma ^{70}RNAp`$ (i.e. $`K_{32}/K_{70}`$ from figure 3), may govern a response that otherwise would be masked by a strongly varying inhibition from $`\sigma ^{32}`$ binding to DnaK.
The reaction network in Figure 3 allows a more careful analysis of the production rate of DnaK:
$$\frac{d[DnaK]}{dt}\propto [RNAp\cdot \sigma ^{32}]\propto \frac{[\sigma ^{32}]}{1+g[DnaK]}$$
(8)
where the $`\sigma ^{32}`$ level changes when bound to DnaK due to degradation by proteases (the “sink” in figure 3):
$$\frac{d[\sigma ^{32}]}{dt}\simeq Supply-[DnaK\cdot \sigma ^{32}]\simeq Supply-\frac{[\sigma ^{32}]}{1+(g[DnaK])^{-1}}$$
(9)
Here $`g=exp(\mathrm{\Delta }G/T)`$ is an effective reaction constant. In the approximation where we ignore free $`\sigma ^{32}`$, free $`\sigma ^{70}`$ and the fraction of DnaK bound by $`\sigma ^{32}`$, one has:
$$g=\left(\frac{K_{70}[\sigma ^{70}]}{K_{32}[RNAp]}\right)\left(\frac{K_{D32}}{1+K_{DU}[U_f]}\right)$$
(10)
The first term expresses the $`\sigma `$’s competition for RNAp binding, whereas the second term expresses the DnaK-controlled response. $`[U_f]`$, which denotes unfolded proteins that are not bound to DnaK, decreases with increasing \[DnaK\].
When moving away from $`T_0`$, i.e. lowering $`g`$ by increasing $`[U_f]`$, the rate of DnaK production increases. For an approximately unchanged “Supply” the extremum in production occurs when $`d[\sigma ^{32}]/dt=0`$ and has a value that is approximately proportional to $`1/g`$. With the assumption that “Supply” does not have time to change before the extreme response is obtained, we identify $`R`$ with $`1/g`$ and thereby with the free energy difference $`\mathrm{\Delta }G`$ that controls the HS. The early rise in $`r`$ is reproduced when most $`\sigma ^{32}`$ are bound to RNAp, reflected in the condition $`g[DnaK]<<1`$. This implies a significant increase in $`\sigma ^{32}`$ lifetime under a positive HS, and implies that the early HS is due to a changed depletion rate of $`\sigma ^{32}`$. Later the response is modified, partly by a changed “Supply” and finally by a changed level of the heat shock induced protease HflB, which depends on and counteracts the $`\sigma ^{32}`$ level in the cell.
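The transient response implied by eqs. (8)-(9) can be sketched with a simple forward-Euler integration. All rate constants below are invented for illustration only, and we add a dilution loss term for DnaK (not present in eq. (8)) so that a pre-shift steady state exists; dropping $`g`$ at $`t=0`$ mimics the increase of unfolded protein when moving away from $`T_0`$:

```python
import numpy as np

def simulate(g_before=1.0, g_after=0.2, supply=1.0, lam=0.1,
             dt=0.01, t_pre=400.0, t_post=400.0):
    """Euler integration of a minimal version of eqs. (8)-(9); parameters are illustrative."""
    D, S = 1.0, 1.0                                # [DnaK] and [sigma32]
    n_pre, n_post = int(t_pre / dt), int(t_post / dt)
    rate = []
    for i in range(n_pre + n_post):
        g = g_before if i < n_pre else g_after     # temperature shift at i = n_pre
        prod = S / (1.0 + g * D)                   # eq. (8): DnaK production rate
        degr = S / (1.0 + 1.0 / (g * D))           # eq. (9): sigma32 loss through the sink
        D += dt * (prod - lam * D)                 # the dilution term lam*D is our addition
        S += dt * (supply - degr)
        rate.append(prod)
    rate = np.array(rate)
    return rate[n_pre:] / rate[n_pre - 1]          # r(t), normalized to the pre-shift rate

r = simulate()
print(round(r[0], 2), round(r.max(), 2), round(r[-1], 2))
```

The output shows an immediate jump of the normalized production rate above 1, a further transient rise while $`\sigma ^{32}`$ accumulates, and a relaxation toward a new plateau, qualitatively as in figure 1.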
The largest uncertainty in our analysis is the possibility of a significant time variation in “Supply” and the HflB level during the HS. As these will govern the late stages of the heat shock, in particular including its decline, the variation in “$`\mathrm{\Delta }G`$” for proteins in the cell may easily be underestimated from using the peak height variation with $`T`$. Adding to the uncertainty in what $`\mathrm{\Delta }G`$ precisely represents is also the fact that although we only measure DnaK, it can be the complex of the heat shock proteins DnaK, GrpE and DnaJ that senses the state of unfolded proteins in the cell, as removal of any of these displays an increased lifetime of $`\sigma ^{32}`$ . Such cooperativity may amplify the heat shock.
For the final interpretation of $`\mathrm{\Delta }G`$ we stress that it effectively counts the free energy difference between the complex DnaK$`\cdot \sigma ^{32}`$ and that of DnaK being free or being bound to unfolded proteins in the cell. Depending on the fraction of DnaK relative to unfolded proteins \[U\] in the cell, i.e. whether $`K_{DU}[U_f]`$ is larger or smaller than 1, the HS will or will not depend on the overall folding stability of proteins in the cell. Thus, for many more unfolded proteins than DnaK in the cell, the measured $`\mathrm{\Delta }G`$ reflects both an increase of the binding to unfolded residues, $`K_{DU}[U_f]`$, as well as a decrease of the DnaK$`\cdot \sigma ^{32}`$ binding $`K_{D32}`$ when moving away from $`T_0`$. Our data do not discriminate between these processes. This discrimination can however be obtained from the data of ref. , where it was found that overexpression of DnaK through a $`\sigma ^{32}`$ dependent pathway in fact represses HS. As DnaK$`\cdot \sigma ^{32}`$ binding still plays a crucial role in this setup, the vanishing HS of ref. supports a scenario where too much DnaK implies $`K_{DU}[U_f]<<1`$. Then $`g`$ in eq. 10, and thereby the $`\sigma ^{32}`$ response, becomes insensitive to the amount of unfolded proteins in the cell.
We conclude that the HS is induced through the changed folding stability of proteins throughout the cell, sensed by a changed need for chaperones. We believe this reflects primarily an increased amount of proteins that are on the way to becoming folded (nascent proteins), and not an increased denaturation of already folded proteins, because spontaneous denaturation of proteins is extremely unlikely at these temperatures. Thus the deduced sensitivity to the thermodynamic stability of proteins may primarily reflect a corresponding sensitivity to changes in folding times.
We now discuss related proposals of “cellular thermometers” for the HS. McCarthy et al. proposed that the thermometer was a change in the autophosphorylation of the DnaK protein. This should cause a temperature dependent activity of this protein. However, their data do not indicate that the reversed HS response that we observe at $`T<23^o`$C could be caused by such a mechanism. Gross made an extensive network of possible chemical feedback mechanisms which connect a rise in the $`\sigma ^{32}`$ level with the folding state of proteins in the cell. It included HS induced through an increased synthesis of $`\sigma ^{32}`$, an increased release of $`\sigma ^{32}`$ from DnaK, as well as an increased stability of $`\sigma ^{32}`$ when DnaK gets bound to unfolded protein residues. Our fig. 3 narrows these possibilities to a minimalistic chemical response including the two latter mechanisms combined, and of these only the option of a changed stability of $`\sigma ^{32}`$, due to a sink controlled by DnaK$`\cdot \sigma ^{32}`$, is able to reproduce also the fact that the maximal HS takes time to develop. Regarding the increased synthesis of $`\sigma ^{32}`$ proposed by Morita , we note that for high temperatures $`T`$, the major mechanism that controls $`\sigma ^{32}`$ synthesis is in fact a $`T`$ dependent change in the mRNA structure that leads to an increased translation at increased $`T`$ . However, again, our finding of a reversed HS at $`T<23^o`$C is not readily explained by such changes in the stability of mRNA structures below $`23^o`$C.
In summary, we observed that a positive heat shock is induced when $`T`$ changes away from $`T_0\simeq 23^oC`$. We found that the size of the heat shock, qualitatively as well as quantitatively, follows the thermodynamic stability of proteins with temperature. This suggests that stability related to hot as well as to cold unfolding of proteins is implemented in the HS. We demonstrated that such an implementation is possible in a minimalistic chemical network where the control is through an inhibitory binding of the central heat shock proteins. We finally saw that the temporal behaviour of the HS is reproduced when this inhibitory binding controls the heat shock by exposing $`\sigma ^{32}`$ to a protease.
Figure Captions
* Fig. 1. Heat shock response measured as the change in DnaK production rate as a function of time since a temperature shift. The production is normalized with the overall protein production rate, as well as with its initial rate. In all cases we use an absolute $`\mathrm{\Delta }T=7^oC`$. We stress that the time scale is in units of one bacterial generation measured at the initial temperature. At $`T=37^o`$C the generation time is 50 min, at $`30^oC`$ it is 75 min and at $`20^o`$C it is 4 hours.
* Fig. 2. Induction fold $`R`$ for positive temperature jumps as a function of the initial temperature. The straight line corresponds to the fit used in eq. 2.
* Fig. 3. Sufficient molecular network for the early heat shock. All dashed lines with arrows at both ends are chemical reactions which may reach equilibrium within a few seconds (they represent the homeostatic response). The full directed arrows represent one-way reactions, with the production of DnaK through the $`\sigma ^{32}RNAp`$ complex being the central one in this work (this step is catalytic; it involves DNA translation etc.). The time and temperature dependence of the early HS is reproduced when most DnaK is bound to unfolded proteins, and when the remaining DnaK binds to $`\sigma ^{32}`$ to facilitate a fast depletion of $`\sigma ^{32}`$ through degradation by the protease HflB.
## Appendix A
As a very simple example, we take, for the vector field $`V\equiv \beta (g)\frac{\partial }{\partial g}`$ in a one-dimensional action space (coordinate $`g`$), the first term of $`\beta (g)`$ in a $`g`$ expansion, say
$$\beta (g)=bg^2.$$
The exponential $`\mathrm{exp}\{t\underset{¯}{V}\}x`$ is defined by its Taylor expansion
$$\mathrm{exp}\{t\underset{¯}{V}\}=\sum _{n=0}^{\mathrm{\infty }}(t\underset{¯}{V})^n\frac{1}{n!}$$
From
$$\underset{¯}{V}\underset{¯}{V}\equiv bg^2\frac{\partial }{\partial g}\left(bg^2\frac{\partial }{\partial g}\right)=2b^2g^3\frac{\partial }{\partial g}+b^2g^4\frac{\partial ^2}{\partial g^2}$$
it is straightforward to deduce
$$\underset{n\text{ factors}}{\underbrace{\underset{¯}{V}\underset{¯}{V}\mathrm{\cdots }\underset{¯}{V}}}=n!b^ng^{n+1}\frac{\partial }{\partial g}+O\left(\frac{\partial ^n}{\partial g^n},n\ge 2\right).$$
Therefore
$$\overline{g}(t,g)=\mathrm{exp}\{t\underset{¯}{V}\}g=\sum _{n=0}^{\mathrm{\infty }}t^nb^ng^{n+1}=\frac{g}{1-tbg},$$
a well-known result.
As an exercise, the reader can establish, according to the above, the approximate Bogoljubov-Shirkov relation in the next order for $`\overline{g}(t,g)`$
$$\overline{g}(t,g)=g[1-b_1gt+\frac{b_2}{b_1}g\mathrm{log}(1-b_1gt)]^{-1}$$
by taking
$$\beta (g)=b_1g^2+b_2g^3.$$
A second exercise is to show that
$$\mathrm{exp}\{t\underset{¯}{V}\}g^n=\overline{g}^n(t,g)=\frac{g^n}{(1-gbt)^n}$$
when the vector field $`\underset{¯}{V}=\beta (g)\frac{\partial }{\partial g}`$ is approximated by the first term in the $`g`$ expansion of $`\beta (g)`$, i.e. $`\beta (g)=bg^2`$, as in the first example.
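Both statements are easy to verify numerically: the partial sums of the Taylor series reproduce $`g/(1-tbg)`$, and so does a direct integration of the flow equation $`d\overline{g}/dt=\beta (\overline{g})=b\overline{g}^2`$. A small sketch with illustrative numbers (our own):

```python
b, g0, t = 0.5, 0.3, 1.0      # illustrative values with |t*b*g0| < 1

closed = g0 / (1.0 - t * b * g0)

# Partial sums of the Taylor series  sum_n t^n b^n g0^(n+1)
series = sum((t * b) ** n * g0 ** (n + 1) for n in range(200))

# Fourth-order Runge-Kutta integration of dg/dt = b g^2, starting from g(0) = g0
g, h = g0, 1.0e-4
for _ in range(int(t / h)):
    k1 = b * g ** 2
    k2 = b * (g + 0.5 * h * k1) ** 2
    k3 = b * (g + 0.5 * h * k2) ** 2
    k4 = b * (g + h * k3) ** 2
    g += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

print(closed, series, g)      # all three agree
```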
Since $`S`$-matrix elements can be expanded in powers of $`g`$
$$S(p_i,g)=\sum _{n=0}^{\mathrm{\infty }}a_n(p_i)g^n$$
it follows that
$$\mathrm{exp}\{t\underset{¯}{V}\}S(\mathrm{\dots },g)=S(\mathrm{\dots },\overline{g})$$
($`p_i`$ and the dots stand for arguments other than $`g`$ and independent of it. The case when other couplings, like $`g_i`$, and masses $`m_i`$ occur goes outside the one-dimensional action space, and $`\underset{¯}{V}`$ becomes $`V=V^\alpha \frac{\partial }{\partial x^\alpha }`$, the $`x^\alpha `$ being the coordinates in the enlarged action space, namely the $`g_i`$ and $`m_i`$ above.)
## Appendix B
The passage from a one-parameter case to the case with several parameters $`c_i`$ is far from trivial, although it is treated at length in the textbook references, especially \[T.1\]–\[T.4\]. Sketching what happens: from an element $`g`$ depending on one parameter, $`g(t)g(s)=g(s+t)`$ with $`g(t)=1+t\underset{¯}{V}`$ for $`t`$ infinitesimal, one goes to
$$g(t_1,\mathrm{\dots },t_i)=1+t_1\underset{¯}{V}_1+t_2\underset{¯}{V}_2+\mathrm{\dots }+t_i\underset{¯}{V}_i$$
with all $`t_i`$ infinitesimal. ($`i`$ generators $`\underset{¯}{V}_i`$)
The combination of two such elements, $`g(t_1,\mathrm{\dots },t_i)g(s_1,\mathrm{\dots },s_i)`$, is given by the well-known Baker-Campbell-Hausdorff formula.
For the product to be also an element of the set, the condition $`[\underset{¯}{V}_i,\underset{¯}{V}_j]=c_{ij}^\kappa \underset{¯}{V}_\kappa `$ is necessary and sufficient. It was a condition explicitly formulated in for a set of normalization conditions to form a group. For our concern, the $`g(t)`$ are connected with the flow $`\sigma _t()`$. Therefore the group property concerns the flows, as in the one-parameter case discussed in this paper.
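This closure condition is easy to check explicitly in a concrete matrix representation; as an illustration (our example, not from the text) take the three Pauli matrices, for which $`[V_i,V_j]=2i\epsilon _{ijk}V_k`$:

```python
import numpy as np

# Pauli matrices as generators of a three-parameter group (su(2))
V1 = np.array([[0, 1], [1, 0]], dtype=complex)
V2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
V3 = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# The commutators close on the generators themselves: [V1, V2] = 2i V3, and cyclic.
assert np.allclose(comm(V1, V2), 2j * V3)
assert np.allclose(comm(V2, V3), 2j * V1)
assert np.allclose(comm(V3, V1), 2j * V2)
print("closure verified")
```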
The non-trivial aspect now is that we must distinguish the left combination of $`g(t_i)`$ with $`g(s_i)`$ from the right combination. All this is treated in the mentioned textbooks and goes beyond the scope of the present modest account of vector fields.
Textbooks. Rigorous proofs lie beyond the scope of this paper. They can be found
* for mathematicians with a view on physics, for instance in Y. Choquet-Bruhat et al. \[T.1\], especially Chapter III, Sections A and B. Sections C and D offer a generalization to several-parameter Lie groups with vector fields $`V_i,i=1\mathrm{\dots }n`$, as infinitesimal generators, n = dim Lie Algebra = dim G. For $`n=\mathrm{\infty }`$, see Chapter VII, Section A.
* for mathematicians, in several treatises including this subject: for example K. Yano \[T.2\], Chapters I to VII included; S. Helgason \[T.3\], Chapters I and II (there our $`V`$ is denoted by $`X`$, and our $`\sigma _t`$ by $`\gamma (t)`$). Proposition 5.3 and Theorem 6.1 of Chapter I are cornerstones of the rigorous proofs. Chapter II offers the generalization from one-parameter to several-parameter Lie group algebras (§1). See also S. Kobayashi and K. Nomizu \[T.4\].
* For physicists, in oversimplified compendiums of differential geometry, like, for instance some chapters of \[T.5\], with definitions of the concepts used here and some sketches of proofs. A valuable reading is also A. Visconti \[T.6\].
* Y. Choquet-Bruhat, C. De Witt-Morette and M. Dillard-Bleick, Analysis, Manifolds and Physics, (North-Holland, Amsterdam, 1977 and subsequent editions, 1982 etc.).
* K. Yano, The theory of Lie derivatives and its applications, (North-Holland, Amsterdam, 1957).
* S. Helgason, Differential geometry, Lie groups and Symmetric spaces, (Academic Press, New York, 1978).
* S. Kobayashi and K. Nomizu, Foundations of Differential Geometry, I, II (Wiley Interscience, New York, 1963).
* R. Bertlmann, Anomalies in Quantum Field Theory (Clarendon Press, Oxford, 1996).
* A. Visconti, Introductory differential geometry for physicists (World Scientific, Singapore, 1992).
# Fractal Analysis of Electrical Power Time Series
## Abstract
Fractal time series have been shown to be self-affine and are characterized by a roughness exponent $`H`$. The exponent $`H`$ is a measure of the persistence of the fluctuations associated with the time series. We use a recently introduced method for measuring the roughness exponent, the mobile-averages analysis, to compare the electrical power demand of two different places, a touristic city and a whole country.
Over the last years, physicists, biologists and economists have found a new way of understanding the growth of complexity in nature. Fractals are the way of seeing order and pattern where formerly only the random and unpredictable had been observed. The father of fractals, Benoit Mandelbrot, began thinking about them while studying the distribution of large and small incomes in economics in the $`60`$’s. He noticed that an economist’s article of faith seemed to be wrong. It was the conviction that small, transient changes have nothing in common with large, long-term changes. Instead of separating tiny changes from grand ones, his picture bound them together. He found patterns across every scale. Each particular change was random and unpredictable. But the sequence of changes was independent of scale: curves of daily changes and monthly changes matched perfectly.
Mandelbrot worked at IBM, and after his study in economics he came upon the problem of noise in telephone lines used to transmit information between computers. The transmission noise was well known to come in clusters. Periods of errorless communication would be followed by periods of errors. Mandelbrot provided a way of describing the distribution of errors that predicted exactly the pattern of errors observed. His description worked by making deeper and deeper separations between periods with errors and periods without errors. But within periods of errors (no matter how short) some completely clean periods will be found. Mandelbrot was duplicating the Cantor set, created by the $`19^{th}`$ century mathematician Georg Cantor.
In the fractal way of looking at nature, roughness and asymmetry are not just accidents on the classic and smooth shapes of Euclidean geometry. Mandelbrot has said that “mountains are not cones and clouds are not spheres”. Fractals have been named the geometry of nature because they can be found everywhere in nature: mammalian lungs, trees, and coastlines, to name just a few.
An English scientist, Lewis Richardson, around 1920 checked encyclopedias in Spain, Portugal, Belgium and the Netherlands and discovered discrepancies of twenty percent in the lengths of their common frontiers. In , the authors measured the coast of Britain on a geographical map with different compass settings by counting the number of steps along the coast with each setting. The smaller the setting they used, the longer the length of the coastline they obtained. If this experiment is done to measure the perimeter of a circle (or any other Euclidean shape), the length obtained converges with smaller compass settings. In his famous book Mandelbrot states that the length of a coastline can never actually be measured, because it depends on the length of the ruler we use.
Fractal sets show self-similarity with respect to space. Fractal time series show statistical self-similarity with respect to time. In the social and economic fields, time series are very common. In , a simple way to demonstrate self-similarity in a time series of stock returns is devised by asking the reader to guess which of three graphs, shown with no scale on the axes, corresponds to daily, weekly and monthly returns.
An important statistic used to characterize time series is the Hurst exponent . Hurst was a hydrologist who worked on the Nile River Dam project in the first decades of this century. At that time, it was common to assume that the uncontrollable influx of water from rainfall followed a random walk, in which each step is random and independent of previous ones. The random walk is based on the fundamental concept of Brownian motion. Brownian motion refers to the erratic displacements of small solid particles suspended in a liquid. The botanist Robert Brown, about 1828, realized that the motion of the particles is due to light collisions with the molecules of the liquid.
Hurst measured how the reservoir level fluctuated around its average level over time. The range of this fluctuation depends on the length of time used for the measurement. If the series were produced by a random walk, the range would increase with the square root of time, as $`T^{1/2}`$. Hurst found that the random walk assumption was wrong for the fluctuations of the reservoir level, as well as for most natural phenomena, like temperatures, rainfall and sunspots. The fluctuations of all these phenomena may be characterized as a “biased random walk” (a trend with noise) with range increasing as $`T^H`$, with $`H>0.5`$. Mandelbrot called this kind of generalized random walk fractional Brownian motion. In high-school statistics courses we have been taught that nature follows the Gaussian distribution, which corresponds to a random walk and $`H=1/2`$. Hurst’s findings show that this is wrong.
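Hurst's range analysis translates into a few lines of code. The sketch below is our own simplified rescaled-range (R/S) implementation, applied to uncorrelated Gaussian increments, for which the slope of $`log(R/S)`$ versus the log of the window size should come out near $`1/2`$ (small windows are known to bias the estimate slightly upward):

```python
import numpy as np

def rs_exponent(x, window_sizes=(16, 32, 64, 128, 256, 512)):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent of the increment series x."""
    log_w, log_rs = [], []
    for w in window_sizes:
        ratios = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation ("reservoir level")
            r = dev.max() - dev.min()           # range of the accumulated deviations
            s = seg.std()
            if s > 0:
                ratios.append(r / s)
        log_w.append(np.log(w))
        log_rs.append(np.log(np.mean(ratios)))
    return np.polyfit(log_w, log_rs, 1)[0]      # slope ~ H

rng = np.random.default_rng(1)
est = rs_exponent(rng.standard_normal(2 ** 15))
print(round(est, 2))
```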
The proper range for $`H`$ is from $`0`$, corresponding to very rough random fractal curves, to 1 corresponding to rather smooth looking fractals. In fact, there is a relation between $`H`$ and the fractal dimension $`D`$ of the graph of a random fractal:
$$D=2-H.$$
(1)
Thus, as the exponent $`H`$ varies from $`0`$ to $`1`$, the dimension $`D`$ decreases from $`2`$ to $`1`$, corresponding to more or less wiggly lines drawn in two dimensions.
Fractional Brownian motion can be divided into three distinct categories: $`H<1/2`$, $`H=1/2`$ and $`H>1/2`$. The case $`H=1/2`$ is the ordinary random walk, or Brownian motion, with independent increments corresponding to the normal distribution.
For $`H<1/2`$ there is a negative correlation between the increments. This type of system is called antipersistent. If the system has been up in some period, it is more likely to be down in the next period. Conversely, if it was down before, it is more likely to be up next. The antipersistence strength depends on how far $`H`$ is from $`1/2`$.
For $`H>1/2`$ there is a positive correlation between the increments. This is a persistent series. If the system has been up (down) in the last period, it will likely continue positive (negative) in the next period. Trends are characteristic of persistent series. The strength of persistence increases as $`H`$ approaches $`1`$. Persistent time series are plentiful in nature and in social systems. As an example, the Hurst exponent of the Nile river is $`0.9`$, a long-range persistence that requires unusually high barriers, such as the Aswân High Dam, to contain flood damage.
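These three regimes can be stated quantitatively. For fractional Gaussian noise (the increment process of fractional Brownian motion) the lag-one autocorrelation of the increments is $`\rho (1)=2^{2H-1}-1`$, a standard result not derived here; its sign separates the three cases:

```python
def rho1(H):
    """Lag-one autocorrelation of fractional Gaussian noise with Hurst exponent H."""
    return 2.0 ** (2.0 * H - 1.0) - 1.0

for H in (0.3, 0.5, 0.7, 0.9):
    print(H, round(rho1(H), 3))   # negative, zero, positive, strongly positive
```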
Generally, experimental time series do not have a unique $`H`$. In that case the time series may be considered a multifractal: $`H`$ is associated with the singularity exponent $`\alpha `$, restricted to an interval $`\alpha _{min}<\alpha <\alpha _{max}`$ and characterized by the singularity spectrum $`f(\alpha )`$.
In order to analyze the fractal properties of electrical power consumption, we used a recently developed technique, based on the so-called mobile averages , to obtain Hurst exponents of self-affine time series. The method is introduced as follows: consider a time series $`y(t)`$ given at discrete times $`t`$; a mobile average $`\overline{y}(t)`$ is defined as
$$\overline{y}(t)=\frac{1}{T}\sum _{i=0}^{T-1}y(t-i),$$
(2)
i.e., the average of the last $`T`$ data points. It can easily be shown that if $`y(t)`$ increases (decreases) with time, $`\overline{y}<y`$ ($`\overline{y}>y`$). Thus the mobile average captures the trend of the signal over the time interval $`T`$. Although the mobile average is used mainly for studying the behavior of financial time series, the procedure can in fact be used on any time series.
Now, consider two different mobile averages $`\overline{y}_1`$ and $`\overline{y}_2`$ computed over intervals $`T_1`$ and $`T_2`$ respectively, such that $`T_2>T_1`$. As is well known in stock chart analysis, the crossing of $`\overline{y}_1`$ and $`\overline{y}_2`$ coincides with drastic changes of the trend of $`y(t)`$. If $`y(t)`$ increases for a long period before decreasing rapidly, $`\overline{y}_1`$ will cross $`\overline{y}_2`$ from above. On the contrary, if $`\overline{y}_1`$ crosses $`\overline{y}_2`$ from below, the crossing point coincides with an upsurge of the signal $`y(t)`$. Crossings of this type are used in empirical finance to extrapolate the evolution of the market.
If a signal is self-affine, it shows scaling properties of the form
$$y(t)\simeq b^{-H}y(bt),$$
(3)
where the exponent $`H`$ is the Hurst exponent.
It is well known that the set of crossing points between the signal $`y(t)`$ and the $`y=0`$ level is a Cantor set with fractal dimension $`1-H`$. On the other hand, in reference the question of whether there is a Cantor set for the crossing points between $`\overline{y}_1`$ and $`\overline{y}_2`$ is raised. The density $`\rho `$ of such crossing points is calculated for various artificially generated time series with different values of $`H`$. In all the checked cases, $`\rho `$ is independent of the size $`N`$ of the time series and the fractal dimension of the set of crossing points is $`1`$, i.e., the points are homogeneously distributed in time along $`\overline{y}_1`$ and $`\overline{y}_2`$. Due to the homogeneous distribution of the crossing points, the forecasting of the crossing points between $`\overline{y}_1`$ and $`\overline{y}_2`$ is impossible even for self-affine signals $`y(t)`$. However, in the same reference , it is shown that there exists a scaling relation between $`\rho `$ and $`\mathrm{\Delta }T`$, where $`\mathrm{\Delta }T`$ is defined as
$$\mathrm{\Delta }T=\frac{T_2-T_1}{T_2}.$$
(4)
With this definition, it is shown that $`\rho `$ scales as
$$\rho \simeq \frac{1}{T_2}\left[\mathrm{\Delta }T(1-\mathrm{\Delta }T)\right]^{H-1},$$
(5)
i.e., for small values of $`\mathrm{\Delta }T`$, $`\rho `$ scales as $`\mathrm{\Delta }T^{H-1}`$, while the relation $`\rho \sim (1-\mathrm{\Delta }T)^{H-1}`$ is valid for $`\mathrm{\Delta }T`$ close to $`1`$.
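A minimal implementation of this estimator (our own sketch, applied to an ordinary random walk, for which $`H=1/2`$ should be recovered approximately) is:

```python
import numpy as np

def hurst_mobile_averages(y, T2=100, T1_list=(5, 10, 20, 30, 40, 50, 60, 70, 80, 90)):
    """Estimate H from the density of crossing points of two mobile averages, via eq. (5)."""
    log_x, log_rho = [], []
    for T1 in T1_list:
        ybar1 = np.convolve(y, np.ones(T1) / T1, mode="valid")
        ybar2 = np.convolve(y, np.ones(T2) / T2, mode="valid")
        d = ybar1[T2 - T1:] - ybar2            # align both averages on the same times
        crossings = np.sum(np.sign(d[1:]) != np.sign(d[:-1]))
        dT = (T2 - T1) / T2
        log_x.append(np.log(dT * (1.0 - dT)))
        log_rho.append(np.log(crossings / d.size))
    slope = np.polyfit(log_x, log_rho, 1)[0]   # slope = H - 1 by eq. (5)
    return slope + 1.0

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2 ** 16))  # Brownian motion, H = 1/2
h_est = hurst_mobile_averages(walk)
print(round(h_est, 2))
```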
One practical interest of the above relations stems from the easy implementation of an algorithm for measuring $`H`$. We have used the scaling properties of $`\rho `$ in order to estimate the fractal exponent for two time series representing the electrical power demand of two completely different places: Australia, a whole continent, and Mar del Plata, a touristic city in Argentina. Data on the Australian electrical power demand were obtained at the web site http://www.tg.nsw.gov.au/sem/statistics, while the Mar del Plata electrical demand time series was kindly provided by Centro Operativo de Distribucion Mar del Plata, belonging to EDEA, a local energy distribution enterprise.
Two time series of 8832 points were taken, as seen in Fig. 1 a) and b). In Fig. 2, the density $`\rho `$ of crossing points as a function of the relative difference $`\mathrm{\Delta }T`$, with $`T_2=100`$, is shown for the Australian time series. The slope through the points is $`-0.264\pm 0.024`$, which means $`H\simeq 0.736`$ according to equation 5. In Fig. 3, the density $`\rho `$ of crossing points for the same value of $`T_2`$ is shown for the Mar del Plata time series. In this case, the corresponding slope through the points is $`-0.218\pm 0.012`$, which means $`H\simeq 0.782`$. From the exponent $`H`$, the corresponding fractal dimensions $`D_f`$ can be calculated using equation 1. In both cases similar exponents $`H`$ are obtained, with approximately the same degree of persistence, although the two time series correspond to completely different kinds of electrical power consumption, a continent and a touristic city. A more detailed fractal analysis of these time series was recently done by one of us using wavelets . The wavelet technique of fractal analysis is more powerful than the mobile-averages technique, but much more complicated. In that work both time series were characterized as multifractals. However, the results reported here for the roughness exponent are in good agreement with the value of $`H`$ associated with the maximum of the multifractal spectrum.
FIGURE CAPTIONS
Fig. 1. a) Electrical power demand time series for Australia. b) Electrical power demand time series for Mar del Plata.
Fig. 2 Log-log plot of the density $`\rho `$ of crossing points as a function of the relative difference $`\mathrm{\Delta }T`$, with $`T_2=100`$, for the Australian time series. The slope through the points is $`-0.264\pm 0.024`$, which means $`H\simeq 0.736`$.
Fig. 3 Log-log plot of the density $`\rho `$ of crossing points as a function of the relative difference $`\mathrm{\Delta }T`$, with $`T_2=100`$, for the Mar del Plata time series. The corresponding slope through the points is $`-0.218\pm 0.012`$, which means $`H\simeq 0.782`$.
no-problem/9912/hep-ph9912229.html | ar5iv | text | # LANGEVIN INTERPRETATION OF KADANOFF-BAYM EQUATIONS
## 1 Motivation
Non-equilibrium many body theory has traditionally been a major topic of research for describing various (quantum) transport phenomena in plasma physics, condensed matter physics and nuclear physics. Over the last years a lot of interest in non-equilibrium quantum field theory has also emerged in particle physics. A very powerful diagrammatic tool is given by the ‘Schwinger-Keldysh’ or ‘closed time path’ (CTP) technique by means of non-equilibrium Green’s functions for describing a quantum system also beyond thermal equilibrium. The resulting causal and nonperturbative equations of motion (in various approximations), the so called Kadanoff-Baym (KB) equations, have to be considered as an ensemble average over the initial density matrix characterizing the preparation of the initial state of the system. If the system behaves dissipatively, then, as a consequence of the famous fluctuation-dissipation theorem, fluctuations must exist. The Kadanoff-Baym equations have then to be understood as an ensemble average over all the possible fluctuations. This inherent stochastic aspect of the KB equations is what we want to point out, and we thus provide, as we believe, some new physical insight into their rather complex structure. In what follows we want to point out their intimate connection to Langevin-like processes.
As an elementary reminder of a Langevin process let us first briefly review the description of classical Brownian motion. Consider a heavy ‘Brownian’ particle with mass $`M`$ placed in a thermal environment obeying an effective Langevin equation, i.e.
$$M\ddot{x}+2\int _{-\mathrm{\infty }}^{t}dt^{}\mathrm{\Gamma }(t-t^{})\dot{x}(t^{})=\xi (t).$$
(1)
Here $`\xi (t)`$ has to be interpreted as a ‘noisy’ source driving the fluctuations of the Brownian particle. For many applications $`\xi (t)`$ is completely specified by a Gaussian distribution with zero mean and the correlation kernel $`I`$ :
$$I(t-t^{}):=\langle \xi (t)\xi (t^{})\rangle \equiv 2T\mathrm{\Gamma }(t-t^{}).$$
(2)
$`\langle \cdots \rangle `$ denotes the average over all possible realizations of the stochastic variable $`\xi (t)`$. In fact, the simple relation between the dissipation kernel $`\mathrm{\Gamma }`$ and the strength $`I`$ of the random force $`\xi (t)`$ just stated is a manifestation of the fluctuation-dissipation theorem and (in the long time limit) is in accordance with the equipartition condition $`\langle p^2\rangle /(2M)\stackrel{!}{=}T/2`$.
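As a minimal numerical illustration (all parameter values below are arbitrary, with $`k_B=1`$): in the white-noise limit $`\mathrm{\Gamma }(t-t^{})=\gamma \delta (t-t^{})`$ the Langevin equation (1) reduces to $`M\ddot{x}+\gamma \dot{x}=\xi `$ with $`\langle \xi (t)\xi (t^{})\rangle =2T\gamma \delta (t-t^{})`$, and a short Euler-Maruyama run recovers the equipartition value $`T/2`$:

```python
import math
import random

random.seed(42)
M, gamma, T = 1.0, 0.5, 1.0          # mass, friction, temperature (k_B = 1)
dt, nsteps = 0.01, 400_000

p, p2_acc, nsamp = 0.0, 0.0, 0
for step in range(nsteps):
    # Euler-Maruyama step: dp = -(gamma/M) p dt + sqrt(2 T gamma dt) N(0,1)
    p += -(gamma / M) * p * dt + math.sqrt(2.0 * T * gamma * dt) * random.gauss(0.0, 1.0)
    if step >= nsteps // 2:          # sample only after the thermalization transient
        p2_acc += p * p
        nsamp += 1

kinetic = p2_acc / nsamp / (2.0 * M)
print("<p^2>/(2M) =", round(kinetic, 3), "  T/2 =", T / 2.0)
```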
For a further physical motivation let us return to quantum field theory and already point out some similarities. One of the major present topics in quantum field theory at finite temperature or near thermal equilibrium concerns the evolution and behavior of the long wavelength modes. These modes often lie entirely in the non-perturbative regime. Therefore solutions of the classical field equations in Minkowski space have been widely used in recent years to describe long-distance properties of quantum fields that require a non-perturbative analysis. A justification of the classical treatment of the long-distance dynamics of bosonic quantum fields at high temperature is based on the observation that the average thermal amplitude of low-momentum modes is large. For the low-momentum modes $`|\stackrel{}{p}|\ll T`$ (and for a weakly coupled quantum field theory) their (Bose) occupation number $`n_B`$ approaches the classical equipartition limit. The classical field equations should provide a good approximation for the dynamics of such highly occupied modes. However, a correct semi-classical treatment of the soft modes cannot simply neglect the hard, i.e. thermal, modes, but should incorporate their influence in a consistent way. In a recent paper it was shown how to construct an effective semi-classical action for describing not only the classical behavior of the long wavelength modes below some appropriate cutoff $`k_c`$, but taking into account also perturbatively the interaction among the soft and hard modes. By integrating out the ‘influence’ of the hard modes on the two-loop level (for standard $`\varphi ^4`$-theory) the emerging semi-classical equations of motion for the soft fields can be derived from an effective action and become stochastic equations of motion of generalized Langevin type, which resemble in their structure the expression (1). The hard modes act as an environmental heat bath.
They also guarantee that the soft modes become, on average, thermally populated with the same temperature as the heat bath. For the semi-classical regime, where $`|\stackrel{}{k}|\ll T`$, one finds for the ensemble average of the squared amplitude
$`{\displaystyle \frac{1}{V}}\langle |\varphi (\stackrel{}{p})|^2\rangle `$ $`\approx `$ $`{\displaystyle \frac{1}{E_p^2}}T\approx {\displaystyle \frac{1}{E_p}}n_B(E_p).`$ (3)
Such a Langevin description for the non-perturbative evolution of (super-)soft modes (on a scale $`|\stackrel{}{p}|\sim g^2T\ll T`$) in non Abelian gauge theories has recently been put forward. The understanding of the behavior of the soft modes is crucial e.g. for the issue of anomalous baryon number violation due to the diffusion of topological Chern-Simons charge in hot electroweak theory (see e.g. the references therein).
In analogy to the Langevin description stated above we want to sketch in the following (pedagogical) study the effect of the heat bath on the evolution of the system degrees of freedom by means of the ‘closed time path Green’s function’ (CTPGF) technique. For this we discuss a free scalar field theory interacting with a heat bath.
## 2 Stochastic Interpretation of KB equations
We start with the CTP action for a scalar bosonic field $`\varphi `$ coupled to an environmental heat bath of temperature $`T`$:
$`S`$ $`=`$ $`{\displaystyle \int d^4x\frac{1}{2}\left[\varphi ^+(-\mathrm{\Box }-m^2)\varphi ^+-\varphi ^{}(-\mathrm{\Box }-m^2)\varphi ^{}\right]}`$
$`-{\displaystyle \int d^4x_1d^4x_2\frac{1}{2}\left[\varphi ^+\mathrm{\Sigma }^{++}\varphi ^++\varphi ^+\mathrm{\Sigma }^{+-}\varphi ^{}+\varphi ^{}\mathrm{\Sigma }^{-+}\varphi ^++\varphi ^{}\mathrm{\Sigma }^{--}\varphi ^{}\right]}.`$
The system starts to evolve from some initial density matrix. The interaction between the system and the heat bath is specified by an interaction kernel involving a self energy operator $`\mathrm{\Sigma }`$, resulting effectively from integrating out the heat bath degrees of freedom. Schematically this is sketched in fig. 1. In (2) the self energy contribution from the heat bath is parametrized in the Keldysh notation by the four self energy parts, which can be expressed by means of the standard contributions $`\mathrm{\Sigma }^<`$ and $`\mathrm{\Sigma }^>`$. Clearly, this self energy operator is the only quantity which might drive the system towards equilibrium. If the heat bath is in equilibrium, the Kubo-Martin-Schwinger relation holds:
$$\overline{\mathrm{\Sigma }}^>(k)=e^{k_0/T}\overline{\mathrm{\Sigma }}^<(k).$$
(5)
(Here and in the following $`\overline{A}(k):=\overline{A}(k_0,\stackrel{}{k})`$ denotes the 4-dim. Fourier transform of a (stationary and translational invariant) quantity $`A(x_1,x_2)\equiv A(x=x_1-x_2)`$ with respect to the 4-dim. relative coordinate $`x`$.)
From (2) it is now straightforward to obtain the equations of motion for the characteristic two-point functions. For the retarded propagator one has
$$(-\mathrm{\Box }-m^2-\mathrm{\Sigma }^{\mathrm{ret}})D^{\mathrm{ret}}=\delta ,$$
(6)
where $`\mathrm{\Sigma }^{\mathrm{ret}}:=\mathrm{\Theta }(t_1-t_2)\left[\mathrm{\Sigma }^>-\mathrm{\Sigma }^<\right]`$. Additional dynamical information comes from the equation of motion of the propagator $`D^<`$
$$(-\mathrm{\Box }-m^2)D^<-\mathrm{\Sigma }^{\mathrm{ret}}D^<-\mathrm{\Sigma }^<D^{\mathrm{av}}=0.$$
(7)
This is just the famous KB equation. (6) and (7) determine the complete and causal (non-equilibrium) evolution for the two-point functions.
To get more physical insight into the (effective) action (2) and into the equations of motion we introduce the following real valued quantities:
$`s(x_1,x_2)`$ $`:=`$ $`{\displaystyle \frac{1}{2}}\mathrm{sgn}(t_1-t_2)\left(\mathrm{\Sigma }^>(x_1,x_2)-\mathrm{\Sigma }^<(x_1,x_2)\right)=s(x_2,x_1),`$ (8)
$`a(x_1,x_2)`$ $`:=`$ $`{\displaystyle \frac{1}{2}}\left(\mathrm{\Sigma }^>(x_1,x_2)-\mathrm{\Sigma }^<(x_1,x_2)\right)=-a(x_2,x_1),`$ (9)
$`I(x_1,x_2)`$ $`:=`$ $`-{\displaystyle \frac{1}{2i}}\left(\mathrm{\Sigma }^>(x_1,x_2)+\mathrm{\Sigma }^<(x_1,x_2)\right)=I(x_2,x_1).`$ (10)
Our notation for $`s`$ and $`a`$ serves as a reminder of the respective symmetry properties. It basically represents the standard decomposition into the real and imaginary parts of the Fourier transform of the retarded self energy operator $`\overline{\mathrm{\Sigma }}^{\mathrm{ret}}`$. $`s`$ yields a (dynamical) mass shift for the $`\varphi `$ modes caused by the interaction with the modes of the heat bath, while $`a`$ is responsible for the damping, i.e. dissipation, of the $`\varphi `$ fields. The important point will be that $`I`$ characterizes the fluctuations.
We first note that the CTP action (2) can be written as
$`S`$ $`=`$ $`{\displaystyle \int d^4x\frac{1}{2}\left[\varphi ^+(-\mathrm{\Box }-m^2)\varphi ^+-\varphi ^{}(-\mathrm{\Box }-m^2)\varphi ^{}\right]}`$
$`-{\displaystyle \int d^4x_1d^4x_2\frac{1}{2}\left[(\varphi ^+-\varphi ^{})(s+a)(\varphi ^++\varphi ^{})+i(\varphi ^+-\varphi ^{})I(\varphi ^+-\varphi ^{})\right]}.`$
This expression is identical to the so called influence functional given by Feynman and Vernon. To the exponential factor $`e^{iS}`$ in the path integral the $`(s+a)`$ term contributes a phase while the $`I`$ term causes an exponential damping and thus signals nonunitary evolution.
The two relevant equations of motion are stated as
$`(-\mathrm{\Box }-m^2-s-a)D^{\mathrm{ret}}`$ $`=`$ $`\delta ,`$ (12)
$`(-\mathrm{\Box }-m^2-s-a)D^<+(a+iI)D^{\mathrm{av}}`$ $`=`$ $`0.`$ (13)
We see that the last equation is the only one where $`I`$ occurs.
For the interpretation of $`s`$, $`a`$ and $`I`$ consider the long-time behavior of these equations. In this case we can assume that the system becomes translational invariant in time and space and the boundary terms are no longer important. For the spectral function one immediately finds
$$𝒜(k):=\frac{i}{2}[\overline{D}^{\mathrm{ret}}(k)-\overline{D}^{\mathrm{av}}(k)]=\frac{i\overline{a}(k)}{[k^2-m^2-\overline{s}(k)]^2+|\overline{a}(k)|^2}.$$
(14)
It becomes obvious that $`\overline{s}=\mathrm{Re}\overline{\mathrm{\Sigma }}^{\mathrm{ret}}`$ contributes an (energy dependent) mass shift while $`\overline{a}=i\mathrm{Im}\overline{\mathrm{\Sigma }}^{\mathrm{ret}}`$ causes the damping of propagating modes. $`\overline{a}`$ is related to the commonly used damping rate $`\overline{\mathrm{\Gamma }}`$ via
$$\overline{\mathrm{\Gamma }}(k)=i\frac{\overline{a}(k)}{k_0}.$$
(15)
For $`D^<`$ one finds in the long-time limit the relation
$`\overline{D}^<(k)`$ $`=`$ $`\overline{D}^{\mathrm{ret}}(k)\overline{\mathrm{\Sigma }}^<\overline{D}^{\mathrm{av}}(k)`$ (16)
$`=`$ $`\overline{D}^{\mathrm{ret}}(k)[-\overline{a}(k)-i\overline{I}(k)]\overline{D}^{\mathrm{av}}(k)\equiv -2in(k)𝒜(k),`$
where, by employing KMS condition (5),
$$n(k)=\frac{\overline{\mathrm{\Sigma }}^<(k)}{\overline{\mathrm{\Sigma }}^>(k)-\overline{\mathrm{\Sigma }}^<(k)}=\frac{1}{e^{k_0/T}-1}\equiv n_B(k_0),$$
(17)
which indeed shows that the phase space occupation number in the long-time limit becomes a Bose distribution with the temperature of the heat bath.
It is now very illuminating to explicitly write down the relation between $`\overline{a}(k)`$ and $`\overline{I}(k)`$ using the definitions (9) and (10):
$`\overline{I}(k)={\displaystyle \frac{\overline{\mathrm{\Sigma }}^>(k)+\overline{\mathrm{\Sigma }}^<(k)}{\overline{\mathrm{\Sigma }}^>(k)-\overline{\mathrm{\Sigma }}^<(k)}}i\overline{a}(k)=\mathrm{coth}\left({\displaystyle \frac{k_0}{2T}}\right)i\overline{a}(k).`$ (18)
In the high temperature (classical) limit $`(k_0\ll T)`$ one gets
$$\overline{I}(k)=\frac{T}{k_0}2i\overline{a}(k),$$
(19)
or, employing (15),
$$\overline{I}(k)=2T\overline{\mathrm{\Gamma }}(k).$$
(20)
Recalling the discussion of Brownian motion in the introduction, this compares favourably with (2). The physical meaning of $`I`$ as a ‘noise’ correlator will become obvious. The relation (18) thus represents the generalized fluctuation-dissipation relation from a microscopic point of view, via the definitions of $`\overline{I},\overline{a}`$ and $`\overline{\mathrm{\Gamma }}`$ in terms of the parts $`\overline{\mathrm{\Sigma }}^<`$ and $`\overline{\mathrm{\Sigma }}^>`$ of the self energy.
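A small numerical check of this limit (the sampled values of $`k_0/T`$ are arbitrary): the ratio of the quantum noise kernel (18) to its classical limit (20) is $`(k_0/2T)\mathrm{coth}(k_0/2T)`$, which tends to one for $`k_0\ll T`$ and grows for hard modes, reflecting the zero-point enhancement of the fluctuations:

```python
import math

def noise_ratio(x):
    """Quantum over classical noise kernel: (x/2) * coth(x/2), with x = k0/T."""
    return (x / 2.0) / math.tanh(x / 2.0)

xs = (0.01, 0.1, 1.0, 5.0, 20.0)
ratios = {x: noise_ratio(x) for x in xs}
for x in xs:
    print(f"k0/T = {x:5.2f}  ->  I_quantum / I_classical = {ratios[x]:.4f}")
```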
To see the connection to stochastic equations more closely, we decompose the influence action $`S`$ as given in (2) into its real and imaginary parts and write for the corresponding generating functional
$`Z[j^+,j^{}]`$ (21)
$`:=`$ $`{\displaystyle \int 𝒟[\varphi ^+,\varphi ^{}]\rho [\varphi ^+,\varphi ^{}]e^{iS[\varphi ^+,\varphi ^{}]+ij^+\varphi ^++ij^{}\varphi ^{}}}`$
$`=`$ $`{\displaystyle \int 𝒟[\varphi ^+,\varphi ^{}]\rho [\varphi ^+,\varphi ^{}]e^{i\mathrm{Re}S[\varphi ^+,\varphi ^{}]+ij^+\varphi ^++ij^{}\varphi ^{}-\frac{1}{2}(\varphi ^+-\varphi ^{})I(\varphi ^+-\varphi ^{})}}`$
$`=`$ $`{\displaystyle \frac{1}{\stackrel{~}{N}}}{\displaystyle \int 𝒟[\xi ]e^{-\frac{1}{2}\xi I^{-1}\xi }\int 𝒟[\varphi ^+,\varphi ^{}]\rho [\varphi ^+,\varphi ^{}]e^{i\mathrm{Re}S[\varphi ^+,\varphi ^{}]+ij^+\varphi ^++ij^{}\varphi ^{}+i\xi (\varphi ^+-\varphi ^{})}}`$
$`=`$ $`{\displaystyle \frac{1}{\stackrel{~}{N}}}{\displaystyle \int 𝒟[\xi ]e^{-\frac{1}{2}\xi I^{-1}\xi }Z^{}[j^++\xi ,j^{}-\xi ]}\equiv \langle Z^{}[j^++\xi ,j^{}-\xi ]\rangle `$
with $`\stackrel{~}{N}:=\int 𝒟\xi e^{-\frac{1}{2}\xi I^{-1}\xi }`$. The action entering the definition of $`Z^{}`$ is no longer $`S`$, but only the real part of the influence action (2). The generating functional $`Z[j^+,j^{}]`$ can thus be interpreted as a new stochastic generating functional $`Z^{}[j^++\xi ,j^{}-\xi ]`$ averaged over a random Gaussian (noise) field $`\xi `$ with the width function $`I`$, i.e.
$$\langle 𝒪\rangle :=\frac{1}{\stackrel{~}{N}}\int 𝒟\xi 𝒪e^{-\frac{1}{2}\xi I^{-1}\xi }.$$
(22)
From the last definition we find that the (ensemble) average over the noise field vanishes, i.e. $`\langle \xi \rangle =0`$, while the noise correlator is given by
$$\langle \xi \xi \rangle =I.$$
(23)
From the stochastic functional $`Z^{}`$ a Langevin equation for a classical $`\varphi `$ field can now readily be derived. Noting that the fields $`\varphi ^+_\xi `$ on the upper branch and $`\varphi ^{}_\xi `$ on the lower branch are equal (and denoted as $`\varphi _\xi `$ in the following), its equation of motion derived from $`Z^{}`$ takes the form
$$(-\mathrm{\Box }-m^2-s)\varphi _\xi -a\varphi _\xi =\xi .$$
(24)
This, indeed, represents a standard Langevin equation. The spatial Fourier transform of the Langevin equation (24) then takes the form
$$\ddot{\varphi }_\xi (\stackrel{}{k},t)+(m^2+\stackrel{}{k}^2-2\mathrm{\Gamma }(\stackrel{}{k},\mathrm{\Delta }t=0))\varphi _\xi +2\int _{-\mathrm{\infty }}^{t}dt^{}\mathrm{\Gamma }(\stackrel{}{k},t-t^{})\dot{\varphi }_\xi (\stackrel{}{k},t^{})=\xi (\stackrel{}{k},t).$$
(25)
The analogy between this Langevin equation (25) and the one for a single classical oscillator is obvious. The important difference, however, is the fact that the corresponding relations (18) and (2) between the respective noise kernel $`I`$ and friction kernel $`\mathrm{\Gamma }`$ only agree in the high temperature limit.
One can further ask to what extent the classical equations of motion (24) together with (23) are an approximation for the full quantum problem given by the equation of motion (13) for $`D^<`$. Inverting (24) one finds for the correlation function in the long-time limit
$`i\langle \varphi ^+_\xi \varphi ^{}_\xi \rangle =iD^{\mathrm{ret}}\langle \xi \xi \rangle D^{\mathrm{av}}=iD^{\mathrm{ret}}ID^{\mathrm{av}}.`$ (26)
Note that (26) is indeed the relation (3) advocated in the introduction to hold in the (semi-)classical regime. This one has to compare with the full quantum correlation function $`D^<`$ of (16). One thus has that $`(-\overline{a}-i\overline{I})`$ is approximated by $`-i\overline{I}`$. Of course this is justified if $`|\overline{a}|\ll \overline{I}`$ holds. Using the microscopic quantum version (18) of the fluctuation-dissipation theorem this is equivalent to $`\mathrm{coth}\left(\frac{k_0}{2T}\right)\gg 1`$. Thus in the high temperature limit or – turning the argument around – for low frequency modes, i.e. for $`k_0\ll T`$, the classical solution yields a good approximation to the full quantum case. To be more precise: In simulations one has to solve the classical Langevin equation (24) and calculate $`n`$-point functions by averaging over the random sources.
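The last remark can be illustrated concretely. As a simplifying assumption made here for brevity (a Markovian version of Eq. (25), with the memory friction replaced by a constant $`\eta `$ and white noise of strength $`2\eta T`$, rather than the full memory kernel of the text), a single classical mode of energy $`E`$ should thermalize to $`\langle \varphi ^2\rangle =T/E^2`$, reproducing relation (3):

```python
import math
import random

random.seed(3)
E, eta, T = 1.0, 0.5, 1.0            # mode energy, friction, temperature (k_B = 1)
dt, nsteps = 0.01, 400_000

phi, mom = 0.0, 0.0
phi2_acc, nsamp = 0.0, 0
for step in range(nsteps):
    # white-noise force with <xi(t) xi(t')> = 2 eta T delta(t - t')
    xi = math.sqrt(2.0 * eta * T / dt) * random.gauss(0.0, 1.0)
    mom += dt * (-(E * E) * phi - eta * mom + xi)
    phi += dt * mom                  # semi-implicit update keeps the oscillator stable
    if step >= nsteps // 2:          # average only over the thermalized part
        phi2_acc += phi * phi
        nsamp += 1

phi2 = phi2_acc / nsamp
print("<phi^2> =", round(phi2, 3), "  T/E^2 =", T / (E * E))
```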
One can also write down the equations of motion for the quantum two-point functions with external noise by introducing ’noisy’ two-point propagators. Averaging the equation of motion over the noise fields according to (22) one indeed rederives the KB equation. This demonstrates that the KB equation can be interpreted as an ensemble average over fluctuating fields which are subject to noise, the latter being correlated by the sum of self energies $`\mathrm{\Sigma }^<`$ and $`\mathrm{\Sigma }^>`$, i.e. from a transport theoretical point of view the sum of production and annihilation rate. We want to note once more that the ‘noisy’ or fluctuating part denoted by $`I`$ inherent to the structure of the KB equation (7) guarantees that the modes or particles become correctly (thermally) populated, as can be realized by inspecting (16).
We close our discussion by noting that one can also derive a standard kinetic transport equation for the (semi-classical) phase-space distribution $`f(\stackrel{}{x},\stackrel{}{k},t)`$ including fluctuations. The derived kinetic transport process has the structure of the phenomenologically inspired Boltzmann-Langevin equation. Our approach has to be considered as a clear derivation from first principles. Indeed it shows (nearly) a one-to-one correspondence to the phenomenologically introduced scheme. However, some severe difficulties in the interpretation of the fluctuating phase-space density remain. We refer the interested reader to our more detailed discussion.
## 3 Some further conclusions
In our discussion we have elucidated the stochastic aspects inherent to the (non-)equilibrium quantum transport equations. We have isolated a term, denoted by I, which solely characterizes the (thermal and quantum) fluctuations inherent to the underlying transport process. By introducing a stochastic generating functional, the emerging stochastic equations of motion can then be seen as generalized (quantum) Langevin processes. What changes if we replace our toy model of a free system coupled to an external heat bath by a self-coupled and thus nonlinear closed system? In an interacting field theory of a closed system the KB equations formally have exactly the same structure as in our toy model. The important difference, however, is that the self energy operator is now described fully (within the appropriate approximation scheme) by the system variables, i.e. it is expressed as a convolution of various two-point functions. Hence an underlying simple stochastic process, as in our case an external Gaussian process, cannot really be extracted. However, we emphasize that the emerging structure of the KB equations is identical. The decomposition of the self energy operator into its three physical parts (mass shift s, damping a, and fluctuation kernel I) can immediately be taken over. Hence these three parts keep their clear physical meaning also for a nonlinear closed system.
One can also nicely demonstrate how so called pinch singularities are regulated within the non-perturbative context of the thermalization process. These singularities do (and have to) appear in the perturbative evaluation of higher order diagrams within the CTP description of non-equilibrium quantum field theory. They are simply connected to the standard divergence in elementary scattering theory. The occurrence of pinch singularities signals the occurrence of (on-shell) damping or dissipation. This necessitates describing the evolution of the system by means of non-perturbative transport equations.
As a further application (discussed in the talk) of the direct use of semi-classical Langevin equations we want to mention the recent work in : By applying a microscopically motivated Langevin description of the linear sigma model, one can investigate the stochastic evolution of a so called disoriented chiral condensate (DCC) in a rapidly expanding system, expected to occur in ultrarelativistic heavy ion collisions. Within such an approach one finds that an experimentally feasible DCC, if it does exist in nature, has to be a rare event, but still occurring with some finite and nonvanishing probability. The statistical distribution of the final emitted pion number out of domains shows a striking non-Poissonian and nontrivial behaviour. One should indeed interpret those particular, rarely occurring events as semi-classical ‘pion bursts’ similar to the mysterious Centauro candidates. A further analysis of this unusual distribution by means of the cumulant expansion shows that the reduced higher order factorial cumulants exhibit an abnormal, exponentially increasing tendency and thus serve as a new and powerful signature. The occurrence of a rapid chiral phase transition (and thus of DCCs) might then probably only be identified experimentally by inspecting higher order factorial cumulants $`\theta _m`$ ($`m\ge 3`$) for the measured distributions of low momentum pions.
## References |
no-problem/9912/chao-dyn9912037.html | ar5iv | text | # Large Global Coupled Maps with Multiple Attractors
## I Introduction
The emergence of non-trivial collective behaviour in multidimensional systems has been analyzed in recent years by many authors Kaneko (1991) Shimada (1998) Dominguez and Cerdeira (1995). An important class of such systems are those with global interactions.
A basic model extensively analyzed by Kaneko is a one-dimensional array of $`N`$ elements:
$$X_{t+1}(k)=(1-ϵ)f(X_t(k))+\frac{ϵ}{N}\sum _{l=1}^{N}f(X_t(l))$$
(1)
where $`k=1,\mathrm{},N`$ is an index identifying the elements of the array, $`t`$ a discrete time variable, $`ϵ`$ is the coupling parameter and $`f(x)`$ describes the local dynamics, taken by Kaneko to be the logistic map. In this work, we consider $`f(x)`$ to be a cubic map given by:
$$f(x)=(1-a)x+ax^3$$
(2)
where $`a\in [0,4]`$ is a control parameter and $`x\in [-1,1]`$. The dynamics of this map has been extensively studied by Testa et al. Testa and Held (1983), and applications arise in artificial neural networks, where the cubic map is used as the local dynamics to model associative memory systems. Ishii et al. Ishii and Sato (1998) proposed a GCM model for such a system, improving on Hopfield’s model.
The subharmonic cascade shown in fig-1 proves the coexistence of two stable attractors of equal basin volume. The latter holds even when the GCM given by Eq.1 has $`ϵ>0`$. Jánosi et al. Jánosi and Gallas (1999) studied a globally coupled multiattractor quartic map with basins of attraction of different volumes, obtained as a simple second iterate of the map proposed by Kaneko, focusing their analysis on the control parameter of the local dynamics. They showed that for these systems the mean field dynamics is controlled by the number of elements in the initial partition of each basin of attraction. This behaviour is also present in the map used in this work.
In order to study the coherent-ordered phase transition of Kaneko’s GCM model, Cerdeira et al. Xie and Cerdeira (1996) analyzed the mechanism of the on-off intermittency appearing at the onset of this transition. Since the cubic map is characterized by dynamics with multiple attractors, the first step in determining the differences with the well known quadratic map studied by Kaneko is to obtain the phase diagram of Eq.1 and to study the coherent-ordered dynamical transition for a fixed value of the control parameter $`a`$. The latter is done near an internal crisis of the cubic map, as a function of the number of elements $`N_1`$ with initial conditions in one basin and of the values of the coupling parameter $`ϵ`$, setting $`N`$ equal to 256. After that, the existence of an inverse period doubling bifurcation as a function of $`ϵ`$ and $`N_1`$ is analyzed.
## II Phase Diagram
The dynamics breaks the phase space into sets formed by synchronized elements, which are called clusters. This happens even though there are identical interactions between identical elements. The system is labeled a 1-cluster, 2-cluster, etc. state if the $`X_i`$ values fall into one, two or more sets of synchronized elements of the phase space. Two different elements $`i`$ and $`j`$ belong to the same cluster within a precision $`\delta `$ (we consider $`\delta =10^{-6}`$) only if
$$\left|X_i(t)-X_j(t)\right|\le \delta $$
(3)
Thus the system of Eq.1 shows the existence of different phases with clustering (coherent, ordered, partially ordered, turbulent). These phenomena appearing in GCM were studied by Kaneko for coupled logistic maps as the control and coupling parameters vary. A rough phase diagram for an array of 256 elements is determined from the number of clusters calculated for 500 random sets of initial conditions within the precision specified above. This diagram, displayed in fig-2, was obtained following the criteria established by this author. Therefore, the $`K`$ number of clusters and the number of elements that build them are relevant magnitudes to characterize the system behaviour.
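A minimal sketch of the model of Eq. (1) with the cubic local map (2) and the cluster criterion of Eq. (3). The parameter choices are illustrative assumptions: $`ϵ=0.95`$ is simply a coupling strong enough to enforce coherence, and the uncoupled comparison run uses $`a=4`$, where the local map is unambiguously chaotic (at $`a=3.34`$ a similar turbulent outcome is expected, but $`a=4`$ makes the check clear-cut); clusters are counted by chaining elements closer than $`\delta =10^{-6}`$:

```python
import random

def f(x, a):
    """Cubic local map f(x) = (1 - a) x + a x^3; reflect tiny float overshoots."""
    y = (1.0 - a) * x + a * x * x * x
    if y > 1.0:        # guard against rounding drift just outside [-1, 1]
        y = 2.0 - y
    elif y < -1.0:
        y = -2.0 - y
    return y

def gcm_step(X, a, eps):
    """One step of the globally coupled map, Eq. (1)."""
    fx = [f(x, a) for x in X]
    mean = sum(fx) / len(fx)
    return [(1.0 - eps) * v + eps * mean for v in fx]

def count_clusters(X, delta=1e-6):
    """Clusters counted as chains of elements closer than delta (Eq. 3)."""
    xs = sorted(X)
    return 1 + sum(1 for u, v in zip(xs, xs[1:]) if v - u > delta)

random.seed(0)
N = 64
X0 = [random.uniform(-1.0, 1.0) for _ in range(N)]

X = X0[:]                    # strong coupling at a = 3.34: coherent (one cluster)
for _ in range(2000):
    X = gcm_step(X, a=3.34, eps=0.95)
coherent_clusters = count_clusters(X)

Y = X0[:]                    # zero coupling at a = 4: fully chaotic, no clustering
for _ in range(2000):
    Y = gcm_step(Y, a=4.0, eps=0.0)
turbulent_clusters = count_clusters(Y)

print("clusters: eps=0.95 ->", coherent_clusters, "  eps=0 ->", turbulent_clusters)
```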
## III Phase Transition
In order to study phase transition, the two greatest Lyapunov exponents are shown in fig-4 and fig-5. They are depicted for a=3.34 as a function of $`ϵ`$ and for three different values of initial elements $`N_1`$.
In the coherent phase, as $`ϵ`$ decreases, the maximum Lyapunov exponent changes steeply from a positive to a negative value when the two-cluster state is reached. A sudden change of the attractor in phase space occurs at a critical value $`ϵ_c`$ of the coupling parameter in the transition from the two- to the one-cluster state. Moreover, in the same transition, at the same $`ϵ_c`$, a metastable transient from the two-cluster to the one-cluster chaotic state is observed, due to the existence of an unstable orbit inside the chaotic basin of attraction, as shown in fig-3. The characteristic time $`T_t`$ during which the system is trapped in the metastable transient is depicted in Fig-6, for values of $`ϵ`$ near and above $`ϵ_c`$.
For a given set of initial conditions, it is possible to fit this transient as:
$$T_t=(\frac{ϵ-ϵ_c}{ϵ_c})^{-\beta }$$
(4)
This fitting exponent $`\beta `$, depends upon the number of elements with initial conditions in each basin as is shown in the next table for three $`N_1`$ values and setting $`N=256`$.
| $`N_1`$ | $`ϵ_c`$ | $`\beta `$ |
| --- | --- | --- |
| 128 | 0.471829 | 0.792734 |
| 95 | 0.3697115 | 0.606751 |
| 64 | 0.3198161 | 0.519833 |
It is worth noting from the table that $`\beta `$ increases with $`N_1`$ up to $`N/2`$ and decreases thereafter, as follows from the symmetry between the two basins.
## IV Inverse Period Doubling Bifurcation
In order to analyze the existence of period doubling bifurcations, the maximum Lyapunov exponent $`\lambda _{max}`$ is calculated as a function of $`N_1`$ and $`ϵ`$. For each $`N_1`$, critical values of the coupling parameter, called $`ϵ_{bif}`$, are observed when a negative $`\lambda _{max}`$ reaches zero without changing sign. This behaviour is related to inverse period doubling bifurcations of the GCM. Fitting all these critical pairs of values $`(ϵ_{bif},N_1)`$, a rough $`N_1`$ vs $`ϵ_{bif}`$ graph is shown in fig-7, and different curves appear as boundaries of the regions of parameter space where the system displays period-$`2^n`$ ($`n=1,2,3`$) states. This is obtained without taking into account the number of final clusters. It is clear that greater values of $`N_1`$ correspond to smaller values of $`ϵ_{bif}`$ for the occurrence of the bifurcation. Evidence of period 16 appears for values of $`N_1`$ smaller than 30. In fig-7, T=2 (symmetric) denotes a period-two orbit with clusters oscillating with equal amplitude around zero, while T=2 (asymmetric) denotes a period-two orbit with clusters oscillating with different amplitudes.
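The behaviour of $`\lambda _{max}`$ can be illustrated on the single (uncoupled) cubic map with the standard orbit average of $`\mathrm{log}|f^{}(x)|`$. The two parameter values below are chosen as verifiable check points rather than taken from the text: at $`a=4`$ the map reduces to $`4x^3-3x`$, conjugate to $`\mathrm{cos}\theta \mathrm{cos}3\theta `$, so $`\lambda =\mathrm{ln}3`$ exactly, while at $`a=2.8`$ the attractor is a stable period-two orbit with $`\lambda =\mathrm{ln}|2a-5|<0`$:

```python
import math

def cubic_step(x, a):
    """One iterate of f(x) = (1 - a) x + a x^3, with a reflection guard
    against floating-point overshoots just outside [-1, 1]."""
    y = (1.0 - a) * x + a * x * x * x
    if y > 1.0:
        y = 2.0 - y
    elif y < -1.0:
        y = -2.0 - y
    return y

def lyapunov_cubic(a, x0=0.123, n_transient=1000, n_iter=200_000):
    """Largest Lyapunov exponent as the orbit average of log|f'(x)|."""
    x = x0
    for _ in range(n_transient):
        x = cubic_step(x, a)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs((1.0 - a) + 3.0 * a * x * x))
        x = cubic_step(x, a)
    return acc / n_iter

lam_chaotic = lyapunov_cubic(4.0)   # f = 4x^3 - 3x, Chebyshev-like: lambda = ln 3
lam_periodic = lyapunov_cubic(2.8)  # stable period-2 orbit: lambda = ln 0.6 < 0
print("lambda(a=4.0) =", round(lam_chaotic, 3), "  ln 3 =", round(math.log(3.0), 3))
print("lambda(a=2.8) =", round(lam_periodic, 3))
```

The same orbit average, applied to the tangent dynamics of the full coupled array, is what produces the $`\lambda _{max}`$ curves discussed above.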
## V Concluding Remarks
The study of systems with coexisting multiple attractors gives a much richer dynamics, and a new control parameter must necessarily be added. Although the dimensionality of the parameter space is increased by one, the dynamics is rather simple to characterize. Some of the relevant aspects of this kind of system are shown in this work. The phase diagram that was obtained shows the existence of phases similar to those found using the quadratic and quartic maps; this behaviour suggests some kind of universality in the dynamics of GCM. Another interesting issue concerns the metastable transition from the two- to the one-cluster state, together with a sudden jump in the maximum Lyapunov exponent, as was displayed in fig.7. The characteristic time given by Eq.4 also corresponds to the above transition, where the critical exponent $`\beta `$ and the critical coupling parameter $`ϵ`$ show a strong dependence on the number of initial elements in each basin. An inverse bifurcation cascade appears when the system is in a state of two or more clusters, with $`ϵ`$ and $`N_1`$ the critical parameters of the bifurcation, at which the maximum Lyapunov exponent equals zero.
## VI Acknowledgments
This work is partially supported by CONICET (grant PIP 4210). MFC and LR also express their acknowledgment to the ICTP where the initial discussion of the work was performed. |
no-problem/9912/astro-ph9912128.html | ar5iv | text | # Deprojection of light distributions of nearby systems: perspective effect and non-uniqueness
## 1 Introduction
Deprojection of galaxies from the observed light distribution on the sky plane to the intrinsic 3-dimensional volume luminosity distribution is one of the basic problems of astronomy. It is common knowledge that the deprojected results are generally non-unique because of the freedom of distributing stars along any line of sight. The best example is that a round distribution in projection may correspond to any intrinsically prolate object pointing towards us. Likewise we cannot tell from observed elliptical isophotes whether the object is an intrinsically oblate bulge or a triaxial bar if both are edge-on and at infinity (Contopoulos 1956).
The perspective effect of nearby triaxial objects, however, does make them appear different from an oblate object. The best example for this is the famous left-to-right asymmetry of the dereddened light distribution of the Milky Way bulge when plotted in the Galactic $`(l,b)`$ coordinates, which led Blitz & Spergel (1991) to conclude that the Galactic bulge is in fact an almost edge-on bar, pointing at an angle from the Sun. They divide the Galaxy into a left and a right part with the $`l=0^o`$ plane, which passes the Sun-center line and the rotation axis of the Galaxy. When folded along the $`l=0^o`$ line the surface brightness map $`I(l,b)`$ is decomposed into two independent maps: an asymmetry map $`\left[I(l,b)-I(-l,b)\right]/2`$ by subtracting the $`l<0^o`$ side from the $`l>0^o`$ side, and a symmetric map $`\left[I(l,b)+I(-l,b)\right]/2`$ by adding up the two sides. The signal in the asymmetry map, they explain, is because the right hand side ($`l>0^o`$) of the bar is nearer to us and the perspective effect makes it appear slightly bigger than the left hand side ($`l<0^o`$). A simple sketch of the geometry is shown in the top diagram of Fig. 1.
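A toy Monte-Carlo version of the Blitz & Spergel construction can make this concrete. The bar model (a uniform stick of stars with Gaussian transverse spread), its dimensions and the observer distance are invented for illustration, not the Galactic values: a stick inclined to the observer-centre line produces a clearly non-zero asymmetry map, while the same stick viewed end-on gives nothing beyond sampling noise:

```python
import math
import random

random.seed(5)

def longitude_histogram(phi_deg, n_stars=200_000, R0=8.0, half_len=3.0,
                        width=0.4, lmax=30.0, nbins=20):
    """Counts in longitude l for a straight bar of stars centred at distance
    R0 from the observer and rotated by phi_deg from the observer-centre line."""
    phi = math.radians(phi_deg)
    counts = [0] * (2 * nbins)
    binw = lmax / nbins
    for _ in range(n_stars):
        s = random.uniform(-half_len, half_len)      # position along the bar axis
        w = random.gauss(0.0, width)                 # transverse spread of the bar
        x = R0 + s * math.cos(phi) - w * math.sin(phi)   # toward the centre
        y = s * math.sin(phi) + w * math.cos(phi)        # in-plane, perpendicular
        l = math.degrees(math.atan2(y, x))
        if -lmax <= l < lmax:
            counts[int((l + lmax) / binw)] += 1
    return counts

def asymmetry(counts):
    """Fraction of the counts in the l -> -l antisymmetric part of the map."""
    n = len(counts) // 2
    num = sum(abs(counts[n + i] - counts[n - 1 - i]) for i in range(n))
    return num / max(1, sum(counts))

asym_tilted = asymmetry(longitude_histogram(25.0))  # bar at 25 deg to the sight line
asym_end_on = asymmetry(longitude_histogram(0.0))   # end-on bar: symmetric in l
print("asymmetry fraction: tilted =", round(asym_tilted, 3),
      "  end-on =", round(asym_end_on, 3))
```

The near end of the inclined stick subtends a larger range of longitudes than the far end, which is exactly the perspective signature described in the text.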
The perspective effect allows Binney & Gerhard (1996) and Binney, Gerhard & Spergel (1997) to derive a non-parametric volume density distribution of the inner Galaxy. The key element in their method is to impose mirror symmetry for the bulge part of the Galaxy so as to allow for a triaxial bar, and central symmetry for the disk part so as to allow for the spiral arms. Although shifting material along the same line of sight does not alter the isophotes, it spoils the symmetry of the object. Their numerical experiments suggest that the COBE map is consistent with a range of bar models with the bar major axis pointing some $`15^o`$ to $`35^o`$ from the Sun.
Unfortunately there is a lack of analytical studies of general non-uniqueness for nearby objects. This contrasts with a series of papers exploiting the analytical properties of konuses in the deprojection of external axisymmetric systems (Rybicki 1986, Palmer 1994, Kochanek & Rybicki 1996, van den Bosch 1997, Romanowsky & Kochanek 1997). A small amount of konus density, as konuses were christened by Gerhard & Binney (1996) for a well-studied class of artificial density models with zero surface brightness, can be added to a galaxy at infinity without perturbing its isophotes or creating negative-density zones.
This paper is a first attempt to give an analytical description of the degeneracy in deprojecting the light distribution of a general nearby system. To ensure our arguments are not diluted by the necessary mathematics, we will first introduce the so-called phantom spheroidal (PS) density in §2 and describe its effect on non-uniqueness in words and illustrations. The more mathematical aspects of the problem are given in §3-§5, where we describe the properties of PS densities and how to generate them. We will then briefly discuss the relations to the Milky Way bar in §6, and relations with well-known non-uniqueness of extragalactic objects in §7. We conclude in §8. Some additional results on the major axis angle and a generalization of the phantom density are given in the Appendix.
## 2 Phantom Spheroid: a compromise between mirror symmetry and perspective effect
A phantom density is a model with both positive and negative density regions and a zero net surface brightness to an observer some distance away. An example of a phantom density is illustrated in the middle diagram of Fig. 1, where we tailor the radial profile of a stratified ball-shaped bulge in such a way as to have the same angular size and projected intensity as a uniform cigar-shaped bar placed end-on. Subtracting the ball-shaped bulge from the cigar-shaped bar yields a Phantom Spheroid (PS). The thin dark ring in the bulge is a density peak, corresponding to the line of sight to the far edge of the cigar-shaped bar, the direction where the depth and the projected intensity of the bar are at maximum. The fall-off of density towards the edge of the bulge corresponds to the ever-decreasing depth of the cigar-shaped bar with increasing impact parameter of the line of sight.
The problem with this simple cigar model is that if we superimpose this unphysical component on any physical density distribution, say, an ellipsoidal bar with a Gaussian radial profile, we get a new density with generally twisted density patterns. So while the added cigar component is invisible from the observer’s perspective, it generally spoils the mirror symmetry of the system.
The exception is, as shown in the lower diagram of Fig. 1, when the superimposed end-on cigar is a clone of the original cigar, in which case the final density of the twin should have mirror symmetry with respect to a new plane (the dotted line) which is just in between the Sun-center line and the major axis of the original cigar. In this case the mirror symmetry is preserved, only the symmetry plane is rotated. Subtracting the round bulge has no further effect on the symmetry plane, but will take away any trace of transformation in the map of the integrated light. Thus a PS which rotates the symmetry axes and is invisible in projection can be obtained by placing the original prolate cigar end-on and then subtracting off a round bulge.
We can apply the same trick to oblate systems, such as disks, because a face-on disk mimics a spherical distribution the same way a prolate cigar bar does. So suppose the original model is a disk tilted at an angle $`\alpha `$ from the line of sight. Adding a face-on clone of the disk will put the mirror plane of the twin disks at an angle $`\frac{\alpha }{2}`$. By subtracting off a spherical model we can take away any effect of the face-on clone in the projected brightness map. In fact this kind of non-uniqueness applies to any spheroidal (i.e. prolate or oblate) distribution with any radial profile, since our arguments about non-uniqueness are independent of the aspect ratio and density profile of the bar. For example, the original model may consist of a small spheroidal perturbation on top of a large positive spherical component, both with general smooth radial profiles.
Despite these generalizations, the models are restricted in the sense that they come only as twins with identical projected brightness distributions. This contrasts with the non-uniqueness of extragalactic systems, where konuses yield a whole family of models with indistinguishable surface brightness maps and a tunable amount of konus density. The models here are also likely to have large zones of negative density, because we subtract off a significant amount of matter with a spherical distribution. This problem can be softened if the original model has a large positive smooth spherical component to start with.
## 3 Method for generating general Phantom Spheroids
Let us first set up a rectangular coordinate system $`(x,y,z)`$ centered on the object with the $`x`$ axis pointing towards the observer. In this coordinate system the observer is at $`(D_0,0,0)`$, where $`D_0`$ is the observer's distance to the object center. From the observer's perspective the surface brightness map of the system is most conveniently specified by two angles $`(l^{\prime },b^{\prime })`$, the equivalent of the Galactic coordinate system, with the $`b^{\prime }=z=0`$ plane being the $`xy`$ plane and the $`l^{\prime }=y=0`$ plane passing through the $`z`$-axis of the system. In these coordinates the system center and anti-center are in the directions $`(l^{\prime },b^{\prime })=(0,0)`$ and $`(\pi ,0)`$. A point $`(x,y,z)`$ in rectangular coordinates can thus be specified by the distance $`D`$ to the observer along the line of sight $`(l^{\prime },b^{\prime })`$ with
$$x=D_0-D\mathrm{cos}b^{\prime }\mathrm{cos}l^{\prime },y=D\mathrm{cos}b^{\prime }\mathrm{sin}l^{\prime },z=D\mathrm{sin}b^{\prime }.$$
(1)
The impact parameter $`w`$ with respect to the object center is given by
$$w=D_0\sqrt{\left(\mathrm{sin}b^{\prime }\right)^2+\left(\mathrm{cos}b^{\prime }\mathrm{sin}l^{\prime }\right)^2}$$
(2)
for the line of sight $`(l^{},b^{})`$. The distance $`r`$ to the center of the object at $`(0,0,0)`$ is given by
$$r=\sqrt{x^2+y^2+z^2}=\sqrt{\mathrm{\Delta }^2+w^2},\mathrm{\Delta }=D-\sqrt{D_0^2-w^2},$$
(3)
where $`\mathrm{\Delta }`$ is the offset distance from the tangent point. The rectangular coordinates can also be expressed in terms of $`w`$ and $`D`$ with
$$x=D_0-D\sqrt{1-\frac{w^2}{D_0^2}},\sqrt{y^2+z^2}=\frac{D}{D_0}w.$$
(4)
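As a quick numerical sanity check of eqs. (1)-(4), the sketch below (plain Python; the helper name and the sightline values are our own, not from the paper) picks an arbitrary line of sight and verifies that the direct distance $`\sqrt{x^2+y^2+z^2}`$ agrees with the tangent-point form $`\sqrt{\mathrm{\Delta }^2+w^2}`$:

```python
import math

def los_point(D, lp, bp, D0):
    """Rectangular coordinates of a point at distance D along (l', b'), eq. (1)."""
    x = D0 - D * math.cos(bp) * math.cos(lp)
    y = D * math.cos(bp) * math.sin(lp)
    z = D * math.sin(bp)
    return x, y, z

D0, D = 4.0, 2.5                                  # arbitrary test values
lp, bp = math.radians(20.0), math.radians(10.0)
x, y, z = los_point(D, lp, bp, D0)

w = D0 * math.sqrt(math.sin(bp)**2 + (math.cos(bp)*math.sin(lp))**2)  # eq. (2)
delta = D - math.sqrt(D0**2 - w**2)                                   # eq. (3)
r_direct = math.sqrt(x*x + y*y + z*z)
r_geom = math.sqrt(delta*delta + w*w)
print(r_direct, r_geom)   # the two expressions for r agree
```

The agreement holds for any sightline with $`|l^{\prime }|,|b^{\prime }|<90^o`$, since then $`\sqrt{D_0^2-w^2}=D_0\mathrm{cos}b^{\prime }\mathrm{cos}l^{\prime }>0`$.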
Most generally our phantom spheroid $`F(x,y,z)`$ is made by subtracting from an end-on prolate distribution $`P(|x|,\sqrt{y^2+z^2})`$ a spherical bulge $`S(r)`$ with matching surface brightness, i.e.,
$$\int_0^{\infty }F(x,y,z)dD=0,F(x,y,z)\equiv P(|x|,\sqrt{r^2-x^2}=\sqrt{y^2+z^2})-S(r),$$
(5)
where $`\int_0^{\infty }dD`$ is an integration along any line-of-sight direction, say, $`(l^{\prime },b^{\prime })`$. Clearly $`F(x,y,z)`$ is an even function of $`x`$, $`y`$ and $`z`$ with
$$F(x,y,z)=F(-x,y,z)=F(x,-y,z)=F(x,y,-z)=F(-x,-y,-z),$$
(6)
and in fact has rotational symmetry around the $`x`$-axis. The total luminosity of a phantom $`L_{PS}`$ is given by
$$L_{PS}\equiv \int d^3𝐫F(x,y,z)=L_P-L_S,L_P\equiv \int d^3𝐫P(|x|,\sqrt{y^2+z^2}),L_S\equiv \int d^3𝐫S(r).$$
(7)
As we will show later a phantom spheroid can have a nonzero net luminosity, because the total luminosity of the prolate component $`L_P`$ only approximately equals that of the spherical component $`L_S`$, with the difference vanishing in the limit of an infinitely distant observer.
Let $`J(w)`$ be the light intensity of the prolate distribution $`P(|x|,\sqrt{y^2+z^2})`$ integrated over a line of sight with impact parameter $`w`$, including both the forward direction and the backward direction, then
$$J(w)\equiv \int_{-\infty }^{\infty }P(|x|,\sqrt{y^2+z^2})dD,$$
(8)
where $`x,y,z`$ can be expressed in terms of $`w`$ and $`D`$ using eq. (3) and eq. (4). Once $`J(w)`$ is computed from the integration of $`P(|x|,\sqrt{y^2+z^2})`$, our task is to find a spherical bulge $`S(r)`$ such that
$$J(w)=\int_{-\infty }^{\infty }S(r)dD.$$
(9)
This can be done using the Abel transformation
$$S(r)=-\frac{1}{\pi }\int_r^{\infty }\frac{dJ(w)}{dw}\frac{dw}{\sqrt{w^2-r^2}}.$$
(10)
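A minimal numerical implementation of the Abel inversion of eq. (10) is sketched below (the helper name is ours, not from the paper). The substitution $`w=\sqrt{r^2+u^2}`$ turns $`dw/\sqrt{w^2-r^2}`$ into $`du/w`$, removing the integrable singularity at $`w=r`$; the routine is checked against the analytic pair $`J(w)=e^{-w^2}`$, $`S(r)=e^{-r^2}/\sqrt{\pi }`$:

```python
import numpy as np

def abel_invert(J_prime, r, umax=8.0, n=4000):
    """Eq. (10): S(r) = -(1/pi) int_r^inf J'(w) dw / sqrt(w^2 - r^2),
    evaluated with w = sqrt(r^2 + u^2) so the integrand stays regular."""
    u = np.linspace(0.0, umax, n)
    w = np.sqrt(r*r + u*u)
    integrand = -J_prime(w) / w          # dw / sqrt(w^2 - r^2) = du / w
    h = u[1] - u[0]
    return h * (integrand.sum() - 0.5*(integrand[0] + integrand[-1])) / np.pi

# Analytic test pair: J(w) = exp(-w^2) is the projection of S(r) = exp(-r^2)/sqrt(pi)
J_prime = lambda w: -2.0 * w * np.exp(-w*w)
r = 1.0
S_num = abel_invert(J_prime, r)
S_exact = np.exp(-r*r) / np.sqrt(np.pi)
print(S_num, S_exact)
```

Any smooth, sufficiently fast-decaying $`J(w)`$ can be fed to the same routine, which is how the spherical counterpart of an arbitrary prolate bar is obtained below.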
The inversion is effectively a variation of the well-known Eddington formula for deprojecting a spherical system (cf. Binney & Tremaine 1987). Thus we find a general expression for the phantom spheroid (PS); there is no restriction on the radial profile or the axis ratio of the prolate distribution $`P(|x|,\sqrt{y^2+z^2})`$, and in fact it is allowed to be oblate. Any PS can also lead to a family of PS densities, because a new PS can be generated by applying a linear operator to an old PS. For example, suppose $`F(x,y,z)`$ has a free parameter $`\beta `$; then $`\frac{\partial }{\partial \beta }F(x,y,z)`$ is also a PS (cf. Appendix B).
To reformulate the results in the previous section about the “twin bars” in mathematical terms, we define a new set of rectangular coordinates $`(x_\alpha ,y_\alpha ,z_\alpha )`$ which relate to the rectangular coordinates $`(x,y,z)`$ by a rotation around the $`z`$-axis (i.e., in the $`xy`$ plane) by an angle $`\alpha `$ with
$$(x_\alpha ,y_\alpha ,z_\alpha )=(x\mathrm{cos}\alpha +y\mathrm{sin}\alpha ,-x\mathrm{sin}\alpha +y\mathrm{cos}\alpha ,z).$$
(11)
The rotation transformation has the property that
$`x_\alpha ^2`$ $`=`$ $`x_{\alpha /2}^2\mathrm{cos}^2{\displaystyle \frac{\alpha }{2}}+y_{\alpha /2}^2\mathrm{sin}^2{\displaystyle \frac{\alpha }{2}}+2x_{\alpha /2}y_{\alpha /2}\mathrm{sin}{\displaystyle \frac{\alpha }{2}}\mathrm{cos}{\displaystyle \frac{\alpha }{2}},`$ (12)
$`x^2`$ $`=`$ $`x_{\alpha /2}^2\mathrm{cos}^2{\displaystyle \frac{\alpha }{2}}+y_{\alpha /2}^2\mathrm{sin}^2{\displaystyle \frac{\alpha }{2}}-2x_{\alpha /2}y_{\alpha /2}\mathrm{sin}{\displaystyle \frac{\alpha }{2}}\mathrm{cos}{\displaystyle \frac{\alpha }{2}}.`$
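Eq. (12) is just the statement that a rotation by $`\alpha `$ equals two successive rotations by $`\frac{\alpha }{2}`$; a few lines of Python (our own helper, with arbitrary test values) confirm both identities:

```python
import math

def rotate(x, y, a):
    """Coordinate rotation in the xy plane by angle a, as in eq. (11)."""
    return x*math.cos(a) + y*math.sin(a), -x*math.sin(a) + y*math.cos(a)

x, y, alpha = 0.7, -1.3, math.radians(50.0)   # arbitrary point and angle
xa, _ = rotate(x, y, alpha)                   # x_alpha
xh, yh = rotate(x, y, alpha/2.0)              # x_{alpha/2}, y_{alpha/2}
s, c = math.sin(alpha/2.0), math.cos(alpha/2.0)

lhs1, rhs1 = xa*xa, xh*xh*c*c + yh*yh*s*s + 2*xh*yh*s*c   # first line of eq. (12)
lhs2, rhs2 = x*x,   xh*xh*c*c + yh*yh*s*s - 2*xh*yh*s*c   # second line of eq. (12)
print(lhs1 - rhs1, lhs2 - rhs2)   # both differences vanish
```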
In these notations we let $`P(|x_\alpha |,\sqrt{r^2-x_\alpha ^2})`$ denote any general prolate bar, made by rotating its end-on twin bar $`P(|x|,\sqrt{r^2-x^2})`$ to an angle $`\alpha `$ from the line of sight in the $`xy`$ plane. Then we can always design a triaxial object
$$T(|x_{\alpha /2}|,|y_{\alpha /2}|,|z_{\alpha /2}|)\equiv P(|x_\alpha |,\sqrt{r^2-x_\alpha ^2})+P(|x|,\sqrt{r^2-x^2})-S(r),$$
(13)
where $`S(r)`$ is given by eqs. (8) and (10) so that the triaxial object $`T(|x_{\alpha /2}|,|y_{\alpha /2}|,|z_{\alpha /2}|)`$ has exactly the same surface brightness as the prolate bar $`P(|x_\alpha |,\sqrt{y_\alpha ^2+z_\alpha ^2})`$, i.e.,
$$\int_{-\infty }^{\infty }P(|x_\alpha |,\sqrt{y_\alpha ^2+z_\alpha ^2})dD=\int_{-\infty }^{\infty }T(|x_{\alpha /2}|,|y_{\alpha /2}|,|z_{\alpha /2}|)dD.$$
(14)
To verify the mirror symmetries of the distribution $`T(|x_{\alpha /2}|,|y_{\alpha /2}|,|z_{\alpha /2}|)`$, simply substitute eq. (12) into eq. (13) and apply the transformation $`x_{\alpha /2}\to -x_{\alpha /2}`$ or $`y_{\alpha /2}\to -y_{\alpha /2}`$. In both cases the two arguments $`|x_\alpha |`$ and $`|x|`$ are simply swapped. The mirror symmetry with respect to the $`z_{\alpha /2}=0`$ plane is obvious because $`T(|x_{\alpha /2}|,|y_{\alpha /2}|,|z_{\alpha /2}|)`$ is an even function of $`z_{\alpha /2}`$, which enters only through the factor $`r^2=x_{\alpha /2}^2+y_{\alpha /2}^2+z_{\alpha /2}^2`$. Note that we do not lose generality by choosing the rotation to be around the $`z`$-axis, since there is no preferred direction in the $`yz`$ plane of the prolate bar $`P(|x|,\sqrt{y^2+z^2})`$.
To summarize the main result here, we have shown that any prolate bar has a triaxial counterpart with identical surface brightness but half the viewing angle. An immediate question is to what extent we can build triaxial models with the viewing angle varying continuously.
## 4 A general sequence of triaxial models with identical surface brightness
In the following we will construct a subclass of these phantom spheroids which preserve the mirror symmetry for a continuous sequence of fairly general triaxial models. We do not know whether such constructions exist for all triaxial models, but they do exist if the triaxial density distribution $`\nu _0(x,y,z)`$ can be decomposed into a spherical part $`G(r)`$ and a non-spherical part $`p(r)Q_0(x,y,z)`$, i.e.,
$$\nu _0(x,y,z)=G(r)+p(r)Q_0(x,y,z),$$
(15)
where the only restriction is that $`Q_0(x,y,z)`$ is a quadratic function of rectangular coordinates, i.e.,
$$Q_0\equiv c_{11}x^2+c_{22}y^2+c_{33}z^2+2c_{12}xy+2c_{23}yz+2c_{31}zx.$$
(16)
There is complete freedom with the functional forms of $`G(r)`$ and $`r^2p(r)`$, which are the radial profiles of the spherical component and the non-spherical component respectively. These models are also triaxial by construction because surfaces of constant $`Q_0`$ are ellipsoidal isosurfaces. The symmetry planes of the ellipsoid can be oriented in any direction by changing the elements of the symmetric matrix $`c_{ij}`$, where $`c_{ij}=c_{ji}`$ and the indices $`i`$ and $`j`$ run from 1 to 3. The total luminosity of the model, $`L_0`$, is given by
$$L_0\equiv \int d^3𝐫\nu _0(x,y,z)=4\pi \int_0^{\infty }dr\,r^2G(r)+\frac{4\pi }{3}\sum_{i=1}^{3}c_{ii}\int_0^{\infty }dr\,r^4p(r).$$
(17)
We define $`I(l^{\prime },b^{\prime })`$ as the surface brightness integrated along both the $`(l^{\prime },b^{\prime })`$ direction and the opposite $`(\pi +l^{\prime },-b^{\prime })`$ direction. The central surface brightness, $`I_0`$, integrated along the $`x`$-axis (i.e. with $`x`$ from $`-\infty `$ to $`\infty `$ and $`y=z=0`$) is given by
$$I_0\equiv \int_{-\infty }^{\infty }\nu _0(x,0,0)dx=\int_{-\infty }^{\infty }\left[G(r)+c_{11}r^2p(r)\right]dr,r=x,$$
(18)
where $`r`$ and $`x`$ are interchangeable dummy variables for the integrations.
For the above triaxial model we can construct a phantom spheroid $`F(x,y,z)`$ with
$$F(x,y,z)=P(|x|,\sqrt{y^2+z^2})-S(r),P(|x|,\sqrt{y^2+z^2})=x^2p(r),$$
(19)
where the spherical component $`S(r)`$ is computed from (cf. eq. 10)
$$S(r)=-\frac{1}{\pi }\int_r^{\infty }\frac{dJ(w)}{dw}\frac{dw}{\sqrt{w^2-r^2}},J(w)=\int_{-\infty }^{\infty }x^2p(r)dD,$$
(20)
so to cancel the $`P(|x|,\sqrt{y^2+z^2})`$ component in the projected light distribution. Clearly $`F(x,y,z)`$ is a spheroidal distribution with axial symmetry around the Sun-center axis since it is made by subtracting from a prolate distribution $`x^2p(r)`$ a spherical distribution $`S(r)`$ of identical surface brightness. Let $`J_0`$ be the line of sight integration of either the spherical $`S(r)`$ or the prolate $`x^2p(r)`$ distribution along the $`x`$-axis, then
$$J_0\equiv J(0)=\int_{-\infty }^{\infty }S(r)dr=\int_{-\infty }^{\infty }r^2p(r)dr.$$
(21)
The total luminosity of the phantom spheroid, $`L_{PS}`$, is given by
$$L_{PS}=L_P-L_S,L_P=\frac{4\pi }{3}\int_0^{\infty }dr\,r^4p(r),L_S=4\pi \int_0^{\infty }dr\,r^2S(r).$$
(22)
Adding a fraction $`\gamma `$ of the above phantom spheroid to our original model $`\nu _0(x,y,z)`$ we get a new model
$$\nu _\gamma (x,y,z)=\nu _0(x,y,z)+F(x,y,z)\gamma ,$$
(23)
which has a surface brightness map identical to that of the old one. Rearranging the terms we find
$$\nu _\gamma (x,y,z)=G_\gamma (r)+p(r)Q_\gamma (x,y,z),$$
(24)
where
$$G_\gamma \equiv G(r)-\gamma S(r),r=\sqrt{x^2+y^2+z^2},$$
(25)
and
$$Q_\gamma (x,y,z)\equiv Q_0(x,y,z)+\gamma x^2=(\gamma +c_{11})x^2+c_{22}y^2+c_{33}z^2+2c_{12}xy+2c_{23}yz+2c_{31}zx.$$
(26)
So the new model is very similar to the old model. Both are superpositions of a spherical component and a triaxial perturbation; the triaxiality is guaranteed by the fact that $`Q_\gamma (x,y,z)`$ prescribes ellipsoidal isosurfaces. In general the orientation of the object is a function of $`\gamma `$ given by
$$\mathrm{cot}2\alpha _{xy}=\frac{\gamma +c_{11}-c_{22}}{2c_{12}},\mathrm{cot}2\alpha _{xz}=\frac{\gamma +c_{11}-c_{33}}{2c_{13}},\mathrm{cot}2\alpha _{yz}=\frac{c_{22}-c_{33}}{2c_{23}},$$
(27)
where $`\alpha _{xy}`$ and $`\alpha _{xz}`$ are the tilt angles of the object from the line of sight in the $`xy`$ plane (i.e., the $`z=0`$ cut) and the $`xz`$ plane (i.e. the $`y=0`$ cut), and $`\alpha _{yz}`$ is the tilt angle counted from the $`y`$-axis in the $`yz`$ plane (i.e., the $`x=0`$ cut). We make two comments here. First, the orientation in the $`yz`$ plane is not affected by the phantom spheroid, because the latter is rotationally symmetric around the $`x`$-axis (cf. eq. 19). Second, the directions of the principal axes of the triaxial object are in general the eigenvectors of the $`3\times 3`$ matrix $`c_{ij}+\gamma \delta _{i1}\delta _{j1}`$ with $`i,j=1,2,3`$. The corresponding eigenvalues are generally the three roots of a cubic equation. In practice, it is more convenient to speak of the orientation of the object in terms of the angles $`\alpha _{xy}`$ and $`\alpha _{xz}`$, which prescribe the major axes in the $`xy`$ and $`xz`$ plane cuts.
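The angle formulas of eq. (27) can be cross-checked against a direct eigen-decomposition of the in-plane block of $`Q_\gamma `$. The sketch below (illustrative parameter values of our own choosing) compares the closed form for $`\alpha _{xy}`$ with the eigenvector of the $`2\times 2`$ matrix in the $`z=0`$ cut:

```python
import numpy as np

# Illustrative values (our choice), with c11 = c22 so the gamma term alone tilts the axes
c11, c22, c12, gamma = -5.0/6.0, -5.0/6.0, 1.0/3.0, 0.2

# Closed form of eq. (27), written with arctan to fix the principal branch
alpha_formula = 0.5 * np.arctan2(2.0*c12, gamma + c11 - c22)

# Direct eigen-decomposition of the 2x2 block of Q_gamma in the z = 0 cut
M = np.array([[gamma + c11, c12],
              [c12,         c22]])
vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
v = vecs[:, -1]                      # axis with the largest quadratic coefficient
alpha_eig = np.arctan2(abs(v[1]), abs(v[0]))
print(np.degrees(alpha_formula), np.degrees(alpha_eig))
```

Here the angle is measured from the $`x`$-axis (the line of sight); whether that axis is the long or the short axis of the isodensity ellipses depends on the sign of $`p(r)`$, so the comparison only checks the direction, not the labeling.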
There are, however, limits to $`\gamma `$, the amount of PS that we can add to a model. A physical density model must have a positive phase space density and must be stable. This, in principle, sets limits on the non-uniqueness if we can build dynamical (numerical) models for a sequence of potentials with different $`\gamma `$. In practice stringent limits can already be obtained from the minimal requirement that the volume density of a physical model is everywhere positive, i.e.,
$$\nu _\gamma (x,y,z)=\left[G(r)+p(r)Q_0(x,y,z)\right]+\gamma \left[x^2p(r)-S(r)\right]\ge 0.$$
(28)
This generally involves (numerically) searching over a 3-dimensional space $`(x,y,z)`$ for the minimum of the density $`\nu _\gamma (x,y,z)`$, and finding the range of $`\gamma `$ which brings the minimum above zero; for edge-on models, this can be reduced to a search in 2-dimensional space (see Appendix A).
Nevertheless there are many easy-to-use variations of the positivity equation. For example, a positive volume density at the object center requires (cf. eq. 28)
$$\nu _\gamma (0,0,0)=G_\gamma (0)=G(0)-\gamma S(0)\ge 0,$$
(29)
while a less interesting condition can come from requiring a positive total luminosity of the object $`L_\gamma `$, given by
$$L_\gamma \equiv \int d^3𝐫\nu _\gamma (x,y,z)=L_0+\gamma L_{PS}\ge 0.$$
(30)
A stringent set of conditions can be derived by computing the following moments of the density distribution. This is done by first multiplying both sides of eq. (28) by a factor $`r^k`$, where $`k`$ can be 0, 1, 2, etc. Then integrate over $`r`$ along a general direction $`𝐧=(n_1,n_2,n_3)`$ through the origin, i.e., from $`r=-\infty `$ to $`r=\infty `$ with $`(x,y,z)=(n_1r,n_2r,n_3r)`$. We find
$$\int_{-\infty }^{\infty }\nu _\gamma (n_1r,n_2r,n_3r)r^kdr=\int_{-\infty }^{\infty }\left[G(r)+c_{nn}r^2p(r)\right]r^kdr+\int_{-\infty }^{\infty }\left[\gamma n_1^2r^2p(r)-\gamma S(r)\right]r^kdr,$$
(31)
where
$$n_1^2+n_2^2+n_3^2=1,c_{nn}\equiv \sum_{i,j=1}^{3}c_{ij}n_in_j.$$
(32)
Requiring the $`k=0`$ moment of the density $`\nu _\gamma (x,y,z)`$ to be positive we have
$$\int_{-\infty }^{\infty }\nu _\gamma (n_1r,n_2r,n_3r)dr=I_0+\left[\left(\sum_{i,j=1}^{3}c_{ij}n_in_j-c_{11}\right)-\left(n_2^2+n_3^2\right)\gamma \right]J_0\ge 0,$$
(33)
where we have substituted in the definitions for $`I_0`$ and $`J_0`$ (cf. eqns. 18 and 21). Eq. (33) generally yields a necessary upper limit on $`\gamma `$, and the limit is most stringent when the line $`(x,y,z)=(n_1r,n_2r,n_3r)`$ is the minor axis of the model. For example, along the $`y`$-axis with $`(n_1,n_2,n_3)=(0,1,0)`$ we have
$$2c_{12}\mathrm{cot}2\alpha _{xy}=\gamma +c_{11}-c_{22}\le \frac{I_0}{J_0}$$
(34)
where we express the constraint in terms of the tilt angle $`\alpha _{xy}`$ (cf. eq. 27) as well as $`\gamma `$. Note we assume $`J_0>0`$, which is generally valid. Likewise along the $`z`$-axis with $`(n_1,n_2,n_3)=(0,0,1)`$ we obtain
$$2c_{13}\mathrm{cot}2\alpha _{xz}=\gamma +c_{11}-c_{33}\le \frac{I_0}{J_0}.$$
(35)
Likewise a positive $`k=2`$ moment of the density requires
$$2\pi \int_{-\infty }^{\infty }\nu _\gamma (n_1r,n_2r,n_3r)r^2dr=L_0+\left(3\sum_{i,j=1}^{3}c_{ij}n_in_j-\sum_{i=1}^{3}c_{ii}\right)L_P+\gamma \left(3n_1^2L_P-L_S\right)\ge 0,$$
(36)
where we have substituted in the definitions for $`L_0`$, $`L_P`$ and $`L_S`$ (cf. eq. 17 and 22). Eq. (36) generally yields a necessary lower limit on $`\gamma `$. For example, along the $`x`$-axis with $`𝐧=(1,0,0)`$ we have
$$2c_{12}\mathrm{cot}2\alpha _{xy}+2c_{13}\mathrm{cot}2\alpha _{xz}=2\gamma +2c_{11}-c_{22}-c_{33}\ge -\frac{L_\gamma }{L_P},$$
(37)
where we assume the prolate component has a positive luminosity, i.e., $`L_P>0`$, which is generally the case. The r.h.s. of eq. (37) can be further simplified in the limit $`D_0\to \infty `$, i.e., when the observer is far away from the object. In this case $`L_\gamma \approx L_0`$.
## 5 A sequence of nearly edge-on triaxial models with a Gaussian profile
The above general results apply to models with any radial profile. To illustrate the model properties effectively, we will use the following smooth triaxial models with a Gaussian radial profile:
$$\nu _0(x,y,z)=G(r)+p(r)Q_0(x,y,z),G(r)=\nu _ce^{-\frac{r^2}{2a^2}},p(r)=\frac{\nu _c}{a^2}e^{-\frac{r^2}{a^2}},$$
(38)
where $`a`$ is the characteristic scale of the model, $`\nu _c`$ is a characteristic density. These two intrinsic parameters are related to the observable $`I_0`$ (cf. eq. 18) by
$$I_0=\int_{-\infty }^{\infty }\nu _0(x,0,0)dx=\sqrt{\pi }\left(\sqrt{2}+\frac{c_{11}}{2}\right)\nu _ca,$$
(39)
where $`I_0`$ is effectively the integrated light intensity from both the object center $`(l^{},b^{})=(0,0)`$ and the anti-center $`(l^{},b^{})=(\pi ,0)`$. While we prefer to keep our derivations as general as possible, wherever quantitative numerical calculations are shown we set the scaling parameters $`a`$ and $`\nu _c`$ to unity, and use the following set of model parameters
$$c_{11}=c_{22}=-\frac{5}{6},c_{33}=-\frac{7}{6},c_{12}=\frac{1}{3},$$
(40)
and
$$c_{31}=c_{23}=\frac{1}{9},$$
(41)
and put the object at the distance
$$D_0=4a$$
(42)
unless otherwise mentioned. These parameters describe nearly edge-on models with a characteristic angular size of $`\frac{a}{D_0}=\frac{1}{4}\,\mathrm{rad}\approx 15^o`$; edge-on models have $`c_{31}=c_{23}=0`$. We will vary $`\gamma `$ to show the properties of a sequence of models. The results can be generalized qualitatively to all models described by eq. (24).
To find the expression for the phantom spheroid, we substitute eq. (38) in eq. (8). This yields
$$J(w)=\int_{-\infty }^{\infty }x^2p(r)dD=J_0e^{-\frac{w^2}{a^2}}\left[1+\left(\frac{2w^2}{a^2}-1\right)\frac{w^2}{D_0^2}\right],J_0=\frac{\sqrt{\pi }}{2}\nu _ca,$$
(43)
where $`J(w)`$ is the integrated intensity of the prolate distribution $`x^2p(r)`$ along the line of sight with the impact parameter $`w`$, which becomes $`J_0`$ when $`w=0`$. Applying eq. (10) we find that $`S(r)`$ reduces to a simple analytical expression,
$$S(r)=-\frac{1}{\pi }\int_r^{\infty }\frac{dJ(w)}{dw}\frac{dw}{\sqrt{w^2-r^2}}=a^2p(r)\left(\frac{1}{2}+\frac{r^4}{a^2D_0^2}-\frac{3r^2}{2D_0^2}\right),$$
(44)
and the PS is given by
$$F(x,y,z)=x^2p-S=\nu _ce^{-\frac{r^2}{a^2}}\left[\left(\frac{x^2}{a^2}-\frac{1}{2}\right)+\frac{a^2}{D_0^2}\left(\frac{3r^2}{2a^2}-\frac{r^4}{a^4}\right)\right].$$
(45)
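Because eq. (45) is built to have zero surface brightness for an observer at $`(D_0,0,0)`$, its line-of-sight integral should vanish along every direction while the integral of $`|F|`$ stays finite. A direct numerical check (scalings $`a=\nu _c=1`$ and $`D_0=4`$ as in eq. 42; the off-axis sightline is an arbitrary choice of ours, and the signs of eq. 45 are written out explicitly):

```python
import numpy as np

a, nu_c, D0 = 1.0, 1.0, 4.0   # scalings of eq. (38) and the distance of eq. (42)

def F(x, y, z):
    """Phantom spheroid of eq. (45)."""
    r2 = x*x + y*y + z*z
    return nu_c*np.exp(-r2/a**2)*((x*x/a**2 - 0.5)
                                  + (a**2/D0**2)*(1.5*r2/a**2 - (r2/a**2)**2))

def project(lp, bp, n=20001):
    """Trapezoidal line-of-sight integral of F along (l', b'), using eq. (1)."""
    D = np.linspace(0.0, 3.0*D0, n)
    x = D0 - D*np.cos(bp)*np.cos(lp)
    y = D*np.cos(bp)*np.sin(lp)
    z = D*np.sin(bp)
    f = F(x, y, z)
    h = D[1] - D[0]
    net = h*(f.sum() - 0.5*(f[0] + f[-1]))
    gross = h*np.abs(f).sum()
    return net, gross

# An arbitrary off-axis sightline: the net projection cancels, |F| does not
net, gross = project(np.radians(20.0), np.radians(10.0))
print(net, gross)
```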
A few examples of the PS are shown in Fig. 2. They vary as a function of the characteristic angular size of the object $`\frac{a}{D_0}`$. When the observer is well inside the object, $`\frac{a}{D_0}\gg 1`$, the PS density becomes nearly spherical because of the strong dependence on $`r`$ (cf. eq. 45). The amplitude changes from $`\frac{a^2}{5D_0^2}\nu _c`$ to $`-\frac{a^2}{5D_0^2}\nu _c`$ when $`r`$ moves from $`r=a`$ to $`r=1.5a`$.
We can also generate a sequence of analytical phantom spheroids by repeatedly applying the derivative operator $`\frac{1}{2}\frac{\partial }{\partial \mathrm{ln}a}`$ to the original phantom spheroid $`F(x,y,z)`$ (cf. eq. 45), i.e.,
$$F_n(x,y,z)=x^2p_n(r)-S_n(r),p_n(r)\equiv \frac{\partial ^np(r)}{2^n\partial (\mathrm{ln}a)^n},S_n(r)\equiv \frac{\partial ^nS(r)}{2^n\partial (\mathrm{ln}a)^n}$$
(46)
for $`n=1,2,3,\mathrm{}`$, where $`x^2p_n(r)`$ and $`S_n(r)`$ form a new pair of a prolate bar and a spherical bulge with matching surface brightness. These new analytical PS densities are given in the Appendix B. The contours for these new PS densities are shown in Fig. 3, and radial profiles in Fig. 4. As $`D_0`$ decreases, the amplitude of the radial oscillation of the PS increases, but at the origin we have,
$$F(0,0,0)=-0.5\nu _c,F_1(0,0,0)=F_2(0,0,0)=0,$$
(47)
independent of $`D_0`$.
Adding the phantom spheroidal density $`F(x,y,z)`$ to our $`\gamma =0`$ model (cf. eq. 38), we obtain the general expression for the model density
$`\nu _\gamma (x,y,z)`$ $`=`$ $`\nu _ce^{-\frac{r^2}{2a^2}}\left\{1+e^{-\frac{r^2}{2a^2}}\left[{\displaystyle \frac{Q_\gamma }{a^2}}-{\displaystyle \frac{\gamma }{2}}-{\displaystyle \frac{\gamma r^4}{a^2D_0^2}}+{\displaystyle \frac{3\gamma r^2}{2D_0^2}}\right]\right\},`$ (48)
$`Q_\gamma `$ $`=`$ $`(\gamma +c_{11})x^2+c_{22}y^2+c_{33}z^2+2c_{12}xy+2c_{23}yz+2c_{31}zx.`$
The object-observer distance $`D_0`$ can take on any positive value, and the parameters $`\gamma `$ and $`c_{ij}`$ can describe the most general orientation of the three symmetry planes of the model (cf. eq. 26). These models have also the nice property that the PS density falls off steeper than the spherical term $`G(r)`$, meaning that the models are always nearly spherical with a positive, Gaussian profile at large radii.
The free parameter $`\gamma `$ is constrained by the requirement of a positive volume density to the range
$$\mathrm{Min}(\gamma )<\gamma <\mathrm{Max}(\gamma ),\mathrm{Min}(\gamma )\approx -0.5,\mathrm{Max}(\gamma )\approx 0.6,$$
(49)
where the lower and upper limits are found by computing the density numerically on a spatial grid in $`(x,y,z)`$ and examining its positivity (cf. eq. 28 with the parameters in eq. 40). These translate to
$$\mathrm{Min}(\alpha _{\mathrm{xy}})<\alpha _{\mathrm{xy}}<\mathrm{Max}(\alpha _{\mathrm{xy}}),\mathrm{Min}(\alpha _{\mathrm{xz}})<\alpha _{\mathrm{xz}}<\mathrm{Max}(\alpha _{\mathrm{xz}}),$$
(50)
where
$`\mathrm{Min}(\alpha _{\mathrm{xy}})`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{arctan}\left[{\displaystyle \frac{2c_{12}}{\mathrm{Max}(\gamma )+c_{11}-c_{22}}}\right]\approx 25^o,`$ (51)
$`\mathrm{Max}(\alpha _{\mathrm{xy}})`$ $`=`$ $`{\displaystyle \frac{\pi }{2}}-{\displaystyle \frac{1}{2}}\mathrm{arctan}\left[{\displaystyle \frac{2c_{12}}{c_{22}-c_{11}-\mathrm{Min}(\gamma )}}\right]\approx 65^o,`$
$`\mathrm{Min}(\alpha _{\mathrm{xz}})`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{arctan}\left[{\displaystyle \frac{2c_{31}}{\mathrm{Max}(\gamma )+c_{11}-c_{33}}}\right]\approx 7^o,`$ (52)
$`\mathrm{Max}(\alpha _{\mathrm{xz}})`$ $`=`$ $`{\displaystyle \frac{\pi }{2}}-{\displaystyle \frac{1}{2}}\mathrm{arctan}\left[{\displaystyle \frac{2c_{31}}{c_{33}-c_{11}-\mathrm{Min}(\gamma )}}\right]\approx 65^o.`$
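The grid search described above is easy to reproduce. Since $`\nu _\gamma `$ of eq. (48) is linear in $`\gamma `$ at every point, $`\nu _\gamma \propto A+\gamma B`$, one pass over a grid yields the allowed interval. The sketch below is a rough version with a coarse grid (the signs of $`c_{11},c_{22},c_{33}`$ follow the density-deficit convention implied by the quoted angle limits); a coarse grid only brackets the quoted $`\approx (-0.5,0.6)`$:

```python
import numpy as np

# Parameters in the spirit of eqs. (40)-(42), with a = nu_c = 1; note the signs:
# the quadratic term is a density deficit, so c11, c22, c33 are negative here.
c11, c22, c33, c12, c31, c23 = -5/6, -5/6, -7/6, 1/3, 1/9, 1/9
D0 = 4.0

g = np.linspace(-4.0, 4.0, 81)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
r2 = x*x + y*y + z*z
Q0 = c11*x*x + c22*y*y + c33*z*z + 2.0*(c12*x*y + c23*y*z + c31*z*x)

# Positivity of eq. (48) is equivalent to A + gamma*B >= 0 at every grid point:
A = 1.0 + np.exp(-r2/2.0)*Q0
B = np.exp(-r2/2.0)*(x*x - 0.5 - r2*r2/D0**2 + 1.5*r2/D0**2)

gmax = np.min(-A[B < 0.0]/B[B < 0.0])   # tightest upper bound on gamma
gmin = np.max(-A[B > 0.0]/B[B > 0.0])   # tightest lower bound on gamma
print(gmin, gmax)
```

Refining the grid (and searching off the lattice points) tightens both bounds toward the quoted interval.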
Necessary (but not sufficient) limits can also be derived analytically using, for example, conditions (34) and (35). Upon substitution of eqs. (43) and (39) for our Gaussian triaxial models we find
$`\alpha _{xy}`$ $`>`$ $`{\displaystyle \frac{1}{2}}\mathrm{arctan}\left({\displaystyle \frac{2c_{12}J_0}{I_0}}\right)={\displaystyle \frac{1}{2}}\mathrm{arctan}\left({\displaystyle \frac{2c_{12}}{2\sqrt{2}+c_{11}}}\right)\approx 10^o,`$ (53)
$`\alpha _{xz}`$ $`>`$ $`{\displaystyle \frac{1}{2}}\mathrm{arctan}\left({\displaystyle \frac{2c_{31}J_0}{I_0}}\right)={\displaystyle \frac{1}{2}}\mathrm{arctan}\left({\displaystyle \frac{2c_{31}}{2\sqrt{2}+c_{11}}}\right)\approx 3^o.`$
And likewise eq. (37) requires
$$\gamma \left(1-\frac{3a^2}{2D_0^2}\right)+\left(2\sqrt{2}+\frac{3}{2}c_{11}\right)>0.$$
(54)
For an observer at $`D_0=4a`$ this reduces to a lower limit on the amount of phantom spheroid with $`\gamma >1.75`$, or to an upper limit on the model major axis angles with
$`\alpha _{xy}`$ $`=`$ $`{\displaystyle \frac{\pi }{2}}-{\displaystyle \frac{1}{2}}\mathrm{arctan}\left({\displaystyle \frac{2c_{12}}{c_{22}-c_{11}-\gamma }}\right)<80^o,`$ (55)
$`\alpha _{xz}`$ $`=`$ $`{\displaystyle \frac{\pi }{2}}-{\displaystyle \frac{1}{2}}\mathrm{arctan}\left({\displaystyle \frac{2c_{31}}{c_{33}-c_{11}-\gamma }}\right)<85^o.`$
Fig. 6 and 6 illustrate two Gaussian triaxial models, where the parameter $`\gamma `$ is fixed by assigning the tilt angle $`\alpha _{xy}`$ (cf. eq. 27) in the $`xy`$ plane to $`25^o`$ or $`50^o`$. Figs. (10) and (11) show the identical surface brightness distribution of the two models, independent of $`\gamma `$. Unlike the cigar-shaped bars in Fig. 1, these two triaxial models are smooth with a positive density everywhere and an overall Gaussian radial profile except for a small bump near $`\frac{D_0}{2}`$ (cf. Fig. 7).
The two models differ only in the even part of the volume density as shown by the cross-sections at, say, $`x=2a/5`$ (cf. Fig. 8). Subtracting one model from another we get back the phantom spheroid, which is rotationally symmetric around the Sun-object center axis.
The odd part of the volume density is identical for both models, and is shown in Fig. (9). Mathematically, the odd part of the density is defined by
$$\nu ^{odd,y}\equiv \frac{1}{2}\left[\nu _\gamma (x,y,z)-\nu _\gamma (x,-y,z)\right]=2y\left(c_{12}x+c_{23}z\right)p(r),$$
(56)
and
$$\nu ^{odd,z}\equiv \frac{1}{2}\left[\nu _\gamma (x,y,z)-\nu _\gamma (x,y,-z)\right]=2z\left(c_{13}x+c_{23}y\right)p(r).$$
(57)
Clearly both $`\nu ^{odd,y}`$ and $`\nu ^{odd,z}`$ are independent of $`\gamma `$, i.e., the amount of PS in the model.
The odd part of the density shows up as the asymmetry in the surface brightness. The left-to-right asymmetry is a line-of-sight integration of $`\nu ^{odd,y}`$ with
$$\frac{1}{2}\left[I(l^{\prime },b^{\prime })-I(-l^{\prime },b^{\prime })\right]=\int_{-\infty }^{\infty }\nu ^{odd,y}dD,$$
(58)
and the up-to-down asymmetry is an integration of $`\nu ^{odd,z}`$ along the line of sight with
$$\frac{1}{2}\left[I(l^{\prime },b^{\prime })-I(l^{\prime },-b^{\prime })\right]=\int_{-\infty }^{\infty }\nu ^{odd,z}dD.$$
(59)
More specific for the Gaussian triaxial models here, the asymmetry along the $`b^{}=0`$ cut is given by
$$\frac{I(l^{\prime },0)-I(-l^{\prime },0)}{2}=I_0\stackrel{~}{c}_{12}W(w)\mathrm{sin}2l^{\prime },\stackrel{~}{c}_{12}\equiv \frac{c_{12}}{2\sqrt{2}+c_{11}},$$
(60)
and the asymmetry along the $`l^{}=0`$ cut is given by
$$\frac{I(0,b^{\prime })-I(0,-b^{\prime })}{2}=I_0\stackrel{~}{c}_{13}W(w)\mathrm{sin}2b^{\prime },\stackrel{~}{c}_{13}\equiv \frac{c_{13}}{2\sqrt{2}+c_{11}}$$
(61)
where $`\stackrel{~}{c}_{12}`$ and $`\stackrel{~}{c}_{13}`$ are the combinations of the intrinsic shape parameters of the model, and
$$W(w)=e^{-\frac{w^2}{a^2}}\left(\frac{2w^2}{a^2}-1\right),$$
(62)
is a function of the impact parameter $`w`$. Effectively $`W(w)`$ is a rescaled asymmetry distribution after removing the periodic anti-symmetric term $`\mathrm{sin}2l^{\prime }`$ or $`\mathrm{sin}2b^{\prime }`$. Note that the asymmetry distribution has a reversal of sign around the impact parameter
$$w=a/\sqrt{2}.$$
(63)
This means we can derive the scale length of the model $`a`$ directly from the observable asymmetry map. For example, for the models in Fig. (11), the object is at a distance $`D_0=4a`$, so the reversal of the asymmetry, i.e., $`W(w)=0`$, happens at
$$|l^{\prime }|=\mathrm{arcsin}\frac{w}{D_0}\approx 10^o,$$
(64)
in the longitudinal cut, or at $`|b^{\prime }|\approx 10^o`$ in the latitudinal cut. While fitting the asymmetry map will tell us unambiguously the scale length $`a`$ and the shape parameters $`\stackrel{~}{c}_{12}`$ and $`\stackrel{~}{c}_{13}`$ of the triaxial object, it does not directly constrain the major-axis angle $`\alpha `$, nor does it constrain the amount of phantom spheroidal density.
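Reading the scale length off the reversal point is then a one-liner (fiducial $`D_0=4a`$ as in eq. 42):

```python
import math

a, D0 = 1.0, 4.0                           # fiducial scalings, eq. (42)
w0 = a / math.sqrt(2.0)                    # zero of W(w), eq. (63)
l_rev = math.degrees(math.asin(w0 / D0))   # reversal longitude, eq. (64)
print(l_rev)   # close to 10 degrees
```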
## 6 Phantom Spheroid vs. the Galactic bar
This new form of non-uniqueness of nearby objects might apply to the Galactic bar as well. As mentioned in the Introduction, the COBE/DIRBE surface brightness distribution is the main source of information about the volume density of the Galactic bar. Imagine the effect of adding any amount of the phantom spheroids here, $`\gamma F(x,y,z)`$, to the COBE bar. It does not matter whether we use the cigar-shaped one in Fig. 1 or the smoother ones shown in Fig. 2, as long as the PS is exactly centered at the Galactic center, i.e., we set $`D_0`$ of these PS densities to the Galactocentric distance (say 8 kpc). The result is a new volume density of the Galaxy with a generally twisted shape in the central part, but there should be no trace of alteration in the surface brightness maps (the symmetric maps or the asymmetry maps). This is purely because of the construction of the PS (cf. eq. 5). So the perspective effect in the COBE/DIRBE maps is bypassed completely. However, adding any phantom spheroid here will modify only the even part of the volume density and will have no influence on the anti-symmetric part, since the PS is always symmetric with respect to the $`l=0`$ plane and the $`b=0`$ plane (cf. eq. 6 and Fig. 8). So as illustrated by Fig. (9) and Fig. (10), the prominent perspective effect in the left-to-right asymmetry map $`\left[I(l,b)-I(-l,b)\right]/2`$ or the up-to-down asymmetry map $`\left[I(l,b)-I(l,-b)\right]/2`$ should tightly constrain the part of the volume density which is anti-symmetric with respect to the $`l=0^o`$ plane or the $`b=0^o`$ plane (cf. eqs. 58 and 59).
It is tempting to suggest that phantom spheroids are responsible for the uncertain major axis angle of the Galactic bar from the COBE/DIRBE map (Binney & Gerhard 1996, Binney, Gerhard & Spergel 1997). However, this statement depends on whether there exists a phantom spheroidal density which preserves the assumed mirror symmetry of the COBE bar exactly. Unless the added PS has a density profile matching that of the COBE bar, it will create an $`m=2`$ twist, like a two-armed spiral pattern or an S-shaped warp. On the other hand, part of such distortion may be hidden by the spiral arms in the Galactic disk. After all, the mirror symmetry is merely an assumption, which is motivated by the fact that external face-on barred galaxies have a largely bi-symmetric distribution of light. Here we remark that the sequence of Gaussian triaxial models here captures the main features of the Galactic bar and the COBE/DIRBE maps: the models shown in Fig. 6 and 6 with the scalelength $`a=D_0/4=2`$kpc resemble the Gaussian Galactic bar models of Dwek et al. (1995) and Freudenreich (1998) in terms of aspect ratios, radial profile and the offset of the Sun from all three symmetry planes of the bar; the asymmetric patterns in the surface brightness cuts in Fig. (11) also mimic those seen in the COBE/DIRBE maps. Hence it would not be surprising if there exists a phantom spheroid which is qualitatively similar to the one prescribed by eq. (45) and preserves the mirror symmetry of the Galactic bar. Nevertheless a quantitative determination of the parameters of the COBE bar is a complex numerical problem involving at the least a reliable treatment of dust extinction (Arendt et al. 1994, Spergel 1997). Note that patchy dust in the Galaxy makes a phantom spheroid here project to a non-zero irregular surface brightness map.
The effect is strongest when the dust is mixed with stars near the center, but even a dust screen near the Sun can cast a faint shadow unless the phantom spheroid is confined inside the solar circle (e.g. the cigar-shaped PS in Fig. 1). The fact that dust extinction is a function of wavelength could constrain the phantom spheroids. The range of the bar angle and axis ratio is also limited by other observational constraints on the bar, such as the microlensing optical depth (Zhao & Mao 1996, Bissantz, Englmaier, Binney, & Gerhard 1997) and star counts (Stanek et al. 1994). Star count data, for example, can constrain the line-of-sight distribution of the bar, hence can in principle distinguish between a prolate bar and a spherical bulge, and break the degeneracy of the PS. A detailed numerical treatment of the Galactic bar is beyond the scope of this paper on the existence of non-uniqueness in deprojecting a general nearby object.
## 7 Phantom Spheroid vs. known non-uniqueness in external galaxies
The phantom spheroidal density for nearby systems is of a different nature from the kind of non-uniqueness associated with a simple shear and/or stretch transformation of an ellipsoid. This applies even in the limit that the object is at infinity because these transformations normally change the odd part of the density, while the PS density here is strictly an even function (cf. eq. 6).
Our phantom spheroidal density $`F(x,y,z)`$ is a function of the object distance $`D_0`$. In general
$$F(x,y,z)=F_{D_0\rightarrow \mathrm{\infty }}(x,y,z)+D_0^{-2}F_0(r),$$
(65)
where $`F_{D_0\rightarrow \mathrm{\infty }}`$ is the PS in the limit of $`D_0\rightarrow \mathrm{\infty }`$, and $`F_0(r)`$ is (up to a constant) the PS in the limit of $`D_0\rightarrow 0`$. For a hypothetical observer receding from the object, the distance $`D_0`$ increases, and the PS density is adjusted slightly to accommodate the changing perspective, so as to preserve this kind of non-uniqueness all the way to extragalactic distance. The common ambiguity of an extragalactic axisymmetric bulge with an end-on bar is a very special case of this kind of non-uniqueness for nearby objects.
The divergence of $`F(x,y,z)`$ at small $`D_0`$ (cf. eq. 65) has an interesting implication. In the limit that $`D_0\rightarrow 0`$ some regions of the phantom spheroid will have infinitely positive density, and some regions infinitely negative density, because $`\gamma F(x,y,z)\rightarrow \gamma D_0^{-2}F_0(r)`$, and $`F_0(r)`$ changes sign, for example, at $`r=a\sqrt{3/2}`$ (cf. eq. 45 or Fig. 4). So to add any finite amount of PS would violate the positivity requirement. Hence only the model with $`\gamma =0`$ is allowed, and the major axis direction is fixed to the direction where the object appears the brightest in the projected map. In other words the kind of non-uniqueness discussed here disappears if the observer is very close to the object center. In general the range for $`\gamma `$ becomes very narrow if $`D_0\lesssim a`$, and is insensitive to the distance $`D_0`$ to the object if $`D_0>2a`$.
Unlike its external counterpart with $`D_0\rightarrow \mathrm{\infty }`$, the PS density for a nearby triaxial body is not massless. The total luminosity $`L_\gamma `$ of our model changes with $`\gamma `$. For the Gaussian model we have (cf. eqs. 30 and 48)
$`L_\gamma `$ $`=`$ $`{\displaystyle \int d^3𝐫\nu _\gamma (x,y,z)}=L_0+\gamma L_{PS},`$ (66)
$`L_0`$ $`=`$ $`\pi ^{\frac{3}{2}}\nu _ca^3\left[2^{\frac{3}{2}}+{\displaystyle \frac{c_{11}+c_{22}+c_{33}}{2}}\right],`$
$`L_{PS}`$ $`=`$ $`\left({\displaystyle \frac{3\pi ^{\frac{3}{2}}\nu _ca^3}{2}}\right)\left({\displaystyle \frac{\gamma a^2}{D_0^2}}\right),`$
where $`L_0`$ is the luminosity in the absence of the PS. The PS has a luminosity $`L_{PS}`$ proportional to $`D_0^{-2}`$.
The normalization $`\nu _c`$ and the scale $`a`$ are fixed by the projected light intensity $`I_0`$ and the asymmetry map $`W(w)`$ (cf. eqs. 62 and 39), but the total luminosity changes by a fraction of the order of $`\frac{a^2}{D_0^2}`$. This is because unlike external systems, the observed angular distribution of light in a nearby object does not simply sum up to a unique measurement of its total luminosity. We may slightly underestimate/overestimate the intrinsic luminosity by about a few percent, which may not be easy to detect in reality. Moving away from the object the observer has a full outside view and hence a better determination of its luminosity but at the expense of relaxing the constraint on the orientation of the object from the perspective effect and positivity.
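The order of magnitude of this effect can be sketched in a few lines; the numbers below are the Galactic-bar-like values used in Section 6 (scale length 2 kpc, distance 8 kpc), purely for illustration:

```python
def luminosity_fraction(a, D0):
    """Order-of-magnitude fractional luminosity change |L_PS|/L_0 ~ a^2/D0^2
    implied by eq. (66); a and D0 must be in the same units."""
    return (a / D0) ** 2

# a = 2 kpc scale length, D0 = 8 kpc Galactocentric distance
print(luminosity_fraction(2.0, 8.0))  # 0.0625, i.e. a few percent
```

This is exactly the "few percent" level quoted above, hard to detect without an outside view of the object.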
The phantom spheroid also affects the dynamics of the model; however, the effect can be fairly difficult to detect. For the models in eq. (23), the depth of the potential well at the center expressed in terms of the escape velocity $`V_{esc}`$ is given by
$$\frac{1}{2}V_{esc}^2=\pi \nu _ca^2G(M/L)\left\{\left[4+\frac{2}{3}\left(c_{11}+c_{22}+c_{33}\right)\right]-\left(\frac{1}{3}+\frac{a^2}{D_0^2}\right)\gamma \right\},$$
(67)
which decreases linearly with increasing fraction ($`\gamma `$) of the superimposed PS, where $`M/L`$ is the mass-to-light ratio. As a result, the potential well of the $`\alpha =25^o`$ model is slightly shallower than that of the more centrally concentrated $`\alpha =50^o`$ model (cf. Fig. 6 and 6). However, the difference is only 7% in terms of the maximum escape velocity $`V_{esc}`$ of the models. The differences in terms of the mass weighted average velocity dispersion (estimated from the virial theorem) and the circular velocity (estimated directly from the potential) are also at only a few percent level, too small to be measured with certainty. However it might still be feasible to distinguish the models in terms of the stellar orbits in them. The projected velocity distribution should change with the major axis angle.
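The size of this dynamical effect can be sketched from eq. (67). The minus sign in front of the $`\gamma `$ term is a reconstruction (the text states only that $`V_{esc}`$ decreases linearly with $`\gamma `$), and all input numbers below are assumptions for illustration rather than the fitted model parameters:

```python
import math

def vesc(gamma, c_sum, a_over_D0, norm=1.0):
    """Escape velocity from eq. (67), up to the overall factor
    norm = 2*pi*nu_c*a^2*G*(M/L); c_sum stands for c11 + c22 + c33."""
    brace = (4.0 + (2.0 / 3.0) * c_sum) - (1.0 / 3.0 + a_over_D0 ** 2) * gamma
    return math.sqrt(norm * brace)

v0 = vesc(0.0, c_sum=0.3, a_over_D0=0.25)   # no phantom spheroid
v1 = vesc(0.5, c_sum=0.3, a_over_D0=0.25)   # some PS admixture, say gamma = 0.5
print(1.0 - v1 / v0)   # fractional decrease: a few percent for these numbers
```

For these hypothetical inputs the decrease is about 2 percent, consistent with the few-percent differences quoted above.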
Our phantom spheroidal density is also different from konuses since it is not confined to any cone in the Fourier k-space even in the limit $`D_0\rightarrow \mathrm{\infty }`$. Using eqs. (4), (24) and (25) of Palmer (1994) we can compute the Fourier transform of our PS (cf. eq. 45). We find
$$\stackrel{~}{F}(𝐤)\equiv \int d^3𝐫\mathrm{exp}(i𝐤\cdot 𝐫)\left[p(r)x^2-S(r)\right]=\stackrel{~}{F}_{D_0\rightarrow \mathrm{\infty }}(𝐤)+D_0^{-2}\stackrel{~}{F}_0(𝐤),$$
(68)
which is a k-space spheroidal distribution around the Sun-center line, where
$$\stackrel{~}{F}_{D_0\rightarrow \mathrm{\infty }}(𝐤)\equiv \pi ^{\frac{3}{2}}\nu _ca^3\left(\mathrm{cos}^2\theta _𝐤-\frac{2}{3}\right)\frac{k^2a^2}{4}e^{-\frac{k^2a^2}{4}},$$
(69)
$`\theta _𝐤`$ is the angle of the k-vector with the Sun-center line, and
$$D_0^{-2}\stackrel{~}{F}_0(𝐤)\equiv \left(1-\frac{7k^2a^2}{12}+\frac{k^4a^4}{24}\right)e^{-\frac{k^2a^2}{4}}L_{PS}.$$
(70)
We recover the luminosity of the PS in the limit $`𝐤\rightarrow 0`$, $`\stackrel{~}{F}(0)=D_0^{-2}\stackrel{~}{F}_0(0)=L_{PS}`$ (cf. eq. 66). Fig. 12 shows that even when the object is placed at infinity, $`\stackrel{~}{F}(𝐤)`$ is non-zero everywhere except at k$`=0`$ or $`\theta _𝐤=\mathrm{cos}^{-1}\sqrt{\frac{2}{3}}`$ (cf. eq. 69). Since any konus-like structure (or its triaxial version) must have a certain “cone of ignorance” around a principal axis of the model outside which $`\stackrel{~}{F}(𝐤)=0`$ (Rybicki 1986, Gerhard & Binney 1996, Kochanek & Rybicki 1996), the kind of non-uniqueness shown in this paper has little to do with konuses. The two sequences of models meet only when the object is a face-on or end-on spheroidal system, in which case, the PS here is equivalent to a trivial konus with the entire $`𝐤`$-space in the “cone of ignorance”.
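The location of the zeros of $`\stackrel{~}{F}`$ can be checked directly from eq. (69). The sketch below assumes the reconstructed sign conventions (an angular factor $`\mathrm{cos}^2\theta _𝐤-2/3`$ and a decaying Gaussian), which is consistent with the stated zeros at $`k=0`$ and on the cone:

```python
import math

def F_inf(k, theta_k, a=1.0, nu_c=1.0):
    """Large-distance (D0 -> infinity) Fourier transform of the PS,
    eq. (69), with the dropped signs reconstructed as described above."""
    prefactor = math.pi ** 1.5 * nu_c * a ** 3
    angular = math.cos(theta_k) ** 2 - 2.0 / 3.0
    x = (k * a) ** 2 / 4.0
    return prefactor * angular * x * math.exp(-x)

print(F_inf(0.0, 0.3))                              # 0.0 at k = 0
print(F_inf(2.0, math.acos(math.sqrt(2.0 / 3.0))))  # ~0 on the special cone
print(F_inf(2.0, 0.0))                              # nonzero everywhere else
```

Unlike a konus, this transform has no finite "cone of ignorance": it vanishes only on a single angle, not on an open set of k-directions.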
## 8 Conclusion
The non-uniqueness in deprojecting the surface brightness maps of a nearby system can originate from the simple fact that a spherical bulge can match an end-on prolate bar or a face-on disk in the line-of-sight integrated light distribution. Subtracting the spherical bulge from the prolate end-on bar or the face-on disk forms a phantom spheroid (PS), which has both positive and negative density regions. The non-uniqueness due to such a PS can be characterized by two numbers: the distance ($`D_0`$) of the object and the amount ($`\gamma `$) of the PS density that we can superimpose. By tailoring the PS, it is possible to preserve the triaxial reflection symmetries of the model. The orientation of these symmetry planes with respect to our line of sight changes as an increasing amount of PS is added, up to the point when negative density regions appear in the final model. The limits on the major-axis angles are given analytically. The phantom spheroidal density here forms a new class of non-uniqueness of an entirely different nature from the known degeneracies in deprojecting extragalactic objects. It does not preserve the total luminosity of the system.
The author thanks Frank van den Bosch and Tim de Zeeuw for many helpful comments on the presentation, and Ortwin Gerhard for careful reading of an early draft.
## Appendix A Positivity as a constraint to the major axis angle of edge-on models
Edge-on models are specified by
$$c_{23}=c_{31}=\alpha _{yz}=\alpha _{xz}=0,c_{12}\equiv ϵ ,\alpha \equiv \alpha _{xy},$$
(71)
so that the $`z=0`$ plane, which passes through the observer at $`(x,y,z)=(D_0,0,0)`$, is a symmetry plane of the model. The volume density model of the system is best described in a cylindrical coordinate system $`(R,Z,\psi )`$ centered on the object. This coordinate system is related to the rectangular coordinate system by
$$x=R\mathrm{cos}\psi ,y=R\mathrm{sin}\psi ,z=Z,$$
(72)
where the $`Z`$-axis is the symmetry axis of the edge-on system, and $`\psi =0`$ is a plane, passing through the Sun-object line and the $`Z`$-axis. In this coordinate system we can rewrite the ellipsoidal term $`Q_\gamma `$ (cf. eq. 26) as
$$Q_\gamma =\left[c_{33}Z^2+\left(c_{22}+ϵ\mathrm{cot}2\alpha \right)R^2\right]+\frac{ϵR^2}{\mathrm{sin}2\alpha }\mathrm{cos}(2\psi 2\alpha ).$$
(73)
Clearly surfaces of constant $`Q_\gamma `$ are ellipsoidal surfaces with mirror symmetry with respect to the $`\psi =\alpha `$ plane, the $`\psi =\alpha +\frac{\pi }{2}`$ plane and the $`Z=0`$ plane.
The triaxial model density (cf. eq. 24) can then be written in above notations as
$$\nu _\gamma (R,Z,\psi )=n_0(R,Z)+n_2(R,Z)\mathrm{cos}(2\psi 2\alpha ),$$
(74)
where the second term is a triaxial perturbation with the major axis along $`\psi =\alpha `$ and amplitude
$$n_2(R,Z)=\frac{ϵR^2p}{\mathrm{sin}2\alpha },$$
(75)
and the first term, the axisymmetric part, is given by
$$n_0(R,Z)\equiv E(R,Z)+ϵ \mathrm{cot}2\alpha \left[R^2p-2S\right],$$
(76)
where
$$E(R,Z)\equiv \left[G+(c_{11}-c_{22})S\right]+(c_{22}R^2+c_{33}Z^2)p.$$
(77)
Imposing positivity requirements to $`\nu _\gamma (R,Z,\psi )`$ for all values of the azimuthal angle $`\psi `$, we have
$$n_0(R,Z)\ge n_2(R,Z),$$
(78)
i.e.,
$$E(R,Z)+ϵ \mathrm{cot}2\alpha \left[R^2p-2S\right]\ge \frac{ϵ R^2p}{\mathrm{sin}2\alpha }.$$
(79)
Using the following relations for sinusoidal functions
$$\frac{1}{\mathrm{sin}2\alpha }=\frac{1+t^2}{2t},\mathrm{cot}2\alpha =\frac{1-t^2}{2t},t\equiv \mathrm{tan}\alpha ,$$
(80)
the inequality reduces to an upper and a lower bound for the angle $`\alpha `$ with,
$$\mathrm{Max}\left[\mathrm{t}_{-}\right]\le \mathrm{tan}\alpha \le \mathrm{Min}\left[\mathrm{t}_{+}\right],$$
(81)
where $`t_{-}\le t\le t_{+}`$ is the range bounded by the effectively quadratic inequality for $`t`$
$$\left[ϵ R^2p-ϵ S\right]t+ϵ St^{-1}\le E(R,Z)$$
(82)
at a given position $`(R,Z)`$ on the meridional plane, and the overlapped interval of these ranges is used in eq. (81). For example, for models with parameters in eq. (40) and $`D_0=4a`$, we find $`77^o>\alpha >15^o`$ to ensure a positive density at $`(R,Z)=(a,a)`$, but a narrower range $`63.5^o\ge \alpha \ge 22.5^o`$ is necessary to ensure positivity everywhere in the $`(R,Z)`$ plane.
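The bound can be evaluated numerically. Writing the positivity condition as $`(A-B)t+B/t\le E`$ with $`t=\mathrm{tan}\alpha `$, $`A=ϵ R^2p`$ and $`B=ϵ S`$ (this is eq. 82 with the lost minus signs restored), the allowed angles follow from a quadratic in $`t`$. The input values below are purely illustrative, not the eq. (40) parameters:

```python
import math

def alpha_range(E, A, B):
    """Allowed major-axis angle interval from (A - B)*t + B/t <= E,
    t = tan(alpha); returns (alpha_min, alpha_max) in degrees, or None
    when positivity cannot be satisfied at this point."""
    disc = E * E - 4.0 * (A - B) * B        # discriminant of (A-B)t^2 - E*t + B
    if disc < 0.0 or A <= B:
        return None
    t_minus = (E - math.sqrt(disc)) / (2.0 * (A - B))
    t_plus = (E + math.sqrt(disc)) / (2.0 * (A - B))
    return math.degrees(math.atan(t_minus)), math.degrees(math.atan(t_plus))

lo, hi = alpha_range(E=2.0, A=1.0, B=0.3)
print(round(lo, 1), round(hi, 1))   # roughly 9.0 and 69.7 degrees
```

Intersecting such intervals over a grid of $`(R,Z)`$ points reproduces the "overlapped interval" prescription of eq. (81).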
Interestingly we recover the results of eq. (34) if we integrate both sides of eq. (82) along the $`y`$-axis. And likewise if we multiply both sides of eq. (82) by a factor $`x^2`$ and then integrate along the $`x`$-axis, we recover the results of eq. (37).
## Appendix B Analytical expressions for a series of phantom spheroids
New phantom spheroids can be generated from the original PS $`F(x,y,z)=x^2p(r)-S(r)`$ (cf. eq. 45) by repeatedly applying the derivative operator $`-\frac{1}{2}\frac{\partial }{\partial \mathrm{ln}a}`$, i.e.,
$$F_1(x,y,z)=-\frac{\partial F(x,y,z)}{2\partial (\mathrm{ln}a)}=x^2p_1(r)-S_1(r),$$
(83)
where
$$p_1(r)\equiv -\frac{\partial p(r)}{2\partial (\mathrm{ln}a)}=p(r)\left(1-\frac{r^2}{a^2}\right),$$
(84)
$$S_1(r)\equiv -\frac{\partial S(r)}{2\partial (\mathrm{ln}a)}=a^2p(r)\left[\frac{r^2}{2a^2}+\frac{a^2}{D_0^2}\left(\frac{5r^4}{2a^4}+\frac{r^6}{a^6}\right)\right],$$
(85)
and
$$F_2(x,y,z)=-\frac{\partial F_1(x,y,z)}{2\partial (\mathrm{ln}a)}=x^2p_2(r)-S_2(r),$$
(86)
where
$$p_2(r)\equiv -\frac{\partial p_1(r)}{2\partial (\mathrm{ln}a)}=p(r)\left(1-\frac{3r^2}{a^2}+\frac{r^4}{a^4}\right),$$
(87)
$$S_2(r)\equiv -\frac{\partial S_1(r)}{2\partial (\mathrm{ln}a)}=a^2p(r)\left[\left(\frac{r^2}{2a^2}-\frac{r^4}{2a^4}\right)+\frac{a^2}{D_0^2}\left(\frac{5r^4}{2a^4}+\frac{9r^6}{a^6}-\frac{r^8}{a^8}\right)\right].$$
(88)
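The radial part of this recursion can be verified numerically. Assuming a Gaussian profile $`p(r)=a^{-2}e^{-r^2/a^2}`$ (the $`a^{-2}`$ normalization is my assumption; it is the choice for which the operator $`-\frac{1}{2}\partial /\partial \mathrm{ln}a`$ reproduces the quoted series), eqs. (84) and (87) follow:

```python
import math

def p(r, a):
    # Gaussian radial profile with an assumed a**-2 normalization.
    return math.exp(-r ** 2 / a ** 2) / a ** 2

def D(f, r, a, h=1e-6):
    """The generating operator -(1/2) d/d(ln a) = -(a/2) d/da,
    evaluated by central differences in a."""
    return -(a / 2.0) * (f(r, a + h) - f(r, a - h)) / (2.0 * h)

def p1(r, a):
    return p(r, a) * (1.0 - r ** 2 / a ** 2)            # eq. (84)

r0, a0 = 1.3, 1.0
p2_exact = p(r0, a0) * (1.0 - 3.0 * r0 ** 2 / a0 ** 2 + r0 ** 4 / a0 ** 4)  # eq. (87)
print(abs(D(p, r0, a0) - p1(r0, a0)) < 1e-8)    # True
print(abs(D(p1, r0, a0) - p2_exact) < 1e-8)     # True
```

Each application of the operator raises the polynomial order by one power of $`r^2/a^2`$, which is exactly the pattern visible in $`p_1`$ and $`p_2`$.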
no-problem/9912/astro-ph9912407.html | ar5iv | text | # Hydrodynamics and Radiation from a Relativistic Expanding Jet with Applications to GRB Afterglow
## Introduction
The level of beaming in GRBs is one of the most interesting open questions in this subject. The relativistic flow which drives a GRB may range from isotropic to strongly collimated into a narrow opening angle. The degree of collimation (beaming) of the outflow has many implications on most aspects of GRB research, such as the requirements from the “inner engine”, the energy release and the event rate. During the prompt emission the Lorentz factor of the flow is very high ($`\gamma >100`$), and due to relativistic aberration of light, only a narrow angle of $`1/\gamma `$ around the line of sight (LOS) is visible. During the afterglow stage, the flow decelerates and an increasingly larger angle becomes visible. As long as $`\gamma >1/\theta `$ an outflow collimated within an angle $`\theta `$ around the LOS produces the same observed radiation as if it were part of a spherical outflow. Once $`\gamma `$ drops below $`1/\theta `$, the observer can notice that radiation arrives only from within an angle $`\theta `$ around the LOS, instead of $`1/\gamma `$ as in the spherical case. Sideways expansion is also expected to become important when $`\gamma \sim 1/\theta `$ Rhoads ; SPH . These two effects combine to create a break in the light curve at $`\gamma \sim 1/\theta `$, but it is not quite clear whether this break is sharp enough to be detected Rhoads ; SPH ; WL . To explore this question we have performed fully relativistic three dimensional simulations that follow the slowing down and the lateral expansion of a relativistic jet. We then calculate the synchrotron light curve and spectrum using the conditions determined by the hydrodynamical simulation.
## The Hydrodynamics
We use a fully relativistic three dimensional code for the hydrodynamical calculations. The initial conditions are a wedge with an opening angle $`\theta _0=0.2`$ taken from the Blandford McKee BM , BM hereafter, self similar spherical solution, embedded in a cold and homogeneous ambient medium. For this initial opening angle, the jet is expected to show considerable lateral expansion when $`\gamma \sim 1/\theta _0=5`$, where $`\gamma `$ is the Lorentz factor of the fluid. The BM solution used for the initial conditions was therefore at the time when $`\mathrm{\Gamma }=10`$, where $`\mathrm{\Gamma }=\sqrt{2}\gamma `$ is the Lorentz factor of the shock. The total isotropic energy was $`E=10^{52}\mathrm{ergs}`$, and the ambient number density was $`n=1\mathrm{cm}^{-3}`$.
In Figures 2-5 we present snapshots of the number density, internal energy density, Lorentz factor and velocity field of the fluid, as the jet slows down and spreads sideways. An explanation of what is seen in these snapshots is given in Figure 1. The snapshots are taken at consecutive times, ranging from $`t=137`$ to $`282\mathrm{days}`$ in the rest frame of the ambient medium (corresponding roughly to observer times from one and a half to twenty days for an observer along the jet axis). The results shown in this work are still preliminary. The resolution is not sufficient to resolve the very thin initial shell. For this reason the maximal Lorentz factor of the matter is just over $`3`$, instead of $`10/\sqrt{2}\approx 7.07`$ (see Figure 4). Higher resolution runs are in progress.
## Calculating Light Curves and Spectra
The local emission coefficient at a given space-time point is calculated directly from the hydrodynamical quantities there. The magnetic field and electron energy densities are assumed to hold constant fractions, $`ϵ_B`$ and $`ϵ_e`$, respectively, of the internal energy, while the electrons possess a power law energy distribution, $`N(\gamma _e)\propto \gamma _e^{-p}`$. The local emissivity is approximated by a broken power law: $`F_\nu \propto \nu ^{1/3}`$ and $`\nu ^{(1-p)/2}`$, below and above the typical synchrotron frequency, respectively GPS .
The light curves and spectra are calculated for several viewing angles with respect to the jet axis. Once the emission coefficient is determined in the local frame, it is transformed to the frame of each observer. The time of arrival at each observer is then calculated, and the contributions are summed over space-time, producing the various light curves. This is done for several frequencies simultaneously, so that the spectrum may be obtained, as well as the light curve at different frequencies.
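A minimal sketch of the two ingredients described above — the broken power-law local emissivity and the relativistic transformation toward an observer — with arbitrary normalizations (this illustrates the procedure, not the actual simulation code):

```python
def emissivity(nu, nu_m, p=2.5):
    """Broken power-law synchrotron spectrum: F_nu ~ nu^(1/3) below the
    typical frequency nu_m and nu^((1-p)/2) above it (arbitrary units)."""
    if nu < nu_m:
        return (nu / nu_m) ** (1.0 / 3.0)
    return (nu / nu_m) ** ((1.0 - p) / 2.0)

def doppler(gamma, mu):
    """Doppler factor delta = 1/(gamma*(1 - beta*mu)) of fluid with Lorentz
    factor gamma moving at angle cos(theta) = mu to the line of sight;
    comoving frequencies are boosted as nu_obs = delta * nu_comoving."""
    beta = (1.0 - 1.0 / gamma ** 2) ** 0.5
    return 1.0 / (gamma * (1.0 - beta * mu))

print(emissivity(0.5, 1.0), emissivity(2.0, 1.0))
print(doppler(5.0, 1.0))   # close to 2*gamma = 10 for on-axis fluid
```

Summing such Doppler-boosted contributions over the hydrodynamical grid at fixed observer arrival time gives one point of a light curve; repeating over frequencies gives the spectrum.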
A few light curves, calculated for $`ϵ_B=ϵ_e=0.1`$ and $`p=2.5`$, are shown in Figure (6). These light curves serve to demonstrate the potential of this approach. Future simulations are expected to achieve sufficient resolution to produce realistic light curves which could be compared with afterglow observations. |
no-problem/9912/astro-ph9912555.html | ar5iv | text | # Untitled Document
The Cosmological Consequences of the Preon Structure of Matter
Vladimir Burdyuzha, Grigory Vereshkov,
Olga Lalakulich and Yuri Ponomarev
Astro Space Center of Lebedev Physical Institute of Russian Academy of Sciences, Profsouznaya str. 84/32, 117810 Moscow, Russia
Rostov State University, Stachki str. 194, 344104 Rostov on Don, Russia
## Abstract
If the preon structure of quarks, leptons and gauge bosons is proved, then during a relativistic phase transition in the Universe the production of nonperturbative preon condensates has occurred. Familons are collective excitations of these condensates. It is shown that the dark matter consisting of familon-type pseudogoldstone bosons was subjected to two relativistic phase transitions, the temperatures of which were different. As the result of these phase transitions the structurization of dark matter, and therefore of the baryon subsystem, has taken place. In the Universe two characteristic scales which have imprinted this phenomenon arise naturally.
The commonly accepted point of view on the formation of the Large Scale Structure (LSS) of the Universe is based on the assumption that homogeneous and isotropic Gaussian scalar perturbations (density perturbations) developed after the inflation period. The post-recombination spectrum of these perturbations is connected with the initial perturbations by the transition function, which depends very strongly on the nature of dark matter. The standard model predicts $`\mathrm{\Omega }_{total}=1`$, that is $`\mathrm{\Omega }_{CDM}+\mathrm{\Omega }_{HDM}+\mathrm{\Omega }_{baryon}+\mathrm{\Omega }_\mathrm{\Lambda }=1`$, in which large-mass objects are created at $`z\sim 2÷3`$. For the formation of LSS significantly earlier ($`z\sim 6÷8`$) it is necessary to include in $`\mathrm{\Omega }_{total}`$ the density of vacuum energy ($`\mathrm{\Lambda }`$-term). Our hypothesis about the formation of seeds for LSS is connected with late relativistic phase transitions (RPT) in the gas of familon-type pseudo-Goldstone bosons (pseudo-GB), which are a probable candidate for dark matter (DM).
The foundation of the RPT theory was formulated by Kirzhnits and Linde, and cosmological applications of this theory have been made by many authors. The time of the beginning of these RPT depends on the mass of the familons. If $`m_{GB}>3\times 10^{-4}eV`$ then the essential influence of these RPT may have taken place in the not too distant past evolution of our Universe. If $`m_{GB}<3\times 10^{-4}eV`$ then a similar influence of these RPT may take place in the nearest cosmological future, when $`T_{CMB}<2.7K`$. We suppose that our particle theory of RPT in the gas and the vacuum of pseudo-GB allows one to obtain $`\delta \rho /\rho \sim 1`$ at an earlier stage of the Universe evolution ($`z>5`$) more effectively than the standard model.
We have investigated the cosmological consequences of the simplest boson-fermion model of quarks and leptons . Our interest in the preon model was induced by the possible leptoquark resonance in the HERA experiment . We have researched in more detail the structure of the preon nonperturbative vacuum arising as the result of the correlation of nonabelian fields on two scales ($`\mathrm{\Lambda }_{mc}\sim 1Tev`$ is the confinement scale of metacolour and $`\mathrm{\Lambda }_c\sim 150Mev`$ is the QCD scale). We have found that in the spectrum of excitations of the heterogenic nonperturbative preon vacuum, pseudo-GB modes of familon type occur. Familons are created as the result of the spontaneous breaking of the symmetry of quark-lepton generations. Nonzero masses of familons are the result of superweak interactions with quark condensates. We consider these particles as the basic constituent of cosmological dark matter. The distinguishing characteristic of these particles is the availability of the residual $`U(1)`$ symmetry and the possibility of its spontaneous breaking at temperatures $`\sim \frac{\mathrm{\Lambda }_c^2}{\mathrm{\Lambda }_{mc}}\sim 10^{-3}eV`$ as the result of a RPT.
We have proposed that this relativistic phase transition has a direct relation to the production of primordial perturbations in DM, the evolution of which leads to baryon large scale structure formation. The idea of RPT in the cosmological gas of pseudo-GB in connection with the LSS problem was formulated by Frieman et al. . Here we have investigated quantitatively the preon-familon model of this RPT.
First we point out in more detail the astrophysical motivation of our theory. Observational data show that some baryon objects, such as the quasars at $`z=4.69`$ and $`z=4.41`$ and galaxies at $`z>5`$, were produced at redshifts $`z\sim 6÷8`$ at the minimum. It is difficult for the standard CDM and CHDM models to produce them (the best fit is $`z\sim 2÷3`$, and observations provide support for this). If early baryon cosmological structures were produced at $`z>10`$ then the key role must be played by DM particles with nonstandard properties. In the standard model DM consists of ideal gas particles with $`m\ne 0`$, practically noninteracting with usual matter (until now they have not been detected because of their superweak interaction with baryons and leptons).
Certainly the characteristic moment at which the formation of most cosmological structures finishes remains the same ($`z\sim 2÷3`$), and the appearance of baryon structures at high $`z`$ ($`z>4`$) will be a result of the statistical outburst evolution of the spectrum of DM density perturbations. Early cosmological baryon structures are connected to statistical outbursts in a sharply nonlinear physical system, which is the RPT (the production of inhomogeneities). For clarity we note again that the appearance of early baryon cosmological structures is the consequence of the evolution of inhomogeneities on the boundaries of phase domains, whereas the general situation of the RPT in the familon gas leads to the ”natural” picture of the formation of baryon LSS ($`z\sim 2÷3`$). In the frame of our theory some unclear questions of the astrophysical interpretation of the globular cluster ages and of the distribution of early baryon structures (quasars, CO clouds, galaxies) may be understood, and the next generation of space instruments (Next Generation Space Telescope, Far-Infrared and Submillimeter Space Telescope) will help to do this.
A new theory of DM must combine the properties of superweak interaction of DM particles with baryons and leptons and intensive interaction of these particles with each other. Such interactions are provided by the nonlinear properties of the DM medium. This is the condition for the realization of a RPT.
The familon symmetry is experimentally observed (the different generations of quarks and leptons participate in gauge interactions in the same way). The breaking of this symmetry gives masses to particles in different generations. A hypothesis about the spontaneous breaking of familon symmetry is natural, and the origin of Goldstone bosons is inevitable. The properties of any pseudo-GB, including the pseudo-GB of familon type, depend on the physical realization of the Goldstone modes. These modes can arise from fundamental Higgs fields or from collective excitations of a heterogenic nonperturbative vacuum condensate more complex than the quark-gluon one in QCD. The second possibility can be realized in a theory in which quarks and leptons are composite, that is, the preon model of elementary particles. If leptoquarks are detected then two variants of explanation are possible. If the leptoquark resonance is narrow and high then these leptoquarks come from GUT or SUSY theories. A low and wide resonance can be explained only by composite particles (preons).
Thus, in the frame of preon theory DM is interpreted as a system of familon collective excitations of a heterogenic nonperturbative vacuum. This system consists of 3 subsystems:
1) familons of up-quark type;
2) familons of down-quark type;
3) familons of lepton type.
At stages of cosmological evolution when $`T\ll \mathrm{\Lambda }_{mc}`$ the heavy unstable familons are absent. The small masses of familons are the result of superweak interactions of the Goldstone fields with nonperturbative vacuum condensates, and therefore the familons acquire the status of pseudo-GB.
The value of these masses is limited by astrophysical and laboratory magnitudes:
$$m_{astrophysical}\sim 10^{-3}÷10^{-5}eV$$
$`(1)`$
$$m_{laboratory}<10eV$$
The effect of familon mass generation corresponds formally, mathematically, to the appearance of mass terms in the Lagrangian of the Goldstone fields. From general considerations one can propose that the mass terms may arise with the ”right” as well as with the ”wrong” sign. The sign of the mass terms predetermines the destiny of the residual symmetry of the Goldstone fields. In the case of the ”wrong” sign, at low temperatures $`T<T_c\sim m_{familons}\sim 0.1÷10^{-5}K`$ a Goldstone condensate is produced and the symmetry of the familon gas breaks spontaneously.
The representation of the physical nature of the familon excitations described above is formalized in a field-theoretical model. As an example we discuss a model of only one familon subsystem, corresponding to the up-quarks of the second and third generations. The chiral-familon group of the model is $`SU_L(2)\times SU_R(2)`$. The familon excitations are described by an eight-dimensional (by the number of matrix components) reducible representation of this group, factorized into two irreducible representations $`(F,f_a);(\psi ,\phi _a)`$ which differ from each other by the sign of space chirality. In this model the interaction of quark fields with familons occurs. However, in all calculations the quark fields are represented in the form of nonperturbative quark condensates. From QCD and experiment the connection between the quark and gluon condensates is known:
$$\langle 0|\overline{q}q|0\rangle =-\frac{1}{12m_q}\langle 0|\frac{\alpha _s}{\pi }G_{\mu \nu }^nG_n^{\mu \nu }|0\rangle \approx -\frac{3\mathrm{\Lambda }_c^4}{4m_q}$$
$`(2)`$
Here: $`q=t,c;m_c\approx 1.5Gev;m_t\approx 175Gev;\mathrm{\Lambda }_c\approx 150Mev`$.
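A rough numerical sketch of these scales: eq. (2) fixes the gluon condensate at about $`9\mathrm{\Lambda }_c^4`$, and eq. (4) then gives the familon mass for a chosen vacuum shift $`v\sim \mathrm{\Lambda }_{mc}`$. The values of $`v`$ below are assumptions for illustration; only the $`1/v`$ scaling is generic:

```python
# All energies in GeV.
Lambda_c = 0.15
m_c, m_t = 1.5, 175.0

gluon_cond = 9.0 * Lambda_c ** 4        # <(alpha_s/pi) G^2> implied by eq. (2)

def m_phi(v):
    """Complex pseudoscalar familon mass from eq. (4), for vacuum shift v."""
    m2 = gluon_cond * m_t / (24.0 * v ** 2 * m_c)
    return m2 ** 0.5

for v in (1.0e3, 1.0e8):                # v ~ 1 TeV and ~ 10^5 TeV
    print(v, m_phi(v))                  # the mass falls off as 1/v
```

The strong suppression by $`v`$ is why these collective excitations are so light compared with the condensate scale.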
The spontaneous breaking of the symmetry $`SU_L(2)\times SU_R(2)\rightarrow U(1)`$ is produced by the vacuum shifts $`\psi =v;f_3=u`$. The numerical values $`v,u\sim \mathrm{\Lambda }_{mc}`$ are unknown. They must be found experimentally if our theory corresponds to reality. The parameters $`u`$ and $`v`$, together with the value of the condensates (2), define the numerical values of the basic magnitudes characterizing the familon subsystem. After the breaking of the symmetry $`SU_L(2)\times SU_R(2)\rightarrow U(1)`$ the light pseudo-GB fields contain the real pseudoscalar field with the mass:
$$m_{\phi ^{\prime }}^2=\frac{1}{6(u^2+v^2)}\langle 0|\frac{\alpha _s}{\pi }G_{\mu \nu }^nG_n^{\mu \nu }|0\rangle $$
$`(3)`$
the complex pseudoscalar field with the mass:
$$m_\phi ^2=\frac{1}{24v^2}\frac{m_t}{m_c}\langle 0|\frac{\alpha _s}{\pi }G_{\mu \nu }^nG_n^{\mu \nu }|0\rangle $$
$`(4)`$
and complex scalar field the mass square of which is negative:
$$m_f^2=-\frac{1}{24u^2}\frac{m_t}{m_c}\langle 0|\frac{\alpha _s}{\pi }G_{\mu \nu }^nG_n^{\mu \nu }|0\rangle $$
$`(5)`$
The complex fields with masses (4-5) are nontrivial representations of the residual symmetry of the $`U(1)`$ group, but the real field (3) is the sole representation of this group. We propose that cosmological DM consists of particles with these masses and their analogues from the down-quark-familon and lepton-familon subsystems.
The negative square of mass of complex scalar field means that for
$$T<T_{c(up)}\sim \overline{m}_f\sim \frac{\mathrm{\Lambda }_c^2}{\mathrm{\Lambda }_{mc}}\sqrt{m_t/m_c}$$
$`(6)`$
the pseudogoldstone vacuum is unstable, that is, when $`T=T_{c(up)}`$ there should be a RPT in the gas of pseudogoldstone bosons into a state with spontaneously broken $`U(1)`$ symmetry. The two other familon subsystems can be studied by the same methods. Therefore DM consisting of pseudo-GB of familon type is a many-component heterogenic system evolving in a complex thermodynamical way.
In the phase of broken symmetry every complex field with masses (4-5) splits into two real fields with different masses. That is, the familon subsystem of up-quark type consists of five kinds of particles with different masses. An analogous phenomenon takes place in the down-quark subsystem. The breaking of the residual symmetry occurs when
$$T_{c(down)}\sim \frac{\mathrm{\Lambda }_c^2}{\mathrm{\Lambda }_{mc}}\sqrt{m_b/m_s}$$
$`(7)`$
In the low-symmetric phase this subsystem also consists of five kinds of particles with different masses. In our theory the lepton-familon subsystem does not undergo a RPT, therefore it consists of particles and antiparticles of 3 kinds with 3 different masses.
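For orientation, the two critical temperatures (6) and (7) can be evaluated numerically. The metacolour scale and the light-quark masses below are assumed inputs ($`\mathrm{\Lambda }_{mc}=1`$ TeV purely for illustration); note that the ratio of the two temperatures is independent of $`\mathrm{\Lambda }_{mc}`$:

```python
# All energies in GeV; the strange/bottom masses are assumed values.
Lambda_c = 0.15
Lambda_mc = 1.0e3            # 1 TeV
m_c, m_t = 1.5, 175.0
m_s, m_b = 0.15, 4.5

T_up = Lambda_c ** 2 / Lambda_mc * (m_t / m_c) ** 0.5     # eq. (6)
T_down = Lambda_c ** 2 / Lambda_mc * (m_b / m_s) ** 0.5   # eq. (7)
print(T_up, T_down)          # two distinct transition temperatures
print(T_up / T_down)         # about 2, independent of Lambda_mc
```

The factor-of-two hierarchy between the up-type and down-type transitions is what produces the two distinct characteristic scales discussed in the conclusion.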
The relativistic phase transitions in the familon subsystems must be described in the frame of finite-temperature quantum field theory. It is important to underline that the sufficiently strong interactions of the familons with each other provide the evolution of the familon subsystem through states of the local-equilibrium type. Our estimates have shown that the transition to a nonthermodynamical regime of evolution occurs at the stage after the RPT, even if the RPT took place at a temperature $`\sim 10^{-3}eV`$. The thermodynamics of the familon system may be formulated in the approximation of the self-consistent field. The methods of the RPT theory which will be used by us are similar to the ones of our article . The nonequilibrium Landau functional of states $`F(T,\eta ,m_A)`$ depends on the symmetry order parameter $`\eta `$ and five effective masses of particles $`m_A,A=1,2,3,4,5`$
$$F(T,\eta ,m_A)=\frac{1}{3}\sum_{A}J_2(T,m_A)+U(\eta ,m_A)$$
$`(8)`$
Here $`J_2`$ is a characteristic integral (similar integrals were used to describe RPT in our article ). Extremizing the functional with respect to the effective masses gives the constraint equations $`m_A=m_A(\eta ,T)`$, which formally define the usual Landau functional $`F(T,\eta )`$. The condition that this functional be a minimum with respect to the order parameter
$$\frac{d^2F}{d\eta ^2}=\frac{\partial ^2F}{\partial \eta ^2}+\sum_{A}\frac{\partial ^2F}{\partial \eta \,\partial m_A}\left(\frac{\partial m_A}{\partial \eta }\right)>0$$
$`(9)`$
is consistent with the equation of state $`\partial F/\partial \eta =0`$, which allows one: a) to establish the order of the RPT, b) to find the thermodynamical boundaries of phase stability, c) to calculate the values of observable quantities (energy density, pressure, heat capacity, sound velocity, etc.) in each phase.
We have found that the RPT in the familon gas is a first-order transition with a wide region of phase coexistence. Therefore, in the RPT epoch, or more precisely in the region of phase coexistence, the Universe had a block-phase structure containing domains of different phases. Numerical modelling of this RPT has shown that the average density contrast in the block-phase structure is $`\delta ϵ/ϵ\sim 1`$.
The size of the domains and the masses of baryonic and dark matter inside the domains are set by the distance to the event horizon $`L_{horiz.}`$ at the moment of the RPT. As seen from (6-7), the numerical values of these quantities, which are important for the LSS theory, depend on the value of the presently unknown preon confinement parameter $`\mathrm{\Lambda }_{mc}`$.
If the inhomogeneities appearing during the RPT in the familon gas are related to the observable scales of LSS (10 Mpc), then $`\mathrm{\Lambda }_{mc}\sim 10^5`$ TeV. More detailed estimates are premature today, but it should be noted that the suggested theory contains two phase transitions and therefore two characteristic scales of LSS. It must also be emphasized that catastrophic phenomena in the familon gas could not influence the spectrum of the relic radiation, even if $`m_f\sim 0.4`$ eV, owing to the superweak interaction of familons with ordinary matter; however, effects connected with the fragmentation of the DM medium may be superimposed on the spectrum of the CMB radiation.
Numerical estimates of the parameters of the inhomogeneities arising as a result of the strong interaction of domains of the LS and HS phases in the region of their contact show that $`\delta ϵ/ϵ\sim 1`$ on the scale $`L\sim 0.1L_{horizon}`$ at the moment of the phase transition.
More detailed publications can be found in \[8-9\].
REFERENCES
1. Terazawa H., Chikashige Y., Akama K., 1977, Phys. Rev. D15, 480;
Fritzsch H., Mandelbaum G., 1981, Phys. Lett. B102, 319.
2. Adloff C. et al., 1997, Z. Phys. C74, 191; Breitweg J., 1997, Z. Phys. C74, 207.
3. Frieman J.A., Hill C.T., Watkins R., 1991, Preprint Fermilab Pub-91/324-A;
Hill C.T., Schramm D.N., Fry J., 1989, Comments on Nucl. and Particle Phys. 19, 25.
4. Omont A. et al., 1996, Nature 382, 428; Guilloteau et al., 1997, Astron. and Astrophys. 328, L1; Weymann R.J. et al., 1998, Astrophys. J. 505, L95; Hu E.M., Cowie L.L., McMahon R.G., 1998, Astrophys. J. 502, L99; Afanas'ev V.L., Dodonov S.N., 1999, Nature (submitted).
5. Barnett R.M. et al., 1996, Phys. Rev. D54.
6. Vereshkov G., Burdyuzha V., 1995, Intern. J. Modern Phys. A10, 1343.
7. Burdyuzha V., Lalakulich O., Ponomarev Yu., Vereshkov G., 1997, Phys. Rev. D55, 7340R.
8. Burdyuzha V., Lalakulich O., Ponomarev Yu., Vereshkov G., 1998, Preprint of the Yukawa Institute for Theoretical Physics (YITP-98-51) and hep-ph/9907531.
9. Terazawa H., 1999, to be published in the Proceedings of the Intern. Conference on Modern Developments in Elementary Particle Physics, Cairo, Ed. A. Sabry.
# Fitness versus Longevity in Age-Structured Population Dynamics
## I Introduction
The goal of this paper is to understand the role of mutations on the evolution of fitness and age characteristics of individuals in a simple age-structured population dynamics model. While there are many classical models to describe single-species population dynamics, consideration of age-dependent characteristics is a more recent development. Typically, age characteristics of a population are determined by studying rate equations which include age-dependent birth and death rates. Here we will study an extension of age-structured population dynamics in which the characteristics of an offspring are mutated with respect to its parent. In particular, an offspring may be more “fit” or less fit than its parent, and this may be reflected in attributes such as its birth rate and/or its life expectancy.
In our model, we characterize the fitness of an individual by a single heritable trait – the life expectancy $`n`$ – which is defined as the average life span of an individual in the absence of competition. This provides a useful fitness measure, as a longer-lived individual has a higher chance of producing more offspring throughout its life span. We allow for either deleterious or advantageous mutations, where the offspring fitness is less than or greater than that of the parent, respectively (Fig. 1). This leads to three different behaviors which depend on the ratio between these two mutation rates. When advantageous mutation is favored, the fitness distribution of the population approaches a Gaussian, with the average fitness growing linearly with time $`t`$ and the width of the distribution increasing as $`t^{1/2}`$. Conversely, when deleterious mutation is more likely, a steady-state fitness distribution is approached, with the deviation from the steady state decaying as $`t^{-2/3}`$. When there is no mutational bias, the fitness distribution again approaches a Gaussian, but with average fitness growing as $`t^{2/3}`$ and the width of the distribution again growing as $`t^{1/2}`$.
In all three cases, the average population age reaches a steady state which, surprisingly, is a decreasing function of the average fitness. Thus within our model, a more fit population does not lead to an increased individual lifetime. Qualitatively, as individuals become more fit, competition plays a more prominent role and is the primary mechanism that leads to premature mortality.
In the following two sections, we formally define the model and outline qualitative features of the population dynamics. In Secs. IV-VI, we analyze the three cases of deleterious, advantageous, and neutral mutational biases in detail. We conclude in Sec. VII. Various calculational details are provided in the Appendices.
## II The Model
Our model is a simple extension of logistic dynamics in which a population with overall density $`N(t)`$ evolves both by birth at rate $`b`$ and death at rate $`\gamma N`$. Such a system is described by the rate equation
$$\dot{N}=bN\gamma N^2,$$
(1)
with steady-state solution $`N_{\mathrm{}}=b/\gamma `$. Our age-structured mutation model incorporates the following additional features:
1. Each individual is endowed with a given life expectancy $`n`$. This means that an individual has a rate of dying which equals $`1/n`$.
2. Death by aging occurs at a rate inversely proportional to the life expectancy.
3. Individuals give birth at a constant rate during their lifetimes.
4. In each birth event, the life expectancy of the newborn may be equal to that of its parent, or it may be increased or decreased by 1. The relative rates of these events are $`b`$, $`b_+`$, and $`b_-`$, respectively.
Each of these features represents an idealization. Most prominently, it would be desirable to incorporate a realistic mortality rate which is an increasing function of age. However, we argue in Sec. VII that our basic conclusions continue to be valid for systems with realistic mortality rates.
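Before these features are added, the logistic baseline of Eq. (1) is easy to verify numerically. The sketch below (not part of the original analysis; the values of $`b`$ and $`\gamma `$ are arbitrary) integrates the logistic equation with a simple Euler step and confirms the approach to $`N_{\mathrm{}}=b/\gamma `$:

```python
# Euler integration of the logistic equation dN/dt = b*N - gamma*N^2.
# Parameter values are illustrative, not taken from the text.
b, gamma = 1.0, 0.5
N, dt = 0.01, 1e-3              # small initial density, time step
for _ in range(int(50 / dt)):   # integrate to t = 50
    N += dt * (b * N - gamma * N * N)

print(N)   # approaches the steady state N_inf = b/gamma = 2
```

Because $`N=b/\gamma `$ is an exact fixed point of the Euler map as well, the numerical steady state coincides with the analytic one to high accuracy.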
To describe this dynamics mathematically, we study $`C_n(a,t)`$, the density of individuals with life expectancy $`n\geq 1`$ and age $`a`$ at time $`t`$. We also introduce $`P_n(t)=\int_0^{\infty }C_n(a,t)\,da`$, which is the density of individuals with life expectancy $`n`$ and any age at time $`t`$. Finally, the total population density is the integral of the population density over all ages and life expectancies,
$$N(t)=\sum_{n=1}^{\infty }\int_0^{\infty }C_n(a,t)\,da=\sum_{n=1}^{\infty }P_n(t).$$
(2)
According to our model, the rate equation for $`C_n(a,t)`$ is
$$\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial a}\right)C_n(a,t)=-\left(\gamma N(t)+\frac{1}{n}\right)C_n(a,t).$$
(3)
The derivative with respect to $`a`$ on the left-hand side accounts for the continuous aging of the population. On the right-hand side, $`\gamma NC_n`$ is the death rate due to competition, which is assumed to be independent of an individual’s age and fitness. As discussed above, the mortality rate is taken as age independent, and the form $`C_n/n`$ guarantees that the life expectancy in the absence of competition equals $`n`$. Because birth creates individuals of age $`a=0`$, the population of newborns provides the following boundary condition for $`C_n(0,t)`$,
$$C_n(0,t)=bP_n(t)+b_+P_{n-1}(t)+b_-P_{n+1}(t).$$
(4)
Finally, the condition $`P_0=0`$ follows from the requirement that offspring with zero life expectancy cannot be born.
## III Basic Population Characteristics
Let us first study the fitness characteristics of the population and disregard the age structure. The rate equation for $`P_n(t)`$ is found by integrating Eq. (3) over all ages and then using the boundary condition Eq. (4) to give
$$\frac{dP_n}{dt}=\left(b-\gamma N-\frac{1}{n}\right)P_n+b_+P_{n-1}+b_-P_{n+1}.$$
(5)
This describes a random-walk-like process with state-dependent hopping rates in the one-dimensional fitness space $`n`$. Notice the hidden non-linearity embodied by the term $`\gamma NP_n`$, since the total population density $`N(t)=\sum_nP_n(t)`$. From Eq. (5), we find that $`N(t)`$ obeys a generalized logistic equation
$$\frac{dN}{dt}=(b+b_++b_--\gamma N)N-\sum_{n=1}^{\infty }\frac{P_n}{n}-b_-P_1.$$
(6)
The three different dynamical regimes outlined in the introduction are characterized by the relative magnitudes of the mutation rates $`b_+`$ and $`b_{}`$. The main features of these regimes are:
* Subcritical case. Here $`b_+<b_-`$, so deleterious mutations prevail. The fitness of the population eventually reaches a steady state.
* Critical case. Here $`b_+=b_-`$, so there is no mutational bias. The average fitness of the population grows as $`t^{2/3}`$ and the width of the fitness distribution grows as $`t^{1/2}`$.
* Supercritical case. Here $`b_+>b_-`$, so advantageous mutations are favorable. The average fitness grows linearly in time and the width of the fitness distribution still grows as $`t^{1/2}`$.
In all three cases, the total population density $`N`$ and the average age $`A`$, defined by
$$A=\frac{1}{N}\sum_{n=1}^{\infty }\int_0^{\infty }da\,a\,C_n(a)$$
(7)
saturate to finite values. The steady-state values of $`N`$ and $`A`$ may be determined by balance between the total birth rate $`B\equiv b+b_++b_-`$ and the death rate $`\gamma N`$ due to overcrowding. For example, in the critical and supercritical cases, there are essentially no individuals with small fitness, so that the last two terms in Eq. (6) may be neglected. Then the steady-state solution to this equation is simply
$$N=\frac{B}{\gamma }.$$
(8)
This statement also expresses the obvious fact that in the steady state the total birth rate $`B`$ must balance the total death rate $`\gamma N`$. (For fit populations, the death rate due to aging is negligible.) Similarly, the average age may be inferred from the condition that it must equal the average time between death events. Thus
$$A=\frac{1}{\gamma N}=\frac{1}{B}.$$
(9)
The behavior of the average age in the subcritical case is more subtle and we treat this case in detail in the section following.
## IV The Subcritical Case
When deleterious mutations are favored ($`b_->b_+`$), the random-walk-like master equation for $`P_n`$ contains both the mutational bias towards the absorbing boundary at the origin and an effective positive bias due to the $`1/n`$ term on the right-hand side of Eq. (5). The balance between these two opposite biases leads to a stationary state whose solution is found by setting $`\dot{P}_n=0`$ in Eq. (5). To obtain this steady-state solution, it is convenient to introduce the generating function
$$F(x)=\sum_{n=1}^{\infty }P_nx^{n-1}.$$
(10)
Multiplying Eq. (5) by $`x^{n1}`$ and summing over $`n`$ gives
$$0=(b-\gamma N)F-\sum_{n=1}^{\infty }\frac{P_n}{n}x^{n-1}+b_+xF+b_-\,\frac{F-P_1}{x}.$$
(11)
The term involving $`P_n/n`$ is simplified by using
$$\sum_{n=1}^{\infty }\frac{P_n}{n}x^{n-1}=\frac{1}{x}\int_0^xF(y)\,dy.$$
(12)
Multiplying Eq. (11) by $`x`$ and differentiating with respect to $`x`$ gives
$$\frac{F^{\prime }(x)}{F(x)}=\frac{\gamma N-b+1-2b_+x}{b_+x^2-(\gamma N-b)x+b_-},$$
(13)
where the prime denotes differentiation.
As in the case of the master equation for $`P_n`$, this differential equation for $`F`$ has a hidden indeterminacy, as the total population density $`N=F(x=1)`$ appears on the right-hand side. Thus an integration of Eq. (13), subject to the boundary condition $`F(1)=N`$, actually gives a family of solutions which are parameterized by the value of $`N`$. While the family of solutions can be obtained straightforwardly by a direct integration of Eq. (13), only one member of this family is the correct one. To determine this true solution, we must invoke additional arguments about the physically realizable value of $`N`$ for a given initial condition.
An upper bound for $`N`$ may be found from the steady-state version of Eq. (6),
$$(B-\gamma N)N=\sum_{n=1}^{\infty }\frac{P_n}{n}+b_-P_1.$$
(14)
Since the right-hand side must be non-negative, this provides the bound $`\gamma N<B`$. On the other hand, we may obtain a lower bound for $`N`$ by considering the master equation for $`P_n`$ in the steady state. For $`n\mathrm{}`$, we may neglect the $`P_n/n`$ term in Eq. (5) and then solve the resulting equation to give $`P_n=A_+\lambda _+^n+A_{}\lambda _{}^n`$, where
$$\lambda _\pm =\left[\gamma N-b\pm \sqrt{(\gamma N-b)^2-4b_+b_-}\right]/2b_-.$$
(15)
For $`P_n`$ to remain positive, $`\lambda _\pm `$ should be real. Hence we require $`\gamma N\geq b+2\sqrt{b_+b_-}`$. We therefore conclude that $`N`$ must lie in the range
$$b+2\sqrt{b_+b_-}\leq \gamma N<B.$$
(16)
While the foregoing suggests that $`N`$ lies within a finite range, we find numerically that the minimal solution, which satisfies the lower bound of Eq. (16), is the one that is generally realized (Fig. 3). This selection phenomenon is reminiscent of the corresponding behavior in the Fisher-Kolmogorov equation and related reaction-diffusion systems, where only the extremal solution is selected from a continuous range of a priori solutions.
To understand the nature of this extremal solution in the present context, notice that with the bounds on $`N`$ given in Eq. (16), $`\lambda _+`$ lies within the range $`[\mu ,1)`$, where
$$\mu \equiv \sqrt{\frac{b_+}{b_-}}$$
(17)
is the fundamental parameter which characterizes the mutational bias. Consequently the steady-state fitness distribution decays exponentially with $`n`$, namely $`P_n\sim \lambda _+^n`$. When the total population density attains the minimal value $`N_{\mathrm{min}}=(b+2\sqrt{b_+b_-})/\gamma `$, $`\lambda _+`$ also achieves its minimum possible value $`\lambda _+^{\mathrm{min}}=\mu `$, so that the fitness distribution has the most rapid decay in $`n`$ for the minimal solution. Based on the analogy with the Fisher-Kolmogorov equation, we infer that there are two distinct steady-state behaviors for $`P_n`$ as a function of the initial condition $`P_n(0)`$. For any $`P_n(0)`$ with either a finite support in $`n`$ or decaying at least as fast as $`\mu ^n`$, the extremal solution $`P_n\sim \mu ^n`$ is approached as $`t\to \infty `$. Conversely, for initial conditions in which $`P_n(0)`$ decays more slowly than $`\mu ^n`$, for example as $`\alpha ^n`$, with $`\alpha `$ in the range $`(\mu ,1)`$, the asymptotic solution also decays as $`\alpha ^n`$. Correspondingly, Eq. (5) in the steady state predicts a larger than minimal population density $`N=(b+b_-\alpha +b_+\alpha ^{-1})/\gamma `$.
We also find that the extremal and the non-extremal solutions exhibit different relaxations to the steady state. For those initial conditions which evolve to the extremal solution, the deviations of $`N`$, and indeed of each of the $`P_n`$, from their steady-state values decay as $`t^{-2/3}`$, while for all other initial conditions, the relaxation to the steady state appears to follow a $`t^{-1}`$ power-law decay. The power-law approach to the steady state is surprising, since the overall density obeys a logistic-like dynamics, $`\dot{N}=bN-\gamma N^2`$, for which the approach to the steady state is exponential. These results are illustrated in Fig. 4, which shows the asymptotic time dependence of $`N(t)`$ based on a numerical integration of Eq. (5) with the fourth-order Runge-Kutta algorithm. The demonstration of the $`t^{-2/3}`$ relaxation to the extremal solution relies on a correspondence to the transient behavior in the critical case. This is presented in Appendix B.
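A minimal version of this numerical experiment can be sketched as follows. It uses plain Euler stepping rather than fourth-order Runge-Kutta, and the rates are illustrative choices (so that the extremal value is $`b+2\sqrt{b_+b_-}=2`$ while $`B=2.25`$); starting from a finite-support initial condition, $`\gamma N`$ should settle near the minimal value:

```python
import numpy as np

# Euler integration of Eq. (5) in the subcritical case b_+ < b_-.
# Illustrative rates: the extremal (selected) density obeys
# gamma*N = b + 2*sqrt(b_+ b_-) = 2, while the upper bound is B = 2.25.
b, bp, bm, gamma = 1.0, 0.25, 1.0, 1.0
nmax = 400
n = np.arange(1, nmax + 1, dtype=float)
P = np.zeros(nmax)
P[4] = 1.0                                  # finite-support start (P_5 = 1)

dt, T = 0.01, 300.0
for _ in range(int(T / dt)):
    N = P.sum()
    Pm1 = np.concatenate(([0.0], P[:-1]))   # P_{n-1}, with P_0 = 0
    Pp1 = np.concatenate((P[1:], [0.0]))    # P_{n+1}, truncated at nmax
    P += dt * ((b - gamma * N - 1.0 / n) * P + bp * Pm1 + bm * Pp1)

print(gamma * P.sum())   # settles near the minimal value 2 (slow t^(-2/3) tail)
```

The slow $`t^{-2/3}`$ relaxation means the measured value at finite time is still a few percent away from the extremal one, consistent with the discussion above.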
A disconcerting feature of the numerical calculation for $`N(t)`$ is the small disagreement between the numerically observed values of the steady-state population density and the expected theoretical prediction (Fig. 5). This discrepancy arises from the finite computer precision, which causes very small values of $`P_n`$ to be set to zero. To confirm this, we changed the computer precision from $`10^{-100}`$ to the full machine precision of $`10^{-308}`$ (Fig. 5). As the precision is increased, $`N`$ saturates to progressively higher values and approaches the theoretical prediction. A similar precision-dependent phenomenon has been observed in the context of traveling Fisher-Kolmogorov wave propagation.
For the relevant situation where the density $`N`$ takes the minimal value, we may rewrite Eq. (13) as
$$\frac{F^{\prime }}{F}=\frac{2\mu }{1-\mu x}+\frac{1}{b_-(1-\mu x)^2}.$$
(18)
Integrating from $`x=1`$ to $`x`$ and using $`F(1)=N`$ gives
$$F(x)=N\left(\frac{1-\mu }{1-\mu x}\right)^2\mathrm{exp}\left\{\frac{1}{b_-\mu }\left(\frac{1}{1-\mu x}-\frac{1}{1-\mu }\right)\right\}.$$
(19)
One can now formally determine $`P_n`$ by expanding $`F(x)`$ in a Taylor series. For example,
$$P_1=N(1-\mu )^2\,\mathrm{exp}\left\{-\frac{1}{b_-(1-\mu )}\right\},$$
$$P_2=N(1-\mu )^2\left(2\mu +\frac{1}{b_-}\right)\mathrm{exp}\left\{-\frac{1}{b_-(1-\mu )}\right\}.$$
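Since $`F(x)`$ of Eq. (19) is analytic inside $`|x|<1/\mu `$, its Taylor coefficients — and hence all the $`P_n`$ — can also be extracted numerically by a Cauchy-integral (FFT) evaluation rather than symbolic expansion. The sketch below uses arbitrary illustrative values ($`\mu =0.5`$, $`b_-=0.8`$, $`b=0`$, $`\gamma =1`$, so the minimal density is $`N=2\sqrt{b_+b_-}=0.8`$) and checks the closed forms for $`P_1`$ and $`P_2`$:

```python
import numpy as np

# Extract P_n as Taylor coefficients of F(x), Eq. (19), via FFT on |x| = 1.
# Illustrative parameters: mu = 0.5, b_- = 0.8 (so b_+ = mu^2 b_- = 0.2),
# b = 0, gamma = 1; the minimal (selected) density is N = 2*sqrt(b_+ b_-) = 0.8.
mu, bm, N = 0.5, 0.8, 0.8

def F(x):
    return (N * ((1 - mu) / (1 - mu * x))**2
              * np.exp((1 / (bm * mu)) * (1 / (1 - mu * x) - 1 / (1 - mu))))

M = 1024
z = np.exp(2j * np.pi * np.arange(M) / M)   # sample points on the unit circle
P = np.fft.fft(F(z)).real / M               # P[m] = coeff of x^m, i.e. P_{m+1}

P1_exact = N * (1 - mu)**2 * np.exp(-1 / (bm * (1 - mu)))
P2_exact = P1_exact * (2 * mu + 1 / bm)
mean = (np.arange(1, M + 1) * P).sum() / P.sum()
print(P[0], P1_exact)   # agree to near machine precision
print(P[1], P2_exact)
print(P.sum())          # equals F(1) = N = 0.8
print(mean)             # matches Eq. (22): 2*mu/(1-mu) + 1/(bm*(1-mu)**2) + 1 = 8
```

The same coefficient array gives any moment of the fitness distribution directly, without manipulating the unwieldy closed-form expressions.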
For many applications, however, there is no need to deal with these unwieldy expressions. As we now discuss, the overall fitness or age characteristics of the population can be obtained directly from the generating function without using the explicit formulae for the $`P_n`$.
### A Fitness characteristics
Consider the average fitness of the population
$$\langle n\rangle =\frac{1}{N}\sum_{n=1}^{\infty }nP_n,$$
(20)
which can be expressed in terms of the generating function as
$$\langle n\rangle =\frac{1}{N}\,\frac{dF}{dx}\bigg|_{x=1}+1.$$
(21)
From Eq. (19) we thereby obtain the average fitness
$$\langle n\rangle =\frac{2\mu }{1-\mu }+\frac{1}{b_-(1-\mu )^2}+1.$$
(22)
As one might anticipate, the average fitness diverges as $`\mu \to 1`$ from below, corresponding to the population becoming mutationally neutral. To determine the dispersion of the fitness distribution we make use of the relation
$$\langle n(n-1)\rangle =\frac{1}{N}\sum_{n=1}^{\infty }n(n-1)P_n=\frac{1}{N}\,\frac{d^2(xF)}{dx^2}\bigg|_{x=1}.$$
(23)
Substituting Eqs. (19) and also Eq. (22) then gives
$$\langle n^2\rangle =1+\frac{6\mu }{(1-\mu )^2}+\frac{3(1+\mu )}{b_-(1-\mu )^3}+\frac{1}{b_-^2(1-\mu )^4}.$$
Thus the dispersion $`\sigma ^2=\langle n^2\rangle -\langle n\rangle ^2`$ in the fitness distribution is
$$\sigma ^2=\frac{2\mu }{(1-\mu )^2}+\frac{1+\mu }{b_-(1-\mu )^3}.$$
(24)
As the mutational bias vanishes, $`\mu \to 1`$, the average fitness and the dispersion diverge as $`\langle n\rangle \simeq b_-^{-1}(1-\mu )^{-2}`$ and $`\sigma \simeq \sqrt{2/b_-}\,(1-\mu )^{-3/2}`$. Thus these two moments are related by $`\sigma \sim \langle n\rangle ^{3/4}`$. As we shall see in Sec. VI, this basic relation continues to hold in the critical case.
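The two asymptotic forms fix the amplitude in this relation: $`\sigma /\langle n\rangle ^{3/4}\to \sqrt{2}\,b_-^{1/4}`$ as $`\mu \to 1`$. A few lines of arithmetic (with an arbitrary choice $`b_-=0.8`$) confirm the convergence:

```python
import math

# Check sigma ~ <n>^(3/4) from Eqs. (22) and (24): as mu -> 1 the ratio
# sigma/<n>^(3/4) should approach sqrt(2)*b_-^(1/4).  b_- = 0.8 is arbitrary.
bm = 0.8

def moments(mu):
    mean = 2 * mu / (1 - mu) + 1 / (bm * (1 - mu)**2) + 1        # Eq. (22)
    var = 2 * mu / (1 - mu)**2 + (1 + mu) / (bm * (1 - mu)**3)   # Eq. (24)
    return mean, math.sqrt(var)

limit = math.sqrt(2) * bm**0.25
for mu in (0.9, 0.99, 0.999):
    mean, sigma = moments(mu)
    print(mu, sigma / mean**0.75)   # approaches limit = 1.3374...
```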
### B Age characteristics
In the steady state, we solve Eq. (3) to give the concentration of individuals with age $`a`$ and fitness $`n`$
$$C_n(a)=P_n\left(\gamma N+\frac{1}{n}\right)\mathrm{exp}\left[-\left(\gamma N+\frac{1}{n}\right)a\right].$$
(25)
The average age of the population is
$$A=\frac{1}{N}\sum_{n=1}^{\infty }\int_0^{\infty }da\,a\,C_n(a)=\frac{1}{N}\sum_{n=1}^{\infty }\frac{P_n}{\gamma N+n^{-1}},$$
(26)
where the second equality is obtained by using Eq. (25). This expression can be rewritten directly in terms of the generating function by first noticing that
$$\int_0^1x^\nu F(x)\,dx=\int_0^1\sum_{n=1}^{\infty }P_nx^{n+\nu -1}\,dx=\sum_{n=1}^{\infty }\frac{P_n}{n+\nu }.$$
(28)
Thus we re-express Eq. (26) in a form which allows us to exploit Eq. (28). After several elementary steps, we obtain
$$A=\frac{1}{\gamma N}-\frac{1}{N}\,\frac{1}{(\gamma N)^2}\sum_{n=1}^{\infty }\frac{P_n}{n+(\gamma N)^{-1}}=\frac{1}{\gamma N}-\frac{1}{N}\,\frac{1}{(\gamma N)^2}\int_0^1dx\,x^{\frac{1}{\gamma N}}F(x).$$
(30)
This expression should be compared with the result for the critical and supercritical cases, namely $`A=(\gamma N)^{-1}=B^{-1}`$ (see Eq. (9)). In the subcritical case, $`\gamma N<B`$, and the above two expressions $`A_{\mathrm{min}}=B^{-1}`$ and $`A_{\mathrm{max}}=(\gamma N)^{-1}`$ provide lower and upper bounds for the average age. This is proved in Appendix A. Fig. 6 shows the surprising feature of Eq. (30) that the average age decreases as the population gets fitter! We also see that the average age of the least fit population $`(\mu \to 0)`$ is twice that of the increasingly fit populations in the critical and supercritical cases. We now demonstrate this fact. To provide a fair comparison we take the total birth rate to be equal to unity in both cases and also choose $`b=0`$ for simplicity. For fit populations (critical and supercritical cases), the average age is simply $`A=B^{-1}=1`$. For the least fit population $`\mu \to 0`$, and correspondingly $`N\to 0`$. In this limit, we may write Eq. (26) as
$$A=\frac{1}{N}\sum_{n=1}^{\infty }\frac{P_n}{\gamma N+n^{-1}}\to \frac{1}{N}\sum_{n=1}^{\infty }nP_n=\langle n\rangle .$$
(32)
On the other hand, from Eq. (22) we have
$$\langle n\rangle \to 1+\frac{1}{b_-}\to 2.$$
(33)
The relation $`A=\langle n\rangle `$ is natural for the least fit population, as the total density is small and competition among individuals plays an insignificant role. Thus the average age may be found by merely averaging the intrinsic life expectancy of the population. Intriguingly, in this limit the average individual in the least fit population lives twice as long as individuals in relatively fit populations.
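Equation (30) is also convenient numerically: the average age follows from a single quadrature of $`x^{1/\gamma N}F(x)`$. The sketch below (an illustrative normalization, not from the text: $`b=0`$, $`\gamma =1`$, $`b_++b_-=1`$, so $`\mu `$ fixes $`b_\pm `$ and the selected density obeys $`\gamma N=2\mu /(1+\mu ^2)`$) verifies that $`A`$ always lies between $`B^{-1}=1`$ and $`(\gamma N)^{-1}`$, and that $`A`$ decreases as the population gets fitter:

```python
import numpy as np

# Average age from Eq. (30): A = 1/(gamma*N) - I/(N*(gamma*N)^2), with
# I = integral_0^1 x^(1/(gamma*N)) F(x) dx and F(x) taken from Eq. (19).
# Illustrative normalization: b = 0, gamma = 1, b_+ + b_- = 1, so that
# b_- = 1/(1+mu^2), b_+ = mu^2/(1+mu^2), and gamma*N = 2*mu/(1+mu^2).

def average_age(mu):
    bm = 1.0 / (1.0 + mu**2)
    gN = 2.0 * mu / (1.0 + mu**2)       # minimal (selected) gamma*N
    N = gN                               # gamma = 1
    x = np.linspace(0.0, 1.0, 200001)
    F = N * ((1 - mu) / (1 - mu * x))**2 * np.exp(
        (1 / (bm * mu)) * (1 / (1 - mu * x) - 1 / (1 - mu)))
    y = x**(1.0 / gN) * F
    h = x[1] - x[0]
    I = h * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal quadrature
    return 1.0 / gN - I / (N * gN**2), gN

for mu in (0.05, 0.5, 0.9):
    A, gN = average_age(mu)
    print(mu, A)    # bounded below by 1/B = 1 and above by 1/(gamma*N)
```

As $`\mu `$ grows the bounds pinch together toward $`A=1`$, reproducing the trend of Fig. 6.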
It is also worth noting that in the limit of a minimally fit population ($`\mu \to 0`$) we can expand the generating function in Eq. (19) systematically. We thereby find that the density $`P_n`$ exhibits a super-exponential decay, $`P_n=Ne^{-1}/(n-1)!`$.
## V The Supercritical Case
When advantageous mutations are favored, the master equation for $`P_n`$, Eq. (5), can be viewed as a random walk with a bias towards increasing $`n`$. Because there is no mechanism to counterbalance this bias, the average fitness grows without bound and no steady state exists. As in the case of a uniformly biased random walk on a semi-infinite domain, the distribution of fitness becomes relatively localized in fitness space, with the peak drifting towards increasing $`n`$ with a velocity $`V=b_+-b_-`$. Since the fitness profile is non-zero only for large $`n`$ in the long-time limit, it is appropriate to adopt a continuum approach. We therefore treat $`n`$ as continuous, and derive the continuum limits of Eqs. (5) and (6). For the time evolution of the fitness distribution $`P(n,t)`$, we obtain the equation of motion
$$\left(\frac{\partial }{\partial t}+V\frac{\partial }{\partial n}\right)P=\left(B-\gamma N-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(34)
This is just a convection-diffusion equation, supplemented by a birth/death term. Here the difference between the advantageous and deleterious mutation rates provides the drift velocity $`V=b_+-b_-`$, and the average mutation rate $`D=(b_++b_-)/2`$ plays the role of the diffusion constant. For the total population density we obtain
$$\frac{dN}{dt}=(B-\gamma N)N-\int_0^{\infty }dn\,\frac{P(n,t)}{n}.$$
(35)
To determine the asymptotic behavior of these equations, we use the fact that the fitness distribution becomes strongly localized about a value of $`n`$ which increases as $`Vt`$. Thus we replace the integral in Eq. (35) by its value at the peak of the distribution, $`N/Vt`$. With this crude approximation, Eq. (35) becomes
$$\frac{dN}{dt}=\left(B-\gamma N-\frac{1}{Vt}\right)N.$$
(36)
Thus we conclude that the density approaches its steady state value as
$$\gamma N\simeq B-\frac{1}{Vt}.$$
(37)
This provides both a proof of Eq. (8) and the rate of convergence to the steady state.
We now substitute this asymptotic behavior for the total population density into Eq. (34) to obtain
$$\left(\frac{\partial }{\partial t}+V\frac{\partial }{\partial n}\right)P=\left(\frac{1}{Vt}-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(38)
Notice that the birth/death term on the right-hand side is negative (positive) for subpopulations which are less (more) fit than the average fitness $`Vt`$. This birth/death term must also be zero, on average, since the total population density saturates to a constant value. Moreover, this term must be small near the peak of the fitness distribution, where $`n\approx Vt`$. Thus as a simple approximation, we merely neglect this birth/death term and check the validity of this assumption a posteriori. This transforms Eq. (38) into the classical convection-diffusion equation, whose solution is
$$P(n,t)=\frac{N}{\sqrt{4\pi Dt}}\,\mathrm{exp}\left[-\frac{(n-Vt)^2}{4Dt}\right].$$
(39)
This basic result implies that the fitness distribution is indeed a localized peak, with average fitness growing linearly in time, $`\langle n\rangle =Vt`$, and width growing diffusively, $`\sigma =\sqrt{2Dt}`$. We now check the validity of dropping the birth/death term in Eq. (38). Near the peak, $`|n-Vt|\sim \sqrt{Dt}`$, so that the birth/death term is of order $`t^{-3/2}\times P`$. On the other hand, the other terms in Eq. (38) are of order $`t^{-1}\times P`$, thus justifying the neglect of the birth/death term.
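These predictions are easy to probe by integrating the discrete master equation (5) directly. In the sketch below (arbitrary illustrative rates, so that $`V=0.5`$ and $`D=0.5`$) the peak position should advance at a velocity close to $`V`$ and the variance should grow at a rate close to $`2D`$:

```python
import numpy as np

# Euler integration of the discrete master equation (5), supercritical case.
# Illustrative rates (not from the text): V = b_+ - b_- = 0.5, D = 0.5.
b, bp, bm, gamma = 0.0, 0.75, 0.25, 1.0
nmax = 300
n = np.arange(1, nmax + 1, dtype=float)
P = np.zeros(nmax)
P[19] = 1.0                                  # start the population at n = 20

dt = 0.01
stats = {}
for i in range(1, int(80.0 / dt) + 1):
    N = P.sum()
    Pm1 = np.concatenate(([0.0], P[:-1]))    # P_{n-1}, with P_0 = 0
    Pp1 = np.concatenate((P[1:], [0.0]))     # P_{n+1}, truncated at nmax
    P += dt * ((b - gamma * N - 1.0 / n) * P + bp * Pm1 + bm * Pp1)
    if i in (int(40.0 / dt), int(80.0 / dt)):
        N = P.sum()
        mean = (n * P).sum() / N
        var = (n * n * P).sum() / N - mean**2
        stats[round(i * dt)] = (mean, var)

drift = (stats[80][0] - stats[40][0]) / 40.0     # close to V = 0.5
spread = (stats[80][1] - stats[40][1]) / 40.0    # close to 2D = 1.0
print(drift, spread)
```

The small residual deviations from $`V`$ and $`2D`$ come from the $`1/n`$ selection term, which is of the subleading order estimated above.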
We now turn to the age characteristics. Asymptotically, the density of individuals with given age and fitness changes slowly with time because the overall density reaches a steady state. Consequently, the time variable $`t`$ is slow while the age variable $`a`$ is fast. Physically this contrast reflects the fact that during the lifetime of an individual the change in the overall age characteristics of the population is small. Thus in the first approximation, we retain only the age derivative in Eq. (3). We also ignore the term $`C_n/n`$, which is small near the peak of the asymptotic fitness distribution. Solving the resulting master equation and using the boundary condition of Eq. (4) we obtain
$$C_n(a,t)\simeq P_n(t)\,\gamma Ne^{-\gamma Na}=\frac{\gamma N^2}{\sqrt{4\pi Dt}}\,\mathrm{exp}\left[-\gamma Na-\frac{(n-Vt)^2}{4Dt}\right].$$
(40)
Integrating over the fitness variable, we find that the age distribution $`C(a,t)=\int dn\,C_n(a,t)`$ has the stationary Poisson form
$$C(a)=\gamma N^2e^{-\gamma Na}.$$
(42)
From this, the average age is $`A=(\gamma N)^{-1}=B^{-1}`$, in agreement with Eq. (9). As discussed in Sec. III B, the surprising conclusion is that the average age in the supercritical case is always smaller than that in the subcritical case.
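As a quick check of Eqs. (42) and (9), the mean of the exponential age distribution can be computed numerically ($`\gamma N=0.8`$ is an arbitrary choice):

```python
import numpy as np

# Mean age of the exponential distribution C(a) = gamma*N^2 exp(-gamma*N*a),
# Eq. (42): the normalized mean is A = 1/(gamma*N).  gamma*N = 0.8 is arbitrary.
gN = 0.8
a = np.linspace(0.0, 50.0 / gN, 400001)
C = np.exp(-gN * a)          # overall constant gamma*N^2 cancels in the ratio
mean_age = np.sum(a * C) / np.sum(C)
print(mean_age, 1.0 / gN)    # the two values agree closely
```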
## VI The Critical Case
We now consider the critical case, where the rates of advantageous and deleterious mutations are equal. Without the $`1/n`$ term and with $`\gamma N-b=b_++b_-`$, Eq. (5) becomes the master equation for an unbiased random walk on the semi-infinite range $`n\geq 0`$. Due to the $`1/n`$ term, the system has a bias towards increasing $`n`$ which vanishes as $`n\to \infty `$ (see Fig. 2). Thus we anticipate that the average fitness will grow faster than $`t^{1/2}`$ and slower than $`t`$. Hence we can again employ the continuum approach to account for the evolution of the $`P_n`$. In this limit, the corresponding master equation for $`P(n,t)`$ becomes
$$\frac{\partial P}{\partial t}=\left(B-\gamma N-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(43)
Numerically, we find $`\langle n\rangle \sim t^{2/3}`$, while the dispersion still grows as $`t^{1/2}`$, that is, $`\sigma \sim \sqrt{t}`$. Thus these two quantities are related by $`\sigma \sim \langle n\rangle ^{3/4}`$, as derived analytically for the subcritical case.
To provide a more quantitative derivation of the above scaling laws for $`\langle n\rangle `$ and $`\sigma `$, as well as to determine the fitness distribution itself, we examine the equation for $`P(n,t)`$. First note that the total population density still obeys Eq. (35), as in the supercritical case. Under the assumption that the fitness distribution is relatively narrow compared to its mean position, a result which we have verified numerically, we again estimate the integral on the right-hand side of Eq. (35) to be of the order of $`N/\langle n\rangle `$. This leads to
$$\gamma N\simeq B-\frac{1}{\langle n\rangle }.$$
(44)
Substituting this into Eq. (43) yields
$$\frac{\partial P}{\partial t}=\left(\frac{1}{\langle n\rangle }-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(45)
Given that the peak of the distribution is located near $`n\approx \langle n\rangle `$, it proves useful to change variables from $`(n,t)`$ to the comoving co-ordinates $`(y=n-\langle n\rangle ,t)`$ to determine how the peak of the distribution spreads. We therefore write the derivatives in the comoving coordinates
$$\frac{\partial }{\partial t}=\frac{\partial }{\partial t}\bigg|_{\mathrm{comov}.}-\frac{d\langle n\rangle }{dt}\frac{\partial }{\partial y},\qquad \frac{\partial }{\partial n}=\frac{\partial }{\partial y},$$
and expand the birth/death term in powers of the deviation $`y=n-\langle n\rangle `$
$$\frac{1}{\langle n\rangle }-\frac{1}{n}=\frac{y}{\langle n\rangle ^2}-\frac{y^2}{\langle n\rangle ^3}+\cdots $$
Now Eq. (45) becomes
$$\frac{\partial P}{\partial t}-\frac{d\langle n\rangle }{dt}\frac{\partial P}{\partial y}=\frac{y}{\langle n\rangle ^2}P-\frac{y^2}{\langle n\rangle ^3}P+D\frac{\partial ^2P}{\partial y^2}.$$
(46)
Let us first assume that the average fitness grows faster than diffusively, that is, $`\langle n\rangle \gg \sqrt{t}`$. With this assumption, the dominant terms in Eq. (46) are
$$-\frac{d\langle n\rangle }{dt}\frac{\partial P}{\partial y}=\frac{y}{\langle n\rangle ^2}P.$$
(47)
These terms balance when $`\langle n\rangle (ty)^{-1}\sim y\langle n\rangle ^{-2}`$. Using this scaling and balancing the remaining sub-dominant terms in Eq. (46) gives $`y\sim \sqrt{t}`$. The combination of these results yields $`\langle n\rangle \sim t^{2/3}`$. This justifies our initial assumption that $`\langle n\rangle \gg \sqrt{t}`$. Now we write $`\langle n\rangle =(ut)^{2/3}`$, with $`u`$ of order unity, to simplify Eq. (47) to
$$\frac{\partial P}{\partial y}=-\frac{3y}{2u^2t}P.$$
(48)
In terms of $`n=y+\langle n\rangle `$ the solution is
$$P(n,t)=N\sqrt{\frac{3}{4\pi u^2t}}\,\mathrm{exp}\left\{-\frac{3\left[n-(ut)^{2/3}\right]^2}{4u^2t}\right\}.$$
(49)
Thus the fitness distribution is again Gaussian, as in the supercritical case, but with the average fitness growing as $`\langle n\rangle =(ut)^{2/3}`$. Finally, we determine $`u=\sqrt{3D}`$ by substituting $`\langle n\rangle =(ut)^{2/3}`$ in Eq. (46) and balancing the sub-dominant terms.
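The $`t^{2/3}`$ growth can be checked against a direct integration of the discrete master equation (5) with $`b_+=b_-`$. The sketch below (arbitrary illustrative rates, plain Euler stepping) estimates the growth exponent from the mean fitness at two late times:

```python
import numpy as np

# Euler integration of Eq. (5) at the critical point b_+ = b_-.
# Expected: <n> ~ (u t)^(2/3) with u = sqrt(3 D), D = (b_+ + b_-)/2 = 0.5.
b, bp, bm, gamma = 0.0, 0.5, 0.5, 1.0
nmax = 300
n = np.arange(1, nmax + 1, dtype=float)
P = np.zeros(nmax)
P[4] = 1.0                                  # start the population at n = 5

dt = 0.02
means = {}
for i in range(1, int(800.0 / dt) + 1):
    N = P.sum()
    Pm1 = np.concatenate(([0.0], P[:-1]))   # P_{n-1}, with P_0 = 0
    Pp1 = np.concatenate((P[1:], [0.0]))    # P_{n+1}, truncated at nmax
    P += dt * ((b - gamma * N - 1.0 / n) * P + bp * Pm1 + bm * Pp1)
    if i in (int(200.0 / dt), int(800.0 / dt)):
        means[round(i * dt)] = (n * P).sum() / P.sum()

exponent = np.log(means[800] / means[200]) / np.log(4.0)
print(means[200], means[800], exponent)     # exponent near 2/3
```

Finite-time corrections shift the measured exponent by a few percent, but it clearly sits between the diffusive value 1/2 and the ballistic value 1.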
The age distribution in the critical case can be obtained in a similar manner as in the supercritical case. The approximations that were invoked to determine the age distribution in the supercritical case still apply. Consequently, the asymptotic form for $`C_n(a)`$ is still given by Eq. (40), and this gives the same expression for $`C(a)`$ after integrating over $`n`$, as in Eq. (42). Hence the average age is again $`B^{-1}`$, as in Eq. (9).
## VII Summary and Discussion
We have introduced an age-structured logistic-like population dynamics model, which is augmented by fitness mutation of offspring with respect to their parents. Here fitness is quantified by the life expectancy $`n`$ of an individual. We found unusual age and fitness evolution in which the overall mutational bias leads to three distinct regimes of behavior. Specifically, when deleterious mutations are more likely, the fitness distribution of the population approaches a steady state which is an exponentially decaying function of fitness. When advantageous mutations are favored or when there is no mutational bias, a Gaussian fitness distribution arises, in which the average fitness grows as $`\langle n\rangle =Vt`$ and as $`\langle n\rangle =(3D)^{1/3}t^{2/3}`$, respectively.
Paradoxically, the average age of the population is maximal for a completely unfit population. Conversely, individuals are less long-lived for either positive or no mutational bias, even though the average fitness increases indefinitely with time. That is, a continuous “rat-race” towards increased fitness leads to a decrease in the average life span. As individuals become fit, increased competition results in their demise well before their intrinsic lifetimes are reached. Thus within our model, an increase in the average fitness is not a feature which promotes longevity.
Our basic conclusions should continue to hold for the more realistic situation where the mortality rate increases with age . The crucial point is that old age is unattainable within our model, even if individuals are infinitely fit. When the mutational bias is non-negative, old age is unattainable due to keen competition among fit individuals, while if deleterious mutations are favored, age is limited by death due to natural mortality. In either case, there are stringent limits on the life expectancy of any individual. To include an age-dependent mortality into our model, we may write the mortality term $`f_n(a)C_n(a,t)`$ instead of $`n^{-1}C_n(a,t)`$ in Eq. (3), where $`f_n(a)`$ is the mortality rate for individuals of age $`a`$. Similarly, the term $`n^{-1}P_n`$ in Eq. (5) should be replaced by $`\int 𝑑af_n(a)C_n(a,t)`$. However, these generalized terms play no role for large $`n`$, since $`f_n(a)`$ is a decreasing function of $`n`$ and old age is unattainable.
We gratefully acknowledge partial support from NSF grant DMR9632059 and ARO grant DAAH04-96-1-0114.
## A Bounds for the average age
The upper bound, $`A<(\gamma N)^{-1}`$, follows immediately from Eq. (30), so we just prove $`A>A_{\mathrm{min}}`$. We have
$`A_{\mathrm{min}}`$ $`=`$ $`B^{-1}={\displaystyle \frac{1}{b+b_{-}(1+\mu ^2)}}`$
$`A`$ $`=`$ $`{\displaystyle \frac{1}{b+2b_{-}\mu }}-{\displaystyle \frac{1}{(b+2b_{-}\mu )^2}}{\displaystyle \int _0^1}𝑑xx^{\frac{1}{b+2b_{-}\mu }}\left({\displaystyle \frac{1-\mu }{1-\mu x}}\right)^2\mathrm{exp}\left\{{\displaystyle \frac{1}{b_{-}\mu }}\left({\displaystyle \frac{1}{1-\mu x}}-{\displaystyle \frac{1}{1-\mu }}\right)\right\}.`$
Using these expressions and performing elementary transformations we reduce the inequality $`A>A_{\mathrm{min}}`$ to
$$\int _0^1\frac{dx}{b_{-}(1-\mu x)^2}x^{\frac{1}{b+2b_{-}\mu }}\mathrm{exp}\left\{\frac{1}{b_{-}\mu }\left(\frac{1}{1-\mu x}-\frac{1}{1-\mu }\right)\right\}<\frac{b+2b_{-}\mu }{b+b_{-}(1+\mu ^2)}.$$
(A1)
Let us now introduce the variable
$$v=\frac{1}{b_{-}\mu }\left(\frac{1}{1-\mu }-\frac{1}{1-\mu x}\right),$$
(A2)
so that $`dv=-dx/b_{-}(1-\mu x)^2`$; as $`x`$ runs from 0 to 1, $`v`$ decreases from $`V=\frac{1}{b_{-}(1-\mu )}`$ to 0. This simplifies Eq. (A1) to
$$\int _0^V𝑑ve^{-v}\left[\frac{1-V^{-1}v}{1-V^{-1}\mu v}\right]^{\frac{1}{b+2b_{-}\mu }}<\frac{b+2b_{-}\mu }{b+b_{-}(1+\mu ^2)}.$$
(A3)
We now use the inequality
$$\left[\frac{1-p}{1-q}\right]^\nu <e^{(q-p)\nu }$$
(A4)
which holds for $`0<q<p<1`$ and $`\nu >0`$. This inequality is readily proven by taking the logarithm on both sides and using the expansion $`\mathrm{ln}(1-u)=-\sum _{k\ge 1}u^k/k`$. Now we apply Eq. (A4) to the integrand in (A3) and then replace the upper limit $`V`$ in the integral by $`\mathrm{\infty }`$ to give
$`{\displaystyle \int _0^V}𝑑ve^{-v}\left[{\displaystyle \frac{1-V^{-1}v}{1-V^{-1}\mu v}}\right]^{\frac{1}{b+2b_{-}\mu }}`$ $`<`$ $`{\displaystyle \int _0^V}𝑑v\mathrm{exp}\left\{-v-v{\displaystyle \frac{b_{-}(1-\mu )^2}{b+2b_{-}\mu }}\right\}`$
$`<`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}𝑑v\mathrm{exp}\left\{-v{\displaystyle \frac{b+b_{-}(1+\mu ^2)}{b+2b_{-}\mu }}\right\}`$
$`=`$ $`{\displaystyle \frac{b+2b_{-}\mu }{b+b_{-}(1+\mu ^2)}}.`$
This completes the proof.
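As a numerical sanity check of the inequality, one can evaluate $`A`$ by quadrature directly from the integral representation of Eq. (30) in the form used above (with $`\mu =\sqrt{b_+/b_{-}}`$) and compare it with the two bounds; the parameter values below are arbitrary test choices.

```python
# Numerical check of A_min < A < A_max for sample parameters.
# A is taken from the integral representation used in this appendix:
#   A = 1/g - I/g^2,  g = b + 2 b_- mu,  mu = sqrt(b_+/b_-).
import math

def average_age(b, bp, bm, N=200001):
    mu = math.sqrt(bp / bm)
    g = b + 2.0 * bm * mu                      # = b + 2*sqrt(b_+ b_-)
    def f(x):
        return (x ** (1.0 / g) * ((1.0 - mu) / (1.0 - mu * x)) ** 2
                * math.exp((1.0 / (bm * mu))
                           * (1.0 / (1.0 - mu * x) - 1.0 / (1.0 - mu))))
    h = 1.0 / (N - 1)                          # composite Simpson rule on [0, 1]
    s = f(0.0) + f(1.0)
    for i in range(1, N - 1):
        s += f(i * h) * (4 if i % 2 else 2)
    return 1.0 / g - (s * h / 3.0) / g ** 2

results = []
for b, bp, bm in [(1.0, 1.0, 2.0), (0.5, 1.0, 4.0)]:
    A = average_age(b, bp, bm)
    A_max = 1.0 / (b + 2.0 * math.sqrt(bp * bm))
    A_min = 1.0 / (b + bp + bm)                # = 1/(b + b_-(1 + mu^2)) = B^{-1}
    results.append((A_min, A, A_max))
```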
The lower bound $`A_{\mathrm{min}}`$ turns out to be very accurate in the case when mutations are slightly deleterious. To see this let us write $`b_+=1,b_{-}=(1+ϵ)^2`$, where $`ϵ\ll 1`$. Replacing $`x`$ by the transformed variable $`v=ϵ^{-1}-(1+ϵ-x)^{-1}`$ recasts the integral in Eq. (30) as
$`ϵ^2{\displaystyle \int _0^{\frac{1}{ϵ(1+ϵ)}}}𝑑ve^{-v}\left(1-{\displaystyle \frac{ϵ^2v}{1-ϵv}}\right)^{\frac{1}{b+2+2ϵ}}.`$ (A5)
We now expand the integrand,
$`\left(1-{\displaystyle \frac{ϵ^2v}{1-ϵv}}\right)^{\frac{1}{b+2+2ϵ}}=1-{\displaystyle \frac{ϵ^2v}{b+2+2ϵ}}-{\displaystyle \frac{ϵ^3v^2}{b+2+2ϵ}}+𝒪(ϵ^4),`$
replace the upper limit in the integral Eq. (A5) by $`\mathrm{\infty }`$, and compute the resulting simple integrals explicitly to obtain a series expansion in $`ϵ`$ for the average age. Together with analogous expansions for $`A_{\mathrm{max}}`$ and $`A_{\mathrm{min}}`$ we have
$`A_{\mathrm{max}}`$ $`=`$ $`{\displaystyle \frac{1}{b+2+2ϵ}}`$ (A6)
$`A_{\mathrm{min}}`$ $`=`$ $`A_{\mathrm{max}}-{\displaystyle \frac{ϵ^2}{(b+2+2ϵ)^2}}+{\displaystyle \frac{ϵ^4}{(b+2+2ϵ)^3}}+𝒪(ϵ^6)`$ (A7)
$`A`$ $`=`$ $`A_{\mathrm{max}}-{\displaystyle \frac{ϵ^2}{(b+2+2ϵ)^2}}+{\displaystyle \frac{ϵ^4}{(b+2+2ϵ)^3}}+{\displaystyle \frac{2ϵ^5}{(b+2+2ϵ)^3}}+𝒪(ϵ^6).`$ (A8)
Thus the difference between the exact value and $`A_{\mathrm{min}}`$ is of order $`ϵ^5`$.
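The smallness of this difference can be confirmed by quadrature: the sketch below evaluates $`A`$ numerically for $`b_+=1`$, $`b_{-}=(1+ϵ)^2`$ and checks that the series through the $`ϵ^5`$ term is a strictly better approximation than the series truncated at order $`ϵ^2`$; the value of $`b`$ is an arbitrary test choice.

```python
# Check of (A6)-(A8): with b_+ = 1, b_- = (1+e)^2, the quadrature value of A
# should reproduce A_max - e^2 A_max^2 + e^4 A_max^3 + 2 e^5 A_max^3 up to O(e^6).
import math

def A_quad(b, e, N=400001):
    mu = 1.0 / (1.0 + e)                       # mu = sqrt(b_+/b_-)
    bm = (1.0 + e) ** 2
    g = b + 2.0 + 2.0 * e                      # = b + 2 b_- mu
    def f(x):
        return (x ** (1.0 / g) * ((1.0 - mu) / (1.0 - mu * x)) ** 2
                * math.exp((1.0 / (bm * mu))
                           * (1.0 / (1.0 - mu * x) - 1.0 / (1.0 - mu))))
    h = 1.0 / (N - 1)                          # composite Simpson rule on [0, 1]
    s = f(0.0) + f(1.0)
    for i in range(1, N - 1):
        s += f(i * h) * (4 if i % 2 else 2)
    return 1.0 / g - (s * h / 3.0) / g ** 2

b, e = 1.0, 0.08
A_max = 1.0 / (b + 2.0 + 2.0 * e)
series = A_max - e**2 * A_max**2 + e**4 * A_max**3 + 2 * e**5 * A_max**3
A_val = A_quad(b, e)
err_full = abs(A_val - series)                     # should be O(e^6)
err_trunc = abs(A_val - (A_max - e**2 * A_max**2))  # series cut at O(e^2)
```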
## B Transient behavior of the total density
Numerically, we found that in the subcritical case the total population density approaches the steady state value $`N_{\mathrm{\infty }}=(b+2\sqrt{b_+b_{-}})/\gamma `$ from below with a deviation that vanishes as $`t^{-2/3}`$. We now explain this behavior by constructing a mapping between this transient behavior in the subcritical case and the transient behavior in the critical case. We start with the basic rate equation, Eq. (5). We may remove the term $`\gamma NP_n`$ through the transformation
$$Q_n(t)=P_n(t)\mathrm{exp}\left\{\gamma \int _0^t𝑑t^{\prime }N(t^{\prime })\right\},$$
(B1)
which simplifies Eq. (5) to
$$\frac{dQ_n}{dt}=\left(b-\frac{1}{n}\right)Q_n+b_+Q_{n-1}+b_{-}Q_{n+1}.$$
(B2)
Next, the steady state behavior $`P_n\propto \mu ^n`$ suggests replacing the $`Q_n`$ by $`R_n(t)=\mu ^{-n}Q_n(t)`$. This also removes the asymmetry in the birth terms and gives
$$\frac{dR_n}{dt}=\left(b-\frac{1}{n}\right)R_n+\overline{b}(R_{n-1}+R_{n+1}),$$
(B3)
where we use the shorthand notation $`\overline{b}=\sqrt{b_+b_{-}}`$.
One cannot use the continuum approximation to determine the steady-state solutions for $`P_n`$ or $`Q_n`$. However, the continuum approximation is appropriate for the $`R_n`$. Then Eq. (B3) reduces to
$$\frac{\partial R}{\partial t}=\left(b+2\overline{b}-\frac{1}{n}\right)R+\overline{b}\frac{\partial ^2R}{\partial n^2},$$
(B4)
which is very similar to Eq. (45). Hence we expect that the distribution of $`R_n`$ is peaked around $`\langle n\rangle \simeq (3\overline{b})^{1/3}t^{2/3}`$. It proves convenient to make this scaling manifest. To this end we change variables once more,
$$S_n(t)=R_n(t)\mathrm{exp}\left\{-(b+2\overline{b})t+\left(\frac{9t}{\overline{b}}\right)^{1/3}\right\},$$
(B5)
to get
$$\frac{\partial S}{\partial t}=\left(\frac{1}{\langle n\rangle }-\frac{1}{n}\right)S+\overline{b}\frac{\partial ^2S}{\partial n^2}.$$
(B6)
Repeating the procedure of Sec. V we determine the asymptotic solution to Eq. (B6) as
$$S_n(t)\simeq \frac{1}{\sqrt{4\pi \overline{b}t}}\mathrm{exp}\left\{-\frac{(n-\langle n\rangle )^2}{4\overline{b}t}\right\}.$$
(B7)
To find the asymptotics of the total population density let us compute $`\sum _nQ_n(t)`$. First, (B1) can be expressed as
$$\sum _{n=1}^{\mathrm{\infty }}Q_n(t)=N(t)\mathrm{exp}\left\{\gamma \int _0^t𝑑t^{\prime }N(t^{\prime })\right\}.$$
(B8)
On the other hand,
$$\sum _{n=1}^{\mathrm{\infty }}Q_n(t)=\sum _{n=1}^{\mathrm{\infty }}\mu ^nR_n(t)=\mathrm{exp}\left\{(b+2\overline{b})t-\left(\frac{9t}{\overline{b}}\right)^{1/3}\right\}\sum _{n=1}^{\mathrm{\infty }}\mu ^nS_n(t).$$
(B9)
In the last sum, the factor $`\mu ^n`$ suggests that only terms with small $`n`$ contribute significantly. Although the asymptotic expression (B7) is formally justified only in the scaling region, where $`|n-\langle n\rangle |\lesssim \sqrt{\overline{b}t}`$, the continuum approach typically provides a qualitatively correct description even outside this region. Therefore we take Eq. (B7) to estimate $`S_n`$ for small $`n`$. We find
$$\sum _{n=1}^{\mathrm{\infty }}\mu ^nS_n(t)\propto \mathrm{exp}\left\{-C\left(\frac{9t}{\overline{b}}\right)^{1/3}\right\},$$
(B10)
where we use $`\langle n\rangle \sim t^{2/3}`$, as in the critical case, and $`C`$ is a constant. By substituting Eq. (B10) into Eq. (B9) we obtain
$$\sum _{n=1}^{\mathrm{\infty }}Q_n(t)\propto \mathrm{exp}\left\{(b+2\overline{b})t-(1+C)\left(\frac{9t}{\overline{b}}\right)^{1/3}\right\}.$$
(B11)
Combining Eqs. (B8) and (B11) we arrive at the asymptotic expansion
$$\gamma \int _0^t𝑑t^{\prime }N(t^{\prime })=(b+2\overline{b})t-(1+C)\left(\frac{9t}{\overline{b}}\right)^{1/3}+\cdots ,$$
(B12)
which implies
$$\gamma N(t)=b+2\overline{b}-\mathrm{const}\times t^{-2/3}+\cdots $$
(B13)
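The approach of the growth rate to $`b+2\sqrt{b_+b_{-}}`$ can be illustrated by a direct Euler integration of the master equation (B2); the parameter values, cutoff and initial condition below are arbitrary numerical choices.

```python
# Euler integration of Eq. (B2):
#   dQ_n/dt = (b - 1/n) Q_n + b_+ Q_{n-1} + b_- Q_{n+1},  n >= 1, Q_0 = 0.
# The per-capita growth rate of sum_n Q_n should approach b + 2*sqrt(b_+ b_-).
import math

b, bp, bm = 0.5, 1.0, 4.0              # deleterious mutations favored (b_- > b_+)
limit = b + 2.0 * math.sqrt(bp * bm)   # = 4.5 for these values
nmax = 120                             # cutoff; Q_n is negligible near it
q = [0.0] * (nmax + 2)                 # q[0] and q[nmax+1] stay 0
q[10] = 1.0                            # localized initial condition

def growth_rate(q):
    tot = sum(q[1:nmax + 1])
    dtot = sum((b - 1.0 / n) * q[n] + bp * q[n - 1] + bm * q[n + 1]
               for n in range(1, nmax + 1))
    return dtot / tot

dt = 0.004
for step in range(15000):              # integrate up to t = 60
    new = q[:]
    for n in range(1, nmax + 1):
        new[n] = q[n] + dt * ((b - 1.0 / n) * q[n] + bp * q[n - 1] + bm * q[n + 1])
    q = new

lam = growth_rate(q)                   # close to (but still relaxing toward) limit
```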
# Spin-dependent transport in a quasi-ballistic quantum wire
## Abstract
We describe the transport properties of a 5 $`\mu `$m long one-dimensional (1D) quantum wire. A reduction of the conductance plateaux due to the introduction of weak disorder scattering is observed. In an in-plane magnetic field, we observe spin-splitting of the reduced conductance steps. Our experimental results provide evidence that the deviation from conductance quantisation is very small for electrons with spin parallel and is about 1/3 for electrons with spin anti-parallel. Moreover, in a high in-plane magnetic field, a spin-polarised 1D channel shows a plateau-like structure close to $`0.3\times e^2/h`$ which strengthens with increasing temperature. It is suggested that these results arise from the combination of disorder and the electron-electron interactions in the 1D electron gas.
Using electron beam lithography, one is able to pattern the surface of a GaAs/AlGaAs heterostructure with sub-micron Schottky gates. By negatively biasing the surface gates, one can electrostatically squeeze the underlying two-dimensional electron gas (2DEG) into various shapes. The most noteworthy success of this technique is the experimental realisation of a one-dimensional (1D) channel – by using a pair of “split-gates”, it is possible to define a 1D channel within a 2DEG. If the elastic scattering length is longer than the 1D channel length, transport through the channel is ballistic and one observes conductance plateaux quantised in units of $`2e^2/h`$. Although 1D electron transport has been studied for more than a decade, most experimental results can be explained within a single particle picture without considering electron-electron interactions and spin effects in a 1D system. It is only more recently that a “0.7 structure”, evidence for possible spin polarisation caused by electron-electron interactions , has been extensively studied. Part of the reason may be that in a clean system the conductance is independent of the electron-electron interactions and is determined by the entrance and exit reservoirs . A non-interacting 1D system should show either ballistic quantization or have a conductance decreasing to zero with increasing length and decreasing temperature due to localization. On the other hand it has been suggested that the interaction is observable in the presence of backscattering , a so-called dirty Luttinger liquid, but excess disorder may remove all semblance of 1D transport.
It is well known that as the length of a quantum wire is increased, the effect of disorder within the wire is enhanced and back-scattering in the channel is increased, resulting in a crossover from ballistic towards diffusive transport. In this paper, we present experimental results on the transport properties of a 5 $`\mu `$m quantum wire. In such a regime the weak disorder within the system reduces the conductance steps below the quantised values in units of $`2e^2/h`$. In an in-plane magnetic field, we observed spin-splitting of the conductance plateaux, as expected. Recently Kimura, Kuroki and Aoki have proposed that in a dirty Luttinger liquid, a reduction of spin anti-parallel conductance occurs. That is, due to the electron-electron interactions, the conductance for spin antiparallel electrons is smaller than that for spin parallel electrons. We shall show that our results are consistent with their model. When more than one 1D subband is occupied in the channel, our data further suggest that the electron transmission probability through a long quasi-ballistic channel shows oscillating behaviour with electron spin species. Moreover, below the first spin-polarised conductance step, a plateau-like structure close to $`0.3\times e^2/h`$ strengthens with increasing temperature.
The split-gate (SG) device (5 $`\mu `$m long and 0.8 $`\mu `$m wide) was lithographically defined, 300 nm above the 2DEG. The 2DEG has a carrier density of $`3\times 10^{11}`$ cm<sup>-2</sup> with a mobility of $`7.5\times 10^6`$ cm<sup>2</sup>/Vs after brief illumination with a red light emitting diode. Experiments were performed in a pumped <sup>3</sup>He cryostat and the two-terminal conductance $`G=dI/dV`$ was measured using an ac excitation voltage of 10 $`\mu `$V at a frequency of 77 Hz with standard phase-sensitive techniques. The in-plane magnetic field $`B_{\parallel }`$ is applied parallel to the source-drain current. To check for an out-of-plane magnetic field component, we measure the Hall voltage. From this we know that the sample was aligned to better than 0.1° using an in-situ rotating insert. In all cases, a zero-split-gate-voltage series resistance due to the bulk 2DEG is subtracted from the raw data. In the literature, it has been shown that there is an additional series resistance between a 1D channel and bulk 2DEG . Three different samples at four cool-downs show similar behaviour and measurements taken from one of these samples are presented in this paper.
Impurities in the spacer layer can give rise to potential fluctuations near a 1D wire. This effect could cause nonuniformity of the quantum wire confinement potential, leading to a micro-constriction, a narrowest region in the channel. To check whether conduction through a 1D channel is dominated by a micro-constriction, one can laterally shift the conduction channel. When the channel is moved away from a micro-constriction, the conductance – split-gate voltage pinch-off characteristics would show large variations due to the sudden disappearance of the extra confining potential due to impurities which causes the micro-constriction in the channel. We now show that this is not the case in our long quantum wire. Figure 1 shows $`G(V_{SG})`$ when we differentially bias the two halves of the split-gates. In this case, the 1D channel is laterally shifted by $`\pm `$ 57 nm . It is evident that the pinch-off voltages show a linear dependence on the voltage difference between the two halves of the split-gate. Also resonant features and conductance plateaux show gradual evolution as the channel is moved laterally. This demonstrates that transport through the channel is not dominated by a narrow region in the channel. As shown in Fig. 1, resonant features superimposed upon conductance steps are clearly observed in all traces. These resonances are believed due to potential fluctuations near the 1D channel . The scattering potential within the channel does indeed vary when the 1D channel is moved laterally as the strength of resonant features changes, as illustrated in Fig. 1. Nevertheless, note that conductance plateaux deviations from their quantised values are always observed in all 11 traces. However it is noticeable that the plateaux are reasonably intact, showing that the scattering is weak and does not vary significantly with the Fermi energy, so producing a semblance of plateaux.
In this paper, we concentrate on the case where there is no potential difference between the two halves of the SG.
Figure 2 shows conductance-split gate voltage characteristics $`G(V_{SG})`$ at various temperatures $`T`$. With increasing $`T`$, the feature close to $`0.8\times 2e^2/h`$ and resonant features gradually disappear. The first three conductance plateaux values increase and approach multiples of $`2e^2/h`$ at the highest temperatures. In a shorter wire (3 $`\mu `$m), clean conductance plateaux close to multiples of $`2e^2/h`$ are observed. With increasing temperature, the conductance plateaux become less well-defined due to thermal smearing. Nevertheless the mid-points of the conductance steps remain close to multiples of $`2e^2/h`$ at high temperatures. This effect is not related to the reports of decreased plateau values. For example, recently the role of electron injection into V-groove quantum wires has been studied . It has been shown that the observed reduction of ballistic conductance steps is due to poor coupling between the 1D states of the wire and the 2D states of the reservoirs. This mechanism may account for the reduced conductance plateaux observed in cleaved edge overgrown quantum wires studied by Yacoby and co-workers , which is entirely different to the spin-dependent effects previously reported .
We now turn our attention to the reduced conductance plateaux as a function of magnetic field applied parallel to the 2DEG, $`B_{\parallel }`$. It is well established that a large $`B_{\parallel }`$ lifts the electron spin degeneracy as first demonstrated by Wharam et al. , causing consecutive spin-parallel (parallel to $`B_{\parallel }`$) and spin-antiparallel (anti-parallel to $`B_{\parallel }`$) conductance plateaux in multiples of $`e^2/h`$ . Figure 3 shows $`G(V_{SG})`$ at various $`B_{\parallel }`$. With increasing $`B_{\parallel }`$, the splitting of the conductance steps can be seen and the spin-split conductance steps values are somewhat lower than multiples of $`e^2/h`$. It is worth mentioning that the feature close to $`0.8\times 2e^2/h`$, believed to be due to resonant transmission through an impurity potential , gradually disappears with increasing $`B_{\parallel }`$. This result, together with the data shown in figure 1 when the feature close to $`0.8\times 2e^2/h`$ gradually turns into a resonant peak as the channel is laterally shifted, show that at zero magnetic field one needs to be careful in ascribing any feature close to $`0.7\times 2e^2/h`$ observed in a 1D channel to the “0.7 plateau” extensively studied by Thomas and co-workers . The zero split-gate voltage conductance shows a monotonic decrease with increasing $`B_{\parallel }`$, as illustrated in the inset to Fig. 3. This effect is due to the diamagnetic shift of the 2DEG .
As clearly shown in Fig. 3, at $`B_{\parallel }=11`$ T the conductance does not show steps in multiples of $`e^2/h`$. We now use a different view which reveals a striking behaviour. Figure 4 now shows the difference in conductance between the mid-points of consecutive steps, $`\mathrm{\Delta }G(n)=G(n)-G(n-1)`$, where $`n`$ is the number of spin-split 1D subbands occupied. For $`n=1`$, $`\mathrm{\Delta }G(1)`$ is simply the conductance step value. For $`n\le 6`$, an oscillating behaviour is evident – $`\mathrm{\Delta }G(n)`$ approaches a quantised value of $`e^2/h`$ when $`n`$ is an odd integer, and shows substantial deviations (up to 1/3) from a quantised value of $`e^2/h`$ when $`n`$ is an even integer. For $`n>6`$, the conductance steps are less pronounced and the striking oscillating behaviour gradually disappears. Assuming that $`\mathrm{\Delta }G(n)`$ reflects the transmission probability for the $`n^{th}`$ spin-split 1D subband, then these experimental results suggest that the spin parallel electrons have almost a full transmission probability (100%) through the 1D channel whereas the spin anti-parallel electrons have a much lower transmission probability ($`\sim `$ 65%). The semblance of the observed 1D conductance steps, together with the observed weak resonant features in our weakly disordered 1D wire suggest that our device is in the dirty Luttinger liquid regime. If this is the case, then our experimental results are consistent with the model proposed by Kimura and co-workers. Note that in their model they only consider a two-band (spin parallel and spin antiparallel electrons) Tomonaga-Luttinger liquid. The fact that the reduction of spin antiparallel conductance persists up to n=6 suggests that our results can be extended to a “six-band” (three pairs of spin parallel and spin antiparallel electrons) limit.
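The bookkeeping behind Fig. 4 can be phrased as a simple Landauer–Büttiker sum: if each spin-parallel subband transmits with probability close to 1 and each spin-antiparallel subband with probability close to 0.65 — illustrative values suggested by the data, not fitted numbers — the conductance staircase reproduces the reported odd/even alternation of $`\mathrm{\Delta }G(n)`$.

```python
# Landauer-Buttiker sketch: G(n) = (e^2/h) * sum of per-subband transmissions.
# The transmission values are illustrative, suggested by the text (~100%, ~65%).
T_parallel, T_antiparallel = 1.00, 0.65

def G(n):
    """Conductance with n spin-split subbands occupied, in units of e^2/h."""
    return sum(T_parallel if j % 2 == 1 else T_antiparallel
               for j in range(1, n + 1))

delta_G = [G(n) - G(n - 1) for n in range(1, 7)]
# delta_G alternates between ~1.0 (odd n) and ~0.65 (even n)
```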
Finally we present the temperature dependence measurements which reveal even more striking behaviour. Figure 5 shows $`G(V_{SG})`$ for $`B_{\parallel }=11`$ T at different temperatures. As expected, the spin-split conductance steps become less pronounced at higher temperatures due to thermal broadening. However a plateau-like structure close to $`0.3\times e^2/h`$ becomes more pronounced with increasing temperature. Also the structure approaches $`0.4\times e^2/h`$ at the highest temperature. The reason for this unexpected behaviour is not fully understood at present but we speculate that the strong electron-electron interactions might play a role in this.
The oscillating, spin-dependent transmission probability which we observed is in striking contrast to the behaviour of a short, “point contact” ballistic channel where the quantization is always in units of $`e^2/h`$ regardless of spin orientation. In the latter case, theory shows that due to mixing of the reservoir and channel states there can be no electron-electron interaction enhanced deviation from the quantized values. However Kimura and co-workers have considered the Tomonaga-Luttinger liquid when backscattering occurs; they find that this mixes with the interaction to produce a conductance dependent on the Fermi energy and hence spin state, although it is still surprising that the spin parallel electrons display a quantized value. Recently it has been suggested that in the Tomonaga-Luttinger regime charge density wave formation can give rise to a fractional charge behaviour. The results here are consistent with the spin-antiparallel electrons showing a quantized conductance but with the value of the fundamental charge reduced. We note that we cannot attribute the spin-dependent behaviour to any spin-dependent scattering at the entrance and the exit to the channel, as it is absent on short devices on the same heterostructure material and the effect is reproducible despite a change in sample and channel location.
In summary, we have performed low-temperature measurements on a quasi-ballistic quantum wire. Our results suggest that the electron transmission probability through a long quasi-ballistic channel shows oscillating behaviour with spin species. Moreover, a spin-polarised 1D channel shows a pronounced plateau-like structure close to $`0.3\times e^2/h`$ with increasing strength at higher temperatures. Such striking behaviour is only observed in long quantum wires (5 $`\mu `$m) but not in a clean 1D channel (3 $`\mu `$m), suggesting that the back-scattering within the quasi-ballistic 1D system plays an important role.
This work was funded by the UK EPSRC, and in part, by the US Army Research Office. We thank C.J.B. Ford for helpful discussions, K.J. Thomas for drawing our attention to Ref. , H.D. Clark, J.E.F. Frost and M. Kataoka for advice and help on device fabrication at an early stage of this work, and S. Shapira for experimental assistance. C.T.L. is grateful for support from Department of Physics, National Taiwan University.
Figure Captions
Figure 1. $`G(V_{SG})`$ measurements when the channel is laterally shifted by differentially biasing the two halves of the split-gates. From left to right: $`\mathrm{\Delta }V_{SG}`$ = -0.5 V to +0.5 V in 0.1 V steps. The measurement temperature was 0.3 K.
Figure 2. $`G(V_{SG})`$ at various temperatures $`T`$ as illustrated in the figure.
Figure 3. $`G(V_{SG})`$ at various applied magnetic fields parallel to the 2DEG, $`B_{\parallel }`$. From left to right: $`B_{\parallel }`$ = 0 to 11 T in 1 T steps. Curves are successively offset by 0.01 V for clarity. The zero split-gate voltage conductance at various $`B_{\parallel }`$, as shown in the inset to Fig. 3, has been subtracted from the raw data. The inset shows the zero split-gate voltage conductance as a function of $`B_{\parallel }`$. The measurement temperature was 0.3 K.
Figure 4. The difference in conductance between the $`(n-1)^{th}`$ and the $`n^{th}`$ 1D conductance steps, $`\mathrm{\Delta }G(n)=G(n)-G(n-1)`$, where $`n`$ is the spin-split subband index.
Figure 5. $`G(V_{SG})`$ for $`B_{\parallel }=11`$ T at various temperatures $`T`$. From left to right: $`T=`$ 0.3, 0.363, 0.391, 0.429, 0.483, 0.542, 0.611, 0.680, 0.752, 0.832, 0.920, 1.01, 1.07, 1.16, 1.28, 1.46 and 1.60 K. Curves are successively offset by 0.01 V for clarity. The data was taken at the second cool-down.
Present address: Department of Physics, National Taiwan University, Taipei 106, Taiwan
# Decay Process for Three-Species Reaction-Diffusion System
## Figure Captions
Fig.1.
The global reaction rate $`R(t)`$ versus time $`t`$ for the reaction-diffusion process of the form $`A+B+C\to D`$. Our numerical simulation is done for $`2\times 10^2`$ configurations on a $`200\times 200`$ square lattice with periodic boundary conditions.
Fig.2
Plot of $`\mathrm{ln}R(t)`$ versus $`\mathrm{ln}t`$ before the crossover. The slope of the solid line is $`0.49`$.
Fig.3
Plot of $`\mathrm{ln}R(t)`$ versus $`\mathrm{ln}t`$ after the crossover. The scaling exponent we obtained is approximately $`0.54`$.
# Proof of the volume conjecture for torus knots
## 1. Introduction
In the recent paper H. Murakami and J. Murakami showed that the “quantum dilogarithm” knot invariant, introduced in , is a special case of the colored Jones invariant (polynomial) associated with the quantum group $`SU(2)_q`$. Using the connection of the quantum dilogarithm invariant with the hyperbolic volume of knot’s complement, conjectured in , they also proposed the “volume conjecture”: for any knot the above mentioned specification of the colored Jones invariant in a certain limit gives the simplicial volume (or Gromov norm) of the knot. It is remarkable that this conjecture implies that a knot is trivial if and only if its colored Jones invariants are trivial, see .
The purpose of this paper is to prove the volume conjecture for the case of torus knots. To formulate our result, let us first recall the form of the colored Jones invariant for torus knots .
The colored Jones invariant $`J_{K,k}(h)`$ of a framed knot $`K`$ is a Laurent polynomial in $`q=e^h`$ depending on the ‘color’ $`k`$, the dimension of a $`SU(2)_q`$-module. Let $`m,p`$ be mutually prime positive integers. Denote $`L\equiv O_{m,p}`$ the $`(m,p)`$ torus knot obtained as the $`(m,p)`$ cable about the unknot with zero framing (see for the precise definition). Then the colored Jones invariant of $`L`$ has the following explicit form:
(1)
$$2\mathrm{sinh}(kh/2)\frac{J_{L,k}(h)}{J_{O,k}(h)}=\sum _{ϵ=\pm 1}\sum _{r=-(k-1)/2}^{(k-1)/2}ϵe^{hmpr^2+hr(m+ϵp)+ϵh/2},$$
where $`O`$ is the unknot with zero framing, and
$$J_{O,k}(h)=\mathrm{sinh}(kh/2)/\mathrm{sinh}(h/2).$$
According to H. Murakami and J. Murakami's result , the quantum dilogarithm invariant $`L_k`$ is the following specification of the colored Jones invariant (up to a multiple of a $`k`$-th root of unity):
(2)
$$L_k\equiv \underset{h\to 2\pi 𝗂/k}{lim}\frac{J_{L,k}(h)}{J_{O,k}(h)}.$$
In what follows, we shall call this the “hyperbolic specification”. Our result describes the asymptotic expansion of $`L_k`$ when $`k\to \mathrm{\infty }`$.
###### Theorem.
The hyperbolic specification (2) of the colored Jones invariant for the $`(m,p)`$ torus knot $`L`$ has the following asymptotic expansion at large $`k`$:
(3)
$$L_ke^{-\frac{𝗂\pi }{2k}\left(\frac{m}{p}+\frac{p}{m}\right)}=\sum _{j=1}^{mp-1}L_k^{(j)}+L_k^{(\mathrm{\infty })},$$
where
(4)
$$L_k^{(j)}=2(2mp/k)^{-3/2}e^{𝗂\frac{\pi }{4}}(-1)^{(k-1)j}e^{-𝗂\frac{\pi kj^2}{2mp}}j^2\mathrm{sin}(\pi j/m)\mathrm{sin}(\pi j/p),$$
and
(5)
$$L_k^{(\mathrm{\infty })}=\frac{1}{4}e^{𝗂\pi kmp/2}\sum _{n\ge 1}\frac{1}{n!}\left(\frac{𝗂\pi }{2kmp}\right)^{n-1}\frac{\partial ^{2n}(x\tau _L(x))}{\partial x^{2n}}|_{x=0},$$
see formula (6) below for the definition of the function $`\tau _L(x)`$.
From eqns (3)–(5) it is easily seen that $`|L_k|\sim k^{3/2}`$ as $`k\to \mathrm{\infty }`$.
###### Corollary.
The volume conjecture holds true for all torus knots, i.e.
$$\underset{k\to \mathrm{\infty }}{lim}k^{-1}\mathrm{log}|L_k|=0.$$
In the next section we prove the Theorem by using an integral representation for the Gaussian sum in formula (1).
###### Acknowledgments.
We are grateful to D. Borisov, T. Kärki and H. Murakami for discussions. R.K. thanks L.D. Faddeev for his constant support in this work. The work of R.K. is supported by Finnish Academy, and in part by RFFI grant 99-01-00101.
## 2. Proof of the Theorem
To begin with, define the following function:
(6)
$$\tau _L(z)\equiv 2\mathrm{sinh}(mz)\mathrm{sinh}(pz)/\mathrm{sinh}(mpz).$$
It is related to the Alexander polynomial of the knot $`L`$,
$$\mathrm{\Delta }_L(t)\equiv \frac{(t^{mp/2}-t^{-mp/2})(t^{1/2}-t^{-1/2})}{(t^{m/2}-t^{-m/2})(t^{p/2}-t^{-p/2})},$$
through the formula
$$\tau _L(z)=2\mathrm{sinh}(z)/\mathrm{\Delta }_L(e^{2z}).$$
According to the result of Milnor and Turaev , the function $`\tau _L(z)`$ describes the Reidemeister torsion of the knot complement.
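The relation between $`\tau _L`$ and the Alexander polynomial is easy to verify numerically; writing $`s=t^{1/2}=e^z`$ avoids any branch ambiguity.

```python
# Spot check of tau_L(z) = 2 sinh(z) / Delta_L(e^{2z}) for several (m,p).
import cmath

def tau(mm, pp, z):
    return 2.0 * cmath.sinh(mm * z) * cmath.sinh(pp * z) / cmath.sinh(mm * pp * z)

def alexander(mm, pp, s):
    """Delta_L(t) with t = s^2, so that s plays the role of t^{1/2}."""
    return ((s ** (mm * pp) - s ** (-mm * pp)) * (s - 1.0 / s)
            / ((s ** mm - s ** (-mm)) * (s ** pp - s ** (-pp))))

z = 0.37 + 0.11j
errors = [abs(tau(mm, pp, z) - 2.0 * cmath.sinh(z) / alexander(mm, pp, cmath.exp(z)))
          for mm, pp in [(2, 3), (2, 5), (3, 4)]]
```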
###### Lemma 1.
For any real $`\varphi `$ satisfying the condition $`\mathrm{Re}(he^{-2𝗂\varphi })>0`$, formula (1) has the following integral representation
(7)
$$2\mathrm{sinh}(kh/2)\frac{J_{L,k}(h)}{J_{O,k}(h)}=\sqrt{\frac{mp}{\pi h}}e^{\frac{h}{4}\left(\frac{m}{p}+\frac{p}{m}\right)}\int _{C_\varphi }𝑑ze^{mp\left(kz-\frac{z^2}{h}\right)}\tau _L(z),$$
where the integration path $`C_\varphi `$ is the image of the real line under the mapping
(8)
$$x\mapsto xe^{𝗂\varphi }\in C_\varphi ,$$
with the induced orientation.
###### Proof.
First note that for any complex $`h\ne 0`$ and any complex $`w`$, the following Gaussian integral formula holds:
(9)
$$\sqrt{\pi h}e^{hw^2}=\int _{C_\varphi }𝑑ze^{-z^2/h+2wz},$$
where the choice of the integration path $`C_\varphi `$, described in the formulation of the lemma, is dictated by the convergence condition of the integral, and the square root is the analytical continuation from positive values of $`h`$. Now, starting from the right hand side of eqn (1), we collect the terms containing the summation variable $`r`$ into a complete square:
$$e^{-\frac{h}{4}\left(\frac{m}{p}+\frac{p}{m}\right)}\mathrm{r}.\mathrm{h}.\mathrm{s}.(\text{1})=\sum _{ϵ=\pm 1}ϵ\sum _{r=0}^{k-1}e^{hmp\left(r-\frac{k-1}{2}+\frac{m+ϵp}{2mp}\right)^2}$$
— now formula (9) can be applied to the $`r`$-dependent exponential —
$$=\sum _{ϵ=\pm 1}ϵ\sum _{r=0}^{k-1}\frac{1}{\sqrt{\pi hmp}}\int _{C_\varphi }𝑑ze^{-\frac{z^2}{hmp}+z(2r-k+1+p^{-1}+ϵm^{-1})}$$
— with subsequent evaluation of the summations —
$$=\frac{2}{\sqrt{\pi hmp}}\int _{C_\varphi }𝑑ze^{-\frac{z^2}{hmp}+z/p}\mathrm{sinh}(kz)\mathrm{sinh}(z/m)/\mathrm{sinh}(z)$$
— the exponential $`\mathrm{exp}(z/p)`$ in the integrand, being multiplied by an odd function of $`z`$ (w.r.t. $`zz`$), can by replaced by it’s odd part —
$$=\frac{2}{\sqrt{\pi hmp}}\int _{C_\varphi }𝑑ze^{-\frac{z^2}{hmp}}\mathrm{sinh}(kz)\mathrm{sinh}(z/m)\mathrm{sinh}(z/p)/\mathrm{sinh}(z)$$
— reversing the previous argument, replace $`\mathrm{sinh}(kz)`$ by an exponential —
$$=\frac{2}{\sqrt{\pi hmp}}\int _{C_\varphi }𝑑ze^{-\frac{z^2}{hmp}+kz}\mathrm{sinh}(z/m)\mathrm{sinh}(z/p)/\mathrm{sinh}(z)$$
— and rescale the integration variable ($`z\to mpz`$) —
$$=\sqrt{\frac{mp}{\pi h}}\int _{C_\varphi }𝑑ze^{mp\left(kz-\frac{z^2}{h}\right)}\tau _L(z)$$
with notation (6) being used. ∎
Representation (7) is similar to Rozansky’s formula (2.2) from , though the latter is only a shorthand for the power series expansion.
###### Lemma 2.
The hyperbolic specification (2) of the colored Jones invariant for torus knots has the integral representation
(10)
$$2L_k=(mpk/2)^{3/2}e^{\frac{𝗂\pi }{2k}\left(\frac{m}{p}+\frac{p}{m}+\frac{k}{2}\right)}\int _{C_\varphi }dz\,e^{\pi mpk(z+\frac{𝗂}{2}z^2)}z^2\tau _L(\pi z),$$
where integration path $`C_\varphi `$ is defined in (8) with $`0<\varphi <\pi /2`$.
###### Proof.
The left hand side of eqn (7) vanishes at $`h=2\pi 𝗂/k`$ due to the factor $`\mathrm{sinh}(kh/2)`$. This means that the integral in the right hand side vanishes as well. So, differentiating simultaneously the $`\mathrm{sinh}`$-function in the left hand side and the integral in the right of eqn (7) with respect to $`h`$, then putting $`h=2\pi 𝗂/k`$, and rescaling the integration variable by $`\pi `$, we rewrite the result in the form of eqn (10). ∎
###### Proof of the Theorem.
At large $`k`$ one can use the steepest descent method for evaluating the integral in (10). The only stationary point at $`z=𝗂`$ is separated from the integration path by a finite number of poles of the function $`\tau _L(\pi z)`$ which are located at $`z_j=𝗂j/mp`$, $`0<j<mp`$. Thus, taking into account convergence at infinity, we can shift the path $`C_\varphi `$ by the imaginary unit and add integration along a closed contour encircling the points $`z_j`$ in the counterclockwise direction. The integration along the shifted path $`C_\varphi `$ can be transformed by the change of the integration variable $`z\to z+𝗂`$:
$$\begin{array}{c}\int _{𝗂+C_\varphi }dz\,e^{\pi mpk(z+\frac{𝗂}{2}z^2)}z^2\tau _L(\pi z)\hfill \\ \hfill =e^{𝗂\pi mpk/2}\int _{C_\varphi }dz\,e^{𝗂\pi mpkz^2/2}(z+𝗂)^2\tau _L(\pi z+𝗂\pi )\\ \hfill =2𝗂e^{𝗂\pi mpk/2}\int _{C_\varphi }dz\,e^{𝗂\pi mpkz^2/2}z\tau _L(\pi z),\end{array}$$
where in the last line we have used the (quasi) periodicity property of the $`\mathrm{sinh}`$-function, the fact that $`m`$ and $`p`$ are mutually prime, and disregarded the terms odd with respect to the sign change $`z\to -z`$. Now, the obtained formula straightforwardly leads to the asymptotic power series $`L_k^{(\mathrm{\infty })}`$ in eqn (3) through the Taylor series expansion of the function $`z\tau _L(\pi z)`$ at $`z=0`$, and evaluation of the Gaussian integrals. The other terms in eqn (3) come from the evaluation of the contour integral by the residue method. ∎
# POPULATION OF THE SCATTERED KUIPER BELT<sup>1</sup>
<sup>1</sup>Based on observations collected at Canada-France-Hawaii Telescope, which is operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique de France, and the University of Hawaii.
## 1 Introduction
The outer solar system is richly populated by small bodies in a thick trans-Neptunian ring known as the Kuiper Belt (Jewitt et al. 1996). It is widely believed that the Kuiper Belt Objects (KBOs) are remnant planetesimals from the formation era of the solar system and are composed of the oldest, least-modified materials in our solar system (Jewitt 1999). KBOs may also supply the short-period comets (Fernández and Ip 1983; Duncan et al. 1988). The orbits of KBOs are not randomly distributed within the Belt but can be grouped into three classes. The Classical KBOs constitute about 2/3 of the known objects and display modest eccentricities ($`e\lesssim 0.1`$) and semi-major axes ($`41\text{ AU}<a<46\text{ AU}`$) that guarantee a large separation from Neptune at all times. The Resonant KBOs, which comprise most of the remaining known objects, are trapped in mean-motion resonances with Neptune, principally the 3:2 resonance at 39.4 AU. In 1996, we discovered the first example of a third dynamical class, the Scattered KBOs (SKBOs): $`1996\mathrm{T}\mathrm{L}_{66}`$ occupies a large ($`a\approx 85`$ AU), highly eccentric ($`e\approx 0.6`$), and inclined ($`i\approx 25^{\circ }`$) orbit (Luu et al. 1997). It is the defining member of the SKBOs, characterized by large eccentricities and perihelia near $`35`$ AU (Duncan and Levison 1997). The SKBOs are thought to originate from Neptune scattering, as evidenced by the fact that all SKBOs found to date have perihelia in a small range near Neptune ($`34\text{ AU}<q<36\text{ AU}`$).
In this letter we report preliminary results from a new survey of the Kuiper Belt undertaken at the Canada-France-Hawaii Telescope (CFHT) using a large format Charge-Coupled Device (CCD). This survey has yielded 3 new SKBOs. Combined with published results from an earlier survey (Jewitt et al. 1998) and with dynamical models (Duncan and Levison 1997), we obtain new estimates of the population statistics of the SKBOs. The main classes of KBOs appear in Figure 6, a plan view of the outer solar system.
## 2 Survey Data
Observations were made with the 3.6m CFHT and the 12288 x 8192 pixel Mosaic CCD (Cuillandre et al., in preparation). Ecliptic fields were imaged within 1.5 hours of opposition to enhance the apparent sky-plane speed difference between the distant KBOs ($`\sim 3`$ arc sec/hr) and foreground main-belt asteroids ($`\sim 25`$ arc sec/hr). Parameters of the CFHT survey are summarized in Table 1, where they are compared with parameters of the earlier survey used to identify $`1996\mathrm{T}\mathrm{L}_{66}`$ (Luu et al. 1997 and Jewitt et al. 1998). We include both surveys in our analysis of the SKBO population.
Artificial moving objects were added to the data to quantify the sensitivity of the moving object detection procedure (Trujillo and Jewitt 1998). The seeing in the survey varied from 0.7 arc sec to 1.1 arc sec (FWHM). Accordingly, we analysed the data in 3 groups based on the seeing. Artificial moving objects were added to bias-subtracted twilight sky-flattened images, with profiles matched to the characteristic point-spread function for each image group. These images were then passed through the data analysis pipeline. The detection efficiency was found to be uniform with respect to sky-plane speed in the 1 – 10 arc sec/hr range, with efficiency variations due only to object brightness.
The seeing-corrected efficiency function was fitted with a hyperbolic tangent profile for use in the maximum-likelihood SKBO orbital simulation described in Section 3:
$$\epsilon =\frac{\epsilon _{\mathrm{max}}}{2}\left(\mathrm{tanh}\left(\frac{m_{\mathrm{R50}}m_\mathrm{R}}{\sigma }\right)+1\right),$$
(1)
where $`0<\epsilon <1`$ is the efficiency at detecting objects of red magnitude $`m_\mathrm{R}`$, $`\epsilon _{\mathrm{max}}=0.83`$ is the maximum efficiency, $`m_{\mathrm{R50}}=23.6`$ is the magnitude at which $`\epsilon =\epsilon _{\mathrm{max}}/2`$, and $`\sigma =0.4`$ magnitudes is the characteristic range over which the efficiency drops from $`\epsilon _{\mathrm{max}}`$ to zero.
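The fitted efficiency profile of Eq. (1) is easy to evaluate directly; a minimal sketch using the quoted parameters:

```python
import math

# Eq. (1) with the fitted parameters: eps_max = 0.83, m_R50 = 23.6, sigma = 0.4 mag
def detection_efficiency(m_r, eps_max=0.83, m_r50=23.6, sigma=0.4):
    return 0.5 * eps_max * (math.tanh((m_r50 - m_r) / sigma) + 1.0)

print(detection_efficiency(23.6))  # 0.415, i.e. eps_max/2 at m_R50
print(detection_efficiency(21.0))  # ~0.83, the bright-end plateau
print(detection_efficiency(25.0))  # ~0, well past the survey limit
```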
The orbital parameters of the 4 established SKBOs are listed in Table 2. It should be noted that the 3 most recently discovered SKBOs have been observed over a time base of about 3 months, so their fitted orbits are subject to revision. However, the 4 year orbit of $`1996\mathrm{T}\mathrm{L}_{66}`$ has changed very little since its initial announcement with a 3 month arc ($`a=85.754`$, $`e=0.594`$, $`i=23.9`$, $`\omega =187.7`$, $`\mathrm{\Omega }=217.8`$, $`M=357.3`$, and epoch 1996 Nov 13; Marsden 1997). A plot of eccentricity versus semi-major axis appears in Figure 6, showing the dynamical distinction of SKBOs from the other KBOs. All SKBOs have perihelia $`34\text{ AU}<q<36\text{ AU}`$. To avoid confusion with Classical and Resonant KBOs also having perihelia in this range, we concentrate on the SKBOs with semi-major axes $`a>50`$ AU. In so doing, we guarantee that our estimates provide a solid lower bound to the true population of SKBOs.
## 3 Population Estimates
The number of SKBOs can be crudely estimated by simple extrapolation from the discovered objects. The faintest SKBO in our sample, $`1999\mathrm{C}\mathrm{Y}_{118}`$, was bright enough for detection only during the 0.24% of its $`\sim 1000`$ year orbit spent near perihelion. Our 20.2 square degrees of ecliptic observations represent $`1/500`$th of the total $`\pm 15^{\circ }`$ thick ecliptic, roughly the thickness of the Kuiper Belt (Jewitt et al. 1996). Thus, we crudely infer the population of SKBOs to be of order $`500/(2.4\times 10^{-3})\approx 2\times 10^5`$. To more accurately quantify the population, we use a maximum-likelihood method to simulate the observational biases of our survey.
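The arithmetic of this crude estimate can be written out explicitly (a sketch; the fractions are those quoted above):

```python
# Crude SKBO population estimate: one faint detection (1999 CY118),
# visible for 0.24% of its orbit, in a survey covering ~1/500 of the
# +/- 15 deg ecliptic band.
n_detected = 1
orbit_fraction = 2.4e-3      # fraction of the orbit spent near perihelion
sky_fraction = 1.0 / 500.0   # surveyed fraction of the ecliptic band

n_total = n_detected / (orbit_fraction * sky_fraction)
print(f"{n_total:.1e}")      # ~2.1e5, i.e. of order 2 x 10^5
```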
Our discovery of 4 SKBOs with the CFHT and UH telescopes provides the data for the maximum-likelihood simulation. We assess the intrinsic population of SKBOs by the following procedure (cf. Trujillo 2000): (1) create a large number ($`10^8`$) of artificial SKBO orbits drawn from an assumed distribution; (2) determine the ecliptic latitude, longitude, and sky-plane velocity of each artificial object, using the equations of Sykes and Moynihan (1996; a sign error was found in equation 2 of their text and corrected before use), and compute the object’s brightness using the $`H`$, $`G`$ formalism of Bowell et al. (1989), assuming an albedo of 0.04; (3) determine the detectability of each artificial object based on the detection efficiency and sky-area imaged in the two surveys; (4) create a histogram of “detected” artificial objects in $`a`$-$`e`$-$`i`$-radius space, with a bin size sufficiently small such that binning effects are negligible; (5) based on this histogram, compute the probability of finding the true, binned, observed distribution of the four SKBOs, assuming Poisson detection statistics; and (6) repeat the preceding steps, varying the total number of objects in the distribution ($`N`$) and the slope $`(q^{\prime }=3,4)`$ of the differential size distribution to find the model most likely to produce the observations. A full description of the model parameters is found in Table 3.
This model was designed to match the SKBO population inclination distribution, eccentricity distribution, and ecliptic plane surface density as a function of heliocentric distance, as found in the only published SKBO dynamical simulation to date, that of Duncan and Levison (1997). It should be noted that the results of the 4 billion year simulations of Duncan and Levison (1997) were based on only 20 surviving particles, averaged over the last 1 billion years of their simulation for improved statistical accuracy. The total number of objects and the size distribution of objects remained free parameters in our model.
## 4 Results
The results of the maximum-likelihood simulation appear in Figure 6. Confidence levels were placed by finding the interval over which the integrated probability distribution corresponded to 68.27% and 99.73% of the total, hereafter referred to as $`1\sigma `$ and $`3\sigma `$ limits, respectively. The total number of SKBOs with radii between 50 km and 1000 km is $`N=(3.1_{-1.3}^{+1.9})\times 10^4`$ ($`1\sigma `$) in the $`q^{\prime }=4`$ case, with $`4.0\times 10^3<N<1.1\times 10^5`$ $`3\sigma `$ limits. The $`q^{\prime }=3`$ case is similar with $`N=(1.4_{-0.5}^{+1.1})\times 10^4`$ ($`1\sigma `$), with $`2.0\times 10^3<N<5.3\times 10^4`$ $`3\sigma `$ limits. The $`q^{\prime }=4`$ and $`q^{\prime }=3`$ best fits are equally probable at the $`<1\sigma `$ level; however, we prefer the $`q^{\prime }=4`$ case as recent measurements of the Kuiper Belt support this value (Jewitt et al. 1998, Gladman and Kavelaars 1998, Chiang and Brown, 1999). This population is similar in number to the Kuiper Belt interior to 50 AU, which contains about $`10^5`$ objects (Jewitt et al. 1998). The observation that only a few percent of the known KBOs are SKBOs is due to the fact that the SKBOs are only visible in magnitude-limited surveys during the small fraction of their orbits when near perihelion. In addition, the high perihelion velocities, $`v`$, of eccentric SKBOs may have implications for erosion in the Kuiper Belt since ejecta mass, $`m_\mathrm{e}`$, scales with impact energy, $`E`$, as $`m_\mathrm{e}\propto E^2\propto v^4`$ (Fujiwara et al. 1977).
Extrapolating the $`q^{\prime }=4`$ distribution to the size range $`1\text{ km}<r<10\text{ km}`$ yields $`N\approx 4\times 10^9`$ SKBOs in the same size range as the observed cometary nuclei. This is comparable to the $`10^9`$ needed for the SKBOs to act as the sole source of the short-period comets (Levison and Duncan 1997).
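The extrapolation can be sketched by normalizing a $`q^{\prime }=4`$ power law to the derived population and integrating it over the cometary size range (the normalization constant $`A`$ is introduced here for illustration):

```python
# Normalize dN/dr = A r^-4 (q' = 4) to N = 3.1e4 objects with
# 50 km < r < 1000 km, then count bodies with 1 km < r < 10 km. Radii in km.
N_LARGE = 3.1e4
R1, R2 = 50.0, 1000.0

A = 3.0 * N_LARGE / (R1**-3 - R2**-3)   # from N = (A/3)(r_a^-3 - r_b^-3)

def number_between(r_a, r_b):
    return (A / 3.0) * (r_a**-3 - r_b**-3)

N_comet = number_between(1.0, 10.0)
print(f"{N_comet:.1e}")  # ~3.9e9, consistent with the quoted ~4e9
```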
The total mass $`M`$ of objects assuming $`q^{\prime }=4`$ is
$$M=2.3\times 10^{-3}p_R^{-3/2}\frac{\rho }{\text{kg m}^{-3}}\mathrm{log}(\frac{r_{\mathrm{max}}}{r_{\mathrm{min}}})M_{\oplus },$$
(2)
where red geometric albedo $`p_\mathrm{R}=0.04`$, density $`\rho =2000\text{ kg m}^{-3}`$, the largest object $`r_{\mathrm{max}}=1000`$ km (Pluto-sized), and smallest object $`r_{\mathrm{min}}=50`$ km yields $`M\approx 0.05M_{\oplus }`$, where $`M_{\oplus }=6\times 10^{24}`$ kg is the mass of the earth. This is comparable to the total mass of the Classical and Resonant KBOs (0.2 $`M_{\oplus }`$, Jewitt et al. 1998). If only 1% of SKBOs have survived for the age of the solar system (Duncan and Levison 1997), then an initial population of $`10^7`$ SKBOs ($`50\text{ km}<r<1000\text{ km}`$) and a SKBO primordial mass $`\approx 5M_{\oplus }`$ are inferred.
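As a consistency check, the quoted total mass can be recomputed directly from the derived population and the $`q^{\prime }=4`$ size distribution, independently of the normalization of Eq. (2) (a sketch; SI values as quoted in the text):

```python
import math

# Mass of N = 3.1e4 SKBOs with dN/dr ~ r^-4 between 50 and 1000 km,
# bulk density 2000 kg m^-3. SI units; M_earth as given in the text.
RHO = 2000.0                  # kg m^-3
R_MIN, R_MAX = 5.0e4, 1.0e6   # m
N = 3.1e4
M_EARTH = 6.0e24              # kg

A = 3.0 * N / (R_MIN**-3 - R_MAX**-3)
# M = int (4/3) pi rho r^3 (A r^-4) dr = (4/3) pi rho A ln(r_max/r_min)
M_total = (4.0 / 3.0) * math.pi * RHO * A * math.log(R_MAX / R_MIN)
print(M_total / M_EARTH)      # ~0.05 Earth masses, as quoted
```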
As our observations are concentrated near the ecliptic, we expected most of the discovered objects to have low inclinations. However, of the 4 established SKBOs, 3 have high inclinations ($`i\approx 20^{\circ }`$). The possibility of a Gaussian distribution of inclinations centered at $`i\approx 20^{\circ }`$ was tested; however, the fit was not better than the uniform $`i`$ case at the $`3\sigma `$ level. If the SKBOs were distributed as such, the total numbers of SKBOs reported here would be enhanced by a factor $`\sim 2`$. Combining this model-dependent uncertainty, the orbital uncertainties of the 4 SKBOs, and the computed Poisson noise, we estimate that our predictions provide an order-of-magnitude constraint on the population of the SKBOs. As additional SKBOs become evident in our continuing survey and as recovery observations of the SKBOs are secured, we will refine this estimate.
## 5 Summary
We have constrained the SKBO population using surveys undertaken on Mauna Kea with two CCD mosaic cameras on two telescopes. The surveys cover 20.2 square degrees to limiting red magnitude 23.6 and 51.5 square degrees to limiting red magnitude 22.5. We find the following:
(1) The SKBOs exist in numbers comparable to the KBOs. The observations are consistent with a population of $`N=(3.1_{-1.3}^{+1.9})\times 10^4`$ ($`1\sigma `$) SKBOs in the radius range $`50\text{ km}<r<1000\text{ km}`$ with a differential power-law size distribution exponent of $`q^{\prime }=4`$.
(2) The SKBO population in the size range of cometary nuclei is large enough to be the source of the short-period comets.
(3) The present mass of the SKBOs is approximately $`0.05M_{\oplus }`$. When corrected for depletion inferred from dynamical models (Duncan and Levison 1997), the initial mass of SKBOs in the early solar system approached $`5M_{\oplus }`$.
We are continuing our searches for these unusual objects as it is clear that they play an important role in outer solar system dynamics.
We thank Ken Barton for help at the Canada-France-Hawaii Telescope. This work was supported by a grant to DCJ from NASA.
## 6 Figure Captions |
# NEUTRON STARS: RECENT DEVELOPMENTS
## 1 Introduction
Neutron stars are complicated many-body systems of neutrons, protons, electrons and muons with possibly additional intricate phases of pion, kaon, hyperon or quark condensates. It is therefore appropriate that neutron stars are included at this “Xth Intl. Conf. on Recent Progress in Many-body Theories”.
A brief history of the most important discoveries of this millennium concerning neutron stars is listed in Table I. Most are well known, except perhaps for the most recent ones discussed in the following sections: Sec. 2 lists neutron star masses from binary pulsars and X-ray binaries and specifically discusses recent masses and radii from quasi-periodic oscillations in low-mass X-ray binaries. In Sec. 3 we turn to modern equations of state for neutron star matter, with particular attention to the uncertainty in the stiffness of the equation of state at high densities and causality constraints. Sec. 4 attempts to give an up-to-date account of possible phase transitions to kaon and pion condensates, hyperon and quark matter, superfluidity, etc. Sec. 5 contains the resulting structure of neutron stars, calculated masses and radii, and compares to observations. In Sec. 6 observational effects of glitches are described when phase transitions occur. Sec. 7 contains novel information on connections between supernova remnants and $`NO_3^{-}`$ peaks in ice cores, X-ray bursts and thermonuclear explosions, gamma ray bursters and neutron star collapse to black holes. Finally, a summary and conclusion is given. For more details we refer to .
## 2 Observed neutron star masses and QPO’s
The best determined neutron star masses are found in binary pulsars and all lie in the range $`(1.35\pm 0.04)M_{\odot }`$ . These masses have been accurately determined from variations in their radio pulses due to doppler shifts as well as from periastron advances of their close elliptic orbits that are strongly affected by general relativistic effects. One exception is the nonrelativistic pulsar PSR J1012+5307 of mass $`M=(2.1\pm 0.8)M_{\odot }`$ .<sup>1</sup>
<sup>1</sup>All uncertainties given here are 95% conf. limits, or $`2\sigma `$.
Several X-ray binary masses have been measured, of which the heaviest are Vela X-1 with $`M=(1.9\pm 0.2)M_{\odot }`$ and Cygnus X-2 with $`M=(1.8\pm 0.4)M_{\odot }`$. Their Kepler orbits are determined by measuring the doppler shift of both the X-ray binary and its companion. To complete the mass determination one needs the orbital inclination, which is determined by eclipse durations, optical light curves, or polarization variations.
The recent discovery of high-frequency brightness oscillations in low-mass X-ray binaries provides a promising new method for determining masses and radii of neutron stars. The kilohertz quasi-periodic oscillations (QPO) occur in pairs and are most likely the orbital frequencies
$`\nu _{QPO}=(1/2\pi )\sqrt{GM/R_{orb}^3},`$ (1)
of accreting matter in Keplerian orbits around neutron stars of mass $`M`$ and its beat frequency with the neutron star spin, $`\nu _{QPO}\nu _s`$. The accretion can for a few QPO’s be tracked to its innermost stable orbit
$`R_{ms}`$ $`=`$ $`6GM/c^2.`$ (2)
For slowly rotating stars the resulting mass is from Eqs. (1,2)
$`M`$ $`\simeq `$ $`2.2M_{\odot }{\displaystyle \frac{\mathrm{kHz}}{\nu _{QPO}}}.`$ (3)
For example, the maximum frequency of the 1060 Hz upper QPO observed in 4U 1820-30 gives $`M\simeq 2.25M_{\odot }`$ after correcting for the $`\nu _s\simeq 275`$ Hz neutron star rotation frequency. If the maximum QPO frequencies of 4U 1608-52 ($`\nu _{QPO}=1125`$ Hz) and 4U 1636-536 ($`\nu _{QPO}=1228`$ Hz) also correspond to innermost stable orbits, the corresponding masses are $`2.1M_{\odot }`$ and $`1.9M_{\odot }`$. Evidence for the innermost stable orbit has been found for 4U 1820-30, where $`\nu _{QPO}`$ displays a distinct saturation with accretion rate, indicating that the orbital frequency cannot exceed that of the innermost stable orbit. However, one caveat is that large accretion leads to radiation that can slow down the rotation of the accreting matter. More observations are needed before a firm conclusion can be made.
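Eqs. (1)-(3) can be combined numerically; the sketch below evaluates the innermost-stable-orbit mass and radius for a slowly rotating star (the constant values are standard, not taken from the text):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # kg

def isco_mass(nu_qpo):
    """Mass (kg) for which nu_qpo (Hz) is the Keplerian frequency, Eq. (1),
    of the innermost stable orbit R_ms = 6GM/c^2, Eq. (2)."""
    return C**3 / (2.0 * math.pi * 6.0**1.5 * G * nu_qpo)

m_kHz = isco_mass(1000.0) / M_SUN
r_ms = 6.0 * G * isco_mass(1000.0) / C**2
print(m_kHz)        # ~2.2 solar masses, reproducing Eq. (3) at 1 kHz
print(r_ms / 1e3)   # ~19.5 km, the corresponding orbital radius
```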
Large neutron star masses of order $`2M_{\odot }`$ would restrict the equation of state (EoS) severely for dense matter, as addressed in the following.
## 3 Modern nuclear equation of states
Recent models for the nucleon-nucleon (NN) interaction, based on the compilation of more than 5000 NN cross sections in the Nijmegen data bank, have reduced the uncertainty in NN potentials. The last Indiana run at higher momenta will further reduce uncertainties in NN interactions. Including many-body effects, three-body forces, relativistic effects, etc., the nuclear EoS has been constructed with reduced uncertainty, allowing for more reliable calculations of neutron star properties . Likewise, recent realistic effective interactions for nuclear matter obeying causality at high densities constrain the EoS severely and thus also the maximum masses of neutron stars. We have elaborated on these analyses by incorporating causality smoothly in the EoS for nuclear matter, allowing for first and second order phase transitions to, e.g., quark matter.
For the discussion of the gross properties of neutron stars we will use the optimal EoS of Akmal, Pandharipande, & Ravenhall (specifically the Argonne $`V_{18}+\delta v+`$ UIX model, hereafter APR98), which is based on the most recent models for the nucleon-nucleon interaction, see Engvik et al. for a discussion of these models, and with the inclusion of a parametrized three-body force and relativistic boost corrections. The EoS for nuclear matter is thus known to some accuracy for densities up to a few times nuclear saturation density $`n_0=0.16`$ fm<sup>-3</sup>. We parametrize the APR98 EoS by a simple form for the compressional and symmetry energies that gives a good fit around nuclear saturation densities and smoothly incorporates causality at high densities such that the sound speed approaches the speed of light. This requires that the compressional part of the energy per nucleon is quadratic in nuclear density with a minimum at saturation but linear at high densities
$`\mathcal{E}`$ $`=`$ $`E_{comp}(n)+S(n)(1-2x)^2`$ (4)
$`=`$ $`\mathcal{E}_0u{\displaystyle \frac{u-2-s}{1+su}}+S_0u^\gamma (1-2x)^2.`$
Here, $`n=n_p+n_n`$ is the total baryon density, $`x=n_p/n`$ the proton fraction and $`u=n/n_0`$ is the ratio of the baryon density to nuclear saturation density. The compressional term in Eq. (4) is parametrized by a simple form which reproduces the saturation density and the binding energy per nucleon $`\mathcal{E}_0=15.8`$ MeV at $`n_0`$ of APR98. The “softness” parameter $`s\approx 0.2`$, which gave the best fit to the data of APR98, is determined by fitting the energy per nucleon of APR98 up to densities of $`n\approx 4n_0`$. For the symmetry energy term we obtain $`S_0=32`$ MeV and $`\gamma =0.6`$ for the best fit. The proton fraction is given by $`\beta `$-equilibrium at a given density.
The one unknown parameter $`s`$ expresses the uncertainty in the EoS at high density, and we shall vary this parameter within the allowed limits in the following, with and without phase transitions, to calculate mass, radius and density relations for neutron stars. The “softness” parameter $`s`$ is related to the incompressibility of nuclear matter as $`K_0=18\mathcal{E}_0/(1+s)\simeq 200`$ MeV. It agrees with the poorly known experimental value , $`K_0\simeq 180`$–250 MeV, which does not restrict it very well. From $`(v_s/c)^2=\partial P/\partial \varepsilon `$, where $`P`$ is the pressure and $`\varepsilon =n(m_n+\mathcal{E})`$ the total energy density, and the EoS of Eq. (4), the causality condition $`c_s\le c`$ requires
$$s\gtrsim \sqrt{\frac{\mathcal{E}_0}{m_n}}\simeq 0.13,$$
(5)
where $`m_n`$ is the mass of the nucleon. With this condition we have a causal EoS that reproduces the data of APR98 at densities up to $`0.6`$–$`0.7`$ fm<sup>-3</sup>. In contrast, the EoS of APR98 becomes superluminal at $`n\approx 1.1`$ fm<sup>-3</sup>. For larger $`s`$ values the EoS is softer, which eventually leads to smaller maximum masses of neutron stars. The observed $`M\approx 1.4M_{\odot }`$ in binary pulsars restricts $`s`$ to be less than $`0.4`$–$`0.5`$ depending on rotation, as shown in calculations of neutron stars below.
In Fig. 1 we plot the sound speed $`(v_s/c)^2`$ for various values of $`s`$ and that resulting from the microscopic calculation of APR98 for $`\beta `$-stable $`pn`$-matter. The form of Eq. (4), with the inclusion of the parameter $`s`$, provides a smooth extrapolation from small to large densities such that the sound speed $`v_s`$ approaches the speed of light. For $`s=0.0`$ ($`s=0.1`$) the EoS becomes superluminal at densities of the order of 1 (6) fm<sup>-3</sup>.
The sound speed of Kalogera & Baym is also plotted in Fig. 1. It jumps discontinuously to the speed of light at a chosen density. With this prescription they were able to obtain an optimum upper bound for neutron star masses and obey causality. This prescription was also employed by APR98. The EoS is thus discontinuously stiffened by taking $`v_s=c`$ at densities above a certain value $`n_c`$ which, however, is lower than $`n_s=5n_0`$ where their nuclear EoS becomes superluminal. This approach stiffens the nuclear EoS for densities $`n_c<n<n_s`$ but softens it at higher densities. Their resulting maximum masses lie in the range $`2.2M_{\odot }\lesssim M\lesssim 2.9M_{\odot }`$. Our approach, however, incorporates causality by reducing the sound speed smoothly towards the speed of light at high densities. Therefore our approach will not yield an absolute upper bound on the maximum mass of a neutron star but gives reasonable estimates based on modern EoS around nuclear matter densities, causality constraints at high densities and a smooth extrapolation between these two limits (see Fig. 1).
At very high densities particles are expected to be relativistic and the sound speed should be smaller than the speed of light, $`v_s^2\lesssim c^2/3`$. Consequently, the EoS should be even softer at high densities, and the maximum masses we obtain with the EoS of (4) are likely to be too high estimates.
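The causal behavior of the parametrization (4) can be verified with finite differences. The sketch below uses pure neutron matter ($`x=0`$) for simplicity, so the densities at which causality is violated differ somewhat from the $`\beta `$-stable matter of Fig. 1; the qualitative conclusion, that $`s=0.2`$ is causal while $`s=0.1`$ eventually is not, is unchanged.

```python
# Sound speed of the parametrized EoS, Eq. (4), for pure neutron matter (x = 0).
# Energies in MeV per nucleon; density in units u = n/n0. n0 cancels in vs^2.
E0, S0, GAMMA = 15.8, 32.0, 0.6   # fit parameters quoted in the text
M_N = 939.6                       # MeV, nucleon rest mass

def energy(u, s):
    """Energy per nucleon, Eq. (4) with proton fraction x = 0."""
    return E0 * u * (u - 2.0 - s) / (1.0 + s * u) + S0 * u**GAMMA

def vs2(u, s, du=1e-5):
    """(v_s/c)^2 = dP/d(energy density) by central differences,
    with P = u^2 dE/du and energy density proportional to u (M_N + E)."""
    dEdu = lambda y: (energy(y + du, s) - energy(y - du, s)) / (2.0 * du)
    pressure = lambda y: y * y * dEdu(y)
    eps = lambda y: y * (M_N + energy(y, s))
    return (pressure(u + du) - pressure(u - du)) / (eps(u + du) - eps(u - du))

grid = [0.5 + 0.1 * i for i in range(600)]   # u from 0.5 to ~60
print(max(vs2(u, 0.2) for u in grid))  # stays below 1: s = 0.2 is causal
print(max(vs2(u, 0.1) for u in grid))  # exceeds 1: s = 0.1 goes superluminal
```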
## 4 Phase transitions
The physical state of matter in the interiors of neutron stars at densities above a few times normal nuclear matter densities is essentially unknown and many first and second order phase transitions have been speculated upon.
### 4.1 Kaon condensation
Kaon condensation in dense matter was suggested by Kaplan and Nelson , and has been discussed in many recent publications . Due to the attraction between $`K^{-}`$ and nucleons, its energy decreases with increasing density, and eventually, if it drops below the electron chemical potential in neutron star matter in $`\beta `$-equilibrium, a Bose condensate of $`K^{-}`$ will appear. It is found that $`K^{-}`$’s condense at densities above $`3`$–$`4\rho _0`$, where $`\rho _0=0.16`$ fm<sup>-3</sup> is normal nuclear matter density. This is to be compared to the central density of $`4\rho _0`$ for a neutron star of mass 1.4$`M_{\odot }`$ according to the estimates of Wiringa, Fiks and Fabrocini using realistic models of nuclear forces.
In neutron matter at low densities, when the interparticle spacing is much larger than the range of the interaction, $`r_0\gg R`$, the kaon interacts strongly many times with the same nucleon before it encounters and interacts with another nucleon. Thus one can use the scattering length as the “effective” kaon-neutron interaction, $`a_{K^{-}N}\simeq -0.41`$ fm, where we ignore the minor proton fraction in nuclear matter. The kaon energy deviates from its rest mass by the Lenz potential
$$\omega _{Lenz}=m_K+\frac{2\pi }{m_R}a_{K^{-}N}n_{NM},$$
(6)
which is the optical potential obtained in the impulse approximation. If hadron masses furthermore decrease with density the condensation will occur at lower densities .
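For orientation, the Lenz limit of Eq. (6) can be evaluated numerically (a sketch; the scattering length is taken with the attractive sign convention, and the vacuum kaon mass is used):

```python
import math

HBARC = 197.327          # MeV fm
M_K, M_N = 493.7, 939.6  # MeV, kaon and nucleon masses
A_KN = -0.41             # fm, K^- N scattering length (negative = attractive here)
N0 = 0.16                # fm^-3, nuclear saturation density

m_red = M_K * M_N / (M_K + M_N)  # reduced mass, MeV

def omega_lenz(n):
    """Eq. (6): kaon energy (MeV) in the low-density Lenz limit."""
    return M_K + 2.0 * math.pi * HBARC**2 / m_red * A_KN * n

print(omega_lenz(N0) - M_K)  # ~ -50 MeV of attraction at n0
print(omega_lenz(3 * N0))    # ~345 MeV at 3 n0
```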
At high densities, when the interparticle spacing is much less than the range of the interaction, $`r_0\ll R`$, the kaon will interact with many nucleons on a distance scale much less than the range of the interaction. The kaon thus experiences the field from many nucleons and the kaon energy deviates from its rest mass by the Hartree potential:
$$\omega _{Hartree}=m_K+n_{NM}V_{K^{}N}(r)d^3r,$$
(7)
As shown in Ref. , the Hartree potential is considerably less attractive than the Lenz potential. Already at rather low densities, when the interparticle distance is comparable to the range of the $`KN`$ interaction, the kaon-nucleon and nucleon-nucleon correlations conspire to reduce the $`K^{-}N`$ attraction significantly . This is also evident from Fig. 2, where the transition from the low density Lenz potential to the high density Hartree potential is calculated by solving the Klein-Gordon equation for kaons in neutron matter in the Wigner-Seitz cell approximation. Results are for square well $`K^{-}N`$-potentials of various ranges $`R`$. For the measured $`K^{-}n`$ scattering lengths and reasonable ranges of interactions, the attraction is reduced by about a factor of 2-3 in cores of neutron stars. Relativistic effects further reduce the attraction at high densities. Consequently, a kaon condensate is less likely in neutron stars due to nuclear correlations.
If the kaon condensate occurs, a mixed phase of kaon condensate and ordinary nuclear matter may coexist, depending on the surface and Coulomb energies involved. The structures would be much like the quark and nuclear matter mixed phases described below.
### 4.2 Pion condensation
Pion condensation is, like kaon condensation, possible in dense neutron star matter (see, e.g., ). If we first neglect the effect of strong correlations of pions with the matter in modifying the pion self-energy, one finds it is favorable for a neutron on the top of the Fermi sea to turn into a proton and a $`\pi ^{-}`$ when
$$\mu _n-\mu _p=\mu _e>m_{\pi ^{-}},$$
(8)
where $`m_{\pi ^{-}}=139.6`$ MeV is the $`\pi ^{-}`$ rest mass. As discussed in the previous subsection, at nuclear matter saturation density the electron chemical potential is $`\sim 100`$ MeV and one might therefore expect the appearance of $`\pi ^{-}`$ at a slightly higher density. One can however not neglect the interaction of the pion with the background matter. Such interactions can enhance the pion self-energy and thereby the pion threshold density, and depending on the chosen parameters, see again Ref. , the critical density for pion condensation may vary from $`n_0`$ to $`4n_0`$. These matters are however not yet settled in a satisfying way, and models with strong nucleon-nucleon correlations tend to suppress both the $`\pi NN`$ and $`\pi \mathrm{\Delta }N`$ interaction vertices, so that a pion condensation in neutron star matter does not occur.
A $`\pi ^0`$ condensate may also form, as recently suggested by Akmal et al. , and appears at a density of $`\approx 0.2`$ fm<sup>-3</sup> for pure neutron matter when the three-body interaction is included, whereas without $`V_{ijk}`$ it appears at much higher densities, i.e. $`\approx 0.5`$ fm<sup>-3</sup>. The $`\pi ^0`$’s are virtual in the same way as the photons keeping a solid together are virtual.
### 4.3 Hyperon matter
Condensates of hyperons $`\mathrm{\Lambda },\mathrm{\Sigma }^{-,0,+},\mathrm{\dots }`$ and Deltas $`\mathrm{\Delta }^{-,0,+,++}`$ also appear when their chemical potential exceeds their effective mass in matter. In $`\beta `$-equilibrium the chemical potentials are related by
$`\mu _n+\mu _e`$ $`=`$ $`\mu _{\mathrm{\Sigma }^{-}}=\mu _{\mathrm{\Delta }^{-}}`$ (9)
$`\mu _n`$ $`=`$ $`\mu _\mathrm{\Lambda }=\mu _{\mathrm{\Sigma }^0}=\mu _{\mathrm{\Delta }^0}`$ (10)
$`\mu _p`$ $`=`$ $`\mu _{\mathrm{\Delta }^+}.`$ (11)
The $`\mathrm{\Sigma }^{-}`$ appears via weak strangeness non-conserving interactions $`e^{-}+n\to \mathrm{\Sigma }^{-}+\nu _e`$, when $`\mu _{\mathrm{\Sigma }^{-}}>\omega _{\mathrm{\Sigma }^{-}}`$, and $`\mathrm{\Lambda }`$ hyperons when $`\mu _\mathrm{\Lambda }>\omega _\mathrm{\Lambda }`$. If we neglect interactions, one would expect the $`\mathrm{\Sigma }^{-}`$ to appear at lower densities than the $`\mathrm{\Lambda }`$, even though the $`\mathrm{\Sigma }^{-}`$ is more massive, due to the large electron chemical potential. The threshold densities for noninteracting $`\mathrm{\Sigma }^{-},\mathrm{\Delta }^{-},\mathrm{\Lambda }`$ are relatively low, indicating that these condensates are present in cores of neutron stars if their interactions can be ignored.
Hyperon energies are, however, strongly affected in dense nuclear matter by interactions and correlations with the nucleons and other hyperons in a condensate. If hyperons have the short range repulsion and three-body interactions as nucleons, a condensate of hyperons become less likely.
### 4.4 Quark matter
Eventually at high densities, we expect hadrons to deconfine to quark matter or, in other words, chiral symmetry to be restored. This transition has been extensively studied by the Bag model equation of state (EoS) for quark matter which leads to a first order phase transition from hadronic to quark matter at a density and temperature determined by the parameters going into both EoS. In the bag model the quarks are assumed to be confined to a finite region of space, the so-called ’bag’, by a vacuum pressure $`B`$. Adding the Fermi pressure and interactions computed to order $`\alpha _s=g^2/4\pi `$, where $`g`$ is the QCD coupling constant, the total pressure for three massless quarks of flavor $`f=u,d,s`$, is
$$P=\frac{3\mu _f^4}{4\pi ^2}(1-\frac{2}{\pi }\alpha _s)-B+P_e+P_\mu ,$$
(12)
where $`P_{e,\mu }`$ are the electron and muon pressures, e.g., $`P_e=\mu _e^4/12\pi ^2`$. A Fermi gas of quarks of flavor $`i`$ has density $`n_i=k_{Fi}^3/\pi ^2`$, due to the three color states. A finite strange quark mass has a minor effect on the EoS since quark chemical potentials $`\mu _q\stackrel{>}{}m_N/3`$ are typically much larger. The value of the bag constant $`B`$ is poorly known, and we present results using two representative values, $`B=150`$ MeVfm<sup>-3</sup> and $`B=200`$ MeVfm<sup>-3</sup>. We take $`\alpha _s=0.4`$; however, similar results can be obtained with smaller $`\alpha _s`$ and larger $`B`$.
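As a rough numerical check of Eq. (12), the sketch below evaluates the quark-matter pressure for three massless flavors at a common chemical potential. This is a simplification: the electron and muon terms and beta equilibrium are omitted, and the sample chemical potentials are assumed purely for illustration.

```python
import math

HBARC = 197.327  # MeV fm (hbar*c), converts MeV^4 to MeV fm^-3

def quark_pressure(mu_f, B, alpha_s=0.4):
    """Bag-model pressure (MeV fm^-3), Eq. (12) without the lepton terms:
    P = 3 mu_f^4/(4 pi^2) (1 - 2 alpha_s/pi) - B, for flavors u, d, s
    at a common chemical potential mu_f (MeV); B in MeV fm^-3."""
    kinetic = 3.0 * mu_f**4 / (4.0 * math.pi**2) / HBARC**3
    return kinetic * (1.0 - 2.0 * alpha_s / math.pi) - B

# The pressure changes sign as mu_f grows past the bag pressure:
p_low = quark_pressure(300.0, B=150.0)   # vacuum pressure dominates
p_high = quark_pressure(450.0, B=150.0)  # quark matter favored
```

With these assumed numbers the sign change falls near $`\mu _f`$ of order 380 MeV, i.e. somewhat above $`m_N/3`$, consistent with the scales quoted in the text.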
The quark and nuclear matter mixed phase has continuous pressures and densities due to the general Gibbs criteria for two-component systems. There are no first order phase transitions but at most two second order phase transitions: one at a lower density, where quark matter first appears in nuclear matter, and one at a very high density (if gravitationally stable), where all nucleons are finally dissolved into quark matter. This mixed phase does not, however, include local surface and Coulomb energies of the quark and nuclear matter structures. If the interface tension between quark and nuclear matter is too large, the mixed phase is not favored energetically due to the surface and Coulomb energies associated with forming these structures. The neutron star will then have a core of pure quark matter with a mantle of nuclear matter surrounding it, and the two phases coexist via a first order phase transition or Maxwell construction. For a small or moderate interface tension the quarks are confined in droplet, rod- and plate-like structures, as found in the inner crust of neutron stars (see Fig. 3).
### 4.5 Superfluidity in baryonic matter
The presence of neutron superfluidity in the crust and the inner part of neutron stars is considered well established in the physics of these compact stellar objects. In the low density outer part of a neutron star, the neutron superfluidity is expected mainly in the attractive $`{}_{}{}^{1}S_{0}^{}`$ channel. At higher density, the nuclei in the crust dissolve, and one expects a region consisting of a quantum liquid of neutrons and protons in beta equilibrium. The proton contaminant should be superfluid in the $`{}_{}{}^{1}S_{0}^{}`$ channel, while neutron superfluidity is expected to occur mainly in the coupled $`{}_{}{}^{3}P_{2}^{}`$-$`{}_{}{}^{3}F_{2}^{}`$ two-neutron channel. In the core of the star any superfluid phase should finally disappear.
Dilute Fermi systems can now be studied in the laboratory, as atomic gases have recently been cooled down to nanokelvin temperatures, similar to Bose-Einstein condensates. Degeneracy has been observed, and BCS gaps are currently being searched for. According to Gorkov and Melik-Barkhudarov the gap is
$`\mathrm{\Delta }=\left({\displaystyle \frac{2}{e}}\right)^{7/3}{\displaystyle \frac{k_F^2}{2m}}\mathrm{exp}\left[{\displaystyle \frac{\pi }{2ak_F}}\right]`$ (13)
in the dilute limit where the Fermi momentum times the scattering length is small, $`k_F|a|1`$.
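A small sketch of Eq. (13), with units handled via $`\mathrm{}^2/m_n`$ $``$ 41.44 MeV fm<sup>2</sup>. The scattering length and Fermi momenta below are assumed for illustration (the free neutron-neutron value $`a`$ $``$ $`18.5`$ fm), and at these momenta the dilute-limit condition $`k_F|a|1`$ is only marginally satisfied, so the numbers are purely indicative.

```python
import math

HBARC2_OVER_MN = 41.44  # hbar^2 / m_n in MeV fm^2 (neutron)

def gorkov_gap(kF, a):
    """Pairing gap (MeV) from Eq. (13):
    Delta = (2/e)^(7/3) * (kF^2 / 2m) * exp(pi / (2 a kF)),
    with kF in fm^-1 and scattering length a in fm (a < 0, attractive)."""
    prefac = (2.0 / math.e) ** (7.0 / 3.0)
    ekin = 0.5 * HBARC2_OVER_MN * kF**2  # hbar^2 kF^2 / 2m in MeV
    return prefac * ekin * math.exp(math.pi / (2.0 * a * kF))

# The gap is exponentially suppressed toward low density:
g_low = gorkov_gap(0.05, -18.5)
g_high = gorkov_gap(0.10, -18.5)
```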
Recently, color superconductivity in quark matter has been taken up again, since the quark color interaction has been conjectured to be large. Correspondingly large gaps of order tens of MeV are found. If the strange quark mass is sufficiently small, color-flavor locking occurs among the up, down and strange quarks. We refer to T. Schaefer, these proceedings, for further details.
## 5 Calculated neutron star masses and radii
In order to obtain the mass and radius of a neutron star, we have solved the Tolman-Oppenheimer-Volkov equation with and without rotational corrections. The equations of state employed are given by the $`pn`$-matter EoS with $`s=0.13,0.2,0.3,0.4`$ with nucleonic degrees of freedom only. In addition we have selected two representative values for the bag-model parameter $`B`$, namely 150 and 200 MeVfm<sup>-3</sup>, for our discussion of eventual phase transitions. The quark phase is linked with our $`pn`$-matter EoS from Eq. (4) with $`s=0.2`$ through either a mixed phase construction or a Maxwell construction. For $`B=150`$ MeVfm<sup>-3</sup>, the mixed phase begins at 0.51 fm<sup>-3</sup> and the pure quark matter phase begins at $`1.89`$ fm<sup>-3</sup>. Finally, for $`B=200`$ MeVfm<sup>-3</sup>, the mixed phase starts at $`0.72`$ fm<sup>-3</sup> while the pure quark phase starts at $`2.11`$ fm<sup>-3</sup>. In the case of a Maxwell construction linking the $`pn`$ and the quark matter EoS, we obtain for $`B=150`$ MeVfm<sup>-3</sup> that the pure $`pn`$ phase ends at $`0.92`$ fm<sup>-3</sup> and the pure quark phase starts at $`1.215`$ fm<sup>-3</sup>, while the corresponding numbers for $`B=200`$ MeVfm<sup>-3</sup> are $`1.04`$ and $`1.57`$ fm<sup>-3</sup>.
None of the equations of state from either the pure $`pn`$ phase or with a mixed phase or Maxwell construction with quark degrees of freedom result in stable configurations for densities above $`\sim 10n_0`$, implying thereby that none of the stars have cores with a pure quark phase. The EoS with $`pn`$ degrees of freedom have masses $`M\stackrel{<}{}2.2M_{\odot }`$ when rotational corrections are accounted for. With the inclusion of the mixed phase, the total mass is reduced since the EoS is softer. However, there is the possibility of making very heavy quark stars for very small bag constants. For pure quark stars there is only one energy scale, namely $`B`$, which provides a homology transformation, and the maximum mass is $`M_{max}=2.0M_{\odot }(58\mathrm{MeV}\mathrm{fm}^{-3}/B)^{1/2}`$ (for $`\alpha _s=0`$). However, for $`B\stackrel{>}{}58\mathrm{MeV}\mathrm{fm}^{-3}`$ a nuclear matter mantle has to be added, and for $`B\stackrel{<}{}58\mathrm{MeV}\mathrm{fm}^{-3}`$ quark matter has lower energy per baryon than <sup>56</sup>Fe and is thus the ground state of strongly interacting matter. Unless the latter is the case, we can thus exclude the existence of such $`2.2`$–$`2.3M_{\odot }`$ quark stars.
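The homology relation quoted above is easy to evaluate directly; a sketch (with $`\alpha _s=0`$, as stated, and $`B`$ in MeV fm<sup>-3</sup>; the function name is ours):

```python
import math

def quark_star_mmax(B):
    """Maximum mass of a pure quark star, in solar masses, from the
    homology relation M_max = 2.0 (58 MeV fm^-3 / B)^(1/2), alpha_s = 0."""
    return 2.0 * math.sqrt(58.0 / B)

m150 = quark_star_mmax(150.0)  # ~1.24 solar masses
m200 = quark_star_mmax(200.0)  # ~1.08 solar masses
```

So for the representative bag constants used here, pure quark stars stay well below the heavy masses discussed in the following sections.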
In Fig. 4 we show the mass-radius relations for the various equations of state. The shaded area represents the allowed masses and radii for $`\nu _{QPO}=1060`$ Hz of 4U 1820-30. Generally,
$`2GM<R<\left({\displaystyle \frac{GM}{4\pi ^2\nu _{QPO}^2}}\right)^{1/3},`$ (14)
where the lower limit ensures that the star is not a black hole, and the upper limit that the accreting matter orbits outside the star, $`R<R_{orb}`$. Furthermore, requiring the matter to be outside the innermost stable orbit, $`R>R_{ms}=6GM`$, implies that
$`M`$ $`\stackrel{<}{}`$ $`{\displaystyle \frac{1+0.75j}{12\sqrt{6}\pi G\nu _{QPO}}}\simeq 2.2M_{\odot }(1+0.75j){\displaystyle \frac{\mathrm{kHz}}{\nu _{QPO}}},`$ (15)
where $`j=2\pi c\nu _sI/GM^2`$ is a dimensionless measure of the angular momentum of the star with moment of inertia $`I`$. The upper limit in Eq. (15) is the mass when $`\nu _{QPO}`$ corresponds to the innermost stable orbit. This is the case for 4U 1820-30, since $`\nu _{QPO}`$ saturates at $`\sim 1060`$ Hz with increasing count rate. The corresponding neutron star mass is $`M\simeq 2.2`$–$`2.3M_{\odot }`$, which leads to several interesting conclusions, as seen in Fig. 4. Firstly, the stiffest EoS allowed by causality ($`s\simeq 0.13`$–$`0.2`$) is needed. Secondly, rotation must be included, which increases the maximum mass and corresponding radii by 10-15% for $`\nu _s\simeq 300`$ Hz. Thirdly, a phase transition to quark matter below densities of order $`5n_0`$ can be excluded, corresponding to a restriction on the bag constant $`B\stackrel{>}{}200`$ MeVfm<sup>-3</sup>.
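Eqs. (14) and (15) translate directly into numbers; a sketch in SI units (function names and the sample values $`M=2.2M_{}`$, $`\nu _{QPO}=1060`$ Hz are ours, chosen to match the 4U 1820-30 discussion):

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m s^-1
MSUN = 1.989e30    # kg

def radius_bounds_km(M_solar, nu_qpo):
    """Eq. (14): the star must be larger than its Schwarzschild radius
    and smaller than the orbit of the accreting matter."""
    GM = G * M_solar * MSUN
    r_min = 2.0 * GM / C**2
    r_max = (GM / (4.0 * math.pi**2 * nu_qpo**2)) ** (1.0 / 3.0)
    return r_min / 1e3, r_max / 1e3

def mass_limit_msun(nu_qpo, j=0.0):
    """Eq. (15): maximum mass if nu_QPO is the innermost stable orbit."""
    return 2.2 * (1.0 + 0.75 * j) * (1000.0 / nu_qpo)

r_lo, r_hi = radius_bounds_km(2.2, 1060.0)   # roughly 6.5 and 18.7 km
m_max = mass_limit_msun(1060.0, j=0.1)       # roughly 2.2 solar masses
```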
These maximum masses are smaller than those of APR98 and Kalogera & Baym who, as discussed above, obtain upper bounds on the mass of neutron stars by discontinuously setting the sound speed equal to the speed of light above a certain density, $`n_c`$. By varying the density $`n_c=2`$–$`5n_0`$, the maximum mass drops from $`2.9`$ to $`2.2M_{\odot }`$. In our case, incorporating causality smoothly by introducing the parameter $`s`$ in Eq. (4), the EoS is softened at higher densities in order to obey causality, and yields a maximum mass lower than the $`2.2M_{\odot }`$ derived in APR98 for nonrotating stars.
If the QPOs are not from the innermost stable orbits and one finds that even accreting neutron stars have small masses, say like the binary pulsars, $`M\stackrel{<}{}1.4M_{\odot }`$, this may indicate that heavier neutron stars are not stable. It would then follow that either the EoS is soft at high densities, $`s\stackrel{>}{}0.4`$, or a phase transition occurs at a few times nuclear matter density.
## 6 Glitches and phase transitions in rotating neutron stars
Young pulsars rotate rapidly and slow down quickly. Some display sudden speed-ups referred to as glitches. The glitches observed in the Crab, Vela, and a few other pulsars are probably due to quakes occurring in solid structures such as the crust, superfluid vortices or possibly the quark matter lattice in the core. As the rotating neutron star gradually slows down and becomes less deformed, the rigid component is strained and eventually cracks/quakes and changes its structure towards being more spherical.
The moment of inertia of the rigid component, $`I_c`$, decreases abruptly, and its rotation and pulsar frequency increase due to angular momentum conservation, resulting in a glitch. The observed glitches are very small, $`\mathrm{\Delta }\mathrm{\Omega }/\mathrm{\Omega }\sim 10^{-8}`$. The two components slowly relax to a common rotational frequency on a time scale of days (healing time) due to superfluidity of the other component (the neutron liquid). The healing parameter $`Q=I_c/I_{tot}`$ measured in glitches reveals that for the Vela and Crab pulsars about $`\sim `$3% and $`\sim `$96% of the moment of inertia, respectively, is in the rigid component.
If the crust were the only rigid component, the Vela neutron star should be almost all crust. This would require that the Vela is a very light neutron star - much lighter than the observed ones, which are all compatible with $`1.4M_{\odot }`$. If the lattice component includes not only the solid crust but also the protons in nuclear matter (NM) (which are locked to the crust due to magnetic fields), superfluid vortices pinned to the crust and the solid QM mixed phase,
$$I_c=I_{crust}+I_p+I_{sv}+I_{QM},$$
(16)
we can better explain the large $`I_c`$ for the Crab. The moment of inertia of the mixed phase is sensitive to the EoS used. For example, for a quadratic NM EoS, decreasing the bag constant from 110 to 95 MeVfm<sup>-3</sup> increases $`I_c/I_{total}`$ from $`\sim 20\%`$ to $`\sim 70\%`$ for a 1.4$`M_{\odot }`$ neutron star - not including possible vortex pinning. The structures in the mixed phase would exhibit anisotropic elastic properties, being rigid to some shear strains but not others, in much the same way as liquid crystals. Therefore the whole mixed phase might not be rigid.
As the neutron star slows down, pressures and densities increase in the center, and a first order phase transition may occur. Does it leave any detectable signal at the corresponding critical angular velocity $`\mathrm{\Omega }_0`$? As described in detail elsewhere, the general relativistic equations for slowly rotating stars can be solved even with first order phase transitions, since only the monopole is important. The resulting moment of inertia has the characteristic behavior for $`\mathrm{\Omega }\stackrel{<}{}\mathrm{\Omega }_0`$
$$I=I_0\left(1+c_1\mathrm{\Omega }^2-c_2(\mathrm{\Omega }_0^2-\mathrm{\Omega }^2)^{3/2}+\mathrm{}\right).$$
(17)
Here, $`c_1`$ and $`c_2`$ are small parameters proportional to the density difference between the two phases; however, $`c_2=0`$ for $`\mathrm{\Omega }>\mathrm{\Omega }_0`$.
In order to make contact with observation, the temporal behavior of angular velocities must be considered. The pulsars slow down at a rate given by the loss of rotational energy which one usually assumes is proportional to the rotational angular velocity to some power (for dipole radiation $`n=3`$)
$$\frac{d}{dt}\left(\frac{1}{2}I\mathrm{\Omega }^2\right)=-C\mathrm{\Omega }^{n+1}.$$
(18)
With the moment of inertia given by Eq. (17), the decreasing angular velocity can be found. The corresponding braking index depends on the second derivative $`I^{\prime \prime }=d^2I/d\mathrm{\Omega }^2`$ of the moment of inertia and thus diverges as $`\mathrm{\Omega }`$ approaches $`\mathrm{\Omega }_0`$ from below:
$`n(\mathrm{\Omega })`$ $`\equiv `$ $`{\displaystyle \frac{\ddot{\mathrm{\Omega }}\mathrm{\Omega }}{\dot{\mathrm{\Omega }}^2}}\simeq n-c_1\mathrm{\Omega }^2+c_2{\displaystyle \frac{\mathrm{\Omega }^4}{\sqrt{\mathrm{\Omega }_0^2-\mathrm{\Omega }^2}}}.`$ (19)
The observational braking index $`n(\mathrm{\Omega })`$ exhibits a characteristic behavior different from the theoretical braking index $`n`$ in case of a first order phase transition.
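The divergence can be seen by evaluating Eq. (19), read as $`n(\mathrm{\Omega })\simeq nc_1\mathrm{\Omega }^2+c_2\mathrm{\Omega }^4/\sqrt{\mathrm{\Omega }_0^2\mathrm{\Omega }^2}`$, directly below $`\mathrm{\Omega }_0`$. The values of $`c_1`$, $`c_2`$ and $`\mathrm{\Omega }_0`$ in this sketch are assumed purely for illustration:

```python
import math

def braking_index(omega, omega0, n=3.0, c1=1.0e-6, c2=1.0e-6):
    """Observational braking index below the critical frequency omega0
    of a central first-order transition, Eq. (19); c1, c2 are the small
    parameters set by the density jump (values assumed)."""
    if omega >= omega0:
        raise ValueError("this form of Eq. (19) holds for omega < omega0")
    return (n - c1 * omega**2
            + c2 * omega**4 / math.sqrt(omega0**2 - omega**2))

n_far = braking_index(50.0, 100.0)    # close to the dipole value n = 3
n_near = braking_index(99.9, 100.0)   # diverging as omega -> omega0
```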
The critical angular velocities depend on the EoS and the critical densities. For best detection one would want the transition to occur in rapidly rotating pulsars such as millisecond pulsars, X-ray binaries or young neutron stars only a few years or centuries old. As pulsars slow down over a million years, their central densities span a wide range, of order several $`n_0`$. As we are interested in time scales of years, we must instead study the $`\sim 1000`$ pulsars available. By studying the corresponding range of angular velocities for the sample of different star masses, the chance of encountering a critical angular velocity increases. Eventually, one may be able to cover the full range of central densities and find all first order phase transitions up to a certain size determined by the experimental resolution.
## 7 Other recent surprises related to neutron stars
### 7.1 Connecting $`NO_3^{}`$ peaks in ice core samples to supernovae
A curious connection between four $`NO_3^{}`$ peaks from south pole ice core samples and supernovae has recently been made. A small, nearby but well hidden supernova remnant, RX J0852.0-4622, was discovered in the southeast corner of the older Vela supernova remnant. Estimates of its distance and age seem to be compatible with the 4th $`NO_3^{}`$ peak in the 20 year old south pole core samples. The other three already agreed with the historical Crab, Tycho and Kepler supernovae within the ice dating accuracy of $`\pm 20`$ years, a millennium back. This seems to indicate that nearby supernova explosions can affect the climate on earth and lead to geophysical signals. The radiation does produce $`NO_3^{}`$ in the upper atmosphere, but the quantitative amount of fallout on the poles via the northern/southern lights cannot be estimated. $`NO_3^{}`$ peaks were not found in the Vostok and Greenland drills, possibly because the aurorae do not have fallout on these sites. Further drillings are needed to confirm the connection - in particular deeper ones, which should also include the 1006 and 1054 supernovae. Mass extinctions may be caused by nearby supernova explosions after all.
### 7.2 X-ray bursts and thermonuclear explosions
Slow accretion from a small mass companion ($`\stackrel{<}{}2M_{\odot }`$) generates a continuous background of X-rays. Each nucleon radiates its gravitational energy of $`m_nGM/R\sim 100`$ MeV. After accumulating hydrogen on the surface, pressures and temperatures become sufficient to trigger an irregular runaway thermonuclear explosion every few hours or so, seen as an X-ray burst (see, e.g., existing reviews). The energy involved is that of typical nuclear binding energies, $`\sim 1`$ MeV, i.e., a percent of the time-integrated background.
The microsecond time resolution of these X-ray spectra allows for Fourier transforms in millisecond time intervals, much shorter than the burst duration of order a few seconds. In the case of 4U 1728-34, the power analysis shows a peak at the neutron star spin frequency of 364 Hz which, however, decreases to 362 Hz during the first 1-2 seconds of the burst. A simple explanation is that the thermonuclear explosion elevates the surface of the neutron star. Conserving angular momentum, $`L\propto MR^2\nu `$, leads to a decrease in rotation by
$$\frac{\mathrm{\Delta }\nu }{\nu }\simeq -2\frac{\mathrm{\Delta }R}{R}.$$
(20)
With a frequency change of $`\mathrm{\Delta }\nu \simeq 2`$ Hz and typical neutron star radii of order $`R\sim 10`$ km, we find an elevation of order $`\mathrm{\Delta }R\sim 20`$ m, which is roughly in agreement with expectations but much less than on earth, due to the much stronger gravitational fields on neutron stars.
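The arithmetic behind this estimate, inverting Eq. (20) for the elevation (numbers as quoted in the text; the function name is ours):

```python
def elevation_m(delta_nu, nu, radius_km=10.0):
    """Surface elevation implied by a burst spin-down, from Eq. (20):
    Delta_nu / nu = -2 Delta_R / R, i.e. Delta_R = -(R/2) Delta_nu/nu."""
    return -0.5 * (delta_nu / nu) * radius_km * 1.0e3

dR = elevation_m(delta_nu=-2.0, nu=364.0)  # a few tens of metres
```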
### 7.3 Gamma Ray Bursters and hypernovae
The recent discovery of afterglows in Gamma Ray Bursters (GRB) allows determination of their very high redshifts ($`z\sim 1`$) and thus the enormous distances and energy output, $`E\sim 10^{53}`$ ergs in a GRB if isotropically emitted. Very recently, evidence for beaming or jets has been found, corresponding to “only” $`E\sim 10^{51}`$ ergs. Candidates for such violent events include neutron star mergers or a special class of type Ic supernovae (hypernovae) where cores collapse to black holes. Indication of such connections is brought by recent observations of a bright supernova coinciding with GRB 980326. Binary pulsars are rapidly spiraling inwards and will eventually merge and create a gigantic explosion, perhaps collapsing to a black hole. From the number of binary pulsars and their spiral-in rates, one estimates about one merger per million years per galaxy. With $`10^9`$ galaxies at cosmological distances $`z\stackrel{<}{}1`$, this gives about the observed rate of GRB. However, detailed calculations have problems with baryon contamination, i.e. the ejected baryons absorb photons in the relativistically expanding photosphere. Accreting black holes have also been suggested as sources of beamed GRB.
So far, the physics producing these GRB is not understood. The time scales and the enormous power output points towards neutron star or black hole objects.
## 8 Summary
Modern nucleon-nucleon potentials have reduced the uncertainties in the calculated EoS. Using the most recent realistic effective interactions for nuclear matter of APR98, with a smooth extrapolation to high densities including causality, the EoS could be constrained by a “softness” parameter $`s`$ which parametrizes the unknown stiffness of the EoS at high densities. Maximum masses have subsequently been calculated for rotating neutron stars with and without first and second order phase transitions to, e.g., quark matter at high densities. The calculated bounds for maximum masses leave two natural options when compared to the observed neutron star masses:
* Case I: The large masses of the neutron stars in QPO 4U 1820-30 ($`M=2.3M_{\odot }`$), PSR J1012+5307 ($`M=2.1\pm 0.4M_{\odot }`$), Vela X-1 ($`M=1.9\pm 0.1M_{\odot }`$), and Cygnus X-2 ($`M=1.8\pm 0.2M_{\odot }`$) are confirmed and complemented by other neutron stars with masses around $`2M_{\odot }`$. The EoS of dense nuclear matter is then severely constrained and only the stiffest EoS consistent with causality are allowed, i.e., softness parameter $`0.13\le s\stackrel{<}{}0.2`$. Furthermore, any significant phase transition at densities below $`5n_0`$ can be excluded.
That the radio binary pulsars all have masses around $`1.4M_{\odot }`$ is then probably due to the formation mechanism in supernovae, where the Chandrasekhar mass for iron cores is $`\sim 1.5M_{\odot }`$. Neutron stars in binaries can subsequently acquire larger masses by accretion as X-ray binaries.
* Case II: The heavy neutron star masses prove erroneous by more detailed observations, and only masses like those of the binary pulsars are found. If accretion does not produce neutron stars heavier than $`\sim 1.4M_{\odot }`$, this indicates that heavier neutron stars simply are not stable, which in turn implies a soft EoS: either $`s>0.4`$ or a significant phase transition must occur already at a few times nuclear saturation density.
Surface temperatures can be estimated from spectra, and from the measured fluxes and known distances one can extract the surface area of the emitting spot. Unfortunately, this gives only a lower limit on the neutron star size, $`R`$. If it becomes possible to measure both masses and radii of neutron stars, one can plot an observational $`(M,R)`$ curve in Fig. 4, which uniquely determines the EoS for strongly interacting matter at low temperature.
Pulsar rotation frequencies and glitches are other promising signals that could reveal phase transitions. Besides the standard glitches, giant glitches were also mentioned, and in particular the characteristic behavior of the angular velocity when a first order phase transition occurs right at the center of the star.
It is impossible to cover all the interesting and recent developments concerning neutron stars in these proceedings; for more details we refer the reader to the literature and references therein.
## Acknowledgments
Thanks to my collaborators M. Hjorth-Jensen and C.J. Pethick as well as G. Baym, F. Lamb and V. Pandharipande.
## References
# Does Infall End Before the Class I Stage?
## 1 INTRODUCTION
For many years, infalling protostars were sought, but most claims of collapse were challenged. Through the discovery by IRAS and groundbased telescopes of a population of cold ($`T_D\sim 30`$ K), heavily enshrouded ($`A_V\sim 1000`$) objects, a number of protostellar candidate objects became available for study. Subsequent observations of these Class 0 sources (André et al. 1993) revealed spectral-line profiles indicative of infall in 13 objects (Walker et al. 1986; Zhou et al. 1993; Zhou et al. 1994; Moriarty-Schieven et al. 1995; Hurt et al. 1996; Myers et al. 1996; Ward-Thompson et al. 1996; Gregersen et al. 1997; Lehtinen 1997; Mardones et al. 1997). If these infall candidates actually represent infall, we can begin to study the evolution of infall by comparing line profiles of sources believed to be in different evolutionary states.
Mardones et al. observed 23 Class 0 sources ($`T_{bol}`$ $`<`$ 70 K) and 24 Class I sources (70 K $`\le `$ $`T_{bol}`$ $`\le `$ 200 K) in the optically thick H<sub>2</sub>CO $`2_{12}\rightarrow 1_{11}`$ and CS $`J=2\rightarrow 1`$ lines and the optically thin N<sub>2</sub>H<sup>+</sup> $`JF_1F_2=101\rightarrow 012`$ line. $`T_{bol}`$, the temperature of a blackbody with the same mean frequency as the observed spectral energy distribution, increases with age from the highly embedded Class 0 sources to the pre-main sequence Class III sources ($`T_{bol}`$ $`>`$ 2800 K) (Myers and Ladd 1993). The upper boundary of the Class I category is 650 K, but Mardones et al. chose the most deeply embedded Class I sources to study the earliest stages of accretion. Since protostellar collapse is expected to produce double-peaked profiles in optically thick lines with the blue peak stronger than the red peak, or lines that are blue-skewed relative to the velocity of an optically thin line, Mardones et al. compared the percentage of sources in each Class that displayed such asymmetry and found that the “blue excess” (the number of sources with significant blue asymmetry minus the number of sources with significant red asymmetry, divided by the total number of sources) was very different for Class 0 and Class I sources: for H<sub>2</sub>CO, 0.39 for Class 0 sources and 0.04 for Class I sources; and for CS, 0.53 for Class 0 sources and 0.00 for Class I sources. Park et al. (1999) have done a survey of Class 0 and I sources in HCN $`J=1\rightarrow 0`$ and observed similar results. Using the significance criterion of Mardones et al., the blue excess is 0.27 for Class 0 sources and 0.00 for Class I sources.
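The blue excess statistic is simple arithmetic; in the sketch below the individual counts are assumed for illustration (e.g. 9 significantly blue and 0 significantly red profiles among the 23 Class 0 sources would reproduce the quoted H<sub>2</sub>CO value of 0.39):

```python
def blue_excess(n_blue, n_red, n_total):
    """Blue excess of Mardones et al.:
    (N_blue - N_red) / N_total for a sample of line profiles."""
    return (n_blue - n_red) / n_total

ex_class0 = blue_excess(9, 0, 23)   # ~0.39, as quoted for Class 0 in H2CO
ex_class1 = blue_excess(1, 0, 24)   # ~0.04, as quoted for Class I in H2CO
```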
We decided to observe the Class I sources studied by Mardones et al. in HCO<sup>+</sup> $`J=3\rightarrow 2`$ for two reasons. First, the data obtained would be more easily compared with the HCO<sup>+</sup> Class 0 survey of Gregersen et al. (1997). Second, the HCO<sup>+</sup> $`J=3\rightarrow 2`$ line is usually stronger and more opaque than the other lines; consequently, it may be more sensitive to infall at later stages, when much of the cloud material has accreted but infall is not complete.
The sources we observed are listed in Table 1. Our sample includes 4 Class 0 sources Mardones et al. observed that were not observed by Gregersen et al. as well as 10 sources not observed by Mardones et al. but which have been identified by others as either Class 0 or early Class I.
## 2 OBSERVATIONS AND RESULTS
We observed 16 Class I and 18 Class 0 sources in the HCO<sup>+</sup> $`J=3\rightarrow 2`$ line with the 10.4-m telescope of the Caltech Submillimeter Observatory (CSO)<sup>1</sup> at Mauna Kea, Hawaii in December 1995, September 1996, April 1997, June 1997, December 1997 and July 1998. (<sup>1</sup>The CSO is operated by the California Institute of Technology under funding from the National Science Foundation, contract AST 90–15755.) The sources are listed in Table 1 with their celestial coordinates, distances, $`T_{bol}`$ and the off position used for position switching. We used an SIS receiver (Kooi 1992). The backend was an acousto-optic spectrometer with 1024 channels and a bandwidth of 49.5 MHz. The frequency resolution was about 2 channels, which is 0.11 km s<sup>-1</sup> at 267 GHz, except for the December 1995 observations, when the resolution was near 3 channels, or 0.16 km s<sup>-1</sup> at 267 GHz. Chopper-wheel calibration was used to obtain the antenna temperature, $`T_A^{\ast }`$. The lines we observed are listed in Table 2 with their frequencies, the velocity resolution, the beamsize and the main beam efficiency, $`\eta _{mb}`$. The main beam efficiencies were calculated using planets as calibration sources. Data from separate runs were resampled to the resolution of the run with the worst frequency resolution before averaging. A first order baseline was removed before spectra were averaged.
Line properties are listed in Table 3. $`T_A^{\ast }`$ is the peak temperature in the line profile. For single-peaked lines, $`V_{LSR}`$, the line centroid, and $`\mathrm{\Delta }`$V, the line width, were found by fitting a single Gaussian to the line profile. For lines that have two blended peaks, we list two values of $`T_A^{\ast }`$ and $`V_{LSR}`$, one for each peak, and one value for the line width, which is the width across the spectrum at the temperature where the weaker peak falls to half power.
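For a single-peaked line, $`V_{LSR}`$ and $`\mathrm{\Delta }`$V can be recovered as sketched below. For simplicity this uses intensity-weighted moments, which coincide with a least-squares Gaussian fit for a clean Gaussian profile; the synthetic line parameters are assumed for illustration.

```python
import math

def line_moments(v, T):
    """Centroid (V_LSR, km/s) and FWHM line width (km/s) of a
    single-peaked profile from its intensity-weighted moments."""
    tot = sum(T)
    v0 = sum(t * x for x, t in zip(v, T)) / tot
    var = sum(t * (x - v0) ** 2 for x, t in zip(v, T)) / tot
    return v0, 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(var)

# Synthetic noise-free line: peak 2 K, V_LSR = 6.5 km/s, FWHM = 1.2 km/s
sig = 1.2 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
v = [i * 13.0 / 1000.0 for i in range(1001)]
T = [2.0 * math.exp(-0.5 * ((x - 6.5) / sig) ** 2) for x in v]

v_lsr, dv = line_moments(v, T)  # recovers ~6.5 km/s and ~1.2 km/s
```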
We observed 34 sources in the HCO<sup>+</sup> $`J=3\rightarrow 2`$ line. Nine of these sources showed blue asymmetry (Figure 1) and six showed red asymmetry (Figure 2), as determined by visual inspection of the line profiles. Nineteen sources showed no significant asymmetry (Figures 3 and 4). We observed eight sources in the H<sup>13</sup>CO<sup>+</sup> $`J=3\rightarrow 2`$ line, five of which showed blue asymmetry in the HCO<sup>+</sup> $`J=3\rightarrow 2`$ line.
## 3 Individual Sources
#### IRAS03235+3004
Ladd et al. (1993) observed this source at H and K. Mardones et al. (1997) observed a red-skewed line profile in both CS and H<sub>2</sub>CO. The HCO<sup>+</sup> $`J=3\rightarrow 2`$ line is double-peaked with the blue peak stronger (Figure 1). Both the H<sub>2</sub>CO and CS lines of Mardones et al. peak at the HCO<sup>+</sup> red peak, while the N<sub>2</sub>H<sup>+</sup> line peaks slightly to the blue of the HCO<sup>+</sup> dip. However, there is significant H<sub>2</sub>CO and CS emission at the velocity of the blue peak.
#### L1455
Frerking and Langer (1982) first detected a CO outflow in this source. Submillimeter observations revealed a small dense core surrounding the exciting source (Davidson and Jaffe 1984). Goldsmith et al. (1984) observed two well-collimated outflows. Mardones et al. observed a symmetric H<sub>2</sub>CO line. We observe a slightly blue-skewed line in HCO<sup>+</sup> $`J=3\rightarrow 2`$, which peaks 0.5 km s<sup>-1</sup> to the blue of the H<sub>2</sub>CO line (Figure 3).
#### IRAS03256+3055
This source is located near the NGC1333 complex. Mardones et al. observed a red-peaked line profile in both lines with the red peak at the N<sub>2</sub>H<sup>+</sup> velocity. The HCO<sup>+</sup> profile we see is symmetric (Figure 3).
#### SSV13
The Herbig-Haro objects HH 7-11 are excited by this source. Warin et al. (1996) have suggested that this source is triggering star formation in NGC 1333. Outflow wings are quite prominent in the CS, H<sub>2</sub>CO (Mardones et al.) and HCO<sup>+</sup> profiles (Figure 4), but no significant skewness exists in the central core of the line.
#### NGC 1333 IRAS 2
Sandell et al. (1994) and Hodapp and Ladd (1995) observed two outflows, suggesting that this source may be a protobinary. Ward-Thompson et al. (1996) modeled the HCO<sup>+</sup> and H<sup>13</sup>CO<sup>+</sup> $`J=4\rightarrow 3`$ spectra as infall. Both the CS and H<sub>2</sub>CO lines observed by Mardones et al. are skewed to the red. The HCO<sup>+</sup> $`J=3\rightarrow 2`$ line has the same self-absorption dip and blue-skewed profile as the $`J=4\rightarrow 3`$ spectra (Figure 1).
#### IRAS03282+3035
Bachiller et al. (1991) observed a high velocity, well-collimated outflow in this source. Bachiller and Gomez-Gonzales (1992) identified this as an “extreme Class I” object, a category André et al. (1993) later called Class 0. The HCO<sup>+</sup> $`J=3\rightarrow 2`$ line has a slight blue asymmetry (Figure 3), but it is insufficient to qualify this source as a collapse candidate.
#### HH211
McCaughrean et al. (1994) discovered a molecular hydrogen jet with a dynamical age of less than 1000 years, excited by a source detected only at wavelengths greater than 350 $`\mu `$m. Like the HCO<sup>+</sup> $`J=3\rightarrow 2`$ spectrum, the CS and H<sub>2</sub>CO lines have no significant asymmetry (Mardones et al. 1997) (Figure 4).
#### IRAS04166+2706
Mardones et al. observed a symmetric line in H<sub>2</sub>CO and a double-peaked line in CS with the stronger peak at the central velocity. The HCO<sup>+</sup> $`J=3\rightarrow 2`$ line is nearly symmetric with a weak dip on the red-shifted side (Figure 1). The weaker of the two CS peaks lies within the blue HCO<sup>+</sup> line wing.
#### IRAS04169+2702
Ohashi et al. (1997) detected evidence for infall in a 2200 $`\times `$ 1100 AU envelope around this source using channel maps from interferometric observations of C<sup>18</sup>O $`J=1\rightarrow 0`$. Mardones et al. saw outflow wings in CS. Our HCO<sup>+</sup> $`J=3\rightarrow 2`$ data show a symmetric profile (Figure 3), which encompasses the range of infall velocities seen by Ohashi et al., at the rest velocity of the source.
#### L1551 IRS5
The bipolar outflow was first seen in this source (Snell et al. 1980). Butner et al. (1991) found a density gradient consistent with the collapse model of Adams et al. (1987). Submillimeter interferometry revealed an 80 AU disk (Lay et al. 1994). Looney et al. (1997) interpreted their 2.7 mm interferometer observations as evidence that this source is a protobinary, with the two sources separated by 50 AU (too small to be resolved by Lay et al.) and each having a disk with a radius $`<`$ 25 AU. Ohashi et al. (1996) observed in <sup>13</sup>CO $`J=2\rightarrow 1`$ a much larger central condensation, 1200 by 670 AU, which had infalling motions. The H<sub>2</sub>CO and CS lines (Mardones et al.), as well as our HCO<sup>+</sup> $`J=3\rightarrow 2`$ line, are symmetric with outflow wings (Figure 3).
#### L1535
The H<sub>2</sub>CO and CS observations of Mardones et al. show a symmetric line profile, as do our HCO<sup>+</sup> $`J=3\rightarrow 2`$ data (Figure 3).
#### TMC-1A
Wu et al. (1992) discovered a CO outflow in this source. CO $`J=1\rightarrow 0`$ interferometer observations suggested the existence of a 2500 AU flattened structure (Tamura et al. 1996). Chandler et al. (1996) noted the similarity of the outflow to those of the Class 0 sources L1448-C and B335. Ohashi et al. (1997) measured, over a 580 AU radius, a velocity gradient that would be expected from a rotating disk perpendicular to the outflow. Mardones et al. observed symmetric lines in H<sub>2</sub>CO and CS. The HCO<sup>+</sup> $`J=3\rightarrow 2`$ line is double peaked with a stronger blue peak, but the N<sub>2</sub>H<sup>+</sup> line of Mardones et al. (presumed to be optically thin) is near the red peak rather than between the two (Figure 1). The red peak of the HCO<sup>+</sup> $`J=3\rightarrow 2`$ line is coincident with the peak of the CS line. There is no CS emission at the velocity of the blue peak, suggesting that the blue peak might be outflow emission.
#### L1634
A powerful molecular hydrogen jet is present in this source (Hodapp and Ladd 1995; Davis et al. 1997). The HCO<sup>+</sup> spectrum has the blue peak slightly stronger than the red peak (Figure 1).
#### MMS1
This source and the following five were discovered by Chini et al. (1997) in the Orion high-mass star-forming region. Most of the other sources observed in this paper lie in low-mass star-forming regions. They were identified as Class 0 objects from their strong millimeter continuum emission. The HCO<sup>+</sup> $`J=32`$ line has a double-peaked profile with the red peak stronger (Figure 2).
#### MMS4
The red peak is slightly stronger than the blue peak in the HCO<sup>+</sup> $`J=32`$ spectrum (Figure 2).
#### MMS6
Little noticeable asymmetry is seen in the HCO<sup>+</sup> $`J=32`$ spectrum (Figure 3).
#### MMS9
The HCO<sup>+</sup> $`J=32`$ line is symmetric (Figure 4).
#### MMS7
The HCO<sup>+</sup> $`J=32`$ line has a slight red shoulder (Figure 4).
#### MMS8
This source has a highly collimated CO outflow (Chini et al. 1997). No asymmetry is observed in the HCO<sup>+</sup> $`J=32`$ spectrum (Figure 3).
#### NGC2264G
Margulis and Lada (1986) discovered the molecular outflow which was later seen to be highly collimated and energetic (Margulis et al. 1988). Margulis et al. (1990) observed six near-infrared sources in this object, but Gomez et al. (1994) discovered the true driving source of the outflow, which was subsequently confirmed as a Class 0 object by Ward-Thompson et al. (1995). The HCO<sup>+</sup> $`J=32`$ spectrum has a slight red wing (Figure 3).
#### B228
Heyer and Graham (1989) found evidence for a stellar wind from observations of extended \[SiII\] emission. Mardones et al. observed a slightly blue-skewed H<sub>2</sub>CO line. The HCO<sup>+</sup> $`J=32`$ line has a double peaked profile with a stronger blue peak like that predicted by collapse models (Figure 1). The H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line is coincident with the red peak, but the N<sub>2</sub>H<sup>+</sup> line peaks in the self-absorption dip of the HCO<sup>+</sup> line, suggesting that the H<sup>13</sup>CO<sup>+</sup> line may be somewhat optically thick.
#### YLW16
The H<sub>2</sub>CO has a prominent blue outflow shoulder (Mardones et al.). A similar feature appears in the HCO<sup>+</sup> $`J=32`$ line (Figure 4).
#### L146
Clemens et al. (1998) identified this as a Class 0 or I source, an identification that corresponds well with its $`T_{bol}`$ of 74 K. Mardones et al. observed a double-peaked line in CS and a blue-shouldered line in H<sub>2</sub>CO. The HCO<sup>+</sup> $`J=32`$ line is symmetric with a hint of a blue shoulder and agrees in velocity with the red peak in the CS spectrum (Figure 3).
#### S68N
McMullin et al. (1994) first identified this core from CS and CH<sub>3</sub>OH maps. Hurt and Barsony (1996) established this as a Class 0 source. Wolf-Chase et al. (1998) found an outflow flux consistent with those of other Class 0 sources. The CS and H<sub>2</sub>CO lines show blue asymmetry with a prominent outflow shoulder (Mardones et al.). The HCO<sup>+</sup> $`J=32`$ line has strong self-absorption with the red peak stronger than the blue peak. The red peak has the same velocity as in the Mardones et al. spectra, while the blue peak has the same velocity as that of the CS profile. The H<sup>13</sup>CO<sup>+</sup> line occurs at the same velocity as the self-absorption dip (Figure 2).
#### SMM6
This source is better known as SVS20 (Strom et al. 1976). The HCO<sup>+</sup> spectrum is symmetric (Figure 4).
#### HH108
The Herbig-Haro objects, HH108 and HH109, are produced by a 9.5 L<sub>⊙</sub> IRAS source (Reipurth and Eiroa 1992). Reipurth et al. (1993) detected strong 1.3 mm emission. Both the CS $`J=21`$ and H<sub>2</sub>CO $`J=2_{12}1_{11}`$ lines have peaks shifted to the red of the N<sub>2</sub>H<sup>+</sup> velocity (Mardones et al.). Our HCO<sup>+</sup> line profile looks similar to that of the H<sub>2</sub>CO $`J=2_{12}1_{11}`$ line but with a slightly stronger blue peak (Figure 2).
#### CrA IRAS32
Both the H<sub>2</sub>CO $`J=2_{12}1_{11}`$ and CS $`J=21`$ lines are symmetric (Mardones et al.). The HCO<sup>+</sup> $`J=32`$ line has a double-peaked profile with a strong red peak and a dip near the velocity of the N<sub>2</sub>H<sup>+</sup> line (Figure 2). The H<sub>2</sub>CO and CS lines also peak in the HCO<sup>+</sup> dip.
#### L673A
This core appears slightly elongated in continuum emission and is about 0.2 pc away from another core (Ladd et al. 1991). Like the H<sub>2</sub>CO and CS spectra (Mardones et al.), the HCO<sup>+</sup> $`J=32`$ line is symmetric (Figure 4).
#### L1152
The H<sub>2</sub>CO $`J=2_{12}1_{11}`$ and CS $`J=21`$ lines are symmetric (Mardones et al.), as is the HCO<sup>+</sup> $`J=32`$ line (Figure 4).
#### L1172
Submillimeter continuum maps of Ladd et al. (1991) showed extended emission at 100 and 160 $`\mu `$m. The CS $`J=21`$ line has a narrow self-absorption dip with the blue peak slightly stronger than the red peak (Mardones et al.). The HCO<sup>+</sup> spectrum (Figure 2) is slightly red-peaked and the two peaks agree in velocity with those in the CS spectrum (Mardones et al.). The N<sub>2</sub>H<sup>+</sup> line has its peak in the HCO<sup>+</sup> self-absorption dip.
#### L1251A
This source is located near the edge of its cloud and may be older than L1251B (Sato and Fukui 1989). The peak of the H<sub>2</sub>CO line is slightly blue-shifted. The HCO<sup>+</sup> line is symmetric with prominent outflow wings (Figure 4).
#### L1251B
Sato and Fukui (1989) discovered a molecular outflow in this source. Myers et al. (1996) matched a simple infall model to observed CS $`J=21`$ and N<sub>2</sub>H<sup>+</sup> $`JF_1F=101012`$ lines. The H<sub>2</sub>CO $`J=2_{12}1_{11}`$ line observed by Mardones et al. (1997) displays prominent self-absorption. The HCO<sup>+</sup> line profile is very strong with a prominent self-absorption dip and has the same velocity structure as the CS spectrum. The H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line lies at the velocity of the dip, indicating that this may be a good candidate for protostellar collapse (Figure 1).
#### IRAS23011
Lefloch et al. (1996) identified this as a Class 0 source based on its millimeter continuum emission and its highly energetic outflow. Ladd and Hodapp (1997) observed a double outflow and measured a bolometric temperature of 61 K. The HCO<sup>+</sup> $`J=32`$ line has collapse asymmetry with the H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line lying between the two peaks (Figure 1).
#### CB244
Launhardt et al. (1997) identified this as a Class 0 object from its spectral energy distribution. The CS spectrum is symmetric, but the line is blue-shifted. The H<sub>2</sub>CO $`J=2_{12}1_{11}`$ line is blue-shifted with a prominent outflow wing. The HCO<sup>+</sup> $`J=32`$ shows a blue-peaked line profile suggestive of collapse (Figure 1). The N<sub>2</sub>H<sup>+</sup> line observed by Mardones et al. is at the velocity of the self-absorption dip. The red peak in the HCO<sup>+</sup> $`J=32`$ spectrum is at the same velocity as the H<sub>2</sub>CO peak. The blue HCO<sup>+</sup> peak is at the same velocity as the CS line.
## 4 DISCUSSION
### 4.1 Line Profile Asymmetry
Models of a collapsing source with appropriate excitation requirements and with temperature and density increasing toward the interior predict that optically thick lines should show a double-peaked profile, with the blue-shifted peak stronger than the red-shifted peak and a self-absorption dip, caused by the envelope, at the rest velocity of the source (Leung and Brown 1977, Walker et al. 1986, Zhou 1992). The ratio of the peak temperatures of the blue and red peaks is a natural way to quantify this asymmetry, but this method requires lines that are double-peaked.
Since many of our line profiles are asymmetric without possessing two peaks, we also quantified the asymmetry using the asymmetry parameter defined in Mardones et al. as
$$\delta V=\frac{(V_{thick}-V_{thin})}{\mathrm{\Delta }v_{thin}}$$
where $`V_{thick}`$ is the velocity of the peak of the optically thick HCO<sup>+</sup> $`J=32`$ line, $`V_{thin}`$ is that of the optically thin line as determined by a Gaussian fit, and $`\mathrm{\Delta }v_{thin}`$ is the line width of the optically thin line. The optically thin line we used was H<sup>13</sup>CO<sup>+</sup> $`J=32`$ or, where that was not available, the N<sub>2</sub>H<sup>+</sup> $`JF_1F=101012`$ line ($`\nu `$ = 93.176265 GHz, Caselli et al. 1995) observed by Mardones et al. In Figure 5, we plot a histogram of the velocity of the H<sup>13</sup>CO<sup>+</sup> line minus that of the N<sub>2</sub>H<sup>+</sup> line. The mean velocity difference between the two lines is 0.15 $`\pm `$ 0.20 km s<sup>-1</sup>. Following Mardones et al., we use $`\delta V`$ of $`\pm `$0.25 as the threshold to be counted as either blue (if negative) or red (if positive). Also, we follow Mardones et al. in not considering an asymmetry as significant if the difference in strength between the blue and red peaks is less than twice the rms noise. For example, in L1634, the blue peak is only 0.11 K stronger than the red peak while the rms noise is 0.10 K; we do not consider this source to be an infall candidate. The results for $`\delta V`$ are listed in Table 4 for the objects observed in this paper.
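As a concrete illustration of this classification scheme, the bookkeeping can be sketched as follows (our own illustrative code, not part of the original analysis; all function and variable names are ours):

```python
def delta_v(v_thick, v_thin, dv_thin):
    """Mardones et al. asymmetry parameter: (V_thick - V_thin) / dv_thin."""
    return (v_thick - v_thin) / dv_thin

def classify(dv, t_blue, t_red, rms):
    """Classify a profile as 'blue', 'red', or 'symmetric'.

    Uses the +/-0.25 threshold on delta V, and ignores an asymmetry
    whose blue/red peak-strength difference is below twice the rms noise
    (e.g. a 0.11 K difference against 0.10 K rms is not significant).
    """
    if abs(t_blue - t_red) < 2.0 * rms:
        return "symmetric"
    if dv < -0.25:
        return "blue"
    if dv > 0.25:
        return "red"
    return "symmetric"
```

For example, a source with $`V_{thick}`$ = 6.0 km s<sup>-1</sup>, $`V_{thin}`$ = 6.5 km s<sup>-1</sup> and $`\mathrm{\Delta }v_{thin}`$ = 1.0 km s<sup>-1</sup> gives $`\delta V`$ = $`-`$0.5 and counts as blue, provided the peak-strength difference is significant.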
Among these sources, there are more blue than red objects. We can quantify this by using the blue excess, which is defined by Mardones et al. as
$$BlueExcess=\frac{N_{blue}-N_{red}}{N_{total}}.$$
If the blue line asymmetries expected from collapse are dominant in our sample, this will be a positive number. For the sources observed in this paper, the blue excess is 0.28.
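The statistic is straightforward to compute; a minimal sketch (our own code, using hypothetical inputs rather than the actual source list):

```python
def blue_excess(classifications):
    """(N_blue - N_red) / N_total over a list of 'blue'/'red'/'symmetric' labels."""
    n_blue = classifications.count("blue")
    n_red = classifications.count("red")
    return (n_blue - n_red) / len(classifications)
```

A hypothetical sample with 5 blue, 2 red and 3 symmetric profiles would give a blue excess of 0.3.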
### 4.2 Evolutionary Trends of the Line Profile
The sources observed in this paper can be combined with the 20 Class 0 and 3 Class I sources observed in Gregersen et al. (1997) to form a sample stretching from $`T_{bol}`$ of 30 to 170 K, from the Class 0 stage well into the Class I stage. As in the previous section, we use for the optically thin line the H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line when available and the N<sub>2</sub>H<sup>+</sup> $`JF_1F=101012`$ line when it is not. The results for $`\delta V`$ are plotted versus bolometric temperature in Figure 6.
For $`\delta V`$, there is no clear trend with bolometric temperature. We find a significant blue excess among both Class 0 and Class I sources. There is no dividing line past which the blue excess drops, a result markedly different from that of Mardones et al., who saw a disappearance of blue-skewed line profiles by the Class I stage. In fact, the blue-peaked TMC-1A has the highest $`T_{bol}`$, 170 K, of any source in our sample. The results for $`\delta V`$ for each class as well as for the combined sample are listed in Table 5.
To study the difference between lines, we plot $`\delta V`$ as measured by the HCO<sup>+</sup> line against those measured by the H<sub>2</sub>CO and CS lines (Figure 7). The sources that contribute to our blue excess but not to that of Mardones et al. lie in the box bounded by $`\delta V`$ (H<sub>2</sub>CO or CS) $`>-0.25`$ and $`\delta V`$(HCO<sup>+</sup>) $`<-0.25`$. For example, TMC-1A and IRAS03235, sources with pronounced blue $`\delta V`$ in HCO<sup>+</sup>, show little asymmetry in CS and H<sub>2</sub>CO. However, most of the differences between the tracers involve sources with no significant asymmetry in CS or H<sub>2</sub>CO, but a significant one in HCO<sup>+</sup>. Only a few sources have significant disagreements (upper left and lower right boxes outside the dashed lines in Figure 7). This pattern would be expected if the HCO<sup>+</sup> line were more sensitive to infall in envelopes of lower opacity.
There are some sources that show markedly opposite asymmetries in CS or H<sub>2</sub>CO as opposed to HCO<sup>+</sup>. In the upper left corner of both plots, L1551-NE, a Class 0 source within an outflow lobe of L1551 IRS5, has a complicated triple-peaked HCO<sup>+</sup> $`J=32`$ spectrum (Gregersen et al.) with a very strong blue peak. The H<sub>2</sub>CO and CS spectra are also triple-peaked, but the peaks are not so prominent, and Mardones et al. were able to fit a Gaussian to the line. Fitting a Gaussian in this case will give a redder velocity and thus a positive $`\delta V`$. We do not consider this source to be an infall candidate. In the lower right corner of the H<sub>2</sub>CO-HCO<sup>+</sup> plot is L483, which has an H<sub>2</sub>CO spectrum with similar velocity structure but is blue-peaked in H<sub>2</sub>CO and red-peaked in HCO<sup>+</sup>.
Mardones et al. found good correlation between the CS and H<sub>2</sub>CO $`\delta V`$ (the two measures agree in 28 out of 38 instances) and Park et al. (1999) also found a good correlation between CS and the HCN $`J=10`$ $`F=21`$ line ($`\nu `$ = 88.631847 GHz) and a fair correlation between H<sub>2</sub>CO and HCN. We count two tracers as agreeing when both are significantly blue, significantly red, or symmetric; as disagreeing when one is significantly blue and the other significantly red; and as neutral when one is significantly blue or red and the other is symmetric. CS and HCO<sup>+</sup> agree 13 times, are neutral 15 times and disagree 6 times. For H<sub>2</sub>CO, there is agreement 16 times, neutrality 16 times and disagreement 5 times. For HCN, the asymmetries agree 7 times, are neutral 6 times and disagree 4 times.
### 4.3 An Evolutionary Model
Unlike the CS and H<sub>2</sub>CO observations of Mardones et al., our HCO<sup>+</sup> observations of Class I sources do not show a radical change in the excess of sources with blue asymmetry at the Class 0-Class I boundary. Why do the HCO<sup>+</sup> results differ from those of HCN, CS and H<sub>2</sub>CO? Perhaps infall asymmetry disappears at later times in HCO<sup>+</sup> than in HCN, CS and H<sub>2</sub>CO.
For models in which the infall velocity increases closer to the forming star, each line of sight through the infall region intersects two points on the locus of constant radial velocity. To display infall asymmetry, a spectral line must be subthermally excited and fairly opaque at the foreground point of intersection. Different lines will be more suitable at different points in the evolution, as the density drops. Among the lines in Table 6, the HCO<sup>+</sup> and H<sub>2</sub>CO lines require the highest densities to excite (see Table 1 in Evans 1999) and HCO<sup>+</sup> has a higher opacity for typical abundances. Therefore, it is possible that the HCO<sup>+</sup> $`J=32`$ line traces late stages of infall, when densities and opacities are lower, than the other lines.
To test this possibility and to study how the infall signature changes with time, we extended one of the HCO<sup>+</sup> evolutionary models presented in Gregersen et al. to later times and computed collapse models for CS and H<sub>2</sub>CO at those same times. The lines modeled are HCO<sup>+</sup> $`J=32`$, CS $`J=21`$ and H<sub>2</sub>CO $`J=2_{12}1_{11}`$, the lines observed by Gregersen et al. and Mardones et al., respectively. (The CS line has been modeled for early times by Zhou (1992) and Walker et al. (1994).) Protostellar collapse was simulated in a cloud with a radius of 0.2 pc using the velocity and density fields of the Shu (1977) collapse model and a temperature distribution scaled upward from that of B335 (Zhou et al. 1990) to a luminosity of 6.5 L<sub>⊙</sub>, the average luminosity of the sources observed in Gregersen et al. Six models were run for infall radii of 0.005, 0.03, 0.06, 0.10, 0.13 and 0.16 pc, corresponding to infall times of 2.3 $`\times `$ 10<sup>4</sup> yr for the earliest model to 7.5 $`\times `$ 10<sup>5</sup> yr for the latest model. The cloud had 30 shells, 15 inside the infall radius and 15 outside. The model produces the velocity, density, kinetic temperature, turbulent width and molecular abundance for each shell. The same abundance (6 $`\times `$ 10<sup>-9</sup>) was used for each molecule so these models would be the simplest possible comparisons of molecular tracers.
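For readers who want to reproduce the qualitative behavior of the kinematic inputs, the Shu (1977) fields can be approximated as below. This is our own sketch, not the model code used here: it replaces the exact similarity solution with a static singular-isothermal-sphere envelope outside the infall radius and the free-fall limit well inside it.

```python
import math

G = 6.674e-8          # gravitational constant, cgs
M_SUN = 1.989e33      # solar mass, g
PC = 3.086e18         # parsec, cm

def shu_fields(r, r_inf, a, m_star):
    """Approximate (velocity, density) at radius r [cm] for inside-out collapse.

    Outside r_inf: static singular isothermal sphere,
        v = 0,  rho = a**2 / (2 pi G r**2),
    where a is the effective sound speed [cm/s].
    Inside r_inf (r << r_inf limit): free fall onto the central mass,
        v = sqrt(2 G M / r),  rho = Mdot / (4 pi r**2 v),
    with the Shu accretion rate Mdot = 0.975 a**3 / G.
    """
    if r >= r_inf:
        return 0.0, a**2 / (2.0 * math.pi * G * r**2)
    v = math.sqrt(2.0 * G * m_star / r)
    mdot = 0.975 * a**3 / G
    return v, mdot / (4.0 * math.pi * r**2 * v)
```

With an assumed sound speed of roughly 0.2 km s<sup>-1</sup> and a fraction of a solar mass accreted, this sketch gives infall speeds of a few tenths of a km s<sup>-1</sup> at radii of a few thousand AU, comparable to the sub-km s<sup>-1</sup> infall speeds discussed later in the paper.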
The output of the collapsing cloud simulation was used as input for a Monte Carlo code (Choi et al. 1995), which produces molecular populations in each shell. The output of the Monte Carlo code was used as input for a program that convolves a Gaussian beam with the emission from the cloud so we can simulate our HCO<sup>+</sup> CSO observations and the H<sub>2</sub>CO IRAM and CS Haystack observations of Mardones et al. The distance to the model cloud was 310 pc, the average distance to the observed sources. The resulting line profiles are plotted in Figure 8.
The HCO<sup>+</sup> line profiles are the strongest and, at late times, the most asymmetric. The dashed horizontal line across each panel marks the blue-red ratio. For CS and H<sub>2</sub>CO, the blue-red ratio reaches a peak value and levels off, but the HCO<sup>+</sup> blue-red ratio keeps increasing. For all lines, the velocity of the blue-shifted peak becomes more negative with time, from about $`-0.22`$ km s<sup>-1</sup> to about $`-0.33`$ km s<sup>-1</sup>. The profiles in Figure 8 support the conclusion that HCO<sup>+</sup> $`J=32`$ reveals infall more readily than the CS and H<sub>2</sub>CO lines. However, the model line profiles for those molecules predict a detectable signature as well. One possibility for obscuring the signature in other lines is that HCO<sup>+</sup> remains in the gas phase while the other species freeze out (Rawlings et al. 1992, Park et al. 1999).
One can question the relevance of our simulation, because none of the observed sources can be completely explained by the simple inside-out collapse model. For example, the simulated line profiles often have less extreme blue-red ratios than most of our infall candidates and the dip is much more extreme than in the observations. All of these sources are either turbulent, aspherical, or have very energetic bipolar outflows. Outflow is a large problem for HCO<sup>+</sup> $`J=32`$ because emission from outflows is prominent in this line. Also, recent studies of pre-protostellar cores (e.g. Tafalla et al. 1999), cores seemingly in the earliest stages of collapse, have challenged the inside-out collapse model.
On the whole, the HCO<sup>+</sup> $`J=32`$ line has both advantages and disadvantages in searching for infall. In the following section, we consider the information from all lines in assessing the best infall candidates.
### 4.4 New Infall Candidates?
Among the 34 sources observed here, we see 8 sources with the correct asymmetry for infall (Figure 1). (L1634 is eliminated from consideration for its weak asymmetry.) Three of these sources are Class 0, four are Class I and two are probably Class I. However, a blue-peaked optically thick line alone is not enough for a definite claim of infall.
For such a claim, the optically thin line must peak in the self-absorption dip of the optically thick line. We have observed the H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line in 4 of our sources. In L1251B, NGC 1333 IRAS 2 and IRAS 23011, this condition is met. For sources where we did not observe the H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line, we used the N<sub>2</sub>H<sup>+</sup> $`J=10`$ line of Mardones et al. to provide a rest velocity. Among the sources with blue peaks, N<sub>2</sub>H<sup>+</sup> peaks in the dip of CB244, TMC-1A, IRAS 03235 and IRAS 04166, so we consider these to be good infall candidates. In B228, the N<sub>2</sub>H<sup>+</sup> line is at the same velocity as the HCO<sup>+</sup> self-absorption dip, so we believe the H<sup>13</sup>CO<sup>+</sup> $`J=32`$ line may be optically thick in this source and we consider it to be a possible infall candidate. All the candidates that could have passed this test did so.
Since these sources have been observed in three optically thick lines, CS, H<sub>2</sub>CO and HCO<sup>+</sup>, with roughly comparable beamsizes, we can rank the worthiness of these sources as infall candidates (Table 6). Sources with a blue asymmetry in each of the three lines are the strongest infall candidates, while those with blue asymmetries in two lines are the next strongest, down to sources with one blue asymmetry. In each category, sources that display either blue or no asymmetry in each of the three lines are considered stronger candidates than those with red asymmetries. Sources above B228 are considered infall candidates. We include the HCN results of Park et al. for completeness, but consider them a secondary criterion since their beam was 1′, larger than that in the observations of Gregersen et al. and Mardones et al.
Based on these criteria, L1251B is the strongest infall candidate of the sources studied here and is followed by CB244. Among the other six sources with infall asymmetry in HCO<sup>+</sup>, NGC 1333 IRAS 2 has an H<sub>2</sub>CO spectrum with a strong red peak and so is ruled out as an infall candidate since outflow seems likely as the explanation for the asymmetric profiles.
## 5 CONCLUSIONS
We have observed 16 Class I and 18 Class 0 sources in HCO<sup>+</sup> $`J=32`$. Nine sources have a blue asymmetry and six have a red asymmetry. Infall asymmetries as defined by Mardones et al. are still present in Class I sources with $`T_{bol}`$ $`<`$ 170 K (blue excess = 0.31) to the same extent they are present among Class 0 sources with $`T_{bol}`$ $`<`$ 70 K (blue excess = 0.31). If the HCO<sup>+</sup> $`J=32`$ line is more sensitive to infall than the CS and H<sub>2</sub>CO lines studied by Mardones et al., as evolutionary models suggest, then the end of the collapse phase still remains to be found. Among the sources we surveyed, we suggest six new infall candidates: CB 244, IRAS 03235, IRAS 04166, B228, IRAS23011 and TMC-1A. We confirm the suggestion of Myers et al. (1996) that L1251B is an infall candidate. NGC 1333 IRAS 2 is ruled out as an infall candidate; it has the weakest claim of any source studied here since it shows blue asymmetry in HCO<sup>+</sup> but red asymmetry in H<sub>2</sub>CO.
Although all the sources in this paper are young and protostellar, making an unambiguous claim for infall is difficult. If these sources are undergoing infall, one might expect all of them to show blue asymmetries in every line. Part of the explanation is the optical depth effect mentioned above, whereby CS and H<sub>2</sub>CO are optically thinner than HCO<sup>+</sup>. However, there is also the problem that infall velocities are comparable to or smaller than turbulent and outflow motions. In the infall model presented above, at the radius of our beam ($``$ 25″, matching the resolution of our and Mardones et al.’s observations for a source at 310 pc), infall speeds range from 0.07 km s<sup>-1</sup> at the earliest times to 0.60 km s<sup>-1</sup> at the latest stages. In real cores, for example L1544, a pre-protostellar core probably at the beginning of the collapse phase, infall speeds are $``$ 0.1 km s<sup>-1</sup> (Tafalla et al. 1999). Other caveats include beam size, the particular molecular transition chosen and the source mass. Detecting infall is a challenge.
For future work, we should observe CO to determine what velocity ranges are affected by outflows. Also, we should observe even later Class I sources to see if asymmetry is still common. Work is needed with higher spatial resolution to separate outflow from infall. Quantitative infall rates also need to be measured to see how the infall rate changes with time.
We would like to thank Daniel Jaffe, Wenbin Li, Kenji Mochizuki and Yancy Shirley for their help with observations. We would also like to thank Daniel Jaffe, John Lacy, Charles Lada and John Scalo for their comments on an early draft of this paper and the anonymous referee for helpful comments. This work was supported by NSF grant AST-9317567. D.M. acknowledges support from FONDECYT grant 1990632 and Fundacion Andes grant C-13413/7.
# Reply to “Comment on ‘Inverse exciton series in the optical decay of an excitonic molecule’ ”
## Abstract
As a reply to the Comment by I.S. Gorban et al. (Phys. Rev. B, preceding paper) we summarize our criticism of their claim of the first observation of the $`M`$ series in $`\beta `$-ZnP<sub>2</sub>. We support our analysis by reporting the first observation of an inverse polariton series from excitonic molecules selectively generated at $`𝐊_m≠\mathrm{𝟎}`$ in a CuCl single crystal. This observation and its explanation within the bipolariton model complete our proof of the biexcitonic origin of the inverse series.
A monocrystal CuCl is a prototype material in the physics of excitonic molecules, due to the polarization isotropy of the optical transitions to the excitonic states and due to the large binding energy of the molecules, $`ϵ^m`$ ≃ 32 meV . In a recent paper we reported the observation of the $`M`$ series, which originates from the optical decay of the molecules into the excitonic states $`1s`$, $`2s`$, $`3s`$, $`4s`$, and $`\mathrm{}s`$. The lines $`M_{n2s}`$ are extremely weak and their detection requires a very modern spectroscopic technique and a high-quality CuCl single crystal. In our paper we explained briefly why the claim by Gorban et al. to the observation of the $`M`$ series in $`\beta `$-ZnP<sub>2</sub> is not justified (see reference of Ref. ). By their Comment the authors force us to present a more detailed criticism of their experiments.
Monoclinic zinc diphosphide ($`\beta `$-ZnP<sub>2</sub>) is an optically highly-anisotropic semiconductor with eight molecules per unit cell, which give rise to $`72`$ phonon modes. As a result, the excitonic spectra are very complicated and strongly depend on the excitation and polarization geometry. Thus the correct interpretation of the optical spectra of $`\beta `$-ZnP<sub>2</sub> requires a very accurate experiment, a high-quality sample, and a critical analysis of the experimental data. The first reports on the observation of the inverse hydrogen-like series in $`\beta `$-ZnP<sub>2</sub> attribute the series to a bielectron-impurity complex. To the best of our knowledge, these results have never been confirmed. The next reports on the inverse series ascribe the photoluminescence (PL) spectra of highly-excited $`\beta `$-ZnP<sub>2</sub> to the $`M`$ series. Even without commenting on Gorban et al.’s experiments, we will show below that their interpretation of the PL spectra in terms of the $`M`$ series is self-contradictory and incorrect.
Firstly, from their experimental data Gorban et al. conclude that the biexciton binding energy $`ϵ^m`$ in $`\beta `$-ZnP<sub>2</sub> is unusually high, i.e., $`ϵ^m=14.9`$ meV ($`0.32`$ of the excitonic Rydberg $`ϵ^x`$). In order to explain this result the authors claim without any justification that the relevant electron and hole effective masses in $`\beta `$-ZnP<sub>2</sub> are given by $`m_e=1.7m_0`$ and $`m_h=21.3m_0`$ ($`\sigma =m_e/m_h=0.08`$) . This strongly contradicts the Kane model, which basically requires $`m_h∼m_0`$, and the first experiments on excitonic polaritons in $`\beta `$-ZnP<sub>2</sub>, which yield the effective excitonic mass $`M_x=m_e+m_h≃3m_0`$. The latest high-precision experiments by two-photon and magneto-optical spectroscopies indicate the high anisotropy of $`\beta `$-ZnP<sub>2</sub> single crystals with $`m_e=0.7m_0`$, $`m_h=1.1m_0`$ along the $`a`$-axis, $`m_e=1.1m_0`$, $`m_h=1.4m_0`$ along the $`b`$-axis, and $`m_e=0.18m_0`$, $`m_h=0.20m_0`$ along the $`c`$-axis, respectively. These values clearly demonstrate that the effective masses chosen by Gorban et al. are irrelevant. Furthermore, with reference to $`\sigma =0.08`$ the authors claim that their $`ϵ^m=0.32ϵ^x`$ “corresponds to the variational calculations of Refs. ” (see p. 339 of Ref. ). This is a misinterpretation of the theoretical results of Refs. . An inspection of Fig. 1 of Ref. and Fig. 3 of Ref. shows that the mass ratio $`\sigma =0.08`$ yields $`ϵ^m≃0.12ϵ^x`$. As a result, even for the above incorrect value of the ratio $`m_e/m_h`$ the molecule binding energy should be only $`ϵ^m=`$ 5.64 meV rather than 14.9 meV.
Secondly, the intensity ratio between the two main replicas of the series reported by Gorban et al. is too high to be ascribed to the $`M`$ series. The intensity ratio of the two lines marked by $`M_1`$ and $`M_2`$ in Fig. 1 of Ref. is given by $`I_{M_2}/I_{M_1}≃1/3`$ ($`I_{M_1}=I_{M_{1T}}+I_{M_{1L}}`$). In Ref. the authors explain that the above ratio should be corrected due to the saturation of the $`M_{1T}`$ line and that the true value is $`I_{M_2}/I_{M_1}≃1/9.08`$, as shown in the intensity dependence in Fig. 2 of Ref. . The bipolariton model with the Akimoto-Hanamura trial wave function yields $`I_{M_2}/I_{M_1}≃1/5000`$ for excitonic molecules in CuCl (the electron-hole mass ratio $`\sigma =0.28`$). This value is in agreement with our experiments, which indicate $`I_{M_2}/I_{M_1}≃1/7000`$. Using a precise wave function of the hydrogen molecule , we find $`I_{M_2}/I_{M_1}≃1/800`$ in the limit $`\sigma =0`$. Thus, within the uncertainty in the value of $`\sigma `$ in $`\beta `$-ZnP<sub>2</sub>, we estimate, for excitonic molecules in this semiconductor, $`I_{M_2}/I_{M_1}≃1/1000`$. The latter value is at least two orders of magnitude smaller than that reported by Gorban et al. . Note that in the Comment the authors have corrected their intensity ratio to $`I_{M_2}/I_{M_1}≃1/230`$, referring to the $`new`$ intensity dependence in Fig. 2 of the Comment, but this intensity dependence contradicts that reported in Fig. 2 of Ref. .
The low-temperature PL spectra of $`\beta `$-ZnP<sub>2</sub> single crystals at moderate optical excitations have been studied in Ref. . According to Ref. , the $`M_1`$ emission band indeed develops at an optical excitation power of about 10 kW/cm<sup>2</sup>, however without any indications of the $`M`$ series. Instead, below the $`M_1`$ band another PL line arises (at 1.523 eV), which grows quadratically with the excitation intensity and has an intensity comparable with that of the $`M_1`$ band. This $`M`$-like emission line is attributed to excitonic molecules trapped by surface defects, because a similar spectral line is present in the reflection spectra but absent in the absorption spectra. Furthermore, the line is not reproducible from sample to sample but found only for a few samples. Note that the above-described $`M`$-like emission at 1.523 eV exactly coincides with the PL line labeled by $`M_2`$ in Fig. 1 of the Comment (see also Fig. 1 of Ref. ). According to Ref. , the molecule energy is 3.1102 $`\pm `$ 0.0002 eV. Therefore the inverse $`M_2`$ emission should arise at 1.519 eV, rather than at 1.523 eV as in Gorban et al.’s data.
Because of its extremely weak intensity, the confirmation of the inverse series requires many severe experimental checks. The two-photon resonant generation of excitonic molecules with translational momentum $`\mathrm{}𝐊_m`$ is crucial for unambiguous discrimination of the $`M`$ series in our experiments. In Fig. 1 we compare the PL spectra of a high-quality CuCl single crystal (sample $`No.1`$) recorded under the selective excitation of the molecules at $`𝐊_m=2𝐤_0`$ (solid curve) and under the interband excitation (dashed curve). Here $`𝐤_0`$ is the wave vector of the pump polaritons. In the latter case the $`M_T`$ and $`M_L`$ emission lines are absent because of weak cw-laser excitation, but only $`1s`$ exciton emissions and the relevant bound exciton lines are present. As shown by the solid curve, even under the two-photon excitation of the molecules, bound exciton structures similar to those in the dashed curve grow quadratically with the excitation intensity together with the $`M`$ series. They originate from the secondary excitons generated by the molecule decay. As shown by the dashed curve, an emission from impurity-bound excitons occurs at 3.082 eV. Therefore, the line marked by $`M_{Z_{1,2}}`$ cannot be attributed to the inverse series, although its energy coincides with the molecule emission leaving the longitudinal $`Z_{1,2}`$ exciton behind . On the other hand, our samples are free from any impurity emissions at the spectral position of the $`M_{2s}M_\mathrm{}s`$ lines. Thus we assign these lines to the inverse $`M`$ series. Furthermore, we checked the reproducibility of the $`M_{2s,3s,4s}`$ series for several high-quality samples of a CuCl single crystal. Figure 2 shows the PL spectrum of sample $`No.2`$ under resonant generation of excitonic molecules at $`𝐊_m=2𝐤_0`$. The period of the interference pattern of sample $`No.2`$ is different from that of sample $`No.1`$. However, the characteristics of the $`M`$ series are identical for both samples.
At the present time we have improved the experimental setup and can observe the optical decay of excitonic molecules selectively excited at various $`𝐊_m`$ ($`0\le K_m\le 2k_0`$). Figure 3 shows the inverse emission series from excitonic molecules with $`𝐊_m\simeq \mathrm{𝟎}`$. Now the molecule cannot decay into the pure excitonic states $`1s`$, $`2s`$, …, but resonantly dissociates into two outgoing polaritons associated with the corresponding optical transitions. Thus the $`M`$ series of Fig. 3 is described as an inverse polariton series. Because the main contribution to the series is due to the resonant dissociation of the molecule into the lower dispersion branch polaritons, we designate the series by $`LP_{ns}`$. Note that the intensity ratio $`I_{LP_{2s}}/I_{LP_{1s}}`$ is about $`1/100`$, i.e., two orders of magnitude larger than the ratio $`I_{M_{2s}}/I_{M_{1s}}`$ corresponding to the molecules initially photogenerated at $`𝐊_m=2𝐤_0`$ . With the bipolariton model of an excitonic molecule , both of the inverse series, $`M_{ns}`$ of Fig. 1 and $`LP_{ns}`$ of Fig. 3, are explained self-consistently using the same trial molecule wave function . This completes our proof of the biexcitonic origin of the inverse series observed in CuCl single crystals. The use of a weak selective resonant excitation of the molecules allows us to avoid the nonlinear reabsorption processes and a thermal distribution of the molecules, which would considerably change the intensity ratios of the inverse series.
In their experiments, Gorban et al. have used only the interband excitation of $`\beta `$-ZnP<sub>2</sub>. Such an excitation is not suitable for the discrimination of the $`M`$ series, as we have shown above. Furthermore, while the authors do not observe their $`M`$ series at moderate interband excitations of $`\beta `$-ZnP<sub>2</sub> (Ar-laser of a few kW/cm<sup>2</sup> intensity), they describe a corresponding rich PL spectrum mainly in terms of the optical decay of nonlocalized excitonic molecules (see the lines marked by $`M_1`$, $`C`$, and $`G`$ in Fig. 3 of Ref. ). According to Gorban et al., the $`G`$-band is due to the degenerate two-photon radiative annihilation of biexcitons with $`𝐊_m\simeq \mathrm{𝟎}`$ and the $`C`$-line is due to a “dielectric liquid” of excitonic molecules. We consider this interpretation absolutely speculative and unjustified. In particular, the $`M_1`$ band of thermally distributed excitonic molecules is not expected to show any fine structure due to the degenerate two-photon annihilation of the molecules . Finally, if the nonlocalized biexcitons are mainly responsible for the complicated PL spectrum plotted in Fig. 3 of Ref. , as the authors incorrectly claim, it is hard to understand why in this case they do not observe their $`M`$ series.
Once the inverse $`M`$-series is observed , the question of the reconstruction of the internal molecule wave function naturally arises. Here, the relevant theoretical questions are how the molecule decays radiatively to the $`ns`$ exciton states and how the relative intensities of the $`M_{ns}`$ replicas relate to the molecule wave function. The theory of the optical decay of an excitonic molecule presented by Gorban et al. is incorrect.
Firstly, the use of Wang’s trial wave function, Eq. (1) of Ref. , means that the biexciton is identified with the hydrogen molecule. The Wang wave function is suitable for H<sub>2</sub> due to the adiabatic approximation, which requires $`(m_e/m_h)^{1/4}\ll 1`$. The latter inequality definitely does not hold even for the incorrect ratio $`m_e/m_h=0.08`$ used by the authors for biexcitons in $`\beta `$-ZnP<sub>2</sub>. In other words, an envelope wave function for the relative motion of the two constituent excitons is absent in Eq. (1) of Ref. . However, the authors ignore the above standard arguments and take the results of the well-known calculations on the hydrogen molecule with the same values of the fitting parameter ($`Z=1.166`$) and binding energy ($`0.31`$ Rydberg). The Wang wave function is irrelevant for excitonic molecules. The corresponding evaluation of the molecule binding energy within the four-particle Schrödinger equation, claimed by the authors, cannot be reproduced.
Secondly, Eq. (2) of Ref. proposed for the intensities of the $`M_{ns}`$ replicas is incorrect, because (i) two of the arguments of the molecule wave function are fixed by $`r_{a1}=0`$ and $`r_{b1}=a_{biex}`$, (ii) one additional integral convolution with the ground-state excitonic envelope wave function is absent (the integration over $`d𝐫_{a1}`$). As a result, the authors end up with an incorrect conclusion, that “the probability of the two-photon radiative transition is $`(a_{biex}/a_{ex})^6`$, in contrast to the probability of the two-photon absorptive one which is $`(a_{biex}/a_{ex})^3`$”. Furthermore, Eq. (2) of Ref. does not include the polariton effects, i.e., both crucial factors, the density of states and Hopfield’s coefficients, are omitted.
Thirdly, the expansion of the molecule wave function given by Eq. (3) of Ref. makes little sense, because (i) the biexciton wave function is assumed to be dependent only on two coordinates $`𝐫_{a1}`$ and $`𝐫_{b2}`$, while the correct wave function should be specified by three independent vector coordinates, (ii) the r.h.s. of Eq. (3) is not symmetric with respect to the permutation of two electrons or holes. The correct expansion of the molecule wave function is given by Eq. (5) of Ref. .
In order to assign the $`M`$-like emissions to the inverse exciton series one needs a severe critical analysis of the experimental data. While the observation of the $`M`$ series in CuCl requires a highly sensitive detection technique and a high-quality sample free from impurity emissions, our results are very stable and reproducible. Furthermore, $`𝐤`$-selective excitation of excitonic molecules is crucial both for the final evidence of the observation of the $`M`$ series and for the reconstruction of the molecule wave function. The bipolariton model allows us to describe self-consistently a continuous, smooth change of the intensities and spectral positions of the $`M_{ns}`$ replicas with the total molecular wavevector $`𝐊_m`$ decreasing from $`2𝐤_0`$ towards $`𝐊_m=\mathrm{𝟎}`$ (compare Fig. 1 and Fig. 3). From the criteria discussed above we conclude that Gorban et al. have not yet presented a convincing set of the experimental data on the $`M`$ series in $`\beta `$-ZnP<sub>2</sub>. |
no-problem/9912/cond-mat9912318.html | ar5iv | text | # Models of the Pseudogap State in Cuprates
## Abstract
We review a certain class of (“nearly”) exactly solvable models of the electronic spectrum of two-dimensional systems with fluctuations of short-range order of “dielectric” (e.g. antiferromagnetic) or “superconducting” type, leading to the formation of an anisotropic pseudogap state on certain parts of the Fermi surface. The models are based on a recurrence procedure for one- and two-electron Green’s functions which takes into account all Feynman diagrams of the perturbation series with the use of an approximate Ansatz for the higher-order terms in this series. These models can be applied to the calculation of the spectral density, the density of states and the conductivity in the normal state, as well as to the calculation of some properties of the superconducting state.
The model of the “nearly antiferromagnetic” Fermi-liquid is based upon the picture of well-developed fluctuations of AFM short-range order in a wide region of the phase diagram. In this model the effective interaction of electrons with spin fluctuations is described via the dynamic spin susceptibility $`\chi _𝐪(\omega )`$, which is determined mainly from fits to NMR experiments :
$$V_{eff}(𝐪,\omega )=g^2\chi _𝐪(\omega )\approx \frac{g^2\xi ^2}{1+\xi ^2(𝐪-𝐐)^2-i\frac{\omega }{\omega _{sf}}}$$
(1)
where $`g`$ is the coupling constant, $`\xi `$ the correlation length of spin fluctuations, $`𝐐=(\pi /a,\pi /a)`$ the vector of antiferromagnetic ordering in the insulating phase, $`\omega _{sf}`$ the characteristic frequency of spin fluctuations, and $`a`$ the lattice spacing.
As the dynamic spin susceptibility $`\chi _𝐪(\omega )`$ has peaks at wave vectors around $`(\pi /a,\pi /a)`$, there appear “two types” of quasiparticles: “hot” quasiparticles with momenta in the vicinity of the “hot spots” on the Fermi surface and “cold” quasiparticles with momenta on the other parts of the Fermi surface, e.g. around the diagonals $`|p_x|=|p_y|`$ of the Brillouin zone .
In the following we shall consider the case of high enough temperatures, $`\pi T\gg \omega _{sf}`$, which corresponds to the region of the “weak pseudogap” . In this case spin dynamics is irrelevant and we can limit ourselves to the static approximation.
We can greatly simplify all calculations if instead of (1) we use another form of model interaction:
$$V_{eff}(𝐪)=\mathrm{\Delta }^2\frac{2\xi ^{-1}}{\xi ^{-2}+(q_x-Q_x)^2}\frac{2\xi ^{-1}}{\xi ^{-2}+(q_y-Q_y)^2}$$
(2)
where $`\mathrm{\Delta }`$ is some phenomenological parameter determining the effective width of the pseudogap. In fact (2) is qualitatively similar to the static limit of (1) and differs from it very slightly in the most important region $`|𝐪-𝐐|<\xi ^{-1}`$.
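This similarity is easy to check numerically. The sketch below (ours, not part of the original; the units $`g^2=\mathrm{\Delta }^2=1`$ are arbitrary) compares the static limit of (1) with the model form (2): the overall normalizations differ, but along a lattice axis both profiles fall to half of their peak value at $`|q_x-Q_x|=\xi ^{-1}`$, which is the sense in which they agree in the region $`|𝐪-𝐐|<\xi ^{-1}`$.

```python
import math

def v_static(qx, qy, Qx=math.pi, Qy=math.pi, g2=1.0, xi=10.0):
    """Static limit of (1): g^2 xi^2 / (1 + xi^2 (q - Q)^2)."""
    q2 = (qx - Qx) ** 2 + (qy - Qy) ** 2
    return g2 * xi ** 2 / (1.0 + xi ** 2 * q2)

def v_model(qx, qy, Qx=math.pi, Qy=math.pi, D2=1.0, xi=10.0):
    """Model interaction (2): Delta^2 times a product of 1D Lorentzians."""
    lx = (2.0 / xi) / (xi ** -2 + (qx - Qx) ** 2)
    ly = (2.0 / xi) / (xi ** -2 + (qy - Qy) ** 2)
    return D2 * lx * ly
```

Both functions peak at $`𝐪=𝐐`$ and decay on the scale $`\xi ^{-1}`$, as the comparison in the text requires.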
The spectrum of “bare” (free) quasiparticles can be taken in the form:
$$\xi _𝐩=-2t(\mathrm{cos}p_xa+\mathrm{cos}p_ya)-4t^{\prime }\mathrm{cos}p_xa\mathrm{cos}p_ya-\mu $$
(3)
where $`t`$ is the nearest-neighbor transfer integral, $`t^{\prime }`$ the next-nearest-neighbor transfer integral on the square lattice, and $`\mu `$ the chemical potential.
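As an aside not present in the original text, the location of the “hot spots” (points of the Fermi surface connected by $`𝐐=(\pi /a,\pi /a)`$, so that $`\xi _𝐩=\xi _{𝐩+𝐐}=0`$) follows directly from (3). The sketch below works in units $`t=1`$, $`a=1`$; the values $`t^{\prime }=-0.4`$ and $`\mu =-1.3`$ are purely illustrative assumptions:

```python
import math

def xi(px, py, t=1.0, tp=-0.4, mu=-1.3):
    """Bare spectrum (3) with lattice spacing a = 1."""
    return (-2.0 * t * (math.cos(px) + math.cos(py))
            - 4.0 * tp * math.cos(px) * math.cos(py) - mu)

def hot_spots(t=1.0, tp=-0.4, mu=-1.3):
    """Points with xi_p = 0 and xi_{p+Q} = 0, where Q = (pi, pi).

    Adding and subtracting the two conditions gives
    cos(p_y) = -cos(p_x) and cos^2(p_x) = mu / (4 tp),
    which is solvable whenever 0 <= mu/(4 tp) <= 1.
    """
    r = mu / (4.0 * tp)
    if not 0.0 <= r <= 1.0:
        return []                     # no hot spots for these parameters
    c = math.sqrt(r)
    px, py = math.acos(c), math.acos(-c)
    # the two hot spots in the quadrant 0 <= p_x, p_y <= pi;
    # the remaining ones follow by reflection symmetry
    return [(px, py), (py, px)]
```

For this dispersion both spots lie on the magnetic zone boundary $`p_x+p_y=\pi `$, since $`\xi _𝐩-\xi _{𝐩+𝐐}=0`$ forces $`\mathrm{cos}p_y=-\mathrm{cos}p_x`$.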
A much simpler model of the electronic spectrum assumes that the Fermi surface of the two-dimensional system has nesting (“hot”) patches of finite angular size $`\alpha `$ in the $`(0,\pi )`$ and symmetric directions of the Brillouin zone, as shown in Fig. 2 . A similar Fermi surface was observed in a number of ARPES experiments on cuprate superconductors. Here we assume that fluctuations interact only with electrons from the “hot” (nesting) patches of the Fermi surface, and that the scattering vector is either $`Q_x=\pm 2p_F`$, $`Q_y=0`$ or $`Q_y=\pm 2p_F`$, $`Q_x=0`$ for incommensurate fluctuations, while $`𝐐=(\pi /a,\pi /a)`$ for the commensurate case. It is easily seen that this scattering is in fact of one-dimensional nature. In this case non-trivial contributions of the interaction with fluctuations appear only for electrons from the “hot” patches, while electrons on the “cold” parts of the Fermi surface remain free.
These models can be solved exactly in the limit of infinite correlation length $`\xi \to \infty `$, using methods developed in Refs. . For the case of finite $`\xi `$ we can use an approximate Ansatz, proposed for the one-dimensional case in Ref. and further developed for two-dimensional systems in Refs. . According to this Ansatz, the contribution to the electron self-energy of an arbitrary diagram of $`N`$-th order in the interaction (2) has the form:
$$\mathrm{\Sigma }^{(N)}(\epsilon _n𝐩)=\mathrm{\Delta }^{2N}\prod _{j=1}^{2N-1}\frac{1}{i\epsilon _n-\xi _j+in_jv_j\kappa }$$
(4)
where for the “hot spots” model $`\xi _j=\xi _{𝐩+𝐐}`$ and $`v_j=|v_{𝐩+𝐐}^x|+|v_{𝐩+𝐐}^y|`$ for odd $`j`$, and $`\xi _j=\xi _𝐩`$ and $`v_j=|v_𝐩^x|+|v_𝐩^y|`$ for even $`j`$ – the appropriate combinations of velocity projections, determined by the “bare” spectrum (3). For the “hot patches” model $`\xi _j=(-1)^j\xi _𝐩`$, $`v_j=v_F`$. Here $`n_j`$ is the number of interaction lines enveloping the $`j`$-th Green’s function in a given diagram, and $`\kappa =\xi ^{-1}`$. In this case any diagram with intersecting interaction lines is actually equal to some diagram of the same order with noncrossing interaction lines. Thus we can in fact consider only diagrams with nonintersecting interaction lines, taking diagrams with intersecting lines into account by introducing additional combinatorial factors into the interaction vertices. This method was used for the one-dimensional model of the pseudogap state in Refs. .
As a result we obtain the following recursion relation for the one-electron Green’s function (a continued fraction representation) :
$$G^{-1}(\epsilon _n\xi _𝐩)=G_0^{-1}(\epsilon _n\xi _𝐩)-\mathrm{\Sigma }_1(\epsilon _n\xi _𝐩)$$
(5)
$$\mathrm{\Sigma }_k(\epsilon _n\xi _𝐩)=\mathrm{\Delta }^2\frac{v(k)}{i\epsilon _n-\xi _k+ikv_k\kappa -\mathrm{\Sigma }_{k+1}(\epsilon _n\xi _𝐩)}$$
(6)
Combinatorial factor:
$$v(k)=k$$
(7)
corresponds to the case of commensurate fluctuations with $`𝐐=(\pi /a,\pi /a)`$ . For the incommensurate case :
$$v(k)=\{\begin{array}{cc}\frac{k+1}{2}& \text{for odd }k\\ \frac{k}{2}& \text{for even }k\end{array}$$
(8)
In Ref. the spin structure of the effective interaction within the model of the “nearly antiferromagnetic” Fermi-liquid was taken into account (the spin-fermion model). This leads to a more complicated combinatorics of diagrams. The spin-conserving part of the interaction formally gives commensurate combinatorics, while spin-flip scattering is described by diagrams with combinatorics of the incommensurate type. In this case :
$$v(k)=\{\begin{array}{cc}\frac{k+2}{3}& \text{for odd }k\\ \frac{k}{3}& \text{for even }k\end{array}$$
(9)
This approach was generalized to the calculation of the two-electron Green’s function in Ref. , where we formulated recursion relations for the vertex part describing the electronic response to an external electromagnetic field, allowing calculations of the conductivity.
The Ansatz of (4) is in fact not precisely exact ; however, the analysis of Ref. shows that it is quantitatively good in most interesting cases. These conclusions were confirmed in recent papers , where the one-dimensional version of our model was solved by exact numerical diagonalization. In Fig. 3 we show the comparison of results obtained for the density of states in the case of incommensurate fluctuations by exact numerics and via our recursion relations. We can see an extremely good correspondence, sufficient for any practical purposes. In the case of commensurate fluctuations in one dimension the Ansatz of (4) misses a certain Dyson-type singularity of the density of states appearing at the center of the pseudogap , but this singularity is apparently absent in the two-dimensional case.
Consider the one-electron spectral density:
$$A(E𝐩)=-\frac{1}{\pi }\mathrm{Im}G^R(E𝐩)$$
(10)
where $`G^R(E𝐩)`$ is the retarded Green’s function, obtained by the usual analytic continuation of (5) to the real axis of energy $`E`$. In Fig. 4 we show typical energy dependencies of $`A(E𝐩)`$ obtained for the “hot spots” model. More detailed results can be found in Refs. . A similar non-Fermi-liquid-like behavior of the spectral density is easily obtained on the “hot patches” of the Fermi surface shown in Fig. 2 (cf. Ref. ).
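Such spectral densities are straightforward to reproduce numerically. The sketch below (ours, not part of the original; all parameter values are illustrative) evaluates the continued fraction (6) by downward recursion from a truncation depth, after the analytic continuation $`i\epsilon _n\to E+i\delta `$, and returns the spectral density (10):

```python
import numpy as np

def v_comm(k):        # Eq. (7): commensurate combinatorics
    return k

def v_incomm(k):      # Eq. (8): incommensurate combinatorics
    return (k + 1) // 2 if k % 2 else k // 2

def G_retarded(E, xi_p, xi_pQ, v_p, v_pQ, Delta, kappa,
               v=v_comm, depth=400, delta=1e-3):
    """Continued fraction (6), continued to real energies i*eps_n -> E + i*delta.

    Odd recursion levels carry (xi_{p+Q}, v_{p+Q}), even levels (xi_p, v_p);
    the fraction is truncated by setting Sigma_{depth+1} = 0.
    """
    sigma = 0.0j
    for k in range(depth, 0, -1):
        xi_k, v_k = (xi_pQ, v_pQ) if k % 2 else (xi_p, v_p)
        sigma = Delta ** 2 * v(k) / (E + 1j * delta - xi_k
                                     + 1j * k * v_k * kappa - sigma)
    return 1.0 / (E + 1j * delta - xi_p - sigma)

def spectral_density(E, **kw):
    """A(E,p) = -(1/pi) Im G^R(E,p), Eq. (10)."""
    return -G_retarded(E, **kw).imag / np.pi
```

At a hot spot ($`\xi _𝐩=\xi _{𝐩+𝐐}=0`$) this produces the characteristic two-peak pseudogap structure, while the sum rule $`\int dEA(E𝐩)=1`$ provides a convenient check of the truncation depth.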
Let us stress that our solution (6) is exact both for $`\xi \to \infty `$ and $`\xi \to 0`$, while in the region of finite $`\xi `$ it apparently provides a very good interpolation.
The one-electron density of states:
$$N(E)=\sum _𝐩A(E,𝐩)=-\frac{1}{\pi }\sum _𝐩\mathrm{Im}G^R(E𝐩)$$
(11)
is determined by the integral of the spectral density $`A(E𝐩)`$ over the Brillouin zone. Detailed results for the “hot spots” model were obtained in Ref. , demonstrating a smeared pseudogap with a weak dependence on the correlation length $`\xi `$. For the “hot patches” model this pseudogap is more pronounced, depending on the size of these patches.
The “hot patches” model was applied by us to calculations of the optical conductivity and also (for the simplest case of $`\xi \to \infty `$) to the study of the main superconducting properties .
Pseudogap phenomena can also be explained using the idea of fluctuation Cooper pairing at temperatures higher than the superconducting transition temperature $`T_c`$. Anticipating the possibility of both $`s`$-wave and $`d`$-wave pairing, we introduce the pairing interaction of the simplest (separable) form:
$$V(𝐩,𝐩^{\prime })=Ve(\varphi )e(\varphi ^{\prime })$$
(12)
where $`\varphi `$ is the polar angle determining the direction of the electronic momentum $`𝐩`$ in the plane, while for $`e(\varphi )`$ we assume the model dependence:
$$e(\varphi )=\{\begin{array}{cc}1& s\text{-wave pairing}\\ \sqrt{2}\mathrm{cos}(2\varphi )& d\text{-wave pairing}\end{array}$$
(13)
Analogously to the transition from (1) to (2), we introduce the model interaction (the static fluctuation propagator of Cooper pairs):
$$V_{eff}(𝐪)=\mathrm{\Delta }^2e^2(\varphi )\frac{2\xi ^{-1}}{\xi ^{-2}+q_x^2}\frac{2\xi ^{-1}}{\xi ^{-2}+q_y^2}$$
(14)
where $`\mathrm{\Delta }`$ determines the width of the superconducting pseudogap. We can see that mathematically this problem is practically the same as the “hot spots” model, but always with the combinatorics of the incommensurate case . Then we again obtain a recurrence relation for the Green’s function of the type of (6):
$$\mathrm{\Sigma }_k(\epsilon _n\xi _𝐩)=\frac{\mathrm{\Delta }^2e^2(\varphi )v(k)}{i\epsilon _n-(-1)^k\xi _𝐩+ik(|v_x|+|v_y|)\kappa -\mathrm{\Sigma }_{k+1}(\epsilon _n\xi _𝐩)}$$
(15)
where $`v_x=v_F\mathrm{cos}\varphi `$, $`v_y=v_F\mathrm{sin}\varphi `$, $`\kappa =\xi ^{-1}`$, $`\epsilon _n>0`$, and $`v(k)`$ was defined in (8).
Energy dependencies of the spectral density $`A(E𝐩)`$ for the one-particle Green’s function (10) can be calculated from (15) for different values of the polar angle $`\varphi `$, determining the direction of the electronic momentum in the plane, for the case of fluctuations of $`d`$-wave pairing. Calculations show that in the vicinity of the point $`(\pi /a,0)`$ in the Brillouin zone this spectral density is non-Fermi-liquid-like (pseudogap behavior). As the vector $`𝐩`$ rotates towards the zone diagonal the two-peak structure gradually disappears and the spectral density transforms to a typical Fermi-liquid form with a single peak, which narrows as $`\varphi `$ approaches $`\pi /4`$. An analogous transformation of the spectral density takes place as the correlation length $`\xi `$ becomes smaller. In the case of fluctuation pairing of $`s`$-wave type the pseudogap appears isotropically on the whole Fermi surface.
no-problem/9912/astro-ph9912445.html | ar5iv | text | # PN – ISM interaction: The observational evidence
## 1. Introduction
Many aspects of the physics and shapes of PNe can successfully be explained in terms of the two-wind model by Kwok, Purton, & Fitzgerald (1978) as the product of the mass-loss history of the star on the asymptotic giant branch (AGB) and during the central star evolution.
Our observations indicate that at some point in the evolution of PNe other factors may become very important for the further development, and for such objects the two-wind paradigm may break down. This work has implications for a hitherto neglected aspect of PNe: these objects give evidence for an important process that remains very difficult to study, namely the return of processed nuclear matter to the ISM. This material leads to the chemical evolution of galaxies. Old PNe in decay are the very last objects that can be observed before the nebular material is fully dispersed and mixes with the ISM.
Actually this is one of the very few methods to study the properties of the ISM directly; in this sense PNe can act as probes performing an active experiment on the ISM.
## 2. The observations
Earlier work on interacting PNe by Tweedy & Kwitter (1996) and Xilouris et al. (1996) has given us images of a sample of very large (5 to 20 arcmin in diameter) PNe at an angular resolution of about 5 arcsec. In our survey (Kerber 1998) we have concentrated on high-resolution imaging of smaller ($`<`$ 5 arcmin) PNe combined with long-slit spectroscopy.
We have collected the largest, homogeneous data set on old PNe interacting with the ISM for study of the physical properties of these objects. In Table 1 our sample is summarized. A more detailed description of the observations and some individual objects can be found in Rauch et al. (2000 this volume).
## 3. Observational Results
In our sample of 21 low surface brightness PNe we have found signs of interaction with the ISM in 15 cases. This unexpectedly large percentage may be the result of an observational bias: the interaction leads to a – usually asymmetric – brightness enhancement in these low surface brightness objects, facilitating their discovery; to put it differently, some of the nebulae might not have been discovered had it not been for the interaction with the ISM.
For at least five of these 15 PNe the central star has been found to be displaced from the geometrical center, indicating a very advanced stage of interaction. Combined with the fact that the spectroscopically derived electron densities are low in all cases and very low for most of the objects (Tab. 3), this is clear evidence that the interaction is common in evolved PNe. Using long-slit spectroscopy we have for the first time been able to characterize the plasma parameters of these nebulae, demonstrating that the interaction regions usually show an increased electron density and – in most cases – a pronounced enhancement of the low-ionization stages. The \[N ii\]/H$`\alpha `$ ratios show absolute values of 1 to 4, with one example reaching 12, and increase by factors of up to 3.5 compared with the inner parts of the nebulae, which are not affected by the interaction. A correspondingly lower excitation class is also observed.
By combining both imaging and spectroscopy, we are therefore able to diagnose the degree of interaction.
All of the above is consistent with the current theoretical understanding of the interaction process as described by Borkowski, Sarazin, & Soker (1990), Soker, Borkowski, & Sarazin (1991) and Dgani (2000 this volume). A schematic description of the interaction process can be found in Rauch et al. (2000).
Morphologically, our deep high-resolution images show an enormous amount of detail and clear indications of instability; in some cases the nebulae are obviously in the process of being broken apart (see, for example, the images of MeWe 1-4 and KFR 1 in Rauch et al. 2000).
It has been shown recently by Soker & Dgani (1997) and Dgani & Soker (1998) that the effect of the interstellar medium’s magnetic field (ISMF) can be extremely important. We see evidence for the ISMF’s action in some of our objects in the form of stripes or rolls as described by Dgani (2000) in this volume.
## 4. Future work
In this project we have already begun to extend the sample to the northern hemisphere and have included very large objects requiring wide-field imaging. Another important aspect will be the study of the central stars giving us information on their evolutionary status, as well as the spectroscopic distance.
## Acknowledgments
This work was supported by travel grants from the Austrian Ministerium für Wissenschaft, Forschung und Verkehr and the Universität Innsbruck and by the DLR under grant 50 OR 9705 5 (TR).
## References
Borkowski K.J., Sarazin C.L., & Soker N., 1990, ApJ 360, 173
Dgani R., & Soker N., 1998, ApJ 495, 337
Kerber F., 1998, Rev. in Mod. Astronomy 11, ed. R.E. Schielicke, Astronomische Gesellschaft, Jena, p. 161
Kwok S., Purton C.R., & Fitzgerald P.M., 1978, ApJ 219, L 125
Soker N., & Dgani R., 1997, ApJ 484, 277
Soker N., Borkowski K.J., & Sarazin C.L., 1991, AJ 102, 1381
Tweedy R.W., & Kwitter K.B., 1996, ApJS 107, 255
Xilouris K.M., Papamastorakis J., Paleologou E., & Terzian Y., 1996, A&A 310, 603 |
no-problem/9912/astro-ph9912137.html | ar5iv | text | # An Accretion Model for Anomalous X-Ray Pulsars
## 1 Introduction
Anomalous X-ray pulsars (AXPs) are a subclass of X-ray pulsars comprising approximately half a dozen members with certain well-determined properties (e.g. Mereghetti & Stella 1995; van Paradijs, Taam & van den Heuvel 1995). AXPs are sources of pulsed X-ray emission with steadily increasing periods lying in a narrow range, $`P\sim 6\text{–}12`$ seconds, and having characteristic ages $`P/2\dot{P}\sim 10^3\text{–}10^5`$ years. While it is generally believed that AXPs are neutron stars, they differ significantly from other manifestations of these objects. AXPs have periods more than an order of magnitude larger than those of typical radio pulsars. Compared with high-mass X-ray binary pulsars, AXPs have relatively low luminosities, $`L_x\sim 10^{35}\text{–}10^{36}`$ erg/sec, and soft spectra. Generally, AXP spectra are well-fitted by a combination of blackbody and power-law contributions, with effective temperatures and photon indices in the ranges $`T_e\sim 0.3\text{–}0.4`$ keV and $`\mathrm{\Gamma }\sim 3\text{–}4`$, respectively. In addition, AXPs have no detectable binary companions and at least two have been associated with young supernova remnants.
A particularly uncertain aspect of AXPs is the energy source that supports their X-ray emission. From timing measurements, it is clear that they cannot be rotation-powered. For values characteristic of AXPs, the rate of loss of rotational energy is $`|\dot{E}|=4\pi ^2I\dot{P}/P^3\sim 10^{32.5}`$ erg/sec, orders of magnitude smaller than their X-ray luminosities. Motivated by this discrepancy, two competing theories for AXPs have emerged. In one, the energy source for the X-rays is internal and the AXPs are modeled as isolated, ultramagnetized neutron stars (“magnetars”; Duncan & Thompson 1992), powered either by residual thermal energy (Heyl & Hernquist 1997a) or by magnetic field decay (Thompson & Duncan 1996). The observed X-ray luminosities place severe limitations on both possibilities. If residual thermal energy drives the emission, then the envelope of the neutron star must consist primarily of light elements (Heyl & Hernquist 1997b). If magnetic field decay supplies the energy, then non-standard decay processes may be required unless the field is $`B\stackrel{>}{}10^{16}`$ G (Heyl & Kulkarni 1998). In either case, the measured periods and estimated period derivatives and ages are consistent with field strengths $`B\sim 10^{14}\text{–}10^{15}`$ G. These values are similar to those inferred for soft gamma repeaters (SGRs) from timing data (e.g. Kouveliotou et al. 1998, 1999; but see, however, Marsden, Rothschild & Lingenfelter 1999).
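The quoted mismatch is easy to verify with a back-of-the-envelope sketch (ours, not from the paper; the fiducial moment of inertia $`I=10^{45}`$ g cm<sup>2</sup> is an assumption not stated in the text):

```python
import math

I_NS = 1.0e45        # neutron star moment of inertia in g cm^2 (fiducial value)
YEAR = 3.156e7       # seconds per year

def pdot_from_age(P, age_yr):
    """Characteristic age tau = P / (2 Pdot), hence Pdot = P / (2 tau)."""
    return P / (2.0 * age_yr * YEAR)

def spin_down_luminosity(P, Pdot):
    """|Edot| = 4 pi^2 I Pdot / P^3, in erg/s."""
    return 4.0 * math.pi ** 2 * I_NS * Pdot / P ** 3
```

For $`P=6`$ s and a characteristic age of $`10^4`$ yr this gives $`|\dot{E}|\sim 2\times 10^{33}`$ erg/s, and over the full quoted ranges of period and age the result stays far below the observed X-ray luminosities, which is the discrepancy motivating both classes of models.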
The second class of theories for AXPs invokes accretion to power the X-ray emission. Several variants of this hypothesis have been put forward. Mereghetti & Stella (1995) suggested that AXPs are neutron stars with magnetic fields similar to those of radio pulsars, accreting from binary companions of very low mass. Wang (1997) proposed that the AXP candidate RX J0720.4-3125 is an old neutron star accreting from the interstellar medium. (For an interpretation of this object as an ultramagnetized star, see Heyl & Hernquist 1998a.) In another model, AXPs are descendants of high-mass X-ray binaries and are accreting from the debris of a disrupted binary companion, following a period of common-envelope evolution (van Paradijs et al. 1995; Ghosh, Angelini & White 1997). In all these analyses, the magnetic field inferred from the observed luminosities and spin periods is $`B\stackrel{<}{}10^{12}`$ G, if the neutron star is accreting near its equilibrium spin period.
A potential advantage of accretion-powered models for AXPs over those based on ultrastrong magnetic fields is that these neutron stars then have intrinsic properties similar to those of ordinary radio pulsars and luminous X-ray pulsars. However, efforts to detect binary companions of AXPs have been unsuccessful and have placed severe limits on companion masses (e.g. Mereghetti, Israel & Stella 1998; Wilson et al. 1998). Moreover, it is clear that if AXPs are accreting, then they cannot be in equilibrium with their accretion disks, since all these objects are observed to spin down. Ghosh et al. (1997) suggest that this deficiency can be overcome by time-dependent accretion, but do not demonstrate whether a young, rapidly spinning neutron star can be spun down to periods $`P\sim 10`$ seconds on timescales consistent with, e.g., age estimates based on associations with supernova remnants (e.g. Vasisht & Gotthelf 1997).
In this paper, we propose that AXPs are indeed accreting, but that the accretion disk was established as part of the process which formed the neutron star, through fallback of material from the progenitor following the supernova explosion. After an initial phase of transient evolution, the accretion rate in this scenario declines steadily as the material in the disk is depleted. At late times, the accretion rate evolves self-similarly, and the torques on the star spin it down in a regular manner. We demonstrate that for plausible choices of the parameters, this time-dependent accretion yields periods, ages, and luminosities consistent with the observed properties of AXPs.
## 2 Accretion from a Debris Disk
Following the formation of a neutron star through the core collapse of a massive progenitor, it is plausible that mass ejection during the explosion will not be perfectly efficient and that a small amount of mass, $`M_{fb}`$, can fall back. The amount of this fallback is uncertain, but various lines of argument suggest that $`M_{fb}\stackrel{<}{}0.1M_{\odot }`$ is not unreasonable (Lin, Woosley & Bodenheimer 1991; Chevalier 1989). Much of this material will be directly accreted by the neutron star, and some may be expelled when the accretion rate exceeds the Eddington limit, depending on the relative contribution of neutrino emission to the energy loss rate. However, some of the fallback is likely to carry sufficient angular momentum that it will settle into an accretion disk of mass $`M_d`$ around the neutron star. The relationship between $`M_{fb}`$ and $`M_d`$ depends on the detailed properties of the progenitor, but our model for AXP formation does not require large disk masses and can, in principle, operate even if $`M_d\ll M_{fb}`$.
The fate of the material settling into the disk is uncertain, but the problem we are describing is similar to tidal disruption of stars by massive black holes. Numerical simulations show that material captured by the black hole in such an event will circularize into an accretion disk on roughly the local dynamical time and that the subsequent evolution of the disk through viscous processes can be characterized in a simple manner (Cannizzo, Lee & Goodman 1990). In our analysis, we assume that fallback will establish a thin accretion disk on a timescale $`T\sim 1`$ msec. Detailed calculations show that the long-term behavior of the system is insensitive to the numerical value of $`T`$.
The accretion disk will exist only for radii beyond the magnetospheric radius $`R_m\simeq 0.5r_A=6.6\times 10^7B_{12}^{4/7}\dot{m}^{-2/7}`$ cm (Frank, King & Raine 1992), where the Alfvén radius, $`r_A`$, indicates the transition point at which the flow becomes dominated by the magnetic field, and $`B_{12}`$ is the surface magnetic field strength of the star in units of $`10^{12}`$ G. We parameterize the mass accretion rate through the disk, $`\dot{M}`$, in ratio to the Eddington rate, $`\dot{M}_E=9.75\times 10^{17}`$ g/s, by $`\dot{m}\equiv \dot{M}/\dot{M}_E`$. (In numerical estimates, we always take the neutron star mass and radius to be $`M_{*}=1.4M_{\odot }`$ and $`R_{*}=10^6`$ cm, respectively.) Note, however, that we allow for the possibility that the mass accretion rate onto the surface of the star, $`\dot{M_X}`$, will be different from $`\dot{M}`$, if some of the material is driven from the system prior to reaching the neutron star surface. Thus, the X-ray luminosity will be set by $`\dot{M_X}`$ rather than by $`\dot{M}`$, but we will generally take the location of $`R_m`$ and the torque applied to the neutron star to be determined by $`\dot{M}`$.
Whether or not the disk will influence the spindown of the neutron star or lead to accretion-induced X-ray emission will depend on the location of $`R_m`$ relative to the light cylinder radius, $`R_{lc}=c/\mathrm{\Omega }`$, and the corotation radius $`R_c`$ defined by $`\mathrm{\Omega }=\mathrm{\Omega }_K(R_c)`$, where $`\mathrm{\Omega }`$ is the angular rotation frequency of the star and $`\mathrm{\Omega }_K(R)`$ is the Keplerian rotation rate at radius $`R`$ (e.g. Shapiro & Teukolsky 1983). If at any time $`R_m>R_{lc}`$, we assume that the disk will effectively evolve independently and will not affect the neutron star. Thus, we anticipate that even if all neutron stars acquire debris disks through fallback, most will live as ordinary radio pulsars, for appropriate choices of $`M_d`$, $`B`$, and initial spin period, $`P_0`$. In other cases, accretion will be permitted and the neutron star will behave as an accreting X-ray source. As we shall see, according to this model it is possible for a given neutron star to spend a portion of its life either as a radio pulsar or as an accreting source.
During accretion, the interaction between the disk and the star can lead to several distinct phases of evolution. If $`R_m>R_c`$, which corresponds to $`\mathrm{\Omega }>\mathrm{\Omega }_K(R_m)`$, the “propeller” effect will operate, so that accretion onto the surface of the star will be inefficient owing to centrifugal forces acting on the matter (Illarionov & Sunyaev 1975). The details of this process are uncertain. During the propeller phase, we assume that $`\dot{M_X}\ll \dot{M}`$ but that spindown will nevertheless be efficient. In other words, matter will be driven to $`R_m`$ by viscous processes where it can torque the star, but mass ejection occurs subsequently before the material reaches the star’s surface. Thus, such an object will spin down rapidly but will be X-ray faint. Later, as $`\mathrm{\Omega }`$ approaches $`\mathrm{\Omega }_K(R_m)`$, the system enters a quasi-equilibrium “tracking” phase, in which the spin of the star roughly matches the rotation period of the disk at $`R_m`$. However, because the mass accretion rate declines steadily, a true equilibrium is never attained, unlike in the case of accreting neutron stars in binaries. In this phase, we assume $`\dot{M_X}\simeq \dot{M}`$ and that the star will be X-ray bright. We parameterize the time of transition from the propeller phase to the tracking phase by $`t_{trans}`$.
Analyses of accretion onto black holes suggest that the infall will eventually become an advection-dominated accretion flow (an ADAF; Narayan & Yi 1994, 1995; Abramowicz et al. 1995; Narayan, Mahadevan & Quataert 1998; Kato, Fukue, & Mineshige 1998) and that much of the mass will be ejected prior to reaching the surface of the star (e.g. Blandford & Begelman 1999, Menou et al. 1999, Quataert & Narayan 1999). The ADAF transition will occur at a characteristic time $`t=t_{ADAF}`$, when the associated accretion luminosity falls to $`\sim 0.01L_E`$ (this is a rough estimate, cf. Narayan & Yi 1995), where the Eddington luminosity is $`L_E=1.8\times 10^{38}`$ ergs/s. At the beginning of the ADAF phase, we expect the star to be X-ray bright, but the X-ray luminosity will decline rapidly with time as $`\dot{M_X}`$ becomes much smaller than $`\dot{M}`$. Thus we expect to observe bright sources only between $`t_{trans}`$ and $`2t_{ADAF}`$. We identify this phase of evolution with the AXPs. For the parameters of interest (§4), the AXP phase corresponds to no more than an order of magnitude in time, say from $`t\sim 5000`$ yr to $`t\sim 40000`$ yr. It is this slender range that we hope to exploit to explain the surprisingly narrow distribution of AXP periods.
If, on the other hand, the neutron star enters the ADAF phase before reaching the efficiently accreting tracking phase, then it remains a dim “propeller system” for most of its lifetime, and is never seen as a bright AXP.
## 3 Details of the Model
Cannizzo et al. (1990) have shown that once the accretion disk is established and evolves under the influence of viscous processes, the accretion rate declines self-similarly according to $`\dot{m}\propto t^{-\alpha }`$. Motivated by these results, we choose to parameterize the accretion rate in our model by
$$\dot{m}=\dot{m}_0,\mathrm{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}0<t<T$$
$$\dot{m}=\dot{m}_0\left(\frac{t}{T}\right)^{-\alpha },\mathrm{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}t\ge T,$$
$`(1)`$
where $`T`$ is of order the dynamical time in the inner parts of the disk early on, and $`\dot{m}_0`$ is a constant, which we normalize to the total mass of the disk, $`M_d=\int _0^{\infty }\dot{M}𝑑t`$, by $`\dot{m}_0=[(\alpha -1)M_d]/[\alpha \dot{M}_ET]`$, assuming $`\alpha >1`$. Cannizzo et al. (1990) find $`\alpha =19/16`$ for a disk in which the opacity is dominated by electron scattering and $`\alpha =1.25`$ for a Kramers opacity. The evolution at early times during the initial settling of the disk is uncertain, and will probably need to be studied numerically. It is possible that $`\alpha `$ could differ from these values during an early phase when the accretion is highly super-Eddington, thereby compromising the relationship between $`M_d`$ and $`\dot{m}_0`$. However, our results at late times are insensitive to this initial transient behavior and the form for the accretion rate given in equation (1) should be appropriate.
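The accretion law of equation (1), together with its normalization to the disk mass, can be written out directly. The sketch below is illustrative (names and defaults are ours; $`\alpha =7/6`$ anticipates the choice made later in this section):

```python
MDOT_E = 9.75e17  # Eddington accretion rate [g/s]

def mdot(t, T, M_d, alpha=7.0 / 6.0):
    """Dimensionless accretion rate mdot = Mdot/Mdot_E of eq. (1),
    normalized so the time integral of Mdot equals the disk mass M_d [g];
    requires alpha > 1."""
    mdot0 = (alpha - 1.0) * M_d / (alpha * MDOT_E * T)
    return mdot0 if t < T else mdot0 * (t / T)**(-alpha)
```

For $`M_d=0.006M_{}`$ and $`T=1`$ ms this gives $`\dot{m}_0\sim 10^{15}`$, i.e. a hugely super-Eddington early phase, consistent with the initial transient discussed above.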
Given the accretion history from equation (1), the time-dependence of the stellar spin can be computed from
$$I\dot{\mathrm{\Omega }}=\dot{J},$$
$`(2)`$
where $`I`$ is the moment of inertia of the star and $`\dot{J}`$ is the torque acting on the star from the accretion disk. Throughout, we assume that torque will be applied only when $`R_m<R_{lc}`$. For the range of parameters chosen below, $`R_m<R_{}`$ at $`t=0`$, and hence the accretion disk extends down to the surface of the star. Thus, the torque exerted on the star is $`\dot{J}=\dot{M}_ER_{}^2\mathrm{\Omega }_K(R_{})`$ (see, e.g., Popham & Narayan 1991); note that we have assumed that the accretion rate at the very surface of the star can be at most at the Eddington rate, with any excess material being blown away before it can reach the star’s surface by the pressure of radiation from the star. If $`t_{R_{}}`$ is the time at which $`R_m`$ equals $`R_{}`$, we take the subsequent torque to be given by the following simple heuristic formula (see Menou et al. 1999):
$$\dot{J}=2\dot{M}R_m^2\mathrm{\Omega }_K(R_m)\left(1-\frac{\mathrm{\Omega }}{\mathrm{\Omega }_K(R_m)}\right).$$
$`(3)`$
When $`\mathrm{\Omega }`$ is very large compared to the “equilibrium” angular velocity $`\mathrm{\Omega }_K(R_m)`$, we have the propeller phase in which the spin-down torque is large as a result of the large angular momentum transferred to the ejected material. The torque approaches zero as $`\mathrm{\Omega }`$ approaches $`\mathrm{\Omega }_K(R_m)`$, and becomes a spin-up torque for $`\mathrm{\Omega }<\mathrm{\Omega }_K(R_m)`$ (this never happens in our scenario, except for a brief period of a month or so very early in the history of the neutron star, before it enters the propeller phase).
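The sign structure of the heuristic torque in equation (3) can be made explicit in a short sketch (ours; the Keplerian helper and the $`1.4M_{\odot }`$ mass are assumptions): the torque is negative for $`\mathrm{\Omega }>\mathrm{\Omega }_K(R_m)`$, zero at corotation, and positive below.

```python
import math

G = 6.674e-8             # gravitational constant [cgs]
M_STAR = 1.4 * 1.989e33  # neutron star mass [g]

def omega_k(R):
    """Keplerian angular velocity at radius R [cm]."""
    return math.sqrt(G * M_STAR / R**3)

def torque(Mdot, R_m, Omega):
    """Heuristic disk torque of eq. (3) [g cm^2 / s^2]."""
    return 2.0 * Mdot * R_m**2 * omega_k(R_m) * (1.0 - Omega / omega_k(R_m))
```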
Equation (2) coupled with equation (3) for the torque can be integrated to yield an analytic solution for $`\mathrm{\Omega }(t)`$ in terms of incomplete gamma functions for arbitrary $`\alpha `$. A particularly simple form results from the choice $`\alpha =7/6`$. Since this value is very nearly that found by Cannizzo et al. (1990) for disks dominated by either electron scattering or Kramers opacities, we will adopt $`\alpha =7/6`$ in what follows. The solution of equations (2) and (3) for $`t>t_{R_{}}`$ is then
$$\mathrm{\Omega }(t)=\mathrm{\Omega }(t_{R_{}})e^{2c_1(t_{R_{}}^{1/2}-t^{1/2})}+2c_2\left[Ei(2c_1t^{1/2})-Ei(2c_1t_{R_{}}^{1/2})\right]e^{-2c_1t^{1/2}},$$
$`(4)`$
where $`Ei(x)=\int _{-\infty }^x(e^y/y)𝑑y`$ is an exponential integral, $`c_1=2\dot{m}_0\dot{M}_ER_{m,0}^2T^{1/2}/I`$, and $`c_2=2\dot{m}_0\dot{M}_ER_{m,0}^2\mathrm{\Omega }_K(R_{m,0})T/I`$. Quantities subscripted by zero refer to their values at $`t=0`$.
For arbitrary $`\alpha `$, the solution to equations (2) and (3) in the limit $`t\gg t_{R_{}}`$ can be obtained using an asymptotic expansion, yielding $`\mathrm{\Omega }(t)\simeq \mathrm{\Omega }_K(R_{m,0})(t/T)^{-3\alpha /7}`$. The corresponding characteristic age, $`\tau _c\equiv \mathrm{\Omega }/2|\dot{\mathrm{\Omega }}|`$, and braking index, $`n\equiv \mathrm{\Omega }\ddot{\mathrm{\Omega }}/\dot{\mathrm{\Omega }}^2`$, then become $`\tau _c\simeq 7t/6\alpha `$ and $`n=(7+3\alpha )/3\alpha `$. Note that for $`\alpha =7/6`$, $`\tau _c\simeq t`$ and $`n=3`$, which are identical to those for a radio pulsar spinning down by magnetic dipole radiation. Thus, remarkably, the steady timing behavior of a star being spun down by a fossil accretion disk with $`\alpha =7/6`$ will be indistinguishable from that of a radio pulsar spinning down by emitting magnetic dipole radiation.
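These asymptotic expressions are easy to verify numerically: differentiating the power law $`\mathrm{\Omega }\propto (t/T)^{-3\alpha /7}`$ by finite differences recovers $`\tau _c`$ and $`n`$ without reference to the full solution (4). This check is our own sketch, not part of the paper:

```python
def age_and_braking_index(alpha, t=1.0e4, h=1.0):
    """Finite-difference tau_c = Omega/(2|dOmega/dt|) and n = Omega*Omega''/Omega'^2
    for the asymptotic law Omega = t^(-3*alpha/7) (time scaled so T = 1)."""
    f = lambda x: x**(-3.0 * alpha / 7.0)
    d1 = (f(t + h) - f(t - h)) / (2.0 * h)            # first derivative
    d2 = (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2    # second derivative
    tau_c = -f(t) / (2.0 * d1)
    n = f(t) * d2 / d1**2
    return tau_c, n
```

For $`\alpha =7/6`$ this returns $`\tau _c\simeq t`$ and $`n\simeq 3`$; for the Cannizzo et al. value $`\alpha =19/16`$ it returns $`n=169/57\simeq 2.96`$, in line with the general formula $`(7+3\alpha )/3\alpha `$.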
The evolutionary phases described in Section 2 can be identified from equation (4). When the first term in equation (4) dominates, $`\mathrm{\Omega }\propto e^{-2c_1t^{1/2}}`$ and spin-down is rapid; this is the propeller phase. When the second term dominates, $`\mathrm{\Omega }\propto t^{-1/2}`$ and the spin period of the star nearly equals the evolving equilibrium period; this is the tracking phase. Note that the spin-down torque is applied by the full $`\dot{m}`$, which can be vastly super-Eddington at early times (for the choice of parameters below), although presumably the X-ray luminosity would never exceed the Eddington limit.
## 4 Representative Results
For the examples shown here we choose an initial disk mass of $`M_d=0.006M_{}`$, an initial dynamical time of $`T=1`$ ms, an initial neutron star spin period of $`P_0=15`$ ms, and $`\alpha =7/6`$. Although arbitrary, these are plausible values; for instance, $`M_d`$ is a very small fraction of the likely fallback mass $`M_{fb}`$, and $`P_0`$ is comparable to the estimated initial spin period of the Crab pulsar. We assume that neutron stars are born with magnetic field strengths in the range $`B_{12}=1`$–$`10`$, distributed uniformly in $`\mathrm{log}(B_{12})`$ (Narayan & Ostriker 1990). Figure 1 shows the spin histories of 10 neutron stars with field strengths spanning this range.
For the assumed values of $`M_d`$, $`T`$, $`P_0`$, and $`\alpha `$, a neutron star with a field strength $`B_{12}=5`$–$`10`$ becomes an AXP (Fig. 1). Early on, for a period ranging from a few days to up to a year, depending on the magnetic field strength, the star accretes at the Eddington rate and is very bright. This phase is not visible since it is lost in the emission from the supernova explosion. As $`\dot{M}`$ decreases with time, $`R_m`$ increases, and the system switches to a much dimmer propeller phase which lasts for about $`10^4`$ yr. In this phase, the star spins too rapidly to accrete much mass; it spins down rapidly, however, as a result of the large decelerating torque due to the material that is flung out by centrifugal forces. Ultimately, at $`t=t_{trans}\simeq 10^4`$ yr, the star achieves quasi-equilibrium with the accretion disk and enters the tracking phase where it becomes bright in X-rays. For the examples shown, the luminosity is initially $`L_x\simeq 10^{36.5}\mathrm{erg}\mathrm{s}^{-1}`$, with $`L_x`$ decreasing as $`t^{-\alpha }`$. The bright phase lasts for only a short time. When $`t=t_{ADAF}\simeq 19000`$ yr (i.e., when $`\dot{m}\simeq 0.01`$), the accretion flow switches from a thin disk to an ADAF. Thereafter, the mass accretion rate onto the neutron star falls rapidly as a result of heavy mass loss in a wind. By $`t\simeq 2\times t_{ADAF}\simeq 10^{4.6}`$ yr, the mass accretion rate is likely to be quite low and the system will be dim once again.
A neutron star with an intermediate magnetic field strength, such as $`B_{12}=4`$ (Fig. 1), enters the efficiently accreting tracking phase at a time $`t_{trans}`$ which is greater than $`2\times t_{ADAF}`$, the cutoff for visibility assumed in this model. Such a “propeller system” would thus remain in the dim propeller phase for almost the whole of its lifetime before directly entering the ADAF phase.
For smaller field strengths, $`B_{12}=1`$–$`3`$, the behavior is quite different (Fig. 1). Like their strong field counterparts, these stars have an initial brief Eddington-bright accretion phase, followed by a propeller phase. However, because of their smaller $`R_m`$ (a consequence of their smaller $`B_{12}`$), the spindown due to accretion is less effective and $`R_{lc}`$ remains small. Therefore, with time, as $`\dot{M}`$ is reduced, a stage is reached when $`R_m`$ becomes greater than $`R_{lc}`$. At this point, accretion ceases completely and the neutron star becomes a radio pulsar. The pulsar phase then proceeds in the normal fashion, lasting for millions of years until the star crosses the “death line” (e.g. Bhattacharya & van den Heuvel 1991). By this time, the fossil disk would most likely have dissipated completely, so it is very unlikely that the dead pulsar would be resurrected in a late phase of accretion.
Figure 2 exhibits the different phases discussed above in a $`B_{12}`$–$`P`$ diagram. Note the clean division between radio pulsars and AXPs. Let $`B_{12,RP}`$ be the critical field below which neutron stars become radio pulsars, and let $`B_{12,AXP}`$ be the critical field above which neutron stars go through a bright AXP phase. Then, for a choice of $`M_d=0.006M_{}`$ and $`P_0=15`$ ms, $`B_{12,RP}=3.9`$, and $`B_{12,AXP}=4.2`$. The “propeller” systems occur for intermediate values of field strength. For the above choice of parameters, note also the narrow range of resulting AXP spin periods.
The precise values of $`B_{12}`$ that result in radio pulsars and AXPs, and the period range over which AXPs are seen, both depend on the parameters of the model, notably $`M_d`$ and $`P_0`$. In the above example, we have chosen values for these parameters that seem best to produce results in agreement with observations. While we defer to a future paper the question of how these conclusions would change were the parameters to be drawn from broad distributions instead of from delta functions, we give below the results for a few other choices of parameter values.
As mentioned above, for $`M_d=0.006M_{}`$ and $`P_0=15`$ ms, $`B_{12,RP}=3.9`$ and $`B_{12,AXP}=4.2`$; a neutron star with the critical field $`B_{12}=4.2`$ would just enter the tracking phase at the cutoff time $`2\times t_{ADAF}=38000`$ yr with a period of $`7.1`$ s; neutron stars with higher fields would enter the tracking phase before this cutoff time, and thus one would have a distribution of periods on both sides of 7.1 s.
For $`M_d=0.006M_{}`$ and $`P_0=20`$ ms, $`B_{12,RP}=0.8`$ and $`B_{12,AXP}=4.1`$; at the cutoff time $`2\times t_{ADAF}=38000`$ yr, a neutron star with $`B_{12}=4.1`$ would have a period of 6.9 s. For $`M_d=0.012M_{}`$ and $`P_0=15`$ ms, $`B_{12,RP}=0.5`$ and $`B_{12,AXP}=2.4`$; at the cutoff time $`2\times t_{ADAF}=70000`$ yr, a neutron star with $`B_{12}=2.4`$ would have a period of 4.3 s.
For $`M_d=0.006M_{}`$ and $`P_0=10`$ ms, $`B_{12,RP}=36.0`$ and $`B_{12,AXP}=4.4`$; since $`B_{12,RP}>B_{12,AXP}`$, any system that does not become a radio pulsar goes through a bright AXP phase, and there are no “propeller systems” that directly enter the ADAF phase without going through a brightly accreting phase; a neutron star with the critical field strength $`B_{12}=36.0`$ enters the tracking phase at $`t_{trans}=266`$ yr with a period of 3.7 s and at the cutoff time $`2\times t_{ADAF}=38000`$ yr has a period of 53.9 s. Similarly, for $`M_d=0.003M_{}`$ and $`P_0=15`$ ms, $`B_{12,RP}=31.1`$ and $`B_{12,AXP}=7.4`$; a neutron star with $`B_{12}=31.1`$ enters the tracking phase at $`t_{trans}=713`$ yr with a period of 7.3 s and at the cutoff time $`2\times t_{ADAF}=21000`$ yr has a period of 47.0 s.
Thus, it is clear that not all choices of parameter values produce a narrow range of periods; it remains to be seen whether this narrow range would persist if the parameters were drawn from distributions centered at or near the “best case” values used in the figures.
## 5 Conclusions
We have described a model in which a newly-born neutron star can experience accretion at a time-varying rate from a disk formed as a result of fallback following a supernova explosion. As shown above, for plausible values of the magnetic field strength, initial spin period, and disk mass ($`B_{12}\gtrsim 5`$, $`P_0=0.015`$ s, $`M_d=0.006M_{}`$), a neutron star will be spun down to periods similar to those of observed anomalous X-ray pulsars, $`P\sim 10`$ seconds, on timescales characteristic of the estimated ages of AXPs, $`10,000`$–$`40,000`$ years. These conditions are met if the object is seen as an AXP during the tracking phase of evolution discussed previously up to roughly twice the age at which the accretion flow evolves into an ADAF, $`t_{ADAF}`$. Early during this period, the X-ray luminosity of the source is high compared with observed AXPs; however, the luminosity is expected to fall quickly after about $`t_{ADAF}`$, and so it could mean, for example, that the known AXPs are seen some time after $`t_{ADAF}`$. For the above parameter values, this model thus matches the properties of observed AXPs. Different choices of parameters (for example, lowering the field strength to $`B_{12}=3`$) yield a radio pulsar for an extended period of time. Yet other values of the surface magnetic field strength (for example, $`B_{12}=4`$) produce systems that remain in the X-ray faint propeller phase right until the time they make the transition into the dim ADAF phase. In this paper, we have merely pointed out the existence of a region in parameter space that produces objects with properties resembling AXPs. Whether or not this model is capable of accounting for the observed relative birthrates of radio pulsars and AXPs will require a more thorough analysis.
Our model has a number of advantages over previous theories for AXPs as accreting sources. The accretion flow is time-varying, and thereby provides a natural explanation for the increasing periods of AXPs. Accretion from a fossil disk evades limits on companion masses associated with AXPs from timing measurements. Thompson et al. (1999) have recently pointed out that large recoil velocities severely limit the radial extents of disks that would remain bound to a neutron star. This is not a difficulty for our model, as the disk is tightly bound initially, before spreading radially as it enters a period of self-similar evolution.
The other leading model for AXPs invokes ultramagnetized neutron stars ($`B\sim 10^{14}`$–$`10^{15}`$ G) to account for their timing behavior (e.g. Thompson & Duncan 1996) and their X-ray emission (e.g. Heyl & Hernquist 1997b, Heyl & Kulkarni 1998). An advantage of our scenario is that it relies on a standard population of isolated neutron stars having magnetic fields similar to those inferred for radio pulsars and binary X-ray pulsars and does not require the existence of a separate class of neutron stars.
How can we discriminate between these two models? By chance, the steady spindown caused by accretion from a fossil disk is indistinguishable from that describing the evolution of ordinary radio pulsars. Thus, it appears unlikely that this aspect of AXPs will unambiguously distinguish these two models. However, these theories differ significantly in detail, and many observational tests can, in principle, separate them.
Torque fluctuations are commonly seen in X-ray binaries (e.g. Bildsten et al. 1997). Whether or not the torque fluctuations seen in AXPs can be explained by our model (or, indeed any of the proposed accretion scenarios) remains to be seen, but it is at least plausible that an accretion flow can produce torque noise. The origin of this behavior according to the ultramagnetized neutron star model is uncertain. Proposals include fluctuations induced by glitches (Heyl & Hernquist 1999) or “radiative precession” (Melatos 1999).
The narrow distribution of AXP periods (a factor $`\sim 2`$) is puzzling, and lacks a definitive explanation. In our model, neutron stars accreting from a debris disk would be visible as AXPs for only a relatively brief period of time, and hence would be seen in similar evolutionary states. Whether or not this is sufficient to reproduce the observed population of AXPs will depend principally on the distribution of neutron star-disk systems in the parameter space defined by magnetic field strength ($`B_{12}`$), initial spin period ($`P_0`$), and disk mass ($`M_d`$), and how or if these parameters are correlated. The other two relevant parameters are $`\alpha `$ and $`T`$, which define the mass accretion rate in equation (1); our long-term results are quite insensitive to the precise value of $`T`$, and the value of $`\alpha `$ is rather closely fixed by the results of Cannizzo et al. (1990).
An interesting question concerns the relationship of AXPs to SGRs. These sources appear similar in most respects, except that SGRs are subject to occasional, energetic outbursts. Previously, it has been suggested that bursts of gamma-ray emission could be produced by interchange instabilities in the crusts of slowly accreting neutron stars (Blaes et al. 1990) or by solid bodies impacting neutron stars (Harwit & Salpeter 1973; Tremaine & Zytkow 1986). Conceivably, these processes could play a role in our model, but at present we have no specific, testable proposal that would distinguish AXPs from SGRs in this context.
Certainly, the most unambiguous discriminant between the accretion and ultramagnetized neutron star models would be a direct measurement of the magnetic field strength (i.e., not derived from timing data). We predict that AXPs should have magnetic fields $`B\sim 10^{13}`$ G, above average compared with those of radio pulsars, but lying within the same observed range. According to the magnetar hypothesis, however, AXPs have much larger magnetic fields, $`B\sim 10^{14}`$–$`10^{15}`$ G. Like some binary X-ray pulsars, it is possible that AXPs will contain information in their spectra that would provide a determination of the magnetic field strength.
Existing AXP spectra are well-fitted by thermal profiles with power-law tails at high energies (e.g. Corbet et al. 1995, Oosterbroek et al. 1998, Israel et al. 1999), but lack the resolution to exhibit discrete spectral features. Power-law tails at high energy are characteristic of accreting neutron stars at low luminosities (e.g. Asai et al. 1996, Zhang, Yu & Zhang 1998, Campana et al. 1998), though the high energy tails are typically harder than in the AXPs. In the ultramagnetized neutron star model, the tails presumably arise as a result of departures from blackbody emission (e.g. Heyl & Hernquist 1998b). At present, therefore, the spectral information is ambiguous. However, the recently deployed Chandra X-ray satellite and the forthcoming XMM mission promise to revolutionize the field of high-precision X-ray spectroscopy, and we are thus optimistic that the true nature of AXPs will be revealed in the near future.
Several arguments against an accretion model of AXPs have been presented in the literature. Some of these arguments apply to our model as well. Tight limits on optical and especially infrared counterparts of AXPs (e.g. Coe & Pightling 1998) severely constrain the amount of emission that can come from the disk. For reasonable parameters, the disks in our model expand to about an AU during the AXP phase, and it might be difficult to make the disk as dim as the observations require. Kaspi, Chakrabarty & Steinberger (1999) have shown that the AXP 1E 2259+586 has an extraordinarily low level of timing noise. This is hard to explain in an accretion-based model because the disk, by virtue of its turbulent nature, is likely to produce a noisy torque. Note, however, that the fossil disks are likely to differ from conventional disks in accreting binary systems by virtue of, e.g., their composition, so the implications of the observations for our model are uncertain. Li (1999) has argued that AXPs cannot respond quickly enough to a changing $`\dot{M}`$ to track the equilibrium spin period of a time-dependent disk. This is clearly not a problem for our model. As Fig. 1 shows, the systems that become AXPs ($`B_{12}\gtrsim 5`$) spin down rapidly enough in the propeller phase to be able to attain periods close to their (evolving) equilibrium spin periods in $`10^3`$–$`10^4`$ years.
We would like to thank Deepto Chakrabarty, Jeremy Heyl, and Vicki Kaspi for useful discussion. This work was supported in part by NSF grant AST 9820686. |
## Abstract
We solve the Gross-Pitaevskii equation to study energy transfer from an oscillating ‘object’ to a trapped Bose-Einstein condensate. Two regimes are found: for object velocities below a critical value, energy is transferred by excitation of phonons at the motion extrema; while above the critical velocity, energy transfer is via vortex formation. The second regime corresponds to significantly enhanced heating, in agreement with a recent experiment.
PACS numbers: 03.75.Fi, 67.40.Vs, 67.57.De
The existence of a critical velocity for dissipation is central to the issue of superfluidity in quantum fluids. The concept was first introduced by Landau in his famous criterion , where elementary excitations are produced above a velocity $`v_L`$. In liquid $`{}_{}{}^{4}\mathrm{He}`$ this process refers to the excitation of rotons, with $`v_L\simeq 58\mathrm{m}\mathrm{s}^{-1}`$. However, much smaller critical values are observed experimentally, which prompted Feynman to propose that quantized vortices may be responsible .
Vortex nucleation in superfluid $`{}_{}{}^{4}\mathrm{He}`$ is difficult to explain quantitatively. Strong interactions within the liquid, plus thermal and quantum fluctuations, impede formulation of a satisfactory microscopic theory. In contrast, Bose-Einstein condensation (BEC) in trapped alkali gases provides a relatively simple system for exploring superfluidity. Weakly-interacting condensates can be produced with a negligibly small non-condensed component. This allows an accurate description by a nonlinear Schrödinger equation, often known as the Gross-Pitaevskii (GP) equation. The system also offers excellent control over the temperature, number of atoms and interaction strength, as well as allowing manipulation of the condensate using magnetic and optical forces .
Recent experiments have produced vortices by coherent excitation and cooling of a rotating cloud . The existence of vortices was also inferred by Raman et al. , where a condensate was probed by an oscillating laser beam blue-detuned far from atomic resonance. The optical dipole force expels atoms from the region of highest intensity, resulting in a repulsive potential. Although vortices were not directly imaged, significant heating of the cloud was observed only above a critical velocity, indicating a transition to a dissipative regime. This heating was found to depend upon the existence of a condensate, indicating that it must be due to the production of elementary excitations, that subsequently populate the non-condensed fraction.
Critical velocities for vortex formation in superflow past an obstacle have been studied numerically by solution of the GP equation in a homogeneous condensate . Simulations have also confirmed that vortices are nucleated when a laser beam is translated inside a trapped Bose condensed gas . In this paper, we attempt to clarify the role of vortices in the MIT experiment by presenting 2D and 3D simulations of an oscillating repulsive potential in a condensate. The motion transfers energy to the condensate, and we observe that the transfer rate increases significantly above the critical velocity for vortex formation.
Our simulations employ the GP equation for the condensate wavefunction $`\mathrm{\Psi }(𝒓\text{,t})`$ in a harmonic trap $`V_{\mathrm{trap}}(𝒓)=\frac{m}{2}\sum _j\omega _j^2j^2`$; $`j=x,y,z`$. For convenience, we scale in harmonic oscillator units (h.o.u.), where the units of length, time, and energy are $`(\hbar /2m\omega _x)^{1/2}`$, $`\omega _x^{-1}`$, and $`\hbar \omega _x`$, respectively. The scaled GP equation is then
$$i\partial _t\mathrm{\Psi }=(-\nabla ^2+V+C|\mathrm{\Psi }|^2)\mathrm{\Psi },$$
(1)
where $`V`$ represents a time-dependent ‘object’ potential superimposed upon a stationary trap: $`V=\frac{1}{4}(x^2+ϵy^2+\eta z^2)+V_{\mathrm{ob}}(𝒓\text{,t})`$. The atomic interactions are parameterized by $`C=(NU_0/\hbar \omega _x)(2m\omega _x/\hbar )^{\gamma /2}`$: $`N`$ atoms of mass $`m`$ interact with an $`s`$-wave scattering length $`a`$, such that $`U_0=4\pi \hbar ^2a/m`$. The number of dimensions is $`\gamma `$. For most of the simulations here, $`\gamma =2`$, corresponding to the limit $`\eta \rightarrow 0`$. In this case, $`N`$ represents the number of atoms per unit length along $`z`$. For the general 3D situation, a moving laser beam focused to a waist $`\stackrel{~}{w}_0`$ (in h.o.u.) at ($`0,y^{}(t),0`$), is simulated using
$$V_{\mathrm{ob}}(𝒓\text{,t})=\frac{U_{\mathrm{ob}}}{\sigma }\mathrm{exp}\left[-\frac{2(x^2+(y-y^{}(t))^2)}{\sigma \stackrel{~}{w}_0^2}\right],$$
(2)
where $`\sigma =1+(z/z_0)^2`$. The Rayleigh range is $`z_0=\pi \stackrel{~}{w}_0^2/\lambda `$, where $`\lambda `$ is the laser wavelength .
Our numerical methods are discussed elsewhere . Briefly, initial states are found by propagating (1) in imaginary time with $`V_{\mathrm{ob}}(𝒓\text{,0})`$, using a spectral method. Then, real-time simulations are performed subject to motion of the object potential. To recover the essential physics behind the MIT experiment , we describe the oscillatory motion by $`y^{}(T)=\alpha -vT`$ ($`T<1/2f`$) and $`y^{}(T)=vT-3\alpha `$ ($`1/2f<T<1/f`$), where $`T=t-s/f`$ and $`s`$ is the number of completed oscillations. The velocity between the motion extrema is constant, $`𝒗=\pm 4\alpha f\widehat{𝒚}`$, where $`\alpha `$ is the amplitude and $`f`$ is the frequency. The condensate is anisotropic, with its long axis along $`y`$ ($`ϵ<1`$). As a consequence, for small $`\alpha `$, the beam moves through regions of near-constant density. Initially, the object creates a density minimum at $`y=\alpha `$, which follows closely behind the moving object. For $`v>v_c`$, where $`v_c<c_s`$ and $`c_s=\sqrt{2C|\mathrm{\Psi }|^2}`$ is the sound velocity, the density inside the beam evolves to zero. This is accompanied by a $`\pi `$ phase slip , at which point the density minimum splits into a pair of vortex lines of equal but opposite circulation . The vortex pair separates, and the process begins again.
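For readers unfamiliar with the spectral propagation referred to above, a one-dimensional split-step sketch of equation (1) is given below. This is our own illustrative code, not the production scheme of the paper: a hand-rolled DFT keeps it dependency-free, and the grid and time step are arbitrary. Each substep is unitary, so the norm is conserved to rounding error.

```python
import cmath, math

def dft(x, sign):
    """Naive discrete Fourier transform; sign=-1 forward, +1 for the inverse kernel."""
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def split_step(psi, V, C, dx, dt, steps):
    """Propagate i dPsi/dt = (-d^2/dx^2 + V + C|Psi|^2) Psi (h.o.u.) on a periodic grid."""
    N = len(psi)
    k = [2.0 * math.pi * (n if n < N // 2 else n - N) / (N * dx) for n in range(N)]
    for _ in range(steps):
        # half step of the potential + nonlinear term
        psi = [p * cmath.exp(-0.5j * dt * (V[j] + C * abs(p)**2))
               for j, p in enumerate(psi)]
        # full kinetic step in Fourier space (kinetic energy k^2 in these units)
        ph = dft(psi, -1)
        ph = [ph[j] * cmath.exp(-1j * dt * k[j]**2) / N for j in range(N)]
        psi = dft(ph, +1)
        # second half step of the potential + nonlinear term
        psi = [p * cmath.exp(-0.5j * dt * (V[j] + C * abs(p)**2))
               for j, p in enumerate(psi)]
    return psi
```

In higher dimensions the same operator splitting applies dimension by dimension, and imaginary-time propagation of the identical scheme (with renormalization each step) yields the initial states.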
The creation of phonons or vortices increases the energy of the condensate, which was calculated numerically using the functional $`E=\int (|\nabla \mathrm{\Psi }|^2+V|\mathrm{\Psi }|^2+\frac{C}{2}|\mathrm{\Psi }|^4)d^3r`$. The time-independent ground state of the wavefunction represents the minimum of this functional. The energy is related to the drag force on the object $`𝑭_{\mathrm{ob}}`$ by $`dE/dt=𝑭_{\mathrm{ob}}\cdot 𝒗`$. The drag can be calculated independently over the whole condensate using $`𝑭_{\mathrm{ob}}=-\int |\mathrm{\Psi }|^2\nabla V_{\mathrm{ob}}d^3r`$, allowing a numerical check. Superfluidity corresponds to the situation where $`E`$ remains constant when $`V_{\mathrm{ob}}`$ is time-dependent; i.e. when there is no drag on the object.
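The energy functional discretizes straightforwardly. In the sketch below (ours; centred differences for the gradient, periodic wrap at the box edges), the $`C=0`$ harmonic ground state $`e^{-x^2/4}`$, whose energy is 1/2 in these units, provides a check on the quadrature:

```python
import math

def gp_energy(psi, V, C, dx):
    """E = sum_j (|dPsi/dx|^2 + V|Psi|^2 + (C/2)|Psi|^4) dx, centred differences."""
    N = len(psi)
    E = 0.0
    for j in range(N):
        dpsi = (psi[(j + 1) % N] - psi[j - 1]) / (2.0 * dx)
        E += (abs(dpsi)**2 + V[j] * abs(psi[j])**2 + 0.5 * C * abs(psi[j])**4) * dx
    return E

# check: for C = 0 the normalized harmonic ground state exp(-x^2/4) has E = 1/2
dx = 0.05
xs = [-10.0 + dx * j for j in range(400)]
psi = [(2.0 * math.pi)**(-0.25) * math.exp(-x * x / 4.0) for x in xs]
V = [0.25 * x * x for x in xs]
E0 = gp_energy(psi, V, 0.0, dx)
```

Turning on the interaction ($`C>0`$) raises $`E`$ above this value, as expected for a repulsive nonlinearity.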
Fig. 1 shows the energy and drag as a function of time, as calculated for two different frequencies in 2D simulations. At low frequency, the energy transfer is relatively small and characterized by ‘jumps’ at the motion extrema, whereas at higher frequency the energy transfer is two orders of magnitude larger and more continuous. Further insight can be gained by considering the drag. At low $`f`$, there is little drag except at the motion extrema (Fig. 1(c)), while at high $`f`$ appreciable drag is observed at all times (Fig. 1(d)).
To measure the average rate of energy transfer, a linear regression analysis is performed on the energy-time data. The gradients are plotted against $`v`$ in Fig. 2. It can be seen that the curves are characterized by two different regimes. Small energy transfer at low $`v`$, gives way to enhanced heating above the critical velocity, $`v_c`$. At high $`v`$, the three plots follow a single linear curve.
Energy transfer below $`v_c`$ arises due to emission of sound waves at the motion extrema. This process (henceforth referred to as phonon heating) is found to approximately scale with $`v^3`$, indicating that at each extremum (which are reached at a rate $`\propto v`$) a sound wave with energy $`\propto v^2`$ is emitted. Note phonon emission by the object is not inconsistent with Landau’s criterion. In particular, the Landau argument relies on use of Galilean invariance, which breaks down when the condensate density varies, or when the velocity changes abruptly.
For the parameters we have explored, phonon heating is found to be relatively small compared to the energy transfer from vortex formation above $`v_c`$. The heating rate in the latter regime is found to scale approximately linearly with $`v`$. This implies that the drag force is constant. Indeed, we observe that the drag saturates as $`v`$ increases. This behavior contrasts with that of steady flow, where the drag $`\propto v^k`$ (where $`k\simeq 1`$ at $`v`$ close to $`v_c`$, and $`k\simeq 2`$ for $`v>c_s`$) . The difference arises from the oscillatory motion: as the object travels back through its own wake, a large pressure imbalance across the object does not develop.
Fig. 3 plots the mean energy transferred against the number of vortex pairs (counted in the simulated wavefunction). The energy transfer per vortex pair is approximately constant, leading to an estimate of the pair energy, which is plotted as a function of nonlinearity in Fig. 3 (inset). The energy of a vortex pair in an homogeneous condensate with number density $`n`$, is given by
$$E_{\mathrm{pair}}=\frac{2\pi n\hbar ^2}{m}\mathrm{ln}\left(\frac{d}{\xi }\right),$$
(3)
where $`\xi `$ is the healing length and $`d`$ is the distance between the vortices. Equation (3) is valid for the inhomogeneous condensate when $`\xi \ll d\ll R`$, where $`R`$ is the radial extent of the condensate. Eq. (3) with $`d=2w_0`$ is plotted in Fig. 3 (inset), and is found to agree with the numerical data. Note that the vortex pair separates immediately after formation, when the pair still resides within the density minimum created by the object. The pair also moves in the direction of the object motion: however, it is slower, and is eventually left behind. At this point, it has an energy approximately equal to $`E_{\mathrm{pair}}`$, and the formation process is complete. The heating rate can be expressed as $`dE/dt=E_{\mathrm{pair}}f_s`$ , where $`f_s`$ is the shedding frequency, which is found to be proportional to $`v`$. This accounts for the linear dependence of the energy transfer rate.
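As a numerical illustration of Eq. (3) (the constants and condensate parameters below are assumptions for the sake of the example, not values from the simulations):

```python
import numpy as np

HBAR = 1.054571817e-34          # J s
M = 87 * 1.66053906660e-27      # mass of 87Rb in kg (assumed species)

def pair_energy(n, d, xi):
    """Eq. (3): energy of a vortex pair separated by d in a condensate of
    2D areal number density n; valid only for xi << d << R."""
    return (2.0 * np.pi * n * HBAR**2 / M) * np.log(d / xi)

# illustrative numbers (assumptions): n = 1e13 m^-2, d = 2 um, xi = 0.3 um
E_pair = pair_energy(1e13, 2.0e-6, 0.3e-6)
```

The logarithmic dependence on $`d/\xi `$ makes the pair energy only weakly sensitive to the exact separation at formation.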
The subsequent vortex dynamics involve an interplay between velocity fields induced by other vortices, and effects arising from the condensate inhomogeneity. In the absence of the object, an isolated pair follows a trajectory similar in character to that of a vortex ring , culminating in self-annihilation. However, the object moves back through its wake, interacting with the original pairs and creating more vortices. The circulation of a pair depends upon the direction of the object motion when it is created. So, vortex pairs of opposite circulation are formed and interact when sufficiently close. This leads to situations where vortices annihilate or move towards the edge. The number of vortices remaining within the condensate bulk is found to reach an equilibrium value.
The critical velocity for vortex formation, $`v_c`$, as a function of potential height and nonlinear coefficient is shown in Fig. 4. The critical velocity is not as well defined as in the homogeneous case for a number of reasons. First, a density inhomogeneity along the direction of motion leads to a variation in $`c_s`$, and therefore $`v_c`$. However, this is less than $`3\%`$ in the simulations considered here. The oscillatory nature of the object motion is important. The time taken for a vortex pair to form diverges to infinity as $`v`$ approaches $`v_c`$ from above. So, the measured value of $`v_c`$ increases from its true value as $`\alpha `$ decreases. In addition, the object travels through its own low-density wake, where $`c_s`$ is lower. Vortices can therefore be formed after the first half-oscillation, when $`v`$ is slightly below $`v_c`$. Nevertheless, we can obtain a good estimate for $`v_c`$ by choosing intermediate values of the amplitude (e.g. $`\alpha =4`$) and considering only vortex formation during the first half-cycle.
Fig. 4 demonstrates that $`v_c`$ decreases as a function of increasing object potential height, $`U_{\mathrm{ob}}`$, allowing an experimental diagnostic for vortex formation at varying beam intensities. This behavior agrees with simulations of 1D soliton creation and vortex ring formation in 3D . We have also studied the case of $`U_{\mathrm{ob}}<0`$, which corresponds to a red-detuned laser. Atoms are attracted to the potential minimum, creating a density peak which moves with the beam. Vortex pairs are created from a density minimum which develops ahead of the beam. Fig. 4 (inset) shows $`v_c`$ as a function of $`C`$. The critical velocity tends to a constant value as $`C`$ increases.
Simulations in 3D were performed, and the mean energy transfer rate as a function of velocity is presented in Fig. 5. Similar behaviour to 2D is observed, with smaller critical velocities: a result of the beam intersecting the condensate edge where the speed of sound is lower. Accordingly, vortex lines first appear in these regions and penetrate into the center. This conclusion agrees with the experiment , where a relatively low critical velocity ($`v_c\simeq 0.26c_s`$) was measured. The dependence of $`v_c`$ on $`U_{\mathrm{ob}}`$ and $`C`$ was found to be similar to 2D, where e.g. $`v_c\simeq 0.29c_s`$ for $`C=4000`$ and $`U_{\mathrm{ob}}=35`$. Enhanced heating is also observed for $`v>c_s`$, due to phonon emission between the extrema.
In this paper, we have studied the role of vortex formation in the breakdown of superfluidity by an oscillating object in a trapped Bose-Einstein condensate. We find that at low object velocities, energy is transferred by phonon emission at the motion extrema, while a much larger energy is transferred above the critical velocity for vortex formation. To generalize these conclusions to realistic experimental situations, the model should include the non-condensed thermal cloud. Energy would then be transferred from the condensed to the thermal cloud by phonon damping , or vortex decay .
We acknowledge financial support from the EPSRC.
# Synchrotron Radiation as the Source of GRB Spectra, Part I: Theory.
## Introduction
It has been suggested (e.g. katz94 ) that synchrotron emission is a likely source of radiation from GRBs, and it was later shown tav96 that an optically thin synchrotron spectrum is a good fit to some bursts. However, some features seen in the low energy portion of GRB spectra cannot be explained by the simple synchrotron model (SSM) - optically thin synchrotron emission from a power law distribution of relativistic electrons with a minimum energy cutoff. This model predicts that the asymptotic value of the low energy photon index, $`\alpha `$, should be a constant value of $`-2/3`$. However, the data show an $`\alpha `$ distribution with a mean of about $`-1.1`$ and a standard deviation of about $`1`$. Furthermore, there is a significant fraction of bursts with $`\alpha >-2/3`$ \- above the so-called “line of death” pree98 . In addition, the spectral evolution of $`\alpha `$ and of the peak of $`\nu F_\nu `$, $`E_p`$, is inconsistent with an instantaneous optically thin synchrotron spectrum in an external shock model crid97 . Consequently, other models - usually involving inverse Compton scattering brain94 , lian96 \- were invoked to explain these “anomalous” spectral behaviors.
In this paper, we discuss how GRB spectra can be accommodated by synchrotron emission, including those spectra not explained by the SSM. We discuss the various spectral shapes from a general form for synchrotron emission, allowing for the possibility of self-absorption and a smooth cutoff to the electron energy distribution, and show that these models fit GRB spectra well. We show there is a correlation between $`\alpha `$ and $`E_p`$ as determined by fits using the Band band93 (and similar) spectral forms. Finally, we briefly discuss the variety of spectral evolution behaviors seen in GRBs in the context of synchrotron emission. In Part II (Lloyd et al., these proceedings, PartII ), we compare our theoretical predictions with the data and show how synchrotron emission can explain the spectral behavior of GRBs.
## Synchrotron Emission
The general form for an instantaneous synchrotron spectrum for a power law distribution in the electron energy with a sharp cutoff, $`N(E)=N_oE^{-p}`$, $`E>E_{min}`$, is given by pach
$$F_\nu =A\nu ^{5/2}\left[\frac{I_1}{I_2}\right]\times \left[1-\mathrm{exp}\left(-Q\nu ^{-(p+4)/2}I_2\right)\right]$$
(1)
$$I_1=\int _0^{\nu /\nu _{min}}dx\,x^{(p-1)/2}\int _x^{\infty }K_{5/3}(z)\,dz,\qquad I_2=\int _0^{\nu /\nu _{min}}dx\,x^{p/2}\int _x^{\infty }K_{5/3}(z)\,dz$$
(2)
$`A`$ is the normalization and contains factors involving the perpendicular component of the magnetic field, $`B_{\perp }`$, the bulk Lorentz factor, $`\mathrm{\Gamma }`$, and the number of electrons, $`N_o`$. The frequency $`\nu _{min}=3e\mathrm{\Gamma }B_{\perp }E_{min}^2/(4\pi m^3c^2)`$. The parameter $`Q`$ represents the optical depth of the medium (for example, if $`\nu \ll \nu _{min}`$, the photon spectrum will be absorbed at the frequency $`\nu _{abs}\propto Q^{2/(p+4)}`$). The high energy asymptotic behavior is the usual $`F_\nu \propto \nu ^{-(p-1)/2}`$. The low energy asymptotic forms of the function depend on the relative values of $`\nu _{min}`$ and $`\nu _{abs}`$: $`F_\nu \propto \nu ^{5/2}`$ for $`\nu _{min}<\nu \ll \nu _{abs}`$, $`F_\nu \propto \nu ^2`$ for $`\nu \ll \mathrm{min}[\nu _{abs},\nu _{min}]`$, and $`F_\nu \propto \nu ^{1/3}`$ for $`\nu _{abs}<\nu <\nu _{min}`$.
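As a concrete illustration (not part of the original analysis), Eqs. (1)-(2) can be evaluated numerically; the sketch below tabulates the inner Bessel-function integral on a logarithmic grid and applies the trapezoid rule, with illustrative values for $`p`$, $`Q`$ and the normalization:

```python
import numpy as np
from scipy.special import kv

# tabulate G(x) = integral_x^inf K_{5/3}(z) dz once, on a log grid
zg = np.logspace(-6, 2, 4000)
kz = kv(5.0 / 3.0, zg)
seg = 0.5 * (kz[1:] + kz[:-1]) * np.diff(zg)
Gt = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

def _trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def F_nu(nu, nu_min=1.0, p=2.5, Q=1e-6, A=1.0):
    """Eqs. (1)-(2): synchrotron spectrum with self-absorption parameter Q
    (frequencies in units of nu_min; normalization arbitrary)."""
    m = zg <= nu / nu_min
    x, gx = zg[m], Gt[m]
    I1 = _trap(x ** ((p - 1.0) / 2.0) * gx, x)   # emission integral
    I2 = _trap(x ** (p / 2.0) * gx, x)           # absorption integral
    tau = Q * nu ** (-(p + 4.0) / 2.0) * I2      # optical depth
    return A * nu ** 2.5 * (I1 / I2) * (1.0 - np.exp(-tau))
```

For small $`Q`$ the computed spectrum recovers the usual optically thin high-energy slope quoted above.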
Note that we do not address the case of cooling electrons, which will have the effect of increasing the electron power law distribution index $`p`$ by 1, $`pp+1`$ at some characteristic cooling energy.
## The Electron Distribution
In most models of synchrotron emission, the electron distribution is modeled by a power law with a sharp cutoff at some minimum energy (as done in the previous section). This is not a realistic (and may even be an unstable) distribution. We characterize the electron distribution by the following equation:
$$N(E)=N_o\frac{(E/E_{\ast })^q}{1+(E/E_{\ast })^{p+q}}$$
(3)
where $`E_{\ast }`$ is some critical energy that characterizes where the electron distribution changes. For $`E\gg E_{\ast }`$, $`N(E)\propto E^{-p}`$, while for $`E\ll E_{\ast }`$, $`N(E)\propto E^q`$. Hence, $`q`$ characterizes the “smoothness” of the cutoff. This has a significant impact on the low energy portion of the synchrotron spectrum. An optically thin synchrotron spectrum takes the form:
$$F_\nu =A(\nu /\nu _{\ast })^{(q+1)/2}\int _0^{\infty }dx\,\frac{x^{-(q+1)/2}}{1+(\nu /\nu _{\ast })^{(p+q)/2}\,x^{-(p+q)/2}}\int _x^{\infty }K_{5/3}(z)\,dz$$
(4)
where $`\nu _{\ast }=c_1B_{\perp }E_{\ast }^2`$. Depending on the smoothness of the cutoff, the spectrum of the emitted photons can change significantly. The peak of the spectrum is shifted to lower energies as the cutoff becomes smoother ($`q`$ smaller), and the width of the spectrum increases, which implies that it takes longer for the spectrum to reach its low energy asymptotic value.
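To illustrate the effect of the smoothness parameter (again a numerical sketch, not from the original work), the electron distribution of Eq. (3) can be convolved directly with the single-electron synchrotron kernel $`F(x)=x\int _x^{\infty }K_{5/3}(z)dz`$; frequencies are in units of $`\nu _{\ast }`$ and the normalization is arbitrary:

```python
import numpy as np
from scipy.special import kv

# tail integral G(x) = integral_x^inf K_{5/3}(z) dz, tabulated on a log grid
zg = np.logspace(-6, 2, 3000)
kz = kv(5.0 / 3.0, zg)
seg = 0.5 * (kz[1:] + kz[:-1]) * np.diff(zg)
Gt = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

def spectrum(nu, p=2.5, q=1.0):
    """F_nu from Eq. (3) electrons convolved with the kernel F(x) = x G(x),
    integrating over x = nu/nu_E with emitting energy E/E_* = sqrt(nu/x)."""
    u = np.sqrt(nu / zg)                    # electron energy E/E_* emitting at x
    weight = u**q / (1.0 + u**(p + q))      # Eq. (3) electron distribution
    y = 0.5 * np.sqrt(nu) * zg**-0.5 * weight * Gt   # |dE/dx| * N(E) * x G(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(zg)))
```

The low- and high-frequency slopes tend to the $`\nu ^{1/3}`$ and $`\nu ^{-(p-1)/2}`$ asymptotes, with a transition whose width depends on $`q`$.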
## Spectral Fits
Synchrotron emission models fit the GRB data well. We fit 11 bursts with 256 channel energy resolution to the synchrotron spectral forms described above. Five of these bursts have a low energy photon index (as determined by fitting a Band spectrum) above the “line of death” for optically thin synchrotron emission; that is, $`\alpha >-2/3`$. In all of these cases, we found that including an absorption parameter will accommodate the hardness of the low energy index, and provided the best fit. Figure 1 shows the spectra for 2 GRBs in our sample (burst triggers 3893 and 3253). A self-absorbed spectrum in which the absorption frequency just enters the BATSE window best fits 3893, while an optically thin spectrum is the best fit to 3253. For a more complete discussion of the spectral fits, see inprep .
## A Range of Spectra
Figure 2a shows the many types of low energy spectral behavior one can obtain from the above synchrotron models, normalized to the peak of $`F_\nu `$ (at $`500`$ keV). The vertical lines mark the approximate width of the BATSE spectral window.
Now, the $`\alpha `$ distribution will depend largely on how quickly the spectrum reaches its low energy asymptote or how well spectral fits can determine the asymptote. As $`E_p`$ moves to lower and lower energies, we get less and less of the low energy portion of the spectrum; in this case, our spectral fits probably will not be able to determine the asymptote and will measure a lower (softer) value of $`\alpha `$. [Preece et al. pree98 pointed out this effect and attempt to minimize it by defining an effective $`\alpha `$, which is the slope of the spectrum at 25 keV (the edge of the BATSE window). However, a correlation between $`\alpha _{eff}`$ and $`E_p`$ will still exist if the asymptote is not reached well before 25 keV.] This difficulty becomes more severe the smoother the cutoff to the electron distribution, because the spectrum takes longer to reach its asymptote. To test this, we produce sets of data from optically thin synchrotron models with different parameters ($`\nu _{min}`$, $`q`$, etc.), all of which have a low energy asymptote of $`-2/3`$. We fit a Band spectrum to these data (to be conservative, we extended the range of BATSE’s sensitivity down to $`10`$ keV). Figure 2b shows the value of the asymptote as determined by the Band spectrum, as a function of $`E_p`$, for different degrees of the smoothness of the electron energy distribution cutoff. Not surprisingly, there is a strong correlation between the value of $`E_p`$ and the value of the “asymptote”, $`\alpha `$, as determined by a Band fit to the data. We can use this relationship and knowledge of the $`E_p`$ distribution to determine the resultant distribution for $`\alpha `$. This is discussed in Part II PartII .
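The fitting-window effect can be sketched numerically: measure the local photon index at 25 keV of an optically thin, sharp-cutoff synchrotron spectrum for two placements of $`\nu _{min}`$ (a stand-in for $`E_p`$; all numbers illustrative). The lower $`\nu _{min}`$ sits, the softer the slope recovered at 25 keV, even though both spectra share the same true asymptote:

```python
import numpy as np
from scipy.special import kv

# G(x) = integral_x^inf K_{5/3}(z) dz on a log grid
zg = np.logspace(-6, 2, 3000)
kz = kv(5.0 / 3.0, zg)
seg = 0.5 * (kz[1:] + kz[:-1]) * np.diff(zg)
Gt = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

def F_thin(nu, nu_min, p=2.5):
    """Optically thin sharp-cutoff flux, F_nu ~ nu^{(1-p)/2} I_1(nu/nu_min)."""
    m = zg <= nu / nu_min
    x, gx = zg[m], Gt[m]
    y = x ** ((p - 1.0) / 2.0) * gx
    I1 = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return nu ** ((1.0 - p) / 2.0) * I1

def alpha_eff(nu, nu_min, p=2.5, step=1.1):
    """Local photon index d ln(F_nu/nu) / d ln nu at frequency nu."""
    lo = F_thin(nu, nu_min, p) / nu
    hi = F_thin(nu * step, nu_min, p) / (nu * step)
    return float(np.log(hi / lo) / np.log(step))
```

Comparing `alpha_eff(25, 50)` with `alpha_eff(25, 300)` (frequencies in keV) shows the measured index hardening toward the asymptote as $`\nu _{min}`$ moves up and away from the window edge.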
## Spectral Evolution
The behavior of the spectral characteristics with time throughout the GRB can give us information about the environment of the emission region and conceivably constrain the emission mechanism. Given the apparent correlation between $`\alpha `$ and $`E_p`$ induced by the fitting procedure, we expect evolution of $`\alpha `$ (obtained from such fits) to mimic the behavior of $`E_p`$ in time during a pulse or spike. Note, however, if each pulse in the time profile is a separate emission episode (as in an internal shock scenario), parameters such as $`q`$ and the optical depth can vary from shock to shock; this can create a change in $`\alpha `$ from pulse to pulse, independent of $`E_p`$.
## Conclusions
Synchrotron emission can produce a variety of GRB spectral shapes, particularly when one allows for a smooth cutoff to the electron distribution and includes effects of self-absorption. In addition, we expect a relationship between $`\alpha `$ and $`E_p`$, as a consequence of the fitting procedure; this will have implications for the observed distributions and temporal evolution of spectral parameters. We test this model against the data in Part II PartII .
## 1 Relevance of Planetary Nebulae
Planetary Nebulae represent the fate of most stars of masses in the range 1–10 M. The effects and consequences of their evolution are of great importance in stellar population studies, since PNe represent the progeny of the most common stars in galaxies.
PNe have been observed for centuries in the Galaxy, and more recently in other galaxies such as the Magellanic Clouds, and between galaxies. They have been identified as far away as the Virgo cluster. Planetary Nebulae studied in different galaxies are of exceptional interest to probe stellar populations in a variety of environments. These studies would also allow the direct comparison of similar stellar populations in the different host galaxies (see also Maciel, this volume).
The ejection of the stellar envelopes of Asymptotic Giant Branch (AGB) stars leads to PN formation, and also marks the beginning of the cosmic recycling process. In fact, PNe are among the most important cosmic sources of carbon and other elements whose abundance is modified in the interiors of low- and intermediate-mass stars. The carbon, oxygen, nitrogen, and helium abundances of PNe are excellent tracers of the evolutionary paths of their progenitors, and of their initial mass (Renzini & Voli 1981, Iben & Renzini 1983). It is worth mentioning that the chemical yields of AGB stars vary with the initial chemical environment (van den Hoek & Groenewegen 1997), and that rotation and other stellar effects may severely affect the classical view of elemental production in AGB stars (e.g., Charbonnel 1995).
Planetary Nebulae also eject into the interstellar medium material that has not been processed during the life of the low-mass star, such as argon, neon, and sulphur. These elements are uncorrupted by the evolution of low- and intermediate-mass stars, thus their abundance mix allows one to infer the chemical conditions of the interstellar medium (ISM) at the time of the formation of the PN progenitor.
Planetary Nebulae appear in a variety of morphologies, which have been related to their chemical content (Peimbert 1978, Maciel 1989, Torres-Peimbert & Peimbert 1997), and to stellar populations in the Galaxy (e.g., Stanghellini et al. 1993). It is thus important to study the morphology of PNe for an independent test of stellar evolution and populations.
Most of the relations between PN morphology and the evolution of the progenitor stars, or stellar populations, have been obtained for Galactic PNe. Only the use of the HST cameras allows the spatial resolution of extragalactic PNe. In $`\mathrm{\S }`$2 we discuss the work on Galactic PN morphology, based on contemporary databases. These results may be biased by the large uncertainties of the Galactic PN distance scale (Terzian 1993), and by the selection against PNe of low surface brightness due to Galactic dust absorption. To circumvent these difficulties, and to study PNe in a different chemical environment, we have embarked on a Large Magellanic Cloud PN survey, of which we give a preview in $`\mathrm{\S }`$3. Section 4 includes a summary and outlines future developments.
## 2 Morphology of Galactic Planetary Nebulae
Galactic PN morphology has been related to nebular and stellar physical properties in the past few decades (a review in Stanghellini 1995). The importance of these investigations is two-fold. First, the correlations between PN morphology and central star evolution allow us to use morphology to infer the evolutionary paths of the stellar progenitors (Stanghellini et al. 1993; Stanghellini & Pasquali 1995). Second, by relating the PN morphology to kinematic, spatial, and chemical quantities it is possible to use morphology as tracer of stellar populations (Peimbert 1978, Stanghellini et al. 1993).
In Figure 1 we show the distribution of Galactic PNe as a function of their physical radii and the altitude above (or below) the Galactic plane. The various morphologies are coded with different symbols. It is evident that most bipolar PNe (sample from Manchado et al. 1996) lie closer to the Galactic plane than the round and elliptical PNe, leading to the conclusion that bipolar PNe are the progeny of a young stellar population. This result is not new; already in the seventies, Greig (1972) found that binebulous PNe constitute a younger PN population (Peimbert’s type I PNe). The importance of this plot is that it is based on a homogeneous and complete PN sample, placing this conclusion on firmer ground.
In Figure 2 we examine the mass distribution of central stars of PNe across morphology. This plot is based on the same sample as Figure 1, and on new Zanstra analysis (Stanghellini et al. in preparation), and it confirms what was already found by Stanghellini et al. (1993): bipolar PNe host stars in a wide range of masses, while very few high-mass central stars are found within the round PNe. Once again, this is indicative of two different stellar populations, and it is not in disagreement with the findings by Peimbert and his collaborators.
## 3 Morphology of Magellanic Cloud Planetary Nebulae
The importance of the results discussed in the previous section is somewhat dimmed by the difficulty of establishing a Galactic distance scale to PNe. Even in the most optimistic view, the distance to Galactic PNe is known to within 50$`\%`$, thus only the statistical distances to the selected samples are scientifically significant, while the individual distances are very strongly biased.
To alleviate this problem we are observing a large sample of Large Magellanic Cloud (LMC) PNe (Cycle 8; Program ID: 8271; PI: Stanghellini; Co-Is: Shaw, Blades, Balick). The main goal of our program is to detect LMC PN morphology and find the connection to the physical conditions of the nebulae and their central stars. We obtain the STIS/HST data in slitless mode. The advantage of slitless spectroscopy can be summarized in the following points: (1) each exposure gives spatial and spectral information at once, obtaining narrow band PN images plus excitation/ionization information and velocity information all at the same time; (2) we also simultaneously obtain the spectrum of the central star, if bright enough to be visible. The caveat of slitless spectroscopy is that spatial and spectral information may overlap for PNe larger than the spectral segregation between nearby lines. For example, this technique cannot be used for PNe larger than 1.5 arcsec in diameter. Interestingly enough, these observations are almost ideally suited for LMC PNe, which typically span 0.8 arcsec in size.
From the space-based observations we can thus derive (a) the PN shape and size in the prominent emission lines (H$`\alpha `$, H$`\beta `$, \[O III\] $`\lambda `$5007, \[N II\], etc); (b) the stellar and nebular spectrum, stellar continuum, central star identification, luminosity and temperature. Our HST program is about half completed, the other half to be finished after the third HST servicing mission (SM3a). Among the LMC PNe observed so far, we were able to recognize all the morphological classes that we were familiar with in Galactic PN studies.
From a related ground-based program (with NTT/EMMI at ESO) we will derive detailed abundance and plasma diagnostics. At the time of writing, the ground-based observations (acquired in October 1999) are completed but not analyzed yet, thus we use the literature abundances (main source: Leisy & Dennefeld 1996) for the preliminary analysis shown in this paper.
By correlating the PN morphology and chemical abundance we find that the yields of the products of stellar evolution change across morphology. We know that the carbon, nitrogen, and oxygen abundances of PNe vary with progenitor stellar mass. In particular, carbon is produced during core helium burning, but it is partially converted into nitrogen only in the massive progenitors (e.g. van den Hoek & Groenewegen 1997). The oxygen yield also varies with mass and other evolutionary parameters, but it seems to be consistently stable at the LMC metallicities (see Fig. 1 in van den Hoek & Groenewegen 1997). We can thus use oxygen as a reference element for the discussion of the evolutionary enrichment.
Figure 3 shows the evolution of the C/O versus N/O in LMC PNe across morphological types. We see that almost all targets are enriched in N/O with respect to the local environment. It is also evident that most symmetric PNe have high C/O and low N/O ratios, while asymmetric PNe behave the opposite way. This figure gives us a lot of confidence in what was already found in Galactic PNe: asymmetric PNe may have more massive progenitors than symmetric PNe.
Figure 4 shows the marked difference of population-tracking elements between symmetric and asymmetric PNe. Since both oxygen and neon should not change during evolution through the PN phase, both elements describe the chemical environment prior to the formation of the PN progenitors. Even with relatively small number statistics, we can infer that asymmetric PN progenitors have formed in an environment that is chemically enriched with respect to where the progenitors of symmetric PNe originate. Most asymmetric PNe have a higher neon content than the local H II regions, while only one symmetric PN is markedly enriched in neon with respect to the H II regions. Most of the symmetric PNe have lower oxygen abundance than their surroundings, while more than half of the asymmetric PNe are oxygen rich, again with respect to the local average. Since there is such a sharp contrast in the abundance of primordial elements across morphological classes, we also test whether there are spatial segregations of either morphological class or chemistry in the LMC. We did not find any convincing segregation. This result indicates that the (low- and intermediate-mass) stellar populations are mixed within the LMC.
## 4 Summary
By analyzing Galactic and LMC PNe, we searched for the relations between morphological class and chemical and spatial characteristics to test whether PNe can be used as probes of stellar evolution and populations. The previously known correlations of Galactic PNe show a marked segregation of high-mass stars to form asymmetric PNe, which is in agreement with the distribution of these PNe within the Galactic disk. Novel studies in the Large Magellanic Cloud hint at a connection between morphology and stellar evolution, as well as formation/population. The extragalactic studies are based on a database in the making. The preliminary results lead to the existence of two PN populations, distinct in their masses and chemical contents, that can be resolved through morphological analysis.
Our future plans include the completion of our Cycle 8 HST program and of the ground-based spectroscopy, to constitute a new, homogeneous chemical/morphological database. We will derive the morphological sequences and the dynamical ages, the stellar masses and evolutionary ages, the abundances, and finally we will model the surface brightness decline for each morphological class, as described in Stanghellini et al. (1999).
We also hope to embark on a similar study in the Small Magellanic Cloud (SMC), in order to extend the chemical baseline for PN formation to a low-metallicity environment.
## 5 Acknowledgements
I thank Dick Shaw, Arturo Manchado, and Eva Villaver for contributing substantially to the science included in this paper, and Max Mutchler for the HST data analysis. I wish to express my gratitude to Francesca Matteucci for inviting me to an outstanding meeting.
## 1 Introduction
Recently, several sets of new data about antinucleon annihilation on nucleons and nuclei at very low energy have become available, and further measurements could be performed in the next years. Whenever a comparison between targets or projectiles with different electric charge is required, for better understanding the underlying strong interaction effects (e.g. for isolating pure isospin contributions), it is necessary to be able to subtract Coulomb effects. The aim of this work is to calculate, as precisely and unambiguously as possible, Coulomb effects as functions of the target mass and charge numbers $`A,Z`$, and of the projectile momentum $`k`$ in the range 30–400 MeV/c. We define $`R_{A,Z}(k)=\sigma _{charged}/\sigma _{neutral}`$ as the ratio between $`\overline{p}`$-nucleus annihilation cross sections calculated including or excluding Coulomb effects, at the projectile momentum $`k`$ (MeV/c) in the laboratory frame.
In a qualitative sense the action of Coulomb effects in $`\overline{p}p`$ annihilations is a well understood process. In a semiclassical interpretation, the electrostatic attraction acts as a focusing device, which deflects $`\overline{p}`$ trajectories towards the annihilation region. In a quantum sense we may simply say that it increases the relative probability for $`\overline{p}`$ to be in the annihilation region. An estimation of this effect is possible by assuming that the actual annihilation center is pointlike and that there is complete independence, or factorization, between the effects of strong and Coulomb forces. Then $`R_{A,Z}\simeq |\mathrm{\Psi }_Z(0)/\mathrm{\Psi }_o(0)|^2`$, where $`\mathrm{\Psi }_o(\stackrel{}{r})`$ is the function describing the free motion of a charge zero projectile, and $`\mathrm{\Psi }_Z(\stackrel{}{r})`$ describes the motion of $`\overline{p}`$ in the Coulomb field of a pointlike central charge $`+Ze`$. In this approximation $`R_{A,Z}=2\pi \lambda [1-\mathrm{exp}(-2\pi \lambda )]^{-1}`$, with $`\lambda =Ze^2/\hbar \beta `$ ($`\beta `$ is the relative velocity and $`R_{A,Z}`$ only depends on $`A`$ via c.m. motion within this approximation), and becomes $`R_{A,Z}\simeq 2\pi \lambda `$ for small velocities. Usually, at small velocities, the cross sections for exoenergetic reactions between neutral particles follow the $`1/\beta `$ law, which means a constant frequency of annihilation events. The velocity comes in when the annihilation rate is divided by the incoming flux (perhaps suggesting that the cross section is not the most useful observable at very low energies). In the case of opposite charges for the particles in the initial state, the above approximation suggests a $`Z/\beta ^2`$ law, at least at small $`\beta `$. However there are some limitations:
(1) The experiments which are of interest for us cover a range of momenta (30–400 MeV/c) where velocities are not always small.
(2) Proton and nuclear charges are not pointlike.
(3) Some interplay may exist between the strong central potential and the action of the Coulomb forces that breaks the factorization of the two effects.
(4) Some lower cutoff (in the momentum scale) must exist due to the action of the electron screening.
Concerning the last point, we have attempted some calculations with a modified version of the codes used for the rest of this work. The modifications were such as to take into account the electron screening, with Thomas-Fermi distributions, for heavy nuclei. As far as we trust the modified codes, we don’t see relevant screening effects at momenta $`\geq `$ 10 MeV/c. Apparently the code outputs are stable and reliable, at least at these kinematics and for large nuclei. Nevertheless the need to have our codes covering with precision very different space scales (atomic and nuclear, with a difference of many orders of magnitude) suggests a certain care. E.g., we don’t get reliable results for larger momenta or very light nuclei (small variations of the parameters produce unstable results). So we will postpone a discussion of this point to the time when we have some alternative cross checks of these screening effects. Magnitude considerations anyway suggest that they should not be relevant at 30 MeV/c. In heavy nuclei the Thomas-Fermi approximation suggests a distance $`r_B/Z^{1/3}`$ between the nucleus and the bulk of the electronic cloud surrounding it, which is much larger than 1/(30 MeV/c) $`\simeq `$ 6 fm also for $`Z`$ $`=`$ 100.
As we can see later (see e.g. figs. 1 and 2) the limitations (1), (2), and (3) are effective, and our results show large disagreements with the above $`Z/\beta ^2`$ law, especially with heavy nuclei.
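The factorized point-charge baseline against which these disagreements are measured is straightforward to evaluate (a sketch in units $`c=1`$; the momentum-to-velocity conversion assumes a free relativistic antiproton):

```python
import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant

def coulomb_factor(Z, beta):
    """Point-charge, factorized estimate R = 2*pi*lam / (1 - exp(-2*pi*lam)),
    with lam = Z*alpha/beta (units c = 1)."""
    x = 2.0 * np.pi * Z * ALPHA / beta
    return x / (1.0 - np.exp(-x))

def beta_from_k(k, m=938.272):
    """Lab velocity of an antiproton of momentum k (both in MeV)."""
    return k / np.hypot(k, m)

# the enhancement drops quickly across the 30-400 MeV/c range:
R30 = coulomb_factor(1, beta_from_k(30.0))
R400 = coulomb_factor(1, beta_from_k(400.0))
```

For small $`\beta `$ the factor reduces to $`2\pi \lambda `$, reproducing the $`Z/\beta ^2`$ law discussed above.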
In our calculations, the electrostatic potential has been produced by a uniform charge distribution with radius 1.25 $`A^{1/3}`$ fm. The annihilation is reproduced by an optical potential of Woods-Saxon form. For all but the lightest nuclei we have chosen zero real part, radius 1.3 $`A^{1/3}`$ fm, diffuseness 0.5 fm, and strength 25 MeV for the imaginary part. We will name this potential “standard nuclear potential” (SNP). The calculations have been repeated after changing the optical model parameters, to check for dependence of Coulomb corrections on these parameters (more details are given in the next sections). For the cases of Hydrogen, Deuteron and <sup>4</sup>He targets, where low energy data are available, we have compared the results of the SNP with the outcome of more specific (and rather different) potentials, which better fit the data.
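For reference, the two ingredients just described can be written down explicitly (a sketch; the constant e² ≈ 1.44 MeV·fm and the sign convention for an attractive interaction on the negatively charged antiproton are our implementation choices):

```python
import numpy as np

E2 = 1.43996  # e^2 = alpha * hbar * c, in MeV fm

def snp_imaginary(r, A):
    """Imaginary part (MeV) of the SNP: Woods-Saxon, depth 25 MeV,
    radius 1.3 A^(1/3) fm, diffuseness 0.5 fm; r in fm."""
    R = 1.3 * A ** (1.0 / 3.0)
    return -25.0 / (1.0 + np.exp((r - R) / 0.5))

def coulomb_uniform(r, Z, A):
    """Electrostatic energy (MeV) of the antiproton in a uniformly charged
    sphere of radius 1.25 A^(1/3) fm (attractive, hence negative)."""
    Rc = 1.25 * A ** (1.0 / 3.0)
    r = np.asarray(r, dtype=float)
    inside = -Z * E2 * (3.0 - (r / Rc) ** 2) / (2.0 * Rc)   # harmonic well
    outside = -Z * E2 / np.maximum(r, 1e-12)                # point-charge tail
    return np.where(r < Rc, inside, outside)
```

The uniform-sphere potential matches the point-charge form at the charge radius and is 1.5 times deeper at the center, which is what softens the pointlike estimate discussed in the introduction.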
The two reasons that are behind the parameters of the SNP are that (i) its radius and diffuseness are consistent with the $`A`$-systematics of the nuclear density, and (ii) for $`A`$ $`=`$ 1 this potential reproduces very well the $`\overline{p}p`$ annihilation data in all of the range 30–400 MeV/c. Many other choices with and without a real part (both attractive and repulsive) or with different shapes can reproduce the same $`\overline{p}p`$ data (an example is given below), however a direct generalization of many among these potentials to nuclear targets is not so easy.
Our ideal aim would be to produce a curve $`R_{A,Z}(k)`$ which is independent of the specific potential used to simulate the strong interactions. For $`k`$ $`>`$ 20 MeV/c this is possible with very good precision in light nuclei and within a 10 % uncertainty in heavy ones, as we show later on. Larger uncertainties are confined to the region of very small momenta ($`k`$ $`<`$ 20 MeV/c).
The greatest source of interplay between the annihilation potential and the Coulomb interaction is the inversion mechanism at low energies. As widely discussed elsewhere and as seemingly measured, at very low energies it may happen that a modification of the features of the nuclear targets, which apparently should imply more effective annihilation properties, produces the opposite result. E.g., $`\overline{p}p`$ annihilation cross sections are larger than $`\overline{p}`$D and $`\overline{p}^4`$He ones at low energies. Moreover, mechanisms (like Coulomb forces) that could be expected to enhance the reaction rates can lose effectiveness in the presence of a very strong annihilation core. E.g., inversion is present for $`k`$ $`<`$ 200 MeV/c in the potential used by Brückner $`et`$ $`al`$ (we name this potential BP from now onwards) to fit elastic $`\overline{p}p`$ data at $`k`$ $`>`$ 200 MeV/c. The inversion property was not reported by these authors because, at that time, annihilation data at lower momenta were not available, so they didn’t perform calculations for the inversion region. With minor adjustments of the parameters, their potential (imaginary part: strength 8000 MeV, radius 0.41 fm and diffuseness 0.2 fm; corresponding parameters for the attractive real part: 46 MeV, 1.89 fm and 0.2 fm) can reasonably fit the $`\overline{p}p`$ annihilation data which have been measured in later years. However, it is easy to verify that any increase in the strength or radius of the imaginary part of their optical potential leads to a $`decrease`$ in the corresponding annihilation cross section for $`k`$ $`<`$ 200 MeV/c. In addition, putting the elastic part of this potential to zero leads to a twice as large elastic cross section.
Unfortunately, since this potential (which has the advantage of reproducing elastic $`\overline{p}p`$ data too) uses radius $`\approx `$ 0.4 fm (for the imaginary part) and 1.9 fm (for the real part), and diffuseness 0.2 fm (for both), its generalization to heavier nuclear targets is not straightforward. So with heavy nuclei we prefer to use the above SNP with standard nuclear density parameters. Although the inversion properties of the SNP are not so evident as in the BP case, its outcomes too are far from $`A`$-linear at low momenta, and this introduces a dependence of $`R_{A,Z}`$ on the chosen parameters. Anyway, the results that we show in the next section suggest that strong model dependence is confined to $`k`$ $`<`$ 20 MeV/c, even though the inversion mechanism is effective at larger momenta.
Center of mass corrections have been included in all calculations; they are particularly relevant for the comparison between $`\overline{p}p`$, $`\overline{p}`$D and $`\overline{p}^4`$He annihilations.
Another general remark, which has a certain importance, is that the Coulomb focusing effect acts on the atomic scale and is relevant at momenta that are smaller than the typical nuclear momenta. The consequence is that, if one uses $`\overline{p}(D,p)X`$ reactions to calculate the $`\overline{p}n`$ annihilation cross sections, these cross sections are as much Coulomb affected as the $`\overline{p}p`$ ones. This happens because the projectile is attracted by the deuteron charge more or less the same way, whether it annihilates on the proton or on the neutron. So, while isospin invariance requires complete equality between $`\overline{p}n`$ and $`\overline{n}p`$ annihilation rates, in practice it will not be so, unless one is able to use free neutron targets or antineutron beams.
## 2 Qualitative trends and dependence on the annihilation parameters
In fig.1 we show the ratio $`R_{A,Z}`$, calculated with the SNP, for targets H, <sup>4</sup>He, <sup>20</sup>Ne, and for $`A`$ $`=`$ 50, 100, 150, 200 and charge $`Z`$ $`=`$ $`A/2`$. It can give an idea, for each nuclear charge, of the momentum below which it is not possible to neglect Coulomb effects anymore.
Since $`R_{A,Z}`$ changes by orders of magnitude at low momenta, a more suitable quantity for checking the dependence of $`R_{A,Z}`$ on the annihilation parameters is the ratio $`F_{A,Z}`$ between $`R_{A,Z}`$ and its “pointlike” prediction $`R_{A,Z}^{(p)}`$ $`=`$ $`2\pi \lambda [1-\mathrm{exp}(-2\pi \lambda )]^{-1}`$, with $`\lambda `$ $`=`$ $`Ze^2/\mathrm{\hbar }\beta `$. This ratio, shown in fig.2 for the same target nuclei as fig.1, is interesting in itself, because its deviations from $`F_{A,Z}`$ $`=`$ 1 give an idea of the separation between the pointlike approximation and the actual nuclear behavior (notice: as we have tested, if one limits the pointlike approximation to the factor $`2\pi \lambda `$ things change little). Not accidentally, the “pointlike” approximation is much worse in heavy nuclei. It is not so bad as far as the $`k`$-dependence is concerned, whereas it greatly overestimates the role of the nuclear charge. Indeed, in a wide range of momenta, we can write $`R_{A,Z}`$ $``$ $`Z_{eff}(k)/k`$, where the effective charge $`Z_{eff}(k)`$ has a relatively slow dependence on $`k`$ and becomes, at increasing $`A`$, much smaller than the real electric charge. The fact that with a proton or Helium target the pointlike approximation is good for $`k`$ $`>`$ 100 MeV/c is of little relevance: as one can deduce by looking at fig.1, for light nuclei the charge has no role at these momenta.
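For reference, the “pointlike” factor is straightforward to evaluate (a sketch in natural units, where $`\lambda =Z\alpha /\beta `$; the limiting behaviours are the standard ones):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant, e^2/(hbar*c)

def pointlike_factor(Z, beta):
    """Pointlike Coulomb enhancement R^(p) = 2*pi*lam / (1 - exp(-2*pi*lam)),
    with lam = Z*e^2/(hbar*v) = Z*ALPHA/beta in natural units.
    Tends to 1 for fast projectiles and to 2*pi*lam for beta -> 0."""
    lam = Z * ALPHA / beta
    x = 2.0 * math.pi * lam
    return x / (1.0 - math.exp(-x))
```

The factor grows monotonically as the velocity decreases, so the pointlike prescription always predicts a strong charge enhancement at low momenta, which is exactly what the full nuclear calculation moderates.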
A look at a log-log plot of the annihilation cross sections versus $`k`$ with and without electric charge for heavy nuclei ($`A`$ $`=`$ 50, 100, 150 and 200, in fig.3) shows that the “neutral” cross section is the one that behaves in the most unpredictable way: it has a very small $`k`$-dependence for 30 MeV/c $`<`$ $`k`$ $`<`$ 300 MeV/c, and turns to a $`k^{-1}`$ law at some $`k`$ $`<`$ 20 MeV/c. In the region of $`k`$-independence these cross sections are roughly proportional to $`A^{2/3}`$, but become less $`A`$-dependent at decreasing momenta, in agreement with the described inversion. For $`k`$ between 100 and 300 MeV/c the “charged” annihilation conforms to a rough $`k^{-1}`$ law, and for smaller $`k`$ to something like $`k^{-1.7}`$ or $`k^{-1.8}`$. “Charged” and “neutral” cross sections are roughly proportional to a similar factor $`Z^\gamma `$ or, in other words, $`A^\gamma `$, with $`\gamma `$ close to one. We notice that, if the charge were fully effective, the most obvious predictions, alternative to the optical potential model, would suggest a proportionality between $`ZA^{1/3}`$ and $`ZA^{2/3}`$ for the “charged” cross sections, and between $`A^{1/3}`$ and $`A^{2/3}`$ for the “neutral” ones; the first law corresponds to the S-wave geometrical approximation $`\sigma _{ann}`$ $``$ $`R_{nucleus}/k`$, assuming imaginary scattering length $``$ $`R_{nucleus}`$; the second law is the Distorted Wave Impulse Approximation, where the nuclear cross section is more or less the sum of the cross sections of those nucleons lying on the nuclear surface. In all models, at $`k`$ large enough, the charge effect should disappear.
To within 10 % (or slightly worse), for nuclei from intermediate to heavy we have found that it is possible to write $`\sigma _{ann}(\overline{p}A)\beta /Z`$ $`\approx `$ 10 mb, for 100 MeV/c $`<`$ $`k`$ $`<`$ 300 MeV/c. For 10 MeV/c $`<`$ $`k`$ $`<`$ 100 MeV/c a corresponding law is $`\sigma _{ann}(\overline{p}A)\beta ^\alpha /Z^{3/2}`$ $`\approx `$ 7 mb, with $`\alpha `$ $`=`$ 1.7$`÷`$1.8. Of course, in these $`Z`$ and $`Z^{3/2}`$ dependences, charge and mass effects mix. In the fitting formulas of the next section the roles of $`A`$ and $`Z`$ will be clearly separated (for the needs of the application to heavy nuclei with $`Z`$ $`<`$ $`A/2`$).
Only at very low momenta do $`\overline{p}A`$ annihilation cross sections follow the expected $`k^{-2}`$ law. We have compared annihilations on nuclei at doubled momenta, i.e. $`k`$ $`=`$ 2 MeV/c versus $`k`$ $`=`$ 4 MeV/c and so on. With the $`k^{-2}`$ law fully enforced, the corresponding ratio of annihilation cross sections should be 4. With a Hydrogen target, $`\sigma _{ann}(k)/\sigma _{ann}(2k)`$ $`=`$ 4 within four figures for $`k`$-$`2k`$ $`=`$ 1-2 MeV/c, 3.65 for 10-20 MeV/c, 3.4 for 15-30 MeV/c and 3.2 for 20-40 MeV/c. Things are better with heavy nuclei: with $`A`$ $`=`$ 200 and $`Z`$ $`=`$ 100 we get 3.85 at 20-40 MeV/c.
This suggests that calculations of scattering lengths and similar low-energy quantities, based on the presently available annihilation data, where Coulomb effects are subtracted via the pointlike approximation, should at least be compared with analogous optical-potential calculations.
A last observation is that in fig.1 the ratio $`R_{A,Z}`$ is almost identical, despite the charge difference, for Hydrogen and <sup>4</sup>He targets. This is due to the compensation between center of mass momentum shift and Coulomb focusing. Not considering the center of mass transformation can lead to large errors in the interpretation of light nucleus data.
Concerning the problem of the dependence of $`R_{A,Z}`$ on the nuclear potential parameters, we must distinguish between the cases of light and heavy nuclei. In the first case we have some low energy data that allow for preparing ad hoc potentials which, although perhaps artificially (e.g. via repulsive interactions), permit a reasonable reproduction of the available data. This allows us, for any specific light nuclear target, to compare the outcome of the SNP with the outcome of a rather different potential. This comparison does not show any relevant model dependence of $`R_{A,Z}`$, as shown in detail in the following. With heavy nuclei we have no alternatives to the SNP, so comparisons have been performed by simply attempting some changes in the SNP strength and diffuseness and comparing the outputs. With this procedure, we can estimate a model dependence of $`R_{A,Z}`$ below 10 % for heavy nuclei at momenta around 30 MeV/c, half of that at 50 MeV/c and a satisfactory model independence above 100 MeV/c.
This is clearly shown in fig.4, where we present $`F_{A,Z}`$ for the cases $`A`$ $`=`$ 50 and 200, $`Z`$ $`=`$ $`A/2`$, comparing the results for the three choices (i) $`W`$ $`=`$ 25 MeV, $`a`$ $`=`$ 0.5 fm, (ii) $`W`$ $`=`$ 12.5 MeV, $`a`$ $`=`$ 0.5 fm, (iii) $`W`$ $`=`$ 18 MeV, $`a`$ $`=`$ 0.6 fm. A variation of the optical potential by a factor of two should cover all the reasonable possibilities (much larger variations of the potential strength are allowed only jointly with compensating variations of the radius or of the diffuseness, which for heavy nuclei would not make much sense). As one can clearly see in the figure, to stay safely within 10 %, one must select momenta from 30 MeV/c upward. Below 20 MeV/c the curves corresponding to different models increase their separation in a less controllable way.
In general, one would need data to impose stricter constraints on the optical model parameters. In fact, a qualitative synthesis of many attempts with different potentials, in light and heavy nuclei (more details on light nuclei are presented in the next section), suggests that whenever two different potentials reproduce similar “charged” cross sections, the corresponding “neutral” cross sections will also be similar. So we can say that a certain value of the annihilation cross section is associated with a certain $`R_{A,Z}`$, whatever potential has been chosen to reproduce this cross section.
## 3 Fits of $`R_{A,Z}`$.
In this section we synthesize the results of the calculation of $`R_{A,Z}`$ for a wide spectrum of nuclei. We don’t show figures, since these would simply report curves all very similar to the previous ones. Instead, we give analytical fits of these curves, which in subintervals of 30$``$400 MeV/c reproduce them within specified errors.
All the reported fits have the general form
$`R_{A,Z}=1+C_\alpha Z\beta _{cm}^{-\alpha },`$ (1)
where $`C_\alpha `$ is a constant coefficient, and $`\beta _{cm}`$ is the relative velocity in the center of mass frame, calculated via the relativistic relations between center of mass momentum, energy and velocity for a projectile with reduced mass $`AM_p/(A+1)`$. Actually, there is some $`small`$ difference between relativistic and nonrelativistic quantities only at the larger momenta of the range, so this distinction is hardly necessary, and one may take $`\beta _{cm}`$ $`=`$ $`\beta _{lab}`$, since at the nonrelativistic level the relative velocity does not depend on the reference frame.
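The kinematics entering the fits can be sketched in a few lines (standard proton mass; we treat the projectile as an effective particle of reduced mass $`AM_p/(A+1)`$, so that $`\beta =k/\sqrt{k^2+\mu ^2}`$, an assumption consistent with the prescription above):

```python
import math

M_P = 938.272  # proton mass, MeV/c^2

def beta_cm(k, A):
    """Relative velocity for momentum k (MeV/c) of a projectile with
    reduced mass mu = A*M_p/(A+1); relativistically beta = p/E."""
    mu = A * M_P / (A + 1.0)
    return k / math.sqrt(k * k + mu * mu)
```

At the low end of the range this reduces to the nonrelativistic $`\beta =k/\mu `$, so the distinction between laboratory and center of mass velocity is indeed immaterial there.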
For nuclei with $`A`$ $`>`$ 50 the data for $`R_{A,Z}`$ can be fitted within a few percent, in the range of laboratory momenta 50$``$400 MeV/c, by the relation:
$`R_{A,Z}=1+10^{-5}(45-0.0075A)Z\beta _{cm}^{-\alpha },A>50,50<k<400MeV/c.`$ (2)
Choosing $`\alpha `$ $`=`$ 2.07 one gets a precision of a few percent in all of the range 50$``$400 MeV/c, whereas choosing $`\alpha `$ $`=`$ 2.08 the fit becomes particularly precise in the region 100$``$400 MeV/c (in practice one does not distinguish the original and the fitted curve anymore), precise within 10 % at 70 MeV/c and within 20 % at 50 MeV/c.
The above fit gets worse with nuclei with $`A`$ $`<`$ 50. For $`A`$ $`=`$ 40 it still gives a 10 % precision in the region 100$``$400 MeV/c (and a little worse for lower momenta). However a better fit (within 5 %) is:
$`R_{40,Z}=1+0.00051Z\beta _{cm}^{-2},50<k<400MeV/c.`$ (3)
Following the heavy nuclei rule, the coefficient of $`Z/\beta _{cm}^{2.07}`$ would be 0.00041, instead of 0.00051. Actually the value 0.00051 is a compromise. With 0.00052 there is better precision (almost perfect superposition of curves) for $`k`$ $`>`$ 100 MeV/c and a 10 % overestimation at $`k`$ $`=`$ 50 MeV/c. In the region 50$``$100 MeV/c alone a very good fit is:
$`R_{40,Z}=1+0.00084Z\beta _{cm}^{-1.8},50<k<100MeV/c.`$ (4)
For the relevant lighter nuclei, precise fits can only be obtained, as in the previous case, by systematically splitting the momentum range into two parts: 50$``$100 MeV/c and 100$``$400 MeV/c. We use the same formulas, with the same exponents and different coefficients. If we call $`C_2`$ and $`C_{1.8}`$ the coefficients of the $`Z\beta ^{-2}`$ and $`Z\beta ^{-1.8}`$ terms, we get:
For $`A`$ $`=`$ 20, $`C_2`$ $`=`$ 0.00066 (which, apart from almost perfectly reproducing the range 100$``$400 MeV/c, gives a 12 % overestimation at 50 MeV/c) and $`C_{1.8}`$ $`=`$ 0.00097. A fit of the whole range within a few percent is nevertheless possible with $`1+0.00088Z\beta ^{-1.83}`$.
For $`A`$ $`=`$ 12 we don’t find a unique satisfactory fit for all of the required range. For the split ranges we get: $`C_2`$ $`=`$ 0.00080, and $`C_{1.8}`$ $`=`$ 0.0011. These two coefficients allow for a precision within a few percent.
The coefficients for all of the nuclei with $`A`$ between 12 and 40 can be interpolated quadratically exploiting the values given for the cases $`A`$ $`=`$ 12, 20, 40. This procedure should not introduce larger errors than those related to the model dependence.
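The suggested quadratic interpolation can be sketched with a Lagrange polynomial through the three quoted $`C_2`$ values (the helper names are ours; the same scheme would apply to the $`C_{1.8}`$ coefficients):

```python
def lagrange3(xs, ys, x):
    """Quadratic Lagrange interpolation through three points."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# C_2 coefficients quoted in the text for A = 12, 20, 40
A_NODES = (12.0, 20.0, 40.0)
C2_NODES = (0.00080, 0.00066, 0.00051)

def c2_interp(A):
    """Interpolated C_2 coefficient for 12 <= A <= 40."""
    return lagrange3(A_NODES, C2_NODES, A)
```

By construction the interpolant reproduces the three tabulated coefficients exactly and varies smoothly in between.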
Among the light targets, the most important case is Hydrogen, for which we need Coulomb corrections down to 35 MeV/c. In this case it is possible to calculate the corrections with two completely different models for the central potential, i.e. the SNP and the BP. Both produce the same annihilation rates in all of the considered range.
In the first case the whole range 35$``$400 MeV/c can be fitted within 5 % by
$`R_{1,1}=1+C_2\beta _{cm}^{-2},35<k<400MeV/c,`$ (5)
with $`C_2`$ $`=`$ 0.0030. However, in the subrange 70$``$400 MeV/c the $`\beta ^{-2}`$ law can be more precise, with $`C_2`$ $`=`$ 0.0040. This allows for an error within 1 % from 70 to 400 MeV/c, 2 % at 60 MeV/c and 5 % at 50 MeV/c. For the subrange 30$``$70 MeV/c an almost perfect fit is given by the law
$`R_{1,1}=1+C_{1.4}\beta _{cm}^{-1.4},30<k<70MeV/c,`$ (6)
with $`C_{1.4}`$ $`=`$ 0.0120.
When one repeats the same fitting procedure starting with the BP for the annihilation core, the differences are small. In the range 70$``$400 MeV/c the previous $`\beta ^{-2}`$ law is as good as in the SNP case, with exactly the same coefficient $`C_2`$ $`=`$ 0.0040. For the range 30$``$70 MeV/c the $`\beta ^{-1.4}`$ law is still very good, with a small modification in $`C_{1.4}`$. With the previous coefficient one gets an almost uniform 1-2 % overestimation of $`R_{1,1}`$. In this case the best coefficient is $`C_{1.4}`$ $`=`$ 0.0110. A fit within 5 % of the full range 35$``$400 MeV/c is possible with the $`\beta ^{-2}`$ form and $`C_2`$ $`=`$ 0.0028 (instead of the previous 0.0030).
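As an illustration, the Hydrogen pieces above can be combined into a single correction function (the SNP coefficients 0.0040 and 0.012 are those quoted in the text; the conversion of momentum to velocity is our sketch, using the standard proton mass and the reduced mass for $`A=1`$):

```python
import math

M_P = 938.272  # proton mass, MeV/c^2

def r11(k):
    """Coulomb correction factor for pbar-p annihilation (SNP fit):
    1 + 0.0040*beta^-2 for 70-400 MeV/c, 1 + 0.012*beta^-1.4 below 70 MeV/c."""
    mu = M_P / 2.0                       # reduced mass for A = 1
    beta = k / math.sqrt(k * k + mu * mu)
    if k >= 70.0:
        return 1.0 + 0.0040 * beta ** -2.0
    return 1.0 + 0.012 * beta ** -1.4
```

The two branches agree to within about 1 % at the matching point, and the correction decays toward unity at the upper end of the range, as expected.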
The fact that the calculated value of $`R_{1,1}`$ is almost the same with two such different potentials makes this result quite reliable: the two potentials give the same total reaction cross sections, but their geometrical properties are completely different.
With <sup>4</sup>He and D targets the understanding of the structure of the annihilation potential is not very good yet, both because of the controversial interpretation of the data (which show strong inversion properties) and because it is impossible here to rely on systematic nuclear properties. As in the H case, we compare fits to the outcomes of two different potentials.
With <sup>4</sup>He we have data starting from 45 MeV/c. We first calculate $`R_{4,2}`$ with the SNP, which produces annihilation curves that don’t pass too close to the two data points at 45 MeV/c and 70 MeV/c. Then we also try a peculiar potential which, due to a slightly different annihilation core and to a repulsive elastic force, produces a better fit to the experimental data in the full range 45$``$400 MeV/c. The exact values are: imaginary strength 40 MeV, real (repulsive) strength 28 MeV, radius 1.1$`4^{1/3}`$ fm, diffuseness 0.7 fm (radius and diffuseness are equal for the real and imaginary parts).
In the case of the SNP, the range 80$``$400 MeV/c can be fitted with very good precision by the $`Z\beta ^{-2}`$ law with $`C_2`$ $`=`$ 0.00130. The $`Z\beta ^{-1.4}`$ law allows for a rather good fit (within 3 %) in all of the range 40$``$400 MeV/c, with $`C_{1.4}`$ $`=`$ 0.0040. An improvement of the fit (to 1 %) in the region 40$``$100 MeV/c can be obtained by the law $`Z\beta ^{-1.25}`$ with $`C_{1.25}`$ $`=`$ 0.0070.
With the second kind of potential the above $`Z\beta ^{-1.4}`$ fit even improves its accuracy a little (errors within 2 % in all of the range 40$``$400 MeV/c). No relevant differences are found between the $`R_{4,2}`$ calculated via the two potentials.
With a Deuteron target we, again, apply different choices for the potential. First the nuclear standard (which in the deuteron case is certainly not faithful to the physical situation) and then a completely different one (which better reproduces the lowest energy $`\overline{p}`$-deuteron data) with imaginary strength 750 MeV, repulsive real strength 400 MeV, real and imaginary radius 0.1 fm, real and imaginary diffuseness 0.6 fm. In practice the latter is an exponentially decaying potential, having radius much smaller than diffuseness.
With the SNP we can perfectly fit $`R_{2,1}`$ in the range 30$``$200 MeV/c with the $`\beta ^{-1.4}`$ law, with $`C_{1.4}`$ $`=`$ 0.0060. The same fit can be extended to the region 200$``$400 MeV/c with an error within 1 %. In the latter region a better fit coefficient would be 0.0040 (this does not make a relevant difference, but with this coefficient the fitting law can be extended to much larger momenta). With the other potential, nothing changes: the calculated cross sections are rather different at momenta below 200 MeV/c, but $`R_{2,1}`$ is the same in both cases.
The above comparisons between pairs of rather different potentials confirm that for light nuclei the calculation of $`R_{A,Z}`$ is, for all practical purposes, model independent.
## 4 Summary and conclusions
To summarize, in the full range 30$``$400 MeV/c we are not able to give a simple and general law for the Coulomb corrections, of the kind derived from the approximation of a pointlike annihilation center. We have shown that such an approximation is rather poor in this momentum range. We have given analytical approximations, with stated errors, for the calculated values of the Coulomb correction for several relevant target nuclei: H, D, <sup>4</sup>He, and then $`A`$ $`=`$ 12, 20, 40, 50, 100, 150, 200, and variable $`Z`$. By interpolation it should be possible to reproduce Coulomb corrections for most nuclear targets, starting from our formulas. These analytical approximations are all of the form $`R_{A,Z}`$ $`=`$ $`1+C_\alpha Z\beta ^{-\alpha }`$, with $`\alpha `$ ranging from 1.25 to 2.08 and $`C_\alpha `$ $`\ll `$ 1. For light nuclei (H to <sup>4</sup>He) they should be reliably model independent, while for heavier nuclei it is safer to assume a residual 10 % dependence on the details of the specific model used for describing the annihilation process.
Quantum Transparency of Barriers for Structure Particles
## I Introduction
Quantum tunnelling through a barrier is one of the most common problems in many areas of physics. Tunnelling is usually treated as the penetration of a structureless particle through the barrier, whereas in realistic physics we are, as a rule, dealing with the penetration of composite (structure) particles. It is clear that when the spatial size of the barrier is much larger than the typical dimension of the incident complex, the penetration probability of the complex should differ insignificantly from that of a structureless particle. The situation changes drastically when the size of the complex is larger than the spatial width of the barrier. In this case, mechanisms arise that increase the barrier transparency (see, for instance, ref. and references therein). The simplest of them arises when only part of the complex interacts with the barrier, i.e. the penetration probability depends on a mass smaller than the mass of the complex.
In this paper, we consider a mechanism of drastic barrier transparency that relies on the possible formation of a barrier resonance. To this end, it is necessary that at least two particles interact with the barrier. It is easy to imagine how such a resonance state forms. Suppose only one of the particles passes through the barrier, and the forces coupling the pair are sufficient to keep the particles on different sides of the barrier. Such a resonance state then lives until one of the particles penetrates through the barrier, and the barrier width determines its lifetime. As will be shown below, the probability of tunnelling through the barrier can reach unity. Physically, this effect is explained by the interference suppression of the reflected wave, since the presence of a barrier resonance simply corresponds to an effective interaction in the variable of the centre-of-inertia motion of the pair, whose spatial form has a local minimum at the barrier centre. Therefore, the suppression of the reflected wave can be explained by the interference phenomenon well known in optics and used in coating lenses: the difference in path between the wave reflected from the first peak and the one reflected from the other peak should equal one-half the wavelength.
In this paper, the above transparency effect is demonstrated analytically and numerically for a pair of identical particles coupled by an oscillator interaction (in what follows, an oscillator) that penetrates through a one-dimensional repulsive barrier of Gaussian type. This choice of interactions is, on the one hand, due to the system being extremely simple, allowing the reduction of the three-dimensional scattering of a three-dimensional pair of particles on a one-dimensional barrier to the scattering of a one-dimensional oscillator on a one-dimensional barrier. On the other hand, just this type of interaction is used in the literature devoted to the decay probability of false vacuum in high energy particle collisions (see, for instance ). There, it was pointed out that the processes of transition from the false vacuum could be described on the basis of quantum-mechanical tunnelling of a pair of particles through a barrier, but a system was investigated in which only one particle of the oscillator interacted with the barrier. In this note it is shown that when both particles interact with the barrier, the penetration probability can be essentially higher than in the systems considered earlier.
## II Equations
Consider the penetration of a pair of identical particles with masses $`m_1=m_2=m`$ and coordinates $`𝐫_1`$ and $`𝐫_\mathrm{𝟐}`$, coupled by an oscillator, through the potential barrier $`V_0(x_1)+V_0(x_2)`$. The Hamiltonian of this system ($`\mathrm{\hbar }`$ = 1)
$$-\frac{1}{4m}\mathrm{\Delta }_R-\frac{1}{m}\mathrm{\Delta }_r+\frac{m\omega ^2}{4}r^2+V_0(𝐑-𝐫/\mathrm{𝟐})+V_0(𝐑+𝐫/\mathrm{𝟐}),$$
written in terms of the coordinate of the centre of inertia of the pair $`𝐑=(𝐫_1+𝐫_2)/2`$ and the internal variable of the relative motion $`𝐫=𝐫_1-𝐫_2`$, describes the three-dimensional motion of a three-dimensional oscillator. Since the potential barrier depends only on one variable, and the oscillator interaction is additive in the projections of $`𝐫`$, the wave function is factorized, and its nontrivial part describing scattering depends only on two variables. It is convenient to represent these variables in the form
$$x=\sqrt{\frac{m\omega }{2}}(x_1-x_2),y=\sqrt{\frac{m\omega }{2}}(x_1+x_2).$$
The Schroedinger equation in these variables is of the form
$$\left(-\partial _x^2-\partial _y^2+x^2+V(x-y)+V(x+y)-E\right)\mathrm{\Psi }=0,$$
(1)
where the energy $`E`$ is written in units of $`\omega /2`$, and the potential barrier $`V(x\pm y)=\frac{2}{\omega }V_0((x\pm y)/\sqrt{2m\omega })`$ is below written in the convenient form $`V(X)=\frac{A}{\sqrt{2\sigma \pi }}\mathrm{exp}(-X^2/(2\sigma )).`$ Here, the amplitude $`A`$ is a parameter describing the energy height of the barrier, and $`\sigma `$ determines its spatial width. Let the scattering proceed from left to right, and let the oscillator initially be in state $`n`$. Then the boundary conditions are written in the form
$$\begin{array}{cc}\underset{y\to -\mathrm{\infty }}{lim}\mathrm{\Psi }\hfill & \hfill \to \mathrm{exp}(ik_ny)\phi _n(x)+\sum _{j\le N}S_{nj}\mathrm{exp}(-ik_jy)\phi _j(x),\\ \underset{y\to +\mathrm{\infty }}{lim}\mathrm{\Psi }\hfill & \hfill \to \sum _{j\le N}R_{nj}\mathrm{exp}(ik_jy)\phi _j(x),\\ \underset{x\to \pm \mathrm{\infty }}{lim}\mathrm{\Psi }\hfill & \hfill \to 0.\end{array}$$
(2)
The oscillator wave functions $`\phi _j(x)`$ obey the Schroedinger equation
$$\left(-\partial _x^2+x^2-\epsilon _j\right)\phi _j=0,$$
with energy $`\epsilon _j=2j+1`$ ($`j=0,1,2,\mathrm{\dots }`$), momenta $`k_j=\sqrt{E-\epsilon _j}`$, and with $`N`$ being the number of the last open channel ($`E-\epsilon _{N+1}<0`$). Below, we consider an oscillator composed of bosons, whose spectrum is conveniently numbered from 1. Thus, in what follows, $`\epsilon _j=4j-3`$ ($`j=1,2,\mathrm{\dots }`$).
We define the probabilities of penetration $`W_{ij}`$ and reflection $`D_{ij}`$ as the ratio of the density of the transmitted or reflected flux to that of the incident particles, i.e.
$$W_{ij}=|R_{ij}|^2\frac{k_j}{k_i},D_{ij}=|S_{ij}|^2\frac{k_j}{k_i}.$$
It is clear that $`\sum _{j\le N}(W_{ij}+D_{ij})=1.`$
This problem of determination of penetration (reflection) probabilities requires solution of a two-dimensional differential equation. The aim of this paper is to demonstrate the quantum transparency of a barrier. Therefore, we take advantage of the well-known adiabatic approximation successfully applied in various three-body problems (see, for instance, review ). To this end, we introduce basis functions $`\mathrm{\Phi }_i`$ obeying the equation
$$\left(-\partial _x^2+x^2+V(x-y)+V(x+y)-ϵ_i(y)\right)\mathrm{\Phi }_i(x;y)=0,$$
(3)
and use them for the expansion $`\mathrm{\Psi }(x,y)=\sum _if_i(y)\mathrm{\Phi }_i(x;y)`$. Inserting this expansion into Eq.(1) and projecting onto the basis, we arrive at the system of equations
$$\left(\left(-\partial _y^2+ϵ_i-E\right)\delta _{ij}-Q_{ij}\partial _y-\partial _yQ_{ij}+P_{ij}\right)f_j=0,$$
(4)
where the effective interaction in channel $`i`$, $`E_i=ϵ_i+P_{ii}`$, corresponds to the diagonal part of the interaction, and the functions arising from the projection, $`Q_{ij}=\langle \mathrm{\Phi }_i,\partial _y\mathrm{\Phi }_j\rangle `$ and $`P_{ij}=\langle \partial _y\mathrm{\Phi }_i,\partial _y\mathrm{\Phi }_j\rangle `$, describe the coupling of channels. The brackets mean integration over the whole region of $`x`$. By definition, the function $`Q_{ij}`$ is antisymmetric, and $`P_{ii}`$ is positive. As a rule, the coupling of channels is small, and the scattering processes can be described by a limited number of equations. As in our case the spectrum of Eq.(3) is discrete, a good description of the scattering processes is achieved with the use of all the channels open in energy . At large $`|y|`$, the effective energy $`E_i\to ϵ_i`$, and $`\mathrm{\Phi }_i(x;y)\to \phi _i(x)`$, which allows us to easily rewrite the boundary conditions (2) in channel form.
Considering the boson case, we show for one channel that the effective interaction $`E_i`$ ($`i=1,2`$) possesses a clear minimum, resulting in the resonance mechanism of transparency, and that the inclusion of the second channel does not change this picture.
## III One-channel approximation
Within the chosen approach, the effect of quantum transparency is observed even in the one-channel approximation, i.e. in the Born-Oppenheimer approximation. In Fig.1, the dependencies $`E_1(y)`$ are drawn, determined by numerical solution of Eq.(3) at $`\sigma =0.01`$ and three values of the amplitude $`A`$ denoted by letters A, B, and C, respectively. These values of parameters were taken to demonstrate the formation of a potential well that provides the resonance peculiarities of scattering. For comparison, shown in Fig.1 are the initial potentials of barriers at $`x=0`$, i.e. $`2V(y)`$, that describe the scattering of structureless (or extremely bound) particles. For convenience, they are shifted by the binding energy of a pair.
In Fig.2, we present the probabilities of penetration of a pair through the barriers, determined by numerical solution of Eq.(4) and corresponding to the potentials drawn in Fig.1. It is clearly seen that for $`A=1`$, the scattering of an oscillator and of a structureless particle with doubled mass differ only slightly. At $`A=5`$, a resonance component of the scattering appears, and at $`A=10`$, a clear resonance with $`W_{11}=1`$ at the peak is observed at energy $`E_r\approx 8.12`$. It is just this behaviour that is meant by the term ”quantum transparency of barriers”. Note for comparison that the probability of penetration through the barrier $`2V(y)`$ is as small as $`0.012`$.
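In a single channel, a transmission probability like $`W_{11}`$ can be obtained by integrating the scaled Schroedinger equation backwards from a pure transmitted wave and projecting onto incident and reflected plane waves. The following is a minimal sketch in the dimensionless units of Eq.(1) (RK4 integrator; the grid and box size are illustrative choices, and the effective one-channel potential would have to be supplied as $`V(y)`$):

```python
import cmath
import math

def transmission(E, V, y_max=8.0, n=4000):
    """One-channel transmission probability at energy E through barrier V(y).
    Start from a pure transmitted wave exp(i*k*y) at y_max, integrate
    psi'' = (V(y) - E) * psi back to -y_max, then decompose into incident
    and reflected plane waves; T = 1/|A_inc|^2 (same k on both sides)."""
    k = math.sqrt(E)
    h = -2.0 * y_max / n                       # backward step
    y = y_max
    psi = cmath.exp(1j * k * y)
    dpsi = 1j * k * psi

    def f(y, p, dp):                           # first-order system (psi, psi')
        return dp, (V(y) - E) * p

    for _ in range(n):                         # classical RK4 step
        k1 = f(y, psi, dpsi)
        k2 = f(y + h / 2, psi + h / 2 * k1[0], dpsi + h / 2 * k1[1])
        k3 = f(y + h / 2, psi + h / 2 * k2[0], dpsi + h / 2 * k2[1])
        k4 = f(y + h, psi + h * k3[0], dpsi + h * k3[1])
        psi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dpsi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        y += h

    # on the left, psi = A*exp(iky) + B*exp(-iky); extract the incident amplitude
    A = 0.5 * (psi + dpsi / (1j * k)) * cmath.exp(-1j * k * y)
    return 1.0 / abs(A) ** 2
```

For $`V\equiv 0`$ this returns unit transmission, while a Gaussian barrier with peak above $`E`$ suppresses it strongly, as expected.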
Complete transparency of a barrier may appear somewhat surprising. Simple analogies with optical phenomena were presented in the Introduction. Below, we write simple expressions, valid for rectangular barriers and in the quasi-classical approximation, which demonstrate the possibility of a barrier being completely transparent. To this end, we take the potential of form $`C`$ given in Fig.1, with two clear peaks. Since the one-dimensional problem of penetration through a barrier can be found in many textbooks (see, e.g., ), here we present only the scheme of solution of the problem of penetration through a two-peak barrier. Denoting the three regions of classically allowed motion from left to right by the numbers 1, 2, 3 and introducing upper indices for the amplitudes and probabilities of penetration from the region marked by the left index to the region marked by the right index, we easily obtain
$$R^{(13)}=\frac{R^{(12)}R^{(23)}}{1-S^{(21)}S^{(23)}}.$$
For simplicity, the lower index of channel 1 is omitted. Then the probability of penetration through a two-peak barrier is expressed through the probabilities of penetration through each peak as follows:
$$W^{(13)}=\frac{W^{(12)}W^{(23)}}{1+|S^{(21)}|^2|S^{(23)}|^2-2|S^{(21)}||S^{(23)}|\mathrm{cos}(\theta )},$$
where $`\theta `$ is twice the difference of phases (or of the action, in quasi-classics) of the motion between the left and right peaks. Time-reversal invariance leads to the principle of detailed balance (see, e.g. ), which in our case results in the equality $`|S^{(21)}|=|S^{(12)}|`$.
For a symmetric potential ($`W^{(12)}=W^{(23)}`$), the penetration probability $`W^{(13)}`$ reaches a maximum at $`\theta =2\pi `$n (n=1,2,…). Note that it is just this condition that, in the quasi-classical approach, determines the spectrum of bound states for infinitely broad peaks. Provided that $`|S^{(ij)}|^2=1-W^{(ij)}`$, it is not difficult to verify that at these energies $`W^{(13)}=1`$, i.e. complete transparency occurs.
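As a numerical check of this composition rule, the sketch below (function and variable names are ours; we assume a single open channel, so that $`|S^{(ij)}|^2=1-W^{(ij)}`$) reproduces the complete transparency of a symmetric two-peak barrier at $`\theta =2\pi `$n:

```python
import math

def two_peak_transmission(w12, w23, theta):
    """Probability of penetration through a two-peak barrier, composed
    from the single-peak probabilities w12, w23 and the phase theta
    accumulated between the peaks (the formula for W^{(13)} above)."""
    s21 = math.sqrt(1.0 - w12)  # |S^{(21)}|: reflection amplitude modulus
    s23 = math.sqrt(1.0 - w23)  # |S^{(23)}|
    denom = 1.0 + (s21 * s23) ** 2 - 2.0 * s21 * s23 * math.cos(theta)
    return w12 * w23 / denom

# Symmetric barrier with a tiny single-peak probability, on resonance:
print(two_peak_transmission(0.012, 0.012, 2 * math.pi))  # -> 1.0 up to rounding
```

Even with a single-peak penetration probability of only $`0.012`$, the pair of peaks is fully transparent when the interpeak phase hits $`2\pi `$n, and nearly opaque at $`\theta =\pi `$.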
Parameters of the barrier potential $`V`$ were chosen so that the resonance energy $`E_r`$ would be higher than the energy of the second channel $`\epsilon _2=5`$. This is necessary for proving that the inclusion of inelastic processes does not change the resonance picture of transparency.
Note that the parameters of the potential $`V`$ differ from those of the potential of Ref. , where the barrier height is comparable to the energy of an elementary excitation ($`V(0)=1,\omega =1/2`$). The two sets of parameters would become close if the magnitude of $`\omega `$ from Ref. were 50 times smaller.
## IV Two-channel approximation
In Fig.3, we show the results of numerical solution of Eq.(3) for the second channel. It is seen that the channel-coupling functions $`Q_{12}`$ and $`P_{12}`$ are about two orders of magnitude smaller than the diagonal value $`E_2`$. The effective energy $`E_2`$ is more complicated in form than $`E_1`$ and can also generate extra resonances, whose correct consideration requires inclusion of the third channel (energies above 9). This goes beyond the scope of our problem of demonstrating the transparency of a barrier, and here we only mention the presence of the peaks of $`E_1`$ in $`E_2`$.
In Fig. 4, we plot the probabilities of penetration through a barrier of an oscillator in the ground state. The elastic peak $`W_{11}`$ is conserved, though it is shifted ($`E_r\simeq 5.58`$) and about 3 times narrower. Its maximal value $`0.94`$ does not reach 1 owing to the open second channel. The quantities $`W_{12}`$ and $`D_{12}`$, shown in the region of resonance energies at the top of the Figure (inset), amount to $`0.03`$, and the total probability of penetration through the barrier reaches $`0.97`$, which allows us to speak of a considerable, though not 100%, transparency. Note that the probability of penetration through the barrier $`2V(y)`$ in this region is only $`0.0075`$. The quantity $`D_{11}`$ is very close to zero ($`0.0007`$), thus demonstrating the above optical effect of suppression of the reflected wave even in the two-channel case.
The second peak at energy $`E_r\simeq 9.6`$ shown in Fig.4 cannot be considered reliable, since at these energies the third channel must be taken into account. This peak demonstrates an interesting peculiarity: all the probabilities of both channel 1 and channel 2 amount to 1/4. Moreover, the behaviour of the probabilities of transition from state 2 ($`W_{22},W_{21},D_{21}`$) in the energy region of the second peak (not shown in Fig.4) is visually indistinguishable from the behaviour of the inelastic components from state 1. Owing to the principle of detailed balance , the equalities $`W_{21}=W_{12}`$ and $`D_{21}=D_{12}`$ demonstrate only the accuracy of the calculation. Surprising is the behaviour of the probability $`W_{22}`$, whose energy dependence reproduces, to 5% accuracy, the behaviour of the inelastic components around this peak.
## V Conclusion
The considered mechanism of transparency of barriers for a coupled pair of particles is clearly observed for barriers that are narrow and high compared to the characteristic size and energy of the oscillator. These conditions do not exclude spatially asymmetric barrier potentials, since the symmetry of the effective interaction $`E_i(y)`$ is determined only by the particles being identical. Therefore, effects of quantum transparency can occur in various fields of physics. In particular, whenever it is admissible to describe the processes of false vacuum decay in high energy collisions by quantum tunnelling of a pair of particles coupled by an oscillator interaction, mechanisms of resonance transparency of the barrier arise, much increasing the decay probability of the false vacuum.
Acknowledgements. The author is indebted to V.A. Rubakov, who pointed out that the quantum mechanical system considered here is not fully adequate to the problem of induced false vacuum decay, for very useful discussions.
Antiferromagnetism in a doped spin-Peierls model: classical and quantum behaviors
## 1 Introduction
The spin-Peierls transition at $`T_{\mathrm{SP}}\simeq 14`$ K in the inorganic quasi one dimensional spin-Peierls compound CuGeO<sub>3</sub> has attracted much interest . This transition is characterized by the appearance of a finite dimerization in the CuO<sub>2</sub> chains, and the opening of a spin gap. With CuGeO<sub>3</sub>, it became possible to study experimentally the effect of doping in a spin-Peierls system. An antiferromagnetic (AF) phase was discovered upon replacing a fraction of the Cu ions (with $`S=1/2`$) by magnetic ions with a different spin: Ni (with $`S=1`$) or Co (with $`S=3/2`$), or by non magnetic ions: Zn or Mg . Also, the Ge sites (outside the CuO<sub>2</sub> chains) can be substituted with Si , leading to antiferromagnetism at low temperature.
Recent experiments by Manabe et al. have shown the existence of a finite Néel temperature $`\simeq 25`$ mK at a doping concentration as low as $`0.12`$ $`\%`$ . The doping dependence of the Néel temperature obtained in these experiments suggests the absence of a critical concentration for the appearance of antiferromagnetism: at low doping the ordering temperature scales like $`\mathrm{ln}T_N\sim -1/x`$ .
Early theoretical works have focussed on the identification of the relevant low energy degrees of freedom. Fukuyama, Tanimoto and Saito have shown the coexistence between antiferromagnetism and dimerization in a doped spin-Peierls model. The degrees of freedom relevant to the low energy physics are solitonic spin-$`1/2`$ excitations pinned at the impurities . These excitations are the building blocks of the theory in Refs. . These spin-1/2 objects interact via an exchange decaying exponentially with distance. Interchain interactions can be incorporated by considering the existence of a transverse correlation length, approximately one tenth of the longitudinal correlation length, as recently proposed independently by Dobry et al. , and Fabrizio, Mélin and Souletie . Numerical calculations with realistic spin-phonon couplings have provided a link between the microscopic Hamiltonian and the effective model of interacting spin-$`1/2`$ moments . The approach followed in Refs. and continued in the present work relies on the treatment of disorder in the effective Hamiltonian. This allows one to discuss the qualitative physics of large scale systems at a finite temperature, while the numerical methods have so far been limited to the ground state properties . The scope of this article is to analyze the model beyond the percolation approximation used in Ref. , both at the classical and quantum levels. We first show in section 3 that the physics of the quantum Hamiltonian is already present in the classical Ising Hamiltonian, and give a rigorous derivation of mean field theory via a Bethe-Peierls treatment. The second purpose of the article is to show that the quantum Hamiltonian has an antiferromagnetic behavior at low temperature. The quantum model is treated in a cluster renormalization group (RG) calculation in sections 4 and 5.
## 2 The model
We recall the model proposed in Ref. . When impurities are introduced in a dimerized background (for instance non magnetic impurities such as Zn), spin-1/2 solitonic moments are released out of the dimerized pattern. These magnetic moments are pinned at the impurities due to interchain interactions . This picture is in agreement with susceptibility experiments , indicating the release of one spin-$`1/2`$ moment per Zn impurity at low doping. The interaction between two magnetic moments at a distance $`d`$ originates from virtual excitations of the gaped dimerized background, and decays exponentially with distance, with a characteristic length set by the correlation lengths $`\xi _x\simeq 9`$ c along the chain direction (c-axis) , and $`\xi _y\simeq \xi _x/10`$ in the b-axis direction. These exchange interactions as well as the relevance of disorder were identified in Ref. to play a crucial role in the establishment of antiferromagnetism. The low energy physics of a doped spin-Peierls system is represented by spin-$`1/2`$ solitonic magnetic moments distributed at random with a concentration $`x`$ on a square lattice, and interacting via a Heisenberg Hamiltonian
$$H=\sum _{i,j}J_{ij}𝐒_i.𝐒_j,$$
(1)
the exchange in Eq. 1 being staggered and decaying exponentially with distance:
$$J_{ij}=(-1)^{d_x+d_y+1}\mathrm{\Delta }\mathrm{exp}\left(-\sqrt{\left(\frac{d_x}{\xi _x}\right)^2+\left(\frac{d_y}{\xi _y}\right)^2}\right),$$
(2)
with $`\xi _x\simeq 9`$ c the correlation length along the c-axis and $`\xi _y\simeq 0.1\times \xi _x`$ the correlation length along the b-axis. Correlations along the a-axis are neglected.
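The random geometry and the exchanges of Eqs. 1 and 2 are straightforward to generate numerically. The sketch below (the naming and the dense-matrix layout are our choices) builds the coupling matrix for one disorder realization:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_couplings(L, x, xi_x=9.0, xi_y=0.9, delta=1.0):
    """Moments with concentration x on an L x L square lattice, coupled by
    the staggered, exponentially decaying exchange of Eq. 2.  The O(n^2)
    pair loop is adequate at the dilute concentrations considered here."""
    occupied = np.argwhere(rng.random((L, L)) < x)   # sites (row, col)
    n = len(occupied)
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = np.abs(occupied[j] - occupied[i])  # positive distances
            sign = (-1) ** (dx + dy + 1)                # staggered factor
            J[i, j] = J[j, i] = sign * delta * np.exp(
                -np.sqrt((dx / xi_x) ** 2 + (dy / xi_y) ** 2))
    return occupied, J

sites, J = random_couplings(L=40, x=0.05)
```

The default $`\xi _y=0.1\times \xi _x`$ matches the anisotropic values quoted above; the isotropic model of section 4 corresponds to setting both lengths equal.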
## 3 Classical Ising model
We consider a model with Ising degrees of freedom, distributed randomly and interacting via the exchange in Eq. 2. The classical antiferromagnet has the same transition temperature as the classical ferromagnet. We consider therefore the ferromagnetic model to calculate the ordering temperature.
### 3.1 One dimensional model
A high temperature expansion leads to the exact form of the correlations in terms of a product over the bonds between the spins at sites $`0`$ and $`L`$: $`\langle \sigma _0\sigma _L\rangle =\prod _i\mathrm{tanh}(\beta J_{i,i+1})`$. We calculated numerically the disorder average to obtain the correlation length at a finite temperature. As shown on Fig. 1, the average correlation length of the disordered model is larger than the typical correlation length $`\xi =-1/[x\mathrm{ln}(\mathrm{tanh}(\beta T^{}))]`$, with $`T^{}=\mathrm{\Delta }\mathrm{exp}(-1/(x\xi ))`$ the exchange of the particular disorder realization where the magnetic moments are equally spaced. We calculated in Ref. the correlation length of the quantum chain and found a similar result: the enhancement of the magnetic correlations above $`T^{}`$ due to disorder does not rely on the quantum nature of the coupling Hamiltonian in spite of a random singlet physics in the quantum chain, not present in the Ising chain.
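The disorder average just discussed can be reproduced in a few lines (a sketch; names and sample sizes are ours, with spacings drawn from the exponential distribution of density $`x`$):

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_correlation(n_bonds, T, x=0.01, xi=9.0, delta=1.0, n_samples=2000):
    """Disorder average of <sigma_0 sigma_L> = prod_i tanh(beta*J_i) for
    the random Ising chain: spacings d_i between moments are exponential
    with density x, and J_i = delta*exp(-d_i/xi)."""
    beta = 1.0 / T
    total = 0.0
    for _ in range(n_samples):
        d = rng.exponential(1.0 / x, size=n_bonds)   # random spacings
        total += np.prod(np.tanh(beta * delta * np.exp(-d / xi)))
    return total / n_samples
```

With $`x=0.1`$, $`\xi =9`$ and $`T=0.5\mathrm{\Delta }`$, for instance, the average over ten bonds comes out several times larger than the typical value $`\mathrm{tanh}(\beta T^{})^{10}`$: rare pairs of moments much closer than the mean spacing dominate the average, which is the enhancement seen in Fig. 1.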
### 3.2 Determination of the exchange distribution
We consider the exchanges to be drawn independently in a distribution $`P(J)`$ resulting from the combination of randomness in the spatial distribution of the magnetic moments and exponentially decaying interactions, Eq. 2. The relevant exchanges are set by the spins closest to each other. Therefore, given a spin at site $`(x_0,y_0)`$, we need to determine the probability that one spin is found on the periphery of the ellipse $`[(x-x_0)/\xi _x]^2+[(y-y_0)/\xi _y]^2=\gamma ^2`$, with no other spin inside the ellipse, and therefore an exchange $`\mathrm{\Delta }\mathrm{exp}(-\gamma )`$. We consider a system of total area $`A`$ containing $`n`$ spins. The probability to find no spin inside a subsystem of area $`\delta A`$ is $`P_0=\left(1-\frac{\delta A}{A}\right)^n\simeq \mathrm{exp}(-x\delta A)`$, with $`x=n/A`$ the doping concentration. Now the spacing distribution is
$$P(\gamma )=xL(\gamma )\mathrm{exp}(-x\delta A(\gamma )),$$
(3)
with $`L(\gamma )=d[A(\gamma )]/d\gamma `$. In the one dimensional model, we have $`\delta A(\gamma )=2\gamma \xi _x`$, and $`L(\gamma )=2\xi _x`$. In the two dimensional isotropic model with $`\xi _x=\xi _y=\xi `$, we have $`\delta A(\gamma )=\pi \gamma ^2\xi ^2`$, and $`L(\gamma )=2\pi \gamma \xi `$. In the quasi one dimensional model, $`\delta A(\gamma )=\pi \gamma ^2\xi _x\xi _y`$, and $`L(\gamma )=2\pi \gamma \xi _x\xi _y`$. The distribution $`P(\gamma )`$ of the isotropic and anisotropic two dimensional models is a Wigner distribution with a short scale “distance repulsion”. This repulsion will be shown not to affect the ordering properties.
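The void-probability argument behind Eq. 3 is easy to check by Monte Carlo in the one dimensional case, where it predicts $`P(\gamma )=2x\xi \mathrm{exp}(-2x\xi \gamma )`$ and hence a mean nearest-neighbour distance $`1/(2x\xi )`$ in units of $`\xi `$ (a sketch; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def nearest_gap_mean(x=0.02, xi=9.0, length=200000.0):
    """For Poisson-distributed moments of density x on a line, the distance
    to the nearest neighbour, measured in units of xi, averages 1/(2*x*xi)."""
    n = rng.poisson(x * length)
    pts = np.sort(rng.uniform(0.0, length, size=n))
    gaps = np.diff(pts)                        # gaps between consecutive points
    nearest = np.minimum(gaps[:-1], gaps[1:])  # nearest neighbour, interior points
    return nearest.mean() / xi

print(nearest_gap_mean())  # close to 1/(2*0.02*9.0), i.e. about 2.78
```

The minimum of the two one-sided exponential gaps is itself exponential with twice the rate, which is exactly the $`2x\xi `$ appearing in the one dimensional $`P(\gamma )`$.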
### 3.3 Bethe-Peierls transition in the infinite coordination limit
We now consider the Bethe-Peierls solution of the Ising model . The lattice has a tree topology, with a forward branching ratio $`z-1`$, and we calculate the magnetization of the site with the highest hierarchical level (top spin), in the presence of the other sites. We consider $`z-1`$ trees and connect them to obtain a tree with one more generation (see Fig. 2). The recursion of the average magnetization of the top spin reads
$$X=\frac{\prod _{i=1}^{z-1}(1+Y_i\mathrm{tanh}(\beta J_i))-\prod _{i=1}^{z-1}(1-Y_i\mathrm{tanh}(\beta J_i))}{\prod _{i=1}^{z-1}(1+Y_i\mathrm{tanh}(\beta J_i))+\prod _{i=1}^{z-1}(1-Y_i\mathrm{tanh}(\beta J_i))},$$
(4)
where $`Y_i`$, $`i=1,\dots ,z-1`$ are the magnetizations of the top spins with $`n`$ generations, and $`X`$ the magnetization of the top spin with $`n+1`$ generations. We first consider the artificial situation where the ordering temperature is large compared to the exchange: $`T_{\mathrm{bp}}\gg \mathrm{\Delta }`$, which turns out to be equivalent to assuming a large coordination. Eq. 4 is linearized into $`X=\sum _{i=1}^{z-1}Y_i\mathrm{tanh}(\beta J_i)`$, leading to the recursion of the magnetization $`X_{n+1}=(z-1)\langle \mathrm{tanh}(\beta J)\rangle X_n`$, with the subscript $`n`$ labeling the number of generations. This leads to the ordering temperature $`T_{\mathrm{bp}}=(z-1)\langle J\rangle `$, far above $`\mathrm{\Delta }`$ if $`z\gg 1`$, and consistent with the initial assumption. We can calculate the ordering temperature $`T_{\mathrm{bp}}`$ with the different distributions $`P(J)`$ derived in section 3.2. We find:
* with the one dimensional model distribution: $`T_{\mathrm{bp}}=2(z-1)x\xi \mathrm{\Delta }/(1+2x\xi )`$;
* with the isotropic two dimensional model distribution: $`T_{\mathrm{bp}}\simeq 2(z-1)\pi x\xi ^2\mathrm{\Delta }`$ in the dilute regime $`x\xi ^2<1`$.
* with the quasi one dimensional model distribution: $`T_{\mathrm{bp}}\simeq 2(z-1)\pi x\xi _x\xi _y\mathrm{\Delta }`$ in the dilute regime $`x\xi _x\xi _y<1`$.
The three limits therefore show a similar behavior $`T_{\mathrm{bp}}\propto (z-1)x\mathrm{\Delta }`$, showing that the short distance Wigner repulsion in the spacing distribution Eq. 3 does not affect the ordering properties. Comparing the Bethe-Peierls ordering temperature to the ordering temperature obtained from the Stoner criterion in Ref. , we see that $`z-1`$ should be identified with the interchain coupling. The small-$`z`$ regime, relevant to weak interchain correlations, is now discussed in sections 3.4 and 3.5.
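In the large-coordination limit the threshold reduces to $`T_{\mathrm{bp}}=(z-1)\langle J\rangle `$, so a Monte Carlo average of $`J=\mathrm{\Delta }e^{-\gamma }`$ over the one dimensional spacing distribution must reproduce the first of the three limits above (a sketch with our naming):

```python
import numpy as np

rng = np.random.default_rng(3)

def bethe_peierls_tc(z, x, xi, delta=1.0, n=200000):
    """Monte Carlo estimate of T_bp = (z-1)*<J>, with gamma drawn from the
    one dimensional spacing distribution 2*x*xi*exp(-2*x*xi*gamma) and
    J = delta*exp(-gamma)."""
    gamma = rng.exponential(1.0 / (2 * x * xi), size=n)
    return (z - 1) * np.mean(delta * np.exp(-gamma))

t_mc = bethe_peierls_tc(z=3, x=0.01, xi=9.0)
t_exact = 2 * (3 - 1) * 0.01 * 9.0 / (1 + 2 * 0.01 * 9.0)  # closed form above
```

The agreement follows from $`\langle e^{-\gamma }\rangle =2x\xi /(1+2x\xi )`$ for an exponential of rate $`2x\xi `$.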
### 3.4 Bethe-Peierls transition with a finite coordination: (i) percolation approximation
We now consider the physics at a finite $`z=3`$. In this regime, the Bethe-Peierls method takes into account the inhomogeneities of the magnetization, not included in the Stoner criterion mean field solution in Ref. . We first consider a “percolation approximation” in which we assume the bonds $`J\ll T`$ ($`J\gg T`$) to be set to zero (infinity) in the effective percolation problem. With $`z=3`$, the Bethe-Peierls iteration Eq. 4 reads
$$X=\frac{Y\mathrm{tanh}(\beta J_y)+Z\mathrm{tanh}(\beta J_z)}{1+YZ\mathrm{tanh}(\beta J_y)\mathrm{tanh}(\beta J_z)},$$
and is approximated into: (i) $`T\ll J_y`$, $`T\ll J_z`$: $`X\simeq Y+Z`$; (ii) $`T\gg J_y`$, $`T\ll J_z`$: $`X\simeq Z`$; (iii) $`T\ll J_y`$, $`T\gg J_z`$: $`X\simeq Y`$; (iv) $`T\gg J_y`$, $`T\gg J_z`$: $`X\simeq 0`$. The recursion of the average magnetization is therefore $`X_{n+1}\simeq 2\lambda X_n`$, with the percolation parameter
$$\lambda =\int _T^{+\infty }P(J)\,dJ.$$
(5)
With the one dimensional distribution, we have $`\lambda =1-(T/\mathrm{\Delta })^{2x\xi _x}`$, which yields a transition at the temperature
$$T^{}=\mathrm{\Delta }\mathrm{exp}\left(-\frac{2\mathrm{ln}2}{x\xi _x}\right),$$
(6)
exponentially small in $`1/(x\xi _x)`$. This behavior is compatible with Ref. where we have shown the absence of a true ordering transition in the percolation approximation of a two dimensional anisotropic model, while the model was shown to percolate in a finite size.
### 3.5 Bethe-Peierls transition with a finite coordination: (ii) beyond the percolation approximation
We now solve the Ising model beyond the percolation approximation. We take into account the iteration of small exchanges to lowest order, with the following approximate iteration: (i) $`T\ll J_y`$, $`T\ll J_z`$: $`X\simeq Y+Z`$; (ii) $`T\gg J_y`$, $`T\ll J_z`$: $`X\simeq \beta J_yY+Z`$; (iii) $`T\ll J_y`$, $`T\gg J_z`$: $`X\simeq Y+\beta J_zZ`$; (iv) $`T\gg J_y`$, $`T\gg J_z`$: $`X\simeq \beta J_yY+\beta J_zZ`$. The dominant contribution originates from the region (iv) of the couplings: $`X_{n+1}\simeq (2/T)\mu (1-\lambda )X_n`$, with $`\lambda `$ in Eq. 5, and $`\mu =\int _0^TJP(J)\,dJ`$. With the one dimensional distribution for $`P(J)`$, we have $`\mu \simeq 2x\xi \mathrm{\Delta }`$ and therefore the same critical temperature $`T_{\mathrm{bp}}=4x\xi \mathrm{\Delta }`$ as in the model with a large connectivity $`z`$ (with $`z=3`$ in this calculation). It is remarkable that the correct treatment of the small exchanges restores a transition temperature $`\sim x\xi \mathrm{\Delta }`$. This shows the relevant role played by energy scales smaller than the temperature.
The main unsolved question regarding the classical model behavior is to determine whether the finite dimensional model has a true thermodynamic transition at a temperature $`\sim x\xi \mathrm{\Delta }`$. The Bethe-Peierls solution orders at a temperature $`\sim x\xi \mathrm{\Delta }`$ because of the strong short range correlations. This does not necessarily mean that the finite dimensional model also has a true thermodynamic transition at this temperature. Instead, we believe it possible that the classical model has a cross-over to a Griffiths physics at a temperature $`\sim x\xi \mathrm{\Delta }`$ and a true thermodynamic transition with a diverging correlation length at a temperature $`T^{}`$, which would also be a behavior compatible with a low temperature antiferromagnetic susceptibility. At the present stage, we cannot make the distinction between these two behaviors.
## 4 Quantum isotropic model
We first consider the artificial situation where the correlation lengths are identical in the two directions: $`\xi _x=\xi _y=\xi =9`$. The tendency to ordering in this isotropic model is overestimated compared to the anisotropic model with $`\xi _x=9`$, $`\xi _y=0.1\times \xi _x`$. We are led to consider the class of interactions
$$J_{ij}=(-1)^{d_x+d_y+1}\mathrm{\Delta }\mathrm{exp}\left(-\left[\left(\frac{d_x}{\xi _x}\right)^2+\left(\frac{d_y}{\xi _y}\right)^2\right]^{\alpha /2}\right),$$
(7)
decaying faster than the interactions in Eq. 2 if $`\alpha >1`$. The cluster RG (see the Appendix) generates large energy scales in the parameter range $`\alpha <\alpha _0\simeq 1.2`$. It turns out that $`\alpha _0<1`$ in the model with anisotropic exchanges, and we therefore consider only the regime $`\alpha >\alpha _0\simeq 1.2`$ in the isotropic model.
The gap distribution is shown on Fig. 3 for decreasing temperatures. It is visible that the RG produces gaps of order of the temperature $`T`$, unlike the case of the infinite randomness fixed point, where the opposite occurs (see Ref. for the one dimensional Heisenberg chain with an infinite randomness, random singlet behavior; see Ref. for the infinite randomness behavior in the two dimensional Ising model in a transverse field). As in the Ising model analysis, we calculate the susceptibility in two ways: (i) we assume a paramagnetic behavior of the set of effective moments; (ii) we incorporate the correlations induced by exchanges $`\mathrm{\Delta }\sim T`$, in which case an antiferromagnetic behavior of the susceptibility is restored.
### 4.1 Infinite randomness calculation
We first consider all the exchanges $`J<T`$ to be set to zero: the set of effective spins is viewed as a paramagnet with a susceptibility
$$\chi =\frac{1}{TL^2}\sum _{i=1}^{N^{(\mathrm{eff})}}S_i^{(\mathrm{eff})}(S_i^{(\mathrm{eff})}+1),$$
(8)
where $`N^{(\mathrm{eff})}`$ is the number of effective spins. We have discarded a prefactor $`1/3`$ in Eq. 8, not relevant to the present calculation. The low temperature susceptibility is therefore controlled by two quantities: (i) the density of free spins $`n_{\mathrm{eff}}=N^{(\mathrm{eff})}/(xL^2)`$; and (ii) the magnitude of the effective spin.
The number of effective moments scales like $`N_{\mathrm{eff}}\sim (T/\mathrm{\Delta })xL^2`$, as is visible in Fig. 4. The squared effective moment shows two regimes:
* High temperature regime: The high temperature average squared effective moment scales like $`[S^{\mathrm{eff}}]^2\sim \mathrm{\Delta }/T`$ (see Fig. 5). The susceptibility per unit volume is $`\chi \sim x/T`$.
* Low temperature percolation regime: At low temperature, the squared effective moment scales like $`S^2\sim axL^2`$, with $`a`$ some constant (see the inset of Fig. 5). The susceptibility per unit volume is $`\chi \sim (ax^2L^2)/\mathrm{\Delta }`$. In this regime, a cluster has percolated through the finite size system. Its magnetization results from summing $`xL^2`$ variables $`S_i^z=\pm 1/2`$, corresponding to the two sublattice magnetizations. Therefore, $`S\sim \sqrt{xL^2}`$, and $`S^2\sim xL^2`$.
The cross-over between these regimes occurs at the temperature scale $`T_{\mathrm{co}}=(\mathrm{\Delta }/a)(xL^2)^{-1}`$, which decreases to zero when the system size is increased. Therefore, in the thermodynamic limit only the high temperature paramagnetic behavior survives, while in a finite size system a low temperature tail is present in the susceptibility (see Fig. 6). Now the situation changes when correlations between spins coupled by exchanges of order $`T`$ are included.
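The percolation-regime estimate $`S^2\sim xL^2`$ is just the central-limit scaling of a sum of random sublattice spins, which is easy to check directly (a sketch; the naming is ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def percolated_moment(x=0.05, L=100, n_samples=2000):
    """Total S_z of N = x*L^2 random sublattice spins S_i^z = +-1/2: the
    squared moment averages N/4, i.e. S^2 ~ x*L^2 as argued above."""
    n_spins = int(x * L * L)
    s = rng.choice([-0.5, 0.5], size=(n_samples, n_spins)).sum(axis=1)
    return np.mean(s ** 2), n_spins / 4.0   # measured E[S^2] vs N/4

mean_s2, expected = percolated_moment()
```

Each independent $`\pm 1/2`$ contributes variance $`1/4`$, so the squared total moment grows linearly with the number of spins in the percolated cluster.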
### 4.2 Finite randomness calculation
To schematically incorporate the correlations at energy scales of order of the temperature, we consider as frozen the spins connected by an exchange with a gap between $`T/2`$ and $`T`$. This freezing results in a staggered magnetization because the set of effective moments is unfrustrated (see the Appendix). The resulting susceptibility is shown in Fig. 7. It is visible that $`\chi T`$ is linear in $`T`$ at small $`T`$, with therefore a finite susceptibility at zero temperature. This shows qualitatively how an antiferromagnetic behavior can be restored because of the correlations at energy scales $`\mathrm{\Delta }\sim T`$.
## 5 Quantum anisotropic model
### 5.1 One dimensional model
In one dimension, the RG equations of a model in which only AF nearest neighbor exchanges are retained can be solved exactly (see Ref. ). We note $`x=J/[\text{Max}(J)]`$ the exchange normalized to the maximal exchange. The distribution of the variable $`x`$ is $`P(x)=(\overline{f}/\mathrm{\Gamma })x^{\overline{f}/\mathrm{\Gamma }-1}`$, with $`\mathrm{\Gamma }=\mathrm{ln}(\mathrm{\Delta }/\text{Max}(J))`$, and $`\overline{f}/\mathrm{\Gamma }=x\xi /(1+\mathrm{\Gamma }x\xi )`$. The weight on the strongest exchanges $`x\simeq 1`$ is $`\simeq x\xi `$ above the cross-over temperature $`T^{}=\mathrm{\Delta }\mathrm{exp}(-1/x\xi )`$, and decreases to zero at temperatures below $`T^{}`$, where the system has crossed over to the random singlet fixed point. As a test of our program, we considered the cluster RG of a one dimensional model, with the exchanges not restricted to nearest neighbors (see Eqs. 1 and 2). For any practical temperature above $`T^{}`$, the weight on energy scales of order $`T`$ is found to remain constant as the system is renormalized, with therefore the same behavior as in the one dimensional model with AF nearest neighbor exchanges only.
### 5.2 Anisotropic model
We show in Fig. 8 the evolution of the gap distribution as the system is scaled down, with the parameters $`\xi _x=9`$, $`\xi _y=0.1\times \xi _x`$, relevant to CuGeO<sub>3</sub>. As in the isotropic model, energy scales of order $`T`$ are generated upon renormalizing the system. To qualitatively include the effects of correlations at energy scales $`\mathrm{\Delta }\sim T`$, we consider the spins connected by an exchange with a gap between $`T/2`$ and $`T`$ to be frozen, and obtain a low temperature power-law Curie susceptibility, $`\chi \sim T^\alpha `$, with $`\alpha =-0.7`$ if $`x=0.01`$ (see Fig. 9). The low temperature susceptibility diverges slower than a Curie law, which is a behavior characteristic of an antiferromagnet. We did not succeed in obtaining $`\alpha >0`$, as is the case in doped CuGeO<sub>3</sub>. Therefore, we cannot rigorously conclude on whether antiferromagnetism is long ranged or associated to a zero temperature transition. A precise discussion of this point is an open question, and would require the correlations at energy $`\mathrm{\Delta }\sim T`$ to be incorporated beyond our present treatment. For instance, the cluster RG could be used to renormalize the high energy physics and the low energy effective Hamiltonian could be treated by exact diagonalizations.
## 6 Conclusions
We have shown that the physics of the quantum Hamiltonian Eqs. 1 and 2 was already present at the level of the classical Ising model. A Bethe-Peierls treatment of the classical model has been given, in which a transition at a temperature $`\sim x\xi \mathrm{\Delta }`$ was found. The quantum Hamiltonian has been treated in a cluster RG. The model was shown to have a finite randomness behavior. We have shown at a qualitative level how a low temperature antiferromagnetic susceptibility can be obtained.
Two questions are left open:
* The Bethe-Peierls solution orders at a temperature $`\sim x\xi \mathrm{\Delta }`$. We do not know whether the two dimensional model also has a thermodynamic transition at a temperature $`\sim x\xi \mathrm{\Delta }`$, or whether this temperature scale corresponds to a cross-over to a Griffiths physics. Both behaviors would a priori be compatible with the existence of a maximum in the susceptibility of the antiferromagnet at a temperature $`\sim x\xi \mathrm{\Delta }`$.
* The quantum model susceptibility shows an antiferromagnetic behavior at low temperature due to correlations at energies $`\mathrm{\Delta }\sim T`$. The isotropic model shows a finite susceptibility at low temperature, while the quasi one dimensional model has a susceptibility diverging slower than a Curie law. A precise investigation of the low temperature susceptibility would require a treatment going beyond our present analysis, for instance by treating the low energy effective Hamiltonian by exact diagonalizations.
Two other proposals to explain antiferromagnetism in doped CuGeO<sub>3</sub> have been made: by Fukuyama, Tanimoto and Saito , and by Mostovoy, Khomskii, and J. Knoester . These proposals are quite different from ours, and we have explained previously why we think our model is more relevant . The inclusion of interchain interactions in our model, in Ref. and in the present work, points strongly towards compatibility with experiments.
## Acknowledgements
The author thanks M. Fabrizio and J. Souletie for numerous fruitful discussions, for their encouragement, and for useful comments on the manuscript. M. Fabrizio pointed out to me the possibility of cluster RG calculations, as well as the proof that the effective problem remains unfrustrated as the system is scaled down. J. Souletie suggested the existence of a similar physics in the classical and quantum models. The cluster RG calculations were performed on the CRAY T3E supercomputer of the Centre Grenoblois de Calcul Vectoriel of the Commissariat à l’Energie Atomique.
## Appendix A Renormalization equations
We use a cluster RG to renormalize the quantum Hamiltonian Eqs. 1 and 2. The method relies on a perturbative expansion in the inverse of the largest exchange, and was originally proposed by Dasgupta and Ma in the context of disordered Heisenberg chains. The cluster RG was applied by Bhatt and Lee to a model of phosphorus doped silicon. Fisher used the method to solve exactly the random singlet fixed point . The cluster RG was also used to investigate the low energy physics of disordered spin chains: the dimerized Heisenberg chain with random exchanges ; the spin-one chain with random exchanges ; Heisenberg chains with random ferromagnetic and antiferromagnetic couplings . Recently, Motrunich et al. have shown the existence of an infinite randomness fixed point in two dimensions in the Ising model in a transverse field. At such a fixed point, inhomogeneities in the disorder grow indefinitely as the system is scaled down, as in the random singlet fixed point in one spatial dimension. We now give a short derivation of the RG equations.
We isolate two spins $`𝐒_1`$ and $`𝐒_2`$ coupled by an exchange $`J_{12}`$. This sets an energy scale given by the gap between the ground state and the first excited multiplet: if $`J_{12}>0`$ is antiferromagnetic, the ground state has a spin $`S=|S_1-S_2|`$ and the first excited multiplet has $`S=|S_1-S_2|+1`$, with a gap $`\mathrm{\Delta }_{12}=|J_{12}|(|S_1-S_2|+1)`$. If $`J_{12}<0`$ is ferromagnetic, the ground state has $`S=S_1+S_2`$ and the first excited multiplet has $`S=S_1+S_2-1`$, with a gap $`\mathrm{\Delta }_{12}=|J_{12}|(S_1+S_2)`$. Among all possible pairs of spins, we consider the one with the strongest gap $`\mathrm{\Delta }_{12}`$. This energy scale is identified with the system temperature. If $`𝐒_1`$ and $`𝐒_2`$ are coupled ferro ($`J_{12}<0`$), the two spins $`𝐒_1`$ and $`𝐒_2`$ are replaced by an effective spin $`S=S_1+S_2`$. If they are coupled antiferro, they are replaced by an effective spin $`S=|S_1-S_2|`$. $`S_1=S_2`$ with an AF coupling $`J_{12}`$ leads to singlet formation, while a residual moment is formed otherwise.
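In code, the decimation rule just described reads (a sketch; function and variable names are ours):

```python
def pair_gap(s1, s2, j12):
    """Gap to the first excited multiplet of two spins coupled by j12
    (antiferromagnetic if j12 > 0), and the ground-state effective spin,
    as used to select the pair decimated at each RG step."""
    if j12 > 0:                                   # antiferromagnetic pair
        return abs(s1 - s2), abs(j12) * (abs(s1 - s2) + 1)
    return s1 + s2, abs(j12) * (s1 + s2)          # ferromagnetic pair

# Two spin-1/2 coupled antiferromagnetically: singlet ground state, gap J.
print(pair_gap(0.5, 0.5, 1.0))  # -> (0.0, 1.0)
```

At each step one scans all pairs, decimates the pair with the largest gap, and identifies that gap with the running temperature.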
### A.1 Residual moment formation
Let us first consider the case where a residual moment is formed, corresponding to (a) and (b) in Fig. 10. We single out a spin $`𝐒_3`$ among the other spins and denote by $`J_{ij}`$ the exchange between spins $`i`$ and $`j`$, with $`i,j=1,\dots ,3`$. The coupling Hamiltonian between the spins $`𝐒_1`$ and $`𝐒_2`$ is $`H_{12}=J_{12}𝐒_1.𝐒_2`$ while the remaining couplings
$$H_I=J_{13}𝐒_1.𝐒_3+J_{23}𝐒_2.𝐒_3$$
(9)
are treated in a first order perturbation. This leads to the renormalized coupling Hamiltonian $`H_I=\stackrel{~}{J}_3𝐒_3.𝐒`$, with the renormalized exchange
$$\stackrel{~}{J}_3=J_{13}c(S_1,S_2,S)+J_{23}c(S_2,S_1,S),$$
(10)
with
$$c(S_1,S_2,S)=\frac{S(S+1)+S_1(S_1+1)-S_2(S_2+1)}{2S(S+1)}$$
derived in Ref. . The sublattice on which the residual spin is placed is determined as follows: if $`S_1>S_2`$, the residual spin $`S`$ is placed on the same sublattice as $`S_1`$ while it is placed on the sublattice of $`S_2`$ if $`S_1<S_2`$.
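A direct transcription of Eq. 10 and of the coefficient $`c(S_1,S_2,S)`$ (a sketch; the naming is ours):

```python
def c_coeff(sa, sb, s):
    """Projection coefficient c(S_a, S_b, S): weight of spin S_a inside
    the composite spin S (the expression above)."""
    return (s * (s + 1) + sa * (sa + 1) - sb * (sb + 1)) / (2 * s * (s + 1))

def renormalized_j3(j13, j23, s1, s2, s):
    """Effective coupling of a third spin to the composite spin S (Eq. 10)."""
    return j13 * c_coeff(s1, s2, s) + j23 * c_coeff(s2, s1, s)

# Two ferromagnetically locked spin-1/2 (S = 1): each contributes weight 1/2.
print(c_coeff(0.5, 0.5, 1.0))  # -> 0.5
# Residual moment S = S1 - S2 = 1/2 from an AF pair (S1 = 1, S2 = 1/2):
# Eq. 10 reduces to J13 + (J13 - J23)*S2/(S + 1), the form used in A.3.
print(renormalized_j3(1.0, 0.2, 1.0, 0.5, 0.5))  # -> 1.2666...
```

The second example is the algebraic simplification exploited below when checking that no frustration is generated.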
### A.2 Singlet formation
We now consider singlet formation, with $`S_1=S_2`$ coupled AF. The renormalized couplings are obtained in a second order perturbation theory. Generalizing the calculation in Ref. to the coupling Hamiltonian (9), we find the renormalized exchange
$$\stackrel{~}{J}_{34}=J_{34}+\frac{2S_1(S_1+1)}{3J_{12}}(J_{13}-J_{23})(J_{24}-J_{14}),$$
(11)
where $`S_1=S_2`$ denote the spins at site $`1`$ and $`2`$. In the 1D limit $`J_{23}=J_{14}=0`$ Eq. 11 reproduces the result in Ref. , and the spin-1/2 limit $`S_1=1/2`$ reproduces the result in Ref. .
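The second-order rule of Eq. 11 in code (a sketch with our naming); in the 1D spin-1/2 limit $`J_{23}=J_{14}=0`$, $`S_1=1/2`$ it reduces to the familiar Ma-Dasgupta rule $`\stackrel{~}{J}_{34}=J_{13}J_{24}/(2J_{12})`$:

```python
def singlet_renormalized_j34(j34, j12, j13, j23, j24, j14, s1):
    """Coupling left between spins 3 and 4 after the pair (1, 2), with
    S_1 = S_2 = s1, has been frozen into a singlet (Eq. 11)."""
    return j34 + (2 * s1 * (s1 + 1) / (3 * j12)) * (j13 - j23) * (j24 - j14)

# 1D spin-1/2 chain limit (j23 = j14 = j34 = 0): J13 * J24 / (2 * J12).
print(singlet_renormalized_j34(0.0, 1.0, 0.4, 0.0, 0.3, 0.0, 0.5))  # -> 0.06 up to rounding
```

For $`s_1=1/2`$ the prefactor $`2S_1(S_1+1)/3`$ equals $`1/2`$, which is how the standard Ma-Dasgupta coefficient emerges.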
### A.3 Absence of frustration
We show that frustration is not generated by the RG procedure. We assume an unfrustrated starting Hamiltonian, and show that the different RG operations are compatible with the sublattice structure. We distinguish three cases:
* $`S_1`$ and $`S_2`$ belong to different sublattices and are coupled antiferro. We assume $`S_1>S_2`$, so that the effective spin $`S=S_1-S_2`$ replaces the spin $`S_1`$. The renormalized coupling to another spin $`S_3`$ in Eq. 10 is
$$\stackrel{~}{J}_3=J_{13}+(J_{13}-J_{23})\frac{S_2}{S+1}.$$
$`J_{13}>0`$ and $`J_{23}<0`$ lead to $`\stackrel{~}{J}_3>0`$; $`J_{13}<0`$ and $`J_{23}>0`$ lead to $`\stackrel{~}{J}_3<0`$. The renormalized coupling $`\stackrel{~}{J}_3`$ thus has a sign compatible with the sublattice structure.
* $`S_1`$ and $`S_2`$ belong to the same sublattice and are coupled ferro, so $`J_{13}`$ and $`J_{23}`$ have the same sign. We have $`c(S_1,S_2,S)>0`$ and $`c(S_2,S_1,S)>0`$, so the renormalized coupling $`\stackrel{~}{J}_3`$ has the same sign as $`J_{13}`$ and $`J_{23}`$, compatible with the sublattice structure.
* $`S_1=S_2`$ are coupled antiferro and a singlet is formed. If $`S_3`$ and $`S_4`$ are coupled ferro and lie on the same sublattice, $`\stackrel{~}{J}_{34}<0`$ in Eq. 11. If $`S_3`$ and $`S_4`$ are coupled antiferro and lie on opposite sublattices, $`\stackrel{~}{J}_{34}>0`$ in Eq. 11. The singlet formation is thus compatible with the sublattice structure.
no-problem/9912/hep-ex9912063.html | ar5iv | text | # References
ELECTRON BEAM MØLLER POLARIMETER AT JLAB HALL A
A.V. Glamazdin, V.G. Gorbenko, L.G. Levchuk, R.I. Pomatsalyuk, A.L. Rubashkin, P.V. Sorokin
Kharkov Institute of Physics and Technology, Kharkov, 310108, Ukraine
D.S. Dale, B.Doyle, T. Gorringe, W. Korsch, V. Zeps
University of Kentucky, Department of Physics and Astronomy, Lexington, KY 40506-0055, USA
J.P.Chen, E. Chudakov, S. Nanda, A. Saha
Thomas Jefferson National Accelerator Facility, Newport News, 23606-4350, VA, USA
A. Gasparian
Hampton University, Hampton, Department of Physics, VA, 23668, USA
ABSTRACT
As part of the spin physics program at Jefferson Laboratory (JLab), a Møller polarimeter was developed to measure the polarization of the electron beam at energies of $`0.8`$ to $`5.0`$ $`GeV`$. A unique signature for Møller scattering is obtained using a series of three quadrupole magnets, which provide an angular selection, and a dipole magnet for energy analysis. The design, commissioning, and first results of polarization measurements with this polarimeter are presented, as well as future plans to use its small-scattering-angle capabilities to investigate physics in the very low $`Q^2`$ regime.
Møller polarimeters are widely used for electron beam polarization measurements in the $`GeV`$ energy range. The high quality of the polarization experiments anticipated at new-generation CW multi-GeV electron accelerators, such as Jefferson Laboratory (JLab), requires precise measurements of electron beam parameters. One of these parameters is the electron beam polarization. The Hall A beam line at JLab is equipped with a Møller polarimeter. It was designed and constructed in collaboration between Jefferson Laboratory, the Kharkov Institute of Physics and Technology and the University of Kentucky.
The polarimeter is shown schematically in Fig. 1. The polarimeter reaction plane coincides with the horizontal plane in Hall A. The polarimeter consists of a polarized electron target, three quadrupole magnets, a dipole magnet and a detector. The quadrupole magnets make it possible to keep the positions of all polarimeter elements unchanged over the whole range of JLab energies. Their primary purpose is to focus the divergent trajectories of Møller electrons in the scattering plane into paired trajectories aligned with the beam axis at the exit of the last quadrupole. The dipole is the main element of the polarimeter magnetic system. It provides the energy analysis, separating the Møller-scattered electrons ($`E_o/2`$, $`\mathrm{\Theta }_{Moll}`$) from electrons coming from the Mott scattering peak ($`EE_o`$, $`\mathrm{\Theta }\mathrm{\Theta }_{Moll}`$) and thereby suppressing the background. It also bends the Møller electrons out of the reaction plane, allowing their detection away from the electron beam. The dipole has a magnetic shielding insertion in the center of the magnetic gap. The Møller electrons pass through the dipole on the left and right sides of this insertion. The primary electron beam passes through a $`4`$ $`cm`$ diameter hole bored in the shielding insertion, allowing its passage to the Hall A beam dump with little influence from the dipole magnetic field.
The Møller polarimeter detector is located in the shielding box downstream of the dipole and consists of two modules (left and right) for coincidence measurements. Each part of the detector includes an aperture detector made of plastic scintillator and four blocks of lead glass.
The polarized electron target chamber contains two target frames with different ferromagnetic foils:
1. $`99.95\%`$ pure iron foil $`10.9`$ $`\mu m`$ thick with effective polarization $`7.12\%`$ in an applied magnetic field of about $`28`$ $`mT`$;
2. foil of Supermendur $`13.9`$ $`\mu m`$ thick with an effective polarization $`7.6\%`$ in an applied magnetic field of about $`28`$ $`mT`$.
The targets are mounted on a vertical ladder and are cooled with liquid nitrogen down to $`115`$ $`K`$. They can be rotated in an angular range $`\pm (20^o160^o)`$. The beam-polarization measurements are made with one or the other foil in the beam (TOP or BOTTOM target position). The third position (HOME) is used when no beam polarization measurement is in progress. The target foil magnetization is measured by a series of pickup coils. The first polarization measurement with the Hall A Møller polarimeter was done in June 1997; regular measurements with the polarimeter have been made since. Results of a measurement are shown in Fig. 2.
The Hall A Møller polarimeter covers the energy range $`0.8`$ to $`5.0`$ $`GeV`$ and can be used for measurements with beam currents from $`0.55.0`$ $`\mu A`$. About twenty minutes of measurement time is needed to take data with a statistical error of less than $`1\%`$. Although the polarimeter quadrupole magnets are part of the regular Hall A beam transport, it is not necessary to change the quadrupole magnet settings or the primary beam trajectory in switching from data taking with the Hall A physics target to a polarization measurement. Also, the polarization measurements can be done with the Hall A fast raster on.
A typical beam current for the polarization measurement is $`1.5`$ $`\mu A`$, at which the iron target heats up to $`150`$ $`K`$. This target heating causes a relative target depolarization of $`0.3\%`$. The Helmholtz coils provide a $`28`$ $`mT`$ magnetic field in the area in which the electron beam passes through the target. The experimentally measured target polarization is $`7.6\%`$ for the Supermendur foil and $`7.12\%`$ for the iron foil. The background in coincidence measurements is negligible. The typical detector acceptance in the reaction plane for the energy range $`25`$ $`GeV`$ is $`\mathrm{\Delta }\mathrm{\Theta }_{Moll}\pm 14^o`$ in c.m., with an analyzing power of about $`76\%`$. The Levchuk effect is estimated to be about $`2`$ $`\%`$ and was not observed at the $`3`$ $`\%`$ level of the systematic error of our measurements. Other sources of systematic errors were considered and are summarized in Table 1.
| Parameter | $`<A_{zz}>`$ | $`\mathrm{cos}\mathrm{\Theta }_{targ}`$ | $`P_{targ}`$ | $`\mathrm{𝐓𝐨𝐭𝐚𝐥}:`$ |
| --- | --- | --- | --- | --- |
| Error | $`0.25`$ $`\%`$ | $`1`$ $`\%`$ | $`3`$ $`\%`$ | $`3\%`$ |
In addition to being part of the standard beam line instrumentation in Hall A, the small-scattering-angle capabilities of the Møller polarimeter, coupled with the momentum-analyzing capabilities of its dipole, present unique opportunities for physics in the very low $`Q^2`$ regime. The $`QQQD`$ design of the Møller spectrometer will make electron scattering experiments possible at scattering angles ranging from about three degrees down to less than one degree with $`\mathrm{\Delta }p^{}/p^{}`$ of about $`10^3`$. As an initial area of investigation, we intend to measure the neutral pion form factor, $`F_{\gamma ^{}\gamma \pi ^o}`$, at low $`Q^2`$ via the virtual Primakoff effect, i.e. $`\pi ^o`$ electroproduction in the Coulomb field of a heavy nucleus. The slope of this form factor over the low-$`Q^2`$ range to be measured, $`0.005`$ $`(GeV/c)^2`$ to $`0.04`$ $`(GeV/c)^2`$, gives a measure of the mean square $`\gamma ^{}\gamma \pi ^o`$ interaction radius and is sensitive to the constituent quark mass. Such an experiment can be performed by removing the third quadrupole magnet, installing position-sensitive detectors in the focal plane, and placing a series of lead-glass photon detectors upstream of the dipole to measure the $`\pi ^o`$ decay photons from the $`Pb(e,e^{}\pi ^o)Pb`$ reaction.
A double arm Møller polarimeter, used to measure the polarization of a $`0.8`$–$`5.0`$ $`GeV`$ primary electron beam in Hall A of JLab, has been described. It is used for the extensive planned spin physics program in Hall A. The polarimeter has been found to be robust and stable. Statistical errors of between $`0.2`$ and $`0.8\%`$ per measurement have allowed precision tests of possible systematic shifts in the data. Combining the systematic uncertainties leads to a final determination of the beam polarization with a relative uncertainty of $`3\%`$. More detailed information about the polarimeter design, status and current measurements is available at http://www.jlab.org/~moller.
Acknowledgments
This research was supported in part by Ukrainian Ministry of Science and Technology under Contract No. F5/1758-97 project No. 2.5.1/27. |
no-problem/9912/hep-ph9912491.html | ar5iv | text | # REFERENCES
DSF-T-99/46
Heavy neutrino mass scale
and quark-lepton symmetry
D. Falcone
Dipartimento di Scienze Fisiche, Università di Napoli,
Mostra d’Oltremare, Pad. 19, I-80125, Napoli, Italy
e-mail: falcone@na.infn.it
## Abstract
Assuming hierarchical neutrino masses we calculate the heavy neutrino mass scale in the seesaw mechanism from experimental data on oscillations of solar and atmospheric neutrinos and quark-lepton symmetry. The resulting scale is around or above the unification scale, unless the two lightest neutrinos have masses of opposite sign, in which case the resulting scale can be intermediate.
PACS: 12.15.Ff, 14.60.Pq
Keywords: Neutrino Physics; Grand Unified Theories
Recent results on atmospheric and solar neutrinos support the idea that neutrinos have tiny masses. A popular mechanism for achieving small Majorana masses for left-handed neutrinos is the seesaw mechanism , where the Majorana mass matrix of the left-handed neutrino is given by
$$M_L=M_DM_R^{-1}M_D^T,$$
(1)
with $`M_D`$ the Dirac mass matrix and $`M_R`$ the Majorana mass matrix of a heavy right-handed neutrino, related to lepton number violation at high energy . If the Dirac masses are supposed to be of the same order of magnitude of the up-quark masses (quark-lepton symmetry), as suggested by GUTs , then light left-handed neutrinos appear. In such a case the heavy neutrino mass scale should be at the unification or intermediate scale , because in most GUTs the Higgs field that gives mass to neutrinos is the same that breaks the GUT group or the intermediate group to the standard model. Also, at that scale the relation $`m_b=m_\tau `$, resulting from quark-lepton symmetry, should hold.
In this short paper the question of calculating the heavy neutrino mass scale from experimental data and quark-lepton symmetry is addressed, assuming $`|m_3|\gg |m_{1,2}|`$, where $`m_i`$ are the light neutrino masses, and it is shown that this scale is at or above the unification scale, unless $`m_1\simeq -m_2`$. For nearly opposite $`m_1`$, $`m_2`$, the heavy neutrino mass scale is around the intermediate scale of a GUT such as $`SO(10)`$ with $`SU(4)\times SU(2)\times SU(2)`$ as intermediate symmetry.
Assuming maximal mixing for atmospheric neutrinos and that the electron neutrino does not contribute to the atmospheric oscillations , the matrix $`M_L`$ can be written as
$$M_L=\left(\begin{array}{ccc}2ϵ& \delta & \delta \\ \delta & \sigma & \rho \\ \delta & \rho & \sigma \end{array}\right)$$
(2)
with
$`ϵ`$ $`=`$ $`{\displaystyle \frac{1}{2}}(m_1c^2+m_2s^2)`$ (3)
$`\delta `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(m_1-m_2)cs`$ (4)
$`\sigma `$ $`=`$ $`ϵ_2+{\displaystyle \frac{m_3}{2}}`$ (5)
$`\rho `$ $`=`$ $`ϵ_2-{\displaystyle \frac{m_3}{2}}`$ (6)
$`ϵ_2`$ $`=`$ $`{\displaystyle \frac{1}{2}}(m_1s^2+m_2c^2),`$ (7)
and $`c=\mathrm{cos}\theta `$, $`s=\mathrm{sin}\theta `$, where $`\theta `$ is the mixing angle of solar neutrinos. The inverse of $`M_L`$ is given by
$$M_L^{-1}=\left(\begin{array}{ccc}\sigma ^2-\rho ^2& \delta (\rho -\sigma )& \delta (\rho -\sigma )\\ \delta (\rho -\sigma )& 2ϵ\sigma -\delta ^2& \delta ^2-2ϵ\rho \\ \delta (\rho -\sigma )& \delta ^2-2ϵ\rho & 2ϵ\sigma -\delta ^2\end{array}\right)\frac{1}{m_1m_2m_3}.$$
(8)
Without loss of generality we can assume $`m_3>0`$. Moreover, if $`m_3\gg |m_2|,|m_1|`$, with $`\mathrm{\Delta }m_{sun}^2=m_2^2-m_1^2`$ and $`\mathrm{\Delta }m_{atm}^2=m_3^2-m_{1,2}^2`$, then
$$M_L^{-1}=\left(\begin{array}{ccc}0& \delta & \delta \\ \delta & ϵ& ϵ\\ \delta & ϵ& ϵ\end{array}\right)\frac{1}{m_1m_2}.$$
(9)
In the basis where the charged lepton mass matrix $`M_e`$ is diagonal, for $`M_D`$ we take
$$M_D=\frac{m_\tau }{m_b}\text{diag}(m_u,m_c,m_t).$$
(10)
An almost diagonal form of this kind for $`M_D`$ is again suggested by quark-lepton symmetry and GUTs , and the factor is due to running from unification or intermediate scale , corresponding to supersymmetric and nonsupersymmetric cases, respectively. From the seesaw formula (1) and eqn.(10) we obtain
$$M_R=M_DM_L^{-1}M_D$$
(11)
and, due to the form of $`M_L^1`$ and $`M_D`$ in eqns.(9),(10), two elements of $`M_R`$ have to be considered to discover the scale of heavy neutrino mass:
$`M_{R33}`$ $`=`$ $`k{\displaystyle \frac{m_1c^2+m_2s^2}{m_1m_2}}m_t^2,`$ (12)
$`M_{R13}`$ $`=`$ $`k{\displaystyle \frac{1}{\sqrt{2}}}{\displaystyle \frac{(m_1-m_2)cs}{m_1m_2}}m_um_t,`$ (13)
with $`k=(m_\tau /m_b)^2`$. We will consider two cases for $`s`$, namely $`s=0`$ (single maximal mixing) and $`s=1/\sqrt{2}`$ (bimaximal mixing ). For $`s=0`$ we get the scale
$$M_{R33}=k\frac{m_t^2}{m_2}.$$
(14)
For $`s=1/\sqrt{2}`$ we have three subcases: $`|m_2|\gg |m_1|`$, which gives
$$M_{R33}=k\frac{1}{2}\frac{m_t^2}{m_1};$$
(15)
$`m_2\simeq m_1`$, yielding
$$M_{R33}=k\frac{m_t^2}{m_{1,2}};$$
(16)
and $`m_2\simeq -m_1`$, for which the scale is given by
$$M_{R13}=k\frac{1}{\sqrt{2}}\frac{m_um_t}{m_{1,2}},$$
(17)
while $`M_{R33}`$ is much smaller. The case $`s=0`$ is near the small mixing MSW solution of the solar neutrino problem , while the case $`s=1/\sqrt{2}`$ is near both the large mixing MSW and especially the vacuum oscillations . Using the same numerical values of ref. we find $`M_{R33}10^{15}`$ GeV for $`s=0`$. For $`s=1/\sqrt{2}`$, the three eqns.(15),(16) and (17) lead respectively to: $`M_{R33}10^{16}`$ GeV (large mixing MSW), $`M_{R33}10^{18}`$ GeV (vacuum oscillations); $`M_{R33}10^{15}`$ GeV (LMSW and VO); $`M_{R13}10^{10}`$ GeV (LMSW and VO). Therefore we find that the heavy neutrino mass scale can be at the intermediate scale only if $`m_2m_1`$ (see ref.), and $`M_R`$ has a roughly off-diagonal form. As the intermediate scale is related to the nonsupersymmetric case , we get that nearly opposite masses are also related to that case (we are in a CP conserving framework, the mass sign corresponds to CP parity ). For positive masses the two MSW solutions are in agreement with the unification scale while the vacuum oscillation solution gives a scale well above the unification scale, unless $`m_2m_1`$. It is worth noting that a smaller mass is obtained for opposite $`m_1`$, $`m_2`$, due to the appearance of $`m_um_t`$, rather than $`m_t^2`$, in eqn.(17). Moreover, from eqns.(14-17) one can read the dependence of the heavy neutrino mass scale on light neutrino masses. This dependence is in agreement with the analyses given in refs., where only positive values of the light neutrino masses are considered. The three eigenvalues of $`M_R`$ have a strong hierarchy, unless there is one negative mass, in which case the structure of $`M_R`$ is very different from that of $`M_D`$. Finally, we notice that the bimaximal case with opposite masses has a different impact on the neutrinoless double-beta decay parameter $`_{ee}=2ϵ`$, with respect to the bimaximal case with positive masses and that, for a hierarchical spectrum, neutrinos are not relevant for hot dark matter.
In conclusion, we have calculated the scale of heavy neutrino mass in the seesaw mechanism assuming hierarchical light neutrino masses, maximal or bimaximal mixing (as suggested by experimental data) and quark-lepton symmetry. The main results are: intermediate scale is obtained only if the two lightest left-handed neutrinos have nearly opposite masses; the vacuum oscillation case with positive and full hierarchical masses leads to a scale near the Planck mass. |
no-problem/9912/astro-ph9912473.html | ar5iv | text | # Constraining Antimatter Domains in the Early Universe with Big Bang Nucleosynthesis
## Abstract
We consider the effect of a small-scale matter-antimatter domain structure on big bang nucleosynthesis and place upper limits on the amount of antimatter in the early universe. For small domains, which annihilate before nucleosynthesis, this limit comes from underproduction of $`{}_{}{}^{4}\mathrm{He}`$. For larger domains, the limit comes from $`{}_{}{}^{3}\mathrm{He}`$ overproduction. Most of the $`{}_{}{}^{3}\mathrm{He}`$ from $`\overline{\mathrm{p}}{}_{}{}^{4}\mathrm{He}`$ annihilation is annihilated also. The main source of $`{}_{}{}^{3}\mathrm{He}`$ is photodisintegration of $`{}_{}{}^{4}\mathrm{He}`$ by the electromagnetic cascades initiated by the annihilation.
If the early universe was homogeneous, antimatter annihilated during the first millisecond. However, baryogenesis could have been inhomogeneous, possibly resulting in a negative net baryon number density in some regions. After local annihilation these regions would have only antimatter left, resulting in a matter-antimatter domain structure.
There are many proposed mechanisms for baryogenesis. In models connected with inflation, there is no a priori constraint on the distance scale of the matter-antimatter domain structure that may be generated. If the distance scale is small, the antimatter domains would have annihilated in the early universe, and the presence of matter today indicates that originally there was less antimatter than matter.
We consider here such a scenario: a baryoasymmetric universe where the early universe contains a small amount of antimatter in the form of antimatter domains surrounded by matter. We are interested in the effect of antimatter on big bang nucleosynthesis (BBN). Much of the earlier work on antimatter and BBN has focused on a baryon-antibaryon symmetric cosmology or on homogeneous injection of antimatter through some decay process.
The smaller the size of the antimatter domains, the earlier they annihilate. Domains smaller than 100 m at 1 MeV, corresponding to $`2\times 10^{-5}`$ pc today, would annihilate well before nucleosynthesis and would leave no observable remnant.
The energy released in annihilation thermalizes with the ambient plasma and the background radiation if the energy release occurs at $`T>1`$ keV. If the annihilation occurs later, Compton scattering between heated electrons and the background photons transfers energy to the microwave background, but is not able to fully thermalize this energy. The lack of observed distortion in the cosmic microwave background (CMB) spectrum constrains the energy release occurring after $`T=1`$ keV to below $`6\times 10^{-5}`$ of the CMB energy. This leads to progressively stronger constraints on the amount of antimatter annihilating at later times, as the ratio of matter to CMB energy density grows. Above $`T0.1`$ eV the baryonic matter energy density is smaller than the CMB energy density, so the limits on the antimatter fraction annihilating then are weaker than $`6\times 10^{-5}`$.
For scales larger than 10 pc (or $`10^{11}`$ m at $`T=1`$ keV) the tightest constraints on the amount of antimatter come from the CMB spectral distortion, and for even larger scales from the cosmic diffuse gamma spectrum.
We consider here intermediate domain sizes, where most of the annihilation occurs shortly before, during, or shortly after nucleosynthesis, at temperatures between 1 MeV and 10 eV. The strongest constraints on the amount of antimatter at these distance scales will come from BBN affected by the annihilation process.
Rehm and Jedamzik considered annihilation immediately before nucleosynthesis, at temperatures T = 80 keV – 1 MeV. Because of the much faster diffusion of neutrons and antineutrons (as compared to protons and antiprotons) the annihilation reduces the net neutron number, leading to underproduction of $`{}_{}{}^{4}\mathrm{He}`$. This sets a limit $`R<`$ few % to the amount of antimatter relative to matter in domains of size $`r_A1`$ cm at $`T`$ = 100 GeV ($`4\times 10^6`$ m at $`T`$ = 1 keV).
We extend these results to larger domain sizes, for which annihilation occurs during or after nucleosynthesis. Since our results for the small domains and early annihilation agree with Rehm and Jedamzik, we concentrate on the larger domains and later annihilation in the following discussion. Below, all distance scales given in meters will refer to comoving distance at $`T=1`$ keV.
The case where annihilation occurs after nucleosynthesis was considered in Refs. . Because annihilation of antiprotons on helium would produce D and $`{}_{}{}^{3}\mathrm{He}`$ they estimated that the observed abundances of these isotopes place an upper limit $`R10^3`$ to the amount of antimatter annihilated after nucleosynthesis. As we explain below, the situation is actually more complicated.
Consider the evolution of an antimatter domain (of diameter $`2r`$) surrounded by a larger region of matter. At first matter and antimatter are in the form of nucleons and antinucleons, after nucleosynthesis in the form of ions and anti-ions. Matter and antimatter will get mixed by diffusion and annihilated at the domain boundary. Thus there will be a narrow annihilation zone, with lower density, separating the matter and antimatter domains. At lower temperatures ($`T<`$ 30 keV) the pressure gradient drives matter and antimatter towards the annihilation zone. This flow is resisted by Thomson drag, which leads to diffusive flow.
Before nucleosynthesis, the mixing of matter and antimatter is due to (anti)neutron diffusion. When $`{}_{}{}^{4}\mathrm{He}`$ is formed, free neutrons disappear, and the annihilation practically ceases. If annihilation is not complete by then, it is delayed significantly because ion diffusion is much slower than neutron diffusion. There will then be a second burst of annihilation well after nucleosynthesis, at $`T1`$ keV or below. Indeed, depending on the size of the antimatter domains, most of the annihilation occurs either at $`T>80`$ keV (for $`r<2\times 10^7`$ m) or at $`T<3`$ keV (for $`r>2\times 10^7`$ m).
The annihilation is so rapid that the outcome is not sensitive to the annihilation cross sections. The exact yields of the annihilation reactions are more important. From the Low-Energy Antiproton Ring (LEAR) at CERN, we have data for antiprotons on helium, and also for some other reactions with antiprotons.
The annihilation of a nucleon and an antinucleon produces a number of pions, on average 5 with 3 of them charged. The charged pions decay into muons and neutrinos, the muons into electrons and neutrinos. The neutral pions decay into two photons. About half of the annihilation energy, 1880 MeV, is carried away by the neutrinos, one third by the photons, and one sixth by electrons and positrons.
If the annihilation occurs in a nucleus, some of the pions may knock out other nucleons. Part of the annihilation energy will go into the kinetic energy of these particles and the recoil energy of the residual nucleus. Experimental data on the energy spectra of these emitted nucleons are well approximated by the formula $`Ce^{E/E_0}`$, with average energy $`E_070`$ MeV, corresponding to a momentum of 350 MeV/$`c`$.
After $`{}_{}{}^{4}\mathrm{He}`$ synthesis, the most important annihilation reactions are $`\overline{\mathrm{p}}\mathrm{p}`$ and $`\overline{\mathrm{p}}{}_{}{}^{4}\mathrm{He}`$. According to Balestra et al., a $`\overline{\mathrm{p}}{}_{}{}^{4}\mathrm{He}`$ annihilation leaves behind a $`{}_{}{}^{3}\mathrm{H}`$ nucleus in $`43.7\pm 3.2`$ % and a $`{}_{}{}^{3}\mathrm{He}`$ nucleus in $`21.0\pm 0.9`$ % of the cases. The rms momentum of the residual $`{}_{}{}^{3}\mathrm{He}`$ was found to be $`198\pm 9`$ MeV/$`c`$.
It is important to consider how these annihilation products are slowed down. If they escape far from the antimatter domain, they will survive; but if they are thermalized close to it they will soon be sucked into the annihilation zone.
Fast ions lose energy by Coulomb scattering on electrons and ions. If the velocity of the ion is greater than thermal electron velocities, the energy loss is mainly due to electrons. At lower energies the scattering on ions becomes more important. Below $`T=30`$ keV, when the thermal electron-positron pairs have disappeared, the penetration distance of an ion of initial energy $`E`$ depends on the ratio $`E/T`$. For $`E(M_{\mathrm{ion}}/m_e)T`$, the penetration distance is
$$l=\frac{m_e}{M_{\mathrm{ion}}}\frac{E^2}{4\pi n_e(Z\alpha )^2\mathrm{\Lambda }}\approx 2\times 10^9\frac{1}{AZ^2}\frac{1}{\eta _{10}}\frac{E^2}{T^3},$$
(1)
where $`\mathrm{\Lambda }\approx 15`$ is the Coulomb logarithm, giving a comoving distance
$$l_{\mathrm{comoving}}\approx \frac{1}{AZ^2}\frac{1}{\eta _{10}}\left(\frac{E}{T}\right)^2\times 0.4\mathrm{m}.$$
(2)
For smaller $`E/T`$, $`l`$ keeps getting shorter, but not as fast as Eqs. (1) and (2) would give.
For $`{}_{}{}^{3}\mathrm{H}`$ and especially for $`{}_{}{}^{3}\mathrm{He}`$, $`l`$ would become comparable to the original size of the antimatter domain only well after the annihilation is over. Thus only a small fraction of these annihilation products escape annihilation. For D this fraction is larger, but still small, except for the largest domains considered here.
Neutrons scatter on ions, losing a substantial part of their energy in each collision. The neutrons from annihilation reactions have sufficient energy to disintegrate a $`{}_{}{}^{4}\mathrm{He}`$ nucleus. This hadrodestruction of $`{}_{}{}^{4}\mathrm{He}`$ causes some additional $`{}_{}{}^{3}\mathrm{He}`$ and D production. Because protons are more abundant than $`{}_{}{}^{4}\mathrm{He}`$ nuclei, a neutron is more likely to scatter on a proton. The mean free path $`\lambda =1/(\sigma _{np}n_p)`$ is larger than the distance scales considered here, so the annihilation neutrons are spread out evenly. At lower temperatures ($`T1`$ keV), neutrons decay into protons before thermalizing. At higher temperatures, the stopped neutrons form deuterium with protons.
The high-energy photons and electrons from pion decay initiate electromagnetic cascades. Below $`T=30`$ keV, the dominant processes are photon-photon pair production and inverse Compton scattering
$$\gamma +\gamma _{bb}e^++e^{},e+\gamma _{bb}e^{}+\gamma ^{},$$
(3)
with the background photons $`\gamma _{bb}`$. The cascade photon energies $`E_\gamma `$ fall rapidly until they are below the threshold for pair production, $`E_\gamma ϵ_\gamma =m_e^2,`$ where $`ϵ_\gamma `$ is the energy of the background photon. Because of the large number of background photons, a significant number of them have energies $`T`$, and the photon-photon pair production is the dominant energy loss mechanism for cascade photons down to
$$E_{\mathrm{max}}=\frac{m_e^2}{22T}.$$
(4)
When the energy of a $`\gamma `$ falls below $`E_{\mathrm{max}}`$, its mean free path increases and it is more likely to encounter an ion.
As the background temperature falls this threshold energy rises, and below $`T5`$ keV, $`E_{\mathrm{max}}`$ becomes larger than nuclear binding energies, and photodisintegration becomes important. Photodisintegration of D begins when the temperature falls below 5.3 keV, photodisintegration of $`{}_{}{}^{3}\mathrm{He}`$ ($`{}_{}{}^{3}\mathrm{H}`$) below 2.2 keV (1.9 keV) and photodisintegration of $`{}_{}{}^{4}\mathrm{He}`$ below 0.6 keV.
Thus there are two regimes for photodisintegration: (1) between $`T=5.3`$ keV and $`T=0.6`$ keV, where the main effect is photodisintegration of D, $`{}_{}{}^{3}\mathrm{H}`$, and $`{}_{}{}^{3}\mathrm{He}`$; and (2) below $`T=0.6`$ keV, where the main effect is production of these lighter isotopes from $`{}_{}{}^{4}\mathrm{He}`$ photodisintegration. Because of the much larger abundance of $`{}_{}{}^{4}\mathrm{He}`$, even a small amount of annihilation during the second regime swamps the effects of the first regime, and only in the case that annihilation is already over by $`T=0.6`$ keV, is D photodisintegration important. Because of the difference in the neutron and proton diffusion rates, domains this small have already significant (neutron) annihilation before $`{}_{}{}^{4}\mathrm{He}`$ synthesis.
For the larger domain sizes, the most significant effect of antimatter domains on BBN turns out to be $`{}_{}{}^{3}\mathrm{He}`$ production from $`{}_{}{}^{4}\mathrm{He}`$ photodisintegration.
We have done numerical computations of nucleosynthesis with antimatter domains. Our inhomogeneous nucleosynthesis code includes nuclear reactions, diffusion, hydrodynamic expansion, annihilation, spreading of annihilation products, photodisintegration of $`{}_{}{}^{4}\mathrm{He}`$ and disintegration by fast neutrons. Because of the lack of data on the yields of annihilation reactions between nuclei and antinuclei, we have not incorporated antinucleosynthesis in our code, but the antimatter is allowed to remain as antinucleons. This could affect our results at the 10% level.
For photodisintegration of $`{}_{}{}^{4}\mathrm{He}`$ we use the results of Protheroe et al. scaled by the actual local $`{}_{}{}^{4}\mathrm{He}`$ abundance. The $`{}_{}{}^{3}\mathrm{He}`$ yield is an order of magnitude greater than the D yield. The Protheroe et al. results assume a standard cascade spectrum. This will not be valid at temperatures below 100 eV, since $`E_{\mathrm{max}}`$ becomes comparable to or greater than the typical energies of the initial $`\gamma `$’s from annihilation. It would be important to determine the true cascade spectrum at these low temperatures, since it will affect our results at the largest scales.
We have in mind a situation where antimatter domains of typical diameter $`2r`$ are separated by an average distance $`2L`$. We represent this with a spherically symmetric grid of radius $`L`$, with an antimatter region of radius $`r`$ at the center. For simplicity we assume equal homogeneous initial densities for the matter and antimatter regions. This density is set so that the final average density after annihilation corresponds to a given baryon-to-photon ratio $`\eta `$. Since we are looking for upper limits on the amount of antimatter, which come from a lower limit on the $`{}_{}{}^{4}\mathrm{He}`$ abundance and an upper limit on the $`{}_{}{}^{3}\mathrm{He}`$ abundance, we choose $`\eta =6\times 10^{-10}`$, near the upper end of the acceptable range in standard BBN, giving high $`{}_{}{}^{4}\mathrm{He}`$ and low $`{}_{}{}^{3}\mathrm{He}`$.
We show the $`{}_{}{}^{3}\mathrm{He}`$ yield as a function of the antimatter domain radius $`r`$ and the antimatter/matter ratio $`R`$ in Fig. 1. For domains smaller than $`r=10^5`$ m, annihilation happens before weak freeze-out, and has no effect on BBN. For domains between $`r=10^5`$ m and $`r=10^7`$ m, neutron annihilation before $`{}_{}{}^{4}\mathrm{He}`$ formation leads to a reduction in $`{}_{}{}^{4}\mathrm{He}`$ and $`{}_{}{}^{3}\mathrm{He}`$ yields. For domains larger than $`r=2\times 10^7`$ m, most of the annihilation happens after $`{}_{}{}^{4}\mathrm{He}`$ synthesis. Antiproton-helium annihilation then produces $`{}_{}{}^{3}\mathrm{He}`$ and D, but most of this is deposited close to the annihilation zone and is soon annihilated. The much more important effect is the photodisintegration of $`{}_{}{}^{4}\mathrm{He}`$ by the cascade photons, since it takes place everywhere in the matter region and thus the photodisintegration products survive. This leads to a large final $`{}_{}{}^{3}\mathrm{He}`$ and D yield. The same applies to $`\mathrm{n}{}_{}{}^{4}\mathrm{He}`$ reactions by fast neutrons from $`\overline{\mathrm{p}}{}_{}{}^{4}\mathrm{He}`$ annihilation, but the effect is much smaller, because $`\overline{\mathrm{p}}{}_{}{}^{4}\mathrm{He}`$ annihilation is less frequent than $`\overline{\mathrm{p}}\mathrm{p}`$ annihilation, and a smaller part of the annihilation energy goes into neutrons than in the electromagnetic cascades.
We obtain upper limits to the amount of antimatter in the early universe by requiring that the primordial $`{}_{}{}^{4}\mathrm{He}`$ abundance $`Y_p`$ must not be lower than $`Y_p=0.22`$, and that the primordial $`{}_{}{}^{3}\mathrm{He}`$ abundance must not be higher than $`{}_{}{}^{3}\mathrm{He}`$/H = $`10^{-4.5}`$. (The standard BBN results for $`\eta =6\times 10^{-10}`$ are $`Y_p=0.248`$ and $`{}_{}{}^{3}\mathrm{He}`$/H = $`1.1\times 10^{-5}`$.) For domain sizes $`r\lesssim 10^{11}`$ m (or 10 parsecs today), these limits are stronger than those from the CMB spectrum distortion. See Fig. 2.
For $`r<10^5`$ m, there is no BBN constraint on antimatter. For $`r=10^5`$–$`10^7`$ m, the amount of antimatter can be at most a few per cent, to avoid $`{}_{}{}^{4}\mathrm{He}`$ underproduction. Our limit is somewhat weaker than that of Rehm and Jedamzik, since they considered a lower $`\eta =3.4\times 10^{-10}`$.
For larger domains, antimatter annihilation causes $`{}_{}{}^{3}\mathrm{He}`$ production from $`{}_{}{}^{4}\mathrm{He}`$ photodisintegration and the limit reaches $`R=2\times 10^{-4}`$ at $`r\sim 10^9`$ m.
There may exist small regions of parameter space where acceptable light element yields would be obtained for “nonstandard” values of $`\eta `$ and large $`R`$. Clearly the simultaneous reduction of $`{}_{}{}^{4}\mathrm{He}`$ and increase of $`{}_{}{}^{3}\mathrm{He}`$ and D suggest such a possibility for large $`\eta `$.
We thank T. von Egidy, A.M. Green, K. Jedamzik, K. Kajantie, J. Rehm, J.-M. Richard, M. Sainio, G. Steigman, M. Tosi, and S. Wycech for useful discussions. We thank M. Shaposhnikov for suggesting this problem to us and P. Keränen for reminding us about photodisintegration. We thank the Center for Scientific Computing (Finland) for computational resources.
# The collision strength of the [Ne v] infrared fine-structure lines

Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA.
## 1 Introduction
The calculation of accurate collision strengths for atomic transitions has been a long-standing problem in the field of quantitative spectroscopy. Any calculation involving atoms in non-LTE conditions requires the knowledge of vast numbers of collision strengths in order to be realistic and accurate. Until recently the computing power was simply not available to calculate collision strengths in a systematic way. One either had to resort to simpler and more approximate methods or one had to limit the calculations to only the most important transitions. This situation has now changed with the start of the Iron Project (Hummer et al. 1993), which aims to produce a large database of accurately calculated collision strengths.
The collision strength for an atomic transition depends strongly on the energy of the colliding electron and shows many resonances (e.g., Aggarwal 1984). Such resonances occur when the total energy of the target ion and the colliding electron corresponds to an auto-ionizing state. In order to calculate these resonances accurately, a fine grid of energy points is necessary. This is a type of problem for which R-matrix methods are very well suited. However, a source of uncertainty in these calculations is that the energies of most auto-ionizing states have not been measured in the laboratory and therefore need to be derived from calculations. It is a well-known fact that the resulting energies are not very accurate and hence the positions of the resonances are also uncertain. Since the collision strengths are usually folded with a Maxwellian energy distribution, this is not a major problem for high temperature (i.e., X-ray) plasmas where the distribution is much broader than the uncertainty in the position of the resonances. However, for low temperature (e.g., photo-ionized) plasmas this can lead to problems if a resonance is present near the threshold energy for the transition. If only the high-energy tail of the Maxwellian distribution is capable of inducing a collisional transition, then a small shift in the position of a near-threshold resonance can have a severe impact on the effective collision strength. This effect would be even more pronounced if the resonance shifts below the transition threshold and disappears completely.
The inclusion of resonances in the calculation of collision strengths can lead to much higher values than were previously published (see Table 2 in Oliva et al. 1996). This is also illustrated in Table 1 where we show various calculations of the effective collision strength of transitions within the ground term of Ne<sup>4+</sup>. One can see that the R-matrix calculations of Aggarwal (1983) and Lennon & Burke (1991, 1994) yield substantially larger results than the previous calculations. This is caused by the presence of a large complex of strong resonances at low energies (see Fig. 5 in Aggarwal 1984). This large difference has led to a discussion of the validity of the R-matrix calculations (Clegg et al. 1987, C87; Oliva et al. 1996, O96). Both studies tested the calculations by comparing predicted flux ratios with observations and both concluded that the R-matrix calculations yielded results that are too high. Nebulae offer powerful tests of atomic physics, and have revealed incomplete treatment in the past (Péquignot et al. 1978, Harrington et al. 1980). However, both C87 and O96 base their conclusions on only one nebula and both include LRS data in their analysis. Since the accuracy of LRS data is limited to approximately 30 % (Pottasch et al. 1986), a re-analysis based on more accurate SWS data is warranted. In this paper we will present a test of the R-matrix calculations for Ne<sup>4+</sup> by applying them to observational data of NGC 3918, NGC 6302 (the nebulae studied by C87 and O96, respectively) and NGC 7027. This will yield a larger and more accurate sample for the discussion.
## 2 The observational data and analysis
Several SWS observations were obtained for the objects studied in this paper. A log of the observations is shown in Table 2. The instrument is described in de Graauw et al. (1996). The observations used three different templates: SWS01 – a spectral scan from 2.4 $`\mu `$m to 45 $`\mu `$m, SWS02 – a set of grating scans of individual lines, and SWS06 – a high resolution grating scan. All SWS spectra of NGC 7027 were obtained during calibration time. The complete SWS06 spectrum of NGC 6302 is published in Beintema & Pottasch (1999). An SWS spectrum of NGC 3918 was also obtained, but not used. Due to inaccurate pointing the source was partially outside the aperture.
The line fluxes were measured in spectra reduced with the SWS interactive-analysis software. The line fluxes were virtually identical to those derived from the standard ISO auto-analysis products. The various SWS measurements were subsequently averaged. Table 3 shows the dereddened line fluxes we have adopted for our study. The values adopted by C87 and O96 are also shown for comparison. The extinction corrected \[Ne v\] 342.6 nm fluxes were taken from the original publications. For both NGC 7027 and NGC 6302, the dereddening is complicated by the fact that the extinction varies over the nebula. The correction for the blend in NGC 3918 is discussed below. The infrared line fluxes were dereddened using the law from Mathis (1990).
To calculate the diagnostic diagrams we used the effective collision strengths given in Lennon & Burke (1994) and adopted the transition probabilities from Baluja (1985). We used a 5-level atom to calculate the relative level populations. The results of our analysis are shown in Table 4 and Fig. 1. The line flux ratios are defined as: $`R_1=I(14.32)/I(342.6)`$ and $`R_2=I(14.32)/I(24.32)`$. To determine the uncertainties in the line ratios we included contributions from the absolute calibration accuracy of the UV and IR data, the internal calibration accuracy of the SWS data and the uncertainty in the extinction correction. We will now discuss the results for each nebula separately.
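The statistical-equilibrium calculation behind such a 5-level analysis can be sketched as follows. The solver below is a generic illustration; the atomic data in the example call are hypothetical placeholders, not the actual Lennon & Burke (1994) collision strengths or Baluja (1985) transition probabilities.

```python
import numpy as np

def level_populations(E, g, A, omega, T, ne):
    """Relative level populations of an N-level ion in statistical equilibrium.

    E     -- level energies in eV, ascending, with E[0] = 0
    g     -- statistical weights
    A     -- A[u, l]: spontaneous transition probability u -> l (s^-1)
    omega -- omega[u, l]: effective collision strength of the pair (u, l)
    T     -- electron temperature (K)
    ne    -- electron density (cm^-3)
    """
    kT = 8.617333e-5 * T                 # Boltzmann constant (eV/K) times T
    nlev = len(E)
    R = np.zeros((nlev, nlev))           # R[i, j]: total rate i -> j (s^-1)
    for u in range(nlev):
        for l in range(u):
            # collisional de-excitation rate coefficient (cm^3 s^-1)
            q_ul = 8.629e-6 * omega[u, l] / (g[u] * np.sqrt(T))
            # excitation rate coefficient follows from detailed balance
            q_lu = q_ul * g[u] / g[l] * np.exp(-(E[u] - E[l]) / kT)
            R[u, l] = A[u, l] + ne * q_ul
            R[l, u] = ne * q_lu
    # steady state: sum_j n_j R[j, i] - n_i sum_j R[i, j] = 0 for every i
    M = R.T - np.diag(R.sum(axis=1))
    M[0, :] = 1.0                        # replace one equation by sum(n) = 1
    b = np.zeros(nlev)
    b[0] = 1.0
    return np.linalg.solve(M, b)

# toy 3-level example -- all numbers hypothetical
E = np.array([0.0, 0.1, 1.5])
g = np.array([1.0, 3.0, 5.0])
A = np.zeros((3, 3)); A[1, 0] = 1e-3; A[2, 0] = 0.1; A[2, 1] = 0.05
om = np.ones((3, 3))
pops = level_populations(E, g, A, om, 15000.0, 1e4)
```

A diagnostic ratio such as $`R_2`$ then follows from the computed populations, the A values and the photon energies of the two transitions.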
### 2.1 NGC 7027
We have included this nebula in our sample because it is bright and well studied. This assures that accurate values for the fluxes needed in our analysis are available. Additionally, we have a detailed photo-ionization model (Beintema et al. 1996) which allows us to derive good estimates for the expected electron temperature and density. For a high excitation nebula like NGC 7027, significant temperature stratification of the plasma can be expected. We have illustrated this effect in Table 5 where we show the average electron temperature in various line forming regions. One can see that the electron temperature rises almost monotonically with the ionization potential $`\chi _{\mathrm{ion}}`$. It is important to note that the temperature in the Ne<sup>4+</sup> region is substantially higher than the temperature in any other line forming region observed in the spectrum. The expected values for the electron temperature and density are based on our photo-ionization model. The analysis of the \[Ne v\] lines gives a result which deviates less than 3 $`\sigma `$ from these values.
### 2.2 NGC 6302
This is the nebula studied by O96. The \[Ne v\] temperature and density for this nebula were already derived from the SWS06 spectrum by Pottasch & Beintema (1999). In our analysis we will include the SWS01–4 spectrum as well. The expected values for the electron temperature and density are those of O96. Our analysis gives a result which deviates slightly more than 1 $`\sigma `$ from these values. The preferred temperature of O96 is based on various determinations using ions with lower ionization potentials than Ne<sup>4+</sup>. In view of the discussion in the previous section concerning temperature stratification, this estimate is probably too low. Especially in view of the temperatures derived by O96 from rather low excitation line ratios like \[S iii\], \[Ar iii\] and \[O iii\] which range between 18 100 K and 19 400 K, the \[Ne v\] temperature may be expected to be considerably higher than 19 000 K. A temperature of 22 000 K is more realistic (see the discussion in Pottasch & Beintema 1999). This value for the temperature is indicated by an arrow in Fig. 1. After this correction the discrepancy is slightly less than 1 $`\sigma `$.
### 2.3 NGC 3918
This is the nebula studied by C87. The intensity for the \[Ne v\] 342.6 nm line is in doubt. C87 quote $`I(342.6)=80`$, but it is not clear how this value was obtained. We decided to use the value quoted in Aller & Faulkner (1964) instead. From the discussion in that article it is not clear whether the data were corrected for interstellar extinction. The intensities they quote for other strong blue emission lines compare well with the dereddened intensities given by C87 and we therefore assume that the Aller & Faulkner (1964) data are corrected for interstellar extinction. They give $`I(342.6+342.9+344.4)=60`$. The correction for the blend with the O iii 342.9 nm and 344.4 nm Bowen resonance-fluorescence lines is easy, since the O iii 313.3 nm line has been measured by C87. The O iii 313.3 nm, 342.9 nm and 344.4 nm lines all originate from the same upper level ($`2s^2\mathrm{\hspace{0.17em}2}p\mathrm{\hspace{0.17em}3}d`$ $`{}_{}{}^{3}P_{2}^{}`$) and the intensity ratio of the lines is simply given by the ratio of the transition probabilities times the photon energy. Using Opacity Project data (Luo et al. 1989) one finds $`I(313.3)`$ : $`I(342.9)`$ : $`I(344.4)`$ = 10.94 : 1.00 : 2.94. C87 gives $`I(313.3)=85`$. Hence $`I(342.9+344.4)=30.6`$ and $`I(342.6)=29.4`$. This result is substantially lower than the value used by C87. We were not able to correct the SWS spectrum accurately for aperture effects and therefore preferred to use the LRS flux for the \[Ne v\] 14.32 $`\mu `$m line. To complete the data set, we assumed a value for the 24.32 $`\mu `$m flux such that the resulting density agreed with the expected value. None of the flux values we adopted for this nebula can be considered accurate and re-measurement is warranted. The expected values for the electron temperature and density were determined by averaging the data in Table 12 of C87.
For the temperature we only used the values derived from the \[Ar v\], \[Ne iv\] and \[Ne v\] line ratios, for the density we used all values except those derived from \[Mg i\], \[N iv\] and \[O iv\] lines. One can see that there is a slightly more than 2 $`\sigma `$ discrepancy for the electron temperature. Again the expected value for the electron temperature may be underestimated due to temperature stratification. We think 15 000 K is in all probability a more realistic, though still conservative, estimate. This value for the temperature is indicated by an arrow in Fig. 1. After this correction there is a 1.5 $`\sigma `$ discrepancy.
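The blend correction for NGC 3918 described above is simple arithmetic; as a sketch, using only the numbers quoted in the text:

```python
# O iii Bowen lines share the upper level 2s^2 2p 3d 3P2, so their
# intensity ratio is fixed: I(313.3) : I(342.9) : I(344.4) = 10.94 : 1.00 : 2.94
r3133, r3429, r3444 = 10.94, 1.00, 2.94

I_3133 = 85.0        # O iii 313.3 nm, measured by C87
I_blend = 60.0       # I(342.6 + 342.9 + 344.4) from Aller & Faulkner (1964)

I_oiii = I_3133 * (r3429 + r3444) / r3133   # O iii contribution to the blend
I_nev = I_blend - I_oiii                    # remaining [Ne v] 342.6 nm intensity

print(round(I_oiii, 1), round(I_nev, 1))    # 30.6 29.4
```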
## 3 Discussion
A major advance in atomic theory in the past decade has been close coupling calculations that include resonances. These new calculations (carried out with an R-matrix code) can raise the collision strength by an order of magnitude or more compared to older calculations. These large differences have led to a discussion of the validity of these calculations. Nebulae offer powerful laboratories for verifying atomic processes, and two studies (C87 and O96) used this approach to test the R-matrix calculations for Ne<sup>4+</sup>. They found that spectra of planetary nebulae did not agree with the R-matrix results. This cast doubt on the validity of the close coupling calculations.
In this paper we have redone the analysis carried out by C87 and O96 using newer, more accurate data. We also included the well studied nebula NGC 7027 in the analysis to obtain a larger sample. We found that the expected values of the electron temperature and density all were within 3 $`\sigma `$ of our results. Hence there is no proof for significant problems with the R-matrix calculations. On closer inspection one sees that the largest discrepancy is for NGC 7027, the best studied nebula. Also the electron temperature derived from our analysis is systematically higher than the expectation value for all nebulae. This could point to inaccuracies in the collision strengths. To check this point further, we have re-analyzed our sample using effective collision strengths which were lowered by 30 % for transitions within the $`{}_{}{}^{3}P`$ ground term. The results are shown in the bottom row of Fig. 1. They are in good agreement with the expected values. Hence the R-matrix calculations for Ne<sup>4+</sup> could be off by 30 %, but certainly not more. We point out that this is significantly less than the factor $`\sim 2.7`$ suggested by O96. An alternative explanation could be that the \[Ne v\] 342.6 nm fluxes are systematically overestimated due to a problem with the extinction curve, or a combination of the two effects.
We reach the following main conclusions:
1. The discrepancies found by C87 can be explained by the inaccuracy of the \[Ne v\] 342.6 nm flux they adopted. The discrepancies found by O96 mainly stem from the inaccuracy of the LRS measurement of the \[Ne v\] 14.32 $`\mu `$m line.
2. Based on the data presented in this paper there is no reason to assume that there are any problems with the collision strengths for Ne<sup>4+</sup> calculated by Lennon & Burke (1994). Our analysis has shown that the data are accurate at the 30 % level or better. This confirms the validity of close coupling calculations.
###### Acknowledgements.
In this paper data from the Atomic Line List v2.02 (`http://www.pa.uky.edu/~peter/atomic`) were used. We thank the NSF and NASA for support through grants AST 96-17083 and GSFC–123.
# Generation of continuous-wave THz radiation by use of quantum interference
## I Acknowledgments
We are very grateful to Prof. L. Windholz for his continuing interest in this work and useful discussions. D.V. Kosachiov thanks the members of the Institut für Experimentalphysik, TU Graz, for hospitality and support. This study was supported by the Austrian Science Foundation under project No. P 12894-PHY.
E. Korsunsky’s e-mail address is e.korsunsky@iep.tu-graz.ac.at.
Figure captions
Fig. 1. Closed $`\mathrm{\Lambda }`$ system with two metastable states $`|1`$ and $`|2`$. $`\omega _{31}`$ and $`\omega _{32}`$ are the optical frequencies, $`\omega _T`$ is the THz-range frequency.
Fig. 2. Spatial variations of the optical (a) and THz (b) field intensities and the relative phase $`\mathrm{\Phi }`$ (c) in a vapor of <sup>24</sup>Mg atoms interacting with radiation in a closed $`\mathrm{\Lambda }`$ configuration of levels $`3^3P_13^3P_2`$ $`\mathrm{\hspace{0.17em}4}^3S_1`$. For this system, the relaxation rates are $`\gamma _{31}=3.46\times 10^7`$ sec<sup>-1</sup>, $`\gamma _{32}=1.66\gamma _{31}`$, $`\gamma _{21}=2.6\times 10^{-14}\gamma _{31}`$, the wavelengths $`\lambda _{31}=517.27nm`$, $`\lambda _{32}=518.36nm`$. Other parameters are: vapor temperature $`T=10^3`$ $`K`$, $`\mathrm{\Gamma }=0`$, detunings $`\mathrm{\Delta }_{31}=\mathrm{\Delta }_{32}=0`$, Rabi frequencies of input fields $`g_{31}\left(\tau =0\right)=10\gamma _{31},g_{32}\left(\tau =0\right)=0.1\gamma _{31}`$ and $`g_{12}\left(\tau =0\right)=0`$. The dotted curve in (b) is a calculation with formula (27) for $`u_{20}^2=0.55\times 10^{-4}`$.
Fig. 3. Spatial variations of the optical (a) and THz (b) field intensities and the relative phase $`\mathrm{\Phi }`$ (c) in a vapor of <sup>24</sup>Mg atoms for vapor temperature $`T=800`$ $`K`$, $`\mathrm{\Gamma }=10^{-4}\gamma _{31}`$, detunings $`\mathrm{\Delta }_{31}=\mathrm{\Delta }_{32}=0`$, Rabi frequencies of input fields $`g_{31}\left(\tau =0\right)=60\gamma _{31},g_{32}\left(\tau =0\right)=20\gamma _{31}`$ and $`g_{12}\left(\tau =0\right)=0`$. Other parameters are the same as in Fig. 2.
Fig. 4. Spatial variations of the THz radiation intensity in a vapor of <sup>24</sup>Mg atoms for $`\mathrm{\Gamma }=2\times 10^{-3}\gamma _{31}`$ and Rabi frequencies of input fields: (a) $`g_{31}\left(\tau =0\right)=60\gamma _{31},g_{32}\left(\tau =0\right)=20\gamma _{31}`$ and (b) $`g_{31}\left(\tau =0\right)=300\gamma _{31},g_{32}\left(\tau =0\right)=100\gamma _{31}`$. Other parameters are the same as in Fig. 3. Notice the different length scales in (a) and (b).
# Dynamics of the Number of Trades of Financial Securities
## I Introduction
In the field of econophysics, several empirical studies have investigated the statistical properties of stock price and volatility time series of assets traded in financial markets (for a recent overview see for example ). Comparatively less attention has been devoted to the statistical properties of the dynamics of the number of trades of a given asset. Such an investigation is relevant to the basic problem of a quantitative assessment of the liquidity of the considered asset. There are two general aspects of liquidity for an asset traded in a market. The first concerns how easily assets can be converted back into cash in a market, whereas the second refers to the ability of the market to absorb a given amount of buying and selling of shares of a given security at reasonable price changes. In the present study we consider the statistical properties of trading related to the second aspect of liquidity cited above. Specifically, we investigate the temporal dynamics of the number of daily trades of 88 stocks traded in the New York Stock Exchange. The 88 stocks are selected at different values of their capitalization to provide a set representative of different levels of capitalization. The capitalization interval of our set spans more than three orders of magnitude. With this choice we are able to investigate both heavily traded and less frequently traded stocks. In the present study we focus our attention on the time memory of the dynamics of the number of trades of a set of financial securities. We find that the most capitalized stocks are characterized by a dynamics of the number of trades with interesting statistical properties. The time evolution of the number of trades has an associated power spectrum that is well approximated by a $`1/f`$-like behavior.
We interpret this result as quantitative evidence that the level of activity of highly capitalized and frequently traded stocks in a financial market does not possess a typical time scale. The same behavior is also qualitatively detected in less capitalized stocks, although the value of the exponent $`\gamma `$ of the power spectrum $`S(f)\propto 1/f^\gamma `$ deviates from the value $`\gamma =1`$ and moves to lower values for stocks with lower capitalization. The modeling of systems with $`1/f`$ power spectra is still an unsolved theoretical problem. Hence the level of activity of a given asset in a financial market presents a statistical behavior whose theoretical modeling is certainly challenging. The paper is organized as follows: in the next section we illustrate the analyses performed by discussing the results obtained for one of the most capitalized stocks of the New York Stock Exchange. Section 3 presents the results obtained by investigating the selected set of stocks and in Section 4 we briefly state our conclusions.
## II Single stock analysis
We analyze the number of daily trades for a selected set of securities traded in the New York Stock Exchange (NYSE). Our database is the trade and quote (TAQ) database for the three-year period from Jan. 1995 to Dec. 1997. The typical number of daily trades $`n(t)`$ depends on the stock considered and varies from a few to a few thousand per day. We first analyze the dynamics of the number of daily trades for General Electric Co. (GE), which is one of the most capitalized stocks in the NYSE in the investigated period. The time series of the daily number of trades is shown in the inset of Fig. 1. The number of daily trades fluctuates and the time series is non-stationary. In order to test for the presence of long-range correlations in this time series, we determine its spectral density. The spectral density for the time series of the number of trades of GE is shown in the upper curve of Fig. 1. For comparison we also show the spectral density of the logarithm of the daily price of the same security. We fit both spectral densities with a power-law function $`S(f)\propto 1/f^\gamma `$ using logarithmic binning in order not to overestimate the high-frequency components. Our best estimate for the exponent $`\gamma `$ of the spectral density of the logarithm of the price is $`\gamma =1.93`$.
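The power-law fit with logarithmic binning can be sketched as follows. This is a generic implementation, not the authors' code; the random-walk test signal stands in for the log-price series, for which an exponent close to $`\gamma =2`$ is expected.

```python
import numpy as np

def spectral_exponent(x, nbins=20):
    """Fit S(f) ~ 1/f^gamma to the periodogram of x, averaging the
    periodogram in logarithmically spaced frequency bins so that the
    numerous high-frequency points do not dominate the fit."""
    x = np.asarray(x, dtype=float)
    f = np.fft.rfftfreq(len(x))[1:]                  # drop the zero frequency
    S = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2   # periodogram
    edges = np.logspace(np.log10(f[0]), np.log10(f[-1]), nbins + 1)
    logf, logS = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (f >= lo) & (f < hi)
        if m.any():
            logf.append(np.log10(f[m].mean()))
            logS.append(np.log10(S[m].mean()))
    slope, _ = np.polyfit(logf, logS, 1)             # linear fit in log-log
    return -slope                                    # gamma in S(f) ~ 1/f^gamma

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))          # random-walk test signal
gamma = spectral_exponent(walk)                      # expect gamma close to 2
```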
Hence, the spectral density is well approximated by $`S(f)\propto 1/f^2`$, which is the prediction for the spectral density of a random walk. This form of the spectral density is related to the fact that price returns are correlated only over short times. This result is well-established in financial literature . We observe a different behavior for the spectral density of the daily number of trades. Our best fit for the exponent $`\gamma `$ gives $`\gamma =1.19`$. This value of the exponent is compatible with a $`1/f`$ noise . The time series of the GE daily trades is non-stationary and shows a clear trend. We check the effect of these features on the fitting of the spectral density by investigating surrogate data obtained by removing the trend with a running average of the original time series computed over a window of length $`L`$. With this procedure, we obtain a detrended time series by dividing the original number of trades at a given day $`t`$ by the average of the number of trades in a period of length $`L`$ centered in $`t`$. We take values of $`L`$ equal to $`11,21,41`$ and $`81`$ trading days. The new time series do not present a global trend. In Fig. 2 we show the spectral densities of the detrended time series; only frequency components lower than $`1/L`$ are affected by the running average procedure. We perform a power-law fit in the interval of frequencies higher than $`1/L`$ and our best fit for the exponent $`\gamma `$ gives $`\gamma =1.09,0.85,0.85,0.91`$ for $`L=11,21,41,81`$, respectively. The estimate of the exponent is only weakly affected by the running-average procedure. These results for the spectral density of the original and detrended time series indicate that the number of trades is intrinsically long-range correlated.
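The detrending step, dividing each point by a centered running average of length $`L`$, can be sketched as follows; the trades series below is a synthetic stand-in (trend plus oscillation), not actual TAQ data.

```python
import numpy as np

def detrend(n, L):
    """Divide each point of the series by the centered running average of
    length L (L odd); only frequencies below roughly 1/L are affected."""
    kernel = np.ones(L) / L
    trend = np.convolve(n, kernel, mode="same")   # centered moving average
    return n / trend

# synthetic daily-trades series: a slow linear trend plus fluctuations
t = np.arange(1000)
n = 200 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 50)
d = detrend(n, 41)   # detrended series fluctuates around 1, trend removed
```

Note that `mode="same"` averages over fewer effective points near the edges, so the first and last $`L/2`$ points of the detrended series should be treated with caution.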
## III Comparison between stocks
In this section we test the general validity of the results obtained in the previous section for a single stock. In particular we determine the spectral density of the logarithm of the price and of the number of daily trades for a set of $`88`$ selected stocks traded in the NYSE. We select the stocks randomly in a wide range of capitalization. The capitalization of our set ranges from $`32\times 10^6`$ USD, which is the capitalization of Sun Energy (SLP), to $`80\times 10^9`$ USD for the GE. We determine the spectral density of $`\mathrm{ln}S_i(t)`$ and of $`n_i(t)`$, where $`S_i(t)`$ and $`n_i(t)`$ are the closing price and the number of trades of company $`i`$ at day $`t`$, respectively. The subscript $`i=1,\mathrm{},88`$ labels the company. For each spectral density we perform a power-law fit $`S(f)\propto 1/f^\gamma `$. We check the robustness of our fittings by determining their correlation coefficients and we find absolute values ranging from $`0.87`$ to $`0.99`$ (average value $`0.97`$) for the logarithm of the price and from $`0.56`$ to $`0.96`$ (average value $`0.86`$) for the number of trades. In Fig. 3 we show the two exponents as a function of the capitalization of the investigated security. The large majority of values of the exponent $`\gamma `$ of the spectral densities of the logarithm of the price is close to the value $`\gamma =2`$. On the other hand the large majority of values of the exponent $`\gamma `$ of the spectral densities of the number of daily trades is close to the value $`\gamma =1`$, in particular for the most capitalized stocks. Moreover, the exponent $`\gamma `$ increases towards the value $`\gamma =1`$ as the capitalization increases. When we consider less capitalized stocks the typical number of daily trades can be very small, of the order of $`10`$ or less. This leads to a problem of high digitization noise in the time series, because the number of trades is an integer and the quantization noise is relevant. As a consequence the statistical analysis of these time series may contain a non-negligible white-noise contribution. This fact may be one reason for the observed decrease of the value of the exponent $`\gamma `$ in less capitalized stocks.
## IV Conclusion
The statistical properties of (i) the logarithm of the price and (ii) the number of daily trades of a security traded in a financial market are quite different. Specifically, we confirm that the time series of the logarithm of the price are characterized by a $`1/f^2`$ spectral density, whereas we observe that the time series of the daily number of trades show a $`1/f`$-like spectral density. The $`1/f^2`$ behavior observed in the time evolution of the logarithm of the price reflects the short-time temporal memory of price returns. This kind of short-range time memory is needed to ensure the absence of arbitrage opportunities in the market. On the other hand the $`1/f`$-like behavior observed in the daily number of trades manifests the absence of a typical scale in the time memory of this variable. In other words the activity of trading is not constant in time even for the most capitalized stocks, and its modeling needs to take into account phenomena occurring at a variety of time scales. Realistic models of trading activity in financial markets should take this empirically observed feature into account.
## V Acknowledgements
The authors thank INFM and MURST for financial support. This work is part of the FRA-INFM project ’Volatility in financial markets’. G. Bonanno and F. Lillo acknowledge FSE-INFM for their fellowships.
# The COINS Sample – VLBA Identifications of Compact Symmetric Objects
## 1 Introduction
Compact Symmetric Objects (CSOs) are compact ($`<`$1 kpc) sources with lobe emission on both sides of an active core (Wilkinson et al. 1994). The study of these objects is of particular interest because the small size of the CSOs is thought to be attributable to the youth of the source itself (ages 10<sup>3</sup> – 10<sup>4</sup> yr), rather than its confinement by a dense medium (Readhead et al. 1996a). Unifying evolutionary models have been proposed (Readhead et al. 1996b, Fanti et al. 1995) whereby these CSOs evolve into Compact Steep Spectrum (CSS) sources, and then into Fanaroff-Riley (1974) Type II objects.
Recent studies have shown that the majority of detections of Hi absorption in galaxies has been in CSOs and Steep Spectrum Core (SSC) objects (Conway 1996; Peck et al. 1999), rather than core-dominated radio sources. Of the theories proposed to explain this difference, the existence of a circumnuclear disk or torus structure seems the most likely. In this scenario, it is the orientation and geometry of the sources which is the cause of the discrepancy. The core-dominated sources are oriented close to the line of sight and the jets are comprised of extremely high velocity outflow. This causes the approaching jet to be strongly Doppler boosted, while the counterjet is Doppler dimmed. In CSOs, on the other hand, most of the continuum emission is not beamed, and thus the counterjet can contain up to half of the flux density of the radio source. Obscuration by a circumnuclear torus can then be seen against this counterjet, and if the structure has a significant fraction of atomic gas, Hi absorption can be expected (Conway & Blanco 1995). In some cases (e.g. 1946+708 – Peck, Taylor & Conway 1999) free-free absorption provides further evidence of a dense circumnuclear torus.
Another benefit of studying sources oriented at small angles to the plane of the sky is that in some cases jet velocities can be measured for both the jet and counterjet. Under the assumption that components are ejected simultaneously from the central engine, observations of proper motions can provide a direct measure of the distance to a source, and constraints can be placed on the value of H<sub>0</sub> (Taylor & Vermeulen 1997).
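The kinematic argument can be made explicit. For two components ejected simultaneously in opposite directions at intrinsic speed $`\beta c`$ and angle $`\theta `$ to the line of sight, the standard special-relativistic apparent-motion formulae give the jet and counterjet proper motions at distance $`D`$ (the notation here is ours, not taken from the cited papers):

```latex
\mu_{\mathrm{j}} = \frac{c}{D}\,\frac{\beta\sin\theta}{1-\beta\cos\theta},
\qquad
\mu_{\mathrm{cj}} = \frac{c}{D}\,\frac{\beta\sin\theta}{1+\beta\cos\theta},
```

so that

```latex
\beta\cos\theta = \frac{\mu_{\mathrm{j}}-\mu_{\mathrm{cj}}}{\mu_{\mathrm{j}}+\mu_{\mathrm{cj}}},
\qquad
D = \frac{2c}{\mu_{\mathrm{j}}+\mu_{\mathrm{cj}}}\,
    \frac{\beta\sin\theta}{1-\beta^{2}\cos^{2}\theta}.
```

Measuring both proper motions fixes $`\beta \mathrm{cos}\theta `$ directly; with one further constraint on $`\beta `$ or $`\theta `$, the distance $`D`$ follows, and comparison with the redshift distance constrains H<sub>0</sub>.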
Unfortunately, only a few CSOs have been studied to date. In these few CSOs, the kinematics of the atomic hydrogen seen in absorption on parsec scales is intriguing, but complicated. It is clear that a more comprehensive sample is required to fully understand the nature of the circumnuclear gas in these objects. The CSOs Observed in the Northern Sky (COINS) sample, defined in §2, is an attempt to identify a larger sample of CSOs which can be comprehensively studied using VLBI techniques. Here we identify 18 previously known CSOs, listed in Table 1. Hi absorption studies toward several of these are currently underway, with 4 yielding published detections. Table 1 also lists 33 new, and one previously known but unconfirmed, CSO candidates. In §3 we describe multi-frequency VLBI polarimetric follow-up observations of sources in the COINS sample. Results are presented in §4 and discussed in §5. Future papers will address bidirectional jet motions and free-free absorption in the COINS sample. Before Hi absorption studies can be undertaken, optical spectroscopy is required to determine the redshifts of the sources. The current status of the source redshifts, and Hi absorption studies, is given in Peck et al. (1999).
## 2 Sample Selection Criteria
The class of Compact Symmetric Objects is distinguished from other compact radio sources by a high degree of symmetry. An explanation for this symmetry is that CSOs tend to lie close to the plane of the sky. For this reason, relativistic beaming plays a minor role in their observed properties, resulting in little continuum variability (Wilkinson et al. 1994). They usually have well-defined lobes and edge-brightened hotspots on either side of an active core, often exhibiting a striking “S”-shaped symmetry (Taylor, Readhead, & Pearson 1996a).
Since CSOs are rare ($`\sim `$2% of compact objects, see §5), it is necessary to start with large VLBI surveys and go to moderately low flux density levels ($`\sim `$100 mJy at 5 GHz) in order to obtain the 52 CSO candidates that make up the COINS sample. The sources in the COINS sample have been identified based on images in the Pearson-Readhead (PR; Pearson & Readhead 1988), Caltech-Jodrell Bank (CJ; Polatidis et al. 1995; Taylor et al. 1994) and VLBA Calibrator (VCS; Peck & Beasley 1998) Surveys. The sources in the VCS were primarily selected from the Jodrell Bank-VLA Astrometric Survey (JVAS; Patnaik et al. 1992).
The sources in the COINS sample are described in Table 1. Column (1) lists the J2000 convention source name of the CSO candidate. Column (2) provides an alternate name, with those prefaced by PR or CJ indicating selection from that survey. Columns (3) and (4) list the RA and Declination of the source in J2000 coordinates, and column (5) shows the optical identification of the source.
The 19 sources chosen from the PR and CJ surveys were selected based on criteria described in Readhead et al. (1996a) and in Taylor et al. (1996a). In brief, an initial selection of objects was made having (1) at 5 GHz a nearly equal double structure or (2) extended emission on either side of a strong unresolved component. These identifications are reasonably secure, although it is possible that further observations could eliminate one or two sources. The remaining 33 candidates were selected from the VCS, and the follow-up observations are described below. The VCS is an on-going project to image $`\sim `$2000 sources at 2.7 and 8.4 GHz for use as VLBI phase-reference calibrators. These sources (in the declination range $`-`$30° to +90°) have been observed for $`\sim `$3 minutes at both frequencies, and the images are available on the World Wide Web at http://magnolia.nrao.edu/vlba_calib/. The sources in this study were selected from images of the 1500 positive declination sources in the VCS. The CSO candidates were identified based on at least one of the following criteria: a) double structure at 2.7 GHz, 8.4 GHz or both, where "double structure" is considered to mean having two distinct components with an intensity ratio $`<10:1`$; b) a strong central component with possible extended structure on both sides at one or both frequencies; c) possible edge-brightening of one or more components.
## 3 Observations and Analysis
Observations of the sources selected from the PR and CJ surveys are reported in Readhead et al. (1996) and in Taylor et al. (1996a,1996b). The 33 remaining candidates listed in Table 1 which were chosen from the VCS were observed as described here along with 1 candidate CSO from the CJ survey.
The follow-up observations were made with the 10 element VLBA and a single VLA antenna, and consisted of 2 observations of 24 hours each. The first of these took place on 1997 December 28 (1997.99) when 22 sources were observed for 15 minutes each at 8.4 GHz, and 10 of these having a peak flux $`>`$100 mJy at 8 GHz in the VCS were also observed at 15 GHz for $`\sim `$55 minutes each. The time spent on each source was divided into 7 scans which were spread out in hour angle to obtain good (u,v) coverage. The second observation took place on 1998 March 16 (1998.21). The remaining 12 candidates were observed for 15 minutes each at 8.4 GHz, and 17 candidates were also observed at 5 GHz for $`\sim `$25 minutes each. In addition, 3 weak sources (having peak fluxes at 8 GHz $`<`$40 mJy in the VCS), as well as one source which was found to be very weak at 15 GHz and one which was found to have extended emission, were observed at 1.6 GHz for $`\sim `$25 minutes each.
At all frequencies, right and left circular polarizations were recorded using 2 bit sampling with a total bandwidth of 8 MHz. Amplitude calibration was derived using the measurements of antenna gain and system temperature recorded at each antenna, and refined using the calibrator 3C 279 (1253$`-`$055). Global fringe fitting was performed using the AIPS task FRING with the following solution intervals: 7 minutes (for the 15 GHz data), 6 minutes (8 GHz & 5 GHz), and 5 minutes (1.6 GHz). In both sets of observations, the versatile calibrator source 3C 279 was also used for bandpass calibration and polarization calibration. Following the application of all calibration solutions in AIPS, the data were averaged in frequency to a single channel. Editing, imaging and deconvolution were then performed using Difmap (Shepherd, Pearson, & Taylor 1995). Details of each image, including the restoring beam, peak, and rms noise, are given in Tables 2 and 4. Flux densities of the various components in each source were estimated by fitting elliptical Gaussian models to the self-calibrated visibility data using Difmap.
## 4 Results
Positive identification of CSOs is contingent upon acquiring multi-frequency observations in order to correctly identify the core of the source, which is expected to have a strongly inverted spectrum (Taylor et al. 1996a). This eliminates any asymmetric core-jet sources in which a compact jet component might appear similar to the core component at the discovery frequency. We have attempted to pinpoint the location of the core component for each source using the criteria of (1) a flat or inverted spectrum; (2) compactness; and (3) low fractional polarization. When extended emission (jets, hot-spots or lobes) is found on both sides of the core we classify the object as a CSO. When the core component is found to be at an extreme end of the source it is rejected as a CSO, and when no core component can be reliably identified the source is retained as a candidate CSO.
Images of the sources selected from the VCS which are deemed CSOs or CSO candidates are shown in Figure 1. Where possible, the core component has been identified with a cross. The frequency of the image shown is indicated in the upper right corner of each plot, and the beam is displayed in the lower left. Image parameters are outlined in Table 2. The last column in Table 2 indicates which sources can most reliably be identified as CSOs. The spectral index distributions for six of the newly identified CSOs are shown in Figure 2.
Table 3 lists the flux densities of each component at all frequencies at which follow-up observations were made. Column (1) lists the J2000 convention source name of the CSO candidate. Column (2) lists each component as identified in Figure 1. Columns (3) through (6) indicate the total flux density of each component, in mJy/beam, at 1.6, 5, 8.4, and 15 GHz respectively. The spectral indices for each component between 5 and 8.4 GHz are shown in column (7) and those between 8.4 and 15 GHz in column (8). Spectral indices between 1.6 GHz and other available frequencies were not calculated because the large differences in angular resolution and ($`u,v`$) coverage would render the results unreliable. Upper limits on polarized flux density at 8.4 GHz are shown (in mJy/beam) in column (9).
Images of the sources which have been determined not to be CSOs are shown in Figure 3. The frequency of the image shown is indicated in the upper right corner of each plot, and the beam is displayed in the lower left. Image parameters are outlined in Table 4. The flux densities of the components are detailed in Table 5, which is organized in the same manner as Table 3 above. Column (9) indicates the peak polarized flux for source components in which this was measurable, and gives an upper limit for the rest. Images of the polarized flux density in the 5 sources with significant detections are shown in Figure 4. The polarized flux is shown in grayscale, with contours from the 8 GHz continuum image superposed.
## 5 Discussion
### 5.1 The Incidence of CSOs
In the surveys we find the incidence of CSOs is 7/65 (11%) in PR, 18/411 (4.4%) in PR+CJ, and $`\sim `$39/1900 (2.1%) in PR+CJ+VCS. The main difference between these samples is the parent sample flux limit at 5 GHz which goes from 1.3 Jy in PR, to 0.35 Jy in CJ to $`\sim `$100 mJy in the VCS. Although the parent VLBI samples have somewhat different selection criteria, making it difficult to assess the significance of the difference in incidence, there does appear to be a trend to a lower CSO incidence among fainter sources. Complicating matters further, however, is the fact that since both the $`(u,v)`$ coverage and sensitivity of the VCS are considerably worse than the PR and CJ surveys, it is possible that some CSOs have been missed in the VCS. Data quality in the CJ survey was generally better than that from the earlier PR survey since many more telescopes were available.
### 5.2 Depolarization by the circumnuclear torus
Despite the fact that synchrotron emission is intrinsically polarized up to 70% (Burn 1966), less than 0.5% fractional polarization is seen in low resolution studies of CSOs at frequencies up to 5 GHz (Pearson & Readhead 1988, Taylor et al. 1996b). Even going to high resolution, Cawthorne et al. (1993) found less than 4 mJy of polarized flux (non-detections) at 5 GHz in the PR CSOs J0111+3906 and J2355+4950. In Table 3 we present limits on the polarized flux density at 8.4 GHz and $`\sim `$1 mas resolution for 21 CSOs and candidates in the COINS sample. In general our 3$`\sigma `$ limits on polarized flux density are less than 1.2 mJy/beam. These correspond to typical limits on the fractional polarization of $`<`$1%, and in stronger components to as low as $`<`$0.3%. These results are in sharp contrast to the fractional polarization of 1–20% typically seen in the jets of most compact sources (see Cawthorne et al. 1993 and our Table 5).
One possible explanation for the low observed linear polarization from CSOs is that their radiation is depolarized as it passes through a magnetized plasma associated with the circumnuclear torus. In order to depolarize the radio emission within our 8 MHz IF at 8.4 GHz, the Faraday rotation measures would need to be larger than 5 $`\times `$ 10<sup>5</sup> radians m<sup>-2</sup>, or alternatively the magnetic fields in the torus could be tangled on scales smaller than the telescope beam of $`\sim `$1 mas to produce gradients of 1000 radian m<sup>-2</sup> mas<sup>-1</sup> or more.
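The quoted rotation-measure threshold can be sanity-checked with a short calculation. The sketch below (Python) takes a rotation of $`\sim `$1 radian across the IF as the onset of bandwidth depolarization — that criterion is our assumption — and recovers a value of the same order as the 5 $`\times `$ 10<sup>5</sup> radians m<sup>-2</sup> cited above:

```python
# Rotation measure needed for bandwidth depolarization across an
# 8 MHz IF centred on 8.4 GHz.  Uses the standard Faraday-rotation
# law chi = RM * lambda^2; the ~1 rad onset criterion is an assumption.
C = 2.998e8  # speed of light, m/s

def lambda_sq(freq_hz):
    """Observing wavelength squared (m^2)."""
    return (C / freq_hz) ** 2

nu0, bw = 8.4e9, 8.0e6
# Change in lambda^2 from one band edge to the other.
dlam2 = lambda_sq(nu0 - bw / 2) - lambda_sq(nu0 + bw / 2)
rm_needed = 1.0 / dlam2  # rad m^-2 for ~1 rad of rotation across the band
print(f"RM ~ {rm_needed:.1e} rad/m^2")  # of order 4e5, consistent with 5e5
```

Different depolarization criteria (e.g. π radians across the band) shift the answer by small factors but not the order of magnitude.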
## 6 Summary
From the 33 sources initially selected from the VLBA Calibrator Survey, we find 10 sources which we can securely classify as Compact Symmetric Objects. Thirteen sources, including one previously unconfirmed candidate from the CJ sample, have been ruled out based on morphology, spectral index and polarization. Eleven of the original VCS sources require further investigation.
Once the redshifts of the remaining newly identified CSOs can be determined, extensive high spatial and spectral resolution studies of the neutral hydrogen, as well as the radio continuum and ionized gas distribution, will be undertaken. A complete sample of such sources will yield unique information about accretion processes and the fueling mechanism by which these young radio galaxies might evolve into much larger FRII type sources. Future observations of CSOs identified in the COINS sample will also be used to measure bi-directional motions and employ them as cosmological probes. Studies of the hot spot advance speeds in the COINS sample should eventually yield kinematic age estimates for the sources and further advance our understanding of the evolution of radio galaxies.
Finally, it is worth mentioning that CSOs are often useful calibrators since they are compact, fairly constant in total flux density, and have very low polarized flux density.
The authors thank Miller Goss for illuminating discussions and encouragement in the initial stages of this project. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under a cooperative agreement by Associated Universities, Inc. AP is grateful for support from NRAO through the pre-doctoral fellowship program. AP acknowledges the New Mexico Space Grant Consortium for partial publication costs. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. |
# The Radio Jets and Accretion Disk in NGC 4261
## 1 Introduction
The structure of accretion disks in the inner parsecs of active galactic nuclei can be probed on sub-parsec scales through VLBI observations of free-free absorption of synchrotron radiation from the base of radio jets. The nearby radio galaxy NGC 4261 (3C270) contains a pair of highly symmetric kpc-scale jets (Birkinshaw and Davies, 1985), and optical imaging with HST has revealed a large ($`\sim 300`$ pc), nearly edge-on nuclear disk of gas and dust (Ferrarese, Ford, and Jaffe, 1996). This suggests that the radio axis is close to the plane of the sky. Consequently, relativistic beaming effects should be negligible. This orientation also precludes gravitational lensing by the central black hole from affecting the observed jet-to-counterjet brightness (Bao and Wiita, 1997). In addition, the central milliarcsecond (mas) scale radio source is strong enough for imaging with VLBI (Jones, Sramek, and Terzian, 1981). The combination of high angular resolution provided by VLBI at 43 GHz and the relative closeness of NGC 4261 (41 Mpc; Faber, et al. (1989)) gives us a very high linear resolution: approximately 0.1 mas $`\approx `$ 0.02 pc $`\approx `$ 4000 AU $`\approx `$ 400 Schwarzschild radii for a $`5\times 10^8\mathrm{M}_{}`$ black hole. Thus, NGC 4261 is a good system both for studying an edge-on accretion disk and for probing intrinsic differences between a jet and counterjet.
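The chain of linear scales quoted above follows from a few lines of arithmetic; the sketch below reproduces it using standard physical constants and the numbers given in the text:

```python
# Linear resolution implied by 0.1 mas at the distance of NGC 4261
# (41 Mpc), and comparison with the Schwarzschild radius of a
# 5e8 solar-mass black hole.  Constants are standard values.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # kg
PC_M = 3.086e16        # metres per parsec
AU_M = 1.496e11        # metres per AU
MAS_RAD = math.radians(1.0 / 3.6e6)   # one milliarcsecond in radians

def linear_size_pc(theta_mas, distance_mpc):
    """Projected linear size (pc) subtended by theta_mas at distance_mpc."""
    return theta_mas * MAS_RAD * distance_mpc * 1e6

def schwarzschild_radius_au(mass_msun):
    """R_s = 2GM/c^2, expressed in AU."""
    return 2 * G * mass_msun * M_SUN / C**2 / AU_M

size_pc = linear_size_pc(0.1, 41.0)     # ~0.02 pc
size_au = size_pc * PC_M / AU_M         # ~4000 AU
r_s = schwarzschild_radius_au(5e8)      # ~10 AU
print(size_pc, size_au, size_au / r_s)  # ~0.02 pc, ~4000 AU, ~400 R_s
```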
Our previous 8.4 GHz VLBA image of NGC 4261 (Figure 4 in Jones and Wehrle (1997), hereafter JW; reproduced here as Figure 1) showed the inner parsec of the jet and counterjet, including a surprisingly narrow gap in emission at the base of the counterjet which we suspected was caused by free-free absorption in an accretion disk.
The central brightness peak at 8.4 GHz was identified as the core by JW based on its inverted spectral index between 8.4 and 1.6 GHz. The gap seen at 8.4 GHz is similar to the even more dramatic gap seen in the center of NGC 1052 (0238-084) at 15 GHz by Kellermann, et al. (1998). We report here 22 and 43 GHz VLBA observations which were made to map this gap with higher resolution. Our goals were to derive the physical characteristics of the disk, namely the allowed ranges of thickness, diameter, electron density, and magnetic field strength.
## 2 Observations
We observed NGC 4261 using 9 stations of the VLBA with full tracks on 7 September 1997, alternating 22-minute scans between 22.2 and 43.2 GHz. Good quality data were obtained from the Hancock, Fort Davis, Pie Town, Kitt Peak, Owens Valley, and Mauna Kea stations, while rain affected North Liberty and Los Alamos and technical problems occurred at Brewster. The Saint Croix antenna was stowed due to hurricane Erika. Left circular polarization was recorded at both frequencies, with a total bandwidth of 64 MHz. Phase offsets between frequency channels were determined and corrected using both 3C273 and 1308+326.
The data were calibrated and fringe-fit using standard routines in AIPS<sup>1</sup><sup>1</sup>1The Astronomical Image Processing System was developed by the National Radio Astronomy Observatory. and imaging, deconvolution, and self-calibration were carried out in Difmap (Shepherd, Pearson, and Taylor, 1994). Amplitude calibration of VLBI data at 22 and 43 GHz is often problematic due to rapidly changing tropospheric water vapor content. We checked our a priori amplitude calibration by comparing short-baseline flux densities of 3C273 with total flux density measurements made at 22 and 37 GHz with the 14-m Metsähovi antenna. In both cases our flux densities agree with those from Metsähovi to within 15%.
An 8.4 GHz image of NGC 4261, made in a similar manner from VLBA data obtained in April 1995, is shown in Figure 2 for comparison with our newer, higher frequency images. The beam size at 8.4 GHz was $`1.84\times 0.80`$ mas with the major axis almost exactly north-south. Figure 3 illustrates a possible geometry of the central region which is consistent with the radio morphology seen in Figures 1 and 2.
The full resolution (uniform weighting, no taper) VLBA images at 22 and 43 GHz are shown in Figures 4 and 5, respectively. The beam sizes at 22 and 43 GHz are $`1.06\times 0.29`$ mas and $`0.61\times 0.16`$ mas, with the major axis within 20° of north-south. We also made naturally weighted images at both frequencies to search for more extended emission, but no detectable emission was found outside of the area shown in Figures 4 and 5.
In addition, Figure 6 shows the 43 GHz image after convolution with the same restoring beam as used in Figure 4 to allow spectral index measurements. The same field of view is shown in Figures 4 and 6.
The wide range of baseline lengths provided by the VLBA should make it possible to obtain sufficient overlap of the (u,v) coverage between 22 and 43 GHz for spectral index determinations. However, the correct offset between our 22 and 43 GHz images is not known a priori because we do not have absolute positions. A range of plausible offsets was tried. However, we found that relatively small changes in offset (less than half the E-W beam width) produce large changes in the spectral index map. Offsets which do not produce unreasonably large or small spectral index values tend to show that the most inverted spectral slope occurs at the position of the presumed accretion disk absorption and not at the position of the bright compact core, and that the spectrum becomes steeper away from the central core and accretion disk region. However, our data do not constrain the spectral index distribution in detail.
To see if amplitude self-calibration had significantly changed the flux density scales of our 22 or 43 GHz images, we compared the peak surface brightness from images made with only phase self-calibration and images made with full amplitude and phase self-calibration. The resulting amplitude corrections applied to the data were 19$`\%`$ at 22 GHz and 12$`\%`$ at 43 GHz. Uncertainties in the absolute flux density scales can shift all of the values in a spectral index map by a constant amount, but will not change the shape of the distribution.
## 3 Results
### 3.1 The Parsec-scale Radio Jets
The position angle of the jets is $`87^{\circ }\pm 8^{\circ }`$ in our VLBI images, and $`88^{\circ }\pm 2^{\circ }`$ on VLA images (Birkinshaw and Davies, 1985). The orientation of the jet axis remains the same on kpc and sub-pc scales, indicating that the spin axis of the inner accretion disk and black hole has remained unchanged for at least $`10^6`$ years (assuming an average expansion speed of 0.1 c), and possibly much longer.
A comparison of our 8, 22, and 43 GHz VLBA images indicates that the region just east of the core, including the first half parsec of the counterjet, has a highly inverted spectrum ($`\alpha >0`$, where $`S_\nu \propto \nu ^\alpha `$). It is plausible that free-free absorption by gas in the central accretion disk is responsible for this. The jet and counterjet both have steep radio spectra ($`\alpha <0`$) far from the core, as expected. Note that if the free-free absorption model is correct, the most inverted spectrum may not be located at the position of the true core (the "central engine").
We can set an upper limit for the jet opening angle by noting that the jet appears unresolved in the transverse (north-south) direction out to at least 4 mas from the absorption feature in Figure 2. Using one quarter of the N-S beam width as an upper limit to the extent of emission in the transverse jet direction gives an upper limit of $`5^{\circ }`$ for the full opening angle during the first 0.8 pc of the jet. A lower limit for the opening angle can be obtained by requiring that the angular size of emission at the location of the bright peak 1 mas from the absorption feature in Figure 2 be large enough to avoid synchrotron self-absorption. This requires an opening angle of $`>0.3^{\circ }`$ during the first 0.2 pc of the jet. Since we believe that the radio jets in NGC 4261 are nearly perpendicular to our line of sight, projection effects should be minimal and the intrinsic jet opening angle $`\theta `$ satisfies $`0.3^{\circ }<\theta <20^{\circ }`$ during the first 0.2 pc of the jet and $`\theta <5^{\circ }`$ during the first 0.8 pc of the jet.
Since emission from both the jet and counterjet is detectable with VLBI, it may be possible to measure proper motions on both sides of the core in this source. If so, the orientation of the radio jets with respect to our line of sight can be found, and any resulting small relativistic beaming effects on the jet/counterjet brightness can be taken into account. A more sensitive 43 GHz VLBI image (where free-free absorption is minimal) can then be used to see just how similar the jet acceleration and collimation processes are on both sides of a “central engine” at the same epoch.
### 3.2 The Inner Accretion Disk
The fact that our 1.6 GHz image (see Figure 1 in JW) is highly symmetric lets us set an upper limit to the angular size of any absorption feature. At this frequency the free-free optical depth should be large, so to avoid detection the angular size of the absorption feature must be much smaller than our angular resolution at 1.6 GHz. The restoring beam in Figure 1 of JW has an east-west size of 9 mas. The jet/counterjet brightness ratio is unity from the core out to at least $`\pm 25`$ mas. Using the observed brightness ratio of $`\sim 1`$ at $`\pm 10`$ mas ($`\pm 2`$ pc), just larger than our resolution, tells us that the transverse size of any deep absorption feature is $`<<2`$ pc. It is expected that the inner pc or so of an accretion disk orbiting a massive ($`10^8`$–$`10^9\mathrm{M}_{}`$) black hole will be geometrically thin, and consequently we will assume a typical disk thickness of $`<<0.1`$ pc and a nominal line-of-sight path length through the inner disk of $`\sim 0.1`$ pc. We now use the HI and CO column densities measured by Jaffe and McNamara (1994), $`\sim 10^{21}\mathrm{cm}^{-2}`$, to estimate an electron number density of $`n_e\approx 3\times 10^3(0.1/L)\mathrm{cm}^{-3}`$, where $`L`$ is the path length in pc. A slightly lower density would be deduced using the X-ray absorption column density of $`\mathrm{N}_\mathrm{H}<4\times 10^{20}`$ cm<sup>-2</sup> from Worrall and Birkinshaw (1994). In both cases it is assumed that there will be on average one free electron per nucleus. This is a conservative assumption since the inner 0.1 pc of the disk should be highly ionized and have a density at least as large as the outer neutral regions. The inclination of the disk is needed to further constrain the path length $`L`$ and thus the average electron density $`n_e`$.
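The lower density bound follows directly from dividing the measured column density by the path length. A minimal check, using the ~10<sup>21</sup> cm<sup>-2</sup> column and the nominal L = 0.1 pc from the text:

```python
# Average electron density implied by a column density of 1e21 cm^-2
# spread over a path length L through the disk (values from the text).
PC_CM = 3.086e18  # cm per parsec

def n_e_from_column(column_cm2, path_pc):
    """Mean electron density (cm^-3), assuming one free electron per nucleus."""
    return column_cm2 / (path_pc * PC_CM)

n_e = n_e_from_column(1e21, 0.1)
print(f"n_e ~ {n_e:.1e} cm^-3")  # ~3e3 cm^-3, matching the text
```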
To get an upper limit for $`n_e`$, we note that the jet/counterjet brightness ratio at 43 GHz near the core in Figure 6 is also small ($`\sim 2`$). This implies a low optical depth at 43 GHz. Assuming an electron temperature of $`10^4`$ K (plausible at a disk radius of 0.1 pc), an optical depth $`\tau <1`$ at 43 GHz requires $`n_e<4\times 10^5\sqrt{0.1/L}\mathrm{cm}^{-3}`$. However, the temperature of the disk increases at smaller radii, possibly reaching $`10^7`$–$`10^8`$ K near the inner edge. At these temperatures, an electron number density of $`10^8\sqrt{0.1/L}\mathrm{cm}^{-3}`$ is needed to produce an optical depth of unity at 43 GHz. The upper limit for electron number density will be somewhere between $`4\times 10^5\sqrt{0.1/L}`$ and $`10^8\sqrt{0.1/L}\mathrm{cm}^{-3}`$, depending on the actual range of electron temperatures along the line of sight.
Thus, the electron number density of the inner accretion disk, averaged over the line of sight, is
$$3\times 10^3(0.1/L)<n_e<10^8\sqrt{0.1/L}\mathrm{cm}^{-3}.$$
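The $`T_e=10^4`$ K upper bound can be spot-checked with the approximate free-free opacity of Mezger & Henderson (1967), $`\tau \approx 3.28\times 10^{-7}(T_e/10^4\mathrm{K})^{-1.35}(\nu /\mathrm{GHz})^{-2.1}(EM/\mathrm{pc}\mathrm{cm}^{-6})`$. The choice of this particular approximation is ours, not necessarily the expression used in the text, so agreement is expected only to within a factor of $`\sim `$2:

```python
# Upper bound on n_e from requiring free-free optical depth tau < 1
# at 43 GHz over a 0.1 pc path at T_e = 1e4 K.  Uses the approximate
# Mezger & Henderson (1967) opacity -- our choice of formula, so the
# result agrees with the text's 4e5 only to within a factor of ~2.
def n_e_max(tau, t_e, nu_ghz, path_pc):
    """Density (cm^-3) at which the emission measure EM = n^2 L gives tau."""
    em = tau / (3.28e-7 * (t_e / 1e4) ** -1.35 * nu_ghz ** -2.1)  # pc cm^-6
    return (em / path_pc) ** 0.5

n_hot = n_e_max(1.0, 1e4, 43.0, 0.1)
print(f"n_e < {n_hot:.1e} cm^-3")  # ~3e5, same order as the quoted 4e5
```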
For comparison, the inner accretion disk believed to be responsible for the unusually inverted spectrum of the radio core in Centaurus A has an average electron number density of at least $`10^5\mathrm{cm}^{-3}`$ for a path length of 1 pc (Jones, et al., 1996). Assuming a thin inner disk in NGC 4261 with a radius of 1 pc and a thickness of 0.01 pc, an average electron density of $`10^6`$ cm<sup>-3</sup>, and one proton per electron, the mass of ionized gas in the disk is $`\sim 10^3\mathrm{M}_{}`$. Of course, a larger assumed thickness or radius would lead to a larger inner accretion disk mass. Even $`10^3\mathrm{M}_{}`$ is sufficient to fuel the central engine for $`10^6`$ years at an accretion rate of $`10^{-3}\mathrm{M}_{}`$ per year, consistent with the observed luminosity. A more realistic disk model with thickness increasing with radius and density decreasing with radius is described in Appendix A. This model predicts a lower total mass of ionized gas within 1 pc, which implies that material must migrate through the disk from radii $`>1`$ pc during the source lifetime. It is interesting to note that the electron number density we derive at 0.1 pc is comparable to that in the solar corona at $`2\mathrm{R}_{}`$. That is, the disk is tenuous and optically thin to visible light.
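The ~10<sup>3</sup> M<sub>⊙</sub> disk mass and ~10<sup>6</sup> yr fueling time can be reproduced from the stated geometry. The sketch below assumes, as in the text, one proton per electron and an accretion rate of order 10<sup>-3</sup> M<sub>⊙</sub> per year (the exponent sign as implied by the quoted fueling time):

```python
# Ionized gas mass of a uniform thin disk (radius 1 pc, thickness
# 0.01 pc, n_e = 1e6 cm^-3), and the fueling time at 1e-3 Msun/yr.
import math

PC_CM = 3.086e18      # cm per parsec
M_P = 1.673e-24       # proton mass, g
M_SUN_G = 1.989e33    # solar mass, g

def disk_mass_msun(radius_pc, thickness_pc, n_e):
    """Ionized gas mass (Msun), one proton per electron."""
    volume = math.pi * (radius_pc * PC_CM) ** 2 * thickness_pc * PC_CM
    return volume * n_e * M_P / M_SUN_G

mass = disk_mass_msun(1.0, 0.01, 1e6)
fuel_time_yr = mass / 1e-3           # years at 1e-3 Msun per year
print(mass, fuel_time_yr)            # ~8e2 Msun, ~8e5 yr: i.e. ~1e3 and ~1e6
```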
Equating the thermal gas pressure in the disk with the local magnetic pressure ($`B^2=8\pi \alpha n_ekT`$, where $`\alpha \approx 0.01`$ is the usual viscosity parameter; Shakura and Sunyaev (1973)) gives a disk magnetic field of $`10^{-2}`$–$`10^{-4}`$ gauss at 0.1 pc. General expressions for estimating physical parameters in an optically thin, gas pressure dominated accretion disk at radii of $`0.1`$–$`1`$ pc are derived in Appendix A. Table 1 compares these parameter values with those for other galaxies in which there is evidence for an inner accretion disk. Of the five nearby ($`<100`$ Mpc) radio galaxies for which some inner accretion disk characteristics are known, only two (NGC 4261 and NGC 4258) have well-determined central black hole masses (see Table 1).
## 4 Discussion
A geometrically thin inner disk which would not completely obscure optical emission from the core is a plausible model given the very high percentage of low luminosity 3CR radio galaxies which have bright, unresolved optical continuum sources visible on HST/WFPC2 images (Chiaberg, Capetti, and Celotti, 1998). This includes NGC 4261. A geometrically and optically thick dusty torus would obscure the central optical continuum source in many low luminosity objects with FR I radio morphology, since these sources are expected to be oriented at large angles to our line of sight. A nearly edge-on disk orientation may also explain the low bolometric luminosity of the NGC 4261 nucleus and lack of an ultraviolet excess in its spectral energy distribution (Ho, 1999). Additional constraints on the inclination and density of the inner disk could come from observations of free-free radio emission from ionized disk gas or radio emission from the central jets scattered by electrons in the far inner edge of the accretion disk (Gallimore, Baum, and O’Dea, 1997). However, the dynamic range of our VLBA images is not adequate to detect this emission in the presence of the bright parsec-scale radio jets. Another way to constrain the radio axis, and thus the inner disk, orientation would be measurement of proper motions in both the jet and counterjet (e.g., Taylor, Wrobel, and Vermeulen (1998)). VLBA observations to attempt this are underway.
The above analysis makes use of the fact that the observed jet/counterjet brightness ratio in NGC 4261, and in some other galaxies which are well-observed with VLBI, peaks at intermediate frequencies and falls to nearly unity at both low frequencies (where the beam is much larger than the angular size of the absorbing material) and high frequencies (where the free-free optical depth becomes very small). See, for example, Figure 15 in Krichbaum, et al. (1998). At low frequencies the brightness ratio $`R`$ should decrease approximately linearly with frequency ($`\theta _{\mathrm{beam}}\propto \nu ^{-1}`$), while at high frequencies the fall-off of $`R`$ should be more rapid ($`\tau _{\mathrm{ff}}\propto \nu ^{-2}`$). With VLBA images at four frequencies we can not confirm this behaviour in detail, but it is clear that near the core the brightness ratio $`R`$ is greater at 8.4 GHz than at 1.6, 22, or 43 GHz.
The mass of the central black hole in NGC 4261 is $`(5\pm 1)\times 10^8\mathrm{M}_{}`$ (Ferrarese, Ford, and Jaffe, 1996). Thus, the mass of material in the inner accretion disk is negligible compared to the black hole mass, and the orbital period of material in the inner accretion disk at a radius $`r`$ (in pc) is $`4\times 10^9(r/0.1)^{3/2}`$ seconds ($`\sim 10^2`$ years for $`r`$ = 0.1 pc). The spin rate of the central black hole is unknown, but is predicted to be small to moderate by the model of Meier (1999). A lower limit on the spin of the black hole can be derived from the known mass of the hole, the assumed accretion rate ($`10^{-3}\mathrm{M}_{}`$ per year), and the spin axis alignment timescale ($`>10^6`$ years) implied by the co-linear kpc scale jets. The resulting dimensionless spin parameter (see Misner, Thorne, and Wheeler (1973)) is $`>2\times 10^{-4}`$, which allows either high or low black hole spin rates.
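The quoted orbital period is just Kepler's third law for a 5 × 10<sup>8</sup> M<sub>⊙</sub> point mass; a quick check:

```python
# Keplerian orbital period at r = 0.1 pc around a 5e8 Msun black hole.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC_M = 3.086e16        # m per parsec

def orbital_period_s(mass_msun, radius_pc):
    """Keplerian period 2*pi*sqrt(r^3 / GM), in seconds."""
    r = radius_pc * PC_M
    return 2 * math.pi * math.sqrt(r ** 3 / (G * mass_msun * M_SUN))

p = orbital_period_s(5e8, 0.1)
print(p, p / 3.156e7)   # ~4e9 s, ~1.3e2 yr, matching the text's estimate
```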
The angular momentum of gas in the inner accretion disk is expected to be aligned with the spin axis of the black hole (Bardeen and Petterson, 1975). If the black hole is spinning slowly, its spin axis will eventually become aligned with the angular momentum of the accreting gas at large radii (Natarajan and Pringle, 1998). However, the long-term directional stability of the radio jets in NGC 4261 implies that the gas falling into the central region of the galaxy and supplying the central engine has had a constant angular momentum direction for most of the life of the radio source. This in turn implies that a single merger event may be responsible for the supply of gas in the nucleus of NGC 4261.
## 5 Conclusions
We have imaged the nuclear radio source in NGC 4261 with the VLBA at four frequencies. We find that the jet/counterjet brightness ratio near the core is larger at 8.4 GHz than at lower frequencies. This can be explained by a combination of two effects: low angular resolution at 1.6 GHz which masks small-scale brightness variations, and low absorption by ionized thermal gas at 22 and 43 GHz. The brightness asymmetry at 8.4 GHz could be caused by a nearly edge-on inner accretion disk. If so, the (model dependent) electron density in the inner 0.1 pc of the disk has an average value between $`3\times 10^3`$ and $`10^8\mathrm{cm}^{-3}`$. Future observations to measure proper motions in the jet and counterjet will better define the orientation of the radio axis and thus the inclination of the inner accretion disk. This will determine the path length $`L`$ through the disk, and consequently reduce the allowable range of electron number density averaged over the path length.
The optically thin, uniform temperature model disk described in the appendix, derived only from accretion physics and values of the black hole mass and accretion rate consistent with NGC 4261, is remarkably similar to the disk that we observe in free-free absorption against the radio jet. We therefore believe that these observations have detected the sub-pc accretion disk powering the active nucleus in NGC 4261.
The Very Long Baseline Array is part of the National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated by Associated Universities, Inc., under a cooperative agreement with the NSF. A.W. gratefully acknowledges support from the NASA Long Term Space Astrophysics Program. We thank H. Teräsranta for making the Metsähovi flux density measurements of 3C273 available, and the anonymous referee for helpful suggestions. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
## Appendix A Isothermal, Optically-thin Accretion Disks far from Black Holes
In this appendix we describe a simple accretion disk model, suitable for optically thin gas-pressure dominated flow far from the black hole, in which the plasma temperature is a very slowly varying function of distance from the hole. The canonical temperature is assumed to be $`10^4\mathrm{K}`$ for the range of interest in this model. (Such a temperature is typical of an optically thin, warm plasma undergoing moderate heating to keep it mostly ionized.) Following Shakura and Sunyaev (1973), and ignoring the factor of $`(1-r^{-1/2})\approx 1`$ (since at $`0.1`$–$`1.0`$ pc, $`r=R/3R_{Sch}=3400`$–$`34,000`$ for a $`10^8\mathrm{M}_{\mathrm{\odot }}`$ black hole), we obtain the following equations for the disk structure, expressed in units of $`M_8\equiv M_{BH}/10^8\mathrm{M}_{\mathrm{\odot }}`$ for the black hole mass, $`\dot{M}_3\equiv \dot{M}/(10^{-3}\mathrm{M}_{\mathrm{\odot }}\mathrm{yr}^{-1})=\dot{M}/(6.3\times 10^{22}\mathrm{g}\mathrm{s}^{-1})`$ for the accretion rate, and $`R_{18}\equiv R/10^{18}\mathrm{cm}`$ for the distance from the black hole.
From the equation for hydrostatic equilibrium in the direction perpendicular to the disk, the disk half-thickness is
$$H=0.0026\,\mathrm{pc}\,M_8^{-1/2}R_{18}^{3/2}$$
(A1)
For NGC 4261, with $`M_8=5`$, the full disk thickness at a radius of 1 pc ($`R_{18}=3`$) is 0.012 pc, very close to the 0.01 pc thickness assumed for the simple uniform thickness disk in section 3.2. Adopting a radiative cooling rate of $`3\times 10^{-23}n^2\mathrm{erg}\,\mathrm{cm}^{-3}\,\mathrm{s}^{-1}`$ at $`10^4\mathrm{K}`$ (Krolik, 1999) and setting it equal to the energy generated locally by viscous (accretion) heating, we obtain the particle density
$$n=1.5\times 10^4\,\mathrm{cm}^{-3}\,M_8^{3/4}\dot{M}_3^{1/2}R_{18}^{-9/4}$$
(A2)
The approximate optical depth of the disk in a direction perpendicular to it, due to the inverse absorption processes (at $`\nu \sim kT/h`$, optical frequencies), is
$$\tau _{abs}=1.4\times 10^{-11}\,M_8\dot{M}_3R_{18}^{-3}$$
(A3)
and that due to electron scattering is
$$\tau _{es}\approx 1.5\times 10^{-4}\,M_8^{1/4}\dot{M}_3^{1/2}R_{18}^{-3/4}$$
(A4)
(an upper limit, since the gas is not fully ionized). At IR-optical-UV wavelengths, therefore, the disk is very optically thin. It becomes optically thick to free-free absorption in the perpendicular direction ($`\kappa _\nu m_pn\,2H\approx 1`$) only below the frequency
$$\nu _{perp}=250\,\mathrm{MHz}\,M_8^{1/2}\dot{M}_3^{1/2}R_{18}^{-3/2}$$
(A5)
However, if viewed nearly edge-on, with the line-of-sight passing roughly from $`R_{18}/3`$ to $`3R_{18}`$ (i.e., $`0.1`$–$`1.0\,\mathrm{pc}`$), the disk will be optically thick to free-free ($`\kappa _\nu m_pn\,𝑑R\approx 1`$) below the frequency
$$\nu _{edgeon}=8\,\mathrm{GHz}\,M_8^{3/4}\dot{M}_3^{1/2}R_{18}^{-7/4}$$
(A6)
For $`M_8=5`$ we get
$$\nu _{edgeon}=27\,\mathrm{GHz}\,\dot{M}_3^{1/2}R_{18}^{-7/4}$$
(A7)
The mass of the disk within this radius range, and flow time scale ($`R/v_R`$), are
$`M_{disk}=3.9\,\mathrm{M}_{\mathrm{\odot }}\,M_8^{1/4}\dot{M}_3^{1/2}R_{18}^{5/4}`$ (A8)
$`t_{acc}=1200\,\mathrm{yr}\,M_8^{1/4}\dot{M}_3^{-1/2}R_{18}^{5/4}`$ (A9)
Three consistency checks on the model are the ratio of the half-thickness to disk radius at a given point, the ratio of the accretion (inflow) velocity to the Keplerian velocity, and the ratio of inward heat advection to radiative cooling, all of which should be small, given the assumptions of the model. We find that
$$\frac{H}{R}=0.008\,M_8^{-1/2}R_{18}^{1/2}$$
(A10)
for the disk height,
$$\frac{v_R}{v_K}=0.23\,M_8^{-3/4}\dot{M}_3^{1/2}R_{18}^{1/4}$$
(A11)
for the inflow velocity, and
$$\frac{\dot{\epsilon }_{adv}}{\dot{\epsilon }_{rad}}=2.5\times 10^{-4}\,M_8^{-1}R_{18}$$
(A12)
for the magnitude of the advection of thermal energy. Each of these three ratios will be even smaller if $`M_8=5`$, the value appropriate for NGC 4261, is used. (Note that, because of the low optical depth, the photon distribution is not a black body and has an even lower energy density than the thermal gas.) The assumption of a thin, radiating, $`10^4\mathrm{K}`$, Keplerian accretion disk, therefore, is consistent with the model calculations, as well as with our observational results for the absorbing disk in NGC 4261.
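As a numerical cross-check, the scaling relations of Eqs. (A1)–(A11) are easy to evaluate. The sketch below is illustrative and not part of the original analysis; the exponent signs are taken as negative where required for consistency with the NGC 4261 numbers quoted in the text (a full thickness $`2H`$ of 0.012 pc at 1 pc for $`M_8=5`$, and $`\nu _{edgeon}=27`$ GHz).

```python
# Illustrative evaluation of the isothermal-disk scalings (A1), (A2),
# (A6), (A10), (A11); exponent signs chosen to reproduce the quoted
# NGC 4261 values (2H ~ 0.012 pc at 1 pc; nu_edgeon ~ 27 GHz for M8 = 5).
def disk_model(M8, Mdot3, R18):
    return {
        "H_pc": 0.0026 * M8**-0.5 * R18**1.5,                     # half-thickness (A1)
        "n_cm3": 1.5e4 * M8**0.75 * Mdot3**0.5 * R18**-2.25,      # density (A2)
        "nu_edge_GHz": 8.0 * M8**0.75 * Mdot3**0.5 * R18**-1.75,  # edge-on thick below (A6)
        "H_over_R": 0.008 * M8**-0.5 * R18**0.5,                  # thin-disk check (A10)
        "vR_over_vK": 0.23 * M8**-0.75 * Mdot3**0.5 * R18**0.25,  # inflow check (A11)
    }

d = disk_model(M8=5.0, Mdot3=1.0, R18=3.0)   # NGC 4261-like parameters, R ~ 1 pc
print(f"2H = {2 * d['H_pc']:.3f} pc")        # ~0.012 pc, as quoted in the text
print(f"n  = {d['n_cm3']:.2e} cm^-3")
```

At $`R_{18}=0.3`$ (0.1 pc) the same relations give a density of a few $`\times 10^5\mathrm{cm}^{-3}`$, inside the observationally allowed range quoted in the abstract.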
| Table 1: Characteristics of Accretion Disks | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source | Dist. | $`S_{\mathrm{jet}}`$ | Height | Radius | $`\mathrm{n}_e`$ | Disk mag. | $`\mathrm{M}_{\mathrm{BH}}`$ | Assumed | |
| name | Mpc | (Jy) | (pc) | (pc) | ($`\mathrm{cm}^{-3}`$) | field (G) | ($`\mathrm{M}_{\mathrm{\odot }}`$) | temp. (K) | Ref. |
| NGC 5128 | 3.5 | 7.0 | $`<1`$ | $`>1`$ | $`>10^5`$ | $`>2\times 10^{-3}`$ | | $`10^4`$ | 1 |
| NGC 4258 | 6.4 | 0.003 | $`<0.1`$ | $`0.3`$ | | $`3\times 10^{-1}`$ | $`3.5\times 10^7`$ | N/A | 2 |
| NGC 1052 | 17 | 1.0 | $`0.1`$ | $`>1`$ | $`10^5`$ | $`3\times 10^{-3}`$ | | $`10^4`$ | 3 |
| NGC 4261 | 41 | 0.5 | $`0.01`$ | $`0.1`$ | $`10^6`$ | $`10^{-4}`$–$`10^{-2}`$ | $`5\times 10^8`$ | $`10^4`$ | 4 |
| Mkn 348 | 56 | 0.5 v | $`0.1`$ | $`1`$ | $`10^5`$–$`10^7`$ | $`6\times 10^{-4}`$ | | $`8\times 10^3`$ | 5 |
| Mkn 231 | 168 | 0.03 v | $`0.1`$ | $`1`$ | $`10^5`$–$`10^7`$ | $`6\times 10^{-4}`$ | | $`8\times 10^3`$ | 5 |
| Cygnus A | 225 | 1.0 v | $`<0.3`$ | $`<15`$ | $`>7\times 10^3`$ | $`>4\times 10^{-4}`$ | | $`>3\times 10^3`$ | 6 |
| 1946+708 | 400 | 0.3 | $`<20`$ | $`<50`$ | $`>2\times 10^2`$ | $`>10^{-5}`$ | | $`8\times 10^3`$ | 7 |
| NGC 1275 | $`700`$ | 30 v | $`0.1`$ | $`>25.0`$ | $`7\times 10^4`$ | $`8\times 10^{-4}`$ | $`10^8`$ | $`10^4`$ | 8 |
Using H<sub>0</sub> = 75 km s<sup>-1</sup> Mpc<sup>-1</sup> for distances given as redshifts. The four nearest galaxies have published distances which do not depend on H<sub>0</sub>.
“v” indicates variability
Table 1 references:
1. Jones et al. 1996; Tingay et al. 1998.
2. Herrnstein et al. 1997; 1998.
3. Braatz et al. 1999; Kellermann et al. 1999.
4. This paper.
5. Ulvestad et al. 1999.
6. Krichbaum et al. 1998.
7. Peck et al. 1999.
8. Dhawan et al. 1998.
no-problem/9912/quant-ph9912063.html | ar5iv | text | # Efficient microwave-induced optical frequency conversion
## I Acknowledgments
We are very grateful to Prof. L. Windholz for his continuous interest in this work and useful discussions. D.V. Kosachiov thanks the members of the Institut für Experimentalphysik, TU Graz, for hospitality and support. This study was supported by the Austrian Science Foundation under project No. P 12894-PHY.
Figure captions
Fig. 1. $`\mathrm{\Lambda }`$ system with two metastable states $`|1`$ and $`|2`$. $`\omega _{31}`$ and $`\omega _{32}`$ are the optical frequencies, $`\omega _m`$ is the microwave frequency.
Fig. 2. Spatial variations of: (a) optical field intensities $`I_{31}`$ (solid curve) and $`I_{32}`$ (dashed curve) in units of the input intensity $`I_0\equiv I_{31}\left(\zeta =0\right)`$, (b) the relative phase $`\mathrm{\Phi }`$, in a vapor of <sup>23</sup>Na atoms interacting with radiation in a $`\mathrm{\Lambda }`$ configuration of levels $`3^2S_{1/2}(F=1)`$–$`3^2S_{1/2}(F=2)`$–$`3^2P_{1/2}`$. Vapor temperature $`T=440`$ $`K`$, $`\mathrm{\Gamma }=10^{-4}\gamma `$, detunings $`\mathrm{\Delta }_{31}=\mathrm{\Delta }_{32}=0`$, Rabi frequencies of input fields $`g_{31}\left(\zeta =0\right)=2.0,g_m=0.02`$.
Fig. 3. Generation of the $`E_2`$ wave (in units of the input intensity $`I_0\equiv I_{31}\left(\zeta =0\right)`$) as a function of the microwave frequency $`\omega _m`$ (in units of the excited state relaxation rate $`\gamma `$). Other parameters are the same as in Fig. 2.
Fig. 4. Dependence of the generated intensity $`I_{32}/I_0`$ on detuning $`\mathrm{\Delta }_{31}`$ (in units of the excited state relaxation rate $`\gamma `$) for fixed $`\omega _m=\left(E_2-E_1\right)/\mathrm{\hslash }`$, at the optical length $`\zeta =340`$. Other parameters are the same as in Fig. 2.
Fig. 5. Dependence of the generated intensity $`I_{32}/I_0`$ on the Rabi frequency of microwave field $`g_m`$ at the optical length $`\zeta =340`$. Other parameters are the same as in Fig. 2. Inset shows the range of small $`g_m`$. |
no-problem/9912/astro-ph9912340.html | ar5iv | text | # HST and VLA Observations of the H2O Gigamaser Galaxy TXS2226-1841
## 1. Introduction
In recent years a number of active galaxies have been found to have powerful H<sub>2</sub>O maser emission in their nuclei (e.g. Braatz, Wilson, & Henkel 1994; 1996). It is known that the H<sub>2</sub>O megamaser phenomenon is associated with nuclear activity since all such megamaser sources are in either Seyfert 2 or LINER nuclei. The standard model for Seyfert galaxies involves a central engine (black hole and accretion disk) producing ionizing radiation, and an “obscuring torus” which shadows the ionizing radiation into bi-conical beams along its rotation axis (see Antonucci 1993 for a review). This beaming is readily seen in some Seyferts as bi-conical emission-line structures (e.g. Pogge 1989). Extended radio emission, when present, is usually aligned with the emission-line gas (e.g. Wilson & Tsvetanov 1994). Detailed studies also indicate a strong interaction between the radio ejecta and the optically visible ionized gas (Capetti et al. 1996; Falcke et al. 1996; Falcke, Wilson, & Simpson 1998; Ferruit et al. 1999).
It appears reasonable to infer that the masers trace molecular material associated with the obscuring torus or an accretion disk that feeds the nucleus. This notion was confirmed in great detail by VLBI observations of the megamaser in NGC 4258 (Miyoshi et al. 1995; Greenhill et al. 1995). The positions and velocities of the H<sub>2</sub>O maser lines show that the masing region is a thin disk in Keplerian rotation around a central mass of $`3.910^7M_{}`$ at a distance of $``$0.16 pc from that mass (Herrnstein et al. 1999).
Although plausible scenarios for the megamaser phenomenon exist (e.g. Neufeld & Maloney 1995), it is by no means clear how the material which obscures the nucleus (the “obscuring torus”) and the masing disk are related. The masing disk may be part of a geometrically thin, molecular accretion disk at smaller radii than the torus, or the thin, central plane of a thick torus in which the column density is high enough for strong amplification. Alternatively, the whole structure could be a warped thin disk, so the masing gas might be misaligned with the central accretion disk. The most straightforward picture consistent with current data would, however, have the masing disk, obscuring torus and any more extended molecular cloud distribution as one coherent accretion structure feeding the central engine, with the ionized thermal and non-thermal radio plasma roughly along the rotation axis.
We have therefore started a program to observe the narrow-line regions (NLR) of all known megamaser galaxies with the Hubble-Space-Telescope (HST) to establish this often suggested link between the molecular disk responsible for the maser emission and the obscuring torus responsible for the ionization cones. We are also obtaining continuum color images to search for the obscuring material directly.
The most luminous known H<sub>2</sub>O maser source is found in the galaxy TXS2226-184<sup>1</sup> (IRAS F22265-1826; Koekemoer et al. 1995), at a redshift of z=0.025 (luminosity distance D=101 Mpc for $`H_0=75`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$; in the images 0.1″ corresponds to 46 pc). Koekemoer et al. (1995) referred to this object as a gigamaser in view of its isotropic luminosity in the 1.3 cm water line of $`6100\pm 900L_{\mathrm{\odot }}`$. In this paper, we present H$`\alpha `$+\[N II\]$`\lambda \lambda `$6548,6583 and broad-band continuum observations of TXS2226-184, obtained with the HST and the VLA. Our results indeed show a linear H$`\alpha `$+\[N II\] structure along the radio axis and perpendicular to a dust lane. This supports the connection between megamaser emission, dusty disk, obscuring torus, and the narrow-line region discussed above. We also classify the host galaxy as a spiral.
<sup>1</sup>The name used by Koekemoer et al. (1995) does not follow the suggested and by now accepted convention used later in the Texas survey (Douglas et al. 1996).
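The 0.1″ ↔ 46 pc conversion follows from the quoted distance; a minimal sketch, assuming the standard relation $`D_A=D_L/(1+z)^2`$ between angular-diameter and luminosity distance (the paper quotes only $`D_L`$):

```python
import math

# Projected linear scale at TXS2226-184: z = 0.025, D_L = 101 Mpc.
# D_A = D_L / (1 + z)^2 is an assumption; the text quotes only D_L.
ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206265

def pc_per_arcsec(D_L_Mpc, z):
    D_A_pc = D_L_Mpc * 1e6 / (1.0 + z) ** 2  # angular-diameter distance in pc
    return D_A_pc / ARCSEC_PER_RAD

print(f"0.1 arcsec = {0.1 * pc_per_arcsec(101.0, 0.025):.1f} pc")  # ~46.6 pc
```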
## 2. Observations and Data Reduction
### 2.1. HST Observations
TXS2226-184 was observed with the Planetary Camera (PC) on board the HST (pixel scale is 0.0455″/pixel) in three filters: F814W (red continuum); F547M (green continuum); and F673N (redshifted H$`\alpha `$+\[N II\]$`\lambda \lambda `$6548,6583). The total integration times were 120 sec, 320 sec, and 1200 sec respectively, all exposures being split into two or three integrations to allow cosmic ray rejection. All observations were performed within one orbit on 1998 December 6.
### 2.2. HST Data Reduction
The images were processed through the standard Wide-Field and Planetary Camera 2 (WFPC2) pipeline data reduction at the Space Telescope Science Institute. Further data reduction was done in IRAF and included: cosmic ray rejection, flux calibration, and rotation to the cardinal orientation. The zero of magnitude for each continuum filter was determined from the HST data handbook in the VEGAMAG system (a system in which Vega has magnitude zero in all HST filters; the zero-points of the canonical Johnson-Cousins system differ from those of the corresponding HST filters by up to 0.02 magnitudes for closely matched filters and up to 0.2 mag for the rest). Sometimes we will refer to the red and green continuum filters as I and V, respectively, even though F547M is not a good match to Johnson-Cousins V; an error of 0.2 mag can be expected. For the continuum filters a constant background level was determined in an emission-free region of the PC (to represent sky brightness) and subtracted from the image. This correction is mainly important for obtaining good color information in faint regions. The galaxy continuum near the H$`\alpha `$+\[N II\] line was determined by combining the red and green continuum images, scaled to the filter width of F673N and weighted by the relative offset of their mean wavelengths from the redshifted H$`\alpha `$+\[N II\] emission. The continuum was then subtracted from the on-band image to obtain an image of H$`\alpha `$+\[N II\]. We did not apply any shifts between the images because they were all taken within one orbit and at the same position on the PC chip. From the two broad-band images, we constructed a color map by dividing the green by the red filter image, including only pixels where the flux was at least five times the average noise level in each frame. To increase the signal-to-noise at larger radii, we also computed color maps in which the original image was block averaged by $`2\times 2`$ or $`4\times 4`$ pixels.
Each of these maps was also clipped at its 5 $`\sigma `$ level and sampled at the PC pixel scale. The three maps were then combined, with each image being weighted by its inverse blocking size. This allows one to have a composite color map in which the bright center is shown at full resolution and the outer, low-surface brightness regions (which were clipped in the full resolution map) are seen at lower resolution. This is similar to an unsharp mask technique.
### 2.3. VLA Observations and Data Reduction
We observed the galaxy with the VLA in A-configuration at 8.46 GHz and 15 GHz on 1999 August 01 in snapshot mode for 5 mins, and at 4.85 GHz on 1998 May 21 for 10 mins. We observed a phase calibrator at the beginning and end of the scan and 3C 48 as a flux density calibrator. Using the AIPS software, the data were self-calibrated and maps were produced.
## 3. Results
### 3.1. Radio Map
A slightly super-resolved map of TXS 2226-184 at 8.46 GHz using a circular restoring beam of 0.2″ is shown in Figure 1 (bottom), where we have subtracted the central point source to show the extended emission more clearly. The source is resolved with a peak flux density of 15 mJy and a total flux density of 23 mJy. The emission is elongated in PA 37° towards the NW and in PA 146° towards the SE. No further extended emission was detected in our maps. This is also true for lower-resolution maps (VLA C- & B-configuration) at 5 and 8 GHz, where the flux densities agree with ours (Golub & Braatz 1998). The total fluxes at 4.85 GHz and 14.94 GHz are 37 and 13 mJy respectively. At these frequencies the source is extended in the same direction as at 8.46 GHz. If we compare our total flux densities with the flux density the galaxy had in the Texas survey at 365 MHz (198 mJy; Douglas et al. 1996), we find the spectrum to be steepening from $`\alpha =0.65`$ ($`S_\nu \propto \nu ^{-\alpha }`$) between 365 MHz and 4.85 GHz to $`\alpha =1`$ between 8.46 GHz and 14.94 GHz. Because of the compact structure this steepening is most likely not due to resolution effects. The position of the central radio component is $`\alpha =22^h26^m30\stackrel{\mathrm{s}}{\mathrm{.}}07`$, δ = −18°26′09.6″ (B1950).
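The two spectral indices quoted above can be recovered directly from the flux densities given in this section (a sketch; α is defined through S_ν ∝ ν^−α as in the text):

```python
import math

# Spectral index alpha for S_nu ∝ nu^(-alpha) from two flux densities.
def spectral_index(S1_mJy, nu1_GHz, S2_mJy, nu2_GHz):
    return math.log(S1_mJy / S2_mJy) / math.log(nu2_GHz / nu1_GHz)

# Texas survey (198 mJy at 365 MHz) vs our 4.85 GHz total (37 mJy):
alpha_low = spectral_index(198.0, 0.365, 37.0, 4.85)
# 8.46 GHz (23 mJy) vs 14.94 GHz (13 mJy):
alpha_high = spectral_index(23.0, 8.46, 13.0, 14.94)
print(f"alpha(0.365-4.85 GHz)  = {alpha_low:.2f}")   # ~0.65
print(f"alpha(8.46-14.94 GHz) = {alpha_high:.2f}")   # ~1.0
```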
### 3.2. HST Images
Our HST images are shown in Figure 1. The continuum map, which is the combination of the red and green filters used also for off-band subtraction, reveals a highly elongated galaxy along PA 55°. The inner region (1″ diameter) is bisected by a dark band, presumably a nuclear dust lane. We have fitted an elliptical Gaussian function to the inner region to locate the centroid of the continuum emission. The centroid thus found is marked with a cross in Fig. 1 and we shall refer to this position as the “nucleus” of the galaxy. It is in the middle of the supposed dust lane. The presence of this dust lane is further strengthened by the color map, which shows a region of high reddening along PA 60° extending roughly 1″ across the nucleus. We also see higher reddening on the NW side of the galaxy than on the SE, which, for a disk galaxy, would indicate that the NW side is the nearer side of the galaxy disk (Hubble 1943).
The H$`\alpha `$+\[N II\] map shows a highly elongated structure roughly along PA 40° ± 5°, i.e. in the same direction as the radio emission, with a bright spot 0.2″ NW of the supposed nucleus. The emission extends further towards the SE, with a broad, “wiggly” structure near the nucleus and a “plume” 1.5″ from the nucleus. As in the continuum image, the adopted nucleus is not very bright in H$`\alpha `$+\[N II\], presumably because of obscuration by the dust lane.
The adopted nucleus in the HST images is within 1.5″—the typical error in absolute HST astrometry—of the radio nucleus. Therefore we have assumed that the optical and radio nuclei coincide and shifted the HST images accordingly (see Falcke et al. 1998 for a discussion of VLA/HST registration and errors). The coordinates given in Fig. 1 are after this shift has been performed.
In the larger field of view of all four WFPC2 chips, we find a number of faint, extended sources around TXS2226-184 which are probably galaxies. In particular, there is a highly elongated galaxy only 17.2″ SW (PA = −120°) of the nucleus of TXS2226-184 at $`\alpha =22^h26^m29\stackrel{\mathrm{s}}{\mathrm{.}}0`$, δ = −18°26′18.3″ (B1950).
### 3.3. Isophotes and Radial Profile Fitting
We have fitted elliptical isophotes to the red continuum image of the galaxy, ignoring the innermost few pixels which are heavily affected by the dust lane. The center was fixed at the adopted nucleus (see Sec. 3.2). The ellipticity is close to zero at R ≈ 0.5″, below which it is strongly affected by the dust lane, and approaches a constant value of around 0.6 beyond R ≈ 3″. Similarly, the PA of the semi-major axis changes rapidly from ≈140° to a value of 65° at 0.5″, and stays essentially constant (at 50°-60°) at larger radii. The colors are relatively red in the inner region, dropping from V–I ≈ 1.65 to around 1.35 at the outer isophotes.
Figure 2 shows the azimuthally averaged surface brightness of the isophotes as a function of $`R`$. This profile was fitted in IRAF with a) an exponential disk profile,
$$S_{\mathrm{disk}}=S_0\mathrm{exp}\left(-\frac{R}{R_0}\right),$$
(1)
plus a bulge component (de Vaucouleurs 1948),
$$S_{\mathrm{bulge}}=S_\mathrm{e}\mathrm{exp}\left(-7.688\left(\left(\frac{R_{\mathrm{SMA}}}{R_\mathrm{e}}\right)^{1/4}-1\right)\right),$$
(2)
to represent a spiral or S0 galaxy, and b) with a bulge component (Eq. 2) only to represent an elliptical galaxy. While the fitting was done using surface brightness, $`S`$, weighted by the inverse errors, we give the results in the more conventional form of surface brightness $`\mu `$ (in mag arcsec<sup>-2</sup>).
For the disk + bulge model (a) we obtained a good fit (reduced $`\chi ^2=0.86`$) with the parameters $`\mu _0=18.0`$ mag arcsec<sup>-2</sup>, $`R_0=2.4\mathrm{″}`$ (1.1 kpc), $`\mu _\mathrm{e}=19.7`$ mag arcsec<sup>-2</sup>, and $`R_\mathrm{e}=0.6\mathrm{″}`$ (0.29 kpc). For a bulge component only (b), i.e. an elliptical galaxy profile, the fit is much worse (reduced $`\chi ^2=4.9`$) and at large radii lies consistently above the data (Fig. 2). The parameters we get here are $`\mu _\mathrm{e}=22.8`$ mag arcsec<sup>-2</sup> and $`R_\mathrm{e}=22.7\mathrm{″}`$ (10.5 kpc). The results clearly favor a spiral over an elliptical galaxy. The ellipticity of TXS2226-184 ($`e=1-b/a=0.61`$ at 2.7″ < R < 6.0″) indicates an inclination of the galaxy to the line of sight of 70° (using $`i=\mathrm{arcsin}\sqrt{(1-(b/a)^2)/0.96}`$, e.g. Whittle 1992). The details of the fitting depend somewhat on how much of the inner region is excluded, while the preference of a disk + bulge model over a bulge-only model does not.
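A sketch of the inclination estimate; the factor 0.96 corresponds to an assumed intrinsic disk flattening of q₀ = 0.2 (so 1 − q₀² = 0.96), which is implicit in the expression quoted from Whittle (1992):

```python
import math

# Disk inclination from observed isophote ellipticity e = 1 - b/a,
# assuming intrinsic flattening q0 = 0.2 (so 1 - q0**2 = 0.96).
def inclination_deg(e, q0=0.2):
    b_over_a = 1.0 - e
    arg = (1.0 - b_over_a**2) / (1.0 - q0**2)
    return math.degrees(math.asin(math.sqrt(arg)))

i = inclination_deg(0.61)    # ellipticity measured at 2.7" < R < 6.0"
print(f"i = {i:.0f} deg")    # ~70 deg, as quoted
```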
The difference between the magnitudes of the integrated bulge and the galaxy as a whole in our spiral galaxy model (see Simien & De Vaucouleurs 1986) is $`\mathrm{\Delta }m_\mathrm{t}=1.9`$ if we integrate along elliptical isophotes with $`e=0.61`$. To correct for the inclination dependent absorption (e.g. Tully et al. 1998) we would have to add ∼0.5 mag to obtain the face-on value of this difference. Figure 2 and Eq. 4 of Simien & De Vaucouleurs (1986) then would formally indicate that TXS2226-184 is probably an Sb/c (RC2 Hubble type $`T=`$4-5). However, this determination of the relative bulge luminosity and the Hubble type classification is very uncertain. Still, our data should be good enough to indicate that TXS2226-184 is later than S0. The fact that we are measuring at I (Simien & De Vaucouleurs use B) strengthens this point, since one would expect the bulge to be more prominent relative to the disk at I than at B. If we integrate our surface luminosity profile to infinity the total I magnitude of disk and bulge is 15.1 mag. The uncertainty in the cut-off radius due to a low signal-to-noise in the outer isophotes may allow an increase of this value by up to 0.4 mag.
## 4. Discussion & Summary
Koekemoer et al. (1995) have classified this galaxy as an elliptical or S0 and speculated whether the unusually broad line-width of the megamaser emission seen in this galaxy and in NGC1052 might be typical of elliptical galaxies. Our HST images reveal that TXS2226-184 is almost certainly not an elliptical, so NGC1052 is the only known megamaser in an elliptical galaxy (Braatz, Wilson & Henkel 1994). On the other hand, the high inclination of TXS2226-184 strengthens the tentative conclusion of Braatz, Wilson, & Henkel (1997) that megamasers are preferentially found in highly inclined galaxies. Six out of fourteen spirals in their detected megamaser sample now have an inclination $`i>69\mathrm{°}`$. This excess suggests that nuclear and large scale dust disks in many active spiral galaxies are indeed related.
The NLR in TXS2226-184 is very elongated and reminiscent of the jet-like NLR seen in many Seyfert galaxies, as imaged by HST (e.g. Capetti et al. 1996; Falcke et al. 1998). These gaseous structures are believed to be produced in the interaction between outflowing radio ejecta and the ISM (e.g. Falcke et al. 1998; Ferruit et al. 1999). The fact that our radio map is elongated along exactly the same direction as the NLR supports this view.
In addition to the NLR and radio jet, we find a dust lane in the nucleus which aligns with the galaxy major axis and presumably represents its normal interstellar medium. The elongation of the NLR and the radio source perpendicular to the NE-SW dust lane suggests that the nuclear accretion disk and the obscuring torus are more or less coplanar with the stellar disk in TXS2226-184. Preliminary results of VLBA observations of the masers in this galaxy indeed seem to roughly show a NE-SW orientation along PA 20° (Greenhill 1999). How to interpret this structure and whether this indicates a warp in the gas disk going from tens of pc to pc scales is unclear at present. Further VLBI observations of masers and the continuum in this and other maser sources together with HST observations of the host galaxies could help to clarify the nature of the obscuring torus/masing disk and its connection to the large scale molecular gas structure of the AGN host galaxy.
We thank Stacy McGaugh for helpful discussions on galaxy classifications and Lincoln Greenhill for providing information on unpublished VLBA observations of TXS2226-184. This research was supported by NASA under grant NAG8-1027 and HST GO 7278 and by NATO grant SA.5-2-05 (GRG 960086)318/96. HF is supported by DFG grant 358/1-1&2.
no-problem/9912/cond-mat9912419.html | ar5iv | text | # Fano Resonances in Electronic Transport through a Single Electron Transistor
## I Introduction
When a droplet of electrons is confined to a small region of space and coupled only weakly to its environment, the number of electrons and their energy become quantized. The analogy between such a system and an atom has proved to be quite close. In particular, these artificial atoms exhibit properties typical of natural atoms, including a charging energy for adding an extra electron and an excitation energy for promoting confined electrons to higher-lying energy levels. Remarkably, the analogy goes even further and includes cases where an artificial atom interacts with its environment. A system known as a single electron transistor (SET), in which an artificial atom is coupled to conducting leads, can be accurately described by the Anderson model. The same model has been used extensively to study the interaction of localized electrons with delocalized ones within a metal containing magnetic impurities. One of its subtle predictions is the Kondo effect, which involves many-body correlations between an electronic spin on an impurity atom and those in the surrounding metal. This effect has recently been observed in an SET when the artificial atom develops a net spin because of an odd electron occupancy. In this paper we report that by changing the parameters in a single electron transistor we observe another phenomenon typical of natural atoms: Fano resonances. While several aspects of the Fano resonances in our SETs are easily understood, others are very surprising.
Asymmetric Fano lineshapes are ubiquitous in resonant scattering, and are observed whenever resonant and nonresonant scattering paths interfere. The more familiar symmetric Breit-Wigner or Lorentzian lineshape is a limiting case that occurs when there is no interference, for example when there is no non-resonant scattering channel. Fano resonances have been observed in a wide variety of experiments including atomic photoionization, electron and neutron scattering, Raman scattering, and photoabsorption in quantum well structures.
The widely successful application of the Landauer-Büttiker formalism shows that electron transport through a mesoscopic system is closely analogous to scattering in the systems described above. Therefore, one might expect to also observe Fano resonances in the conductance of nanostructures. Indeed, Fano lineshapes have been reported in a recent experiment by Madhavan et al. measuring tunneling from an STM tip through an impurity atom into a metal surface. However, this characteristic feature of interference between resonant and non-resonant scattering has not been reported for more conventional nanostructures.
In this paper we report the observation of Fano resonances in conductance through a single electron transistor (SET). This structure has the advantage over the tunneling experiment of Madhavan et al. and over conventional scattering experiments that we can tune the key parameters and thus study the interference leading to Fano resonances in greater detail. This paper is organized as follows: In section II we describe the SET and the measurements we have made on it; we also summarize there the standard theory for Fano line shapes. Our results are presented in section III. In section IV we discuss the results, draw conclusions and point out issues which require further research.
## II Experimental details and theoretical background
An SET consists of a small droplet of electrons coupled to two conducting leads by tunnel barriers. Gate electrodes (shown in Fig. 1) are used to control the electrostatic potential of the droplet and, in our structures, the heights of the tunnel barriers. The SETs used in these experiments are the same ones used for studies by Goldhaber-Gordon et al. of the Kondo effect. The electron droplet is created in a two-dimensional electron gas (2DEG) that forms at the interface of a GaAs/AlGaAs heterostructure with a mobility of $`100,000\text{cm}^2/\text{Vs}`$ and a density of $`8.1\times 10^{11}\text{cm}^{-2}`$. Applying a negative voltage to two pairs of split gate electrodes depletes the 2DEG underneath them and forms two tunnel barriers. The barriers separate the droplet of electrons from the 2DEG regions on either side, which act as the leads. The heights of the two barriers can be adjusted independently by changing the voltages on the respective constriction electrodes (I), and the potential energy of the electrons on the droplet can be shifted relative to the Fermi energies in the leads using an additional plunger gate electrode (II) near the droplet.
Our SETs are made with a 2DEG that is closer to the surface (20 nm) than in most other experiments, allowing the electron droplet to be confined to smaller dimensions. This also makes the tunnel barriers more abrupt than in previous structures. We estimate that our droplet is about 100 nm in diameter, smaller than the lithographic dimensions because of depletion, and contains about 50 electrons.
For conductance measurements we apply a small alternating drain-source voltage (typically $`5\mu `$V) between the leads and measure the pre-amplified current with a lock-in amplifier. The conductance is then recorded as a function of the plunger gate voltage $`V_\text{g}`$. For differential conductance measurements we add a finite offset drain-source voltage $`V_{\text{ds}}`$ and measure the response $`dI/dV_{\text{ds}}`$ to the small alternating drain-source voltage as a function of both $`V_\text{g}`$ and $`V_{\text{ds}}`$.
In an SET the Coulomb interaction among electrons opens up an energy gap $`U`$ in the tunneling spectrum, given classically by $`e^2/2C`$, where $`C`$ is the total capacitance between the the droplet and its environment, primarily the nearby conducting leads and gates. Thus, an energy $`U`$ is required to overcome Coulomb repulsion and add an electron to the droplet. This energy cost causes the number of electrons on the droplet to be quantized and electron transport through the droplet to be suppressed. However, lowering the chemical potential of the droplet by adjusting the voltage on the plunger gate makes it possible to add electrons one at a time. At a charge degeneracy point, where the states with $`N`$ and $`N+1`$ electrons on the droplet have equal energy, transport of electrons from one lead through the droplet to the other lead is allowed. This effect is known as Coulomb blockade because transport is suppressed everywhere except close to the degeneracy points. Because of the small size of the electron droplet in our SETs, the energy spacing between the discrete levels $`\mathrm{\Delta }ϵ`$, that is, the energy to excite the droplet at fixed $`N`$, is only a few times smaller than the charging energy $`U`$.
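For scale, the classical charging energy $`U=e^2/2C`$ is easy to estimate. The capacitance used below is an illustrative assumption for a ∼100 nm droplet, not a value given in the text:

```python
# Classical charging energy U = e^2 / (2C) of a Coulomb-blockaded droplet.
# C_TOTAL = 50 aF is an illustrative assumption, not a value from the paper.
E_CHARGE = 1.602176634e-19   # elementary charge, C
C_TOTAL = 50e-18             # assumed total capacitance, F

U_joule = E_CHARGE**2 / (2.0 * C_TOTAL)
U_meV = U_joule / E_CHARGE * 1e3
print(f"U = {U_meV:.2f} meV")   # ~1.6 meV, i.e. k_B * T at ~19 K
```

This illustrates why dilution-refrigerator temperatures are needed: the blockade is washed out once $`k_\text{B}T`$ approaches $`U`$.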
Depending on the transmission of the left and right tunnel barriers, characterized by tunneling rates $`\mathrm{\Gamma }_\text{L}/h,\mathrm{\Gamma }_\text{R}/h`$, respectively, we observe different transport regimes in our SETs at very low temperature. If the thermal energy $`k_\text{B}T`$ is much smaller than the coupling energy $`\mathrm{\Gamma }\equiv \mathrm{\Gamma }_\text{L}+\mathrm{\Gamma }_\text{R}`$, quantum fluctuations dominate so that the resonances have width $`\mathrm{\Gamma }`$. When $`\mathrm{\Gamma }\ll \mathrm{\Delta }ϵ`$ the coupling is weak and one observes narrow quasi-periodic peaks (Fig. 2(c)). As the coupling is increased a new regime emerges, in which transport away from the resonances is enhanced by the Kondo effect when there is an odd number of electrons on the droplet (Fig. 2(b)). Surprisingly, if we increase the coupling beyond the Kondo regime we observe asymmetric Fano resonances on top of a slowly varying background (Fig. 2(a)). Before discussing these data in detail, we present the analytic form predicted for such line shapes.
Scattering theory, specifically the S-matrix approach, predicts Fano lineshapes as the general form for resonances in transport through quasi-1D systems . The S-matrix, which relates the outgoing and incoming scattering state amplitudes, is a unitary matrix at real energies $`ϵ`$ because of probability conservation. Therefore, for real energies the eigenvalues of S can be written in the form $`\lambda (ϵ)=e^{2i\delta (ϵ)}`$ where $`\delta (ϵ)`$ is the scattering phase.
Resonant scattering occurs when one of the eigenvalues of the S-matrix develops a pole at the complex energy $`ϵ_0-i\mathrm{\Gamma }/2`$. Near this pole the resonant eigenvalue is given by $`\lambda (ϵ)=e^{2i\delta _0}e^{2i\delta _{\text{res}}(ϵ)}`$ . Here $`\delta _{\text{res}}(ϵ)`$ is the resonant contribution to the phase shift and is given by $`\mathrm{tan}\delta _{\text{res}}(ϵ)=(\mathrm{\Gamma }/2)/(ϵ_0-ϵ)`$. It varies from zero to $`\pi `$ as the energy is moved through the resonance from below. The background or nonresonant contribution to the scattering phase shift $`\delta _0`$ is, in this approximation, a constant independent of the energy.
The total cross section is directly related to the scattering phase shifts through
$$\sigma (ϵ)\propto \mathrm{sin}^2(\delta _{\text{res}}(ϵ)+\delta _0).$$
(1)
Varying the value of the background phase shift produces the entire family of Fano lineshapes.
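This equivalence can be checked numerically. The sketch below uses illustrative parameter values and one self-consistent sign convention for $`\delta _{\text{res}}`$ (the sign of $`q`$ is convention dependent, and the opposite choice simply mirrors the lineshape); it confirms that $`\mathrm{sin}^2(\delta _{\text{res}}+\delta _0)`$ reproduces the normalized Fano lineshape discussed below with $`q=\mathrm{cot}\delta _0`$:

```python
import math

def sigma_phase(eps, eps0=0.0, gamma=1.0, delta0=math.pi / 4):
    """Cross section ~ sin^2(delta_res + delta_0).  The sign convention
    tan(delta_res) = (gamma/2)/(eps - eps0) is chosen so that q = cot(delta0)
    reproduces the (eps_t + q)^2 Fano form; the opposite convention simply
    mirrors the lineshape (q -> -q)."""
    delta_res = math.atan2(gamma / 2.0, eps - eps0)
    return math.sin(delta_res + delta0) ** 2

def sigma_fano(eps, eps0=0.0, gamma=1.0, q=1.0):
    """Fano formula (eps_t + q)^2/(eps_t^2 + 1), normalized to a peak of 1."""
    eps_t = (eps - eps0) / (gamma / 2.0)
    return (eps_t + q) ** 2 / ((eps_t ** 2 + 1.0) * (1.0 + q ** 2))

delta0 = math.pi / 4
q = 1.0 / math.tan(delta0)  # q = cot(delta0)
mismatch = max(abs(sigma_phase(e, delta0=delta0) - sigma_fano(e, q=q))
               for e in [0.1 * x for x in range(-50, 51)])
```

Scanning `delta0` between 0 and $`\pi /2`$ in such a script generates the full family of lineshapes, from a symmetric dip to a symmetric peak.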
It is important to emphasize that two interfering channels are necessary for Fano resonances to arise. One is resonant, for which the phase changes by $`\pi `$ in an energy interval $`\mathrm{\Gamma }`$ around the resonance energy $`ϵ_0`$. The other is nonresonant and has a constant background phase shift $`\delta _0`$. If there is no nonresonant channel or the background phase shift between the channels is zero, symmetric Breit-Wigner resonances are observed. In all other cases Fano resonances result.
In his original work Fano treated the case of inelastic scattering in the context of autoionization and derived the so-called Fano formula for the scattering cross section
$$\sigma (ϵ)\propto \frac{(\stackrel{~}{ϵ}+q)^2}{\stackrel{~}{ϵ}^2+1},$$
(2)
where $`\stackrel{~}{ϵ}\equiv (ϵ-ϵ_0)/(\mathrm{\Gamma }/2)`$ is the dimensionless detuning from resonance and $`q`$ is called the asymmetry parameter. The asymmetry parameter is related to the background phase shift of the S-matrix treatment by $`q=\mathrm{cot}\delta _0`$. The magnitude of $`q`$ is proportional to the ratio of transmission amplitudes for the resonant and non-resonant channels . The limit $`q\to \infty `$, in which resonant transmission dominates, leads to symmetric Breit-Wigner resonances. In the opposite limit $`q\to 0`$ non-resonant transmission dominates, resulting in symmetric dips.
According to the Landauer-Büttiker formalism, conductance through any mesoscopic system can be expressed in terms of an S-matrix. Hence, Fano resonances should also be observed in conductance if a resonant and a nonresonant transmission path coexist . Analogous to the scattering case, Eq. (2), the conductance is then given by
$$G=G_{\text{inc}}+G_0\frac{(\stackrel{~}{ϵ}+q)^2}{\stackrel{~}{ϵ}^2+1},$$
(3)
where $`G_{\text{inc}}`$ denotes an incoherent contribution to the conductance, which is often observed. Note that the Breit-Wigner limit $`q\to \infty `$ implies $`G_0\to 0`$, leading to a finite conductance maximum of $`G_{\text{inc}}+G_0(1+q^2)`$ at $`\stackrel{~}{ϵ}=1/q`$.
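The stated extrema of Eq. (3) can be verified with a brute-force scan; a sketch with illustrative parameter values (the helper name `fano_conductance` is ours, not from a library):

```python
def fano_conductance(eps_t, G_inc, G0, q):
    """Eq. (3): G = G_inc + G0 * (eps_t + q)^2 / (eps_t^2 + 1)."""
    return G_inc + G0 * (eps_t + q) ** 2 / (eps_t ** 2 + 1.0)

# Illustrative parameters (not fitted values from the paper).
G_inc, G0, q = 0.11, 0.02, 2.0
grid = [1e-3 * x for x in range(-20000, 20001)]  # eps_t from -20 to 20
G_max = max(fano_conductance(e, G_inc, G0, q) for e in grid)
arg_max = max(grid, key=lambda e: fano_conductance(e, G_inc, G0, q))
G_min = min(fano_conductance(e, G_inc, G0, q) for e in grid)
```

The scan finds the maximum $`G_{\text{inc}}+G_0(1+q^2)`$ at $`\stackrel{~}{ϵ}=1/q`$ and the minimum $`G_{\text{inc}}`$ (complete destructive interference of the coherent part) at $`\stackrel{~}{ϵ}=-q`$.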
## III Results
As mentioned above, Fig. 2(a) shows three consecutive, well-separated and relatively narrow resonances on a background that varies smoothly in the range 0.11 – 0.22 $`e^2/h`$. The conductance does not vanish at resonance, as would occur if the destructive interference between the transmission paths were complete, presumably because of an incoherent component. The resonances in the center and right are of the Fano form Eq. (3) with asymmetry parameters given in the figure caption.
We might imagine that we could calculate the combined transmission through the resonant and nonresonant channels simply by adding the complex amplitudes for transmission through the two channels, each considered individually. In fact, the phase difference between the two transmissions, and hence the degree of asymmetry of the resonant lineshape, depends on the relative strengths of transmission through the two channels. This means that just by changing the strength of resonant transmission we can change the shape of the Fano profile in a way that would naively seem to require changes in phase of one or both transmissions. This effect can be achieved experimentally by varying the voltages on our point contact electrodes, thereby changing the strength of transmission through each of the two channels, and generally changing the ratio of their strengths as well.
The influence of the strength of the tunnel couplings $`\mathrm{\Gamma }_\text{L}`$ and $`\mathrm{\Gamma }_\text{R}`$ on the resonances is shown in Fig. 3. All the resonances in this figure can be fit very well by the Fano formula Eq. (3). The fitting parameters for the data in Fig. 3(a) and (b) are plotted in Fig. 3(c) and (d), respectively. Increasing $`\mathrm{\Gamma }_\text{L}`$ leads to a more symmetric line shape. At the same time, the incoherent background grows strongly, while the difference between maximum and minimum conductance remains almost constant. This is reflected in a strong increase of the asymmetry parameter $`q`$ accompanied by a decrease in $`G_0`$ leaving the product $`G_0(1+q^2)`$ nearly unchanged. Additionally a slight decrease in $`\mathrm{\Gamma }`$ is observed. By contrast, increasing $`\mathrm{\Gamma }_\text{R}`$ leaves the magnitude of the incoherent transmission constant and decreases the asymmetry parameter. At the same time the resonance is broadened, as reflected by an increase in $`\mathrm{\Gamma }`$.
Two consecutive Fano resonances with small asymmetry parameters, resulting in nearly symmetric dips, are shown for a variety of temperatures in Fig. 4(a). The increase in width of the resonances with increasing temperature is in good agreement with the linear $`3.5k_\text{B}T`$ dependence expected from Fermi-Dirac broadening (Fig. 4(c)). From this we obtain the conversion factor $`\alpha =.059\pm .006`$ that relates gate voltage $`V_\text{g}`$ to energy.
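The $`3.5k_\text{B}T`$ figure quoted above is the full width at half maximum of the thermally broadened line; a small sketch (assuming the standard $`\mathrm{cosh}^2`$ derivative of the Fermi function) recovers it numerically:

```python
import math

def dfermi(eps, kT):
    """Thermal broadening kernel -df/d(eps) = sech^2(eps/2kT)/(4kT)."""
    return 1.0 / (4.0 * kT * math.cosh(eps / (2.0 * kT)) ** 2)

kT = 1.0
half_max = dfermi(0.0, kT) / 2.0

# Bisect for the eps > 0 point where the kernel falls to half its maximum.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dfermi(mid, kT) > half_max:
        lo = mid
    else:
        hi = mid
fwhm = 2.0 * lo  # = 4 kT * arccosh(sqrt(2)) ~ 3.53 kT
```

Analytically the FWHM is $`4k_\text{B}T\mathrm{ln}(1+\sqrt{2})\approx 3.53k_\text{B}T`$, which the bisection reproduces.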
In contrast to this, the temperature dependence of the dip amplitude is not that expected from Fermi-Dirac broadening but rather is reminiscent of that seen for peaks in the Kondo regime . Indeed, the amplitude, measured with respect to the background, shows a logarithmic dependence on $`T`$ over almost an order of magnitude as shown in Fig. 4(b). In addition, the symmetric dip on the right shows a shift of the resonance energy with temperature suggesting that the energy is renormalized at low temperatures just as for conductance peaks in the Kondo regime. These data resemble curves obtained from a mean-field treatment of quantum interference in the Kondo regime .
The situation becomes even more intriguing when one examines the differential conductance as a function of both gate voltage and source-drain voltage in the vicinity of two dips. The results of this measurement are shown on a gray-scale plot in Fig. 5, where a clear diamond-shaped structure is traced out by the resonances as one varies the two voltages. This behavior is familiar from many experiments in the Coulomb blockade regime . Indeed, close scrutiny reveals additional dips moving parallel to those forming the main diamonds, analogous to subsidiary peaks seen in the Coulomb blockade regime, which result from excited states of the electron droplet.
However, the results in Fig. 5 are different in important ways. The resonances are dips rather than peaks, and they appear on top of a continuous background conductance that is almost independent of the applied voltages.
From the slopes of the diamonds’ boundaries it is possible to determine the parameters $`\alpha =C_{\text{gate}}/C_{\text{tot}}=.049\pm .005`$ and $`\beta =C_{\text{lead}}/C_{\text{tot}}=.66\pm .09`$. Here $`C_{\text{tot}}`$ is the total capacitance coupling the electron droplet to its environment whereas $`C_{\text{gate}}`$ is the capacitance only to the plunger gate and $`C_{\text{lead}}`$ the capacitance only to a lead. The value for $`\alpha `$ is in good agreement with the one obtained from the temperature dependence above. The bottom resonance in Fig. 5 is identical to the left one in Fig. 4 allowing us to determine the spacing in gate voltage of three successive peaks. We assume that the smaller spacing is given by $`U/\alpha `$ and the larger by $`(U+\mathrm{\Delta }ϵ)/\alpha `$. Using this and the width from Fig. 4, we find $`U=1.13\pm .12\text{meV}`$, $`\mathrm{\Gamma }=105\text{–}120\mu \text{eV}`$ and $`\mathrm{\Delta }ϵ=.66\pm .07\text{meV}`$. It is surprising that the charging energy is only about $`40\%`$ smaller than what we find in the Kondo regime. It is even more surprising that $`\mathrm{\Gamma }`$ has decreased by $`50\%`$ compared to the Kondo regime, rather than increasing as expected, even though we have opened up the tunnel barriers resulting in a sizable non-resonant conductance and resonant dips instead of peaks.
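The arithmetic behind these numbers is simple: with the lever arm $`\alpha `$ known, gate-voltage spacings convert directly to energies. A sketch (the spacings below are back-computed from the quoted $`U`$ and $`\mathrm{\Delta }ϵ`$, so they are illustrative rather than measured values):

```python
# Lever arm alpha = C_gate/C_tot converts gate-voltage spacings to energies.
alpha = 0.049            # from the Coulomb-diamond slopes

# Illustrative spacings, back-computed from the quoted U and Delta-eps;
# the measured values are not listed in the text.
dVg_small = 0.0231       # V; assumed U / alpha
dVg_large = 0.0365       # V; assumed (U + Delta_eps) / alpha

U = alpha * dVg_small          # ~1.13e-3 eV = 1.13 meV
dE = alpha * dVg_large - U     # ~0.66e-3 eV = 0.66 meV
```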
Also reminiscent of the Kondo regime are features that remain pinned near $`V_{\text{ds}}=0`$ as the gate voltage is varied, seen as a faint vertical stripe in the center of Figure 5. In the Kondo regime this results from a sharp peak in the density of states at the Fermi energy caused by coupling of the localized spin on the artificial atom to the spins in the metallic leads. However, in Fig. 5 the features do not depend on whether an even or odd number of electrons is on the droplet as evinced by their spanning more than two resonances. Furthermore, the zero-bias peak in the measurements of Goldhaber-Gordon et al. has an amplitude that depends strongly on the separation in $`V_\text{g}`$ from the resonance, whereas the features in Fig. 5 are nearly independent of $`V_\text{g}`$.
The effect of a magnetic field perpendicular to the 2DEG containing the electron droplet is dramatic (Fig. 6). A field of only 15 mT produces a clear effect on the line shape, and a field of 50 mT completely transforms it from an asymmetric resonance with a dip into a somewhat asymmetric peak.
## IV Discussion
### A Nature of interfering channels
The good fit of the Fano form to our measurements makes it clear that we are observing interference between resonant and non-resonant paths through our SET. Were electrons non-interacting, it would not be surprising for resonant and non-resonant transmission to coexist. We can see this with the help of a semiclassical non-interacting analog for the SET: a cavity with two small openings to reservoirs on the left and right sides. Electrons incident on the cavity from the left side at a random angle would bounce around the cavity, only achieving significant transmission to the right should their energy match an eigenenergy of the cavity. This process would give resonant transmission. In contrast, electrons incident on the left side at exactly the correct angle could traverse the cavity and leave through the right-hand opening without suffering any bounces inside the cavity. This is a nonresonant process, independent of electron energy. However, in our case the resonant channel shows all the features typical of charging of artificial atoms: the near-periodicity in gate voltage of the conductance resonances and the diamond structure and additional features associated with excited states in the differential conductance. It is surprising that we see such strong evidence for charge quantization (Fig. 5) coexisting with a continuous open channel through our SET. We can resolve this seeming contradiction if two paths for an electron through the SET have traversal times long and short respectively compared to $`\hbar /U`$. The path which spends a long time in the region of the electron droplet (passing through a well-localized state) must respect the charging energy, so it exhibits Coulomb blockade resonances in conductance. The other path can temporarily violate energy conservation, adding a charge to the electron droplet during a rapid traversal of the SET . We have calculated, based on a suggestion by Brouwer et al. , that the time required for ballistic traversal of our electron droplet should be comparable to $`\hbar /U`$ .
A puzzle remains: How can the resonances be quite narrow (Fig. 4(a)) even though the point contacts are more open than in the Kondo regime (Fig. 2(b)), as demonstrated by a nonresonant background conductance larger than $`e^2/h`$? Conductance through a point contact in the lowest subband cannot exceed $`2e^2/h`$, corresponding to a transmission probability of unity. When electrons can be transmitted across a point contact into two distinct states, the sum of the two transmissions can be no greater than unity. So a large transmission across the first point contact into the delocalized state which ballistically traverses the electron droplet implies a correspondingly reduced transmission into the localized, resonant state. The same analysis holds for electrons traversing the second point contact to exit the droplet. The width $`\mathrm{\Gamma }`$ of a resonant state is given by the rate of escape of electrons from that state, which is in turn proportional to the sum of the transmissions across the two point contacts into or out of that state. This explains how the presence of a nonresonant transmission channel can actually make the transmission resonances sharper .
We have considered alternative explanations for the origin of the nonresonant background conductance, and find them unlikely. A path that circumvents the electron droplet might lead to a continuous contribution to the conductance. However, a parallel conduction path in the dopant layer above the 2DEG has been ruled out by Hall and Shubnikov-de Haas measurements on samples from the same wafer with large-area gate electrodes. Since we observe Fano resonances in each of several devices we have studied, it also seems unlikely that the effect is a consequence of channels bypassing the quantum dot in the plane of the 2DEG, caused by lithographic defects or non-uniform charge distributions. Most importantly, should a path circumvent the electron droplet the resulting background would be incoherent. For interference it is important that both paths include the two point contacts since only they can act as coherent source and detector.
Detailed measurements of the evolution from the Kondo regime to the Fano regime are underway, with the hope of further elucidating the nature of the nonresonant background conductance.
### B Magnetic field dependence
Changes in transport even at very small field scales are not unexpected given the droplet’s geometry and the 2DEG properties. For our nonresonant transmission, electrons traverse the droplet directly, so backscattered paths enclose approximately the area of the droplet. Assuming an electron droplet of 100 nm diameter, one flux quantum $`\mathrm{\Phi }_0\equiv h/e`$ penetrates the droplet at approximately 530 mT applied field. Thus, at this field scale, changes in nonresonant conductance would result from the breakdown of coherent backscattering . However, the resonant path through our droplet is less strongly connected to the leads, so electrons traverse the droplet by more roundabout paths, enclosing more net area than simply that of the droplet. This means that breakdown of coherent backscattering should occur at much lower flux through the droplet, $`\mathrm{\Phi }_{\text{corr}}=\mathrm{\Phi }_0/\sqrt{g(\mathrm{\Delta }ϵ/\mathrm{\Gamma })}`$, where $`g\equiv G/(2e^2/h)`$ is the dimensionless conductance of the droplet itself . We saw earlier that $`\mathrm{\Delta }ϵ/\mathrm{\Gamma }\approx 5`$; and $`g`$ should be comparable to the conductance of the 2DEG $`g_{\text{2D}}\approx 300`$, though somewhat suppressed because the electron density of the droplet is less than that of the 2DEG . Taking $`g=g_{\text{2D}}`$, we arrive at 12 mT as an estimate and lower bound for $`\mathrm{\Phi }_{\text{corr}}`$, consistent with our empirical observations .
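The two field scales quoted here follow from simple flux arithmetic; a sketch using the approximate values from the text (droplet size, $`\mathrm{\Delta }ϵ/\mathrm{\Gamma }`$, and $`g`$ are all the stated estimates, so the output is order-of-magnitude only):

```python
import math

H_PLANCK = 6.62607015e-34   # J s
E_CHARGE = 1.602176634e-19  # C
PHI0 = H_PLANCK / E_CHARGE  # flux quantum h/e ~ 4.14e-15 Wb

diameter = 100e-9                      # assumed droplet diameter
area = math.pi * (diameter / 2.0) ** 2
B_phi0 = PHI0 / area                   # one flux quantum: ~0.53 T

g = 300.0        # dimensionless droplet conductance, taken as g_2D
ratio = 5.0      # Delta_eps / Gamma from the text
B_corr = B_phi0 / math.sqrt(g * ratio)  # ~13 mT, close to the quoted 12 mT
```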
Empirically (Fig. 6), small magnetic fields produce dramatic alterations in the resonances, while leaving the nonresonant background essentially unchanged. The argument made above explains the changes in transport at small magnetic fields as resulting from the breakdown of coherent backscattering in the resonant channel, and the concomitant increase in forward transmission through that channel. Since non-resonant transmission is not affected at these low fields, the net result is an enhancement of $`q`$. The alternative explanation — that the magnetic field destroys the interference between non-resonant and resonant processes, transforming resonant dips into peaks — does not account for the extremely low field scale at which the change occurs, nor does it fully match the data. The breakdown of coherent backscattering indeed is caused by the loss of the special phase relation between pairs of time-reversed paths, but here we are concerned with interference between two distinct forward scattering paths (resonant and non-resonant) which do not form a time-reversed pair. In addition, interpreting the zero-field data as the simple interference between two paths would suggest that the resonant path has half the amplitude of the non-resonant path. Yet applying a field causes the resonant contribution of the right-hand peak to exceed the non-resonant background by a factor of three. Both these considerations lead us to believe that changes in amplitude (and perhaps phase) for the resonant process due to applied field are more important than destruction of coherence by that field.
## V Acknowledgements
We acknowledge fruitful discussions with P. Brouwer, L. I. Glazman, R. J. Gordon, G. Schön, H. Schöller, S. H. Simon, A. Yacoby and especially J. U. Nöckel and Ned Wingreen. J. G. thanks NEC, and D. G.-G. thanks the Hertz Foundation, for graduate fellowship support. This work was supported by the US Army Research Office under contract DAAG 55-98-1-0138, and by the National Science Foundation under grant number DMR-9732579.
# Out-of-equilibrium Kondo effect in Double Quantum Dots
## Abstract
The out-of-equilibrium transport properties of a double quantum dot system in the Kondo regime are studied theoretically by means of a two-impurity Anderson hamiltonian with interimpurity hopping. The hamiltonian is solved by means of a non-equilibrium generalization of the slave-boson mean-field theory. It is demonstrated that measurements of the differential conductance $`dI/dV`$, for the appropriate values of voltages and tunneling couplings, can give a direct observation of the coherent superposition between the many-body Kondo states of each dot. For large voltages and arbitrarily large interdot tunneling, there is a critical voltage above which the physical behaviour of the system again resembles that of two decoupled quantum dots.
Recent experiments have shown that new physics emerge when the transport properties of quantum dots (QD’s) at temperatures (T) below the Kondo temperature ($`T_K`$) are studied. QD’s offer the intriguing possibility of a continuous tuning of the relevant parameters governing the Kondo effect as well as the possibility of studying Kondo physics when the system is driven out of equilibrium in different ways. These experimental breakthroughs have opened up a new way for the study of strongly correlated electrons in artificial systems. The Kondo anomaly appearing in the density of states (DOS) of the QD reflects the formation of a quantum-coherent many-body state. Motivated by the recent experimental advances in the study of double quantum dots (DQD) it is thus interesting to study what happens when two QD’s in the Kondo regime are coupled. Previous theoretical studies of this problem at equilibrium have focused on the competition between Kondo effect and antiferromagnetic coupling generated via superexchange or via capacitive coupling between dots.
In this Letter we focus on the study of a DQD in the Kondo regime driven out of equilibrium by means of a DC voltage bias. There have hitherto been only a few attempts to study this problem, and a clear picture is still missing. Following the recent work of Aono et al and Georges and Meir we employ the slave boson (SB) technique in a mean field approximation (MFA) and generalize it to a non-equilibrium situation. This MFA allows us to include nonperturbatively the interdot tunneling term (i.e., coherence between dots). The different physical regimes that appear as the ratio $`\tau _c=t_C/\mathrm{\Gamma }`$ changes ($`t_C`$ is the interdot tunneling coupling and $`\mathrm{\Gamma }`$ is the single particle broadening coming from the coupling to leads) can be explored by measuring the non-linear transport properties of the system. Our results can be summarized in Figs. 1 and 2: the differential conductance dI/dV of the DQD directly measures the transition (as $`\tau _c`$ increases) from two isolated Kondo impurities to a coherent superposition of the many-body Kondo states of each dot, which form bonding and anti-bonding combinations. This coherent state which occurs for $`\tau _c>1`$ is reflected as a splitting of the zero-bias anomaly in the dI/dV curves. This splitting depends non-trivially on the voltage and on the many-body parameters of the problem. For large voltages, we find that there is a critical voltage above which the coherent configuration is unstable and the physical behaviour of the system again resembles that of two decoupled QD’s, i.e., two Kondo singularities pinned at each chemical potential, even for $`\tau _c>1`$. This instability is reflected as a drastic drop of the current leading to singular regions of negative differential conductance (NDC).
Model: In typical experiments, $`U_{\mathrm{intradot}},\mathrm{\Delta }ϵ>>T`$ ($`U_{\mathrm{intradot}}`$ is the strong on-site Coulomb interaction on each dot, $`\mathrm{\Delta }ϵ`$ is the average level separation), which allows one to consider a single state in each QD. We can model the DQD with an (N=2)-fold degenerate two-impurity Anderson hamiltonian with an extra term accounting for interdot tunneling. Each impurity is coupled to a different Fermi sea of chemical potential $`\mu _L`$ and $`\mu _R`$, respectively. In the limit $`U_{\mathrm{intradot}}\to \infty `$ (on each QD) and $`U_{\mathrm{interdot}}\to 0`$, the hamiltonian may be written in terms of auxiliary SB operators plus constraints:
$`H`$ $`=`$ $`{\displaystyle \sum _{k_{\alpha \in \{L,R\}},\sigma }}ϵ_{k_\alpha }c_{k_\alpha ,\sigma }^{}c_{k_\alpha ,\sigma }+{\displaystyle \sum _{\alpha \in \{L,R\},\sigma }}ϵ_{\alpha \sigma }f_{\alpha \sigma }^{}f_{\alpha \sigma }`$
$`+`$ $`{\displaystyle \frac{t_C}{N}}{\displaystyle \sum _\sigma }(f_{L\sigma }^{}b_Lb_R^{}f_{R\sigma }+f_{R\sigma }^{}b_Rb_L^{}f_{L\sigma })`$
$`+`$ $`{\displaystyle \frac{1}{\sqrt{N}}}{\displaystyle \sum _{k_{\alpha \in \{L,R\}},\sigma }}V_\alpha (c_{k_\alpha ,\sigma }^{}b_\alpha ^{}f_{\alpha \sigma }+f_{\alpha \sigma }^{}b_\alpha c_{k_\alpha ,\sigma })`$
$`+`$ $`{\displaystyle \sum _{\alpha \in \{L,R\}}}\lambda _\alpha ({\displaystyle \sum _\sigma }f_{\alpha \sigma }^{}f_{\alpha \sigma }+b_\alpha ^{}b_\alpha -1).`$ (1)
$`c_{k_\alpha ,\sigma }^{}(c_{k_\alpha ,\sigma })`$ are the creation (annihilation) operators for electrons in the lead $`\alpha `$. To simplify the notation we consider henceforth that $`V_L=V_R=V_0`$ and $`ϵ_{L\sigma }=ϵ_{R\sigma }=ϵ_0`$ (i.e., $`T_K`$ is the same for both dots at equilibrium. The generalization to different $`T_K`$’s is straightforward). The even-odd symmetry is broken by the interdot coupling $`t_C`$. In the SB representation, the annihilation operator for electrons in the QD’s, $`c_{\alpha \sigma }`$, is decomposed into the SB operator $`b_\alpha ^{}`$ which creates an empty state and a pseudo fermion operator $`f_{\alpha \sigma }`$ which annihilates the singly occupied state with spin $`\sigma `$ in the dot $`\alpha `$: $`c_{\alpha \sigma }\to b_\alpha ^{}f_{\alpha \sigma }`$ ($`c_{\alpha \sigma }^{}\to f_{\alpha \sigma }^{}b_\alpha `$). In the last term of (1), the charge operator $`\widehat{Q}_\alpha =\sum _\sigma f_{\alpha \sigma }^{}f_{\alpha \sigma }+b_\alpha ^{}b_\alpha `$ has been introduced. This term represents the constraint $`\widehat{Q}_\alpha =1`$ in each dot with Lagrange multiplier $`\lambda _\alpha `$. This constraint prevents double occupancy in the limit $`U\to \infty `$.
Solution: In the lowest order, we assume that the SB operator is a constant c-number, $`b_\alpha (t)/\sqrt{N}\to \langle b_\alpha (t)\rangle /\sqrt{N}=\stackrel{~}{b}_\alpha `$, neglecting the fluctuations around the average $`\langle b_\alpha (t)\rangle `$ of the SB. At T=0, this MFA is correct for describing spin fluctuations (Kondo regime). Mixed-Valence behavior (characterized by strong charge fluctuations) cannot be described by the MFA. This restricts our non-equilibrium calculation to low voltages $`V<<|ϵ_0|`$. Charge fluctuations can be included as thermal or quantum fluctuations ($`1/N`$ corrections). Defining $`\stackrel{~}{V}_\alpha =V_0\stackrel{~}{b}_\alpha `$ and $`\stackrel{~}{t}_C=t_C\stackrel{~}{b}_L\stackrel{~}{b}_R`$ we obtain from the constraints and the equation of motion of the SB operators the selfconsistent set of four equations with four unknowns ($`\stackrel{~}{b}_L,\stackrel{~}{b}_R,\lambda _L,\lambda _R`$):
$`\stackrel{~}{b}_{L(R)}^2+{\displaystyle \frac{1}{N}}{\displaystyle \sum _\sigma }\langle f_{L(R)\sigma }^{}f_{L(R)\sigma }\rangle ={\displaystyle \frac{1}{N}}`$
$`{\displaystyle \frac{\stackrel{~}{V}_{L(R)}}{N}}{\displaystyle \sum _{k_{L(R)},\sigma }}\langle c_{k_{L(R)},\sigma }^{}f_{L(R)\sigma }\rangle `$
$`+{\displaystyle \frac{\stackrel{~}{t}_C}{N}}{\displaystyle \sum _\sigma }\langle f_{R(L)\sigma }^{}f_{L(R)\sigma }\rangle +\lambda _{L(R)}\stackrel{~}{b}_{L(R)}^2=0`$ (2)
In order to solve (2) we need to calculate the non-equilibrium distribution functions: $`G_{\alpha \sigma ,k_\alpha ^{^{}}\sigma }^<(t-t^{})\equiv i\langle c_{k_\alpha ^{^{}}\sigma }^{}(t^{})f_{\alpha \sigma }(t)\rangle `$ and $`G_{\alpha \sigma ,\alpha ^{^{}}\sigma }^<(t-t^{})\equiv i\langle f_{\alpha ^{^{}}\sigma }^{}(t^{})f_{\alpha \sigma }(t)\rangle `$. They can be derived by applying the analytic continuation rules of Ref. to the equation of motion of the time-ordered Green’s function along a complex contour (Keldysh, Kadanoff-Baym or a more general choice). This allows us to relate $`G_{\alpha \sigma ,k_\alpha ^{^{}}\sigma }^<(t-t^{})`$ with $`G_{\alpha \sigma ,\alpha ^{^{}}\sigma }^<(t-t^{})`$ and $`G_{\alpha \sigma ,\alpha ^{^{}}\sigma }^r(t-t^{})\equiv -i\theta (t-t^{})\langle \{f_{\alpha \sigma }(t),f_{\alpha ^{^{}}\sigma }^{}(t^{})\}\rangle `$ and close the set of equations (2) in Fourier space:
$`{\displaystyle \frac{\stackrel{~}{\mathrm{\Gamma }}_{L(R)}}{\mathrm{\Gamma }}}-i{\displaystyle \int \frac{dϵ}{2\pi }G_{L,L(R,R)}^<(ϵ)}={\displaystyle \frac{1}{N}}`$
$`{\displaystyle \frac{\stackrel{~}{\mathrm{\Gamma }}_{L(R)}}{\mathrm{\Gamma }}}(\stackrel{~}{ϵ}_{L(R)}-ϵ_0)=i{\displaystyle \int \frac{dϵ}{2\pi }G_{L,L(R,R)}^<(ϵ)(ϵ-\stackrel{~}{ϵ}_{L(R)})},`$ (3)
with $`\stackrel{~}{ϵ}_\alpha =ϵ_0+\lambda _\alpha `$ and $`\stackrel{~}{\mathrm{\Gamma }}_\alpha =\stackrel{~}{b}_\alpha ^2\mathrm{\Gamma }`$. For $`t_C=0`$, $`\stackrel{~}{ϵ}_\alpha `$ and $`\stackrel{~}{\mathrm{\Gamma }}_\alpha `$ give, respectively, the position and the width of the Kondo peaks in the dot $`\alpha `$ (at equilibrium, and in the Kondo regime, $`\sqrt{\stackrel{~}{ϵ}_\alpha ^2+\stackrel{~}{\mathrm{\Gamma }}_\alpha ^2}\equiv T_K^0=De^{-\pi |ϵ_0|/\mathrm{\Gamma }}`$). The distribution functions in the QD’s are: $`G_{L,L(R,R)}^<(ϵ)=\frac{2i(\stackrel{~}{\mathrm{\Gamma }}_{L(R)}f_{L(R)}(ϵ)[(ϵ-\stackrel{~}{ϵ}_{R(L)})^2+\stackrel{~}{\mathrm{\Gamma }}_{R(L)}^2]+\stackrel{~}{\mathrm{\Gamma }}_{R(L)}f_{R(L)}(ϵ)\stackrel{~}{t}_C^2)}{[(ϵ-\stackrel{~}{ϵ}_L+i\stackrel{~}{\mathrm{\Gamma }}_L)(ϵ-\stackrel{~}{ϵ}_R+i\stackrel{~}{\mathrm{\Gamma }}_R)-\stackrel{~}{t}_C^2][(ϵ-\stackrel{~}{ϵ}_L-i\stackrel{~}{\mathrm{\Gamma }}_L)(ϵ-\stackrel{~}{ϵ}_R-i\stackrel{~}{\mathrm{\Gamma }}_R)-\stackrel{~}{t}_C^2]}`$. $`f_{L(R)}(ϵ)`$ is the Fermi-Dirac function in the left (right) lead. Note that the presence of $`\stackrel{~}{t}_C^2`$ in the denominators indicates that the interdot tunneling enters nonperturbatively in the calculations and, then, coherent effects between dots are fully included. Due to the interdot tunneling, the Kondo singularities of each dot at $`\stackrel{~}{ϵ}_L`$ and $`\stackrel{~}{ϵ}_R`$ combine into coherent superpositions at $`ϵ_\pm =\frac{1}{2}\{(\stackrel{~}{ϵ}_L+\stackrel{~}{ϵ}_R)\pm \sqrt{(\stackrel{~}{ϵ}_L-\stackrel{~}{ϵ}_R)^2+4\stackrel{~}{t}_C^2}\}`$. Of course, at equilibrium $`\stackrel{~}{b}_L=\stackrel{~}{b}_R=\stackrel{~}{b}`$, $`\lambda _L=\lambda _R=\lambda `$, we recover the results of Refs.. Note that the formation of coherent superpositions of the Kondo singularity is not trivially related with its single-particle counterpart (formation of bonding and antibonding states at $`ϵ_0\pm t_C`$).
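The positions $`ϵ_\pm `$ follow from diagonalizing the effective two-level problem; a minimal sketch (with illustrative numbers, and `coherent_levels` as our own helper name):

```python
import math

def coherent_levels(eL, eR, tC):
    """Bonding/antibonding positions
    e_pm = ((eL + eR) +/- sqrt((eL - eR)^2 + 4 tC^2)) / 2,
    returned as (e_minus, e_plus)."""
    s = math.sqrt((eL - eR) ** 2 + 4.0 * tC ** 2)
    return ((eL + eR) - s) / 2.0, ((eL + eR) + s) / 2.0

# Equilibrium case: e_L = e_R, so the splitting is 2 * tC_tilde.
e_minus, e_plus = coherent_levels(0.0, 0.0, 0.5)
splitting = e_plus - e_minus

# Generic (out-of-equilibrium) case with illustrative level positions.
e2_minus, e2_plus = coherent_levels(0.2, -0.2, 0.3)
```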
Let us focus for simplicity on the equilibrium case ($`\stackrel{~}{ϵ}_L=\stackrel{~}{ϵ}_R`$), where the splitting is given by $`\delta \equiv ϵ_+-ϵ_{}=2\stackrel{~}{t}_C`$, which is a many-body parameter (given by the strong renormalization of the interdot tunneling due to the Kondo effect). $`\delta `$ depends non-linearly on the single-particle splitting $`\delta _0=2t_C`$ (see Inset of Fig. 3a). In the Kondo limit, $`\{[(\stackrel{~}{ϵ}+\stackrel{~}{t}_C)^2+\stackrel{~}{\mathrm{\Gamma }}^2][(\stackrel{~}{ϵ}-\stackrel{~}{t}_C)^2+\stackrel{~}{\mathrm{\Gamma }}^2]\}^{1/4}=T_K^0e^{\frac{\pi t_C}{\mathrm{\Gamma }}(\frac{\stackrel{~}{\mathrm{\Gamma }}}{\mathrm{\Gamma }}-\frac{1}{2})}`$. From the solution of Eq. (3) we obtain the current $`I=\frac{2e}{\hbar }Re\{\sum _{k_L,\sigma }\stackrel{~}{V}_LG_{L\sigma ,k_L\sigma }^<(t,t)\}`$ and the DOS in each QD: $`\rho _{L(R)}(ϵ)=-\frac{1}{\pi }Im\{\frac{\stackrel{~}{b}_{L(R)}^2(ϵ-\stackrel{~}{ϵ}_{R(L)}+i\stackrel{~}{\mathrm{\Gamma }}_{R(L)})}{[(ϵ-\stackrel{~}{ϵ}_L+i\stackrel{~}{\mathrm{\Gamma }}_L)(ϵ-\stackrel{~}{ϵ}_R+i\stackrel{~}{\mathrm{\Gamma }}_R)-\stackrel{~}{t}_C^2]}\}`$.
Results: We solve numerically (for T=0) the set of non-linear equations (3) for different voltages $`\mu _L=V/2`$ and $`\mu _R=-V/2`$, $`ϵ_0=-3.5,D=60`$ (Kondo regime with $`T_K^0\approx 10^{-3}`$) and different values for the rest of the parameters (all energies in units of $`\mathrm{\Gamma }`$). Depending on the ratio $`\tau _c=t_C/\mathrm{\Gamma }`$ we find two different physical scenarios for $`\tau _c<1`$ and $`\tau _c\ge 1`$.
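For these parameter values the quoted Kondo scale can be checked directly (a small sketch; $`\mathrm{\Gamma }=1`$ in these units):

```python
import math

# Energies in units of Gamma (so Gamma = 1 here).
eps0 = -3.5   # dot level
D = 60.0      # band cutoff

# Equilibrium Kondo scale of a single dot, T_K^0 = D * exp(-pi*|eps0|/Gamma).
T_K0 = D * math.exp(-math.pi * abs(eps0))  # ~1e-3, as quoted
```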
In Fig. 1 we plot the I-V curves (for clarity, we show only the $`V\ge 0`$ region) for $`\tau _c\le 1`$. The two main features of these curves are: i) An increase of the linear conductance $`𝒢=dI/dV|_{V=0}`$ as $`\tau _c`$ increases; ii) a saturation, followed by a drop, of the current for large voltages. This drop sharpens as $`\tau _c\to 1`$. These features are more pronounced in a plot of the dI/dV (inset of Fig. 1). As $`\tau _c`$ increases, the zero-bias anomaly (originating from the Kondo resonance in the DOS of the dots) becomes broader and broader until it saturates into a flat region of value $`2e^2/h`$ (unitary limit) for $`\tau _c=1`$. The reduction of the current at larger V is reflected as NDC regions in the dI/dV curves. For $`\tau _c=1`$ this NDC becomes singular. For $`\tau _c>1`$, and contrary to the previous case, $`𝒢`$ decreases for increasing values of $`\tau _c`$ (Fig. 2a). This reduction of $`𝒢`$ can be attributed to the formation of the coherent superposition of the Kondo states. This can be clearly seen as a splitting $`\mathrm{\Delta }=2\delta `$ in the dI/dV curves (Fig. 2c): Increasing $`\tau _c`$, the zero-bias conductance decreases whereas two maxima at $`\pm V_{peak}`$ show up (the arrow shows the splitting $`\mathrm{\Delta }=2V_{peak}`$ for the maximum value of $`\tau _c`$ in the figure). Fig. 2c demonstrates that the dI/dV curves of a DQD in the Kondo regime directly measure the coherent combination between the two many-body states in the QD’s. For larger voltages, the sharp drop of the current (Fig. 2a) is reflected as strong NDC singularities in the dI/dV curves (Fig. 2b).
The position of these singularities moves towards higher $`|V|`$ as $`\tau _c`$ increases. In order to explain the results of Figs. 1 and 2, we plot in Fig. 3a $`ϵ_\pm `$ as a function of $`V\geq 0`$ for different values of $`\tau _c`$. For $`\tau _c=0`$ (thick solid line), this corresponds to a plot of $`\stackrel{~}{ϵ}_L`$ and $`\stackrel{~}{ϵ}_R`$ (i.e. the positions of the Kondo resonances for the decoupled QD’s) as a function of $`V`$. We obtain, as expected, that each Kondo resonance is pinned at the chemical potential of its own lead, $`\stackrel{~}{ϵ}_L=\mu _L=V/2`$ and $`\stackrel{~}{ϵ}_R=\mu _R=-V/2`$. As the interdot coupling is turned on, the voltage dependence becomes strongly non-linear. For low V, the curves for $`\tau _c\neq 0`$ do not coincide with the curves for $`\tau _c=0`$ (i.e., $`\mu _{L/R}`$). This situation, however, changes as we increase $`V`$; the level positions $`ϵ_\pm `$ converge towards the chemical potentials $`\mu _{L/R}`$ in a non-trivial way.
The voltage $`V`$ for which $`ϵ^+-ϵ^{}`$ coincides with the chemical potential difference $`V`$ gives the position of the peak in the positive side of the dI/dV (Fig. 2c). This voltage is the solution of the equation $`\delta (V_{\mathrm{peak}})=V_{\mathrm{peak}}`$ where $`\delta (V)\equiv \sqrt{(\stackrel{~}{ϵ}_L-\stackrel{~}{ϵ}_R)^2+4\stackrel{~}{t}_C^2}`$, with $`\stackrel{~}{ϵ}_{L/R}`$ given by Eq. (3). Note the implicit (and non-trivial) voltage dependence of $`\delta (V)`$. $`\stackrel{~}{\mathrm{\Gamma }}_L`$, $`\stackrel{~}{\mathrm{\Gamma }}_R`$ and $`\stackrel{~}{t}_C`$ follow a similar behavior as a function of $`V`$ (Figs. 3b, 3c and 3d). For $`V\geq V_{\mathrm{peak}}`$, we find numerically that $`\delta (V)\approx V`$, a relationship that becomes asymptotically exact as $`V\to \mathrm{}`$. The equation $`\delta (V)=V`$ has stable solutions $`\stackrel{~}{t}_C\neq 0`$ for $`[\frac{(\stackrel{~}{ϵ}_L-\stackrel{~}{ϵ}_R)}{V}]^2<1`$, while for $`[\frac{(\stackrel{~}{ϵ}_L-\stackrel{~}{ϵ}_R)}{V}]^2>1`$, the only stable solution is $`\stackrel{~}{t}_C=0`$, corresponding to current $`I=0`$. We denote the crossover voltage where $`[\frac{(\stackrel{~}{ϵ}_L-\stackrel{~}{ϵ}_R)}{V}]^2=1`$ by $`V^{}`$. For finite voltages $`V>V_{\mathrm{peak}}`$, on the other hand, the relation $`\delta (V)=V`$ is only approximate, so that at the crossover $`V^{}`$ the quantity $`\stackrel{~}{t}_C`$, and hence $`I`$, drops to a much smaller, but still finite, value instead. Nevertheless the crossover at $`V\approx V^{}`$ still indicates the beginning of the NDC region.
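As a self-contained numerical illustration of the peak condition $`\delta (V_{\mathrm{peak}})=V_{\mathrm{peak}}`$ (a sketch only: in the actual calculation the renormalized levels and hopping are voltage-dependent outputs of the self-consistent Eq. (3); here we simply assume a made-up pinning law with an interpolation parameter `alpha` between no pinning, `alpha = 0`, and full pinning, `alpha = 1`), the peak can be located by bisection:

```python
import math

def delta(v, alpha, t_c):
    # Assumed toy pinning law eps_{L/R}(V) = +/- alpha*V/2 with a fixed,
    # V-independent renormalized hopping t_c (NOT the self-consistent values).
    eps_l, eps_r = alpha * v / 2.0, -alpha * v / 2.0
    return math.sqrt((eps_l - eps_r) ** 2 + 4.0 * t_c ** 2)

def v_peak(alpha, t_c, v_max=1.0e3, tol=1.0e-9):
    """Bisection on f(V) = delta(V) - V, which is positive at V = 0 and,
    for alpha < 1, negative at large V, so a unique root exists."""
    lo, hi = 0.0, v_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delta(mid, alpha, t_c) - mid > 0.0:
            lo = mid   # delta still above V: the peak lies at larger V
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this assumed form the stability condition $`[(\stackrel{~}{ϵ}_L-\stackrel{~}{ϵ}_R)/V]^2<1`$ reduces to `alpha**2 < 1`, and the bisection result can be checked against the closed form `2*t_c/math.sqrt(1 - alpha**2)`.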
To illustrate this, we plot in Fig. 4 the left and right QD’s DOS for $`\tau _c=1`$. At equilibrium (V=0), the Kondo singularity at $`ϵ=0`$ splits into the $`ϵ_\pm `$ combinations. For $`V/T_K^0=2`$ the coherence is still preserved, but the physical picture utterly changes for higher voltages ($`V/T_K^0=4`$ and $`V/T_K^0=6`$). In this case, the previous configuration is no longer stable, the coherence between the dots is lost ($`\stackrel{~}{t}_C\to 0`$), the dots are almost decoupled and the Kondo resonances in each dot are pinned again at their own chemical potential: the weight of the left (right) DOS at $`ϵ=\mu _{R(L)}`$ is almost zero (even though $`\tau _c=1`$).
This instability resembles that of the SB at $`T\neq 0`$ in the single impurity Anderson hamiltonian. In the MFA the SB behaves as the order parameter associated with the conservation of $`Q`$. When $`\stackrel{~}{b}\neq 0`$ the gauge symmetry $`b\to be^{i\theta }`$, $`f\to fe^{i\theta }`$ associated with charge conservation is broken, and the MFA has two phases, $`\stackrel{~}{b}\neq 0`$ and $`\stackrel{~}{b}=0`$, separated by a second order phase transition. It is important to point out that the fluctuations do not destroy completely this $`\stackrel{~}{b}\neq 0`$ behavior (the SB fluctuations develop power law behavior, replacing the transition by a smooth crossover). We speculate that in our problem this zero-temperature transition at finite V may also be robust against fluctuations, but $`1/N`$ corrections are needed to substantiate this argument. Work in this direction is in progress.
In closing, we have demonstrated that the non-linear transport properties $`(dI/dV)`$ of a DQD in the Kondo regime directly measure the transition (as $`t_C`$ increases) from two isolated Kondo impurities to a coherent bonding and antibonding superposition of the many-body Kondo states of each dot. While for $`t_C<\mathrm{\Gamma }`$ the conductance maximum is at $`V=0`$, for $`t_C>\mathrm{\Gamma }`$ the transport is optimized for a finite $`V`$ matching the splitting between these two bonding and antibonding states. For large voltages (and $`t_C\gtrsim \mathrm{\Gamma }`$) there is a critical voltage above which the coherent superposition is unstable and the physical behavior of the system again resembles that of two decoupled QD’s. This leads to a strong reduction of the current and singular regions of negative differential conductance. Concerning the observability of these effects: in our MFA the maximum value of $`\delta `$ ranges from $`\delta \approx 20T_K^0`$ to $`500T_K^0`$ (inset of Fig. 3a) giving, for the experiment of Ref. ($`\mathrm{\Gamma }\approx 150\mu eV`$), $`\delta \approx 3\mu eV`$ to $`75\mu eV`$ (30 mK to 750 mK), which is within the resolution limits of present day techniques.
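The quoted temperature window follows from the conversion $`E=k_BT`$ with $`k_B\approx 86.2\mu eV/K`$; a quick arithmetic check (the modest rounding explains the slight difference from the 30 mK and 750 mK figures above):

```python
K_B_UEV_PER_K = 86.17  # Boltzmann constant in micro-eV per kelvin

def uev_to_mk(e_uev):
    """Convert an energy scale in micro-eV to a temperature in mK via E = k_B T."""
    return e_uev / K_B_UEV_PER_K * 1000.0

low, high = uev_to_mk(3.0), uev_to_mk(75.0)
# roughly 35 mK and 870 mK, the same order as the quoted 30 mK - 750 mK window
```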
This work was supported by the NSF grant DMR 97-08499, DOE grant DE-FG02-99ER45970 and by the MEC of Spain grant PF 98-07497938.
# Theoretical perspectives
## 1. Introduction
In the case of observations it is easy and meaningful to talk about perspectives. Plans are made for new satellites, new telescopes and new instruments, and from their specifications one can make an educated guess about what new observations will be made and extrapolate to what new information they will bring us (knowing well that some surprises might be in store as a bonus). This is not the case for theory, the progress of which mainly depends on bright new ideas, which are of course impossible to predict.
Theoreticians, however, have one tool, the computer, whose progress over the past few decades has been tremendous, and about whose future advances it is possible to make some predictions. This is true even for personal computers or workstations, but particularly so for machines on which one can perform CPU-intensive numerical simulations. I will thus devote the next section to these types of machines. I will then briefly discuss some theoretical problems of particular interest, on which important progress could be made in the next few years, particularly with the help of numerical simulations.
## 2. Computers for CPU-intensive calculations
The evolution of computers over the last half-century has been amazing, and the numerical simulations it allowed have been the source of important progress in galactic dynamics. Very large, CPU-intensive calculations are possible on mainly three types of computers, whose advantages and disadvantages will be considered in turn.
### 2.1. Supercomputers
Supercomputers are facilities which are either national or at least institutional. As such, they are run by an operating team and the user does not have to worry about hardware maintenance. They also provide good computer libraries and manuals, greatly facilitating the programming task, while the operating team is often available for advice. Furthermore they often have very large memories. They can, in principle, be used for a very large variety of programs. Finally the rapid recent increase in communication speeds has greatly facilitated the use of these facilities when not in-house.
As disadvantages let us mention the large purchase and running cost, the relatively small flexibility of use (one has to make proposals at given deadlines, sometimes far in advance) and the fact that software tailored for them is non-portable, since it is largely based on their specific libraries. As a consequence, in many countries they start to be phased out in favour of smaller, more dedicated machines, and this tendency will probably be accelerated in the future.
### 2.2. Beowulf-type systems
Beowulf is a name commonly given to a computer consisting of a large number of PCs, coupled by a dedicated and fast network (cf. www.beowulf.org).
Their relatively low price makes it possible for small institutions or departments to acquire them, provided that some engineering personnel is available, or that a few astronomers are ready to invest some of their time. They are somewhat more difficult to program on than supercomputers, since they do not have as efficient libraries, but this is often compensated by their in-house availability and their very good price-to-performance ratio. Furthermore software written for one such system can be relatively easily used on any other.
It is thus easy to predict that such systems will become more and more frequent, and reach ever-increasing performances due to the amazing advances in PC technology.
### 2.3. GRAPE systems
GRAPEs - for GRAvity piPEs - are special purpose boards on which is cabled the most CPU-consuming part of N-body calculations, namely the calculation of the gravitational force. They are coupled to a standard workstation via an Sbus/VMEbus, or a PCI bus interface. The host computer provides the GRAPE with the masses and the positions of all the particles, and the GRAPE calculates and returns the accelerations and the potentials. These boards are developed by a group in Tokyo University, headed initially by D. Sugimoto, and now by J. Makino. The history of the GRAPE project, starting more than 10 years ago with GRAPE-1, is given by Makino & Taiji (1998). There are essentially two families of GRAPEs, those with odd numbers, that have limited precision, and those with even numbers, which have high precision.
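For orientation, what a GRAPE board hard-wires is the $`O(N^2)`$ direct summation of softened pairwise gravity: the host sends masses and positions, and the board returns accelerations and potentials. A plain (and deliberately slow) sketch of that kernel, with an assumed Plummer softening and $`G=1`$ units:

```python
def accel_and_potential(masses, pos, eps=1.0e-2):
    """Direct O(N^2) softened gravity: for each particle i, the acceleration
    vector and potential that a GRAPE board would hand back to the host.
    pos is a list of (x, y, z) tuples; G = 1 units; eps is a Plummer softening."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    pot = [0.0] * n
    for i in range(n):
        xi, yi, zi = pos[i]
        for j in range(n):
            if j == i:
                continue
            dx, dy, dz = pos[j][0] - xi, pos[j][1] - yi, pos[j][2] - zi
            r2 = dx * dx + dy * dy + dz * dz + eps * eps
            inv_r = r2 ** -0.5
            inv_r3 = inv_r / r2
            acc[i][0] += masses[j] * dx * inv_r3
            acc[i][1] += masses[j] * dy * inv_r3
            acc[i][2] += masses[j] * dz * inv_r3
            pot[i] -= masses[j] * inv_r
        # (the neighbour list returned by the odd-numbered GRAPEs would be
        #  collected here, by flagging pairs with r2 below a search radius)
    return acc, pot
```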
#### GRAPE-5
The latest arrival in the family of the odd-numbered GRAPEs is GRAPE-5 (Kawai et al. 1999), and it follows to a large extent the architecture of GRAPE-3. As all other GRAPEs, it calculates the forces and potentials from a set of particles, and also gives the list of nearest neighbours, particularly useful when doing SPH or sticky particle calculations. It has a peak performance of 38.4 Gflops per board and a clock frequency of 80 MHz. Each board has 8 processor chips, and each chip has 2 pipelines. It is coupled to its host computer via a PCI interface.
GRAPE-5 is a vast improvement with respect to GRAPE-3. It is 8 times faster and roughly 10 times more accurate. The communication speed has also improved by an order of magnitude, while the size of the neighbour list is considerably lengthened, so that it can hold up to 32768 neighbours for 48 particles, thus rendering particle-hydro simulations much easier to program. At the time this talk was given, only the prototype GRAPE-5 had been tried out. As I am writing these lines several GRAPE-5 boards are already in use both in Komaba (Tokyo University) and the Observatoire de Marseille, while several more groups make plans to acquire them. Tokyo University has plans for building a massively parallel GRAPE-5 system with a peak performance of about 1 Tflops. On such a system one treecode time-step for 10<sup>7</sup> particles should take about 10 secs.
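The quoted figure of about 10 seconds per treecode step for 10<sup>7</sup> particles on a 1 Tflops system can be checked with a back-of-the-envelope flop count; the interaction and flop rates below are rough guesses of ours, not measured numbers, and real runs are slower because of host-side tree construction and communication:

```python
def treecode_step_seconds(n_particles, interactions_per_particle=2000,
                          flops_per_interaction=50, sustained_flops=1.0e12):
    """Order-of-magnitude wall time for one treecode force step.
    All three rate parameters are assumptions, chosen only for illustration."""
    total_flops = n_particles * interactions_per_particle * flops_per_interaction
    return total_flops / sustained_flops

t = treecode_step_seconds(1.0e7)
# ~1 s of pure force work at 1 Tflops with these guesses; overheads and
# efficiency factors plausibly bring this to the ~10 s quoted above
```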
#### GRAPE-6
GRAPE-6 will be the successor of GRAPE-4, whose architecture it is basically following. It calculates not only the potential and the force, but also the derivative of the force, thus allowing the implementation of individual time-step schemes (e.g. Makino & Aarseth 1992). A single GRAPE-6 chip should be the CPU equivalent of a whole GRAPE-4 board. The chip is presently in its testing phases, and should be commercially available by 2001. Both single chip units (baby-6), and single board units (junior-6, with 16 chips) are planned.
#### PROGRAPE-1
In particle hydrodynamics, GRAPEs are used only to calculate the gravitational forces and the list of nearest neighbours. The actual evaluation of the SPH interactions is done on the host computer, thus hampering the performance. Nevertheless building a special purpose SPH machine may not be a good idea, since there are a large number of varieties of particle hydrodynamics, and each would necessitate its own GRAPE implementation. It is thus preferable to have recourse to reconfigurable computing, or field-programmable gate arrays (FPGA). Such chips, also called programmable chips, consist of logic elements and a switching matrix to connect them, and their logic can thus be reconfigured.
In order to reduce both the work of the designer and that of the application programmer, PROGRAPE is specialised to a limited range of problems, namely the evaluation of particle-particle interactions. The application programmer has to change only the functional form of the interaction. It is thus in a way intermediate between the standard GRAPE systems and general purpose computers. Another project for SPH FPGAs is being developed in a collaboration between groups in Heidelberg and Mannheim. The Tokyo group, after completing PROGRAPE-1 (Hamada et al. 1999), is now starting on PROGRAPE-2, a massively parallel extension of PROGRAPE-1, which should achieve somewhere between 1 and 10 Tflops, and be available in a couple of years.
#### Advantages and disadvantages of GRAPE systems
GRAPE systems are of course limited to N-body type simulations, and thus should not be purchased by groups having other types of CPU-intensive calculations. One of their big advantages is that they are within the reach of a small group or department, while their availability makes it possible to envisage long-term projects. To this one should add their excellent price-to-performance ratio, as witnessed by the two Gordon Bell prizes they have won so far. Finally users of GRAPE facilities form a small community with close links, discussing their hardware and software environments, helping each other along, and often exchanging software. For all these reasons I wholeheartedly recommend GRAPE systems to groups which perform CPU-intensive N-body simulations and have a sufficient level of computer knowledge.
We have thus seen that both beowulf-type systems and GRAPEs have important advantages. The choice between the two depends basically on the type of applications (mainly N-body or a broader spectrum) and on personal preference. It is not, however, necessary to choose between the two, since it is possible to envisage a beowulf-type system with GRAPE boards attached to some or all of its nodes. On a similar line the National Observatory of Japan in Mitaka has plans for connecting sixteen GRAPE-5 boards to a supercomputer.
## 3. Problems of particular interest
### 3.1. Dark matter
Although dark halos have been with us for over twenty years, there is still a lot we do not know, or do not understand about them. They were first introduced in the seventies by Ostriker & Peebles (1973) as a way of stabilising discs against the ubiquitous bar instability. Today it is understood that they can achieve this, in the linear regime, only if they are sufficiently concentrated to cut the swing amplifier cycle (Toomre 1981), or, in the non-linear regime, to prohibit the incoming waves from tunneling through to the center of the galaxy. Although the extent and amount of mass in the outer halo is irrelevant to the bar instability, it is crucial for a lot of other dynamical issues.
Even in disc galaxies, where extended HI rotation curves have clearly shown the necessity of an extended dark matter halo (e.g. Bosma 1981; unless one allows for a modified gravity), there are still a number of unanswered questions. One of the most crucial ones concerns the disc-to-halo mass ratio in the main body of the galaxy. Are discs maximum? Or are they of relatively low mass, their dynamics to a large extent dominated by the massive halo? Several arguments, both theoretical and observational, have been advanced, and yet the answer is still not clear. Thus, for example, if discs were not sufficiently heavy, 2-armed structures could not form in them (Athanassoula, Bosma & Papaioannou 1987), while bars would decelerate relatively fast and end up as slow rotators (Debattista & Sellwood 1998), in both cases contrary to observations. On the other hand measurements of velocity dispersions in discs (e.g. Bottema 1993; see also Bosma in these proceedings) favour non-maximum discs, arguing that massive discs would lead to very low values of the Toomre Q parameter (Toomre 1964). Further arguments based on the Tully-Fisher relation come against the maximum disc hypothesis (e.g. Courteau & Rix 1999). Finally cosmological N-body simulations predict less than maximum discs (e.g. Navarro 1998). How can all these be reconciled? Are galactic discs maximum or not? Certainly more work is necessary here to better understand the effect of halos on the dynamics of disc galaxies, and thus their masses.
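The disc-halo degeneracy behind the maximum-disc question can be stated in one line: at each radius the observed circular speed must satisfy $`v_c^2=v_{disc}^2+v_{halo}^2`$, and rescaling the disc mass-to-light ratio by a factor $`f`$ rescales $`v_{disc}`$ by $`\sqrt{f}`$, with the halo absorbing the remainder. A toy decomposition at a single radius (the numbers in the usage note are purely illustrative):

```python
import math

def halo_speed(v_circ, v_disc_ml1, ml_ratio):
    """Halo rotation speed required at one radius, given the observed circular
    speed v_circ, the disc contribution v_disc_ml1 computed for M/L = 1, and a
    trial mass-to-light ratio.  Returns None if the disc alone overshoots."""
    v_disc2 = ml_ratio * v_disc_ml1 ** 2   # v_disc scales as sqrt(M/L)
    rest = v_circ ** 2 - v_disc2
    return math.sqrt(rest) if rest >= 0.0 else None
```

For example, with an observed 200 km/s and a disc contributing 120 km/s at M/L = 1, a "maximum disc" with M/L near (200/120)² ≈ 2.8 leaves almost nothing for the halo, while M/L = 1 requires a 160 km/s halo; rotation-curve data alone cannot distinguish the two, which is exactly the ambiguity discussed above.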
### 3.2. Evolution of galaxies
Recent observations with the HST, and in the future with the NGST, and with large ground-based telescopes, provide us with information on the properties of galaxies at high $`z`$. We now know more about both their morphology and their kinematics. As implied by the title of this conference, it is one of our main tasks to understand how the morphology and dynamics of galaxies changes with time. As long as such observational data did not exist, the only constraint on evolutionary scenarios was that they had to match observations at $`z=0`$. Observations at higher redshifts make the work of theoreticians more daunting and at the same time more interesting.
For example Abraham et al. (1999) argued that very few barred galaxies can be found at high $`z`$. Since interactions drive bar formation (Noguchi 1987, Gerin, Combes & Athanassoula 1990), wouldn’t it be reasonable to expect more bars at higher redshifts? Several answers can be proposed. One possibility would be that at higher redshifts discs had lower surface densities (since their mass can be assumed to grow in time until its present level). In that case multi-armed structures would be favoured over 2-armed ones. Since such patterns necessarily have inner Lindblad resonances and a small extent between their inner and outer such resonances, one would expect fragmentary multi-armed episodes, driven by interactions, rather than bars, in good agreement with observations at higher $`z`$. This suggestion merits further work, which, together with other scenarios, would lead to a better understanding of the morphology of disc galaxies at high redshifts.
### 3.3. Dynamics of bars
The life of a bar has several episodes: its formation, evolution, possible destruction and perhaps regeneration. All have parts which are poorly understood, but this is particularly true for the third and, even more, the fourth episode.
A bar may be destroyed by the infall of a companion on its host disc (Pfenniger 1991, Athanassoula 1996b). Furthermore bars in discs with a gaseous component are known to commit suicide by pushing gas towards their center, where a central concentration can form, destroy the orbits that support the bar and hence the bar itself. N-body simulations (e.g. Friedli & Benz 1993) show that this occurs on a time-scale of the order of a few bar rotations, i.e. that bars in discs containing gas should be relatively short lived. On the other hand observations show that strong bars are present in over a third of all discs, and weaker ones in yet another third, if not all the remaining discs. How can these two be reconciled? It is of course possible, although highly unlikely, that all bars have formed only a few rotations ago. It is also possible that we are witnessing a second generation of bars, although this solution may have its own problems, as will be shortly discussed below. Finally it is possible that the SPH simulations, which have clearly illustrated this third phase in the lifetime of a bar, give shorter time-scales for the gas inflow, and hence for the bar destruction, than is the case in real bars.
The fourth episode in the life of a bar, namely its possible regeneration, is even less well understood. The disc of the galaxy, as left after the bar destruction, is a hostile environment for a new bar to form. It is hot, since its stars have been heated by the previous bar, and it may have a large central concentration or bulge. How can a bar form in such circumstances? Two suggestions have been made so far. Sellwood & Moore (1999) suggested that freshly infalling gas may cool the disc sufficiently to allow the generation of a new bar, while Miwa & Noguchi (1998) use a very strong external forcing. Are the properties of these second generation bars different in any way from those of the first generation bars? The simulation of Miwa and Noguchi argues that bars driven by a very strong external forcing should rotate slower than spontaneous ones and end near their inner Lindblad resonance. Given the contradiction with observations of early type galaxies, some further such simulations should be carried out, partly to see how general this result is and how much it constrains second generation bars, but also in order to understand the orbital structure in such bars.
Bars are particularly interesting from a dynamical point of view. There is thus a large number of further questions to be examined. What is the fraction of chaotic orbits in self-consistent bars, and, more generally, the relative importance of the different types of orbit families? What are the differences between the properties of bars in early and late type galaxies and what are they due to? How do bars within bars form and evolve? These are only few of the most interesting questions in this context.
### 3.4. Galaxy interactions and mergings. Dynamical effects on galaxies in groups and clusters
Although a considerable effort has been put lately into this very interesting topic (e.g. Barnes 1998 and references therein), a lot still remains to be done. For example we need to understand better interactions and mergings which are more characteristic of higher redshifts, e.g. by using smaller and more gas-rich discs. We also need to know more about mergings of unequal sized galaxies (for some preliminary results see e.g. Barnes 1998 and Athanassoula 1996a,b), an area hitherto insufficiently explored, since a fully self-consistent treatment of such cases requires considerably more particles than equal mass interactions and mergers. Finally most simulations have so far considered the interaction and merging of two unbarred discs. Now that this is getting somewhat better understood, we should consider cases in which at least one of the partners is barred (Athanassoula, 1996a, 1996b, and in preparation), or an elliptical. Finally a lot can be learned from better modeling of nearby objects which still elude us, like M51 or the Cartwheel.
The fate of globular clusters (GCs) during mergers can reveal a wealth of information on the processes at work during the merging. Several observations have now shown that the colour distributions of the GCs of many elliptical galaxies are bi-modal or even multi-modal, arguing for the presence of more than one population of GCs around the host galaxy. Several possibilities about their formation have been discussed in the literature. Some could have been initially attached to one of the spirals that merged to make the elliptical, while others could have formed during the merger. Other GCs could have initially formed in dwarf galaxies and been appropriated by the main elliptical during a minor merger. Fully self-consistent high-resolution N-body simulations of mergings, both minor and major, in which the fate of the globular clusters is followed with the help of realistic rules, are necessary to understand the relative importance of the various origins proposed above, as well as the spatial and velocity distributions of the corresponding families of GCs. This study should be extended to galaxy clusters, where one also has to take into account that GCs can be tidally stripped from their parent galaxies and accreted by the brightest cluster member. The wealth of recent observations on this subject is well suited for comparisons with the results of N-body simulations.
More work is certainly necessary to understand the dynamical evolution of loose groups, and also under which (if any) conditions they can lead to compact groups. This would shed more light on the question whether observed compact groups are recently formed, or whether their longevity is due to a massive and not centrally concentrated common halo (Athanassoula, Makino & Bosma 1997).
A deeper understanding of the dynamical evolution of galaxies which are part of groups or clusters requires numerical simulations with a very high number of particles. Except for a couple of notable exceptions, so far progress has been achieved either by simplifying the description of the galaxies (e.g. considering only their halos), or by considering very small groups, or by assuming that the cluster can be described by a rigid potential. All three have led to some interesting results, although they have obvious shortcomings. Yet N-body simulations with a sufficient number of particles to describe a cluster of realistically modeled galaxies are, or will shortly be, within the reach of several computers and progress should be fast in this area.
Several observations of intra-group or intra-cluster stellar populations exist (e.g. Freeman, these proceedings). Here again fully self-consistent N-body simulations where each individual galaxy is realistically modeled should shed some light on the origin and evolution of debris. Some of my preliminary results on this subject show that these should indirectly set constraints on the properties of the common halo of the group or cluster.
### 3.5. Beyond pure stellar dynamics
In order to model a particular phenomenon or effect it is sometimes necessary to consider not only stars but also gas. The first question that arises in such cases is how this gas should be modeled. Using hydrodynamic schemes based on finite differences? Sticky particles? SPH? Before embarking on any extensive use of gas in N-body simulations it seems necessary to compare the results of the various methods of modeling gas in cases where observations “tell us the answer”. Thus in the case of the gas flow in a rigid bar potential there is a very good agreement between SPH and FS2 results (e.g. Patsis & Athanassoula, in prep.) and a relatively good one between FS2 and sticky particles (Athanassoula & Guivarch, in prep.), as far as the response morphology is concerned. Similar work should be done to compare the rate of gas inflow. The time-scale of the gas inflow depends on the properties of the bar (mass, axial ratio, pattern speed, etc.), but also on viscosity, and thus on the code, so that it is necessary to know how code dependent the various estimates may be. It is thus important to compare inflow rates obtained with FS2, SPH, and other hydro approaches, as well as to include star formation. Finally, including a multi-phase interstellar medium might still have some surprises in store for us.
Star formation is on yet more slippery ground. Various “recipes” have been used so far, based e.g. on Schmidt’s law, or on Toomre’s Q parameter. One also has to take into account the feedback of the stars on the gas, including heating by stellar winds and by supernovae. It is clear that a tight collaboration with people working on star formation would be most fruitful. Nevertheless the problem is rather complicated and real progress may be expected to be slow, since descriptions of numerous processes on a variety of spatial scales need to be combined in a unified framework.
##### Acknowledgments.
I would like to thank A. Bosma and J. Makino for motivating discussions.
## References
Abraham, R. G., Merrifield, M. R., Ellis, R. S., Tanvir, N. R., & Brinchmann J. 1999, MNRAS, 308, 569
Athanassoula, E., 1996a, in Barred Galaxies, R. Buta, D. A. Crocker & B. G. Elmegreen, Astron. Soc. Pac. Conference Series, 91, p. 309
Athanassoula, E., 1996b, in Barred Galaxies and Circumnuclear Activity, Aa. Sandqvist & P. O. Lindblad, Lecture Notes in Physics, 474, Springer Verlag, p. 59
Athanassoula, E., Bosma, A., & Papaioannou, S. 1987, A&A, 179, 23
Athanassoula, E., Makino, J., & Bosma, A. 1997, MNRAS, 286, 825
Barnes, J. E. 1998, in Interactions and Induced star formation: Saas-Fee Advanced Course 26, D. Friedli, L. Martinet, D. Pfenniger, Springer Verlag, Berlin, p. 275
Bosma, A. 1981, AJ, 86, 1825
Bottema, R. 1993, A&A, 275, 16
Courteau, S., & Rix, H.-W. 1999, ApJ, 513, 561
Debattista, V. P., & Sellwood, J. A. 1998, ApJ, 493, 5L
Diaferio, A., Geller M. J., & Ramella, M. 1994, AJ, 107, 868
Friedli, D. & Benz, W. 1993, A&A, 268, 65
Gerin, M., Combes, F., & Athanassoula, E. 1990, A&A, 230, 37
Hamada, T., Fukushige, T., Kawai, A., & Makino, J. 1999, astro-ph/9906419
Kawai, A., Fukushige, T., Makino, J., & Taiji, M. 1999, astro-ph/9909116
Makino, J., & Aarseth, S. J. 1992, PASJ, 44, 141
Makino, J., & Taiji, M. 1998, Scientific Simulations with special-purpose computers, John Wiley, Chichester
Miwa, T., & Noguchi, M. 1998 ApJ, 499, 149
Navarro, J. F. 1998, astro-ph/9807084
Noguchi, M. 1987, MNRAS, 228, 635
Ostriker, J. P., & Peebles, P. J. E. 1973, ApJ, 186, 467
Pfenniger, D. 1991, in Dynamics of Disc Galaxies, B. Sundelius ed., Göteborg press, Göteborg, p. 191
Sellwood, J. A., & Moore, E. M. 1999, ApJ, 510, 125
Toomre, A. 1964, ApJ, 139, 1217
Toomre, A. 1981, in The Structure and Evolution of Normal Galaxies, Fall S. M. & Lynden-Bell, Cambridge Univ. Press, Cambridge, p. 111
# Hyperscaling in the Broken Symmetry Phase of Dyson’s Hierarchical Model
## I Introduction
Spontaneous symmetry breaking plays a fundamental role in our understanding of the mass generation mechanism of the elementary particles. One of the simplest field theory models in which it is observed is scalar field theory. Despite its simplicity, there exists no known analytical method which allows one to elucidate quantitatively all the dynamical questions which can be asked about scalar field theory in various dimensions. From a sample of the recent literature on scalar field theory, one can see that the Monte Carlo method is a popular tool to settle questions such as the existence of non-perturbative states, large rescaling of the scalar condensate, or Goldstone mode effects.
The Monte Carlo method allows us to approach quantum field theory problems for which there are no known reliable series expansions. The main limitations of the method are the size of the lattice which can be reached and the fact that the errors usually decrease only like $`t^{-1/2}`$, where $`t`$ is the CPU time used for the calculation. If, in the next decades, a better knowledge of the fundamental laws of physics has to rely more and more on precision tests, one should complement Monte Carlo methods with new computational tools which emphasize numerical accuracy.
This motivated us to use “hierarchical approximations” as a starting point, since they allow an easier use of the renormalization group (RG) transformation. Examples of hierarchical approximations are Wilson’s approximate recursion formula or the hierarchical model. In the symmetric phase, we have found that polynomial truncations of the Fourier transform of the local measure provide spectacular numerical accuracy, namely, various types of errors decrease like $`\mathrm{e}^{-At^u}`$, for some positive constant $`A`$ of order 1 when $`t`$ is measured in minutes of CPU time and $`0.5\leq u\leq 1`$. In particular, $`t`$ only grows as the logarithm of the number of sites $`L^D`$ and the finite-size effects decay like $`L^{-2}`$ when $`L`$ (the linear size) becomes larger than the correlation length. This method of polynomial truncations was used to calculate the critical exponent $`\gamma `$ in the symmetric phase for the hierarchical model with estimated errors of the order of $`10^{-12}`$. The result was confirmed by calculating the largest eigenvalue of the linearized RG about the accurately determined non-trivial fixed point.
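The computational heart of the polynomial-truncation method is that, on a polynomial in k², the Gaussian convolution of the hierarchical RG step reduces to applying the operator exp(u d²/dk²), whose series terminates on any polynomial. The sketch below iterates a recursion of the schematic form R_{n+1}(k) ∝ exp(−(β/2) d²/dk²) [R_n(√c k/2)]²; the coefficient conventions, the placement of the truncation and the normalization are our own simplifications for illustration, not the exact algorithm of the papers cited above:

```python
from math import factorial

def d2(a):
    """Second k-derivative acting on p(k) = sum_m a[m] * k**(2*m):
    k**(2m) -> 2m*(2m-1) * k**(2m-2), i.e. a shift in coefficient space."""
    return [(2 * m + 2) * (2 * m + 1) * a[m + 1] for m in range(len(a) - 1)]

def heat(a, u):
    """Apply exp(u * d^2/dk^2) to the polynomial; the series terminates
    because each derivative lowers the degree by 2."""
    out = list(a)
    term = list(a)
    j = 0
    while len(term) > 1:
        term = d2(term)
        j += 1
        for m, c in enumerate(term):
            out[m] += (u ** j / factorial(j)) * c
    return out

def rg_step(a, beta, c, lmax):
    """One truncated RG step: rescale the argument by sqrt(c)/2, square the
    polynomial, truncate at degree 2*lmax in k, smear, and normalize R(0)=1."""
    s = c / 4.0                         # k**(2m) -> s**m * k**(2m)
    b = [a[m] * s ** m for m in range(len(a))]
    sq = [0.0] * (2 * len(b) - 1)
    for i, bi in enumerate(b):
        for j, bj in enumerate(b):
            sq[i + j] += bi * bj
    r = heat(sq[:lmax + 1], -0.5 * beta)
    return [x / r[0] for x in r]
```

Applied to k² the heat operator gives k² + 2u, which provides a quick correctness check of the core step.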
Thanks to the polynomial approximation, very accurate information can be encoded in a very small set of numbers. In the symmetric phase, this approximation is numerically stable when the number of sites becomes arbitrarily large and the high-temperature fixed point is reached. On the other hand, in the broken symmetry phase, numerical instabilities appear after a certain number of iterations following the bifurcation, and it is not possible to completely get rid of the finite size effects with the straightforward procedure used in the symmetric phase. This issue was briefly discussed in section III.E of Ref. .
In this paper, we analyze the numerical instabilities of the low-temperature phase in a quantitative way. We show that in spite of these numerical instabilities, it is possible to take advantage of the iterations for which the low-temperature scaling is observed to obtain reliable extrapolations of the magnetization, first to infinite volume at non-zero external field and then to zero external field. We then present a more practical method of extrapolation which we apply to calculate the connected $`q`$-point functions at zero momentum $`G_q^c(0)`$ for $`q=1`$, 2 and 3. Finally, we use these calculations to extract the leading critical exponents and we check the hyperscaling relations among these exponents.
The paper is organized as follows. In section II we show how to construct recursively the generating function for the $`G_q^c(0)`$ when a magnetic field is introduced. In section III, we review the scaling and hyperscaling relations among the critical exponents and explain how they should be understood in the case of the hierarchical model. Hyperscaling usually refers to scaling relations involving the dimension explicitly. Dyson’s hierarchical model has no intrinsic dimensionality but rather a continuous free parameter usually denoted by $`c`$, introduced in section II, which controls the decay of the interactions among blocks of increasing sizes. This parameter can be tuned in order to insure that a massless field has scaling properties that can be compared with those of nearest neighbor models in $`D`$ dimensions. In the past we have chosen the parametrization $`c=2^{1-2/D}`$; however, this is not the only possible one. In section III.C, we show that a more general parametrization of $`c`$ (which includes $`\eta `$), combined with linear arguments, yields predictions that are identical to the conventional predictions obtained from scaling and hyperscaling. We want to emphasize that the main prediction of the linear theory – that can be interpreted as a hyperscaling relation – can be expressed in terms of $`c`$ only and is given in general by Eq. (32). For $`c=2^{1/3}`$, this general equation together with the accurate result of Ref. implies
$$\gamma _q=1.29914073\mathrm{\ldots }\times (5q-6)/4,$$
(1)
where $`\gamma _q`$ is the leading exponent corresponding to the connected $`q`$-point function.
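For later reference, Eq. (1) can be evaluated at the three values of $`q`$ computed in this paper; a minimal sketch (the function name is ours):

```python
# Hyperscaling prediction of Eq. (1): gamma_q = gamma*(5q - 6)/4 for c = 2**(1/3)
gamma = 1.29914073  # value of the susceptibility exponent quoted above

def gamma_q(q):
    return gamma * (5 * q - 6) / 4.0

# q = 1 gives -beta (magnetization), q = 2 gives gamma' (susceptibility)
predictions = {q: gamma_q(q) for q in (1, 2, 3)}
```

These are the numbers against which the fits of section VI are compared.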
We then proceed to verify the predictions of Eq. (1) by doing actual calculations at various values of the inverse temperature $`\beta `$ near criticality. This is a rather challenging task because as one moves away from the unstable fixed point, on the low-temperature side, rapid oscillations appear in the Fourier transform of the local measure and the polynomial approximation ultimately breaks down. This is the cause of the numerical instabilities mentioned above. As a consequence, a relatively small number of iterations can be performed with a reasonable accuracy in the low-temperature phase. This is explained in section IV where we also show that the number of numerically accurate iterations in the low-temperature phase scales like the logarithm of the degree of the polynomial. For the calculations discussed later in the paper, we have used a polynomial truncation of order 200. With this choice, the number of iterations where an approximate low-temperature scaling is observed is slightly larger than 10. Since for Dyson’s hierarchical model the number of sites is halved after each iteration, this means, roughly speaking, that in correlation length units we can only reach volumes of about $`2^{10}\approx 10^3`$. If we use the $`D=3`$ interpretation of $`c=2^{1/3}`$, this means that the linear sizes, denoted by $`L`$, which can be reached safely are at most 10 times the correlation length.
Despite this limitation, the magnetization reaches its infinite volume limit with clearly identifiable $`L^{-2}`$ corrections provided that the external magnetic field is not too large (otherwise the polynomial approximation breaks down) or not too small (otherwise a linear analysis applies and there is no spontaneous magnetization). The exact intermediate range of the magnetic field for which the connected $`q`$-point functions reach an infinite volume limit with the characteristic $`L^{-2}`$ corrections is discussed in section V. In this intermediate range, two methods of extrapolation can be used. The first is the standard one, which consists in extrapolating to infinite volume at fixed external field and then to zero external field. On the other hand, within the intermediate range of magnetic field mentioned above, the magnetization at finite volume can be fitted very accurately with a straight line which provides an extrapolation to zero magnetic field. This extrapolation has no physical meaning but it also reaches an infinite volume limit with $`L^{-2}`$ corrections when the volume increases. This limit coincides to an accuracy of 6 significant figures with the limit obtained with the first method; in other words, the two agree within the estimated errors of the calculation. The second procedure is much more practical because it does not require any overlap among the acceptable regions of magnetic field when the volume increases. The second method will be used to calculate the higher point functions.
Proceeding this way, we calculate the connected $`q`$-point functions at zero momentum $`G_q^c(0)`$, for $`q=1`$, 2 and 3 and for various values of the inverse temperature $`\beta `$. The results are reported in section VI. The critical exponents are then estimated by using a method discussed in Ref. where we selected a region of $`\beta `$ for which the combined effects of the errors due to subleading corrections and the numerical round-off could be minimized. Using linear fits within this limited range of $`\beta `$, we found exponents in agreement with the prediction of hyperscaling given in Eq. (32) with an accuracy of $`10^{-5}`$ for the magnetization, $`4\times 10^{-5}`$ for the susceptibility and $`5\times 10^{-3}`$ for the 3-point function. As far as the first two results are concerned, the accuracy compares well with the accuracy that can usually be reached with a series analysis or the Monte Carlo method. Nevertheless, there is room for improvement: one should be able to “factor out” the rapid oscillations in the Fourier transform of the local measure and treat them exactly. This is discussed briefly in the conclusions.
## II Introduction of a Magnetic Field
In this section, we describe Dyson’s Hierarchical Model coupled to a constant magnetic field. All calculations are performed at large but finite volume. The total number of sites is denoted $`2^{n_{max}}`$. We label the sites with $`n_{max}`$ indices $`x_{n_{max}},\dots ,x_1`$, each index being 0 or 1. In order to visualize this notation, one can divide the $`2^{n_{max}}`$ sites into two blocks, each containing $`2^{n_{max}-1}`$ sites. If $`x_{n_{max}}=0`$, the site is in the first box, if $`x_{n_{max}}=1`$, the site is in the second box. Repeating this procedure $`n_{max}`$ times (for the two boxes, their respective two sub-boxes, etc.), we obtain an unambiguous labeling for each of the sites.
The non-local part of the action (i.e. the “kinetic term”) of Dyson’s Hierarchical model reads
$$S_{kin}=-\frac{\beta }{2}\sum _{n=1}^{n_{max}}(\frac{c}{4})^n\sum _{x_{n_{max}},\dots ,x_{n+1}}(\sum _{x_n,\dots ,x_1}\varphi _{(x_{n_{max}},\dots ,x_1)})^2.$$
(2)
The index $`n`$, referred to as the ‘level of interaction’ hereafter, corresponds to the interaction of the total field in blocks of size $`2^n`$. The constant $`c`$ is a free parameter which describes the way the non-local interactions decay with the size of the blocks. We often use the parametrization
$$c=2^{1-2/D},$$
(3)
in order to approximate $`D`$-dimensional models. This question will be discussed later (see Eq. (36) for a generalization of Eq. (3)).
A constant external source $`H`$, called “the magnetic field” later, is coupled to the total field. This can be represented by an additional term in the action
$$S_H=-H\sum _{x_{n_{max}},\dots ,x_1}\varphi _{(x_{n_{max}},\dots ,x_1)}.$$
(4)
However due to the linearity of the coupling, $`\mathrm{e}^{-S_H}`$ factorizes into local pieces and this interaction can be absorbed in the local measure. The field $`\varphi _{(x_{n_{max}},\dots ,x_1)}`$ is integrated over with a local measure
$$W_0(\varphi ,H)\propto W_0(\varphi )\mathrm{e}^{H\varphi },$$
(5)
where $`W_0(\varphi )`$ is the local measure at zero magnetic field. For simplicity, we use the convention that if the magnetic field does not appear explicitly in an expression (e.g., $`W_0(\varphi )`$), the quantity should be understood at zero magnetic field. The constant of proportionality refers to the fact that we require both $`W_0(\varphi ,H)`$ and $`W_0(\varphi )`$ to be normalized as probability distributions. Since we are interested in universal properties, we will use a single local measure, namely the Ising measure, $`W_0(\varphi )=\delta (\varphi ^2-1)`$. Numerical experiments in Ref. show that the universal properties are very robust under changes in the local measure.
At $`H=0`$, the recursion relation corresponding to the integration of the fields in boxes of size 2, keeping the sum of the 2 fields in each box constant reads
$$W_{n+1}(\varphi )=\frac{C_{n+1}}{2}\mathrm{e}^{(\beta /2)(c/4)^{n+1}\varphi ^2}\int d\varphi ^{\prime }W_n(\frac{\varphi -\varphi ^{\prime }}{2})W_n(\frac{\varphi +\varphi ^{\prime }}{2}),$$
(6)
where $`C_{n+1}`$ is a normalization factor which will be fixed in order to obtain a probability distribution. Introducing the Fourier representation as in Refs.
$$W_n(\varphi )=\int \frac{dk}{2\pi }e^{ik\varphi }\widehat{W_n}(k),$$
(7)
and a rescaling of the “source” $`k`$ by a factor $`(c/4)^{1/2}`$ at each iteration ,
$$R_n(k)=\widehat{W_n}(k(c/4)^{n/2}),$$
(8)
the recursion relation becomes
$$R_{n+1}(k)=C_{n+1}\mathrm{exp}(-\frac{1}{2}\beta \frac{\partial ^2}{\partial k^2})(R_n(k(c/4)^{1/2}))^2.$$
(9)
We fix the normalization constant $`C_n`$ in such a way that $`R_n(0)=1`$. $`R_n(k)`$ then has a direct probabilistic interpretation. If we call $`M_n`$ the total field $`\sum _x\varphi _x`$ inside blocks of size $`2^n`$ and $`\langle \dots \rangle _n`$ the average calculated without taking into account the interactions of level strictly larger than $`n`$, we can write (at $`H=0`$)
$$R_n(k)=\sum _{q=0}^{\infty }\frac{(-ik)^{2q}}{(2q)!}\langle (M_n)^{2q}\rangle _n(c/4)^{qn}.$$
(10)
The introduction of the magnetic field is a very simple operation. The basic equation reads
$$W_n(\varphi ,H)\propto W_n(\varphi )\mathrm{e}^{H\varphi }.$$
(11)
This can be seen in many different ways. One of them is to use Eq. (6) and realize that the $`\varphi ^{}`$ drops out of the magnetic interactions. Another one consists in realizing that due to the linearity, one can split Eq. (4) into sum over boxes of any desired size. In Fourier transform, this implies that
$$\widehat{W}_n(k,H)\propto \widehat{W}_n(k+iH).$$
(12)
The normalization factor is fixed by the condition $`\widehat{W}_n(0,H)=1`$ which guarantees that $`W_n(\varphi ,H)`$ is a probability distribution and that $`\widehat{W}_n(k,H)`$ generates the average values of the positive powers of the total field. More explicitly,
$`\widehat{W}_n(k,H)`$ $`=`$ $`{\displaystyle \frac{\widehat{W}_n(k+iH)}{\widehat{W}_n(iH)}}`$ (13)
$`=`$ $`{\displaystyle \sum _{q=0}^{\infty }}{\displaystyle \frac{(-ik)^q}{q!}}\langle (M_n)^q\rangle _{n,H}.`$ (14)
From a conceptual point of view, as well as from a practical one, it is easier to deal with the rescaled quantity $`R_n(k)`$. Near the fixed point of Eq. (9), we have the approximate behavior
$$\langle (M_n)^{2q}\rangle _n\propto (4/c)^{qn}.$$
(15)
In terms of the rescaled function, we can rewrite Eq. (14) as
$$\frac{R_n(k+iH(4/c)^{n/2})}{R_n(iH(4/c)^{n/2})}=\sum _{q=0}^{\infty }\frac{(-ik)^q}{q!}\langle (M_n)^q\rangle _{n,H}(c/4)^{qn/2}.$$
(16)
The connected Green’s functions can be obtained by taking the logarithm of this generating function.
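The tilt of Eq. (12) is easy to carry out on a polynomial truncation of a characteristic function. The following minimal sketch (the helper name is ours; it assumes the Fourier convention $`\widehat{W}(k)=\langle \mathrm{e}^{-ik\varphi }\rangle `$, for which the tilt is $`k\to k+iH`$ and the $`k^q`$ coefficient carries a factor $`(-i)^q`$) tilts the Ising measure and recovers $`\langle M\rangle =\mathrm{tanh}(H)`$:

```python
import math

def tilted_moments(w, H, qmax):
    """Shift k -> k + iH (Eq. (12)) on a polynomial characteristic function
    W(k) = sum_j w[j] k^j, normalize so the value at k = 0 is 1, and read
    off the moments <M^q> of the tilted measure."""
    shifted = [0j] * len(w)
    for l, cl in enumerate(w):
        for j in range(l + 1):
            shifted[j] += cl * math.comb(l, j) * (1j * H) ** (l - j)
    b = [s / shifted[0] for s in shifted]
    # assumption: with W(k) = <exp(-ik phi)>, the k^q coefficient is (-i)^q <M^q>/q!
    return [(math.factorial(q) * b[q] / (-1j) ** q).real for q in range(qmax + 1)]

# Ising measure phi = +-1: W(k) = cos(k), truncated at order k^60
w = [((-1.0) ** (j // 2)) / math.factorial(j) if j % 2 == 0 else 0.0
     for j in range(61)]
mom = tilted_moments(w, H=0.3, qmax=2)
# exact answers for this two-point measure: <M> = tanh(0.3), <M^2> = 1
```

The connected parts then follow by taking the logarithm of the generating function, as stated above.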
## III About Hyperscaling
### A General Expectations
The main numerical results obtained in this paper are the calculations of the critical exponents corresponding to the singularity of the connected $`q`$-point functions for $`q=1,2`$ and 3. For definiteness we use the notation
$$G_q^c(0)\propto (\beta -\beta _c)^{-\gamma _q},$$
(17)
for the leading singularities in the low-temperature phase. We assume that the reader is familiar with the commonly used notations for the critical exponents. For $`q=1`$, we have $`\gamma _1=-\beta `$, which should not be confused with the inverse temperature. After this subsection, we keep using the notation $`\beta `$ for the inverse temperature. For $`q=2`$, we have $`\gamma _2=\gamma ^{\prime }`$. If one assumes that the scaled magnetization $`M/(T-T_c)^\beta `$ is a function of the scaled magnetic field $`H/(T-T_c)^\mathrm{\Delta }`$ only, one obtains that
$$\gamma _{q+1}-\gamma _q=\mathrm{\Delta },$$
(18)
for any $`q`$. The exponent $`\mathrm{\Delta }`$ is often called the gap exponent and should not be confused with the exponent associated with the subleading corrections to the scaling laws.
In general, there exist 7 relations among the 10 critical exponents $`\alpha ,\alpha ^{\prime },\beta ,\gamma ,\gamma ^{\prime }`$, $`\mathrm{\Delta },\delta ,\nu ,\nu ^{\prime },`$ and $`\eta `$, in which the dimension of the system does not enter explicitly. These are the so-called scaling relations which stimulated the development of the RG method. Their explicit form is
$`\alpha `$ $`=`$ $`\alpha ^{\prime }`$ (19)
$`\gamma `$ $`=`$ $`\gamma ^{\prime }`$ (20)
$`\nu `$ $`=`$ $`\nu ^{\prime }`$ (21)
$`\alpha `$ $`+`$ $`2\beta +\gamma =2`$ (22)
$`\mathrm{\Delta }`$ $`=`$ $`\beta +\gamma `$ (23)
$`\mathrm{\Delta }`$ $`=`$ $`\beta \delta `$ (24)
$`\gamma `$ $`=`$ $`(2-\eta )\nu .`$ (25)
Eq. (23) can be seen as an obvious version of Eq. (18) for $`q=1`$, but it also has a non-trivial content, summarizing Eq. (18) for all the higher $`q`$.
In addition there exists one relation where the dimension enters explicitly, for instance:
$$D\nu =2-\alpha .$$
(26)
Other relations may be obtained by combining Eq. (26) with the scaling relations. Proceeding this way, we obtain a relation of relevance for the rest of the discussion, namely
$$\beta =\frac{(D-2+\eta )}{2(2-\eta )}\gamma .$$
(27)
The relations involving the dimension explicitly are usually called hyperscaling relations . A mechanism leading to a possible violation of hyperscaling (dangerous irrelevant variables) is explained in appendix D of Ref. . If the 8 relations hold, we are left with only two independent exponents, for instance $`\gamma `$ and $`\eta `$.
Combining the hyperscaling relation (27) and the scaling relations (20) and (23), we obtain
$`\gamma _q`$ $`=`$ $`\gamma +(q-2)\mathrm{\Delta }`$ (28)
$`=`$ $`\gamma [-2D+q(D+2-\eta )]/(4-2\eta ).`$ (29)
### B The Hierarchical Model (HT case)
In the case of the hierarchical model, the exponents $`\gamma _q`$ of the high-temperature (HT) phase (so for $`q`$ even) can be estimated by using the linearized RG transformation. Since this subsection is the only part of this article where we will consider the high-temperature phase, we have not found it useful to introduce special notations for $`\gamma _q`$ in this phase. When $`\beta _c-\beta `$ is small, the linearized RG transformation can be used for approximately $`n^{*}`$ iterations, with $`n^{*}`$ defined by the relation
$$|\beta -\beta _c|\lambda ^{n^{*}}=1,$$
(30)
where $`\lambda `$ is the largest eigenvalue of the linearized RG transformation. After the transient behavior has died off and until $`n`$ reaches the value $`n^{*}`$, we are near the fixed point and $`R_n(k)`$ does not change appreciably. Remembering that the field is rescaled by a factor $`(c/4)^{\frac{1}{2}}`$ at each iteration (see Eq. (9)), we obtain the order of magnitude estimate for $`G_q^c(0)`$ after $`n^{*}`$ iterations:
$$G_q^c(0)\propto 2^{-n^{*}}(4/c)^{qn^{*}/2}.$$
(31)
For $`n`$ larger than $`n^{*}`$, the non-linear effects become important. The actual value of $`G_q^c(0)`$ may still change by as much as 100 percent, however the order of magnitude estimate of Eq. (31) remains valid. This transition has been studied in detail in Ref. in a simplified version of the model. Eliminating $`n^{*}`$ in terms of $`\beta _c-\beta `$, we obtain the value of the leading exponents
$$\gamma _q=\gamma [(q/2)\mathrm{ln}(4/c)-\mathrm{ln}2]/\mathrm{ln}(2/c),$$
(32)
with
$$\gamma =\mathrm{ln}(2/c)/\mathrm{ln}\lambda .$$
(33)
This relationship has been successfully tested in the symmetric phase for $`q=4`$ and $`4/c=2^{5/3}`$.
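It is easy to check that, for $`4/c=2^{5/3}`$, Eqs. (32) and (33) reduce to the $`(5q-6)/4`$ pattern of Eq. (1) independently of the value of $`\lambda `$; a small numerical sketch (the trial value of $`\lambda `$ is arbitrary):

```python
import math

c = 2.0 ** (1.0 / 3.0)
lam = 1.427  # arbitrary trial eigenvalue; the ratio gamma_q/gamma is lambda-free
gamma = math.log(2.0 / c) / math.log(lam)   # Eq. (33)

for q in range(1, 6):
    eq32 = gamma * ((q / 2.0) * math.log(4.0 / c) - math.log(2.0)) / math.log(2.0 / c)
    eq1 = gamma * (5 * q - 6) / 4.0         # pattern of Eq. (1)
    assert abs(eq32 - eq1) < 1e-12
```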
### C About Dimensionality
We will now show that Eq. (32) is compatible with the general relation of Eq. (28) provided that we relate $`c`$ to a parameter $`D`$ which can be interpreted as the dimension of a nearest neighbor model approximated by the hierarchical model. We introduce a linear dimension $`L`$ such that the volume $`L^D`$ is proportional to the total number of sites $`2^n`$. From
$$L\propto 2^{n/D},$$
(34)
we can in general relate $`c`$ and $`D`$ by assuming a scaling of the total field
$$\langle M_n^{2q}\rangle _n\propto L^{(D+2-\eta )q}.$$
(35)
From comparison with Eq. (15) this would imply that
$$(4/c)=2^{(D+2-\eta )/D}.$$
(36)
Substituting in Eq. (32), we reobtain the general Eq. (28).
Since in the infinite volume limit the kinetic term is invariant under an RG transformation, we have chosen in the past to use Eq. (36) with $`\eta =0`$. This is our conventional definition of $`c`$ given in Eq. (3). This is the same as saying that when we are near the fixed point, the total field in a box containing $`2^n`$ sites scales with the number of sites in the same way as a massless Gaussian field. This obviously implies that in the vicinity of a Gaussian fixed point the total field scales exactly like a massless Gaussian field in $`D`$ dimensions. On the other hand, an interacting massless field will also scale like a free one, which is not a bad approximation in $`D=3`$. This is an unavoidable feature which will need to be corrected when one tries to improve the hierarchical approximation.
We emphasize that this interpretation has no bearing on the validity of the calculations performed. What matters in our calculation is the value of $`4/c`$. In the following, we have used $`4/c=2^{5/3}`$, which can be interpreted either as $`D=3`$ and $`\eta =0`$ or, for instance, as $`D=2.97`$ and $`\eta =0.02`$.
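The two interpretations quoted above are degenerate because only $`4/c`$ enters the calculation; a one-line check (assuming the relation $`4/c=2^{(D+2-\eta )/D}`$ of Eq. (36)):

```python
# both (D, eta) assignments reproduce 4/c = 2**(5/3)
target = 2.0 ** (5.0 / 3.0)
for D, eta in [(3.0, 0.0), (2.97, 0.02)]:
    assert abs(2.0 ** ((D + 2.0 - eta) / D) - target) < 1e-12
```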
### D The Low-Temperature Case
The extension of the argument for odd and even values of $`q`$ in the broken symmetry phase is somewhat non-trivial. Since we need to take the infinite volume limit before taking the limit of a zero magnetic field, we need some understanding of the non-linear behavior. Some aspects of the non-linear behavior are discussed in section V. In the following, we will show numerically that Eq. (32) holds to good approximation in the broken symmetry phase for $`4/c=2^{5/3}`$. With this choice of $`4/c`$ and the corresponding value of $`\gamma `$ calculated in Ref. , Eq. (32) implies Eq. (1) given in the introduction. The verification of this relation for $`q`$=1, 2 and 3 is the main numerical result discussed in the following sections.
## IV Polynomial Truncations
In the following we will exclusively consider the case of an Ising measure
$$R_0(k)=\mathrm{cos}(k).$$
(37)
This restriction is motivated by accurate checks of universality based on calculations with other measures. Given that $`R_0`$ can be expanded into a finite number of eigenfunctions of $`\mathrm{exp}(-\frac{1}{2}\beta \frac{\partial ^2}{\partial k^2})`$, one can in principle obtain exact expressions for the next $`R_n(k)`$, for instance
$$R_1(k)=\frac{1+\mathrm{e}^{\beta c/2}\mathrm{cos}(k\sqrt{c})}{1+\mathrm{e}^{\beta c/2}}.$$
(38)
One can in principle repeat this procedure. At each iteration, one obtains a superposition of cosines of various frequencies. For a given numerical value of $`c`$, $`n`$ iterations of this exact procedure require storing $`2^{n-1}+1`$ numerical coefficients. The memory size thus scales like $`2^n`$, while the CPU time scales like $`4^n`$. If $`\beta `$ differs from $`\beta _c`$ by $`10^{-10}`$, one needs at least 80 iterations in order to eliminate the finite-size effects. Such a calculation using the exact method is ruled out by practical considerations.
We will thus try to extend the approximate methods that we have used successfully in the symmetric phase , where the function $`R_n(k)`$ was calculated using finite dimensional approximations of degree $`l_{max}`$:
$$R_n(k)=1+a_{n,1}k^2+a_{n,2}k^4+\mathrm{}+a_{n,l_{max}}k^{2l_{max}}.$$
(39)
After each iteration, non-zero coefficients of higher order ($`a_{n+1,l_{max}+1}`$, etc.) are obtained, but not taken into account as part of the approximation in the next iteration. The recursion formula for the $`a_{n,m}`$ reads:
$$a_{n+1,m}=\frac{\sum _{l=m}^{l_{max}}(\sum _{p+q=l}a_{n,p}a_{n,q})[(2l)!/((l-m)!(2m)!)](c/4)^l[-(1/2)\beta ]^{l-m}}{\sum _{l=0}^{l_{max}}(\sum _{p+q=l}a_{n,p}a_{n,q})[(2l)!/l!](c/4)^l[-(1/2)\beta ]^l}.$$
(40)
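As a concrete check, the truncated map can be iterated once starting from the Ising measure of Eq. (37) and compared with the exact one-step result of Eq. (38). The sketch below is ours and assumes the sign $`-\beta /2`$ in the exponential of the recursion:

```python
import math

def rg_step(a, beta, c):
    """One iteration of the truncated recursion (Eq. (40)) acting on the
    coefficients of R_n(k) = 1 + a[1] k^2 + ... + a[lmax] k^(2*lmax)."""
    lmax = len(a) - 1
    # u[l]: coefficient of k^(2l) in (R_n(k*(c/4)**0.5))**2
    u = [(c / 4.0) ** l * sum(a[p] * a[l - p] for p in range(l + 1))
         for l in range(lmax + 1)]
    den = sum(u[l] * (math.factorial(2 * l) / math.factorial(l))
              * (-beta / 2.0) ** l for l in range(lmax + 1))
    return [sum(u[l] * math.factorial(2 * l)
                / (math.factorial(l - m) * math.factorial(2 * m))
                * (-beta / 2.0) ** (l - m)
                for l in range(m, lmax + 1)) / den
            for m in range(lmax + 1)]

beta, c, lmax = 1.0, 2.0 ** (1.0 / 3.0), 30
a0 = [(-1.0) ** m / math.factorial(2 * m) for m in range(lmax + 1)]  # cos(k)
a1 = rg_step(a0, beta, c)
# Eq. (38) gives the exact k^2 coefficient after one step:
exact = -(c / 2.0) * math.exp(beta * c / 2.0) / (1.0 + math.exp(beta * c / 2.0))
```

At this truncation order the agreement is at machine precision, since the dropped terms are suppressed by large factorials.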
The method to identify $`\beta _c`$ has been discussed in detail in Ref. and consists in finding the bifurcation in the ratio $`a_{n+1,1}/a_{n,1}`$. In the following, we simply call this quantity “the ratio”. If $`\beta <\beta _c`$, the ratio drops to $`c/2`$ for $`n`$ large enough. In this case, the numerical stability of the infinite volume limit is excellent and allows extremely accurate determinations of the renormalized quantities. If $`\beta >\beta _c`$, the ratio “jumps” suddenly a few iterations after $`n^{*}`$ is reached and stabilizes near the value $`c`$, corresponding to the low-temperature scaling. This is seen from Eq. (14). Since $`\langle M_n^2\rangle _n`$ grows like $`4^n`$, as one expects in the low-temperature phase, and remembering that there is a rescaling of $`c/4`$ at each iteration, the coefficient of $`k^2`$ grows like $`c^n`$. This implies a ratio equal to $`c`$. In our calculation, $`c=1.25992\mathrm{\ldots }`$. Unfortunately, the number of iterations where the low-temperature scaling is observed is rather small. Subsequently, the ratio drops back to 1. As we shall explain at length, this is an effect of the polynomial truncation. The length of the “shoulder” where the low-temperature scaling is observed increases if we increase $`l_{max}`$. This situation is illustrated in Figure 1.
No matter how large $`l_{max}`$ is, for $`n`$ large enough, the ratio eventually drops back to 1. This reflects the existence of a stable fixed point for the truncated recursion formula. The values $`a_l^{*}`$ of $`a_l`$ at this fixed point for various $`l_{max}`$ are shown in Figure 2.
We see clear evidence for a dependence of the form
$$a_l^{*}\propto (l_{max})^{-l}.$$
(41)
This means that the stable fixed point is an effect of the polynomial truncations and has no counterpart in the original model.
It is possible to evaluate the value of $`n`$ for which the low-temperature shoulder ends. A detailed study shows that for $`n`$ large enough, we have in good approximation
$$R_n(k)\simeq \mathrm{cos}(\mathcal{M}c^{n/2}k),$$
(42)
where $``$ is the magnetization density in the infinite volume limit. If we assume that $`R_n(k)`$ is exactly as in Eq. (42), then we can use the basic recursion formula (9) in order to obtain the corresponding $`R_{n+1}(k)`$. Using $`2\times \mathrm{cos}^2(x)=1+\mathrm{cos}(2x)`$, we can reexpress $`(R_n(k(c/4)^{1/2}))^2`$ as a superposition of eigenfunctions of the one-dimensional Laplacian. When the exponential of the Laplacian in Eq. (9) acts on the non-constant modes it becomes exp($`\beta ^2c^{n+1}/2`$). In the polynomial truncation of the recursion relation, this exponential is replaced by $`l_{max}`$ terms of its Taylor expansion. This approximation is valid if the argument of the exponential is much smaller than $`l_{max}`$. Consequently, we obtain that the polynomial truncation certainly breaks down if $`n`$ is larger than $`n_b`$ such that
$$n_b+1\simeq [\mathrm{ln}(2/\beta )-\mathrm{ln}(\mathcal{M}^2)+\mathrm{ln}(l_{max})]/\mathrm{ln}c.$$
(43)
If the estimate of Eq. (31) extends to the low-temperature phase, one realizes that the second term of (43) is roughly $`n^{*}`$ while the third term stands for the length of the peak and the shoulder. Plugging the approximate values 1.1 for $`\beta `$ and 0.7 for $`\mathcal{M}`$ (see section V), we obtain $`n_b=23`$ for $`l_{max}=80`$ and $`n_b=27`$ for $`l_{max}=200`$. A quick glance at Figure 1 shows that these estimates coincide with the first drastic drops of the low-temperature shoulder.
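Plugging the same numbers into Eq. (43) reproduces the two estimates just quoted; a quick sketch (the function name and the rounding down are ours):

```python
import math

def n_b(lmax, beta=1.1, mag=0.7, c=2.0 ** (1.0 / 3.0)):
    """Breakdown iteration estimated from Eq. (43)."""
    return math.floor((math.log(2.0 / beta) - math.log(mag ** 2)
                       + math.log(lmax)) / math.log(c) - 1.0)

estimates = (n_b(80), n_b(200))  # compare with the drops of the shoulder in Fig. 1
```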
One can in principle extend indefinitely the low-temperature shoulder by increasing $`l_{max}`$. However, the CPU time $`t`$ necessary for $`n`$ iterations of a quadratic map in dimension $`l_{max}`$ grows like
$$t\propto n(l_{max})^2.$$
(44)
As we will show in section V, the finite-size effects on $`G_q^c(0)`$ are of the order $`(c/2)^{n_s}`$ where $`n_s`$ is the number of points on the shoulder. This behavior has been demonstrated in the high-temperature phase and we will see later that it also applies in the low-temperature phase. From the previous discussion, $`n_s\simeq \mathrm{ln}(l_{max})/\mathrm{ln}c`$. This implies that the finite-size effects $`\mathcal{E}`$ are of the order
$$\mathcal{E}\propto (l_{max})^{\mathrm{ln}(c/2)/\mathrm{ln}c}.$$
(45)
Using Eq. (44) and the value of $`c`$ expressed in terms of $`D`$ according to Eq. (36) with $`\eta =0`$, we obtain
$$\mathcal{E}\propto t^{-1/(D-2)}.$$
(46)
In particular, for the value $`4/c=2^{5/3}`$ used hereafter, the errors decrease like $`t^{-1}`$. Consequently, we should try to modify the method in such a way that the rapidly oscillating part of $`R_n(k)`$ is treated without polynomial approximations. This possibility is presently under investigation. One can nevertheless obtain results with an accuracy competing with existing methods by using the finite data on the short shoulder in order to extrapolate to the infinite volume limit. This procedure is made possible by the rather regular way the renormalized quantities approach this limit.
## V The Extrapolation to Infinite Volume
### A Preliminary Remarks
There is no spontaneous magnetization at finite volume. This well-known statement can be understood directly from Eq. (16). As explained at the beginning of section IV, at finite $`n`$, $`R_n(k)`$ is simply a superposition of cosines with finite positive coefficients provided that $`\beta `$ is real. However if $`\beta `$ is complex, these coefficients have singularities. This comes from the normalization factor, needed when we impose the condition $`R_n(0)=1`$, which has zeroes in the complex plane. The behavior of these zeroes has been studied in Ref. for $`n`$ between 6 and 12. As the volume increases, these zeroes “pinch” the critical point. However at finite $`n`$, there are no zeroes on the real axis. In conclusion, at real $`\beta `$ and finite $`n`$, $`R_n(k)`$ is an analytical function of $`k`$. For any given $`n`$, we can always take the magnetic field $`H`$ small enough in order to have
$$|H(4/c)^{n/2}|\ll 1.$$
(47)
If we express $`c`$ in terms of the linear dimension using Eqs. (3) and (34) this translates into
$$|H|\ll L^{-(D+2)/2}.$$
(48)
Given the analyticity of $`R_n(k)`$, one can then use Eq. (16) in the linear approximation. In this limit,
$$\langle M_n\rangle _n\simeq -2a_{n,1}H(4/c)^n,$$
(49)
and the magnetization vanishes linearly with the magnetic field.
On the other hand, for any non-zero $`H`$, no matter how small its absolute value is, one can always find an $`n`$ large enough to have $`|H(4/c)^{n/2}|\gg 1`$. The non-linear effects are then important and Eq. (49) does not apply. In addition it is assumed (and will be verified explicitly later) that when such an $`n`$ is reached, the value of the $`G_q^c(0)`$ stabilizes at an exponential rate. One can then first extrapolate to infinite volume at a given magnetic field, and then reduce the magnetic field in order to extrapolate a sequence of infinite volume limits with decreasing magnetic field towards zero magnetic field. Again, this procedure requires some knowledge about the way the second limit is reached. In the case considered here (one scalar component), the limit is reached by linear extrapolation. In systems with more components, the Nambu-Goldstone modes create a square root behavior
$$M(T<T_c,H>0)=M(T,0^+)+CH^{1/2}.$$
(50)
which has been observed for $`O(4)`$ models using Monte Carlo simulations . We now discuss the application of the procedure outlined above in the simplest case.
### B Calculation of the magnetization
In this subsection we discuss the calculation of the infinite volume limit of the magnetization. The magnetization density at finite volume is defined as
$`\mathcal{M}_n(H)`$ $`=`$ $`{\displaystyle \frac{\langle M_n\rangle _{n,H}}{2^n}}.`$ (51)
We call it “the magnetization” when no confusion is possible. For definiteness, we have chosen a special value $`\beta =\beta _c+10^{-1}`$ and calculated the magnetization by plugging numerical values of $`H`$ in Eq. (16) and expanding to first order in $`k`$. The results are shown in Fig. 3 for $`n=17`$ and $`l_{max}=200`$.
As one can see, we have three different regions. The first one (I) is the region where the linear approximation described above applies. For the example considered here, the linearization condition $`|H(4/c)^{n/2}|\ll 1`$ translates into $`\mathrm{log}_{10}(H)\ll -4.3`$. This is consistent with the fact that the linear behavior is observed below -5. The third part (III) is the region where the polynomial approximation breaks down. Given the approximate form given in Eq. (42), this should certainly happen when $`|H(4/c)^{n/2}|\simeq l_{max}`$. This means $`\mathrm{log}_{10}(H)\simeq -2.0`$ in our example. On the figure, one sees that for $`\mathrm{log}_{10}(H)\ge -2.4`$, the magnetization drops suddenly instead of reaching its asymptotic value at large $`H`$, namely $`\mathcal{M}=1`$. Finally, the intermediate region (II) is the one which contains the information we are interested in.
As advertised, we will first take the infinite volume limit of the magnetization at non-zero magnetic field and then extrapolate to zero magnetic field. We need to understand how the second region shown in Fig. 3 changes with $`n`$. From the above discussion, region II is roughly given by the range of magnetic field
$$-(n/2)\mathrm{log}_{10}(4/c)<\mathrm{log}_{10}(H)<\mathrm{log}_{10}(l_{max})-(n/2)\mathrm{log}_{10}(4/c).$$
(52)
In the log scale of Fig. 3, the width of region II is at most $`\mathrm{log}_{10}(l_{max})`$ which is approximately 2.3 in our sample calculation. Region II shifts by $`-(1/2)\mathrm{log}_{10}(4/c)`$, approximately $`-0.25`$ in our sample calculation, at each iteration. In addition, the whole graph moves slightly up at each iteration in a way which is better seen using a linear scale as in Fig. 4.
As one can see, the regions II of seven successive iterations do not overlap. Consequently 1.5 is a more realistic estimate than the previously quoted bound 2.3 for the average width of region II.
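For the sample calculation ($`n=17`$, $`l_{max}=200`$), the two bounds of Eq. (52) can be evaluated directly and match the breakdown points read off from Fig. 3; a quick check:

```python
import math

n, lmax, c = 17, 200, 2.0 ** (1.0 / 3.0)
lower = -(n / 2.0) * math.log10(4.0 / c)   # onset of the non-linear regime
upper = math.log10(lmax) + lower           # breakdown of the polynomial truncation
# lower is approximately -4.3 and upper approximately -2.0, as quoted in the text
```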
The fewer iterations we use to extrapolate to infinite volume, the broader the range of the magnetic field can be. We have compared 5 sets of 4 successive iterations lying well on the low-temperature shoulder, starting from the set (17, 18, 19, 20) up to the set (21, 22, 23, 24). From our experience in the symmetric phase we have assumed that the finite-size effects could be parametrized as
$`\mathcal{M}_n`$ $`=`$ $`\mathcal{M}_{\infty }-A\times B^n.`$ (53)
This relation implies that
$`\mathrm{log}_{10}(\mathcal{M}_{n+1}-\mathcal{M}_n)`$ $`=`$ $`\stackrel{~}{A}+n\times \mathrm{log}_{10}(B),`$ (54)
where $`\stackrel{~}{A}=\mathrm{log}_{10}(A)+\mathrm{log}_{10}(1B)`$. The value $`\stackrel{~}{A}`$ and $`\mathrm{log}_{10}(B)`$ can be obtained from linear fits. For four successive iterations, this will give us 3-point fits for the infinite volume limit at fixed value of $`H0`$. In all fits performed, we found $`B0.63`$, which is compatible with the $`(c/2)^n`$ decay of the finite size effects found in the symmetric phase . In terms of the linear dimension $`L`$ introduced in Eq. (34), this corresponds to finite-size effects decaying like $`L^2`$. If the parametrization of Eq. (53) was exact, the value of $`_n+A\times B^n`$ would be independent of $`n`$ and equal to $`_{\mathrm{}}`$. In practice, variations slightly smaller than $`10^6`$ are observed. We have thus taken an average over these values in order to estimate $`_{\mathrm{}}`$ at fixed $`H`$. The results for the first set are shown in Fig. 5 for various values of $`H`$. The linear behavior allows an easy extrapolation to $`H=0`$.
We have repeated this procedure for the four other sets of successive values defined previously and obtained the $`H=0`$ extrapolations:
Averaging over these five values, we obtain
$`\mathcal{M}_{\mathrm{\infty }}^{H=0}`$ $`=`$ $`0.7105363\times 10^6.`$ (55)
It may be argued that the values coming from sets involving larger values of $`n`$ are better estimates because the finite size effects are smaller for those sets.
We have repeated this type of calculation with sets of 5 successive iterations and a correspondingly narrower range of magnetic field and found results compatible with the estimate given by Eq. (55).
### C The Susceptibility
We now consider the calculation of the connected susceptibility (two-point function). By using the previous notation, we can express it as
$`\chi _n(H)`$ $`=`$ $`{\displaystyle \frac{\langle M_n^2\rangle _{n,H}-\langle M_n\rangle _{n,H}^2}{2^n}}`$ (56)
$`=`$ $`{\displaystyle \frac{(2\times b_2-b_1^2)}{2^n}},`$ (57)
where
$$\frac{R_n(k+iH(4/c)^{n/2})}{R_n(iH(4/c)^{n/2})}=\sum _{q=0}^{\mathrm{\infty }}b_qk^q.$$
(58)
The dependence on $`H`$ of the $`b_n`$ is implicit.
In order to extrapolate the susceptibility to infinite $`n`$, one has to determine the range of the magnetic field for which the scaling $`\langle M_n^2\rangle _{n,H}-\langle M_n\rangle _{n,H}^2\propto 2^n`$ holds. When this is the case, the ratio $`\chi _{n+1}/\chi _n\simeq 1`$. The range of values of the magnetic field for which this scaling is observed is analogous to “region II” introduced in the previous subsection, and we will use the same terminology here. The ratios of the susceptibility at successive $`n`$ are shown for various values of $`H`$ in Fig. 6.
One observes that the range where the desired scaling is observed shrinks when $`n`$ increases. For each successive iteration, the ratio of susceptibilities has an “upside-down U” shape. The values of $`H`$ for which the ratio starts dropping on the left are equally spaced and can be determined by linearization as before. On the other side of the upside-down U, dropping values of the ratio signal the breakdown of the polynomial truncation. This occurs at smaller values of $`H`$ than for the magnetization, making the region II smaller. A theoretical estimate of the lower value of $`H`$ for which this happens requires a more refined parametrization than the one given in Eq. (42). In order to get a controllable extrapolation, we need at least 4 successive values of $`\chi _n`$ (to get at least 3-point fits for the logarithm of the differences). This is unfortunately impossible: the regions II of three successive upside-down Us have no overlap, as one can see from Fig. 6. Similar results are obtained by plotting the susceptibility versus the magnetic field, as shown in Fig. 7.
One sees that the regions II (where the susceptibility can approximately be fitted by a line which extrapolates to a non-zero value at zero $`H`$) do not overlap for 4 consecutive iterations.
### D An Alternate Method
In the previous discussion, we have observed a linear behavior for the region II of the magnetization and the susceptibility. This linear behavior can be used to obtain extrapolations to non-zero values of these quantities at zero magnetic field. These values have no physical interpretation. We denote them by $`\mathcal{M}_n^{H\to 0}`$, the arrow indicating that the quantity is a mathematical extrapolation and not “the spontaneous magnetization at finite volume”. They reach an asymptotic value at an exponentially suppressed rate when $`n`$ increases, just as in Eq. (53). This is illustrated in Fig. 8.
Using a linear fit to fix the unknown parameters $`A`$ and $`B`$ in Eq. (53), and averaging the $`\mathcal{M}_n^{H\to 0}+AB^n`$ over $`n`$, we obtain
$`\mathcal{M}_{\mathrm{\infty }}^{H=0}`$ $`=`$ $`0.7105372\times 10^6,`$ (59)
which is consistent with the result obtained with the standard method.
Roughly speaking, the lines of region II move parallel to each other when $`n`$ increases, and it is approximately equivalent to first extrapolate to zero $`H`$, using the linear behavior in region II, and then to infinite $`n`$, rather than the contrary. If the two limits coincide, the second method has a definite practical advantage: all we need is a small part of region II for each $`n`$, no matter whether it overlaps with the region II for other $`n`$. So in general, it allows us to use more iterations and get better quality extrapolations. The fact that the values of the magnetization obtained with the two methods coincide to 5 significant digits is a strong indication that the two procedures are equivalent. For the susceptibility and higher point functions, we do not have an independent check, since the alternate method is the only one available. However, we were able to make consistency checks, such as the fact that the slope of the straight line used for the zero magnetic field extrapolation of the $`q`$-point function coincides with the $`(q+1)`$-point function.
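The near-commutation of the two orders of limits invoked here can be illustrated with a toy model in which region II is exactly linear in $`H`$; all parameter values below are assumed for the illustration:

```python
import numpy as np

# Toy model: in "region II", M_n(H) = (M_inf - A*B**n) + s*H.
# All parameter values are assumed, not taken from the calculation.
M_inf, A, B, s = 0.71e-6, 1.0e-7, 0.63, 2.0e-5
n_vals = np.arange(17, 25)
H = np.linspace(1.0e-9, 5.0e-9, 5)
M = np.array([(M_inf - A * B**n) + s * H for n in n_vals])

# Order 1: H -> 0 first (intercept of a linear fit at each n),
# then n -> infinity (here using the known A and B for simplicity).
M_H0 = np.array([np.polyfit(H, row, 1)[1] for row in M])
order1 = np.mean(M_H0 + A * B**n_vals)

# Order 2: n -> infinity first at each H, then H -> 0.
M_inf_H = np.array([np.mean(M[:, j] + A * B**n_vals) for j in range(len(H))])
order2 = np.polyfit(H, M_inf_H, 1)[1]
```

In this separable model both orders return the same limit; the agreement to 5 significant digits quoted in the text plays the role of this check on the real data.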
We can repeat the same steps for the 3-point function, which is given by
$`G_3^c`$ $`=`$ $`{\displaystyle \frac{M_3-3M_1M_2+2M_1^3}{2^n}}`$ (60)
$`=`$ $`{\displaystyle \frac{6b_3-6b_1b_2+2b_1^3}{2^n}},`$ (61)
where the dependence on $`H`$ is implicit. As shown in Ref. , $`G_3^c<0`$ for $`H\ne 0`$. Due to the additional subtraction, the range where the proper low-temperature scaling is observed is smaller than for the susceptibility. It is not possible to repeat the same steps for the 4-point function, which is given by
$`G_4^c`$ $`=`$ $`M_4-3M_2^2-4M_1M_3+12M_1^2M_2-6M_1^4,`$ (62)
and involves one more subtraction.
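The connected combinations in Eqs. (56), (60) and (62) are the standard cumulants of the distribution of the total field; they can be checked numerically against a distribution with known cumulants (the gamma distribution and all values below are assumed for the check):

```python
import numpy as np

# Check the moment combinations of Eqs. (56), (60), (62) against samples
# from a gamma distribution, whose cumulants are known exactly:
# kappa_q = shape * scale**q * (q-1)!
rng = np.random.default_rng(1)
shape, scale = 3.0, 2.0
M = rng.gamma(shape, scale, size=400_000)

m1, m2, m3, m4 = (np.mean(M**q) for q in (1, 2, 3, 4))
g2 = m2 - m1**2                                        # connected 2-point
g3 = m3 - 3*m1*m2 + 2*m1**3                            # connected 3-point
g4 = m4 - 3*m2**2 - 4*m1*m3 + 12*m1**2*m2 - 6*m1**4    # connected 4-point

exact = (shape*scale**2, 2*shape*scale**3, 6*shape*scale**4)  # (12, 48, 288)
```

The growing number of subtractions is visible here as growing statistical noise in `g3` and `g4`, which mirrors the shrinking scaling window discussed in the text.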
## VI Estimation of the Exponents
We have used the method described in the previous section to calculate the value of the connected $`q`$-point functions at values of $`\beta `$ approaching $`\beta _c`$ from above, with equal spacings on a logarithmic scale. For reference, the numerical values are given in the table below.
The estimated errors on the values quoted above are of order 1 in the last digit for the first lines of the table and slowly increase when one moves down the table. For the last lines, the effects of the round-off errors become sizable. Otherwise, the errors are mainly due to the extrapolation procedure. We have checked that the numerical values of the quantities at finite $`H`$ had reached their asymptotic values (well within the accuracy of the final result) as a function of $`l_{max}`$.
The results are displayed in Fig. 9 in a log-log plot.
The departure from the linear behavior is not visible on this figure. In the symmetric phase, we know that the relative strength of the subleading corrections is approximately -0.57$`(\beta _c-\beta )^{0.43}`$. It is likely that a similar behavior should be present in the low-temperature phase. Consequently, taking into account the data on the left part of Fig. 9 will distort the value of the exponents. On the other hand, getting too close to criticality generates large numerical errors. Using a linear fit of the data starting with the 5-th point and ending with the 10-th point, we obtain the values of the exponents
$`\gamma _1`$ $`=`$ $`0.3247`$ (63)
$`\gamma _2`$ $`=`$ $`1.2997`$ (64)
$`\gamma _3`$ $`=`$ $`2.9237.`$ (65)
This can be compared with the predictions from scaling and hyperscaling given by Eq. (1) and amount numerically to:
$`\gamma _1`$ $`=`$ $`0.324785`$ (66)
$`\gamma _2`$ $`=`$ $`1.2991407`$ (67)
$`\gamma _3`$ $`=`$ $`2.923066.`$ (68)
Better estimates can be obtained by using the method developed in Ref. , where it was found that the combined effects of the two types of errors are minimized for $`10^{-10}<|\beta _c-\beta |<10^{-9}`$. This allowed estimates of $`\gamma `$ (in the symmetric phase) with errors of order $`3\times 10^{-5}`$ (compared to more accurate estimates). Using 10 values between $`10^{-9}`$ and $`10^{-10}`$ with equal spacing on a logarithmic scale, we obtain here
$`\gamma _1`$ $`=`$ $`0.324775\pm 2\times 10^{-5}`$ (69)
$`\gamma _2`$ $`=`$ $`1.29918\pm 10^{-4}`$ (70)
$`\gamma _3`$ $`=`$ $`2.928\pm 10^{-2}.`$ (71)
The errors due to the subleading corrections and the round-off errors are approximately of the same order in this region of temperature. The errors due to the subleading corrections are larger for larger values of $`|\beta _c-\beta |`$, while the numerical errors are larger for smaller values of $`|\beta _c-\beta |`$. We have estimated the errors due to the subleading corrections by performing the same calculation between $`10^{-8}`$ and $`10^{-9}`$. The error bars quoted above reflect the differences with the exponents obtained in this second region.
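The exponent extraction used in this section is a linear fit in log-log variables; a minimal sketch on an assumed pure power law (amplitude and exponent are illustrative):

```python
import numpy as np

# A pure power law sampled at 10 points with equal log spacing in
# |beta_c - beta|, fitted linearly in log-log variables.
gamma_true = 1.2991407           # assumed exponent for the sketch
t = np.logspace(-10, -9, 10)     # |beta_c - beta|
G = 2.7 * t**(-gamma_true)       # a 2-point-function-like power law

slope, _ = np.polyfit(np.log10(t), np.log10(G), 1)
gamma_fit = -slope
```

On the actual data, the subleading corrections make the log-log plot slightly curved, which is why the fit window in $`|\beta _c-\beta |`$ has to be chosen carefully.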
## VII Conclusions
One sees clearly that our best estimates of the critical exponents (Eq. (69)) are fully compatible with the predictions of hyperscaling (Eq. (66)). The differences between the predicted and calculated values are $`10^{-5}`$ for $`\gamma _1`$, $`4\times 10^{-5}`$ for $`\gamma _2`$ and $`5\times 10^{-3}`$ for $`\gamma _3`$. They fall well within the estimated errors. Since hyperscaling is a reasonable expectation, this also shows that the non-standard extrapolation method that we have used is reliable. As far as $`\gamma _1`$ and $`\gamma _2`$ are concerned, the error bars are smaller than what can usually be reached using a series analysis or a Monte Carlo simulation. Our result for $`\gamma _1`$ is also compatible with the result 0.325 obtained in Ref. for the hierarchical model (for $`\sigma /d=2/3`$ with their notations) using the integral formula.
One could in principle improve the accuracy of these calculations by increasing the size of the polynomial truncation. However, the efficiency of this procedure (errors decreasing like the inverse of the CPU time) is not compatible with our long term objectives (errors decreasing exponentially). The main obstruction to continuing to use the polynomial truncation is that the generating function $`R_n(k)`$ starts oscillating rapidly in the low-temperature phase, making the approximation of the exponential of the Laplacian by a sum inaccurate. It is thus important to obtain an approximate parametrization of $`R_n(k)`$ in terms of eigenfunctions of the Laplacian. A step in this direction is made by the parametrization of Eq. (42). This approximate analytical form needs to be improved in order to include the connected 2-point and higher point functions in terms of an exponential function. This possibility is presently under investigation.
This research was supported in part by the Department of Energy under Contract No. FG02-91ER40664. |
no-problem/9912/astro-ph9912199.html | ar5iv | text | # Spectral Line Variability in the Circumstellar Environment of the Classical T Tauri Star SU Aurigae
## 1. Introduction
SU Aurigae is a G2 classical T Tauri star (cTTS). It has a projected rotational velocity of $`\sim `$ 66 km s<sup>-1</sup> and a rotational period of about 3 days. So far the best period estimates come from the analysis of the spectral variations of the Balmer lines. Giampapa et al. (1993) found a period of 2.98 $`\pm `$ 0.4 days in the blue wing of the H$`\alpha `$ spectral line, later confirmed also in the red wing of H$`\beta `$ by Johns & Basri (1995b). Petrov et al. (1996) reported a period of 3.031 $`\pm `$ 0.003 days measured on the redshifted variations of the Balmer lines. Our data set suggests a somewhat shorter rotational period of $`\sim `$ 2.8 days (Unruh et al. 1998a,b). Using the H$`\alpha `$, H$`\beta `$, Na I D and He I D3 profiles, we try to disentangle the two main components that are believed to be present, namely accretion and wind signatures. From equivalent width measurements of fitted components in H$`\alpha `$ and H$`\beta `$, Johns & Basri (1995b) advocated that the accretion (H$`\beta `$ red absorption feature) and wind (H$`\alpha `$ blue absorption feature) signatures are out of phase in SU Aur, in what they call the misaligned “egg-beater” model or the oblique rotator, as it is also known. This model is a generalization of the magneto-centrifugally driven flow model of Shu et al. (1994). According to this model, a stellar dipole field truncates the accretion disk close to the co-rotation radius. At the truncation point, the ionized disk material is loaded either onto inner closed magnetic field lines, accreting thereby onto the stellar photosphere, or it is loaded onto outer open magnetic field lines that can drive a disk-wind flow. In this context, we analyse H$`\beta `$, Na I D and He I D3 data sets in terms of cross-correlation techniques (expanding on the work of Johns & Basri (1995a) for the H$`\alpha `$ spectral line).
Also intriguing in our data set are two drifting features appearing in H$`\alpha `$ and H$`\beta `$ and, in one case, also in the Na I D lines.
## 2. Description of the Data Set
A more detailed description of our data set can be found in Unruh et al. (1998a), but we highlight here the most important characteristics. This data set was obtained in November 1996 during the MUSICOS 96 multi-site, multi-wavelength campaign, that involved five different observatories: Isaac Newton Telescope (INT, La Palma), Observatoire de Haute Provence (OHP, France), McDonald Observatory (USA), Xinglong Observatory (China) and Canada-France-Hawaii Telescope (CFHT, Hawaii). We obtained 126 echelle spectra spanning 8.5 days, with 3.5 days covered almost continuously. H$`\alpha `$, Na I D and He I D3 were observed at all five observatories while H$`\beta `$ could only be observed from the INT, OHP and CFHT. Our data set differs considerably from previous SU Aur data sets, because those had typically one to two spectra a night, even though over longer time spans. Our data set, by contrast, has a finer time sampling, so we are less limited by the loss of coherency of the observed phenomena. We can hence study the timescales over which the variations of the different spectral lines are related.
## 3. Cross-Correlation Analysis
The broad spectral coverage of our data set allows the comparison of lines that, due to their different ionization potentials, probe different parts of the circumstellar material. We compute the cross-correlation function of pairs of spectral lines as a function of the time lag ($`\mathrm{\Delta }`$t). The data sets are interpolated to account for the unequal spacing (White & Peterson 1994). In simple terms, each velocity bin in one spectral line (at time t) is correlated with all the velocity bins of the other spectral line (interpolated at time t+$`\mathrm{\Delta }`$t), and this gives, for each pair of lines, an intensity map like the examples shown in Fig. 1. These maps provide a wealth of information, even though they are not necessarily easy to interpret. Starting with the auto-correlation function of H$`\beta `$ (see top row in Fig. 1):
* The red wing of H$`\beta `$ auto-correlates very well over the interval \[50:200\] km s<sup>-1</sup>.
* The red wing of H$`\beta `$ \[50:200\] km s<sup>-1</sup> is anti-correlated with the blue wing of H$`\beta `$ \[-150:0\] km s<sup>-1</sup>.
* The red and blue wings are correlated, albeit over a smaller velocity range, for a time lag of $`\sim `$ 1.5 days, approximately half of the detected period. For the same time lag the red wing is anti-correlated with itself.
* With a time lag of $`\sim `$ 2.8 days, the period we detected, we recover almost the initial cross-correlation map, due to periodic variations on both wings.
Thus the blue and red wings of H$`\beta `$ seem indeed to be 180° out of phase. For the cross-correlation of H$`\beta `$ and Na I D a similar analysis was performed. The resulting maps are shown in the middle row in Fig. 1. The map at $`\mathrm{\Delta }`$t=0 shows the same features as the H$`\beta `$ auto-correlation map, although over a narrower velocity range. This suggests that the two Na I resonance lines behave very similarly to the H$`\beta `$ line. In particular we find:
* the red wings of the two Na I D lines \[25:150\] km s<sup>-1</sup> correlate well with the red wing of H$`\beta `$ \[50:200\] km s<sup>-1</sup>.
* With a time lag of about half of the detected period, these parts of the profiles become anti-correlated.
* There is a weak correlation between the blue wings of Na I D \[-100:0\] km s<sup>-1</sup> and H$`\beta `$ \[-150:0\] km s<sup>-1</sup>.
* The red wings of Na I D \[25:150\] km s<sup>-1</sup> are anti-correlated with the blue wing of H$`\beta `$ \[-150:0\] km s<sup>-1</sup>.
Therefore the red wings of Na I D and H$`\beta `$ vary approximately in phase. The case of the cross-correlation between H$`\beta `$ and He I D3 is slightly different, as the whole He I D3 seems to vary in a concerted way (see bottom row in Fig. 1). We find that:
* The whole He I D3 profile \[-100:150\] km s<sup>-1</sup> correlates well with the red wing of H$`\beta `$ \[25:220\] km s<sup>-1</sup>; in these regions the 2.8 day periodicity was detected.
* With a time lag of about half of the detected period the correlation diagram shows these lines as anti-correlated.
In conclusion, we find that the blue and red wings of H$`\beta `$ are approximately 180° out of phase, while the red wings of Na I D and H$`\beta `$ vary in phase. Finally, the whole profile of He I D3 varies in phase with the red wing of H$`\beta `$. The time lags indicated here have to be taken with caution, as no error estimates were performed.
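The lag-map construction used in this section can be sketched as follows; the synthetic spectra, sampling, and interpolation scheme below are illustrative assumptions, not the actual reduction pipeline:

```python
import numpy as np

# Two synthetic line profiles sharing a 2.8-day modulation: line_a varies
# in its red wing (+100 km/s), line_b in its blue wing (-100 km/s).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 8.5, 60))          # uneven sampling (days)
v = np.linspace(-200.0, 200.0, 41)              # velocity bins (km/s)
mod = np.sin(2 * np.pi * t / 2.8)[:, None]
line_a = mod * np.exp(-(v - 100.0)**2 / 50.0**2) + 0.05 * rng.normal(size=(60, 41))
line_b = mod * np.exp(-(v + 100.0)**2 / 50.0**2) + 0.05 * rng.normal(size=(60, 41))

def ccf_map(a, b, t, lag):
    """Correlate every velocity bin of a(t) with every bin of b(t + lag)."""
    keep = t + lag <= t[-1]                     # times where b(t + lag) exists
    b_lag = np.stack([np.interp(t[keep] + lag, t, b[:, j])
                      for j in range(b.shape[1])], axis=1)
    az = (a[keep] - a[keep].mean(0)) / a[keep].std(0)
    bz = (b_lag - b_lag.mean(0)) / b_lag.std(0)
    return az.T @ bz / keep.sum()

map0 = ccf_map(line_a, line_b, t, 0.0)          # in-phase wings: positive peak
map_half = ccf_map(line_a, line_b, t, 1.4)      # half period: anti-correlated
```

Each map is a velocity-velocity matrix analogous to one panel of Fig. 1; scanning the lag traces how a correlation at zero lag turns into an anti-correlation at half the period.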
## 4. Oblique “Egg-beater” Model Predictions
Johns & Basri (1995b) found that the blue wing of H$`\alpha `$ and the red wing of H$`\beta `$ were approximately 180° out of phase. They proposed that this behaviour could be explained by an oblique “egg-beater” model (see Fig. 15 of Johns & Basri 1995b) that predicts that the signatures of mass accretion and disk winds should be rotationally modulated and 180° out of phase, since at different phases the visible wind flow or funnel flow is favoured. From the H$`\beta `$ auto-correlation analysis we indeed have confirmation that the red wing and the blue wing absorptions vary in this way, meaning that the accretion and disk-wind signatures are in phase opposition. Also favouring this model is the fact that the variance profile of H$`\beta `$ shows in the red wing a preferred projected velocity (as expected from a funnel geometry); this is not seen in the blue wing, as this part of the profile originates in a less “collimated” disk-wind. The red wings of the Na I D lines vary in phase with the red wing of H$`\beta `$, a clear sign that they are also formed in accretion flows. Furthermore, the red wing of H$`\beta `$ varies in phase with the whole profile of He I D3. This can be taken as an indication that the helium line excitation is also related to the accretion but not to the wind in SU Aur. He I D3 excitation needs very high temperatures (or densities), therefore it cannot simply be formed in the accreting flows. If the source of the excitation is the energy released at the “foot-points” of the accretion streams, then one could expect that the H$`\beta `$ red wing (accretion) and the He I D3 line vary approximately in phase, as we have found.
## 5. Transient Features
A drifting feature appears between the eighth and eleventh days of our observations in the profiles of H$`\alpha `$, H$`\beta `$ and Na I D (see Fig. 1 of Unruh et al. 1998b). Analysis of the Na I D profiles makes it clear that this is an absorption component. As this part of the profile correlates well with the corresponding parts in the Balmer profiles, we treat it as the same drifting absorption feature, present in all three spectral lines. It is important to point out that no clear periodicity was detected in this part of the profiles for any of the lines. We have measured the velocity position of the minima of the line profiles in the relevant velocity interval and in Fig. 2 we show their velocity evolution. It seems to indicate that v(H$`\alpha `$) $`<`$ v(H$`\beta `$) $`<`$ v(Na I D). However, the velocity measurements for H$`\beta `$ have to be treated with caution as the feature can not be identified unambiguously in this line. Instead, we observe that the blueshifted absorption gets broader and deeper. The velocity difference (between the lines) probably traces material accelerating outwards. It should be stressed that approximately 1 day after the decay of the feature described above, another transient feature was detected in H$`\alpha `$ and H$`\beta `$ but no counterpart was seen in the Na I D lines.
## 6. Conclusions
The present analysis supports the so-called oblique or misaligned “egg-beater” model (Johns & Basri 1995b). Our data set covers 8.5 days with several spectra each night, and at least 3.5 days of almost continuous coverage, allowing us to compare different (parts of the) line profiles and study over which time scales they are related. The temporal relations we have found between the spectral lines help to clarify which parts of the line profile variations originate from which component of the circumstellar environment of SU Aur: the red wing absorption components of the H$`\beta `$ and Na I D lines form in the accretion funnels, He I D3 forms in the highly energetic region at the “foot-points” of these columns, and the slightly blueshifted absorption feature in the blue wing of H$`\beta `$ forms in a disk-wind flow. Also present in the data set are transient features not fully understood, possibly the signature of an outwardly accelerating (stellar) wind component.
### Acknowledgments.
JMO acknowledges the support of the Fundação para a Ciência e Tecnologia (Portugal) under the grant BD9577/96. YCU acknowledges the support through grant S7302 of the Austrian Fond zur Wissenschaftlichen Förderung. The authors wish to thank the MUSICOS 96 collaboration and the staff in all the observatories involved.
## References
Giampapa M.S., Basri G.S., Johns C.M., Imhoff C.L. 1993, ApJS 89, 321
Johns C.M., Basri G. 1995a, AJ 109, 2800
Johns C.M., Basri G. 1995b, ApJ 449, 341
Petrov P.P., Gullbring E., Ilyin I. et al. 1996, A&A 314, 821
Shu F., Najita J., Ostriker E. et al. 1994, ApJ 429, 781
Unruh Y.C. et al. 1998a, The 10th Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, ed. R.A. Donahue, J.A. Bookbinder, CD-2064
Unruh Y.C., Donati J. et al. 1998b, ESO Workshop on “Cyclical Variability in Stellar Winds”, 355
White R.J., Peterson B.M. 1994, PASP 106, 879 |
no-problem/9912/astro-ph9912214.html | ar5iv | text | # On the energy of gamma-ray bursts
## 1 Introduction
The widely accepted interpretation of the phenomenology of $`\gamma `$-ray bursts (GRBs) is that the observable effects are due to the dissipation of the kinetic energy of a relativistically expanding fireball (see Mészáros (1995) and Piran (1996) for reviews). In the last two years, afterglows of GRBs have been discovered at X-ray (Costa et al. (1997)), optical (van Paradijs et al. (1997)) and radio (Frail et al. (1997)) wavelengths. Afterglow observations lead to the confirmation of the cosmological origin of the bursts through the detection of redshifted optical absorption lines (e.g. Metzger et al. (1997)), and confirmed (Waxman 1997a ; Wijers, Rees & Mészáros (1997)) standard model predictions (Paczyński & Rhoads (1993); Katz (1994); Mészáros & Rees (1997); Vietri 1997a ) of afterglow that results from the collision of the expanding fireball with surrounding medium.
In fireball models of GRBs the energy released by an explosion is converted to kinetic energy of a thin baryonic shell expanding at an ultra-relativistic speed. After producing the GRB, the shell impacts on surrounding gas, driving an ultra-relativistic shock into the ambient medium. After a short transition phase, the expanding blast wave approaches a self-similar behavior (Blandford & McKee (1976)). The long term afterglow is produced by the expanding shock that propagates into the surrounding gas. This shock continuously heats fresh gas and accelerates relativistic electrons to a power-law energy distribution, which produces the observed radiation through synchrotron emission. The simplest fireball afterglow model therefore depends on five model parameters: the explosion energy $`E`$, the ambient density $`n`$, the spectral index of the electron energy distribution $`p`$, and the fractions $`\xi _e`$ and $`\xi _B`$ of shock thermal energy carried by electrons and magnetic field respectively. The model may be further elaborated by allowing inhomogeneous density distribution and deviations from spherical symmetry (Mészáros, Rees & Wijers (1998); Vietri 1997b ; Rhoads (1997); Chevalier & Li (1999)).
Despite the fact that afterglow observations are in general consistent with fireball model predictions, observations do not allow, in almost all cases, the determination of basic fireball parameters. In particular, the fireball energy, and hence the efficiency with which this energy is converted to $`\gamma `$-ray radiation, can be reliably determined only in one case, namely that of GRB 970508, for which wide spectral coverage is available over hundreds of days (Waxman 1997b ; Waxman, Kulkarni & Frail (1998); Granot et al. (1999); Wijers & Galama (1999); Frail, Waxman & Kulkarni (1999)). For all other cases, energy estimates rely on the observed $`\gamma `$-ray fluence (which dominates the fluence at all other wave-bands). Such energy estimates are, however, subject to large uncertainties.
During $`\gamma `$-ray emission the fireball expands with a large Lorentz factor, $`\mathrm{\Gamma }\gtrsim 10^{2.5}`$, and a distant observer receives radiation from a conical section of the fireball of opening angle $`\sim 10^{-2.5}`$ around the line of sight. Observed $`\gamma `$-ray fluence provides therefore the $`\gamma `$-ray energy per unit solid angle $`\epsilon _\gamma `$ emitted by a conical section of the fireball of opening angle $`\sim 10^{-2.5}`$. Thus, if the fireball is a jet of opening angle $`\theta _j\gtrsim 10^{-2.5}`$, its total $`\gamma `$-ray energy would be smaller by a factor $`\theta _j^2/4`$ compared to that inferred assuming spherical symmetry. The total $`\gamma `$-ray energy emitted may differ significantly from that obtained assuming spherical symmetry also for $`\theta _j\sim 1`$, if $`\epsilon _\gamma `$ is strongly dependent on the angle $`\theta `$ with respect to the line of sight. It has therefore been suggested (Kumar & Piran (1999)) that the most energetic bursts (i.e. those with highest isotropic $`\gamma `$-ray energy) may not represent much higher energy release from the source, but rather cases in which our line of sight happens to coincide with a small patch on the fireball emitting surface which is much brighter than average.
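The $`\theta _j^2/4`$ factor quoted here is simply the solid-angle fraction of a single cone; a minimal numerical check (the jet angle below is an assumed example value):

```python
import numpy as np

# Fraction of the full sky (4*pi sr) covered by a cone of half-opening
# angle theta_j, and its small-angle approximation theta_j**2 / 4.
def beaming_fraction(theta_j):
    return (1.0 - np.cos(theta_j)) / 2.0

theta_j = 0.1                                  # assumed jet half-angle (rad)
exact = beaming_fraction(theta_j)
approx = theta_j**2 / 4.0
```

Multiplying an isotropic-equivalent energy by this fraction gives the corresponding jet energy, which is the correction discussed in the text.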
Energy estimates based on $`\gamma `$-ray fluence are furthermore uncertain even in the case of spherical symmetry, since it is possible that only a small fraction of fireball energy is converted to $`\gamma `$-ray radiation. It is generally argued, for example, that if $`\gamma `$-ray emission is due to internal shocks within the fireball, then only a small fraction, $`<10^{-2}`$, of fireball energy is converted to $`\gamma `$-rays (Panaitescu, Spada & Mészáros (1999); Kumar (1999); Kumar & Piran (1999)). The low efficiency obtained in these analyses is not due to low electron energy fraction (equipartition, $`\xi _e\simeq 1/3`$, is assumed), but rather due to the low efficiency of converting kinetic energy to thermal energy in the models analyzed, and due to the fact that not all of the radiation is emitted in the observed $`\gamma `$-ray band. For $`\xi _e`$ values below equipartition, the efficiency would be even lower.
The main goal of this paper is to address the open questions associated with GRB energy and $`\gamma `$-ray production efficiency. We show that significant progress can be made in inferring basic fireball parameters if, instead of analyzing individual burst data, the ensemble of GRB afterglow observations is analyzed under the hypothesis that the values of parameters which are determined by the micro-physics of the relativistic shock, e.g. $`p`$ and $`\xi _e`$, are universal. We first show in §2.1 that afterglow observations provide strong support to the hypothesis that the spectral index of the electron energy distribution is universal, $`p\simeq 2`$. We then show in §2.2 that, adopting a universal $`p`$ value, a single X-ray afterglow flux measurement at time $`t`$ provides a robust estimate of the total energy per unit solid angle carried by fireball electrons, $`\epsilon _e\equiv \xi _eE/4\pi `$, over a conical section of the fireball of opening angle $`1/\mathrm{\Gamma }(t)`$, where $`\mathrm{\Gamma }`$ is the fireball expansion Lorentz factor (for X-ray observations on a day time scale, $`\mathrm{\Gamma }\sim 10`$). We then show in §2.3 that afterglow observations also imply a universal value close to equipartition for the electron energy fraction, $`\xi _e\simeq 1/3`$. Thus, X-ray afterglow flux measurement also provides a robust estimate of the total fireball energy per unit solid angle, $`\epsilon \equiv E/4\pi `$. Applying these ideas we provide in §2 constraints on fireball energy for all GRBs with X-ray afterglow data. The implications of our results are discussed in §3.
## 2 Analysis
### 2.1 Electron spectral index
For a power-law electron energy distribution, $`dn_e/d\gamma _e\propto \gamma _e^{-p}`$, the spectrum of synchrotron emission is $`f_\nu \propto \nu ^{-(p-1)/2}`$ at frequencies where emission is dominated by electrons for which the synchrotron cooling time is larger than the source expansion time, and $`f_\nu \propto \nu ^{-p/2}`$ at higher frequencies, where electron synchrotron cooling time is shorter (e.g. Rybicki & Lightman (1979)). Unfortunately, observed afterglow spectra do not in general allow an accurate determination of $`p`$, since optical spectra may be affected by extinction and since X-ray data typically do not allow accurate determination of X-ray photon spectral indices. Accurate determination of $`p`$ is possible, nevertheless, in the case of GRB 970508, where radio, optical and X-ray spectral data are available; these data imply $`p=2.2\pm 0.1`$ (Galama et al. (1998); Frail, Waxman & Kulkarni (1999)), and in the case of GRB 990510, where BVRI optical data determine $`p=2.2\pm 0.2`$ (Stanek et al. (1999)).
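The two synchrotron slopes quoted here follow from the electron index through the standard slow/fast-cooling relations; a minimal sketch:

```python
# Energy spectral index beta (f_nu ~ nu**-beta) for a power-law electron
# distribution dn_e/dgamma_e ~ gamma_e**-p, in the two cooling regimes.
def spectral_index(p, fast_cooling):
    return p / 2.0 if fast_cooling else (p - 1.0) / 2.0

p = 2.2
beta_slow = spectral_index(p, fast_cooling=False)   # (p-1)/2 = 0.6
beta_fast = spectral_index(p, fast_cooling=True)    # p/2 = 1.1
```

For $`p=2.2`$ this reproduces the 0.6–1.1 range used to interpret the observations below.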
Within the framework of the fireball model, the afterglow flux at high frequency exhibits a power-law decay in time, $`f_\nu \propto t^{-\alpha }`$, where $`\alpha `$ is related to the electron index $`p`$. Here too, accurate determination of $`p`$ is generally not possible based on measurements of $`\alpha `$, since the relation between $`\alpha `$ and $`p`$ depends on the spatial distribution of ambient density and on the angular distribution of fireball parameters (Mészáros, Rees & Wijers (1998)). However, in several cases the time dependence of optical flux strongly suggests a universal value $`p\simeq 2`$. The steepening of optical flux decay in GRB 990510 afterglow (Stanek et al. (1999)) and the fast decline of optical flux for GRB 980519, $`\alpha =2.05\pm 0.04`$, and GRB 980326, $`\alpha =2.1\pm 0.1`$, (Halpern, Kemp, & Piran (1999); Groot et al. (1999)) are most naturally explained (Rhoads (1997, 1999); Mészáros, Rees & Wijers (1998); Sari, Piran & Halpern (1999); Harrison et al. (1999); Stanek et al. (1999)) by the fireball being initially a collimated jet. In this case steepening takes place once sideways expansion of the jet becomes important, at which stage the temporal power-law flux decay index is $`\alpha =p`$. The observed values of $`\alpha `$ are therefore consistent with $`p\simeq 2`$.
We present here additional evidence for a universal $`p\simeq 2`$ value. In Table 1 the effective spectral index determined by X-ray and optical afterglow fluxes, $`\beta _{OX}\equiv -\mathrm{ln}(f_X/f_O)/\mathrm{ln}(\nu _X/\nu _O)`$, is shown for all cases where both X-ray and optical afterglow fluxes are available. The values of $`\beta _{OX}`$ are in the range of 0.6 to 1.1. This is the range expected for a power-law electron energy distribution with $`p=2.2`$. For such a distribution $`\beta _{OX}=p/2=1.1`$ is obtained for the case where the frequency $`\nu _c`$, at which emission is dominated by electrons with synchrotron cooling time comparable to the fireball expansion time, is below the optical band, $`\nu _c<\nu _O`$; $`\beta _{OX}=(p-1)/2=0.6`$ is obtained for $`\nu _c>\nu _X`$; and $`0.6<\beta _{OX}<1.1`$ for $`\nu _O<\nu _c<\nu _X`$.
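The $`\beta _{OX}`$ estimator can be sketched as follows; the fluxes and frequencies below are assumed example values, with the sign convention that makes $`\beta _{OX}`$ positive for declining spectra:

```python
import numpy as np

# Effective optical-to-X-ray spectral index from two flux densities.
def beta_OX(f_X, f_O, nu_X, nu_O):
    return -np.log(f_X / f_O) / np.log(nu_X / nu_O)

nu_O, nu_X = 5.0e14, 1.0e18        # Hz (optical R band, ~4 keV), assumed
f_O = 30.0e-6                      # Jy, an assumed optical flux density
f_X = f_O * (nu_X / nu_O)**(-0.9)  # constructed to have beta_OX = 0.9
b = beta_OX(f_X, f_O, nu_X, nu_O)
```

Since only the ratio of the two flux densities enters, a single quasi-simultaneous optical and X-ray measurement suffices, which is why Table 1 can cover all bursts with both bands detected.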
We note that an alternative model for the fast, $`\alpha \approx 2`$, optical flux decline of GRB 980519 has been suggested (Chevalier & Li (1999)), in which the afterglow is produced by a spherical fireball expanding into a pre-burst massive star wind, where $`n\propto r^{-2}`$. Since for $`p\approx 2`$ expansion into a wind gives an optical flux decline which is not significantly steeper than that obtained for expansion into homogeneous density (e.g. Livio & Waxman (1999)), the wind model for GRB 980519 invokes a steep electron index $`p=3`$ to account for the fast decline (Chevalier & Li (1999)). We note, however, that for such a steep electron index the model X-ray flux is $`6`$ times lower than the measured flux at $`t=0.5`$ d<sup>1</sup><sup>1</sup>1 The apparent agreement between model and measured X-ray flux in Fig. 1 of Chevalier & Li (1999) is due to the fact that the presented flux is calculated assuming $`\nu _c>\nu _X`$ while for the wind model parameters $`\nu _c\approx 3\times 10^{16}\mathrm{Hz}\ll \nu _X=10^{18}`$ Hz..
### 2.2 Fireball electron energy
For clarity, we first discuss in this section spherical fireballs of energy $`E`$ expanding into uniform ambient density $`n`$. We then generalize the discussion to the case where spherical symmetry and homogeneity are not assumed. Since the value of $`p`$ in the cases where it is best determined by observations is $`p=2.2`$, we present numeric results for $`p=2.2`$ and comment on the sensitivity of the results to changes in $`p`$ value.
Adopting the hypothesis that the electron spectral index value is universal, $`p\approx 2`$, the fact that $`\beta _{OX}>0.6`$ in all cases (see Table 1) implies that $`\nu _c<\nu _X`$ in all cases (note that extinction in the host galaxy can only reduce the observed value of $`\beta _{OX}`$). This is indeed expected, since on a time scale of a day, the time scale over which X-ray afterglow is observed, $`\nu _c`$ is typically expected to be well below the X-ray band. The (fireball rest frame) Lorentz factor $`\gamma _c`$ of electrons for which the synchrotron cooling time is comparable to the (rest frame) adiabatic cooling time is determined by $`6\pi m_ec/\sigma _T\gamma _cB^2=24\mathrm{\Gamma }t/13`$. Here, $`\mathrm{\Gamma }`$ is the shocked plasma Lorentz factor, related to observed time by $`t\approx r/4\mathrm{\Gamma }^2c`$ (Waxman 1997c ), and the adiabatic cooling time is $`6r/13\mathrm{\Gamma }c`$ (Gruzinov & Waxman (1999)). The characteristic (observed) frequency of synchrotron photons emitted by such electrons, $`\nu _c\approx 0.3\mathrm{\Gamma }\gamma _c^2eB/2\pi m_ec`$, is
$$\nu _c\approx 4.7\times 10^{13}\left(\frac{1+z}{2.5}\right)^{-1/2}\xi _{B,-2}^{-3/2}n_0^{-1}E_{53}^{-1/2}t_\mathrm{d}^{-1/2}\mathrm{Hz},$$
(1)
where $`E=10^{53}E_{53}`$ erg, $`n=n_0\mathrm{cm}^{-3}`$, $`\xi _B=10^{-2}\xi _{B,-2}`$ and $`t=t_\mathrm{d}`$ d. In deriving Eq. (1) we have used the self-similar relation between fireball Lorentz factor and radius, $`\mathrm{\Gamma }=(17E/16\pi nm_pc^2)^{1/2}r^{-3/2}`$ (Blandford & McKee (1976)), and the relation $`t=r/4\mathrm{\Gamma }^2c`$ between fireball radius and observed time (Waxman 1997c ).
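The scaling of Eq. (1) is easy to tabulate. The helper below is our own sketch (the function name is an assumption), taking the fiducial normalization $`4.7\times 10^{13}`$ Hz and the negative power-law dependences on $`\xi _B`$, $`n`$, $`E`$ and $`t`$; it shows that on a one-day time scale $`\nu _c`$ indeed falls far below the X-ray band.

```python
def nu_c_hz(z=1.5, xi_B=0.01, n=1.0, E53=1.0, t_d=1.0):
    """Cooling frequency of Eq. (1):
    nu_c ~ 4.7e13 ((1+z)/2.5)^(-1/2) (xi_B/0.01)^(-3/2) n^-1
           * E53^(-1/2) * t_d^(-1/2)  [Hz]."""
    return (4.7e13 * ((1 + z) / 2.5) ** -0.5 * (xi_B / 1e-2) ** -1.5
            / n / E53 ** 0.5 / t_d ** 0.5)

# At the fiducial parameters nu_c is ~5e13 Hz, four orders of magnitude
# below the 1e18 Hz X-ray band:
print(nu_c_hz())          # -> 4.7e13
print(nu_c_hz() < 1e18)   # -> True
```

Varying the arguments reproduces the statement in the text that $`\nu _c<\nu _X`$ is easily satisfied on a day time scale for any reasonable parameter choice.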
The synchrotron peak flux $`f_m`$ in the fireball model under consideration is time independent,
$$f_m=C_1(1+z)d_L^{-2}\xi _B^{1/2}En^{1/2},$$
(2)
and the peak frequency $`\nu _m`$ decreases with time,
$$\nu _m=C_2(1+z)^{1/2}\xi _e^2\xi _B^{1/2}E^{1/2}t^{-3/2}.$$
(3)
Here, $`C_{1,2}`$ are numeric constants, and $`d_L`$ is the burst luminosity distance. In what follows we use the analytic results of Gruzinov & Waxman (1999) for the values of the numeric constants, $`C_1=1.4\times 10^{-21}\mathrm{cm}^{3/2}`$ and $`C_2=6.1\times 10^{-5}\mathrm{s}^{3/2}\mathrm{g}^{-1/2}\mathrm{cm}^{-1}`$. Similar values have been obtained by numerical (Granot et al. (1999)) and approximate analytic (Wijers & Galama (1999)) calculations. Order unity differences between the various analyses reflect different detailed model assumptions and the degree of accuracy of the approximations. Note that the exact value of $`C_2`$ depends on the detailed shape of the electron distribution function ($`C_1`$ is insensitive to such details, Gruzinov & Waxman (1999)). Although we have strong evidence that at high energy this distribution is a power-law, it may not be a pure power-law at low energy, which would affect the value of $`C_2`$. However, the relation (3) would still hold for all GRBs as long as the distribution function is universal.
Using Eqs. (1)–(3), the fireball electron energy, $`\xi _eE`$, is related to the observed flux $`f_\nu (\nu ,t)`$ at time $`t`$ and frequency $`\nu >\nu _c`$, $`f_\nu =f_m\nu _m^{(p-1)/2}\nu _c^{1/2}\nu ^{-p/2}`$, as
$$\xi _eE=(C_2C_3)^{-1/2}C_1^{-1}\frac{d_L^2}{1+z}\nu tf_\nu (\nu ,t)Y^ϵ,$$
(4)
where
$$Y\equiv C_1C_3^{1/2}C_2^{-3/2}\xi _e^{-3}\xi _B^{-1}d_L^{-2}\nu t^2f_\nu ^{-1}(\nu ,t),ϵ\equiv \frac{p-2}{p+2}.$$
(5)
Here, we have defined $`C_3=6.9\times 10^{39}\mathrm{s}^{-3/2}\mathrm{g}^{1/2}\mathrm{cm}^{-2}`$ so that $`\nu _c=C_3(1+z)^{-1/2}\xi _B^{-3/2}n^{-1}E^{-1/2}t^{-1/2}`$. Eq. (4) implies that a measurement of the flux $`f_\nu `$ at a frequency above the cooling frequency provides a robust estimate of the fireball electron energy. The energy estimate is independent of the ambient density $`n`$ and nearly independent of $`\xi _B`$, $`\xi _eE\propto \xi _B^{-ϵ}`$ with $`ϵ\ll 1`$ (e.g. $`ϵ=1/21`$ for $`p=2.2`$). Since $`ϵ\ll 1`$, the value of $`Y^ϵ`$ is similar for all GRBs. It also implies that changing the value of $`p`$ would affect the energy estimate of all bursts in a similar way. For typical parameters, $`f(\nu =10^{18}\mathrm{Hz},t=1\mathrm{d})=0.1\mu \mathrm{Jy}`$, $`d_L=3\times 10^{28}`$ cm, $`\xi _e=0.2`$ and $`\xi _B=0.01`$, we have $`Y^ϵ=10^{10ϵ}`$, i.e. $`Y^ϵ=3`$ for $`p=2.2`$ and $`Y^ϵ=8`$ for $`p=2.4`$. Note that changing $`p=2.2`$ to $`p=2.4`$ would increase the energy estimate by less than a factor of $`8/3`$, since the value of $`C_2`$ is higher for larger $`p`$. For a pure power-law electron distribution, $`C_2^{1/2}\propto (p-2)/(p-1)`$ and the energy obtained assuming $`p=2.4`$ is larger than that obtained assuming $`p=2.2`$ by a factor of $`1.6`$.
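The energy estimate of Eq. (4) can be sketched numerically. We assume here the cgs values $`C_1=1.4\times 10^{-21}`$, $`C_2=6.1\times 10^{-5}`$ and $`C_3=6.9\times 10^{39}`$ (with these, the typical parameters quoted above give $`\xi _eE`$ of order $`10^{53}`$ erg, consistent with the text); the function name and default $`Y^ϵ=3`$ are our own choices.

```python
C1, C2, C3 = 1.4e-21, 6.1e-5, 6.9e39   # cgs constants quoted in the text

def xi_e_E(f_nu, nu, t, d_L, z, Y_eps=3.0):
    """Fireball electron energy of Eq. (4):
    xi_e*E = (C2*C3)^(-1/2) C1^-1 d_L^2/(1+z) * nu*t*f_nu * Y^eps.
    f_nu in erg cm^-2 s^-1 Hz^-1, nu in Hz, t in s, d_L in cm."""
    return (C2 * C3) ** -0.5 / C1 * d_L ** 2 / (1 + z) * nu * t * f_nu * Y_eps

# Typical numbers from the text: 0.1 microJy (= 1e-30 cgs) at 1e18 Hz
# and t = 1 d, for d_L = 3e28 cm, z = 1.5, p = 2.2:
E = xi_e_E(f_nu=1e-30, nu=1e18, t=86400.0, d_L=3e28, z=1.5)
print(f"{E:.1e} erg")   # ~1e53 erg
```

Note that neither the ambient density nor (to leading order) $`\xi _B`$ enters the call, which is the robustness property emphasized in the text.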
A few comments should be made here regarding the uniform density and spherical symmetry assumptions. Since the energy estimate (4) is independent of $`n`$, it holds not only for the case of homogeneous density, but also for models with variable density, e.g. for wind models in which $`n\propto r^{-2}`$. The values of the constants $`C_{1,2,3}`$ would of course differ, by order unity factors, from those used here. However, the relation (4) should still hold and the energy estimate would not be significantly affected as $`C_{1,2,3}`$ would not be significantly modified.
We have so far assumed spherical symmetry. Since the fireball expands at relativistic speed, a distant observer receives radiation from a conical section of the fireball with opening angle $`\sim 1/\mathrm{\Gamma }(t)`$ around the line of sight. Thus, the energy $`E`$ in the discussion above should be understood as the energy that the fireball would have carried had it been spherically symmetric. In particular, Eq. (4) determines the fireball energy per unit solid angle $`\epsilon _e\equiv \xi _eE/4\pi `$, within the observable cone of opening angle $`\sim 1/\mathrm{\Gamma }(t)`$. The Lorentz factor $`\mathrm{\Gamma }`$,
$$\mathrm{\Gamma }=10.6\left(\frac{1+z}{2}\right)^{3/8}\left(\frac{E_{53}}{n_0}\right)^{1/8}t_\mathrm{d}^{-3/8},$$
(6)
is only weakly dependent on fireball parameters. Thus, X-ray observations on a $`\sim 1`$ d time scale provide information on a conical section of the fireball of opening angle $`\theta \sim 0.1`$ (this holds also for the wind case, Livio & Waxman (1999)). Note that, since the observed $`\gamma `$-rays are emitted at a stage when the fireball is highly relativistic, $`\mathrm{\Gamma }\gtrsim 300`$, $`\gamma `$-ray observations provide information on a much smaller section of the fireball, $`\theta \sim 10^{-2.5}`$.
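Equation (6) can be evaluated directly to see how weak the parameter dependence is; the sketch below is our own (the function name is an assumption).

```python
def gamma(t_d, z=1.5, E53=1.0, n0=1.0):
    """Fireball Lorentz factor of Eq. (6):
    Gamma = 10.6 ((1+z)/2)^(3/8) (E53/n0)^(1/8) t_d^(-3/8)."""
    return 10.6 * ((1 + z) / 2) ** 0.375 * (E53 / n0) ** 0.125 * t_d ** -0.375

# On a day time scale Gamma ~ 10, so X-ray data probe a cone of
# half-angle ~ 1/Gamma ~ 0.1 rad; the 1/8 power makes even a factor
# of 100 in E/n shift Gamma by less than a factor of 2:
g = gamma(1.0, z=1.0)
print(g, 1.0 / g)   # 10.6, ~0.094
print(gamma(1.0, z=1.0, E53=100.0) / g)
```

The same call with $`t_\mathrm{d}\gg 1`$ shows how later observations average over progressively wider sections of the fireball.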
Table 2 compares the observed GRB $`\gamma `$-ray energy per unit solid angle with the fireball electron energy per unit solid angle derived from the X-ray afterglow flux using Eq. (4). Results are given for all bursts with published $`\gamma `$-ray fluence and X-ray afterglow flux (note that for all but one of the BeppoSAX triggered GRBs which were observed with the NFI, X-ray afterglow has been detected; we exclude GRB 980425 from the analysis, since for this burst it is not clear whether or not X-ray afterglow was detected, Pian et al. (1999)). We present our results in terms of $`3\epsilon _e=3\xi _e\epsilon `$, which is the total fireball energy under the assumption of electron energy equipartition, $`\xi _e\approx 1/3`$. The results are also shown in Fig. 1. We have included in the table and figure bursts for which optical data is not available. For such bursts, the effective spectral index $`\beta _{OX}`$ cannot be determined, and therefore it cannot be directly demonstrated from observations that $`\nu _X>\nu _c`$, and hence that Eq. (4) applies. However, it is clear from Eq. (1) that the condition $`\nu _X>\nu _c`$ is likely to be satisfied on a day time scale. In addition, the distribution of GRB $`\gamma `$-ray and total energy ratios we infer for bursts without optical counterpart is similar to that of bursts with optical counterpart, indicating that indeed the condition $`\nu _X>\nu _c`$ is satisfied. To determine the absolute energy of GRBs for which the redshift is unknown, we have assumed a GRB redshift of $`z=1.5`$, since, based on measured GRB redshifts, most detected GRBs are expected to occur in the redshift range of 1 to 2 (Krumholtz, Thorsett & Harrison (1998); Mao & Mo (1998); Hogg & Fruchter (1999)).
### 2.3 Electron energy fraction
Several characteristics of afterglow observations imply that the electron energy fraction $`\xi _e`$ is close to equipartition. For GRB 970508 the afterglow data is detailed enough to determine $`\xi _e\approx 0.2`$ (Waxman 1997b ; Wijers & Galama (1999); Granot et al. (1999)). A similar conclusion can be drawn for GRB 971214. For this GRB, $`\nu _m\approx 4\times 10^{14}`$ Hz and $`f_m\approx 0.03`$ mJy have been observed at $`t=0.58`$ d (Ramaprakash et al. (1998)). Using Eqs. (2) and (3) this implies, for the GRB 971214 redshift $`z=3.42`$ (Kulkarni et al. (1998)), $`\xi _e\approx 1(\xi _B/0.1)^{-1/8}n_0^{1/8}`$. Thus, a value close to equipartition is implied by the GRB 971214 observations (Wijers & Galama (1999) suggest a different interpretation of the GRB 971214 data, which also requires $`\xi _e\approx 1`$).
For most other bursts, afterglow observations are not sufficient for determining the value of $`\xi _e`$. However, $`\xi _e\approx 1/3`$ is consistent with all afterglow data, and is indeed commonly used in afterglow models attempting to account for observations. This is consistent with our hypothesis of a universal $`\xi _e`$ value close to equipartition. Additional support for this hypothesis is provided by the following argument.
GRANAT/SIGMA observations of GRB 920723 (Burenin et al. (1999)) and the BATSE observation of GRB 980923 (Giblin et al. 1999a ) show a continuous transition from GRB to afterglow phase in the hard X-ray range. It has also been shown for several BeppoSAX bursts that the 2–10 keV flux at the end of the GRB phase, on a time scale of tens of seconds, is close to that obtained by extrapolating backward in time the X-ray afterglow flux (e.g. Costa et al. (1997); In ’t Zand (1998)). This suggests that the late GRB flux is in fact dominated by afterglow emission, i.e. by synchrotron emission from the shock driven into the ambient medium (see also Frontera et al. 1999a ). On a minute time scale, the emission peaks at $`\sim 10`$ keV energies (Giblin et al. 1999a ; Frontera et al. 1999a ), implying $`\nu _m\approx 2.5\times 10^{18}`$ Hz at this time (note that this value is consistent with the $`\nu _m`$ values inferred at later times for GRB 970508 and GRB 971214). Using Eq. (3), we find that this in turn implies $`\xi _e\approx 0.4(\xi _B/0.1)^{-1/4}E_{53}^{-1/4}`$. Thus, a value of $`\xi _e`$ well below equipartition, $`3\xi _e\ll 1`$, would require $`E_{53}\propto \xi _e^{-4}\gg 1`$, inconsistent with our conclusion that $`3\xi _eE_{53}\sim 1`$ (see Fig. 1).
## 3 Discussion and implications
We have shown that afterglow observations strongly suggest that the energy distribution of shock accelerated electrons is universal, given at high energy by a power-law $`dn_e/d\gamma _e\propto \gamma _e^{-p}`$ with $`p\approx 2.2`$ (§2.1), and that the energy fraction carried by electrons is also universal and close to equipartition, $`\xi _e\approx 1/3`$ (§2.3). Adopting the hypothesis that the value of $`p`$ is universal and close to 2, $`p\approx 2.2`$, we showed (§2.2) that a single measurement at time $`t`$ of the X-ray afterglow flux provides, through Eq. (4), a robust estimate of the fireball electron energy per unit solid angle, $`\epsilon _e`$, over a conical section of the fireball of opening angle $`\sim 1/\mathrm{\Gamma }(t)`$, where $`\mathrm{\Gamma }(t)`$ is the fireball Lorentz factor. On a day time scale $`\mathrm{\Gamma }\sim 10`$ \[see Eq. (6)\], and the X-ray flux therefore provides a robust estimate of $`\epsilon _e`$ over an opening angle $`\theta \sim 0.1`$. Adopting the hypothesis that $`\xi _e`$ is close to equipartition, the X-ray flux provides also a robust estimate of the total fireball energy per unit solid angle, $`\epsilon `$, over an opening angle $`\theta \sim 0.1`$.
We emphasize here that the total (or electron) fireball energy estimates are not based on the total X-ray fluence. The X-ray fluence is dominated by the early time, $`t\lesssim 10`$ s, emission. Since at this time the fireball Lorentz factor is high, $`\mathrm{\Gamma }\gtrsim 10^{2.5}`$, the total X-ray fluence provides information only on a small section of the fireball, $`\theta \sim 10^{-2.5}\ll 0.1`$. Moreover, using the X-ray fluence to derive constraints on fireball parameters using afterglow models is complicated since on a $`\sim 10`$ s time scale the fireball is not in the self-similar expansion stage for which afterglow models apply, since GRB emission on this time scale is important and it is difficult to separate afterglow and main GRB contributions, and since the X-ray afterglow flux is not observed in the time interval of $`\sim 10^2`$ s to $`0.5`$ d, which implies that the total fluence depends strongly on the interpolation of the X-ray flux over the time interval in which it is not measured.
It is also important to emphasize that while the X-ray flux is independent of the ambient density into which the fireball expands and very weakly dependent on the magnetic field energy fraction \[see Eq. (4)\], the optical flux detected on a time scale of a day is sensitive to both parameters, since the cooling frequency is close to the optical band on this time scale \[see Eq. (1)\]. Moreover, the optical flux may be significantly affected by extinction in the host galaxy. Thus, while the observed X-ray flux provides mainly information on intrinsic fireball parameters, the optical flux depends strongly on the fireball environment.
Our results for $`\epsilon `$ are presented in Table 2 and in Fig. 1, where the $`\gamma `$-ray energy per unit solid angle, $`\epsilon _\gamma `$, is plotted as a function of fireball energy per unit solid angle, $`\epsilon `$. Several conclusions may be drawn based on the table and figure:
1. Fireball energies of observed GRBs are in the range of $`4\pi \epsilon =10^{51.5}`$ to $`10^{53.5}`$ erg.
2. $`\epsilon _\gamma `$ and $`\epsilon `$ are strongly correlated, with $`|\mathrm{log}_{10}(\epsilon _\gamma /\epsilon )|\lesssim 0.5`$. Thus, the most energetic bursts (i.e. those with the highest isotropic $`\gamma `$-ray energy) represent a much higher energy release from the sources.
3. Our results are inconsistent with models in which GRB $`\gamma `$-ray emission is produced by internal shocks with low efficiency (Panaitescu, Spada & Mészáros (1999); Kumar (1999); Kumar & Piran (1999)), $`\epsilon _\gamma /\epsilon <10^{-2}\ll 1`$ for electron energy equipartition $`\xi _e\approx 1/3`$ (and still lower efficiency for a lower electron energy fraction). However, we believe this contradiction should not be considered strong evidence against internal shock models, since the low efficiency obtained in the analyses reflects mainly the underlying assumptions of the models regarding the variability of the wind produced by the source. In particular, if the typical ratio of Lorentz factors of adjacent fireball wind shells is large, rather than being of order unity as commonly assumed, the internal shock efficiency may be close to unity.
4. The strong correlation between $`\epsilon _\gamma `$ and $`\epsilon `$, and in particular the fact that values $`\epsilon _\gamma /\epsilon \gg 1`$ are not obtained, implies that if fireballs are jets of finite opening angle $`\theta `$, then $`\theta `$ should satisfy $`\theta \gtrsim 0.1`$. This is due to the fact that for $`\theta \ll 0.1`$ significant sideways expansion of the jet would occur well before the X-ray afterglow observations, during which $`\mathrm{\Gamma }\sim 10`$, leading to $`\epsilon _\gamma /\epsilon \gg 1`$. Our conclusion is similar to that obtained by Piro et al. (1999) based on a different argument (Piro et al. rely on analysis of the relation between X-ray afterglow spectral and temporal characteristics).
5. The fact that $`\epsilon _\gamma /\epsilon `$ values larger than unity, $`\epsilon _\gamma /\epsilon \sim 3`$, are obtained for $`4\pi \epsilon _\gamma >10^{52}`$ erg may suggest that the electron energy fraction in these cases is somewhat below equipartition ($`3\xi _e=1/3`$ cannot be ruled out by our analysis), and/or that the fireball is a jet of opening angle $`\theta _j\sim 0.1`$, in which case some reduction of energy per solid angle at the X-ray observation time would be detectable. Note that GRB 990123 and GRB 980519, for which evidence for a jet-like fireball exists and which are included in our analysis, are indeed in the group for which $`\epsilon _\gamma /\epsilon \sim 3`$.
6. For low energy, $`4\pi \epsilon _\gamma <10^{52}`$ erg, bursts we find $`\epsilon _\gamma /\epsilon \lesssim 1/3`$. This suggests that the $`\gamma `$-ray production efficiency of such bursts is lower than that of higher energy bursts. We note that, based on published GRB light curves, there is evidence that the lower energy, $`4\pi \epsilon _\gamma <10^{52}`$ erg, bursts are characterized by shorter durations of $`\gamma `$-ray emission, compared to that of $`4\pi \epsilon _\gamma >10^{52}`$ erg bursts. A more detailed analysis of the light curves of all GRBs in our sample should be carried out to confirm or rule out such a correlation.
7. While “low efficiency,” $`\epsilon _\gamma /\epsilon \ll 1`$, bursts do not exist for fireball energy $`4\pi \epsilon >10^{52.5}`$ erg, $`\epsilon _\gamma /\epsilon \ll 1`$ bursts (as suggested by Panaitescu, Spada & Mészáros (1999); Kumar (1999); Kumar & Piran (1999)) may exist for lower fireball energy, $`4\pi \epsilon <10^{52.5}`$ erg, as they would not have been detected by BeppoSAX (as indicated in Fig. 1).
Our analysis allows us to determine the total fireball energy per unit solid angle averaged over an angle $`\theta \sim 0.1`$. A determination of the total energy emitted from the source requires knowledge of the fireball opening angle. If typical opening angles are $`\theta \sim 0.1`$, an angle suggested by the observed breaks in optical light curves and consistent with our results, the total energy emitted by the underlying GRB sources is in the range of $`10^{50}`$ erg to $`10^{51.5}`$ erg. The total energy may be two orders of magnitude higher, however, if emission from the source is close to spherical. X-ray observations on time scales $`\gg 1`$ d, during which $`\mathrm{\Gamma }<10`$, will provide information on fireball properties averaged over larger opening angles and will therefore provide stronger constraints on the total energy associated with GRB explosions.
We thank J. N. Bahcall, D. A. Frail, A. Gruzinov, A. Loeb and B. Paczyński for useful comments on a previous version of this manuscript. DLF acknowledges support from the Karyn Kupcinet fund during her stay at the Weizmann Institute. EW is partially supported by BSF Grant 9800343, AEC Grant 38/99 and MINERVA Grant. |
no-problem/9912/physics9912051.html | ar5iv | text | # Untitled Document
RUTHERFORD SCATTERING
WITH RETARDATION
Alexander A. Vlasov
High Energy and Quantum Theory
Department of Physics
Moscow State University
Moscow, 119899
Russia
Numerical solutions for the Sommerfeld model in the nonrelativistic case are presented for the scattering of a spinless extended charged body in the static Coulomb field of a fixed point charge. It is shown that the differential cross section for an extended body preserves the form of the Rutherford result, but with a multiplier that is not equal to one (as in the classical case) and depends on the size of the Sommerfeld particle. The effect of capture by an attractive center is also found for the Sommerfeld particle. The origin of this effect lies in radiation damping.
03.50.De
Here we continue our numerical investigation of the Sommerfeld model in classical electrodynamics. Let us recall that the Sommerfeld model of a charged rigid sphere is the simplest model that takes into consideration the ”back-reaction” of the self-electromagnetic field of a radiating extended charged body on its equation of motion (in the limit of zero body size one obtains the known Lorentz–Dirac equation with all its problems: renormalization of mass, preacceleration, run-away solutions, etc.).
In the previous article the effect of classical tunneling was considered: due to retardation a moving body begins ”to feel” the existence of a potential barrier too late, when the barrier is already overcome (, see also ).
Consequently one should expect that Rutherford scattering of a charged extended body in the static Coulomb field of a fixed point charge also differs from the classical scattering of a point-like particle (for the Lorentz–Dirac equation, Rutherford scattering was numerically investigated in ).
For simplicity, here we consider the nonrelativistic, linear in velocity, version of the Sommerfeld model.
Let the total charge of a uniformly charged sphere be $`Q`$, mechanical mass - $`m`$, radius - $`a`$. Then its equation of motion reads:
$$m\dot{\stackrel{}{v}}=\stackrel{}{F}_{ext}+\eta \left[\stackrel{}{v}(t-2a/c)-\stackrel{}{v}(t)\right]$$
$`(1)`$
here $`\eta =\frac{Q^2}{3ca^2}`$, $`\stackrel{}{v}=d\stackrel{}{R}/dt`$, and $`\stackrel{}{R}`$ is the coordinate of the center of the shell.
External force $`\stackrel{}{F}_{ext}`$, produced by fixed point charge $`e`$ (placed at $`\stackrel{}{r}=0`$), is
$$\stackrel{}{F}_{ext}=𝑑\stackrel{}{r}\rho \frac{e\stackrel{}{r}}{r^3}$$
and for
$$\rho =Q\delta (|\stackrel{}{r}-\stackrel{}{R}|-a)/4\pi a^2$$
reads
$$\stackrel{}{F}_{ext}=\frac{eQ\stackrel{}{R}}{R^3},R>a$$
$`(2)`$
In dimensionless variables $`\stackrel{}{R}=\stackrel{}{\mathrm{\Pi }}\cdot 2L,ct=x\cdot 2L`$ equations (1)–(2) take the form
$$\ddot{\stackrel{}{\mathrm{\Pi }}}=K\left[\dot{\stackrel{}{\mathrm{\Pi }}}(x-\delta )-\dot{\stackrel{}{\mathrm{\Pi }}}(x)\right]+\lambda \stackrel{}{\mathrm{\Pi }}|\stackrel{}{\mathrm{\Pi }}|^{-3}$$
$`(3)`$
with
$$K=\frac{2Q^2L}{3mc^2a^2},\lambda =\frac{eQ}{2mc^2L},\delta =a/L$$
or
$$K=\frac{2r_{cl}L}{3a^2},\lambda =\frac{er_{cl}}{Q2L},r_{cl}=\frac{Q^2}{mc^2}$$
Taking the $`XY`$ plane to be the plane of scattering ($`\stackrel{}{\mathrm{\Pi }}=(X,Y,0)`$ ), we split equation (3) into two:
$$\ddot{Y}=K\left[\dot{Y}(x-\delta )-\dot{Y}(x)\right]+\lambda Y(X^2+Y^2)^{-3/2}$$
$$\ddot{X}=K\left[\dot{X}(x-\delta )-\dot{X}(x)\right]+\lambda X(X^2+Y^2)^{-3/2}$$
$`(4)`$
The starting conditions at $`x=0`$ are:
$$X_i=1000,Y_i=b(b\text{ is the impact parameter}),\dot{X}_i=v_i=0.1,\dot{Y}_i=0$$
$`(5)`$
Numerical results are presented in Figs. 1, 2 and 3.
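For reference, a delay-differential system such as Eq. (4) can be integrated with a fixed-step scheme that keeps a buffer of past velocities for the retarded term. The sketch below is our own minimal illustration, not the author's code: it assumes the body moved with constant velocity before $`x=0`$, uses a semi-implicit Euler step, and starts closer to the center than in the paper to keep the run short. In the classical limit $`K=0`$ it reproduces a Rutherford-like deflection.

```python
import math

def integrate(K, lam, delta, b, v_i=0.1, x_i=-200.0, h=0.05, steps=120000):
    """Integrate Eq. (4): Pi'' = K[Pi'(x - delta) - Pi'(x)] + lam*Pi/|Pi|^3.
    The retarded velocity Pi'(x - delta) is read from a stored history;
    before the start the body is assumed to move with its initial velocity."""
    nlag = int(round(delta / h))          # delay expressed in time steps
    X, Y, vx, vy = x_i, b, v_i, 0.0
    hist = [(vx, vy)] * (nlag + 1)        # constant pre-history
    for _ in range(steps):
        vx_lag, vy_lag = hist[-nlag - 1]
        r3 = (X * X + Y * Y) ** 1.5
        ax = K * (vx_lag - vx) + lam * X / r3
        ay = K * (vy_lag - vy) + lam * Y / r3
        vx += h * ax
        vy += h * ay
        X += h * vx                       # semi-implicit (symplectic) update
        Y += h * vy
        hist.append((vx, vy))
        hist.pop(0)
    return X, Y, vx, vy

# Classical limit K = 0, lam = 0.1, b = 60: the final deflection angle
# should lie close to Rutherford's theta = 2*atan(lam/(b*v_i**2)).
X, Y, vx, vy = integrate(K=0.0, lam=0.1, delta=1.0, b=60.0)
print(math.atan2(vy, vx), 2 * math.atan(0.1 / (60.0 * 0.1 ** 2)))
```

The step size `h` should divide `delta` reasonably well so that the lag index is accurate; runs with $`K>0`$ then include the retarded self-force through the same history buffer.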
1.
In Fig. 1 one can see how the scattering angle changes from a point-like particle (classical scattering, curve 1) to an extended body (curve 2). Here we have chosen
$$L=5r_{cl},b=60.0,\delta =4.0,\lambda =0.1,K=(2/15)\delta ^{-2}$$
i.e.
$$a=20r_{cl},e=Q,K=(10/3)(r_{cl}/a)^2=1/120$$
vertical axis is $`Y`$, horizontal - $`X`$.
Thus due to retardation the scattering angle $`\theta `$ for an extended body is smaller than that for a point-like particle.
2.
Differential cross section $`d\sigma `$ is given by the formula
$$d\sigma =2\pi \rho (\theta )|\frac{d\rho (\theta )}{d\theta }|d\theta $$
where $`\rho =b\cdot 2L`$, or
$$\frac{1}{2\pi (2L)^2}\frac{d\sigma }{d\xi }=\frac{db^2}{d\xi }$$
$`(6)`$
where
$$\xi =\frac{1+\mathrm{cos}\theta }{1-\mathrm{cos}\theta }$$
The classical Rutherford result is that the R.H.S. of eq. (6) is constant:
$$b^2(v_i)^4(\lambda )^{-2}=\xi $$
$`(7)`$
or
$$\frac{(v_i)^4}{2\pi (2L)^2(\lambda )^2}\frac{d\sigma }{d\xi }=1$$
$`(8)`$
This classical result can be derived from eq. (4) in the standard manner for $`K=0`$ (see, for example, )
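The equivalence is easy to verify directly: with the Rutherford relation $`\mathrm{tan}(\theta /2)=\lambda /(bv_i^2)`$ and the half-angle identity $`(1+\mathrm{cos}\theta )/(1-\mathrm{cos}\theta )=\mathrm{cot}^2(\theta /2)`$, Eq. (7) is an algebraic identity. A quick numerical check (our own, not from the paper):

```python
import math

# Rutherford: tan(theta/2) = lam/(b*v^2)  =>  b^2 v^4 / lam^2 = cot^2(theta/2),
# and xi = (1+cos theta)/(1-cos theta) = cot^2(theta/2), i.e. Eq. (7).
lam, v = 0.1, 0.1
for theta in (0.3, 0.8, 1.5, 2.5):
    b = lam / (v ** 2 * math.tan(theta / 2))
    xi = (1 + math.cos(theta)) / (1 - math.cos(theta))
    assert abs(b ** 2 * v ** 4 / lam ** 2 - xi) < 1e-9 * xi
print("Eq. (7) reproduced for all test angles")
```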
In the case of extended body
$$L=5r_{cl},\lambda =0.1,K=(2/15)\delta ^{-2}$$
i.e.
$$e=Q,K=(10/3)(r_{cl}/a)^2$$
numerical calculations for various values of $`b`$, $`10.0<b<110.0`$, show that the Rutherford formula (7,8) changes in the following way:
$$b^2(v_i)^4(\lambda )^{-2}=\xi \left[1+const/\delta \right]^{-1}$$
$`(9)`$
or
$$\frac{(v_i)^4}{2\pi (2L)^2(\lambda )^2}\frac{d\sigma }{d\xi }=\left[1+const/\delta \right]^{-1}$$
$`(10)`$
where the multiplier $`const`$ is approximately equal to $`0.30`$.
Thus the differential cross section for an extended body preserves the form of the Rutherford result with a multiplier, not equal to one (as in the classical case), but depending on the size of the Sommerfeld particle. For $`\delta \to \mathrm{}`$ (i.e. $`K\to 0`$) formula (9,10) gives the Rutherford result.
In Fig. 2 we see how the direct proportionality between $`b^2(v_i)^4(\lambda )^{-2}`$ and $`\xi `$ changes in accordance with formula (9). The vertical axis is $`b^2(v_i)^4(\lambda )^{-2}`$ and the horizontal axis is $`\xi `$. Values of the retardation $`\delta `$ (or dimensionless body size) are taken to be $`1,2,3,4`$, and the curves are marked accordingly as $`1,2,3,4`$; in the case of Rutherford scattering ($`K\to 0`$) the curve is marked as ”R”.
3.
In Fig. 3 we see the appearance of the effect of capture of the Sommerfeld particle with charge $`Q`$ by the attractive Coulomb center with charge $`e`$. Here we have chosen the following values of the parameters:
$$L=r_{cl}/2,\lambda =-1.0,K=(4/3)\delta ^{-2},\delta =5.0,b=30.0$$
i.e.
$$e=-Q,a=2.5r_{cl},K=4/75.$$
Initial conditions are the same as in (5).
According to classical Rutherford scattering (eq. (4) with $`K=0`$), for the initial conditions (5) the trajectory must be infinite and thus there is no capture; but this is not the case for the Sommerfeld particle: due to radiation damping the particle loses its energy and consequently can fall down onto the attractive center.
Varying the impact parameter $`b`$ with fixed $`\lambda =-1.0`$ and $`\delta =5.0`$ we numerically found the critical value of $`b`$ at which the effect of capture begins:
$$b<b_{cr}\approx 31.40$$
REFERENCES
1. Alexander A.Vlasov, physics/9905050.
2. A.Sommerfeld, Gottingen Nachrichten, 29 (1904), 363 (1904), 201 (1905).
L.Page, Phys.Rev., 11, 377 (1918)
T.Erber, Fortschr. Phys., 9, 343 (1961)
P.Pearle in ”Electromagnetism”,ed. D.Tepliz, (Plenum, N.Y., 1982), p.211.
A.Yaghjian, ”Relativistic Dynamics of a Charged Sphere”. Lecture Notes in Physics, 11 (Springer-Verlag, Berlin, 1992).
F.Rohrlich, Am.J.Phys., 65(11), 1051(1997).
3. Alexander A.Vlasov, physics/9905050.
F.Denef et al, Phys.Rev. E56, 3624 (1997); hep-th/9602066.
Alexander A.Vlasov, Theoretical and Mathematical Physics, 109, n.3, 1608(1996).
4. J.Huschielt and W.E.Baylis, Phys.Rev. D17, N 4, 985 (1978).
5. L.D. Landau, E.M.Lifshitz, Mechanics, 4-th edition, Moscow, Nauka, 1988.
Fig. 1. Scattering trajectories in the $`XY`$ plane (curve 1: point-like particle; curve 2: extended body).
Fig. 2. $`b^2(v_i)^4(\lambda )^{-2}`$ versus $`\xi `$ for $`\delta =1,2,3,4`$ and for Rutherford scattering (curve R).
Fig. 3. Capture of the Sommerfeld particle by the attractive Coulomb center.
no-problem/9912/astro-ph9912448.html | ar5iv | text | # Extended Far–Infrared CO Emission in the Orion OMC–1 Core1
## 1 Introduction
The Orion molecular cloud is the prime example of high–mass star forming regions and due to its proximity one of the most extensively studied sources. The numerous observations performed in the millimeter and submillimeter range (see Blake et al. 1987 and Genzel & Stutzki 1989 for a review) have shown that the emission arises from three main components. The bulk of the molecular gas in the extended ridge (typical densities $`10^4\text{cm}\text{-3}`$) is heated up to 50–60 K by the UV field of the Trapezium stars. The molecular emission near the high–velocity outflow, the plateau, is distributed in a highly anisotropic region of $`40\mathrm{}`$ in size with a mean density $`10^6\text{cm}\text{-3}`$. The plateau consists of gas heated to temperatures of 80–150 K distributed in a low–velocity component centered on IRc2 and in a bipolar high–velocity component orthogonal to the latter. Finally, the hot core ($`T=400500`$ K) of IRc2 is a compact source (10″) with densities up to $`10^7\text{cm}\text{-3}`$. Its high–excitation molecular emission probably arises from material shocked by the interaction of the high–velocity outflow with the ambient gas.
The OMC–1 core has been widely studied in various molecular tracers. Among these, CO is of great interest because observations of different transitions delimit regions with marked contrasts in their physical characteristics. Most are low–$`J`$ ($`J_{up}\le 4`$) observations that trace the extended low/moderate density gas and the bulk of the outflow gas. Higher–$`J`$ transitions ($`J_{up}\ge 6`$) are sensitive to the denser and higher–temperature gas of the plateau or the hot core, but are much less sensitive to the emission from the ridge. Multiple CO line observations towards IRc2 carried out with the KAO in the submillimeter/far–infrared range (see e.g. Boreiko et al. 1989) suggest evidence for a hot gas component at 600–750 K in the high–velocity and (possibly) low–velocity plateau, based on the few flux measurements at $`J_{up}>22`$. However, the large uncertainties in the atmospheric transmission yield considerable variations in the parameters of this hot gas component. The ISO satellite offered us the possibility of assessing in a more systematic and comprehensive way the temperature distribution in Orion. We present in this Letter the first large–scale survey of the far–infrared CO lines in the Orion OMC–1 core.
## 2 Observations and Results
Five LWS–grating rasters were performed in the LWS01 mode (range 43–197 $`\mu `$m) with a spectral resolution of 0.3–0.6 $`\mu `$m ($`R=\lambda /\mathrm{\Delta }\lambda \approx 150`$–300). A total of 23 positions (6 scans per position) with an angular separation of 90$`\mathrm{}`$ ($`\approx `$ beam size) were observed around IRc2. The central position was included in each raster in order to check the relative calibration of the observations. The uncertainties in the absolute calibration are estimated to be $`\approx 30\%`$. After pre–reduction (pipeline 7) the data were reprocessed in order to remove non–linear effects (Leeks et al. 1999). The size of the LWS beam is actually close to 70$`\mathrm{}`$ (E. Caux, priv. comm.).
The spectra observed at each position (36 scans for the central one) were averaged together in order to minimize the noise level. The statistical errors of the measured line fluxes are negligible. The transitions at $`\lambda <90`$$`\mu `$m are affected by additional uncertainties in the determination of the continuum level (hence the baseline subtraction), as well as by contamination from neighboring lines. The confusion level is $`3\times 10^{-18}\mathrm{W}\text{cm}\text{-2}`$ in the averaged spectra. However, we checked that all the features identified in the averaged spectra were present in all of the original scans. Figure 1 shows the LWS spectra in the range 135–190 $`\mu `$m for the positions where the CO lines could be unambiguously identified. Besides the CII (158 $`\mu `$m) and OI (63 $`\mu `$m and 146 $`\mu `$m) fine–structure atomic lines, the CO emission, together with that of H<sub>2</sub>O, dominates the OMC–1 far–infrared spectrum. Among the molecular species detected in Orion at infrared wavelengths, CO appears to be the most widespread (a full description of the IRc2 spectrum can be found in Cernicharo et al. 1999a, 1999b, hereafter C99a, C99b). At all of the positions, we observe the pure rotational transitions from $`J_{up}`$=14 to $`J_{up}`$=19, but only the lines up to $`J_{up}`$=17 have a high enough SNR to allow confident detection outside IRc2. At large angular distances from the central region ($`>180\mathrm{}`$), the CO lines become very weak and the spectra are dominated by the atomic fine–structure lines. The CO integrated fluxes at fifteen selected positions are given in Table 1. Some of the lines are contaminated by nearby features (OI at 145.6 $`\mu `$m, OH at 163.2 $`\mu `$m and H<sub>2</sub>O at different wavelengths), whose contributions have been estimated from complementary Fabry–Perot observations (C99a) or from the grating data where the line separation was about $`0.3\mu `$m.
Towards the central region, the consecutive pure rotational lines from $`J_{up}`$=14 up to $`J_{up}`$=43 could be identified. Due to the presence of numerous unresolved components, the calibration of the high–$`J`$ transitions ($`J_{up}\ge 34`$) is somewhat uncertain. When observing the H<sub>2</sub>O lines towards IRc2 at high spectral resolution with the LWS in the Fabry–Perot mode ($`R\approx 7000`$), several CO transitions, between $`J_{up}`$=14 and $`J_{up}`$=33, were also detected in some detector bands (Fig. 2; for details on the observations, see C99a). The Fabry–Perot (hereafter FP) fluxes agree to better than 30% with the grating data, except for the $`J_{up}`$=16 and the $`J_{up}=`$28 lines where the differences reach 50%. In view of the possible uncertainties in the actual pipeline, we have used the grating fluxes for these two lines. Quite remarkably, the line intensities are nearly constant from $`J_{up}`$=18 up to $`J_{up}`$=21, indicating that the lines are optically thick and that the emitting regions are similar in size. The line intensity decreases by only a factor of three for the following transitions up to $`J_{up}`$=28. All of the lines have typical widths of $`\mathrm{\Delta }v\approx 60\text{km\hspace{0.17em}s}\text{-1}`$ (HPFW), i.e. they are partially resolved by the FP.
## 3 Discussion
### 3.1 The Ridge
In order to determine the temperature of the gas traced by the CO lines detected in the OMC–1 core, we have modeled the emission of a gas layer with a column density of $`N`$(CO)$`=10^{19}\text{cm}\text{-2}`$ for various densities, $`n(\mathrm{H}_2)`$, and temperatures. The CO fluxes were computed using a simple Large–Velocity Gradient approach. For the LWS wavelength range the dust opacity is still low enough that the coupling between dust and gas can be neglected. The adopted linewidth in the model is $`\mathrm{\Delta }v=10`$ km s<sup>-1</sup> for positions close to the center ($`\pm 90^{\prime \prime }`$, $`\pm 90^{\prime \prime }`$) and $`\mathrm{\Delta }v=5`$ km s<sup>-1</sup> for the points more distant from IRc2.
We display in Fig. 3 b–d the expected fluxes for the transitions $`J_{up}\le 25`$ with $`n(\mathrm{H}_2)`$ densities in the range $`4`$–$`40\times 10^4\text{cm}\text{-3}`$ for specific temperatures. The temperatures shown are those which give the best match to the fluxes measured at three typical positions in the core: halfway between the bar and the main core (90″, 90″); south of IRc2 towards the S6 protostellar source (0″, 90″); the extended ridge (0″, -180″). For densities in the range $`4`$–$`40\times 10^4\text{cm}\text{-3}`$, typical of those measured in the OMC–1, temperatures larger than 80 K are required to account for all of the observed fluxes. Lower temperatures of $`60\mathrm{K}`$ fail to reproduce the observations by a factor of two or more (see Fig. 3). Around the central position, ($`\pm `$90″, $`\pm `$90″), we find evidence for warm gas with $`T\approx 120`$ K and densities of the order of $`10^5\text{cm}\text{-3}`$. The column density adopted in our model appears as an upper limit since it was observed towards the IRc2 core (Blake et al. 1987). If we adopt a lower column density of $`5\times 10^{18}\text{cm}\text{-2}`$ for positions remote from IRc2, in closer agreement with CO observations at millimeter wavelengths (Bally et al. 1987), reasonable fits are obtained by increasing the kinetic temperature by 10–20%, or conversely increasing the density by a factor of 2–3. This does not affect our conclusions about the extended warm component around the hot core. From Fig. 3, it appears that low–$`J`$ observations provide few constraints on the temperature determinations in our modeling. The kinetic temperatures derived from KAO observations of the $`J_{up}=7`$ line (90–100 K) at 100″ resolution by Schmid–Burgk et al. (1989) agree reasonably well for the S6 region and close to IRc2.
On the western side of the ridge, our modeling requires temperatures of 100 K or more, and/or high column densities and volume densities to account for the observed fluxes (see Table 1). The KAO data suggest moderate temperatures around $`50`$ K, although it is difficult to compare both sets of data due to the larger beam and coarse sampling of the former. We note that some contamination in our data from the IRc2 region within a possible LWS error beam cannot be excluded and could reduce the derived column densities and/or kinetic temperatures. Also the LWS beam profile is known to be rotationally asymmetric for an extended source and could account for some of the discrepancies.
### 3.2 The Orion–KL/IRc2 Region
The emission in the far–infrared CO lines from $`J_{up}`$=18 to 33 can be reasonably explained by a simple two–temperature model of the plateau region. First we note that a single temperature model cannot reproduce both the flux of the high ($`J_{up}>28`$) and low–$`J`$ lines. On the other hand, the range of temperatures reported in the extended ridge implies that its contribution becomes negligible with respect to the plateau (see also Fig. 3 and below) for transitions above $`J_{up}=18`$. The physical properties of the gas in the different regions (in particular the plateau) were chosen as close as possible to those of Blake et al. (1987). The low–velocity plateau is modeled as a two–shell region expanding at 25 km s<sup>-1</sup> with a micro–turbulent velocity of 10 km s<sup>-1</sup>. The line fluxes were calculated with the radiative transfer code developed by Gonzalez–Alfonso & Cernicharo (1997) and were convolved with the expected instrumental profile (see Fig. 2) which accounts for the FP instrumental response (E. Caux, priv. comm.). We have adopted standard properties for the dust (silicate) grains: a typical radius of 0.1 $`\mu `$m and a standard gas–to–dust mass ratio of 100. The inner region gives rise to the entire emission of the $`J_{up}=33`$ and $`J_{up}=28`$ lines, to the bulk of the emission of the $`J_{up}=24`$ line and to a significant contribution of the lower–$`J`$ lines. The colder gas of the plateau also contributes to the emission of the low–$`J`$ lines. Based on our study of the H<sub>2</sub>O emission in Orion by Cernicharo et al. (C99a), we assume that the density is high enough to almost thermalize the CO lines: $`10^7\text{cm}\text{-3}`$ for the inner region and $`10^6\text{cm}\text{-3}`$ for the external one, close to the estimates derived by Blake et al. (1987). We use the standard CO abundance of $`1.2\times 10^{-4}`$.
The ratio of the $`J_{up}=33`$ to the $`J_{up}=28`$ lines sets an upper limit of $`\sim 500`$ K to the temperature of the inner region: a higher value would imply a lower contrast between the two lines for any CO column density. For a lower temperature ($`\sim 350`$ K), the high column density needed to account for the $`J_{up}=33`$ line would then overestimate the <sup>13</sup>CO lines, whereas they are hardly detected (we adopt an upper limit of 3000 Jy). We have adopted a temperature $`T=400`$ K for the inner plateau region. The resulting column density is $`N`$(CO)$`=10^{19}\text{cm}\text{-2}`$; the outer and inner radii are $`3\times 10^{16}`$ cm (4.5″) and $`2.2\times 10^{16}`$ cm respectively. For the external shell (outer radius $`=7\times 10^{16}`$ cm), the temperature is $`T=300`$ K and the column density $`N`$(CO)$`=3.5\times 10^{18}\text{cm}\text{-2}`$. The assumed column densities agree with the values quoted in the literature (Blake et al. 1987).
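As a rough consistency check (not in the original text), the quoted linear and angular radii of the inner shell can be related through the small-angle formula. The distance to Orion adopted in the sketch below, about 450 pc, is an assumption on our part:

```python
# Consistency check: the quoted outer radius of the inner plateau region,
# 3e16 cm, should correspond to the quoted angular radius of ~4.5".
# The distance to Orion (~450 pc) is an assumed value.
PC_CM = 3.0857e18          # centimetres per parsec
RAD_TO_ARCSEC = 206264.8   # arcseconds per radian

def angular_radius_arcsec(r_cm, d_pc=450.0):
    """Small-angle size of a radius r_cm seen from a distance of d_pc parsecs."""
    return r_cm / (d_pc * PC_CM) * RAD_TO_ARCSEC

print(f"{angular_radius_arcsec(3e16):.2f} arcsec")
```

With these assumptions the 3×10<sup>16</sup> cm radius indeed subtends close to 4.5″, matching the value quoted above.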
The agreement with the line profiles is very satisfying, taking into account the simple hypotheses used. The apparent high–velocity emission is fully reproduced by convolving with the broad–wing instrumental response (Fig. 2). Down to the sensitivity of these observations, we do not detect in this wavelength range any emission from the very high–velocity gas around IRc2. Comparison with all the lines observed by the LWS (either in FP or grating mode) proves to be satisfying up to the $`J_{up}=33`$ transition. We have added to our modeling the contribution of the extended ridge, as a layer at $`T=80`$ K (see Sect. 3.3) with a column density of $`N`$(CO)$`=4\times 10^{19}\text{cm}\text{-2}`$. The resulting fit and the contributions of the different regions are shown in Fig. 3. As a whole, the emission observed towards the IRc2 core can be satisfactorily accounted for by two regions of the plateau gas at different temperatures: $`\sim `$300 K and $`\sim `$400 K, and from the ridge contributing significantly up to the $`J_{up}=16`$ line. Note that since the central part of the line profiles is only partially resolved by the FP, we cannot exclude that a part of the warm component modeled actually arises from the hot core (where $`\mathrm{\Delta }v\approx 5\text{km\hspace{0.17em}s}\text{-1}`$). Therefore, the emission from the $`T=400`$ K component should be considered as the combined emission of the plateau and the hot core. The contribution of the ridge seems to be underestimated in our model. However, as for the adjacent positions, the contribution from an error beam could affect the observed fluxes.
### 3.3 The High–$`J`$ CO lines
The higher–$`J`$ transitions ($`J_{up}>34`$) observed with the grating mode are largely contaminated by other emission lines. Using our FP data we have estimated the contribution of the strongest adjacent lines (OH, H<sub>2</sub>O and NH<sub>3</sub>) to the CO fluxes. In view of the high density of spectral lines for $`\lambda <90`$$`\mu `$m, additional contamination by other weaker lines cannot be discarded (C99a,b). We note, however, that the lines are well above the confusion level and that most of the observed flux must arise from CO itself.
After correction, all the CO lines observed either in FP or grating mode lie above the model of the plateau and the ridge (Fig. 3a), showing evidence for a hotter gas component. In the absence of other information on the spatial location and the kinematics of this gas, we speculate that such emission arises from the shocked regions of interaction of the Orion outflows with the ambient medium, detected in the H<sub>2</sub> 2.12 $`\mu `$m lines. All sources of strong H<sub>2</sub> emission (BN, KL, PK1, PK2; Beckwith et al. 1983) are located within our 70″ beam. We modeled the high–$`J`$ CO emission using a Large–Velocity Gradient (LVG) approach with parameters representative of the H<sub>2</sub> Peak 1 region: a size of 10″, a density $`n(\text{H}\text{2})\approx 3\times 10^7\text{cm}\text{-3}`$ for the shocked emitting region, and a linewidth $`\mathrm{\Delta }v=30\text{km\hspace{0.17em}s}\text{-1}`$. The fluxes are roughly accounted for by a gas layer at $`T\approx 1500`$–2000 K and a column density $`N`$(CO)$`\approx 6\times 10^{17}\text{cm}\text{-2}`$, implying a shell thickness of $`1.6\times 10^{14}`$ cm. These values are merely indicative due to the lack of spectral and spatial information, but are consistent with those predicted by shock models (e.g., Flower & Pineau des Forêts, 1999).
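The quoted shell thickness follows directly from dividing the CO column density by the CO volume density, i.e. $`L=N(\mathrm{CO})/[n(\mathrm{H}_2)x(\mathrm{CO})]`$. A minimal sketch with the numbers given above:

```python
# Shell thickness implied by the LVG parameters quoted in the text:
#   L = N(CO) / (n(H2) * x(CO))
N_CO = 6e17      # cm^-2, CO column density of the hot component
n_H2 = 3e7       # cm^-3, density of the shocked emitting region
x_CO = 1.2e-4    # CO abundance relative to H2 (standard value, Sect. 3.2)

thickness_cm = N_CO / (n_H2 * x_CO)
print(f"shell thickness ~ {thickness_cm:.2g} cm")
```

This reproduces, to rounding, the ~1.6×10<sup>14</sup> cm quoted above.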
Acknowledgements.
This work has been partially supported by the Spanish DGES under grant PB96–0883 and by PNIE grant ESP97–1618–E. SJL acknowledges receipt of a PPARC award. We thank Drs S. Pérez–Martínez and J.R Goicoechea for their help in the data reduction and Dr. E. Caux for providing us with the Fabry–Perot instrumental profile.
## I Introduction
Recently an increasing interest in the investigation of relativistic ionization phenomena has been observed. Relativistic effects will appear if the electron velocity in the initial bound state or in the final state is comparable with the speed of light. The initial state should be considered relativistic in the case of inner shells of heavy atoms. In a recent paper the photoionization of an atom from a shell with relativistic velocities has been considered for the case of elliptically polarized laser light. In the present paper we will study the effect of a relativistic final state of the electron on the ionization of an atom by elliptically polarized light. The initial state will be considered as nonrelativistic. The final-state electron will have an energy in the laser field measured by the ponderomotive energy. If the ponderomotive energy approaches the electron rest energy, then a relativistic treatment of the ionization process is required. For an infrared laser the necessary intensities are of the order of $`10^{16}\mathrm{W}\mathrm{cm}^{-2}`$.
Ionization phenomena influenced by relativistic final state effects have been studied for the cases of linearly and circularly polarized laser radiation both in the tunnel and above threshold regimes . The ionization rate for relativistic electrons has been found to be very small for the case of linear polarization . On the contrary a circularly polarized intense laser field produces mainly relativistic electrons .
In the papers of Reiss and of Crawford and Reiss a covariant version of the so-called strong field approximation has been given for the cases of linear and circular polarization. Within this approximation one calculates the transition amplitude between the initial state taken as the solution for the Dirac equation for the hydrogen atom and the final state described by the relativistic Volkov solution. Coulomb corrections are neglected in the final Volkov state. Analytical results for the ionization rate have been given in Refs. These results apply to above barrier cases as well as to tunneling cases. However, the corresponding expressions are complicated and numerical calculations are needed to present the final results.
The present paper aims to investigate the relativistic electron energy spectra in the ionization of atoms by intense elliptically polarized laser light. In contrast to the more sophisticated strong field approximation we would like to obtain simple analytical expressions from which the dependence of the ionization process on the parameters, such as the binding energy of the atom and the field strength, frequency and ellipticity of the laser radiation, may be understood without the need for numerical calculations. Therefore we restrict the considerations to the case of tunnel ionization. Our results will be applicable only for laser field strengths smaller than the inner atomic field, $`F\ll F_a`$. In order to observe relativistic effects, the inequality $`ϵ=F/\omega c>0.1`$ should be fulfilled. (The atomic system of units is used throughout the paper, $`m=e=\mathrm{\hbar }=1`$.) Both inequalities impose an upper limit on the laser frequency $`\omega `$. For the ionization of multi-charged ions an infrared laser satisfies this condition.
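The quoted intensity scale can be illustrated numerically. The sketch below evaluates $`ϵ=F/\omega c`$ in atomic units as a function of laser intensity and wavelength; the intensity-to-field conversion constant and the other unit conversions are standard values assumed here, not given in the text:

```python
import math

# Relativistic parameter eps = F/(omega*c) in atomic units, from laser
# intensity (W/cm^2) and wavelength (um).  Assumed conversion constants:
C_AU = 137.036          # speed of light in atomic units
I_AU = 3.509e16         # W/cm^2 corresponding to a field of F = 1 a.u.
HARTREE_EV = 27.2114    # 1 hartree in eV
HC_EV_NM = 1239.842     # photon energy (eV) = HC_EV_NM / wavelength(nm)

def eps(intensity_w_cm2, wavelength_um):
    field = math.sqrt(intensity_w_cm2 / I_AU)                 # a.u.
    omega = (HC_EV_NM / (wavelength_um * 1e3)) / HARTREE_EV   # a.u.
    return field / (omega * C_AU)

# For an infrared laser (1.054 um), eps reaches ~0.1 near 1e16 W/cm^2,
# consistent with the intensity scale quoted above:
print(f"eps(1e16 W/cm^2) = {eps(1e16, 1.054):.3f}")
```

Since $`ϵ`$ scales as the square root of the intensity, intensities well above $`10^{16}\mathrm{W}\mathrm{cm}^{-2}`$ push the final state deep into the relativistic regime.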
The non-relativistic sub-barrier ionization with elliptically polarized light was studied in . In the tunnel limit the simple expression
$$W^{nonrel}\propto \mathrm{exp}\left\{-\frac{4}{3}\frac{\gamma }{\omega }E_b\left[1-\frac{\gamma ^2}{10}\left(1-\frac{g^2}{3}\right)\right]\right\}\mathrm{exp}\left\{-\frac{\gamma }{\omega }\left[\left(p_z-g\frac{F}{\omega }\right)^2+p_x^2\right]\right\}$$
(1)
has been derived for the electron momentum spectrum within exponential accuracy. Here $`p_x`$ and $`p_z`$ are the projections of the drift momentum on the direction of the wave propagation and along the smaller axis of the polarization ellipse, respectively; $`E_b`$ is the ionization energy of the atomic state; $`F`$, $`\omega `$ and $`g`$ are the field amplitude, frequency and ellipticity of the laser radiation, respectively; and $`\gamma =\omega \sqrt{2E_b}/F\ll 1`$ is the Keldysh adiabatic parameter.
From Eq. (1) one concludes that the electrons are mainly ejected in the polarization plane along the smaller axis of polarization; the most probable momentum at the time of ejection has the components: $`p_x=p_y=0`$ and $`p_z=gF/\omega `$. (For the sake of simplicity of the notations we neglect throughout the paper the second symmetric maximum for the component $`p_z`$, $`p_z=-gF/\omega `$.)
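For the Ne$`{}^{8+}`$ example discussed in Sect. III (binding energy 239 eV, wavelength 1.054 $`\mu `$m, field strength $`2.5\times 10^{10}`$ V/cm), the Keldysh parameter can be evaluated directly, confirming that those parameters lie deep in the tunnel regime. This is a sketch in atomic units; the unit-conversion constants are assumed values:

```python
import math

# Keldysh parameter gamma = omega * sqrt(2*E_b) / F in atomic units,
# for the Ne^8+ example of Sect. III.
HARTREE_EV = 27.2114       # 1 hartree in eV
F_AU_V_CM = 5.14221e9      # 1 a.u. of field strength in V/cm
HC_EV_NM = 1239.842        # photon energy (eV) = HC_EV_NM / wavelength(nm)

E_b = 239.0 / HARTREE_EV                    # binding energy, a.u.
omega = (HC_EV_NM / 1054.0) / HARTREE_EV    # laser frequency, a.u.
F = 2.5e10 / F_AU_V_CM                      # field strength, a.u.

gamma = omega * math.sqrt(2.0 * E_b) / F
print(f"gamma = {gamma:.3f}")   # well inside the tunnel regime, gamma << 1
```

With these numbers $`\gamma `$ comes out of order a few times $`10^{-2}`$, so the tunnel-limit expansions used below are well justified.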
## II Relativistic semiclassical approach
We shall now generalize the non-relativistic result Eq. (1) to the case of relativistic final state effects, when $`gF/\omega `$ becomes comparable with the velocity of light. Our derivation starts with the relativistic version of the Landau-Dykhne formula . The ionization probability in quasiclassical approximation and with exponential accuracy reads
$$W\propto \mathrm{exp}\left\{-2\mathrm{Im}\left(S_f(0;t_0)+S_i(t_0)\right)\right\},$$
(2)
where $`S_i=-E_0t_0`$ is the initial part of the action, $`S_f`$ is the final-state action. In the latter we will neglect the influence of the atomic core. Then the final-state action may be found as a solution of the Hamilton-Jacobi equation and reads
$$S_f(0;\xi _0)=c\left\{r\xi _0+\frac{ϵc}{q\omega }\left[p_y\mathrm{cos}\omega \xi _0-p_zg\mathrm{sin}\omega \xi _0\right]+\frac{ϵ^2c^2}{4q}\left[(1+g^2)\xi _0+\frac{g^2-1}{2\omega }\mathrm{sin}2\omega \xi _0\right]\right\}.$$
(3)
Here the vector potential of the laser radiation has been chosen in the form
$$A_x=0,A_y=\frac{cF}{\omega }\mathrm{sin}\omega \xi ,A_z=g\frac{cF}{\omega }\mathrm{cos}\omega \xi ,$$
(4)
where $`\xi =t-x/c`$, $`\xi _0`$ is the initial value. Further the notations
$$r=\sqrt{c^2+p^2},q=r-p_x$$
(5)
have been introduced; $`p_x`$, $`p_y`$ and $`p_z`$ are the components of the final electron momentum along the beam propagation, along the major and along the small axis of the polarization ellipse, respectively; $`p^2=p_x^2+p_y^2+p_z^2`$.
The complex initial time $`t_0`$ has to be determined from the classical turning point in the complex half-plane :
$$E_f(t_0)=c\left\{r+\frac{ϵc}{q}\left[p_y\mathrm{sin}\omega t_0-gp_z\mathrm{cos}\omega t_0\right]+\frac{ϵ^2c^2}{2q}\left[\frac{g^2+1}{2}+\frac{g^2-1}{2}\mathrm{cos}2\omega t_0\right]\right\}=E_0=c^2-E_b.$$
(6)
Eq. (2) together with Eqs. (3) and (6) is the most general expression for the relativistic rate of sub-barrier ionization by elliptically polarized laser light. We consider now the limit of a nonrelativistic initial state, i.e. $`E_b\ll c^2`$. Furthermore the considerations will be restricted to the tunnel regime $`\lambda =i\omega t_0\ll 1`$, or, equivalently, the Keldysh adiabatic parameter should satisfy the inequality $`\gamma \ll 1`$. Under these conditions we may expand the sine and cosine functions in Eqs. (3) and (6) in Taylor series. Then we obtain the rate of tunnel ionization for arbitrary final-state momenta. Expanding this expression near its maximum value in terms of the parameters $`q`$, $`p_y`$ and $`p_z`$ one arrives at the following general expression
$$W^{rel}\propto \mathrm{exp}\left\{-\frac{4}{3}\frac{\gamma }{\omega }E_b\left[1-\frac{\gamma ^2}{10}\left(1-\frac{g^2}{3}\right)-\frac{E_b}{12c^2}\right]\right\}\mathrm{exp}\left\{-\frac{\gamma }{\omega }\left[\left(p_z-p_{z,m}\right)^2+\left(q-q_m\right)^2\right]\right\}$$
(7)
for the tunnel ionization rate (first exponent) and the momentum distribution of the photoelectron (second exponent) within exponential accuracy. In Eq. (7) the ionization rate and the most probable value for each component of the electron momentum,
$$p_{y,m}=0,p_{z,m}=\frac{F}{\omega }g\left(1+\frac{\gamma ^2}{6}\right),q_m=c-\frac{E_b}{3c}$$
(8)
are given including the first frequency and relativistic corrections in the initial state. In the distribution near the maximum momenta only those terms have been maintained which do not vanish at zero frequency. Equation (7) agrees with the relativistic angular-energy distribution of Krainov in the case of circular polarization $`g=\pm 1`$, vanishing frequency corrections $`\gamma ^2\ll 1`$ and negligible relativistic effects in the initial state $`E_b\ll c^2`$. In the nonrelativistic limit, i.e., $`p\ll c`$, $`F/\omega c\ll 1`$ and $`E_b\ll c^2`$, we have $`q-q_m=-p_x`$ and Eq. (7) reduces to Eq. (1) as it should.
From Eq. (8) we easily obtain the most probable value for the component of the electron momentum along the beam propagation
$$p_{x,m}=\frac{F^2g^2}{2\omega ^2c}+\frac{E_b}{3c}\left(g^2+1\right),$$
(9)
the peak value of the angular distribution
$$\mathrm{tan}\theta _m=\frac{p_{x,m}}{|p_{z,m}|}=\frac{F|g|}{2c\omega }\left(1+\frac{g^2+2}{g^2}\frac{\gamma ^2}{6}\right),\phi _m=0,$$
(10)
and the value of the most probable electron energy $`E_m=p_m^2`$, with
$$p_m=\sqrt{p_{x,m}^2+p_{z,m}^2}=\frac{F|g|}{\omega }\sqrt{1+\left(\frac{Fg}{2\omega c}\right)^2+\frac{\gamma ^2}{3}+\frac{E_b}{3c^2}(g^2+1)}.$$
(11)
Here $`\theta `$ is the angle between the polarization plane and the direction of the photoelectron motion; $`\phi `$ is the angle between the projection of the electron momentum onto the polarization plane and the smaller axis of the polarization ellipse. For the ellipticity $`0<|g|<1`$ the most probable momentum $`𝒑_𝒎`$ of the ejected electron is situated in the plane perpendicular to the maximum value of the electric field strength; for $`|g|=1`$ the electron output in the $`(y,z)`$ plane is isotropic. Notice that the most probable total electron momentum $`p_m`$ contains relativistic final state effects, frequency corrections and weak relativistic initial state effects. Relativistic effects do not contribute to the projection of the momentum along the smaller axis of the polarization ellipse. On the contrary both relativistic final and initial state effects increase the electron momentum projection along the propagation of elliptically polarized laser radiation. The increase due to relativistic initial state effects is proportional to $`(E_b/c^2)(1+g^2)`$. It is typically small (except for the ionization from K shells of heavy atoms ) and does not vanish in the case of linear polarization of the laser light. In contrast to that the relativistic increase due to final state effects which is measured by $`Fg/2\omega c`$ is absent in the case of linear polarized laser radiation.
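As a rough numerical illustration (not part of the original derivation), the leading-order peak angle $`\mathrm{tan}\theta _m=ϵ|g|/2`$ can be evaluated for the Ne$`{}^{8+}`$ parameters used in Sect. III; the unit conversions below are assumed values, and the result comes out close to the 16–17 degree peak angle quoted there (small differences can arise from the constants used and from the neglected correction terms):

```python
import math

# Peak of the angular distribution, tan(theta_m) = eps*|g|/2 to leading
# order (frequency and initial-state corrections neglected), for the
# Ne^8+ example of Sect. III.
C_AU = 137.036          # speed of light in atomic units
F_AU_V_CM = 5.14221e9   # 1 a.u. of field strength in V/cm
HARTREE_EV = 27.2114
HC_EV_NM = 1239.842

F = 2.5e10 / F_AU_V_CM                      # field strength, a.u.
omega = (HC_EV_NM / 1054.0) / HARTREE_EV    # laser frequency, a.u.
g = 0.707                                   # ellipticity

eps_rel = F / (omega * C_AU)                # eps = F/(omega*c)
theta_m = math.degrees(math.atan(eps_rel * abs(g) / 2.0))
print(f"theta_m ~ {theta_m:.1f} deg")       # ejection peak tilted forward
```

The forward tilt of the emission peak away from the polarization plane is thus sizeable at these intensities, which is the main relativistic signature discussed below.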
In what follows we will neglect the frequency corrections and the relativistic initial state effects in order to compare with previous works. In this case and for the case of circular polarization the expressions for the angle $`\theta _m`$ and the most probable electron momentum $`p_m`$ coincide with the corresponding expressions of Krainov . Moreover, though our calculations are valid only in the tunnel regime, our value for the most probable angle of electron ejection coincides with an approximation given by Reiss for the case of circular polarization and valid in the above-barrier ionization regime. In Ref. it has been shown that the simple estimate $`\mathrm{tan}\theta _m=F/2c\omega `$ is in good agreement with the numerical calculations based on the strong-field approximation and performed for above threshold conditions with circularly polarized light. Therefore we expect that our formula Eq. (10) predicts, at least qualitatively, the location of the peak in the relativistic angular distribution for the case of above barrier ionization with elliptically polarized light.
This statement is supported by a semiclassical consideration of the above barrier ionization. According to the semiclassical model the transition occurs from the bound state to that continuum state which has zero velocity at the time $`t`$ with the phase $`\xi `$ of the vector potential $`𝑨(\xi )`$. From this condition we have
$`q`$ $`=`$ $`\sqrt{c^2+p_y^2+p_z^2+ϵ^2c^2g^2+2ϵc\left(p_y\mathrm{sin}\omega \xi -gp_z\mathrm{cos}\omega \xi \right)+(1-g^2)ϵ^2c^2\mathrm{sin}^2\omega \xi },`$ (12)
$`p_y`$ $`=`$ $`{\displaystyle \frac{F}{\omega }}\mathrm{sin}\omega \xi ,p_z=g{\displaystyle \frac{F}{\omega }}\mathrm{cos}\omega \xi .`$ (13)
The ionization rate becomes maximal at the maximum of the electric field of the laser beam. Due to our choice of the gauge (see Eqs. (4)) this maximum occurs at the phase $`\xi =0`$. From Eqs. (12) and the relation $`p_x=(c^2-q^2+p_y^2+p_z^2)/2q`$ we conclude that the most probable final state has the momentum with the components
$$p_x=\frac{F^2g^2}{2c\omega ^2},p_y=0,p_z=g\frac{F}{\omega },$$
(14)
which agrees with the above estimations Eqs. (8) and (9) derived for the case of tunnel ionization if one neglects the frequency corrections and the relativistic initial state effects.
## III Results and conclusions
It is now straightforward to obtain the probability distribution for the components of the final state momentum. Neglecting again the frequency corrections and the relativistic initial state effects we get from Eq. (7)
$$W^{rel}\propto \mathrm{exp}\left\{-\frac{4}{3}\frac{\gamma }{\omega }E_b\right\}\mathrm{exp}\left\{-\frac{\gamma }{\omega }\frac{\left[\delta p_x^2-2\delta p_x\delta p_zϵg+p_y^4/4c^2+\delta p_z^2\left(1+2ϵ^2g^2+ϵ^4g^4/4\right)\right]}{\left(1+ϵ^2g^2/2\right)^2}\right\},$$
(15)
where only the leading contributions in $`\delta p_x=\left(p_x-p_{x,m}\right)`$, $`p_y`$ and $`\delta p_z=\left(p_z-p_{z,m}\right)`$ have been given. In the non-relativistic limit $`ϵ\ll 1`$ and $`p\ll c`$ we obtain Eq. (1). For the case of linear polarization $`g=0`$ we reproduce the momentum distribution of Krainov including the relativistic high energy tail for electrons emitted along the polarization axis. The latter is described by the term $`\mathrm{exp}\left\{-(\gamma /\omega )(p_y^4/4c^2)\right\}`$. However, the high energy tail contains only a very small part of the ejected electrons. From Eq. (15) we see that in the case of linear polarization most of the electrons have nonrelativistic velocities. This is in agreement with recent numerical calculations based on the strong-field approximation . In contrast to the case of linearly polarized laser radiation, the intense elliptically polarized laser light with $`ϵ|g|`$ of the order of unity produces mainly relativistic electrons.
For the sake of comparison we shall give the angular distribution at the maximum of the electron energy spectrum and the energy spectrum at the peak of the angular distribution. We obtain both distributions from Eq. (7) by putting $`p_x=p\mathrm{sin}\theta `$ and $`p_z=p\mathrm{cos}\theta `$, where we have taken into account that the ionization rate is maximal for the emission in the $`(x,z)`$ plane. Choosing the peak value of the angular distribution $`\theta =\theta _m=\mathrm{arctan}ϵ|g|/2`$ we obtain
$$W^{rel}\propto \mathrm{exp}\left\{-\frac{2}{3}\frac{\left(2E_b\right)^{3/2}}{F}\right\}\mathrm{exp}\left\{-\left(\frac{p-p_m}{\mathrm{\Delta }p}\right)^2\right\}$$
(16)
for the energy distribution along the most probable direction of electron ejection. Here
$$\mathrm{\Delta }p=\sqrt{\frac{F}{\sqrt{2E_b}}}\frac{1+(g^2/2)\left(F/\omega c\right)^2}{\sqrt{1+g^2\left(F/\omega c\right)^2}},$$
(17)
is the width of the relativistic energy distribution. From Eq. (17) we conclude that the relativistic width is broader than the nonrelativistic one; it increases with increasing field strength. The relativistic broadening has its maximum for circular polarization; there is no relativistic broadening of the energy width for the case of linear polarization.
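The size of this broadening can be sketched as the ratio of the relativistic width of Eq. (17) to its nonrelativistic limit, $`\left(1+(g^2/2)ϵ^2\right)/\sqrt{1+g^2ϵ^2}`$. The value $`ϵ\approx 0.82`$ used below for the Fig. 1 parameters is our own estimate, not quoted in the text:

```python
import math

# Ratio of the relativistic to the nonrelativistic width of the energy
# distribution, from Eq. (17); the nonrelativistic width is
# sqrt(F/sqrt(2*E_b)), so the ratio depends only on eps = F/(omega*c) and g.
def width_ratio(eps, g):
    ge2 = (g * eps) ** 2
    return (1.0 + 0.5 * ge2) / math.sqrt(1.0 + ge2)

eps = 0.82   # assumed estimate of F/(omega*c) for the Fig. 1 parameters
for g in (0.0, 0.707, 1.0):
    print(f"g={g}: broadening factor {width_ratio(eps, g):.4f}")
```

For $`g=0.707`$ the broadening is only about one percent, consistent with the remark below that it is too small to be seen in Fig. 1, while it vanishes for linear polarization and is largest for circular polarization.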
In Fig. 1 the electron momentum spectrum from Eq. (16) is shown for electrons born in the creation of $`\mathrm{Ne}^{8+}`$ ($`E_b=239\mathrm{eV}`$) ions by elliptically polarized laser radiation with wave length $`\lambda =1.054\mu \mathrm{m}`$, field strength $`2.5\times 10^{10}\mathrm{V}/\mathrm{cm}`$ and ellipticity $`g=0.707`$. The relativistic spectrum is compared with the spectrum of the nonrelativistic theory. One sees the shift of the energy spectrum to higher energies; the relativistic broadening of the spectrum is too small to be observed in the figure.
Putting in equation (7) $`p=p_m=(F|g|/\omega )\sqrt{1+\left(Fg/2\omega c\right)^2}`$ we obtain the angular distribution for the most probable photoelectron energy,
$$W^{rel}\propto \mathrm{exp}\left\{-\frac{2}{3}\frac{\left(2E_b\right)^{3/2}}{F}\right\}\mathrm{exp}\left\{-\left(\frac{\theta -\theta _m}{\mathrm{\Delta }\theta }\right)^2\right\},$$
(18)
where the width of the angular distribution equals
$$\mathrm{\Delta }\theta =\frac{\omega }{|g|}\sqrt{\frac{1}{F\sqrt{2E_b}}}\frac{1}{\sqrt{1+g^2(F/2\omega c)^2}}.$$
(19)
We see that the relativistic theory predicts a narrower angular distribution than the nonrelativistic theory. The infinite width in the case of linear polarization is an artefact of the calculations using the angle between the polarization plane and the direction of electron motion. For linear polarization the electrons are ejected preferentially along the polarization axis if one neglects relativistic initial state effects. For the case of circular polarization the energy-angular distributions Eqs. (16) and (18) coincide with the corresponding expressions of Krainov . Notice that our notations slightly differ from those of Krainov.
In Fig. 2 we have plotted the relativistic and non-relativistic angular distributions for the electrons produced by the same process as in Fig. 1. The relativistic distribution has its maximum at the angle $`\theta _m=\mathrm{arctan}ϵ|g|/2=16.71^{\circ }`$, whereas the nonrelativistic theory predicts a peak at zero angle. Again the relativistic reduction of the angular distribution width is not observable for the parameters we have chosen. From Figs. 1 and 2 one concludes that the appearance of a nonzero mean of the drift momentum component along the beam propagation and the shift of the mean emission angle into the forward direction are the most important indications of a relativistic ionization process. In contrast, the width of the energy-angle distributions as well as the total ionization rate are less sensitive to the relativistic final state effects.
In conclusion, in this paper the relativistic semiclassical ionization of an atom in the presence of intense elliptically polarized laser light has been considered. Simple analytic expressions for the relativistic photoelectron spectrum have been obtained. For the cases of linear and circular polarization our results agree with previous studies. We have shown that the location of the peak in the relativistic angular distribution is shifted toward the direction of beam propagation. The theoretical approach employed in the paper predicts that the maximum of the electron energy spectrum is increased due to relativistic effects. The validity of the simple expressions is formally limited to the tunnel regime. Nevertheless, a part of the results, such as the most probable angle for electron emission, is shown to be valid in the above barrier ionization regime as well. The results obtained in this paper within exponential accuracy may be improved by accounting for Coulomb corrections. However, whereas the Coulomb corrections may strongly influence the total ionization rate, we expect only a small influence of the atomic core on the electron spectrum.
## IV Acknowledgements
I gratefully acknowledge useful discussions with V.M. Rylyuk. This work is financially supported by the Deutsche Forschungsgemeinschaft (Germany) under Grant No. Eb/126-1.
FIGURE CAPTIONS
Electron momentum spectra for electrons produced in the creation of $`\mathrm{Ne}^{8+}`$ by elliptically polarized laser radiation with wave length $`\lambda =1.054\mu \mathrm{m}`$, field strength $`2.5\times 10^{10}\mathrm{V}/\mathrm{cm}`$ and ellipticity $`g=0.707`$, ejected at the most probable angle $`\theta =\theta _m`$; the relativistic spectrum is taken from Eq. (16) with $`\theta _m=16.71^{\circ }`$, the non-relativistic one from Eq. (1) with $`\theta _m=0`$.
Electron angular distribution at the most probable electron momentum $`p=p_m`$. The other parameters are the same as in Fig. 1; the relativistic angular distribution is taken from Eq. (18) with $`p_m=85.91`$, the non-relativistic one from Eq. (1) with $`p_m=82.28`$.
Recently the importance of double parton scattering at the Large Hadron Collider (LHC) has been readdressed. In particular it has been pointed out that double parton scattering may constitute a significant background to Higgs boson production and decay via the $`b\overline{b}`$ decay channel, which, for a Higgs mass below the $`W^+W^{}`$ threshold, is one of the most promising discovery channels. Of course, double scattering contributes to the background in many other processes, and similar analyses have been performed in the past for hadron collisions at lower energy . However, the LHC and its discovery potential necessitates a very accurate estimation of backgrounds where double scattering may provide a significant contribution. Therefore it is essential to obtain a better quantitative understanding of double parton scattering and a more precise estimation of the effect.
Double (multiple) scattering occurs when two (many) different pairs of partons scatter independently in the same hadronic collision. From the theoretical point of view, the presence of double scattering is required to preserve unitarity in the high energy limit, i.e. when the density of partons with small momentum fractions within a hadron is high. In principle, double scattering probes correlations between partons in the hadron in the transverse plane, and thus provides important additional information on hadron structure . If a scattering event is characterized by high centre-of-mass energy and relatively modest partonic subprocess energy, which happens for example in the production of heavy gauge bosons or a Higgs boson at the LHC, then parton-parton correlations can be assumed to be negligible. Such an assumption leads to a simple factorised expression for the double scattering cross section (in the case of two distinguishable interactions, $`a`$ and $`b`$)
$$\sigma _{\mathrm{DS}}=\frac{\sigma _a\sigma _b}{\sigma _{\mathrm{eff}}}.$$
(1)
Here $`\sigma _a`$ represents the single scattering cross section
$$\sigma _a=\sum _{i,j}\int dx_A\,dx_B\,f_i(x_A)f_j(x_B)\widehat{\sigma }_{ij\to a},$$
(2)
with $`f_i(x_A)`$ being the standard parton distribution of parton $`i`$ and $`\widehat{\sigma }_{ij\to a}`$ representing the partonic cross section. If the two interactions are indistinguishable, double counting is avoided by replacing Eq. (1) with
$$\sigma _{\mathrm{DS}}=\frac{\sigma _a\sigma _b}{2\sigma _{\mathrm{eff}}}.$$
(3)
The parameter $`\sigma _{\mathrm{eff}}`$, the effective cross section, contains all the information about the non-perturbative structure of the proton in this simplified approach and corresponds to the overlap of the matter distributions in the colliding hadrons. The factorisation hypothesis appears to be in agreement with the experimental data from CDF at the Tevatron $`p\overline{p}`$ collider. It is also believed that $`\sigma _{\mathrm{eff}}`$ is largely independent of the centre-of-mass energy of the collision and of the nature of the partonic interactions (for a detailed discussion the reader is referred to ). Therefore throughout this study we will use the value $`\sigma _{\mathrm{eff}}=14.5\mathrm{mb}`$, as measured by CDF. <sup>1</sup><sup>1</sup>1Strictly speaking, this value refers to an exclusive measurement and therefore should be understood as an upper bound on $`\sigma _{\mathrm{eff}}`$.
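As a quick numerical illustration of Eqs. (1) and (3), the factorised ansatz is trivial to evaluate. The sketch below uses the CDF value $`\sigma _{\mathrm{eff}}=14.5`$ mb; the single-$`W`$ cross sections are illustrative placeholders, not values quoted in this paper.

```python
# Factorised double parton scattering ansatz, Eqs. (1) and (3).
# All cross sections in picobarn; 1 mb = 1e9 pb.
SIGMA_EFF_PB = 14.5e9  # CDF value, 14.5 mb

def sigma_ds_distinct(sigma_a, sigma_b, sigma_eff=SIGMA_EFF_PB):
    """Eq. (1): two distinguishable hard scatterings a and b."""
    return sigma_a * sigma_b / sigma_eff

def sigma_ds_identical(sigma_a, sigma_eff=SIGMA_EFF_PB):
    """Eq. (3): two indistinguishable scatterings; the factor 2
    in the denominator avoids double counting."""
    return sigma_a ** 2 / (2.0 * sigma_eff)

# Illustrative single-W cross sections (placeholders only):
sigma_wp, sigma_wm = 1.0e5, 7.3e4  # pb

print(sigma_ds_identical(sigma_wp))           # ~ sigma_DS(W+W+)
print(sigma_ds_distinct(sigma_wp, sigma_wm))  # ~ sigma_DS(W+W-)
```

Note the quadratic dependence on the single scattering cross sections: the double scattering contribution grows in relative importance as the single-$`W`$ rate increases.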
Given the potential importance of double scattering as a background to new-physics searches at the LHC, it is important to be able to calibrate the effect, i.e. to measure $`\sigma _{\mathrm{eff}}`$ using a known, well-understood Standard Model process. In the case of single scattering processes, the benchmark process is $`W`$ boson production, see for example Ref. <sup>2</sup><sup>2</sup>2It has even been suggested that this process could be used to measure the luminosity at the LHC. This suggests that $`W`$ pair production could be used to calibrate double parton scattering. In the Standard Model, like-sign $`W`$ pair production is much smaller than opposite-sign production, which suggests that the former channel is the best place to look for additional double scattering contributions.
The purpose of this note is to quantify the expected cross sections for like- and opposite-sign $`W`$ pair production at the LHC, from both the single and double scattering mechanisms, and to explore differences in the distributions of the final state particles.
The predicted rate of single $`W`$ production at the LHC is naturally very high, resulting in a significant double scattering cross section. Since the $`W^+`$ and $`W^{-}`$ single scattering cross sections are comparable in magnitude, the same will be true for the double scattering $`\sigma _{\mathrm{DS}}(W^+W^+)`$, $`\sigma _{\mathrm{DS}}(W^{-}W^{-})`$ and $`\sigma _{\mathrm{DS}}(W^+W^{-})`$ cross sections. However, for single scattering we would expect $`\sigma (W^{-}W^{-})<\sigma (W^+W^+)\ll \sigma (W^+W^{-})`$. The reason is that while the latter is $`𝒪(\alpha _W^2)`$ at leading order, same-sign inclusive $`W`$ pair production is a mixed strong-electroweak process with leading contributions of $`𝒪(\alpha _S^2\alpha _W^2)`$ and $`𝒪(\alpha _W^4)`$. Hence we might expect that like-sign $`W`$ pair production, with its relatively larger double scattering component, could give a clean measurement of $`\sigma _{\mathrm{eff}}`$.
The possibility of double scattering ‘background’ contributions to like-sign $`W`$ pair production was noticed some time ago , when this process was considered as one of the most promising channels for searching for strong scattering in the electroweak symmetry breaking sector . In these studies the double scattering contribution was treated as an unwanted background and suppressed by applying appropriate cuts.
We begin our analysis by calculating the total single-scattering cross sections for single $`W`$ and (opposite-sign and like-sign) $`W`$ pair production in $`pp`$ and $`p\overline{p}`$ collisions at scattering energy $`\sqrt{s}`$. For consistency, we consider only leading-order cross sections for all processes studied, i.e. we use leading-order subprocess cross sections with leading-order parton distributions. <sup>3</sup><sup>3</sup>3We note that the full $`𝒪(\alpha _S^2)`$ corrections to single $`W`$ and $`𝒪(\alpha _S)`$ corrections to $`W`$ pair production have been calculated.
As already noted, in the context of leading-order single parton scattering, opposite-sign $`W`$ pair production in hadron-hadron collisions arises from the $`𝒪(\alpha _W^2)`$ subprocess
$$q+\overline{q}\to W^++W^{-}$$
(4)
In contrast, like-sign $`W`$ pair production is an $`𝒪(\alpha _S^2\alpha _W^2)`$ or $`𝒪(\alpha _W^4)`$ process at leading order:
$$q+q\to W^++W^++q^{\prime }+q^{\prime }$$
(5)
with $`q=u,c,\dots `$, $`q^{\prime }=d,s,\dots `$, together with the corresponding crossed processes. Charge conjugation gives a similar set of subprocesses for $`W^{-}W^{-}`$ production. The Feynman diagrams split into two groups: the first set corresponds to the $`𝒪(\alpha _S^2\alpha _W^2)`$ gluon exchange process $`qq\to qq`$ where a single $`W`$ is emitted from each of the quark lines, see Fig. 1(a). The second, $`𝒪(\alpha _W^4)`$, set contains analogous electroweak diagrams, i.e. $`t`$-channel $`\gamma `$ or $`Z`$ exchange, as well as $`WW`$ scattering diagrams, including also a $`t`$-channel Higgs exchange contribution, see Fig. 1(b-f). Note that the corresponding cross sections are infra-red and collinear safe: the total rate can be calculated without imposing any cuts on the final-state quark jets. We would therefore expect naive coupling constant power counting to give the correct order of magnitude difference between the like-sign and opposite-sign cross sections, i.e. $`\sigma (W^+W^+)\sim \alpha _{S,W}^2\sigma (W^+W^{-})`$. Given the excess of $`u`$ quarks over $`d`$ quarks in the proton, we would also expect $`\sigma (W^+W^+)>\sigma (W^{-}W^{-})`$.
Figure 2 shows the total single $`W`$ and $`W`$ pair cross sections in proton–antiproton and proton–proton collisions as a function of the collider energy. No branching ratios are included, and there are no cuts on any of the final state particles. The matrix elements are obtained using MADGRAPH and HELAS . We use the MRST leading-order parton distributions from Ref. , and the most recent values for the electroweak parameters. <sup>4</sup><sup>4</sup>4Note that the same-sign cross sections are weakly dependent on the Higgs mass: varying the mass from $`M_H=125`$ GeV to $`M_H=150`$ GeV leads to only a 2% change in the total rate at the LHC. We use $`M_H=125`$ GeV as the default value. Note that for $`p\overline{p}`$ collisions, $`\sigma (W^+)=\sigma (W^{-})`$ and $`\sigma (W^+W^+)=\sigma (W^{-}W^{-})`$. The like-sign and opposite-sign cross sections differ by about two orders of magnitude, as expected. Despite the fact that $`\alpha _S>\alpha _W`$, the electroweak contribution to the single scattering like-sign $`WW`$ production cross section is similar in size to the strong contribution. This is due to the relatively large number of diagrams (e.g. 68 for $`uu\to W^+W^+dd`$), as compared to the gluon exchange contribution (16 for the same process). A total annual luminosity of $`\mathcal{L}=10^5`$ pb<sup>-1</sup> at the LHC would yield approximately $`65`$ thousand $`W^+W^+`$ events and $`29`$ thousand $`W^{-}W^{-}`$ events, before higher-order corrections, branching ratios and acceptance cuts are included.
The production characteristics of the $`W`$s in like- and opposite-sign production are somewhat different. In particular, the presence of two jets in the final state for the former leads to a broader transverse momentum distribution, as illustrated in Fig. 3. Also of interest is the jet transverse momentum distribution in $`W^\pm W^\pm `$ production, shown in Fig. 4. This indicates that a significant fraction of the jets would pass a detection $`p_T`$ threshold, and could be used as an additional ‘tag’ for like-sign production. Of course one also expects large $`p_T`$ jets in opposite-sign $`W`$ production via higher-order processes, e.g. $`q\overline{q}\to W^+W^{-}g`$ at $`𝒪(\alpha _S)`$, but these have a steeply falling distribution reflecting the underlying infra-red and collinear singularities at $`p_T=0`$.
We turn now to the double parton scattering cross sections. As discussed above, we estimate these by simply multiplying the corresponding single scattering cross sections and normalising by $`2\sigma _{\mathrm{eff}}`$ for the like-sign $`W`$ pair production and $`\sigma _{\mathrm{eff}}`$ for the opposite-sign case. The factorisation assumption holds since the energy required to produce a vector boson is much lower than the overall centre of mass energy. Figure 2 shows the resulting total $`\sigma _{\mathrm{DS}}(W^+W^{-})`$ and $`\sigma _{\mathrm{DS}}(W^\pm W^\pm )`$ cross sections as a function of $`\sqrt{s}`$. The opposite-sign single scattering and double scattering cross sections differ by two orders of magnitude. However for like-sign $`W^+W^+`$ ($`W^{-}W^{-}`$) production the double scattering contribution is only a factor 2.1 (1.7) smaller than the single scattering contribution. Additionally, a double scattering event signature differs significantly from the single scattering case. In particular, the $`W`$ transverse momentum distribution from double scattering has a very pronounced, steep peak for small values of $`p_T`$ (see Fig. 3), inherited from the single scattering $`p_T`$ distribution <sup>5</sup><sup>5</sup>5We are assuming here that the non-perturbative ‘intrinsic’ transverse momentum distributions of the two partons participating in the double parton scattering are uncorrelated., in contrast to the broader single-scattering distributions. Obviously, similar features will characterize the $`p_T`$ spectra of leptons originating from $`W`$ decay, allowing for additional discrimination between double and single scattering events.
The absolute rate of like-sign $`W^+W^+`$ and $`W^{-}W^{-}`$ pair production therefore provides a relatively clean measure of $`\sigma _{\mathrm{eff}}`$ at LHC energies. Table 1 summarizes the number of expected events in the various $`WW`$ channels (recall these are leading-order estimates only, with no branching ratios), assuming $`\sigma _{\mathrm{eff}}=14.5`$ mb. However, since the absolute event rates shown in Table 1 are sensitive to overall measurement and theoretical uncertainties, it may be more useful to consider cross section ratios. Consider for example the ratio of the like- to opposite-sign event rates
$$\mathcal{R}=\frac{N(W^+W^+)+N(W^{-}W^{-})}{N(W^+W^{-})}=\frac{\sigma (W^+W^+)+\sigma (W^{-}W^{-})+(2\sigma _{\mathrm{eff}})^{-1}\left[\sigma (W^+)^2+\sigma (W^{-})^2\right]}{\sigma (W^+W^{-})+\sigma _{\mathrm{eff}}^{-1}\sigma (W^+)\sigma (W^{-})}$$

(6)
with both single and double scattering contributions included. The ratio $`\mathcal{R}`$ for the LHC is shown as a function of $`\sigma _{\mathrm{eff}}`$ in Fig. 5. The limit $`\sigma _{\mathrm{eff}}\to \infty `$ corresponds to the (very small) single scattering ratio, $`\mathcal{R}=0.0125`$, while $`\sigma _{\mathrm{eff}}\to 0`$ corresponds to the ratio ($`\simeq 1.05`$) of the single $`W`$ production cross sections in $`pp`$ collisions. The CDF measured value of $`\sigma _{\mathrm{eff}}=14.5`$ mb gives $`\mathcal{R}=0.019`$.
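The limiting behaviour of Eq. (6) quoted above is easy to verify numerically. In this sketch the input cross sections are illustrative placeholders (chosen only so that the two limits come out near the quoted 0.0125 and 1.05), not the values computed in the paper:

```python
def ratio_R(sig_pp, sig_mm, sig_pm, sig_p, sig_m, sigma_eff):
    """Eq. (6): like-sign over opposite-sign WW event rates with both
    single and double scattering contributions, all in the same units."""
    num = sig_pp + sig_mm + (sig_p ** 2 + sig_m ** 2) / (2.0 * sigma_eff)
    den = sig_pm + sig_p * sig_m / sigma_eff
    return num / den

# Illustrative placeholder inputs (pb):
sig_p, sig_m = 1.0e5, 7.3e4                # single W+ and W- production
sig_pp, sig_mm, sig_pm = 0.65, 0.29, 75.0  # single-scattering WW rates

# sigma_eff -> infinity: pure single scattering ratio
print(ratio_R(sig_pp, sig_mm, sig_pm, sig_p, sig_m, 1e30))
# sigma_eff -> 0: double scattering dominates,
# R -> (sig_p**2 + sig_m**2) / (2 * sig_p * sig_m) >= 1
print(ratio_R(sig_pp, sig_mm, sig_pm, sig_p, sig_m, 1e-20))
```

Plotting this function over intermediate values of $`\sigma _{\mathrm{eff}}`$ reproduces the qualitative shape of the curve discussed in connection with Fig. 5.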
In conclusion, we have shown that like-sign $`W`$ pair production provides a relatively clean environment for searching for and calibrating double parton scattering at the LHC. A measurement of $`\sigma _{\mathrm{eff}}`$ from this process would allow the double scattering backgrounds to new physics searches to be calibrated with precision. In this brief study we have concentrated on overall total event rates. An interesting next step would be to perform more detailed Monte Carlo studies of the various production processes, taking into account the $`W`$ decays, experimental acceptance cuts, etc. In fact it would not be difficult to devise additional cuts to enhance the double scattering contribution. We see from Fig. 3, for example, that a cut of $`p_T(W)<𝒪(20\mathrm{GeV})`$ would remove most of the single scattering events while leaving the double scattering contribution largely intact.
Acknowledgements
This work was supported in part by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12 - MIHT). A.K. gratefully acknowledges financial support received from the ORS Award Scheme and the University of Durham. |
no-problem/9912/astro-ph9912269.html | ar5iv | text | # Sc and Mn abundances in disk and metal-rich halo stars Based on observations carried out at the European Southern Observatory, La Silla, Chile, and Beijing Astronomical Observatory, Xinglong, China
## 1 Introduction
Scandium and manganese abundances in long-lived F and G stars are of high interest not only because they help us to understand the chemical evolution of the Galaxy, but also because they provide some special constraints on nucleosynthesis theory.
It has long been known that $`\alpha `$ elements like O, Mg, Si, and Ca are mostly produced by Type II supernovae, while some iron-peak elements have significant contributions from Type Ia supernovae. But we have no clear idea how Sc, as an element intermediate between $`\alpha `$ elements and iron-peak elements in the periodic table, is formed. Sc abundances are available only for a few disk stars, and the two most extensive works on halo stars do not give consistent results. Gratton & Sneden (Gratton91 (1991)) find a solar Sc/Fe ratio in metal-poor dwarfs and giants, while Sc is found to be overabundant by Zhao & Magain (Zhao90 (1990)) with \[Sc/Fe\] $`\simeq +0.3`$ in metal-poor dwarfs. Clearly, more detailed abundance information will be useful to reveal the synthesis history of Sc in the Galaxy.
Mn is known to be quite underabundant with respect to Fe in metal-poor stars (Gratton Gratton89 (1989), McWilliam et al. McWilliam95 (1995); Ryan et al. Ryan96 (1996)), but the pattern of \[Mn/Fe\] vs. \[Fe/H\] is not known in great detail. It is still an open question if Mn is produced mainly in massive stars as advocated by Timmes et al. (Timmes95 (1995)) or if Type Ia SNe make a significant contribution at higher metallicities. Furthermore, the pattern of \[Mn/Fe\] in disk and metal-rich halo stars is needed for comparison with recent observations of Mn abundances in damped Lyman $`\alpha `$ systems (Pettini et al. 1999a ; 1999b ).
The reason that the Sc and Mn abundance patterns are not well established may be related to the significant hyperfine structure (HFS) of their lines. Data on the HFS of several Sc and Mn lines suitable for abundance determinations of solar-type dwarfs is, however, available. The lack of a consistent study on both elements for disk and metal-rich halo stars therefore inspired us to carry out a high precision analysis for a large sample of main-sequence stars selected to have $`5300<T_{\mathrm{eff}}<6500`$ K, $`4.0<\text{log}g<4.6`$, and $`-1.4<\text{[Fe/H]}<+0.1`$.
## 2 Observations
The observational data are taken from two sources: Chen et al. (Chen99 (1999), hereafter Chen99) and Nissen & Schuster (Nissen97 (1997), hereafter NS97). The first sample (disk stars) was observed at Beijing Astronomical Observatory (Xinglong Station, China) with the Coudé Echelle Spectrograph and a $`1024\times 1024`$ Tek CCD attached to the 2.16m telescope, giving a resolution of the order of 40 000. The second sample (kinematically selected halo stars and metal-poor disk stars) was observed with the ESO NTT EMMI spectrograph and a $`2048\times 2048`$ SITe CCD detector at a higher resolution ($`R=\mathrm{60\hspace{0.17em}000}`$). The exposure times were chosen in order to obtain a signal-to-noise ratio of above 150 in both samples.
The spectra were reduced with standard MIDAS (Chen99) or IRAF (NS97) routines for background correction, flatfielding, extraction of echelle orders, wavelength calibration, continuum fitting and measurement of equivalent widths (see Chen99 and NS97 for details). Figure 1 shows the spectra for one disk star in Chen99 and one halo star in NS97 around the Mn i triplet at 6020 Å.
## 3 Analysis
### 3.1 Abundance calculation
In Chen99, the effective temperature was derived from the Strömgren $`by`$ color index using the recent calibration by the infrared-flux method (Alonso et al. Alonso96 (1996)). For consistency, stellar temperatures in the NS97 sample (derived from the excitation balance of Fe i lines) were redetermined using the Alonso et al. calibration of $`by`$. The new $`T_{\mathrm{eff}}`$’s are on the average 60 K lower than those of NS97 but the rms scatter between the two sets of values is only $`\pm 45`$ K.
Surface gravities in Chen99 were based on Hipparcos parallaxes (ESA ESA97 (1997)). For the majority of stars in NS97 accurate parallaxes are, however, not available, and surface gravities were therefore determined by requiring that Fe i and Fe ii lines provide the same iron abundance. As shown by Chen99, this leads to gravities consistent with those derived from Hipparcos parallaxes.
Microturbulence velocities were estimated by requiring that the iron abundance derived from Fe i lines with $`\chi >4`$ eV should not depend on equivalent width. As shown in Table 2, the typical error of the microturbulence parameter, $`\pm 0.3`$ km s<sup>-1</sup>, has no significant influence on the derived values of \[Sc/Fe\] and \[Mn/Fe\].
The oscillator strengths for two Sc ii lines ($`\lambda `$5657 and $`\lambda `$6604) and two Mn i lines ($`\lambda `$6013 and $`\lambda `$6021) were taken from Lawler & Dakin (Lawler89 (1989)) and Booth et al. (Booth84 (1984)), respectively, while differential values for another three Sc ii lines and one Mn i line ($`\lambda `$6016) were estimated from an “inverse” analysis of 10 well observed “standard” stars from Chen99 and all stars from NS97. Hence, these lines are forced to give the same average abundances of Sc and Mn as the lines with known $`gf`$ values for the aforementioned group of stars, but the inclusion of these lines improves the precision of differential abundances. We note that in the spectrum of the Sun, the Mn i line at 6016 Å appears stronger relative to the two other lines. Hence, this line may be blended by a weak line in metal-rich stars, but exclusion of $`\lambda `$6016 in the derivation of Mn abundances does not change the derived trend of \[Mn/Fe\] significantly.
The damping constant corresponding to collisional broadening due to Van der Waals interaction with hydrogen and helium atoms was calculated in the Unsöld (Unsold55 (1955)) approximation with an enhancement factor of $`E_\gamma =2.5`$ for both elements. The effect of changing the enhancement factor will be discussed in Sect. 3.3.
The abundance analysis is based on a grid of flux constant, homogeneous, LTE model atmospheres computed using the Uppsala MARCS code with updated continuous opacities (Asplund et al. Asplund97 (1997)). Abundance calculations in the LTE approximation with HFS included were made using the Uppsala SPECTRUM synthesis program by forcing the theoretical equivalent widths, derived from the model, to match the observed ones.
Equivalent widths in the solar flux spectrum were measured from a Xinglong spectrum of the Moon and analyzed in the same way as the stellar equivalent widths using the same grid of models to interpolate to the atmospheric parameters of the Sun. Hence, \[Sc/Fe\] and \[Mn/Fe\] are derived in a strictly differential way with respect to the Sun, thereby minimizing many error sources. In particular, we emphasize that possible systematic errors in the gf-values do not affect the relative abundance ratios with respect to the Sun.
Table 3 and Table 4 present the derived abundances, together with the atmospheric parameters and equivalent widths, for stars in Chen99 and NS97, respectively.
### 3.2 Hyperfine structure effect
For all Sc and Mn lines used in our analysis, it was investigated how much the HFS affects the derived abundances. The energy splitting and the relative intensities required in the HFS calculation for Sc and Mn are taken from Steffen (Steffen85 (1985)). His data is mostly based on Biehl (Biehl76 (1976)), but he makes a small adjustment to the wavelength shift and the relative intensities, leading to fewer components than Biehl’s data. The log$`gf`$ values for the individual components (see Table 1) are then calculated from the relative strengths based on a given total $`gf`$ value.
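The last step described above — distributing a given total $`gf`$ value over the HFS components according to their relative strengths — amounts to a simple weighting. A sketch (the component weights below are made-up numbers, not Steffen's data):

```python
from math import log10

def split_log_gf(log_gf_total, rel_intensities):
    """Distribute a total oscillator strength over hyperfine components
    in proportion to their relative intensities; returns log gf per
    component, such that the component gf values sum to the total."""
    gf_total = 10.0 ** log_gf_total
    norm = sum(rel_intensities)
    return [log10(gf_total * w / norm) for w in rel_intensities]

# Made-up example: a line with log gf = -0.25 and three HFS components
components = split_log_gf(-0.25, [5.0, 3.0, 2.0])
print(components)
```

The list of per-component $`\mathrm{log}gf`$ values produced this way is what enters the spectrum synthesis of each HFS-affected line.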
The results show that HFS has a small influence on the two weak Sc ii lines ($`\lambda `$6245 and $`\lambda `$6604) with a maximum of 0.07 dex at $`EW\simeq 50`$ mÅ, but the effect is very significant for the stronger lines ($`\lambda `$5526 and $`\lambda `$5657), reaching 0.1-0.3 dex for the equivalent width range of 30-100 mÅ. The Sc ii 5239Å line (available in the NS97 spectra) is subject to a maximum HFS effect of about 0.1 dex for $`EW\simeq 50`$ mÅ.
For the Mn i lines, the abundance deviation between the derivations with and without HFS reaches a maximum of 0.2 dex for the 6013Å line and 0.10 dex for $`\lambda 6016`$ and $`\lambda 6021`$ at $`EW\simeq 100`$ mÅ. We have found that the HFS data of Booth et al. (Booth83 (1983)) gives slightly lower Mn abundances than Steffen’s (Steffen85 (1985)) data; the deviation increases with line strength, reaching 0.1 dex for the 6013Å line and 0.04 dex for $`\lambda 6016`$ and $`\lambda 6021`$ at $`EW\simeq 100`$ mÅ. The reason for the discrepancy is unclear. But the original HFS data for $`\lambda `$6013 from Biehl (Biehl76 (1976)) gives exactly the same abundance as that based on Steffen’s (Steffen85 (1985)) HFS data.
The final abundances are the mean values of the abundances derived from the available lines with HFS according to Steffen (Steffen85 (1985)) taken into account.
### 3.3 Abundance uncertainties
Abundance errors are mainly due to random errors in the equivalent widths, the uncertainty of the stellar atmospheric parameters, and a possible error in the enhancement factor of the damping constant. An estimate of the first contribution is obtained by dividing the rms scatter of abundances, determined from the individual lines, by $`\sqrt{N}`$, where $`N`$ is the number of lines. The second kind of uncertainty is estimated by a change of 70 K in $`T_{\mathrm{eff}}`$, 0.1 dex in log$`g`$, 0.1 dex in \[Fe/H\] and 0.3 km s<sup>-1</sup> in microturbulence, typical standard deviations for these atmospheric parameters as estimated in Chen99. Finally, the abundance change caused by a variation of $`E_\gamma `$ by 50% was calculated.
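The error budget described above combines a line-to-line scatter term ($`\mathrm{rms}/\sqrt{N}`$) with parameter-induced abundance shifts added in quadrature; a minimal sketch with made-up numbers (not the entries of Table 2):

```python
from math import sqrt

def scatter_error(line_abundances):
    """Random error: rms scatter of the individual line abundances
    divided by sqrt(N), N being the number of lines."""
    n = len(line_abundances)
    mean = sum(line_abundances) / n
    rms = sqrt(sum((a - mean) ** 2 for a in line_abundances) / (n - 1))
    return rms / sqrt(n)

def total_error(scatter, param_shifts):
    """Quadrature sum of the scatter term and the abundance shifts
    induced by the uncertainties of Teff, log g, [Fe/H] and xi."""
    return sqrt(scatter ** 2 + sum(s ** 2 for s in param_shifts))

lines = [0.12, 0.17, 0.14, 0.15]  # made-up [X/Fe] values per line
print(total_error(scatter_error(lines), [0.02, 0.01, 0.01, 0.02]))
```

The same quadrature recipe applies to each abundance ratio separately, with the parameter shifts evaluated line by line as in Table 2.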
Table 2 presents the errors in the relative abundances for a typical disk star and one of the halo stars. Note that we have compared the Sc abundance derived from Sc ii lines with the Fe abundance derived from Fe ii lines and the Mn abundance derived from Mn i lines with the Fe abundance derived from Fe i lines. Hence, the ratios are derived from lines having a similar dependence on $`T_{\mathrm{eff}}`$ and log$`g`$, which explains the rather small effect of the uncertainty in the atmospheric parameters. In the case of Sc the uncertainty of the enhancement factor is not important, because the Sc ii lines are quite weak and not of too different strengths in the solar and the stellar spectra. The Mn i lines show, however, a large variation in equivalent width, from about 100 mÅ at solar metallicity to about 10 mÅ in the metal-poor halo stars (due to the underabundance of Mn with respect to Fe). Hence, the choice of $`E_\gamma `$ has a significant effect on the slope of \[Mn/Fe\] vs. \[Fe/H\]. As seen from Table 2, an increase of $`E_\gamma `$ by 50% (from 2.5 to 3.75) increases the derived \[Mn/Fe\] at $`\text{[Fe/H]}\simeq -0.8`$ by 0.05 dex, and a decrease of $`E_\gamma `$ from 2.5 to 1.0 (i.e. no enhancement of the Unsöld approximation for the damping constant) would decrease \[Mn/Fe\] of the metal-poor stars by about 0.10 dex. Unfortunately, there are no reliable theoretical calculations of the damping constant for the Mn i lines, nor any empirical estimates of $`E_\gamma `$ from e.g. the solar spectrum.
In addition to the abundance errors estimated in Table 2, possible non-LTE effects should be considered. As the Sc abundance is determined from Sc ii lines and compared to Fe from Fe ii lines, the non-LTE effects on \[Sc/Fe\] should be small, because the abundances of both elements are based on lines from the dominating ionization stage. The Mn abundances (derived from Mn i lines) may, however, be subject to non-LTE effects due to overionization of Mn i caused by the UV radiation field, especially in the metal-poor stars. According to our knowledge there are no non-LTE calculations of Mn for F and G stars, but noting that the ionization potential of Mn i (7.43 eV) is similar to that of Fe i (7.87 eV) one might expect that the ratio Mn/Fe is not significantly changed by non-LTE effects, when the abundances of both elements are determined from neutral lines. Clearly, this should be checked by detailed computations.
## 4 Discussion and conclusions
Stars in the metallicity range $`-1.4<\text{[Fe/H]}<+0.1`$ include a mixture of different populations. In our sample, three groups can be separated according to their kinematics: thin disk stars ($`V^{\prime }>-50`$ km s<sup>-1</sup>), thick disk stars ($`-120<V^{\prime }<-50`$ km s<sup>-1</sup>) and halo stars ($`V^{\prime }<-175`$ km s<sup>-1</sup>). The values of $`V^{\prime }`$ (the velocity component in the direction of Galactic rotation with respect to the Local Standard of Rest) are taken from Chen99 and NS97. As discussed in Chen99 a rotational lag of about 50 km s<sup>-1</sup> corresponds rather well to the transition between thick disk stars ($`\sigma (W^{\prime })\simeq 40`$ km s<sup>-1</sup> and ages $`>10`$ Gyr) and thin disk stars ($`\sigma (W^{\prime })\simeq 20`$ km s<sup>-1</sup> and ages $`<10`$ Gyr) although a clean separation between the two populations is not possible. The thick disk stars may also be contaminated by a few halo stars, whereas the group with $`V^{\prime }<-175`$ km s<sup>-1</sup> consists of genuine halo stars according to the Galactic orbits computed in NS97. Furthermore, the halo stars can be split up into “low-$`\alpha `$” and “high-$`\alpha `$” stars according to the abundances of $`\alpha `$ elements as described by NS97. The low-\[$`\alpha `$/Fe\] halo stars may belong to the “high-halo”, resulting from a merging process of the Galaxy with satellite components (NS97), or they may come from low-density regions in the outer halo where the chemical evolution proceeded slowly allowing incorporation of iron from Type Ia SNe at a lower metallicity than in the inner halo and the thick disk (NS97; Gilmore & Wyse Gilmore98 (1998)).
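For concreteness, the kinematic split used here reduces to simple cuts on $`V^{\prime }`$, the rotational velocity component relative to the LSR (negative values denote a lag). A sketch, with stars falling in the gap between the thick disk and halo cuts left unclassified:

```python
def population(v_prime):
    """Classify a star by V' (km/s, relative to the LSR) into
    thin disk, thick disk, or halo; stars in the gap
    -175 <= V' <= -120 are left unclassified."""
    if v_prime > -50.0:
        return "thin disk"
    if -120.0 < v_prime < -50.0:
        return "thick disk"
    if v_prime < -175.0:
        return "halo"
    return "unclassified"

print(population(-20.0))   # thin disk
print(population(-80.0))   # thick disk
print(population(-220.0))  # halo
```

The same three-way split is what separates the symbol groups plotted in Fig. 2.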
Figure 2 shows the run of \[Sc/Fe\] and \[Mn/Fe\] vs. \[Fe/H\] with the four groups of stars marked by different symbols.
### 4.1 Scandium
As seen from Fig. 2, \[Sc/Fe\] declines from an overabundance ($`\simeq 0.2`$) at $`\text{[Fe/H]}=-1.4`$ to zero at solar metallicity even though there are some exceptions. Hence, Sc seems to follow the even-Z $`\alpha `$ elements like Si, Ca and Ti (see Edvardsson et al. Edvardsson93 (1993) and Chen99). In particular, Sc keeps a near-constant overabundance for $`\text{[Fe/H]}<-0.6`$ like the $`\alpha `$ elements, except for the group of low-\[$`\alpha `$/Fe\] halo stars, which tend to have low values of \[Sc/Fe\], just as expected if Sc belongs to the $`\alpha `$-element family.
Theoretically, Samland’s (Samland98 (1998)) model reproduces both the overabundance of about 0.2 dex in \[Sc/Fe\] at $`\text{[Fe/H]}\simeq -1.0`$ and the decreasing relation with increasing metallicity by suggesting that Sc is mostly made by high mass stars. Observationally, our result supports the high values of \[Sc/Fe\] for metal-poor dwarfs by Zhao & Magain (Zhao90 (1990)) instead of the solar ratio found by Gratton & Sneden (Gratton91 (1991)) for metal-poor dwarfs and giants. The reason for this discrepancy is unclear, but we note that Gratton & Sneden adopt solar $`gf`$ values based on the empirically-derived solar model by Holweger & Müller (Holweger74 (1974)). As stated in their paper, the use of a theoretical model for the Sun (like for the stars) would increase the derived \[Sc/Fe\] values by 0.06 dex. Furthermore, we note that Hartmann & Gehren (Hartmann88 (1988)) derive an average value \[Sc/Fe\] = +0.11 for 16 metal-poor subdwarfs.
### 4.2 Manganese
Figure 2 shows that \[Mn/Fe\] increases from a very significant underabundance at \[Fe/H\] $`\simeq -1.4`$ to a solar ratio at $`\text{[Fe/H]}\simeq 0.0`$. The underabundance of $`\simeq -0.5`$ in the metal-poor stars is consistent with the recent study of Mn in damped Lyman-$`\alpha `$ systems by Pettini et al. (1999a ; 1999b ), and with the results of Gratton (Gratton89 (1989)), who found \[Mn/Fe\] $`\simeq -0.4`$ in 11 giants and dwarfs with $`-2.5<\text{[Fe/H]}<-1.0`$ and an increasing trend of \[Mn/Fe\] with \[Fe/H\] for 14 disk stars.
Timmes et al. (Timmes95 (1995)) have shown that nucleosynthesis of Mn in massive stars with a metallicity dependent yield (due to the lower neutron excess in metal-poor stars) explains the trend of \[Mn/Fe\] rather well, and they argue that Type Ia SNe are not a major contributor to the synthesis of Mn. As seen from Fig. 2 there is, however, a rather sharp increase of \[Mn/Fe\] around $`\text{[Fe/H]}\simeq -0.7`$ or even a discontinuity in \[Mn/Fe\] between the thick disk and halo stars on one side and the thin disk stars on the other side. In fact, the trend of \[Mn/Fe\] appears to mirror that of the $`\alpha `$ elements, especially that of O (Edvardsson et al. Edvardsson93 (1993); Chen99). This suggests that Type Ia SNe give a large contribution to the production of Mn in the Galactic disk in agreement with the chemical evolution model of Samland (Samland98 (1998)), who finds that 75% of the Galactic Mn is produced in SNe of Type Ia. On the other hand, we note that the “low-$`\alpha `$” halo stars do not stand out from the thick disk stars in the \[Mn/Fe\] – \[Fe/H\] diagram. If Type Ia SNe make a large contribution to the production of Mn one might expect the “low-$`\alpha `$” stars to have higher \[Mn/Fe\] values than the thick disk stars, because the “low-$`\alpha `$” stars were explained by assuming that they were formed from interstellar gas enriched at lower than usual \[Fe/H\] values with the products of Type Ia supernovae. However, the position of the “low-$`\alpha `$” stars in Fig. 2 may be due to cancelling effects; the Type Ia SNe produce no O and fewer $`\alpha `$-elements than Type II SNe, and the neutron excess necessary for Mn production may depend more on the overall metallicity than on just \[Fe/H\].
We conclude that the nucleosynthesis of Mn may be modulated in a complicated way. Moreover, it is hard to understand why a strong underabundance like that of Mn is not present for the other odd-Z iron-peak elements, V and Co. According to the results of Gratton & Sneden (Gratton91 (1991)), it seems that V and Co vary in lockstep with Fe down to metallicities of $`\text{[Fe/H]}\simeq -2.5`$, perhaps with a slight underabundance of Co. Furthermore, Mn and Co show very different behaviours below $`\text{[Fe/H]}\simeq -2.5`$ (McWilliam et al. McWilliam95 (1995); Ryan et al. Ryan96 (1996)) with \[Mn/Fe\] decreasing strongly and \[Co/Fe\] increasing. Hence, the term “iron-peak” elements does not indicate the products of a single nuclear reaction. These elements may have different origins. Further study of Mn and of other “iron-peak” elements is desirable to understand their synthesis.
## Acknowledgements
This research was supported by the Danish Research Academy and the Chinese Academy of Science, and by CONACyT and DGAPA, UNAM in México. |
no-problem/9912/hep-th9912252.html | ar5iv | text | # Ground State of the Quantum Symmetric Finite Size XXZ Spin Chain with Anisotropy Parameter Δ=1/2
## Abstract
We find an analytic solution of the Bethe Ansatz equations (BAE) for the special case of a finite XXZ spin chain with free boundary conditions and with a complex surface field which provides for $`U_q(sl(2))`$ symmetry of the Hamiltonian. More precisely, we find one nontrivial solution, corresponding to the ground state of the system with anisotropy parameter $`\mathrm{\Delta }=\frac{1}{2}`$ corresponding to $`q^3=1`$.
Dedicated to Rodney Baxter
on the occasion of his 60th birthday.
It is widely accepted that the Bethe Ansatz equations for an integrable quantum spin chain can be solved analytically only in the thermodynamic limit or for a small number of spin waves or short chains. In this letter, however, we have managed to find a special solution of the BAE for a spin chain of arbitrary length N with N/2 spin waves.
It is well known (see, for example and references therein) that there is a correspondence between the Q-state Potts Models and the Ice-Type Models with anisotropy parameter $`\mathrm{\Delta }=\frac{\sqrt{Q}}{2}`$. The coincidence in the spectrum of an N-site self-dual Q-state quantum Potts chain with free ends with a part of the spectrum of the $`U_q(sl(2))`$ symmetrical 2N-site XXZ Hamiltonian (1) is to some extent a manifestation of this correspondence.
$$H_{xxz}=\underset{n=1}{\overset{N-1}{\sum }}\{\sigma _n^{+}\sigma _{n+1}^{-}+\sigma _n^{-}\sigma _{n+1}^{+}+\frac{q+q^{-1}}{4}\sigma _n^z\sigma _{n+1}^z+\frac{q-q^{-1}}{4}(\sigma _n^z-\sigma _{n+1}^z)\},$$
(1)
where $`\mathrm{\Delta }=(q+q^{-1})/2`$. This Hamiltonian was considered by Alcaraz et al. and its $`U_q(sl(2))`$ symmetry was described by Pasquier and Saleur . The family of commuting transfer-matrices that commute with $`H_{xxz}`$ was constructed by Sklyanin incorporating a method of Cherednik .
Baxter’s T-Q equation for the case under consideration can be written as
$$t(u)Q(u)=\varphi (u+\eta /2)Q(u-\eta )+\varphi (u-\eta /2)Q(u+\eta )$$
(2)
where $`q=\mathrm{exp}i\eta `$, $`\varphi (u)=\mathrm{sin}2u\mathrm{sin}^{2N}u`$ and $`t(u)=\mathrm{sin}2uT(u)`$. The $`Q(u)`$ are eigenvalues of Baxter’s auxiliary matrix $`\widehat{Q}(u)`$, where $`\widehat{Q}(u)`$ commutes with the transfer matrix $`\widehat{T}(u)`$. The eigenvalue $`Q(u)`$ corresponding to an eigenvector with $`M=N/2-S_z`$ reversed spins has the form
$$Q(u)=\underset{m=1}{\overset{M}{\prod }}\mathrm{sin}(u-u_m)\mathrm{sin}(u+u_m).$$
Equation (2) is equivalent to the Bethe Ansatz equations
$$\left[\frac{\mathrm{sin}(u_k+\eta /2)}{\mathrm{sin}(u_k-\eta /2)}\right]^{2N}=\underset{m\ne k}{\overset{M}{\prod }}\frac{\mathrm{sin}(u_k-u_m+\eta )\mathrm{sin}(u_k+u_m+\eta )}{\mathrm{sin}(u_k-u_m-\eta )\mathrm{sin}(u_k+u_m-\eta )}.$$
(3)
In a recent article it was argued that the criterion for the above mentioned correspondence is the existence of a second trigonometric solution for Baxter’s T-Q equation and it was shown that in the case $`\eta =\pi /4`$ the spectrum of $`H_{xxz}`$ contains the spectrum of the Ising model. In this article we limit ourselves to the case $`\eta =\pi /3`$. This case is in some sense trivial since for this value of $`\eta `$, $`H_{xxz}`$ corresponds to the 1-state Potts Model. We find only one eigenvalue $`T_0(u)`$ of the transfer-matrices $`\widehat{T}(u)`$ when Baxter’s equation (2) has two independent trigonometric solutions. Solving for $`T(u)=T_0(u)`$ analytically we find a trigonometric polynomial $`Q_0(u)`$ the zeros of which satisfy the Bethe Ansatz equations (3). The number of spin waves is equal to $`M=N/2`$. The corresponding eigenstate is the groundstate of $`H_{xxz}`$ with eigenvalue $`E_0=\frac{3}{2}(1-N)`$, as discovered by Alcaraz et al. numerically.
When does a second independent periodic solution exist? This question was considered in article . Here we use a variation more convenient for our goal.
Let us consider T-Q equation (2) for $`\eta =\frac{\pi }{L}`$, where $`L\ge 3`$ is an integer. Let us fix a sequence of spectral parameter values $`v_k=v_0+\eta k`$, where k are integers and write $`\varphi _k=\varphi (v_k-\eta /2)`$, $`Q_k=Q(v_k)`$ and $`t_k=t(v_k)`$. The functions $`\varphi (u)`$, $`Q(u)`$ and $`t(u)`$ are periodic with period $`\pi `$. So the sequences we have introduced are also periodic with period $`L`$, i.e., $`\varphi _{k+L}=\varphi _k`$, etc.
Setting $`u=v_k`$ in (2) gives the linear system
$$t_kQ_k=\varphi _{k+1}Q_{k-1}+\varphi _kQ_{k+1}.$$
(4)
The matrix of coefficients for this system has a tridiagonal form. Taking $`v_0\ne \frac{\pi m}{2}`$, where $`m`$ is an integer, we have $`\varphi _k\ne 0`$ for all $`k`$.
It is straightforward to calculate the determinant of the $`(L-2)\times (L-2)`$ minor obtained by deleting the two leftmost columns and two lowermost rows. It is equal to the product $`\varphi _1^2\varphi _2\varphi _3\mathrm{}\varphi _{L-1}`$, which is nonzero, hence the rank of $`M`$ cannot be less than $`L-2`$. Here we are interested in the case when the rank of M is precisely $`L-2`$ and we have two linearly independent solutions for equation (4). Let us consider the three simplest cases L = 3, 4 and 5. The parameter $`\eta `$ is equal to $`\frac{\pi }{3}`$, $`\frac{\pi }{4}`$ and $`\frac{\pi }{5}`$ respectively.
For $`L=3`$ the rank of $`M`$ is unity and we immediately get $`t_0=-\varphi _2`$, $`t_1=-\varphi _0`$ and $`t_2=-\varphi _1`$. Returning to the functional form, we can write
$$T_0(u)=t_0(u)/\mathrm{sin}2u=-\varphi (u+\pi /2)/\mathrm{sin}2u=\mathrm{cos}^{2N}u.$$
(5)
This is the unique eigenvalue of the transfer-matrix for which the T-Q equation has two independent periodic solutions. It is well known (see, for example, ) that the eigenvalues of $`H_{xxz}`$ are related to the eigenvalues $`t(u)`$ by
$$E=\mathrm{cos}\eta (N+2\mathrm{tan}^2\eta )+\mathrm{sin}\eta \frac{t^{}(\eta /2)}{t(\eta /2)}.$$
For the eigenstate corresponding to eigenvalue (5) we obtain $`E_0=\frac{3}{2}(1-N)`$. This is the groundstate energy which was discovered by Alcaraz et al. numerically.
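As an illustrative numerical check (added here, not part of the original paper), one can verify that the eigenvalue $`T_0(u)=\mathrm{cos}^{2N}u`$ indeed makes the $`L=3`$ linear system (4) degenerate: taking $`t_k=\mathrm{sin}2v_k\mathrm{cos}^{2N}v_k`$, the cyclic tridiagonal matrix drops to rank one, so two independent solutions $`Q_k`$ exist, while a generic perturbation of $`t`$ restores full rank. The sketch assumes the conventions of Eqs. (2)–(4):

```python
import numpy as np

N, eta, v0 = 2, np.pi/3, 0.4          # v0 != pi*m/2, so all phi_k are nonzero

def phi(u):
    # phi(u) = sin(2u) sin(u)^(2N), as defined below Eq. (2)
    return np.sin(2*u) * np.sin(u)**(2*N)

v = v0 + eta*np.arange(3)
ph = phi(v - eta/2)                   # phi_k = phi(v_k - eta/2)
t = np.sin(2*v) * np.cos(v)**(2*N)    # t_k = sin(2 v_k) T_0(v_k)

# row k of the cyclic system (4): phi_{k+1} Q_{k-1} - t_k Q_k + phi_k Q_{k+1} = 0
M = np.zeros((3, 3))
for k in range(3):
    M[k, (k - 1) % 3] += ph[(k + 1) % 3]
    M[k, k] += -t[k]
    M[k, (k + 1) % 3] += ph[k]

print(np.linalg.matrix_rank(M, tol=1e-8))                    # -> 1
print(np.linalg.matrix_rank(M - 0.05*np.eye(3), tol=1e-8))   # -> 3 for generic t
```

The explicit tolerance simply separates the machine-precision second singular value from the order-one first one; any generic shift of the diagonal makes the system uniquely solvable (rank 3).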
Below we find all solutions of Baxter’s T-Q equation corresponding to $`T(u)=T_0(u)`$. Zeros of these solutions satisfy the BAE (3). In particular we find $`Q_0(u)`$ corresponding to the physical Bethe state.
For $`L=4`$, deleting the second row and the fourth column of $`M`$ we obtain a minor with determinant $`\varphi _0\varphi _3(t_0+t_2)`$. It is zero when $`t_2=-t_0`$, i.e., $`t(u+\frac{\pi }{2})=-t(u)`$. Considering the other minors we obtain the functional equation
$$t(u+\pi /8)t(u-\pi /8)=\varphi (u+\pi /4)\varphi (u-\pi /4)-\varphi (u)\varphi (u+\pi /2).$$
This functional equation was used in to find $`t(u)`$ and show that this part of the spectrum of $`H_{xxz}`$ coincides with the Ising model. It would be interesting to find a corresponding $`Q(u)`$.
Lastly for $`L=5`$, minor $`M_{35}`$ (the third row and the fifth column are deleted) has determinant $`\varphi _0\varphi _4(t_0t_1+\varphi _1t_3-\varphi _0\varphi _2)`$. Setting this to zero we have
$$t(u)t(u+\pi /5)+\varphi (u+\pi /10)t(u+3\pi /5)-\varphi (u-\pi /10)\varphi (u+3\pi /10)=0.$$
It is not difficult to check that in this case all $`4\times 4`$ minors have zero determinant and that the rank of M is 3. Thus we have two independent periodic solutions of Baxter’s T-Q equation.
Note that this functional relation coincides with the Baxter-Pearce relation for the hard hexagon model . We have obtained the same truncated functional relations that have been obtained in with the same assumptions.
We now consider the solution of Baxter’s Equation for $`\eta =\frac{\pi }{3}`$ and $`T=T_0`$. For $`\eta =\frac{\pi }{3}`$ and transfer-matrix eigenvalue $`T_0(u)=\mathrm{cos}^{2N}u`$, the T-Q equation (2) reduces to
$$\varphi (u+3\eta /2)Q(u)+\varphi (u-\eta /2)Q(u+\eta )+\varphi (u+\eta /2)Q(u-\eta )=0.$$
(6)
This equation can be rewritten as
$$f(v)+f(v+2\pi /3)+f(v+4\pi /3)=0,$$
(7)
where $`f(v)=\mathrm{sin}v\mathrm{cos}^{2N}(v/2)Q(v/2)`$ has period $`2\pi `$. The trigonometric polynomial $`f(v)`$ is an odd function, so it can be written
$$f(v)=\underset{k=1}{\overset{K}{\sum }}c_k\mathrm{sin}kv,$$
(8)
where $`K`$ is the degree of $`f(v)`$. Then equation (7) is equivalent to $`c_{3m}=0`$, $`m\in Z`$.
The condition that $`f(v)`$ be divisible by $`\mathrm{sin}v\mathrm{cos}^{2N}(v/2)`$ is equivalent to
$$(\frac{d}{dv})^if(v)|_{v=\pi }=0,i=0,1,\mathrm{},2N.$$
For even $`i`$ this condition is immediate, whereas for $`i=2j-1`$ we use (8) to obtain
$$\underset{k=1,k\ne 3m}{\overset{K}{\sum }}(-1)^kc_kk^{2j-1}=0,j=1,2,\mathrm{},N.$$
(9)
Our problem is thus to find $`\{c_k\}`$ satisfying the last equation. This problem is a special case of a more general problem which can be formulated as follows. Given a set of different complex numbers $`X=\{x_1,x_2,\mathrm{},x_I\}`$ we seek another complex set $`B=\{\beta _1,\beta _2,\mathrm{},\beta _I\}`$ where $`\beta _i\ne 0`$ for some $`i`$, so that
$$\underset{i=1}{\overset{I}{\sum }}\beta _iP(x_i)=0$$
(10)
for any polynomial $`P(x)`$ of degree not more than $`N-1`$. It is clear that for $`I\le N`$ the system $`B`$ does not exist. If $`\beta _1\ne 0`$, for example, the product $`(x-x_2)(x-x_3)\mathrm{}(x-x_I)`$ provides a counterexample.
Let $`I=N+1`$. We try the polynomials
$$P_r=\underset{i=1,i\ne r}{\overset{N}{\prod }}(x-x_i),r=1,2,\mathrm{},N.$$
(11)
Condition (10) gives $`\beta _rP_r(x_r)+\beta _IP_r(x_I)=0`$ and we immediately obtain
$$\beta _r=\text{const}\underset{i=1,i\ne r}{\overset{N+1}{\prod }}(x_r-x_i)^{-1},$$
(12)
which is a solution because the system (11) forms a basis of the linear space of $`N-1`$ degree polynomials. So for $`I=N+1`$ we have a unique solution (up to an arbitrary nonzero constant) given by (12). It is easy to show that for $`I=N+\nu `$ we obtain a $`\nu `$-dimensional linear space of solutions.
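This construction is easy to test numerically. The sketch below (an illustration added here, with arbitrarily chosen nodes) builds $`\beta _r`$ from Eq. (12) with const $`=1`$ and checks Eq. (10) on all monomials of degree up to $`N-1`$, using exact rational arithmetic:

```python
from fractions import Fraction
import math

N = 5
xs = [Fraction(i*i + 1) for i in range(N + 1)]   # N+1 distinct nodes (any work)

# Eq. (12) with const = 1
beta = [Fraction(1) / math.prod(xs[r] - xs[i] for i in range(N + 1) if i != r)
        for r in range(N + 1)]

# Eq. (10) for the monomial basis P(x) = x^p, p = 0, ..., N-1
residuals = [sum(b * x**p for b, x in zip(beta, xs)) for p in range(N)]
print(residuals)    # -> exactly zero for every p
```

For $`P(x)=x^N`$ the same sum equals 1 (the standard divided-difference identity), so the solution is not trivially zero.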
Returning to (9) we consider $`N=2n`$, $`n`$ a positive integer. Fix $`I=N+1=2n+1`$. The degree $`K`$ becomes $`3n+1`$. It is convenient to use a new index $`k=|3\kappa +1|`$, where $`|\kappa |\le n`$. Equation (9) can be rewritten as
$$\underset{\kappa =-n}{\overset{n}{\sum }}\beta _\kappa (3\kappa +1)^{2(j-1)}=0,j=1,2,\mathrm{},N,$$
where we use new unknowns $`\beta _\kappa =(-1)^\kappa c_{|3\kappa +1|}|3\kappa +1|`$ instead of $`c_k`$. Using (12) and (8) we obtain the function $`f(v)`$
$$f(v)=\underset{\kappa =-n}{\overset{n}{\sum }}(-1)^\kappa \left(\begin{array}{c}2n+\frac{2}{3}\\ n-\kappa \end{array}\right)\left(\begin{array}{c}2n-\frac{2}{3}\\ n+\kappa \end{array}\right)\mathrm{sin}(3\kappa +1)v.$$
(13)
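Eq. (13) can be checked directly in exact arithmetic. The following sketch (illustrative code, not from the paper) expands $`f(v)`$ into coefficients $`c_k`$ of $`\mathrm{sin}kv`$ and verifies both requirements: $`c_{3m}=0`$ (Eq. (7)) and the moment conditions (9):

```python
from fractions import Fraction
from math import factorial
from collections import defaultdict

def binom(a, k):
    """Binomial coefficient with a rational (non-integer) top argument."""
    num = Fraction(1)
    for i in range(k):
        num *= a - i
    return num / factorial(k)

def f_coeffs(n):
    """Coefficients c_k of f(v) = sum_k c_k sin(kv), built from Eq. (13)."""
    c = defaultdict(Fraction)
    for kap in range(-n, n + 1):
        amp = (-1 if kap % 2 else 1) \
            * binom(2*n + Fraction(2, 3), n - kap) \
            * binom(2*n - Fraction(2, 3), n + kap)
        m = 3*kap + 1
        if m >= 0:
            c[m] += amp
        else:                       # sin(-|m| v) = -sin(|m| v)
            c[-m] -= amp
    return c

n = 2                               # N = 2n = 4, degree K = 3n + 1 = 7
c, N = f_coeffs(n), 2*2
print(all(c.get(3*m, Fraction(0)) == 0 for m in range(1, n + 1)))        # -> True
print(all(sum((-1)**k * a * k**(2*j - 1) for k, a in c.items()) == 0
          for j in range(1, N + 1)))                                     # -> True
```

For $`n=1`$ this gives $`f(v)=(32\mathrm{sin}v+20\mathrm{sin}2v-2\mathrm{sin}4v)/9`$.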
We recall that the solution of Baxter’s T-Q equation for $`T(u)=T_0(u)`$ is given by
$$Q_0(u)=f(2u)/(\mathrm{sin}2u\mathrm{cos}^{2N}u)$$
(14)
and its zeros $`\{u_k\}`$ satisfy the BAE (3).
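This can be verified numerically for a small chain; the sketch below (an added illustration using the $`n=1`$, $`N=2`$ solution) substitutes $`Q_0`$ into the T-Q equation (2) with $`T(u)=T_0(u)=\mathrm{cos}^{2N}u`$:

```python
import numpy as np

N, eta = 2, np.pi/3
c = np.array([32.0, 20.0, -2.0]) / 9.0      # n = 1 coefficients from Eq. (13)
k = np.array([1, 2, 4])

def f(v):   return (c * np.sin(np.multiply.outer(v, k))).sum(axis=-1)
def phi(u): return np.sin(2*u) * np.sin(u)**(2*N)
def Q0(u):  return f(2*u) / (np.sin(2*u) * np.cos(u)**(2*N))   # Eq. (14)

u = np.linspace(0.2, 1.2, 9)                # generic points, away from poles
lhs = np.sin(2*u) * np.cos(u)**(2*N) * Q0(u)          # t(u) Q_0(u)
rhs = phi(u - eta/2)*Q0(u + eta) + phi(u + eta/2)*Q0(u - eta)
print(np.max(np.abs(lhs - rhs)))            # -> ~1e-14: Q_0 solves Eq. (2)
```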
Another way to derive the above solution is to observe that the function $`f(v)`$ satisfies a simple second order linear differential equation. Indeed, it is easily seen that the functions $`F^{+}(x)`$ and $`F^{-}(x)`$, where
$$F^+(x)=\underset{\kappa =-n}{\overset{n}{\sum }}(-1)^\kappa \left(\begin{array}{c}2n+\frac{2}{3}\\ n-\kappa \end{array}\right)\left(\begin{array}{c}2n-\frac{2}{3}\\ n+\kappa \end{array}\right)x^{\kappa +\frac{1}{3}}\text{ and }F^{-}(x)=F^+(1/x).$$
(15)
are the two linearly independent solutions of the differential equation
$$\{((\theta +n)^2-1/9)/x+(\theta -n)^2-1/9\}F^+=0,$$
(16)
where $`\theta =x\frac{d}{dx}`$.<sup>1</sup><sup>1</sup>1Up to a change of variables this is just the standard hypergeometric differential equation, and in fact $`F^+(x)=\text{const }F(-2n,2/3-2n,5/3,x)x^{1/3-n}`$. Now the fact that there is a combination $`f(v)`$ of $`F^+(e^{3iv})`$ and $`F^{-}(e^{3iv})`$ which vanishes to order $`2N+1`$ at $`v=\pi `$ follows immediately from the fact that $`x=1`$ is a singular point of the differential equation (16) and that the indicial equation at this point has roots 0 and $`2N+1`$. In terms of the variable $`v`$, equation (16) becomes
$$\frac{d^2f}{dv^2}+6n\mathrm{tan}(3v/2)\frac{df}{dv}+(1-9n^2)f=0.$$
(17)
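As a quick sanity check (an added illustration), the $`n=1`$ polynomial obtained from Eq. (13), $`f(v)=(32\mathrm{sin}v+20\mathrm{sin}2v-2\mathrm{sin}4v)/9`$, can be substituted into Eq. (17) numerically:

```python
import numpy as np

n = 1
ks = np.array([1, 2, 4])
cs = np.array([32.0, 20.0, -2.0]) / 9.0     # n = 1 coefficients from Eq. (13)

f   = lambda v:  np.sum(cs * np.sin(ks * v))
fp  = lambda v:  np.sum(cs * ks * np.cos(ks * v))
fpp = lambda v: -np.sum(cs * ks**2 * np.sin(ks * v))

vs = [0.2, 0.5, 0.8, 1.2, 2.0]              # avoid the tan singularity at v = pi/3
res = [fpp(v) + 6*n*np.tan(1.5*v)*fp(v) + (1 - 9*n**2)*f(v) for v in vs]
print(max(abs(r) for r in res))             # -> ~1e-14, i.e. zero to rounding
```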
The zeros of $`f(v)`$, the density of which is important in the thermodynamic limit, are located on the imaginary axis in the complex $`v`$-plane. So it is convenient to make the change of variable $`v=is`$. It is also useful to introduce another function $`g(s)=f(is)/\mathrm{cosh}^{2n}(3s/2)`$. The differential equation for $`g(s)`$ is then
$$g^{\prime \prime }+\left(\frac{9n(2n+1)}{2\mathrm{cosh}^2(3s/2)}-1\right)g=0.$$
(18)
Let $`g(s_0)=0`$. For large $`n`$ we have in a small vicinity of $`s_0`$ an approximate equation $`g^{\prime \prime }+\omega _0^2g=0`$. This equation describes a harmonic oscillator with frequency $`\omega _0=3n/\mathrm{cosh}(3s_0/2)`$. The distance between nearest zeros is approximately $`\mathrm{\Delta }s=\pi /\omega `$ and we obtain the following density function which describes the number of zeros per unit length
$$\rho (s)=1/\mathrm{\Delta }s=\omega /\pi =3n/(\pi \mathrm{cosh}(3s/2)).$$
(19)
We note that equation (18) has a history as rich as the BAE. Eckart used the Schrödinger equation with the bell-shaped potential $`V(r)=-G/\mathrm{cosh}^2r`$ for phenomenological studies in atomic and molecular physics. Later it was used in chemistry, biophysics and astrophysics, just to name a few. For more recent references see, for example, .
We are grateful to V. Bazhanov, A. Belavin, L. Faddeev and G. Pronko for useful discussions. We would like to thank M. Kashiwara and T. Miwa for their kind hospitality in RIMS. This work is supported in part by RBRF–98–01–00070, INTAS–96–690 (Yu. S.). V. F. is supported by a JSPS fellowship. |
no-problem/9912/cond-mat9912370.html | ar5iv | text | # Ferromagnetic Nanowires with Superconducting Electrodes
## 1 INTRODUCTION
Electronic devices exploiting the spin of conduction electrons rather than their charge have been proposed recently as an alternative to conventional electronics (see e.g. Ref. 1 and references therein). Ferromagnetic materials, being a natural source of spin-polarized electrons for such devices, are the focus of intensive experimental and theoretical investigations. Hybrid ferromagnet/superconductor ($`FS`$) structures are prospective candidates for device application and can be useful tools for studying properties of nanometer-size ferromagnets. Recently the measurements of the spin polarization of direct current have been reported for ballistic point contacts . These experiments were in reasonable agreement with both the band structure of ferromagnets and the general picture of Andreev reflection on the $`FS`$ interface . In contrast, recent experiments on diffusive $`FS`$ nanostructures showed a dramatic disagreement between the theory and the data. While the theory predicts any superconducting correlation to decay in the diffusive ferromagnet over the distance $`\xi _F=\sqrt{\frac{\mathrm{}D}{k_BT_C}}`$ governed by the exchange energy of the ferromagnet, which is of the order of $`k_BT_C`$ ($`T_C`$ is the Curie temperature, $`D`$, the diffusion constant of the ferromagnet), the experimental results suggest that the influence of the superconductor penetrates into the ferromagnet over a distance up to $`10^2`$ times larger than $`\xi _F`$.
Here we report further experimental studies of mesoscopic $`FS`$ structures of various geometries. We find that the conductance changes can be of both negative and positive sign at the superconducting transition, with the amplitude of the changes up to $`10^2`$ times larger than theoretical values. We demonstrate that the sign and the amplitude of the effect depend strongly on the interface transparency. Our preliminary experiments with $`FS`$ interferometers showed no phase-periodic oscillations down to the level of 0.1 $`e^2/h`$.
## 2 EXPERIMENTAL
The samples were fabricated using multiple e-beam lithography. Ni/Al structures were thermally evaporated
in a vacuum of $`10^6`$ mbar onto a silicon substrate kept at room temperature. The first layer was a ferromagnet (Ni) 40nm thick, the second layer, a superconductor (Al) 60 nm. Various geometries studied are shown in Fig. 1. The resistivities of the films varied from sample to sample and were in the range of 10 - 50 $`\mu \mathrm{\Omega }`$ cm for Ni and 1.0 - 1.5 $`\mu \mathrm{\Omega }`$ cm for Al.
The resistance of the structures was measured by the four-terminal method. Current and potential leads are marked $`I`$ and $`V`$ in Fig. 1. The resistance of the structures was measured using both dc and ac signals in the temperature range from 0.27K up to 50K and in magnetic fields up to 5T.
Special care was taken to create interfaces of controllable quality. Before the deposition of the second layer, the contact area was Ar<sup>+</sup> plasma etched. By varying etching parameters, we obtained interfaces of a wide range of transparencies. Wide checking layers were analyzed by Secondary Ion Mass-Spectroscopy (SIMS) with the primary beam of Cs<sup>+</sup> ions. For the best samples, the concentrations of oxygen and carbon at the Ni/Al interface were in the range of 0.1-0.01 of a single atomic layer.
## 3 RESULTS
Figures 2 a and b
show magnetoresistance of the $`FS`$ junction of sample $`\mathrm{\#}`$1 (geometry of Fig. 1a) measured using contacts ($`I1,I2,V1,V2`$) at temperatures well above ($`T`$=1.3K) and below ($`T`$=0.27K) the superconducting transition. At $`T`$=1.3K the junction shows anisotropic magnetoresistance with hysteresis typical for ferromagnetic conductors. At temperatures below the transition, the magnetic field dependence changes drastically. A resistance drop is observed at the superconducting transition of Al in the magnetic field. The amplitude of the drop is close to that observed in the temperature dependence . The hysteretic behaviour is also observed below the superconducting transition.
Some samples showed negative magnetoresistance at small magnetic fields (Fig. 2c). There is a correlation of this feature with the critical current of the superconducting transition of the adjacent superconducting structure ($`I2,I3,V2,V3`$) presented in Fig. 3d. Note that the critical current for sample $`\mathrm{\#}`$2 goes nearly to zero at small magnetic fields, while that for the sample $`\mathrm{\#}`$1 stays relatively large (about 20$`\mu `$A). The critical temperature of the superconducting transition of the structure measured using $`I2,I3,V2,V3`$ at zero magnetic field was within the range 1.0 - 1.05 K for all samples of this geometry.
To study properties of the $`FS`$ interface itself we used a cross geometry shown in Fig. 1b. In Fig. 3 we show differential voltage-current characteristics of three interfaces with different transparencies. We see a cross-over from negative to positive change in the resistance of the interface versus applied current upon increase of the interface resistance. The cross-over takes place at a specific interface resistance of about 3$`\times 10^{9}`$ $`\mathrm{\Omega }`$cm<sup>2</sup>. Temperature dependencies reflect the same tendency .
Figures 4 a) and b) show 3D applied current - magnetic field diagrams of samples $`\mathrm{\#}`$ 3 and $`\mathrm{\#}`$5 respectively. There are a number of magnetic field independent peaks seen on these 3D diagrams. These peaks are small in amplitude but are clearly seen on all measured 3D diagrams of the contacts, including our best ones with interface resistance below 1$`\mathrm{\Omega }`$.
Proximity effect measured on samples with intermediate values of the interface transparency (20$`\mathrm{\Omega }<R_b<50\mathrm{\Omega }`$) showed an increase in the resistance upon the superconducting transition in the geometry of Fig. 1c. Figure 5 shows magnetoresistance of three samples of different length, $`L`$. All three show non-monotonic magnetoresistance with maxima in resistance at a magnetic field about 200 Oe. The final changes in the resistance, $`R(H=0)-R(H=700Oe)`$, are larger for longer samples.
Figure 6 shows temperature dependence and magnetoresistance curves for sample $`\mathrm{\#}`$9, geometry Fig. 1c. The temperature dependence shows a peak in resistance at the onset of superconductivity. The magnetoresistance curve shows a hysteresis in superconducting state similar to that of Fig. 2, but with opposite sign of resistance changes. The magnetoresistance of this structure above the transition showed the usual negative magnetoresistance of small ($`\mathrm{\Delta }R=0.05\mathrm{\Omega }`$) amplitude.
We have also studied the electron transport in the Andreev interferometer geometry of Fig. 1d. The magnetoresistance of the structure was measured with an accuracy down to $`\mathrm{\Delta }R/R^2\simeq `$ 0.1 $`e^2/h`$. In the first two samples tested, no phase-sensitive oscillations were detected so far.
## 4 DISCUSSION
Our experimental data confirm the existence of long-range effects in mesoscopic ferromagnet/superconductor structures, which was established in earlier works . The changes in conductance greatly exceed the value of $`e^2/h`$, which excludes a mesoscopic origin of the effect observed. Such a giant amplitude of the changes in conductance is yet to be explained.
There were several attempts to account for the long-range superconducting proximity effect in ferromagnets. The authors of Ref. 11 suggest that due to spin-orbit interaction in the superconductor, the superconducting wave function may have a triplet component. The lifetime of the triplet state in the ferromagnet is much larger than that of a singlet one, therefore this mechanism leads to the long-range effect. However, the estimate based on the formulas presented in Ref. 11 gives values of the relative changes in conductance, $`\mathrm{\Delta }G/G`$, more than $`10^2`$ times smaller than the experimental values of a few percent.
The contribution of the interface to the conductance of hybrid $`FS`$ structures has been addressed theoretically in both ballistic and diffusive cases. The resistance of the diffusive $`FS`$ interface was predicted to be always larger than that of the corresponding $`FN`$ one. In contrast, in Fig. 3 we see a decrease in the resistance of the $`FS`$ interface for samples with higher interface transparencies. For any interface transparency, we estimate the effect of the shunting by the small part of the superconductor to be of the order of or less than the resistance of one square of Al film. In our case it is 0.1 - 0.3 $`\mathrm{\Omega }`$. Since the changes in the resistance in Fig. 3 are considerably larger, we believe that shunting cannot explain the difference in behaviour.
The cross-over from positive changes in resistance to negative ones presented in Fig. 3 can be accounted for using the phenomenological analysis of Ref. 10. According to the latter the changes in the resistance of the ferromagnetic wire, $`\mathrm{\Delta }R_{FS}=R_{FN}-R_{FS}`$, upon the superconducting transition can be written as :
$$\frac{\mathrm{\Delta }R_{FS}}{R_{FN}}=1-\frac{1}{\eta (1-P(1-\alpha ))},$$
(1)
where $`P`$ is the spin polarization and $`\eta `$ and $`\alpha `$ are phenomenological parameters. $`\eta `$ is responsible for the conductance enhancement due to Andreev reflection and varies in the range $`1\le \eta \le 2`$. Parameter $`\alpha `$ is proportional to the amount of the spin polarized current in proximity to the superconductor and varies in the range $`0\le \alpha \le 1`$. Case $`\alpha =0`$ corresponds to total spin filtering (no spin polarized current in the proximity of the ferromagnetic wire). While the Andreev reflections increase the conductance of the $`FS`$ structure, the spin filtering decreases it. The competition between the two determines the final sign of the conductance changes. The two contributions may have different energy and magnetic field dependencies which may lead to non-monotonic dependencies like the ones presented in Fig. 5.
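The sign structure of Eq. (1) is easy to see by direct evaluation; the following snippet (an added illustration, with arbitrarily chosen parameter values) shows both regimes:

```python
def dR_over_R(eta, P, alpha):
    """Eq. (1): relative resistance change of the ferromagnetic wire."""
    return 1.0 - 1.0 / (eta * (1.0 - P * (1.0 - alpha)))

# Weak spin filtering (alpha -> 1): Andreev enhancement wins, resistance drops
print(dR_over_R(eta=1.5, P=0.4, alpha=1.0))   # -> 0.333...
# Strong spin filtering (alpha -> 0) with large P: resistance rises instead
print(dR_over_R(eta=1.1, P=0.9, alpha=0.0))   # -> -8.09...
```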
Magnetic field independent peaks on the $`dV/dI`$ versus current, magnetic field 3D diagrams (Fig. 4), are present on all measurements of the structures in the geometry of Fig. 1b. The nature of the peaks is unclear.
The peak in resistance near the superconducting transition seen in Fig. 6a may be explained by the charge imbalance effect caused by the penetration of electric field into the superconductor . This resistance anomaly near the superconducting transition has been suggested to be a measure of the spin polarization of the ferromagnet and it requires additional experimental study.
The interesting feature of the presented data is the hysteresis in the magnetoresistance of the $`FS`$ junctions. It is seen in Fig. 2 b) and c) and in Fig. 6 b). Note that the sign of the effect is opposite but hysteresis is present in both cases. This effect needs further investigation.
## 5 CONCLUSIONS
In conclusion, we have presented a systematic experimental study of mesoscopic ferromagnet/superconductor structures of various geometries. In structures with high interface transparency we measured a giant long-range proximity effect. The values of interface transparency at which the cross-over from positive to negative changes in the resistance at the superconducting transition occurs were found. Different theoretical approaches were discussed. While the theory can qualitatively explain the long-range superconducting proximity effect it fails to account for the amplitude of the effect. Further experimental and theoretical work is required.
## ACKNOWLEDGMENTS
We thank C. Lambert, A. Volkov, A. Golubov, A. Zagoskin, and V. Chandrasekhar for useful discussions and A.F. Vyatkin for the SIMS analysis of the samples. We appreciate technical support from M. Venti. Financial support from EPSRC GR/L94611 is acknowledged. |
no-problem/9912/cond-mat9912120.html | ar5iv | text | # Structure and Dynamics of Surface Adsorbed Clusters
## Abstract
Extensive numerical simulations are reported for the structure and dynamics of large clusters on metal(100) surfaces. Different types of perimeter hopping processes make the center-of-mass of the cluster follow a random walk trajectory. Then, a diffusion coefficient $`D`$ can be defined as $`\underset{t\mathrm{}}{lim}D(t)`$, with $`D(t)=d^2/(4t)`$ and $`d`$ the displacement of the center-of-mass. In the simulations, the dependence of the diffusion coefficient on those perimeter hopping processes can be analyzed in detail, since the relations between different rates for the processes are explicitly considered as parameters.
Preprint: LANL (1999)
The different diffusion processes that take place on surfaces have, clearly, an important role in many technological areas. Diffusion of individual atoms and clusters has been studied for a long time with different experimental techniques and, more recently, using scanning tunneling microscopy (STM) . From a theoretical point of view, several lattice-gas kinetic Monte Carlo simulations have addressed the question of the dependence of the cluster diffusion coefficient $`D`$ on the cluster size in square or triangular lattices. Recently, the diffusion of metal clusters on metal surfaces has received extensive attention due to the fact that some experimental and theoretical studies have led to the expectation that only small, two-dimensional (2D) clusters were able to diffuse. The larger two-dimensional clusters, also observed on the surfaces, were not expected to diffuse. However, recent experimental evidence from STM studies has become available showing that very large 2D Ag clusters (containing $`N=10^2`$$`10^3`$ atoms) are indeed able to diffuse on Ag(100) substrates.
From the previous theoretical and experimental work, it is now clear that the movement of the cluster is a consequence of several atomic-scale processes, taking place at the periphery of the cluster, that make the center-of-mass follow a random walk trajectory. For instance, for small clusters ($`N<20`$) it has been proposed that the mechanism of diffusion is the short range motion of a single atom away from the periphery followed by a regrouping of the cluster around the peripheral vacancy . Evidence of concerted gliding is also available . On the other hand, the movement of clusters composed of a large number of atoms has been considered a consequence of two main mechanisms taking place at the boundary of the cluster: periphery diffusion (PD) and 2D evaporation-condensation (EC). In the PD process several types of atomic motions along the periphery of the cluster are responsible for the displacement of the center-of-mass. However, the atoms do not leave the cluster while executing those movements. In the EC process the cluster is considered to be in quasi-equilibrium with a dilute 2D gas of atoms, diffusing very quickly on the metal surface surrounding the cluster. Both types of process are non-exclusive and can take place at the same time. Of course, it can be anticipated that the energy barriers for the PD mechanism are much lower than the energy needed for an atom to leave the cluster and, in principle, the PD process can be expected to be primarily responsible for the movement of the cluster.
At this point, the question of the dependence of the diffusion coefficient on the number of atoms in the cluster ($`N`$) becomes relevant. The diffusion coefficient $`D`$ can be defined as
$$D=\underset{t\mathrm{}}{lim}D(t),D(t)=d^2/(4t),$$
(1)
with $`d`$ the displacement of the center-of-mass.
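The estimator implied by definition (1) can be sketched as follows (an added illustration on plain lattice random walks standing in for the recorded center-of-mass trajectories; walker counts and step numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
walkers, steps = 2000, 400
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

# paths[w, t] = position of walker w after t+1 unit-time steps
paths = moves[rng.integers(0, 4, size=(walkers, steps))].cumsum(axis=1)

t = np.arange(1, steps + 1)
d2 = (paths.astype(float)**2).sum(axis=2).mean(axis=0)   # <d^2>(t)
D_t = d2 / (4 * t)
print(D_t[-1])   # -> ~0.25: for unit steps <d^2> = t, so D = lim D(t) = 1/4
```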
It has been suggested that the value of the diffusion coefficient $`D`$ behaves as $`D\propto N^{-\alpha }`$. Different values for the exponent $`\alpha `$ have been proposed depending on which diffusion mechanism is considered to facilitate the movement of the cluster . In this sense, Monte Carlo simulations for cluster diffusion based on the PD mechanism are available, showing $`\alpha \simeq 1.5`$ to $`2.0`$ . However, these values of $`\alpha `$ lead to a strong variation of the diffusion coefficient $`D`$ as a function of the number $`N`$, which is not consistent with the experimental STM data available . On the other hand, in reference it is claimed that for circular clusters the diffusion coefficient behaves as $`D\propto d^{-n}`$, where $`d`$ is the diameter of the cluster and the integer $`n`$ identifies different diffusion mechanisms. When the center of mass motion occurs by adatom diffusion along the periphery of the cluster $`n=3`$, while $`n=2`$ or $`1`$ when the cluster diffusion occurs by correlated or uncorrelated adatom evaporation and condensation, respectively.
Then it is clear that a deeper understanding of the way in which 2D large clusters move can be obtained using numerical simulations. Following this idea, we set up Monte Carlo simulations for the kinetics in which each of the above described processes takes place at its own rate. Diffusion of atoms along the cluster perimeter proceeds at a rate $`r_e`$ and evaporation-condensation at a rate $`r_c`$. Both are related by the detailed-balance relation $`\left(r_c/r_e\right)=exp(-\mathrm{\Delta }E/k_BT)`$, where $`\mathrm{\Delta }E`$ is the energy difference between the two processes. The relation between the rates is kept as a parameter in the simulations.
Our Monte Carlo simulation proceeds as follows. We start with a ‘square’ cluster of $`N`$ atoms at the center of a $`1500\times 1500`$ lattice substrate . At a randomly selected site from the periphery of the cluster two main events can take place.
(a) With probability $`r_c/r_e`$, an evaporation or condensation event is selected. If the evaporation event is going to take place, an atom is detached from the cluster and goes into the surrounding two dimensional gas of fast, diffusing atoms. If the condensation event is going to take place, an atom is attached to a randomly selected place on the periphery of the cluster since it is considered to come from the surrounding two dimensional gas. In this way, the average number of atoms in the cluster is maintained equal to $`N`$.
(b) With probability $`1-(r_c/r_e)`$, an atom is relaxed to the nearest-neighbor or next-nearest-neighbor empty site where it finds that more bonds are saturated. This is the mechanism that makes the cluster stay connected (i.e. not dissolve) in the $`r_e\gg r_c`$ limit. We restrict ourselves to this limit by letting $`r_c/r_e`$ be less than 0.2.
After a certain thermalization time, we start to record the position of the center-of-mass of the cluster with respect to the origin of coordinates considered to be at the center of the lattice representing the substrate. Time is measured in Monte Carlo steps per atom.
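The event loop described above can be sketched in a few lines of Python. This is only an illustration, not the code used for the paper: the cluster is kept as a set of occupied lattice sites, and the bond-counting and re-attachment rules are simplified assumptions.

```python
import random

NN  = [(1, 0), (-1, 0), (0, 1), (0, -1)]            # nearest-neighbor offsets
NNN = NN + [(1, 1), (1, -1), (-1, 1), (-1, -1)]     # + next-nearest neighbors

def saturated_bonds(site, cluster):
    """Number of occupied nearest-neighbor sites of `site`."""
    x, y = site
    return sum((x + dx, y + dy) in cluster for dx, dy in NN)

def perimeter(cluster):
    """Cluster atoms with at least one empty nearest-neighbor site."""
    return [(x, y) for (x, y) in cluster
            if any((x + dx, y + dy) not in cluster for dx, dy in NN)]

def mc_step(cluster, rc_over_re, rng):
    """One Monte Carlo event at a randomly selected perimeter atom."""
    x, y = rng.choice(perimeter(cluster))
    if rng.random() < rc_over_re:
        # (a) evaporation-condensation: detach the atom and re-attach one
        # at a random empty site on the periphery, keeping N fixed.
        cluster.remove((x, y))
        edge = [(px + dx, py + dy) for (px, py) in perimeter(cluster)
                for dx, dy in NN if (px + dx, py + dy) not in cluster]
        cluster.add(rng.choice(edge))
    else:
        # (b) periphery relaxation: hop to the empty NN/NNN site that
        # saturates the most bonds (simplification: ties allow the move).
        cluster.remove((x, y))
        empties = [(x + dx, y + dy) for dx, dy in NNN
                   if (x + dx, y + dy) not in cluster]
        best = max(empties, key=lambda s: saturated_bonds(s, cluster))
        if saturated_bonds(best, cluster) >= saturated_bonds((x, y), cluster):
            cluster.add(best)
        else:
            cluster.add((x, y))

def center_of_mass(cluster):
    n = len(cluster)
    return (sum(x for x, _ in cluster) / n, sum(y for _, y in cluster) / n)

rng = random.Random(1)
cluster = {(x, y) for x in range(5) for y in range(5)}  # 'square' N = 25 cluster
N = len(cluster)
for _ in range(2000):
    mc_step(cluster, rc_over_re=0.1, rng=rng)
```

Recording `center_of_mass` over many independent runs and extracting the slope of its mean-square displacement gives the diffusion coefficient discussed in the text.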
The simulation results are shown in Figs. 1 to 3. The results presented have been averaged over 100 independent runs, for 10 different cluster sizes containing from 121 to 961 particles.
Fig. 1 shows a representative plot of the overall behavior of the diffusion coefficient. It can be seen that, in agreement with the available experimental results , the simulations show a fast decay in the value of the diffusion coefficient at early times, for $`\left(r_c/r_e\right)=0.1`$ and $`121\le N\le 961`$. After this fast decay the cluster reaches a diffusive, random-walk-like behavior, demonstrated by the plateau at $`t\rightarrow \mathrm{\infty }`$. This figure confirms that the main characteristics of the movement of the cluster are well reproduced in the simulations. In particular, the very fast decay at early times has been associated with a certain back-correlation effect, and it has been observed in previous cluster diffusion simulations where it was considered to be related to the cluster connectivity .
In Fig. 2 the diffusion coefficient $`D`$ is plotted versus $`1/N`$. From the plot it can be seen that $`D`$ depends on $`N`$ as $`D=D_0\left(r_c/r_e\right)N^{-\alpha }`$ with $`\alpha =1`$. The most notable characteristic is the dependence of the pre-factor $`D_0`$ on the evaporation-relaxation ratio $`r_c/r_e`$, a dependence that can only be observed in a numerical simulation that allows full control of this ratio. It can be seen that the cluster has a greater $`D`$ for greater values of $`r_c/r_e`$. This result is somewhat surprising: it tells us that, although the predominant process at the periphery is the PD process, the EC process has a greater effect on the mobility of the cluster. The simulations thus favor the EC mechanism as the one primarily responsible for the movement of the cluster, in accordance with the available experimental data.
From Fig. 3, it can be seen that the exponent $`\alpha `$ does not depend on the ratio $`r_c/r_e`$. By linearly fitting the points of the plot, an average value of $`\alpha =1.092\pm 0.123`$ is obtained, which differs from the value deduced from the experimental data reported in reference . However, the experimental measurements of $`D`$ as a function of $`N`$ carry a sizable experimental error and cannot be considered conclusive. On the other hand, the value $`\alpha \approx 1.0`$ is in accordance with the theoretically predicted value of reference ($`n=1`$) when the evaporation-condensation process is uncorrelated.
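An exponent fit of the kind quoted above is just a straight-line fit on a log-log scale. A minimal numpy sketch (with synthetic data standing in for the measured diffusion coefficients):

```python
import numpy as np

def fit_power_law(N, D):
    """Least-squares fit of D = D0 * N**(-alpha) on a log-log scale."""
    slope, intercept = np.polyfit(np.log(N), np.log(D), 1)
    return -slope, np.exp(intercept)

# synthetic data with alpha = 1, using a few of the cluster sizes from the text
N = np.array([121.0, 225.0, 361.0, 529.0, 729.0, 961.0])
D = 0.05 / N
alpha, D0 = fit_power_law(N, D)
```

With noisy data the same fit returns the exponent together with an uncertainty via the covariance matrix of `np.polyfit`.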
Finally, in Fig. 4 the behavior of the diffusion coefficient at early evolution times is plotted on a log-log scale for $`r_e=1.0`$ and three different values of $`r_c`$. From this plot a new characteristic of the movement is identified: the diffusion coefficient behaves as $`D(t)\sim t^{-\beta }`$ with $`\beta =0.812\pm 0.021`$, showing a nontrivial scaling behavior at early times. In principle, one could anticipate that this scaling behavior is closely related to the back-correlation effect mentioned above, but a physical explanation of this characteristic requires more experimental and theoretical work.
In conclusion, we have presented simulation results for the diffusion of large Ag clusters over Ag(100) surfaces. The simulations reproduce very well the main characteristics of the movement of the cluster according to the available experimental results. The dependence of the diffusion coefficient on the number of atoms in the cluster has been explored, as well as other features of the mechanisms by which large clusters diffuse. The simulations have also identified new characteristics of the movement that suggest further experimental and theoretical work. A deeper understanding of the basic cluster diffusion mechanisms will have important consequences for the techniques that make epitaxial growth possible.
# Finite-size scaling and conformal anomaly of the Ising model in curved space
## Abstract
We study the finite-size scaling of the free energy of the Ising model on lattices with the topology of the tetrahedron and the octahedron. Our construction allows us to change the length scale of the model without altering the distribution of the curvature in the space. We show that the subleading contribution to the free energy follows a logarithmic dependence, in agreement with the conformal field theory prediction. The conformal anomaly is given by the sum of the contributions computed at each of the conical singularities of the space, except when perfect order of the spins is precluded by frustration in the model.
The introduction of conformal field theory about fifteen years ago can be considered one of the most important developments towards the understanding of critical phenomena in two dimensions. It built on the progress achieved ten years earlier through the application of renormalization-group ideas to critical phenomena. Both the renormalization group and conformal field theory share the idea of scaling, which plays a central role in the confrontation with experimental measurements near a critical point, as well as in numerical simulations where the critical behavior is approached by enlarging the size of the system.
The influence of the gravitational background on critical phenomena is, however, largely unknown. This problem can be approached from the point of view of conformal field theory, which is able to deal with two-dimensional backgrounds related to the plane by conformal transformations. Although the kind of geometries one can handle in this way is restricted, we have learned about the interesting properties of conformal field theories on half-planes, cylinders and conical singularities.
In the case of a two-dimensional smooth manifold, it has been shown on general grounds that the free energy $`F`$ has corrections that depend directly on the central charge $`c`$ of the conformal field theory. As a function of the length scale $`L`$ of the system, the free energy has to behave in the form
$$F(L)=aL^2+b\mathrm{log}(L)+\mathrm{}$$
(1)
where $`b=c\chi /12`$ for a manifold with Euler characteristic $`\chi `$. A similar formula applies to the case of a conical singularity. Logarithmic corrections to the free energy also arise in association with corners in higher-dimensional spaces . Until now, though, no examples of statistical models have been considered in which the logarithmic corrections due to the curvature have been tested. The question is actually nontrivial since, as we will see below, the simplest lattices that make feasible the construction of models with such scaling behavior do not give rise to smooth manifolds in the continuum limit.
In this paper we study the Ising model on lattices whose continuum limits have the topology of the tetrahedron and the octahedron. Our aim is to discern whether an expression like (1) applies, providing then a check of the conformal field theory description on the curved background. To accomplish this task we take the thermodynamic limit along a series of honeycomb lattices which are built by assembling triangular pieces like the one in Fig. 1 as the faces of the given polyhedron. Our choice of this kind of lattice is determined by the feasibility of growing it regularly while preserving the geometry of the polyhedron. From the point of view of simplicial geometry, the local curvature at each $`n`$-fold ring of the lattice is given by $`R_i=\pi (6-n_i)/n_i`$. Thus, our honeycomb lattices embedded on the tetrahedron, as well as on the octahedron, keep the same distribution of the curvature (nonvanishing only at the three-fold and four-fold rings that encircle the vertices of the respective polyhedra), regardless of the size of the lattice.
The critical behavior of the Ising model on the tetrahedron has been discussed in Ref. . It has been shown there that the critical exponents $`\alpha ,\beta `$ and $`\gamma `$ do not deviate in the curved geometry from the known values of the Ising model on a flat space. In the present Letter we focus on the imprints that the curvature may leave in the scaling behavior of the statistical system. The $`\mathrm{log}(L)`$ correction to the free energy is known in the case of a conical singularity on a two-dimensional surface. By measuring the $`\mathrm{log}(L)`$ scaling of the free energy we are then making a nontrivial check of the conformal field theory prediction, since we are dealing with geometries that are not small perturbations with respect to flat space. This may also validate our construction as an alternative to the determination of the conformal anomaly in spaces with the topology of the cylinder.
We begin by analyzing the Ising model on the octahedron. Contrary to the case of the tetrahedron, where the model is frustrated, the present lattices are bipartite, since they are built by assembling four of the pieces in Fig. 1 around each vertex. The partition function then has to be symmetric under the change of sign of the coupling constant, $`\beta \rightarrow -\beta `$.
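The $`\beta \rightarrow -\beta `$ symmetry of a bipartite lattice, and its loss under frustration, can be checked by brute-force enumeration on toy graphs (a sketch; the 4-site ring plays the role of a bipartite lattice, and the triangle that of a frustrated one):

```python
import itertools, math

def log_Z(beta, edges, n):
    """Brute-force Ising partition function on a small graph with n spins."""
    Z = sum(math.exp(beta * sum(s[i] * s[j] for i, j in edges))
            for s in itertools.product((-1, 1), repeat=n))
    return math.log(Z)

square   = [(0, 1), (1, 2), (2, 3), (3, 0)]   # bipartite: Z(beta) = Z(-beta)
triangle = [(0, 1), (1, 2), (2, 0)]           # frustrated: symmetry lost
```

Flipping the spins on one sublattice of the square maps $`\beta `$ to $`-\beta `$ term by term, which is why the symmetry is exact there; no such sublattice exists for the triangle.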
The thermodynamic limit has to be taken in order to approach the critical point of the model. This may pose a problem, since we want to measure a scaling behavior like (1) that is supposed to be well-defined at the critical point of the continuum model. On a lattice of finite length scale $`L`$, we may only define a “pseudocritical” coupling $`\beta _L`$ at which some observable, like for instance the specific heat, reaches its maximum value. We will discern later what is the correct choice to extract the genuine scaling of the conformal field theory from the finite-size data.
From the technical point of view, high-precision measurements are needed in order to observe neatly a $`\mathrm{log}(L)`$ dependence of the free energy, after subtraction of the leading contribution $`L^2`$. The dimer approach affords such a possibility, by translating the computation of the partition function into the evaluation of the Pfaffian of an antisymmetric operator $`A`$. This is given by a coordination matrix of what is called the decorated lattice, which is obtained in our case by inserting a triangle in place of each of the points of the original lattice. A detailed example of how to build the coordination matrix for planar lattices similar to ours can be found in Ref. . The partition function can be written in the form
$$Z=(\mathrm{cosh}\beta )^l\left(\mathrm{det}(A)\right)^{1/2}$$
(2)
where $`l`$ is the number of links of the original lattice. From (2) it is clear that the partition function and the free energy $`F=\mathrm{log}(Z)`$ can be computed with high precision on reasonably large lattices, as long as the evaluation of the corresponding determinant remains feasible.
In the case of our lattices in curved space, the determination of the logarithmic correction to $`F`$ is facilitated by the fact that finite-size scaling sets in at very small lattice size. The honeycomb lattices embedded on the octahedron form a family with increasing number of sites $`n=24N^2`$, $`N=1,2,\mathrm{\dots }`$ The pseudocritical couplings approach the critical coupling $`\beta _{\mathrm{\infty }}=\mathrm{log}(2+\sqrt{3})/2\approx 0.6585`$ following the finite-size scaling law
$$\left|\beta _N-\beta _{\mathrm{\infty }}\right|\sim 1/N^\lambda $$
(3)
Usually $`\lambda `$ is related to the critical exponent $`\nu `$ of the correlation length, $`\lambda =1/\nu `$. One can check, however, that in the case of the octahedron $`\lambda `$ is appreciably higher than the expected value $`\nu ^{-1}=1`$. The values of $`\beta _N`$, which we have computed by looking at the maxima of the specific heat for $`N=2`$ up to $`N=7`$ (1176 lattice sites), are given in Table I. By carrying out a sequence of fits, taking four consecutive lattices for each of them, we obtain the respective estimates of the exponent $`\lambda _{\mathrm{octa}}`$, in order of increasing lattice size: 1.825, 1.809, 1.798, 1.794.
We present in Fig. 2 a logarithmic plot of the values of $`\beta _N-\beta _{\mathrm{\infty }}`$ versus $`N`$ and the linear fit for the last four points. The small deviation of the points from the law (3), even for the smaller lattices, is remarkable, and ensures that the estimates for $`\lambda _{\mathrm{octa}}`$ are converging to a value different from that expected in flat space. The exponent for the octahedron is very close to the exponent obtained in the case of the tetrahedron, for a wider range of lattice sizes, $`\lambda _{\mathrm{tetra}}\approx 1.745(2)`$ . We recall that these estimates do not point at a critical exponent of the correlation length different from $`\nu =1`$, but rather at a violation of the Ferdinand-Fisher criterion for the determination of $`\nu `$ in the curved spaces.
We have computed the free energy $`F_N`$ for the members $`N=1`$ up to $`N=7`$ of the series of honeycomb lattices embedded on the octahedron. The values are given in Table I. We have observed a clear $`\mathrm{log}(N)`$ correction to the leading behavior $`N^2`$ of the free energy as a function of the lattice size, when the measurements are carried out at the critical coupling $`\beta _{\mathrm{\infty }}`$. The task is facilitated by taking into account the precise value of the bulk free energy per site in the honeycomb lattice, $`a/24\approx 0.331912`$ . By computing at coupling constant $`\beta =\beta _{\mathrm{\infty }}`$ and making a sequence of fits for sets of four consecutive lattices, we obtain the respective values of the $`b`$ coefficient in (1), in order of increasing lattice size: $`-0.20486`$, $`-0.20763`$, $`-0.20807`$, $`-0.20820`$. We observe a clear convergence towards a value $`b\approx -0.208`$. We have plotted in Fig. 3 the values of $`F_N-aN^2`$ and the best fit for the last four points in the plot. The sum of the squares of the deviations from the logarithmic dependence (for $`N=2`$ up to $`N=7`$) is $`2.7\times 10^{-7}`$. The accuracy of the fit is remarkable, given that it is achieved by adjusting only the coefficient $`b`$ and the constant term in Eq. (1).
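Fits of this kind are linear least-squares fits of $`F_N`$ to the form $`aN^2+b\mathrm{log}(N)+\mathrm{const}`$. A sketch of the procedure (the data here are synthetic, fabricated from the bulk value $`a/24\approx 0.331912`$ and a trial $`b`$):

```python
import numpy as np

def fit_free_energy(N, F):
    """Least-squares fit of F_N = a*N**2 + b*log(N) + const; returns (a, b, const)."""
    A = np.column_stack([N**2, np.log(N), np.ones_like(N)])
    coef, *_ = np.linalg.lstsq(A, F, rcond=None)
    return tuple(coef)

N = np.arange(2.0, 8.0)                                # N = 2 ... 7
F = 24 * 0.331912 * N**2 - 0.208 * np.log(N) + 0.75    # synthetic free energies
a, b, const = fit_free_energy(N, F)
```

Running the same fit over sliding windows of four consecutive lattices gives a sequence of estimates for $`b`$, as done in the text.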
The above results show that the hypothesis of finite-size scaling may be applied to the free energy to determine the conformal anomaly on a curved background. Let us now interpret the coefficient $`b`$ of the anomaly that we have obtained for the octahedron. We assume that the logarithmic correction can be computed as the sum of the corrections for each of the conical singularities, $`b=\sum _{i=1}^6b_i`$ . The coefficient $`b_i`$ for a conical singularity has been established in Ref. in terms of the central charge $`c`$ and the angle $`\theta `$ enclosed by the cone:
$$b_i=c\frac{\theta }{24\pi }\left(1-(2\pi /\theta )^2\right)$$
(4)
This formula leads in the case of the octahedron to $`b=-5c/12\approx -0.41666c`$, which for $`c=1/2`$ corresponding to the Ising model is in very good agreement with our numerical result. This provides a clear indication that the continuum limit of the Ising model on the octahedron is given by a conformal field theory, with the same central charge as for the model in flat space.
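The vertex counting behind these numbers is easy to verify. A sketch (the single-cone coefficient is restated in the code; $`\theta =4\pi /3`$ at each of the 6 octahedron vertices, $`\theta =\pi `$ at each of the 4 tetrahedron vertices):

```python
import math

def b_cone(c, theta):
    """Logarithmic coefficient for one conical singularity of opening angle theta."""
    return c * theta / (24 * math.pi) * (1 - (2 * math.pi / theta) ** 2)

c = 0.5  # Ising central charge
b_octa  = 6 * b_cone(c, 4 * math.pi / 3)   # -> -5c/12, about -0.2083
b_tetra = 4 * b_cone(c, math.pi)           # -> -c/2 = -0.25
```

Both values agree with the fitted coefficients quoted for the octahedron and for the ferromagnetic tetrahedron.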
We move now to the family of honeycomb lattices embedded on the tetrahedron. We may distinguish between the ferromagnetic and the antiferromagnetic regime, since the lattices are frustrated in this case. The finite-size scaling is actually different in the two regimes. The number of lattice sites is given now by the formula $`n=12N^2`$, where $`N`$ is the integer that labels the member in the family. At the ferromagnetic critical coupling $`\beta _{\mathrm{\infty }}>0`$, we have measured the free energy with the same precision as before, for $`N=2`$ up to $`N=9`$. The values that we have obtained are given in Table II. The accuracy of the fit to a $`\mathrm{log}(N)`$ correction added to the leading behavior is again remarkable. By making a sequence of fits, each of them for four consecutive lattices, we obtain the respective estimates for the $`b`$ coefficient, in order of increasing lattice size: $`-0.2499801`$, $`-0.2499945`$, $`-0.2499978`$, $`-0.2499992`$. The plot of the function $`F_N-aN^2`$ is given in Fig. 3, together with the logarithmic dependence from the last fit.
The result that we obtain for the coefficient $`b`$ is again in very good agreement with the value expected for a conformal field theory. The outcome of adding the effect of four conical singularities with enclosed angle $`\theta =\pi `$ yields the prediction $`b=-c/2`$. Therefore, we may conclude that the critical point in the ferromagnetic regime provides an example of a $`c=1/2`$ conformal field theory on the curved background.
The finite-size scaling works differently in the antiferromagnetic regime. The values of the free energy computed at $`\beta =-\beta _{\mathrm{\infty }}`$ are given in Table II. The accuracy of the fits to determine the $`\mathrm{log}(N)`$ correction is as good as in the former instances, but now the $`b`$ coefficient turns out to be positive. By carrying out the same sequence of fits as in the ferromagnetic regime, we find the convergent series for the estimates of $`b`$: 0.74944, 0.74986, 0.74995, 0.74996. We have plotted in Fig. 4 the values of $`F_N-aN^2`$ and the logarithmic fit for the last four points. The sum of the squares of the deviations from the $`\mathrm{log}(N)`$ dependence ($`N=2`$ to $`N=9`$) is $`2.2\times 10^{-7}`$.
We can learn the correct interpretation of these results from a similar feature of the finite-size data of the free scalar field on the curved lattice. This can be described by a simple tight-binding model, whose spectrum reproduces that of the laplacian on the lattice. The partition function is computed through the determinant of the coordination matrix, but the zero mode has to be removed in order to obtain a nonsingular result. As a consequence, the coefficient of the $`\mathrm{log}(N)`$ correction (fitted to data from $`N=2`$ to $`N=9`$ as shown in Fig. 4) turns out to be 0.49999 . The correct result in front of the logarithmic correction is obtained by adding the regularized contribution of the zero mode, which scales like $`(1/2)\mathrm{log}(L^{-2})`$ after introducing the length dimensions of the laplacian in the lattice.
The same effect operates in the antiferromagnetic regime of the lattices on the tetrahedron. These cannot be decomposed into two disjoint sublattices, so that there is an intrinsic frustration which rules out perfect antiferromagnetic order, irrespective of lattice size. The zero mode is missing in the spectrum, and the correct conformal anomaly of the $`c=1/2`$ conformal field theory is reestablished by adding “by hand” to the free energy the regularized zero-mode contribution $`\mathrm{log}(L^{-1})`$.
To summarize, we have checked that the continuum limit of the Ising model taken along lattices embedded on the tetrahedron and the octahedron corresponds to respective $`c=1/2`$ conformal field theories. We have seen that the convergence to the critical coupling is appreciably accelerated with respect to a flat geometry when performing the finite-size scaling in the curved lattices. Our construction may be useful to determine the central charge corresponding to other models whose underlying conformal field theory is not known.
## 1 Scientific goals
Our main scientific goal is to derive SN rates as a function of various parameters such as host galaxy type and cluster environment: position within the cluster, cluster richness and cluster vs. field. SN rates can then be used to determine the current and past star formation rates in galaxy clusters . Our measured SN rates can replace the assumed rates used so far in studies of metal abundances in the intracluster gas. We also intend to study the rate, distribution and properties of intergalactic SNe in galaxy clusters. We have already discovered one candidate, SN 1998fc (see Fig. 1).
Our search is also sensitive to other optical transients, such as AGNs in the clusters and behind them, flares from tidal disruption of stars by dormant massive black holes in galactic nuclei and GRB afterglows. We may also detect the gravitational lensing effect of the clusters on background SNe.
## 2 Intergalactic SNe and metal abundances in clusters
The existence of a diffuse population of intergalactic stars is supported by a growing body of observational evidence such as intergalactic planetary nebulae in the Fornax and Virgo clusters , and intergalactic red giant stars in Virgo . Recent imaging of the Coma cluster reveals low surface brightness emission from a diffuse population of stars , the origin of which is attributed to galaxy disruption . Since type Ia SNe are known to occur in all environments, there is no obvious reason to assume that such events do not happen within the intergalactic stellar population. SN 1998fc may well be such an event. The intergalactic stellar population is centrally distributed . Therefore, metals produced by intergalactic Ia SNe can provide an elegant explanation for the central enhancement of metal abundances with type Ia characteristics, recently detected in galaxy clusters .
## 1 Introduction
Issues of chiral symmetry permeate theoretical physics. Our understanding of pionic interactions revolves around spontaneous symmetry breaking and approximately conserved axial currents. The standard model itself is truly chiral, with the weak gauge bosons only coupling to one helicity state of the fundamental fermions. In the context of unification, chiral symmetry provides a mechanism for protecting fermion masses, possibly explaining how a theory at a much higher scale can avoid large renormalizations of the light particle masses. Extending this mechanism to bosons provides one of the more compelling motivations for super-symmetry.
On the lattice, chiral symmetry raises many interesting issues. These are intricately entwined with the famous axial anomalies and the so called “doubling” problem. Being a full regulator, the lattice must break some aspects of chiral symmetry to give the required anomalies in the continuum limit. Prescriptions for lattice fermions that do not accommodate anomalies cancel them with spurious extra species (doublers). Domain-wall fermions, the motivation for this talk and the subject of most of todays presentations, are one scheme to minimize these necessary symmetry violations.
But I am speaking to an audience that already knows all this. In an attempt to avoid boring you, I will discuss domain-wall fermions from a rather unconventional direction. Following a recent paper of mine , I present the subject from a “chemist’s” point of view, in terms of a chain molecule with special electronic states carrying energies fixed by symmetries. For lattice gauge theory, placing one of these molecules at each space-time site gives excitations of naturally zero mass. This is in direct analogy to the role of chiral symmetry in conventional continuum descriptions. After presenting this picture, I will wander into some comments and speculations about exact lattice chiral symmetries and schemes for gauging them.
## 2 A ladder molecule
To start, let me consider two rows of atoms connected by horizontal and diagonal bonds, as illustrated here
The bonds represent hopping terms, wherein an electron moves from one site to another via a creation-annihilation operator pair in the Hamiltonian. Later I will include vertical bonds, but for now consider just the horizontal and diagonal connections.
Years ago during a course on quantum mechanics, I heard Feynman present an amusing description of an electron’s behavior when inserted into a lattice. If you place it initially on a single atom, the wave function will gradually spread through the lattice, much like water poured in a cell of a metal ice cube tray. With damping, it settles into the ground state which has equal amplitude on each atom. To this day I cannot fill an ice cube tray without thinking of this analogy and pouring all the incoming water into a single cell.
I now complicate this picture with a magnetic field applied orthogonal to the plane of the system. This introduces phases as the electron hops, causing interesting interference effects. In particular, consider a field of one-half flux unit per plaquette. This means that when a particle hops around a unit area (in terms of the basic lattice spacing) the wave function picks up a minus sign. Just where the phases appear is a gauge dependent convention; only the total phase around a closed loop is physical. One choice for these phases is indicated by the numbers on the bonds in the above picture.
The phase factors cause cancellations and slow diffusion. For example, consider the two shortest paths between the sites a and b in the following picture
With the chosen flux, these paths exactly cancel. For the full molecule this cancellation extends to all paths between these sites. An electron placed on site a can never diffuse to site b. Unlike in the ice tray analogy, the wave function will not spread to any site beyond the five nearest neighbors.
As a consequence, the Hamiltonian has localized eigenstates. While it is perhaps a bit of a misuse of the term, these states are “soliton-like” in that they just sit there and do not change their shape. There are two such states per plaquette; one possible representation for these two states is shown in the following figure
The states are restricted to the four sites labeled by their relative wave functions. Their energies are fixed by the size of the hopping parameter $`K`$.
For a finite chain of length $`L`$ there are $`2L`$ atoms, and thus there should be a total of $`2L`$ possible states for our electron (ignoring spin for the moment). There are $`L-1`$ plaquettes, and thus $`2L-2`$ of the above soliton states. This is almost the entire spectrum of the Hamiltonian, but two states are left over. These are zero energy states bound to the ends of the system. The wave function for one of those is shown here
We now have the full spectrum of the Hamiltonian: $`L-1`$ degenerate states of positive energy, a similar number of degenerate negative energy states, and two states of zero energy bound on the ends.
Now consider what happens when vertical bonds are included in our molecule. The phase cancellations are no longer complete and the solitonic states spread to form two bands, one with positive and one with negative energy. However, for our purposes, the remarkable result is that the zero modes bound on the ends of the chain are robust. The corresponding wave functions are no longer exactly located on the last atomic pair, but now have an exponentially suppressed penetration into the chain. The following figure shows the wave function for one of these states when the vertical bond has the same strength as the others. There is a corresponding state on the other end of the molecule.
When the chain is very long, both of the end states are forced to zero energy by symmetry considerations. First, since nothing distinguishes one end of the chain from the other, they must have equal energy, $`E_L=E_R`$. On the other hand, a change in phase conventions, effectively a gauge change, can change the sign of all the vertical and diagonal bonds. Following this with a left-right flip of the molecule will change the signs of the horizontal bonds. This takes the Hamiltonian to its negative, and shows that the states must have opposite energies, $`E_L=-E_R`$. This is indicative of a particle-hole symmetry. The combination of these results forces the end states to zero energy, with no fine tuning of parameters.
For a finite chain, the exponentially decreasing penetration of the end states into the molecule induces a small interaction between them. They mix slightly to acquire exponentially small energies $`E\sim \pm e^{-\alpha L}`$. As the strength of the vertical bonds increases, so does the penetration of the end states. At a critical strength, the mixing becomes sufficient that the zero modes blend into the positive and negative energy bands. In the full model, the mixing depends on the physical momentum, and this disappearance of the zero modes is the mechanism that removes the “doublers” when spatial momentum components are near $`\pi `$ in lattice units .
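All of this can be checked by diagonalizing the one-particle hopping matrix numerically. Below is a sketch with my own gauge choice (phases $`\pm i`$ on the horizontal bonds, real diagonal bonds of strength $`K`$, vertical bonds of strength $`M`$); with $`M=0`$ it gives exactly two zero modes plus $`2L-2`$ states at $`\pm 2K`$, while for $`0<M<2K`$ the end modes survive with exponentially small energies.

```python
import numpy as np

def ladder_hamiltonian(L, K=1.0, M=0.0):
    """Two-leg ladder with half a flux quantum per plaquette, open ends.
    Basis ordering: leg-a sites 0..L-1, then leg-b sites 0..L-1."""
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    a = lambda n: n          # leg-a index
    b = lambda n: L + n      # leg-b index
    for n in range(L - 1):
        H[a(n + 1), a(n)] = 1j * K     # horizontal bond, leg a (phase +i)
        H[b(n + 1), b(n)] = -1j * K    # horizontal bond, leg b (phase -i)
        H[a(n + 1), b(n)] = K          # diagonal bonds
        H[b(n + 1), a(n)] = K
    for n in range(L):
        H[b(n), a(n)] = M              # vertical bonds
    return H + H.conj().T              # add Hermitian conjugate hops

E0 = np.linalg.eigvalsh(ladder_hamiltonian(L=8, M=0.0))   # flat bands + 2 zero modes
Em = np.linalg.eigvalsh(ladder_hamiltonian(L=8, M=0.5))   # end modes barely split
```

In this gauge the $`M=0`$ zero modes sit entirely on the end rungs, with relative amplitudes $`(1,-i)`$ and $`(1,i)`$ on the two atoms of a rung.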
Energy levels forced to zero by symmetry lie at the core of the domain wall fermion idea. On every spatial site of a three dimensional lattice we place one of these chain molecules. The distance along the chain is usually referred to as a fictitious “fifth” dimension. The different spatial sites are coupled, allowing particles in the zero modes to move around. These are the physical fermions. The symmetries that protect the zero modes now protect the masses of these particles. Their masses receive no additive renormalization, exactly the consequence of chiral symmetry in the continuum. The physical picture is sketched in this cartoon
where I have rotated the fifth dimension to the vertical. Our world lines traverse the four dimensional surface of this five dimensional manifold.
Actually, the connection with chiral symmetry is much deeper than just an analogy. The construction guarantees that the modes are are automatically chiral. To see how this works, place Pauli spin matrices on the spatial bonds. This couples the phases seen by the particles to their spins. The zero mode that is attracted to one end of the chain will continue to move spatially in a direction corresponding to its helicity, as sketched here
The “device” in this figure is effectively a helicity projector. This construction is equivalent to choosing a particular set of Dirac gamma matrices
$$\begin{array}{c}\gamma _0=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\\ \gamma _5=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)\\ \gamma _i=\left(\begin{array}{cc}0& i\sigma _i\\ -i\sigma _i& 0\end{array}\right)\end{array}$$
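A quick numerical check that a set of this type closes the expected algebra (the signs below are one consistent choice, my own convention: all five matrices are Hermitian, square to the identity, and mutually anticommute, with $`\gamma _5`$ equal to the product $`\gamma _0\gamma _1\gamma _2\gamma _3`$):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[Z2, I2], [I2, Z2]])
gammas = [gamma0] + [np.block([[Z2, 1j * s], [-1j * s, Z2]]) for s in sigma]
gamma5 = gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]
```

The helicity-projector picture corresponds to $`(1\pm \gamma _5)/2`$, which in this basis simply picks out the upper or lower pair of spinor components.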
## 3 Slicing the fifth dimension
I hope this description of domain-wall fermions in terms of simple chain molecules has at least been thought provoking. I now ramble on with some general remarks about the basic scheme. The existence of the end states relies on using open boundary conditions in the fifth direction. If we were to curl our extra dimension into a circle, they will be lost. To retrieve them, consider cutting such a circle, as in this figure
Of course, if the size of the extra dimension is finite, the modes mix slightly. This is crucial for the scheme to accommodate anomalies .
Suppose I want a theory with two flavors of light fermion, such as the up and down quarks. For this one might cut the circle twice, as shown here
Remarkably, this construction keeps one chiral symmetry exact, even if the size of the fifth dimension is finite. Since the cutting divides the molecule into two completely disconnected pieces, in the notation of the figure we have the number of $`u_L+d_R`$ particles absolutely conserved. Similarly with $`u_R+d_L`$. Subtracting, we discover an exactly conserved axial charge corresponding to the continuum current
$$j_{\mu 5}^3=\overline{\psi }\gamma _\mu \gamma _5\tau ^3\psi $$
The conservation holds even with finite $`L_5`$. There is a small flavor breaking since the $`u_L`$ mixes with the $`d_R`$. These symmetries are reminiscent of Kogut-Susskind , or staggered, fermions, where a single exact chiral symmetry is accompanied by a small flavor breaking. Now, however, the extra dimension gives additional control over the latter.
Despite this analogy, the situation is physically somewhat different in the zero applied mass limit. Staggered fermions are expected to give rise to a single zero mass Goldstone pion, with the other pions acquiring mass through the flavor breaking terms. In my double cut domain-wall picture, however, the zero mass limit has three degenerate equal mass particles as the lowest states. To see how this works it is simplest to discuss the physics in a chiral Lagrangian language. The finite fifth dimension generates an effective mass term, but it is not in a flavor singlet direction. Indeed, it is in a flavor direction orthogonal to the naive applied mass. In the usual “sombrero” picture of the effective Lagrangian, as illustrated here,
the two mass terms compete and the true vacuum rotates around the Mexican hat from the conventional “sigma” direction to the “pi” direction.
## 4 How many fermions?
Now I become more speculative. The idea of cutting the fifth dimension multiple times to obtain several species suggests extensions to zero modes on more complicated manifolds. By having multiple zero modes, we have a mechanism to generate multiple flavors. Maybe one can have a theory where all the physical fermions in four dimensions arise from a single fermion field in the underlying higher dimensional theory. Schematically we might have something like
where each point represents some four dimensional surface and the question mark represents structures in the higher dimension that need specification.
One nice feature provided by such a scheme is a possible mechanism for the transfer of various quantum numbers involved in anomalous processes. For example, the baryon non-conserving ’t Hooft process might arise from a lepton flavor tunneling into the higher manifold and reappearing on another surface as a baryon.
I’ve been rather abstract here. This generic mechanism is in fact the basis of one specific proposed formulation of the standard model on the lattice. For this model the question mark in the above figure is a four-fermi interaction in the interior of the extra dimension, as sketched here
In some sense, the right-handed doubler of the left-handed electron is interpreted as an anti-quark. This picture appears to have all the necessary ingredients for a fully regulated and exactly gauge invariant formulation of the standard model. The primary uncertainty lies with side effects of the multiple fermion coupling. Our experience with non-perturbative phenomena involving strongly coupled fermions is rather limited; in particular, the model cannot tolerate the generation of a spontaneous symmetry breaking of any of the gauge symmetries at the scale of the cutoff.
## 5 Summary
I have presented a simple molecular picture for zero modes protected by symmetry. This illustrates the mechanism for mass protection in the domain-wall formulation of lattice fermions. Then I discussed how some chiral symmetries can become exact in this approach. Finally I speculated on schemes for generating multiple fermion species from the geometry of higher dimensional models. The latter may have connections with the activities in string theory.
## Acknowledgment
This manuscript has been authored under contract number DE-AC02-98CH10886 with the U.S. Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes. |
# Dileptons from P-Nucleus Collisions
## Introduction
The experimental detection of high-mass lepton pairs produced in hadronic reactions has a long and rich history. The famous quarkonium states that revealed the existence of the charm and beauty quarks in the 1970s were discovered through their dilepton decay branches. They are superimposed on a continuum, which was anticipated theoretically in 1970 drell , and is now known as the Drell-Yan (DY) process. The DY process, electromagnetic quark-antiquark annihilation, is closely related to the deeply inelastic lepton scattering (DIS). By 1980, DY production was already a source of information about antiquark structure of the nucleon. Additionally, DY production with beams of pions and kaons yielded the structure functions of these unstable particles for the first time. Also notable in the history of the DY process were the discoveries of the $`W^\pm `$ and $`Z^0`$ particles in 1983, produced by a generalized (vector boson exchange) quark-antiquark annihilation mechanism.
New experimental work has been carried out in recent years by few but prolific collaborations working in the fixed-target programs at the CERN SPS accelerator and at Fermilab. A series of fixed-target dimuon production experiments (E772, E789, E866) have been carried out at Fermilab in the last 10 years. Some of the highlights from these experiments, namely the observation of pronounced flavor asymmetry in the nucleon sea and the absence of antiquark enhancement in heavy nuclei, are discussed by Garvey at this Conference. In this paper, we will focus on other areas of dilepton physics studied in these experiments. To follow the main theme of this Conference, we first discuss the status of various QCD tests using dilepton productions. We will then discuss the nuclear medium effects for dilepton productions. In particular, the relevance of the Fermilab experiments on the issue of parton energy loss in nuclei will be presented.
## QCD tests in Dilepton Production
In the “Naive” Drell-Yan model, the differential cross section is given as
$`M^3{\displaystyle \frac{d^2\sigma }{dMdx_F}}={\displaystyle \frac{8\pi \alpha ^2}{9}}{\displaystyle \frac{x_1x_2}{x_1+x_2}}\times {\displaystyle \underset{a}{\sum }}e_a^2[q_a(x_1)\overline{q}_a(x_2)+\overline{q}_a(x_1)q_a(x_2)].`$ (1)
Here $`q_a(x)`$ are the quark or antiquark structure functions of the two colliding hadrons evaluated at momentum fractions $`x_1`$ and $`x_2`$. The sum is over quark flavors. The Feynman-$`x`$ ($`x_F`$) is equal to $`x_1-x_2`$.
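Since the kinematics of Eq. (1) is fixed by two constraints, $`x_1x_2=M^2/s`$ and $`x_1-x_2=x_F`$, a short sketch can make it concrete. The function name and the sample kinematic point below are my own illustrative choices:

```python
import math

def dy_momentum_fractions(M, x_F, sqrt_s):
    """Solve x1*x2 = M^2/s and x1 - x2 = x_F for the parton momentum fractions."""
    tau = (M / sqrt_s) ** 2                     # tau = x1 * x2
    root = math.sqrt(x_F ** 2 + 4.0 * tau)      # discriminant of x^2 - x_F*x - tau = 0
    return 0.5 * (x_F + root), 0.5 * (-x_F + root)

# A kinematic point resembling the 800 GeV fixed-target data (sqrt(s) = 38.9 GeV):
x1, x2 = dy_momentum_fractions(8.0, 0.3, 38.9)
```

For example, with a mass cut of $`M=4`$ GeV at $`\sqrt{s}=38.9`$ GeV, the product $`x_1x_2`$ cannot fall below about 0.0106, which is one way to see why the small-$`x_2`$ reach of a fixed-target measurement is limited.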
Although the simple parton model originally proposed by Drell and Yan enjoyed considerable success in explaining many features of the early data, it was soon realized that QCD corrections to the parton model were required. Historically, two experimental features demanded theoretical improvement: first, the experimental cross section was about a factor of two larger than the parton-model value, and second, the distribution of dilepton transverse momenta extended to much larger values than are characteristic of the convolution of intrinsic parton momenta. We now discuss several consequences of QCD corrections to the DY observables.
### Absolute Cross Sections and $`p_T`$ Distribution
The inclusion of the NLO diagrams for the DY process brings excellent agreement between the calculations and the data. As an example, Figure 1 shows the NA3 data na3 at 400 GeV, together with the E605 e605 and E772 e772a data at 800 GeV. The solid curve in Figure 1 corresponds to the NLO calculation for 800 GeV $`p+d`$ ($`\sqrt{s}`$ = 38.9 GeV) and it describes the NA3/E605/E772 data well.
Berger et al. berger98 recently compared their NLO calculations with the E772 data. As shown in Figure 2, the $`p_T`$ distribution is well reproduced by the calculation. At $`p_T>2`$ GeV/c, the DY cross section is shown to be dominated by processes involving gluons. This suggests the interesting possibility of probing gluon density using large $`p_T`$ DY events.
### Scaling Violation
The right-hand side of Eq. (1) is only a function of $`x_1,x_2`$ and is independent of the beam energies. This scaling property no longer holds when QCD corrections to the DY are taken into account.
While logarithmic scaling violation is well established in DIS experiments, it has not been confirmed in DY experiments; no evidence for scaling violation has been seen. As discussed in a recent review plm , there are mainly two reasons for this. First, unlike the DIS, the DY cross section is a convolution of two structure functions. Scaling violation implies that the structure functions rise for $`x\lesssim 0.1`$ and drop for $`x\gtrsim 0.1`$ as $`Q^2`$ increases. For proton-induced DY, one often involves a beam quark with $`x_1>0.1`$ and a target antiquark with $`x_2<0.1`$. Hence the effects of scaling violation are partially cancelled. Second, unlike the DIS, the DY experiment can only probe relatively large $`Q^2`$, namely, $`Q^2>16`$ GeV<sup>2</sup> for a mass cut of 4 GeV. This makes it more difficult to observe the logarithmic variation of the structure functions in DY experiments.
Possible indications of scaling violation in the DY process have been reported in two pion-induced experiments, E326 e326 at Fermilab and NA10 na10 at CERN. The E326 collaboration compared their 225 GeV $`\pi ^-+W`$ DY cross sections against calculations with and without scaling violation. They observed better agreement when scaling violation is included. This analysis is subject to the uncertainties associated with the pion structure functions, as well as the nuclear effects of the $`W`$ target. The NA10 collaboration measured $`\pi ^-+W`$ DY cross sections at three beam energies, namely, 140, 194, and 286 GeV. By checking the ratios of the cross sections at three different energies, NA10 largely avoids the uncertainty of the pion structure functions. However, the relatively small span in $`\sqrt{s}`$, together with the complication of nuclear effects, makes the NA10 result less than conclusive.
RHIC provides an interesting opportunity for unambiguously establishing scaling violation in the DY process. Figure 3 shows the predictions for $`p+d`$ at $`\sqrt{s}`$ = 500 GeV. Scaling violation accounts for a factor of two drop in the DY cross sections when $`\sqrt{s}`$ is increased from 38.9 GeV to 500 GeV. It appears quite feasible to establish scaling violation in DY with future dilepton production experiments at RHIC.
### Decay Angular Distributions
In the parton model, the angular distribution of dileptons is characteristic of the decay of a transversely polarized virtual photon,
$`{\displaystyle \frac{d\sigma }{d\mathrm{\Omega }}}=\sigma _0(1+\lambda cos^2\theta ),`$ (2)
where $`\theta `$ is the polar angle of the lepton in the virtual photon rest frame and $`\lambda =1`$. Early experimental data from both pion and proton beams kenyon were consistent with this form but had large statistical errors.
Recently, E772 has performed a high-statistics study of the angular distribution for DY events QM96 with masses above the $`\mathrm{{\rm Y}}`$ family of resonances. About 50,000 events were recorded from 800 GeV/c $`p+Cu`$ collisions, using a copper beam dump as the target. Figure 4 shows the acceptance-corrected angular distribution, integrated over the kinematic variables. Analyzed in the Collins-Soper reference frame collins , the data yield $`\lambda =0.96\pm 0.04\pm 0.06`$(systematic).
Including higher-order QCD corrections to the DY process oakes ; collinsa results in the more complicated form of the angular distribution,
$`{\displaystyle \frac{d\sigma }{d\mathrm{\Omega }}}\propto 1+\lambda cos^2\theta +\mu sin2\theta cos\varphi +{\displaystyle \frac{\nu }{2}}sin^2\theta cos2\varphi ,`$ (3)
where $`\varphi `$ is the azimuthal angle and $`\lambda `$, $`\mu `$, and $`\nu `$ are angle-independent parameters. NLO calculations predict chiappetta small deviations from $`1+cos^2\theta `$ ($`\lesssim 5\%`$) for $`p_t`$ below 3 GeV/c. The relevant scaling parameter for the magnitude of these deviations is $`p_t/Q`$, implying that NLO corrections become important when $`p_t\sim Q`$. A relation, $`1-\lambda -2\nu =0`$, developed by Lam & Tung lam , is analogous to the Callan-Gross relation in DIS. Measurements with pion beams at CERN NA10angdist and at Fermilab E615a have shown that the Lam-Tung relation is clearly violated at large $`p_t`$.
Pion-induced DY experiments have unexpectedly shown that transverse photon polarization changes to longitudinal ($`\lambda \to -1`$) at large $`x_F`$ NA10angdist ; E615a ; NA3angdist ; E615 . The $`x_F`$ dependence of $`\lambda `$ is qualitatively consistent with a higher-twist model originally proposed by Berger & Brodsky berger ; bergera . However, the quantitative agreement is poor. The model’s basis can be described as follows. As $`x_F`$ of the muon pair approaches unity, the Bjorken-$`x`$ (momentum fraction) of the annihilating projectile parton must also be near unity. Thus, the whole pion contributes to the DY process. This can be treated with perturbation theory, with the result that the transverse polarization of the virtual photon becomes longitudinal. The angular distribution at large $`x_F`$ becomes
$`{\displaystyle \frac{d\sigma }{d\mathrm{\Omega }}}\propto (1-x)^2(1+\lambda cos^2\theta )+\alpha sin^2\theta ,`$ (4)
where $`\alpha `$ is $`\sim p_t^2/Q^2`$.
Eskola et al eskola have shown that an improved treatment of the effects of nonasymptotic kinematics greatly improves quantitative agreement with the $`\lambda `$ values from the pion data. Brandenburg et al brandenburg have extended the higher twist model to specifically include pion bound-state effects. They predict values for $`\lambda `$, $`\mu `$ and $`\nu `$ that are in good agreement with the pion data at large $`x_F`$. Unfortunately, the results are quite sensitive to the choice of the pion Fock state wave functions, which are not well constrained by experimental data.
## Nuclear Medium Effects of Dilepton Production
From a high-statistics measurement of dilepton production in 800 GeV proton-nucleus interactions, the target-mass dependence of DY, $`J/\mathrm{\Psi }`$, $`\mathrm{\Psi }^{\prime }`$, and $`\mathrm{{\rm Y}}`$ production has been determined in E772 e772b ; e772c ; e772d . As shown in Figure 5, different nuclear dependences are observed for different dilepton processes. While the DY process shows almost no nuclear dependence, pronounced nuclear effects are seen for the production of heavy quarkonium states. E772 found that $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\prime }`$ have similar nuclear dependence. The nuclear dependences for $`\mathrm{{\rm Y}}`$, $`\mathrm{{\rm Y}}^{\prime }`$ and $`\mathrm{{\rm Y}}^{\prime \prime }`$ are less than that observed for the $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\prime }`$. Within statistics, the various $`\mathrm{{\rm Y}}`$ resonances also have very similar nuclear dependences.
Although the integrated DY yields in E772 show little nuclear dependence, it is instructive to examine the DY nuclear dependences on various kinematic variables. Using the simple $`A^\alpha `$ expression to fit the DY nuclear dependence, the values of $`\alpha `$ are shown in Figure 6 as a function of $`x_T(x_2)`$, $`M`$, $`x_F`$, and $`p_t`$. Several features are observed:
1. A suppression of the DY yields from heavy nuclear targets is seen at small $`x_2`$. This is consistent with the shadowing effect observed in DIS. In fact, E772 provides the only experimental evidence for shadowing in hadronic reactions. The reach of small $`x_2`$ in E772 is limited by the mass cut ($`M\geq 4`$ GeV) and by the relatively small center-of-mass energy (recall that $`x_1x_2=M^2/s`$). p-A collisions at RHIC clearly offer the exciting opportunity to extend the study of shadowing to much smaller $`x`$.
2. $`\alpha (x_F)`$ shows an interesting trend, namely, it decreases as $`x_F`$ increases. It is tempting to attribute this behavior to an initial-state energy-loss effect. However, there is a strong correlation between $`x_F`$ and $`x_2`$ ($`x_F=x_1-x_2`$), and it is essential to separate the $`x_F`$ energy-loss effect from the $`x_2`$ shadowing effect. Figure 7 shows $`\alpha `$ versus $`x_F`$ for two bins of $`x_2`$, one in the shadowing region ($`x_2<0.075`$) and one outside of it ($`x_2>0.075`$). There is no discernible $`x_F`$ dependence for $`\alpha `$ once one stays outside of the shadowing region. Therefore, the apparent suppression at large $`x_F`$ in Figure 6 reflects the shadowing effect at small $`x_2`$ rather than the energy-loss effect.
3. $`\alpha (p_t)`$ shows an enhancement at large $`p_t`$. This is reminiscent of the Cronin Effect cronin where the broadening in the $`p_t`$ distribution is attributed to multiple parton-nucleon scatterings. It is instructive to compare the $`p_t`$ broadening for the DY process and quarkonium production. Figure 8 shows $`\mathrm{\Delta }p_t^2`$, the difference of mean $`p_t^2`$ between p-A and p-D interactions, as a function of A for DY, J/$`\mathrm{\Psi }`$, and $`\mathrm{{\rm Y}}(1S)`$ production at 800 GeV. The DY and $`\mathrm{{\rm Y}}`$ data are from E772 e772e , while the $`J/\mathrm{\Psi }`$ results are from E789 e789 , E771 e771 , and a preliminary E866 analysis leitch . More details on this analysis will be presented elsewhere e772e . Figure 8 shows that $`p_t^2`$ is well described by the simple expression $`a+bA^{1/3}`$. It also shows that the $`p_t`$ broadening for $`J/\mathrm{\Psi }`$ is very similar to that for $`\mathrm{{\rm Y}}`$, but significantly larger (by a factor of 5) than that for the DY. A factor of 9/4 could be attributed to the color factor of the initial gluon in quarkonium production versus the quark in the DY process. The remaining difference could come from the final-state multiple scattering effect which is absent in the DY process.
Baier et al. baier1 have recently derived a relationship between the partonic energy-loss due to gluon bremsstrahlung and the mean $`p_t^2`$ broadening accumulated via multiple parton-nucleon scattering:
$`-dE/dz={\displaystyle \frac{3}{4}}\alpha _s\mathrm{\Delta }p_t^2.`$ (5)
This non-intuitive result states that the total energy loss is proportional to the square of the path length traversed by the incident partons. From Figure 8 and Eq. 5, we deduce that the mean total energy loss, $`\mathrm{\Delta }E`$, for the p+W DY process is $`\sim 0.6`$ GeV. Such an energy loss is too small to cause any discernible effect in the $`x_F`$ (or $`x_1`$) nuclear dependence. As shown in Figure 7, the dashed curve corresponds to $`\mathrm{\Delta }E=2.0\pm 1.7`$ GeV (for p+W), and the E772 data are consistent with Eq. 5. A much more sensitive test of Eq. 5 could be done at RHIC, where the energy-loss effect is expected to be much enhanced in A-A collisions baier2 .
# From Generalized Synchrony to Topological Decoherence: Emergent Sets in Coupled Chaotic Systems
## Abstract
We consider the evolution of the unstable periodic orbit structure of coupled chaotic systems. This involves the creation of a complicated set outside of the synchronization manifold (the emergent set). We quantitatively identify a critical transition point in its development (the decoherence transition). For asymmetric systems we also describe a migration of unstable periodic orbits that is of central importance in understanding these systems. Our framework provides an experimentally measurable transition, even in situations where previously described bifurcation structures are inapplicable.
The idea that several subsystems, when interacting nonlinearly, collectively give rise to novel dynamics that are not obviously attributable to the individual component parts has been termed emergence . In this Letter we investigate such novel dynamics in systems of coupled chaotic maps, with an emphasis on systems of dissimilar components. When synchronized, the time evolution occurs on a restricted manifold (called the synchronization manifold) embedded in the full state space. As the degree of coupling is decreased to zero, the system gradually evolves into a completely unsynchronized state in which all the degrees of freedom of the individual component maps are realized. At each extreme, the dynamics can be understood in terms of the components. In between, however, the situation is more complicated.
Various transitions in this desynchronization process have been described in the literature . Much of this earlier work depends on an invariant manifold that persists under decreased coupling due to system symmetries. Our approach is novel in that it does not refer to invariant manifolds or symmetries. Instead, we analyze the evolution of the periodic orbit structure as the coupling is decreased and synchronization breaks down. Our formalism is therefore applicable to a much larger class of coupled systems, in particular, those consisting of dissimilar components. We report two main results. We describe the creation and evolution of a complicated set that develops outside of the synchronization manifold (the emergent set), and we quantitatively identify a critical transition point in its development (the decoherence transition). For asymmetric systems we also describe a migration of unstable periodic orbits that is of central importance in understanding these systems. Our framework is advantageous because it provides an experimentally measurable transition in situations where previously described bifurcation structures are inapplicable.
Previous work has focused on the invariant dynamics in the synchronization manifold $`\mathcal{M}`$, which can easily be identified in coupled systems with symmetry (such as when two identical sub-systems are coupled together). On $`\mathcal{M}`$, the components evolve identically, and are said to exhibit identical synchrony . As the coupling decreases from a fully synchronized state, a bubbling bifurcation occurs when an orbit within $`\mathcal{M}`$ (usually of low period ) loses transverse stability. In the presence of noise or small asymmetries, a typical trajectory quickly approaches and spends a long time in the vicinity of $`\mathcal{M}`$, but makes occasional excursions. (If an appropriate attractor exists outside the synchronization manifold, this transition leads to the creation of riddled basins .) As the coupling is further decreased, the blowout bifurcation is observed when $`\mathcal{M}`$ itself becomes transversely unstable (on average). More recent work has described bifurcations that lead to the creation of periodic orbits off the synchronization manifold ; these may lead to the creation of chaotic attractors external to $`\mathcal{M}`$ . Also, imperfect phase synchrony has been analyzed recently in terms of unstable periodic orbits , and synchrony transitions have been investigated in coupled lattices of identical maps .
The concept of (differentiable) generalized synchrony (GS) extends these ideas. GS relaxes the condition that the state variables evolve identically, and only requires that they be functionally related. As the coupling is reduced and GS breaks down, however, this function may become extremely complicated, and the identification of bubbling-type or blowout-type bifurcations is especially problematic. Thus, a more general description of the desynchronization process is needed.
In the present work we extend the above description by considering the evolution of the unstable periodic orbits (UPOs) as the coupling is varied. We use the following two-dimensional, unidirectionally coupled system :
$$\{\begin{array}{ccc}x& \to & f(x)\hfill \\ y& \to & cf(x)+(1-c)g(y),\hfill \end{array}$$
(1)
where $`f`$ and $`g`$ are chaotic maps and $`c`$ is a scalar that describes the coupling. We emphasize that $`f`$ and $`g`$ need not be similar, and may be of any dimension. For example, we have also studied a four-dimensional system in which $`f`$ and $`g`$ are Hénon maps with different parameters. For ease of presentation we restrict discussion to one-dimensional $`f`$ and $`g`$. Systems such as Equation (1) are known in the mathematical literature as skew products or extensions; ours is constructed such that for $`c=1`$, the $`x`$ and $`y`$ dynamics are identical and synchronized, whereas for $`c=0`$, the $`x`$ and $`y`$ dynamics are completely independent. We investigate this system as $`c`$ is decreased from $`1`$ to $`0`$.
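The two coupling limits can be checked directly with a minimal simulation of Eq. (1). This is my own sketch; the quadratic maps and their parameters are illustrative stand-ins for $`f`$ and $`g`$:

```python
def f(x):                 # driver map (illustrative chaotic quadratic map)
    return 1.7 - x * x

def g(y):                 # response map, deliberately different from f
    return 1.5 - y * y

def iterate(c, x, y, n):
    """Iterate Eq. (1): x -> f(x), y -> c*f(x) + (1 - c)*g(y)."""
    for _ in range(n):
        x, y = f(x), c * f(x) + (1.0 - c) * g(y)
    return x, y

# c = 1: the response copies the driver exactly after one step (identical synchrony)
x, y = iterate(1.0, 0.3, -0.5, 1000)
# c = 0: the response decouples and evolves under g alone
```

At intermediate $`c`$ the $`y`$ variable is neither a copy of $`x`$ nor independent of it, which is the regime of interest below.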
The simplest case is when $`f=g`$, for which the synchronization manifold $`\mathcal{M}`$ (i.e. the line $`x=y`$) is invariant and attracting at $`c=1`$. The bubbling bifurcation occurs when an orbit in $`\mathcal{M}`$ loses transverse stability, typically via a period-doubling (pitchfork) bifurcation. This leads to the creation of new orbits outside of $`\mathcal{M}`$. As the coupling is further reduced, more and more periodic orbits embedded in $`\mathcal{M}`$ lose their transverse stability in a similar fashion , leading to the creation of additional orbits. As this process proceeds, the external UPOs simultaneously undergo period-doubling cascades to chaos, thus creating even more new orbits. We call the set of new orbits created in this fashion the emergent set.
In the more general case $`f\neq g`$, $`x=y`$ is by construction invariant and attracting for $`c=1`$. Upon decreasing $`c`$, $`x=y`$ is no longer invariant, and we observe that the UPOs migrate and spread out as shown in Figure 1 for coupled quadratic maps. As the coupling is decreased, we first observe transverse Cantor-like structure, followed by a “fattening” of the striations as the Lyapunov dimension of the attractor increases to $`2.0`$. We have also observed similar UPO migration in the invertible case of coupled Hénon maps. It is remarkable that this UPO migration appears to occur well before any orbit loses its transverse stability. In fact, we observe a large range of $`c`$ over which, despite the apparent structure (ultimately two-dimensional), the periodic orbits migrate but are still transversely stable and one-to-one in the following sense: if the driver dynamics is fixed onto any one of its intrinsic period $`p`$ orbits, then the limiting $`y`$ dynamics is an attracting orbit of the same period.
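The “one-to-one” property can be checked numerically by locking the driver on one of its orbits and iterating the driven $`y`$ map. The sketch below uses the driver’s fixed point with $`f(x)=1.7-x^2`$, $`g(y)=1.5-y^2`$, and a coupling value of my own choosing:

```python
import math

def f(x): return 1.7 - x * x      # driver
def g(y): return 1.5 - y * y      # response

# Fixed point of the driver: solves x = 1.7 - x^2
x_star = (-1.0 + math.sqrt(1.0 + 4.0 * 1.7)) / 2.0

def driven(y, c):
    """y-map of Eq. (1) with the driver locked on its fixed point x*."""
    return c * f(x_star) + (1.0 - c) * g(y)

c = 0.7                            # illustrative coupling value
y = 0.1
for _ in range(200):
    y = driven(y, c)               # converges to a period-1 attractor in y
```

The multiplier of the driven map at the limit point is $`-2(1-c)y`$; for the values above its magnitude is about 0.51, so the period-1 response is indeed transversely attracting, matching the driver’s period.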
Let $`U`$ be the set of unstable periodic orbits on the line $`x=y`$ when $`c=1`$. The number of orbits in $`U`$ is determined by the driver and remains constant for all $`c`$ because of the unidirectional coupling. For $`f=g`$, they remain fixed in place, but for $`f\neq g`$, they migrate as described above. As $`c`$ is decreased from $`1`$, the orbits’ stability properties evolve, but they remain transversely attracting until a bubbling-type bifurcation is encountered. (We extend the concept of bubbling to the asymmetric case $`f\neq g`$ by defining it as the point where the first orbit in $`U`$ loses stability .) As the coupling is further decreased, more and more orbits bifurcate and create orbits outside of $`U`$, and the above mechanism for the creation of the emergent set applies. Because of their migration, however, the orbits of $`U`$ become intermingled among those of the emergent set.
We wish to view system (1) as an infinite collection of subsystems defined as follows. First, enumerate the periodic orbits of $`f`$ (the driver), assigning each an index $`i=1,2,\mathrm{}`$. Then subsystem $`S_i`$ is given by Equation (1), but with the driver dynamics $`f`$ locked on orbit $`i`$. The bifurcations described above correspond to bifurcations in the $`y`$ components of these subsystems. Indeed, each subsystem $`S_i`$ exhibits a complete bifurcation structure in $`y`$ as $`c`$ is varied from $`1`$ to $`0`$.
To quantify the discussion, let $`N_{xy}(p)`$ denote the number of period $`p`$ orbits of system (1). Also, let $`N_f(p)`$ denote the number of period $`p`$ solutions of $`f^p(x)-x=0`$ alone; $`N_g(p)`$ is defined analogously. These quantities contain contributions from all periodic orbits of period $`q`$, where $`q`$ is an integer factor of $`p`$. For $`c=1`$, the system exhibits identical synchrony and $`N_{xy}(p)=N_f(p)`$. In contrast, when $`c=0`$, the system is fully decoupled into independent systems, and $`N_{xy}(p)=N_f(p)N_g(p)`$. (Note that $`N_{xy}(p)`$ may achieve its maximum at $`c=0`$ or at intermediate values of $`c`$, depending on the nature of $`g`$.)
Our goal is to elucidate how this change in the unstable periodic orbit structure proceeds as $`c`$ is varied. To this end, we consider the topological entropy $`h`$ ; for large $`p`$, the number of periodic orbits of period $`p`$ in a chaotic set increases exponentially with $`p`$ as $`N(p)\sim e^{hp}`$. Thus, the topological entropy of the coupled system is $`h_{xy}=lim_{p\to \mathrm{\infty }}lnN_{xy}(p)/p`$, and similarly, the topological entropy of the driver is $`h_f=lim_{p\to \mathrm{\infty }}lnN_f(p)/p`$.
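The exponential proliferation of orbits can be seen directly in a map where the count is known. For the fully chaotic quadratic map $`f(x)=2-x^2`$ (my illustrative choice, conjugate to the tent map), $`f^p`$ has exactly $`2^p`$ fixed points, so $`h=ln2`$. A brute-force count by sign changes recovers this:

```python
import math

def f(x):
    return 2.0 - x * x        # fully chaotic quadratic map on [-2, 2]

def count_period_p_points(p, n_grid=20001):
    """Count fixed points of f^p on [-2, 2] via sign changes of F(x) = f^p(x) - x."""
    xs = [-2.0 + 4.0 * i / (n_grid - 1) for i in range(n_grid)]
    def F(x):
        y = x
        for _ in range(p):
            y = f(y)
        return y - x
    vals = [F(x) for x in xs]
    roots = sum(1 for v in vals if v == 0.0)                     # exact hits (x = -2, 1)
    roots += sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0.0)
    return roots

# N(p) grows like e^{h p}; here N(p) = 2^p
counts = [count_period_p_points(p) for p in range(1, 6)]
```

The estimate $`lnN(p)/p`$ then converges to $`ln2`$, the topological entropy of this map.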
Let $`N_e(p)`$ be the number of periodic orbits of period $`p`$ that are not in $`U`$. These orbits reside in the emergent set and are created by the bifurcations described above. Thus $`N_e(p)=\sum _{i=1}^{n_b}N_i(p),`$ where the summation is over the number $`n_b`$ of subsystems that have bifurcated. $`N_i(p)`$ is the number of periodic orbits of period $`p`$ not in $`U`$ that are associated with a particular subsystem $`S_i`$ in the summation. The topological entropy of the emergent set is therefore $`h_e=lim_{p\to \mathrm{\infty }}lnN_e(p)/p`$ .
We define the state of topological coherence for system (1) as the condition $`h_{xy}=h_f`$. In this state, the topological entropy of the system is determined by the driver. In order for topological coherence to be destroyed, the topological entropy of the full system $`h_{xy}`$ must exceed $`h_f`$. This occurs when the emergent set becomes sufficiently complex. In the present case, $`h_{xy}=lim_{p\to \mathrm{\infty }}\frac{1}{p}ln\left(N_f(p)+N_e(p)\right)=max(h_f,h_e)`$. There is therefore a critical value $`c_d`$ of coupling where $`h_e`$ first exceeds $`h_f`$ and topological coherence is lost. We call this the decoherence transition. For the special case $`f=g`$, we find that this typically occurs between the bubbling $`c_{bu}`$ and the blowout $`c_{bo}`$ bifurcations.
We now address the question of how the decoherence transition may be measured from experimental data. First, we observe that in the symmetric noise-free case ($`f=g`$), trajectories collapse onto $`\mathcal{M}`$ and remain there until the blowout bifurcation. Thus, estimates of the decoherence transition based on measured data will not reflect the contribution of the emergent set. However, this case is exceptional. In the more general asymmetric case ($`f\neq g`$), the orbits of $`U`$ migrate and become intermingled with those of the emergent set. Because of this, typical trajectories do not necessarily remain near $`U`$; instead, the observed attractor incorporates parts of the emergent set. How much of the emergent set is incorporated depends on the degree of asymmetry and the coupling. By using trajectory data, an effective decoherence transition can be measured which indicates how much the emergent set actually influences the observed dynamics. It is precisely this effective transition that is most relevant to the observed dynamics of the system and hence is most relevant to experimental situations. Below we describe an efficient method for estimating the effective decoherence transition that is based on actual trajectory information.
We use the methods of Ref. . These authors define an average $`n`$-step stretching rate as follows. Let $`\lambda _i^{(n)}`$ denote the square root of the largest eigenvalue of $`[𝐉^n(𝐱_i)]^T𝐉^n(𝐱_i)`$ for some initial condition $`𝐱_i`$, where $`𝐉`$ is the Jacobian of the system in Equation (1). Then form the following average quantity over $`m`$ initial conditions chosen with respect to the natural measure: $`ln1d_{n,m}=ln\left(\sum _{i=1}^m\lambda _i^{(n)}/m\right)/n`$. For hyperbolic systems with one stretching direction, it can be shown that $`h_1=lim_{n\to \mathrm{\infty }}lim_{m\to \mathrm{\infty }}ln1d_{n,m}`$ is the topological entropy of the system . Numerically, $`h_1`$ can be obtained by measuring the scaling of $`ln(\sum _{i=1}^m\lambda _i^{(n)})`$ with $`n`$, for a sufficiently large $`m`$.
Since Equation (1) can exhibit two expanding directions for certain parameter values, we also measure the two-dimensional average stretching rate: $`ln2d_{n,m}=ln\left(\sum _{i=1}^m(\lambda _1\lambda _2)_i^{(n)}/m\right)/n`$, where $`(\lambda _1\lambda _2)_i^{(n)}`$ represents the square root of the product of the two largest eigenvalues of $`[𝐉^n(𝐱_i)]^T𝐉^n(𝐱_i)`$. The topological entropy when there are two expanding directions is given by $`h_2=lim_{n\to \mathrm{\infty }}lim_{m\to \mathrm{\infty }}ln2d_{n,m}`$.
These quantities enable us to calculate the topological entropy of the full system as the coupling parameter varies and traverses regions with one and two stretching directions: $`h_{xy}=max(h_1,h_2)`$. For the examples considered below, we have $`h_1=h_f`$ for the entire coupling range of interest. Thus, the effective decoherence transition occurs when $`h_2`$ first exceeds $`h_1=h_f`$. (In higher dimensional systems, these methods are practical if the number of unstable directions is low; otherwise, it may be difficult to accurately calculate these quantities from limited data .)
We apply these methods to a system of coupled quadratic maps. We take $`f(x)=1.7-x^2`$, $`g(y)=a_g-y^2`$, and consider the cases $`a_g=2.0`$, $`1.7`$, and $`1.5`$. (We have also used two Hénon maps coupled as in Eq. 1, and the results are qualitatively the same.) Figure 2 shows $`max(h_1,h_2)`$ for these cases. In all cases, $`h_1`$ is equal to the topological entropy of the driver dynamics $`h_f`$ (for $`a_g=2.0`$, $`h_1>h_f`$ only for $`c<0.1`$, not shown). The effective decoherence transition occurs when $`h_2`$ exceeds $`h_1`$, as indicated by arrows.
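A minimal version of this estimator for the coupled quadratic maps is sketched below (my own implementation; the small $`n`$ and $`m`$ and the choice $`a_g=1.5`$ are for illustration, and the published curves would require much larger averages). The Jacobian of system (1) is lower triangular, which makes the two singular values of the $`n`$-step product easy to extract from a 2x2 matrix:

```python
import math, random

def f(x): return 1.7 - x * x                  # driver
def g(y): return 1.5 - y * y                  # response (a_g = 1.5, illustrative)

def step(x, y, c):
    return f(x), c * f(x) + (1.0 - c) * g(y)

def jacobian(x, y, c):
    # d(x', y')/d(x, y) for Eq. (1); lower triangular
    return [[-2.0 * x, 0.0], [c * (-2.0 * x), (1.0 - c) * (-2.0 * y)]]

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def singular_values(M):
    """Both singular values of a 2x2 matrix, in closed form."""
    fro2 = sum(e * e for row in M for e in row)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s1 = math.sqrt(0.5 * (fro2 + math.sqrt(max(fro2 * fro2 - 4.0 * det * det, 0.0))))
    s2 = abs(det) / s1 if s1 > 0.0 else 0.0
    return s1, s2

def stretching_rates(c, n=12, m=400, seed=1):
    """Estimate the n-step averages <ln 1d> and <ln 2d> over m initial conditions."""
    rng = random.Random(seed)
    acc1 = acc2 = 0.0
    for _ in range(m):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        for _ in range(500):                  # relax onto the attractor
            x, y = step(x, y, c)
        J = [[1.0, 0.0], [0.0, 1.0]]
        for _ in range(n):
            J = matmul(jacobian(x, y, c), J)
            x, y = step(x, y, c)
        s1, s2 = singular_values(J)
        acc1 += s1
        acc2 += s1 * s2
    return math.log(acc1 / m) / n, math.log(acc2 / m) / n
```

For strong coupling the second direction is uniformly contracting, so $`ln2d<ln1d`$ and $`h_{xy}=h_1`$; as $`c`$ is lowered, $`ln2d`$ can overtake $`ln1d`$, which is the signature of the effective decoherence transition.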
Finally, we illustrate the influence of the emergent set for the symmetric case $`a_f=a_g`$ with noise. As $`c`$ is decreased from bubbling, the dynamics will make occasional excursions from $``$. We expect progressively longer transient times outside of $``$ due to the increasing complexity of the emergent set. To illustrate, we consider the symmetric case $`a_f=a_g=1.7`$ and plot in Figure 3 the average duration of a burst versus coupling value. We define a burst as an excursion of at least $`10`$ iterations beyond a small distance $`\delta =0.05`$ from the line $`x=y`$; the burst ends when the orbit falls back within this distance. We measure iterations per burst over trajectories of $`10^6`$ iterations, then average over $`100`$ such realizations. For each noise level (uniformly distributed noise within a given amplitude), there is a transition range of $`c`$, depending on $`\delta `$ and the magnitude of the noise, above which bursts are not observed. For very small noise, this transition is close to the blowout bifurcation; for larger noise, the transition shifts to higher values of coupling. Note that for $`c`$ values below their respective transitions, the various curves asymptote to a common curve, suggesting that the dynamics during the bursts is consistently influenced by the emergent set. As expected, the average duration of bursts increases with decreasing $`c`$.
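The burst-counting procedure just described reduces to a run-length scan over the trajectory. The sketch below operates on precomputed `x`, `y` arrays (the coupled-map trajectory itself is not reproduced in this excerpt):

```python
import numpy as np


def burst_durations(x, y, delta=0.05, min_len=10):
    # A burst is a maximal run of iterates with |x - y| > delta lasting at
    # least min_len iterations (10 in the text); it ends when the orbit
    # falls back within delta of the line x = y.
    away = np.abs(np.asarray(x, float) - np.asarray(y, float)) > delta
    durations, run = [], 0
    for a in away:
        if a:
            run += 1
        else:
            if run >= min_len:
                durations.append(run)
            run = 0
    if run >= min_len:            # trajectory may end mid-burst
        durations.append(run)
    return durations
```

Averaging the output over many noise realizations gives the quantity plotted in Figure 3.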
We also note that the mechanism for the creation of emergent sets outlined here leads one naturally to expect unstable dimension variability as a typical feature of emergent sets, and of coupled systems in general.
In conclusion, we emphasize that the emergent set framework developed here is quite general and applies to coupled systems of non-identical elements where previously studied bifurcation structures may be problematic or inappropriate. Furthermore, the effective decoherence transition can be estimated in such systems from experimental data.
We thank C. Grebogi, E. Ott, B. Hunt, and J. A. Yorke. This work was supported by NSF (IBN 9727739, E.B. and P.S.) and NIH (7K02MH01493, S.J.S.; 2R01MH50006, S.J.S., P.S., and B.J.G.). Figure 1 was generated using DYNAMICS .
no-problem/9912/quant-ph9912110.html | ar5iv | text | # Experiments on Sonoluminescence: Possible Nuclear and QED Aspects and Optical Applications
## Introduction
The lack of a complete explanation for some unusual characteristics of sonoluminescence has given rise to a few exotic suggestions about its nature. For our studies we selected the most intriguing ones, which are broadly in line with the general scientific directions of the Dubna research center. The search for the predicted nuclear and quantum-electrodynamic effects in sonoluminescence is the aim of the experiments described here, which are now in the stage of preparation.
## Experimental approaches
### Hot-plasma hypothesis.
It has been predicted that very high temperatures are possible at the moment of collapse of the sonoluminescing bubble. Under a special mode of upscaling sonoluminescence (a strong, short pressure pulse added to the ultrasound standing wave), the temperature presumably reaches a level at which observable traces of thermonuclear fusion in the D+D system appear. The additional short pressure pulse, with a magnitude of several bars, is to be synchronous with the sonoluminescence flash. If the system contains deuterium dissolved in heavy water (D<sub>2</sub>O), a neutron yield is expected. A value of about 0.1 nph is predicted for some optimal conditions Moss . Measurements of this low neutron rate are planned to be done by means of a triple-coincidence method using an original neutron counter. The neutron spectrometer was designed taking into account requirements for minimizing the $`\gamma `$-ray and random-coincidence backgrounds 12 . It is a calorimeter based on a liquid organic scintillator-thermalizer with <sup>3</sup>He proportional counters of thermalized neutrons distributed uniformly over the volume. The energy of thermalized neutrons is transformed into light signals in a scintillation detector. The signals from the proportional counters provide a ‘neutron label’ of an event. The triple coincidences are to be sorted by the following algorithm:
(signal from the sonoluminescence-light flash) & (the scintillator flash in moderator) & (the signal from the <sup>3</sup>He counter).
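A sketch of this coincidence sort on timestamped event lists is below. The coincidence-window width is an illustrative assumption (in the real detector the <sup>3</sup>He signal follows the scintillator flash after a thermalization delay, which a practical implementation would handle with a delayed gate):

```python
import bisect


def triple_coincidences(sl_flashes, scint_hits, he3_hits, window=1e-4):
    # Keep the SL-flash times accompanied, within +/- window seconds,
    # by both a scintillator flash and a He-3 counter signal.
    # The window value is an illustrative assumption, not from the text.
    scint = sorted(scint_hits)
    he3 = sorted(he3_hits)

    def has_match(times, t):
        i = bisect.bisect_left(times, t - window)
        return i < len(times) and times[i] <= t + window

    return [t for t in sl_flashes
            if has_match(scint, t) and has_match(he3, t)]
```

Sorting the hit lists once and using binary search keeps the selection fast even for long underground runs.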
The measurements are supposed to be performed in the underground laboratory of the Baksan Neutrino Observatory of the Institute for Nuclear Research of the Russian Academy of Sciences, Caucasus. The cosmic-ray shielding of this laboratory is about 5,000 m of w.e. Under these conditions a sensitivity of about 0.01 nph can be reached in a roughly three-month measurement cycle. The main components of the necessary devices and equipment are already at our disposal, including the sonoluminescence devices and the neutron counter.
The system of intensive pressure pulsing is under construction. Certainly, many efforts are necessary to modify the experimental setup and accommodate it for the measurements at Baksan. To reach the highest possible sensitivity in these experiments, it would be reasonable to use the so-called few-bubble sonoluminescence (FBSL) regime, in which several single bubbles are trapped within higher harmonic modes of the acoustic resonator, as reported in Geisler . We have observed concurrently lighting SL bubbles in the second harmonic under some special boundary conditions which have yet to be specified Belyaev .
### QED hypothesis.
Another idea connects sonoluminescence to the energy of zero-point vibrations of the vacuum (Casimir energy) 13 . To test this idea, the following two types of experiments have been designed.
(1) Transforming the short-wave part of the sonoluminescence spectrum to the region of $`\lambda `$ above the water absorption edge. To this end, certain specific luminophores should be selected; among them, finely dispersed powders of crystalline X-ray luminophores may be promising, since interaction with the lattice is essential in this case. Certainly, the possible influence of such suspensions on the cavitation properties of water has to be clarified beforehand. If the hypothesis on the vacuum-fluctuation nature of the sonoluminescence phenomenon is valid, then no short-wave emission with $`\lambda <200`$ nm is expected.
(2) Angular-correlation measurements of coincident photons in the vicinity of 180°. The QED model for sonoluminescence predicts the emission of time-correlated pairs of photons flying away in opposite directions.
Some other experiments are considered also. In particular,
(1) Direct testing of the so-called dissociation hypothesis (DH) of sonoluminescence. According to the DH, once stable single-bubble sonoluminescence conditions have been created, the inert gas alone remains inside the bubble. Due to the high temperature inside the bubble, all other components (nitrogen and oxygen) undergo intensive chemical interactions with each other and with water. This results in nitrogen oxides (NO, NO<sub>2</sub>) and NH<sub>3</sub> 14 . At present only indirect evidence for the DH has been found 15 ; 16 . For direct measurement of the products of dissociation, a small SL cell, completely closed to the atmosphere and containing a relatively small volume of water, will be needed. Long-time runs can be accomplished via computer-controlled monitoring of the system MATULA PC .
(2) Measurements of the spatial distribution of the light in water. Measurements of the time-averaged spectral distribution of the radiation emitted by the bubble will be done in the presence of the luminophore additives used in the spectrum-transformation experiments. The aim of these experiments is to infer the source brightness in the short-UV range by taking into account the diffusion of the UV emission in the water solutions or suspensions of the above luminophores; by comparing the results with the predictions of a quasi-stationary thermal-source model, estimates of the plasma temperature are expected to be obtained 17 .
(3) Study of the near-IR spectrum of the emission of a Xe-doped bubble. We will try to search for spectral lines of Xe similar to the distinctive line emissions observed in high-pressure xenon lamps.
## Development of a new type of super-fast pulsed light source
One of the most remarkable features of SBSL flashes is their brief duration. The most recent measurements show that many parameters, such as the nature and concentration of gases, temperature, pressure, resonance performance of the SL instrument, etc., influence the temporal and other properties. For example, larger flasks operating at lower frequencies cause the bubble to emit more light 18 . It is important that the light-pulse duration remains the same, within a few picoseconds, over the whole wavelength range .
The goals of this part of the experiments are:
(1) Studying the parameters controlling the duration of light pulses and other temporal characteristics of SBSL radiation. (2) Investigation of correlations in the intensity of flashes. In this experiment the statistical properties of the intensity of the light flashes will be studied to determine the short-term and long-term aspects of SL. The synchronicity between flashes has been shown to be remarkably high 19 . It is interesting to determine whether the intensity distribution is similarly narrow. (3) Development of SL light sources of single light pulses. To this end Kerr cells will be applied. The regular and relatively rare repetition of the flashes makes it possible to use photo-shutters with limited temporal resolution to obtain single pulses with temporal performance determined by the primary SBSL properties. (4) Development of methods to generate various series of single light pulses. (5) On this basis, development of simple, inexpensive light sources for physical research, first of all for the time-resolution calibration of fast photodetectors (PMTs and the like).
## Acknowledgments
The authors are grateful to Prof. L. A. Crum and Dr. T. J. Matula for very fruitful discussions, and to Prof. W. Lauterborn for his interest in this program.
This work was supported in part by the Russian Foundation for Basic Research, Grant No. 98–02–16884.
no-problem/9912/astro-ph9912506.html | ar5iv | text | # Precise Interplanetary Network Localization of the Bursting Pulsar GRO J1744-28
## 1 Introduction
The Bursting Pulsar GRO J1744-28 was discovered with the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma-Ray Observatory (GRO) in 1995 December (Fishman et al. 1995; Kouveliotou et al. 1996a). Between the discovery date and 1997 April, BATSE detected over 5800 type II bursts (i.e., accretion-powered, Lewin et al. 1996) from this source (Woods et al. 1999), many of which were also detected by instruments aboard the Rossi X-Ray Timing Explorer (RXTE: Giles et al. 1996) and Ulysses, among others (e.g., KONUS-WIND, Aptekar et al. 1998). The initial source localization was a 6° radius error circle (Fishman et al. 1995). Triangulation with BATSE and Ulysses resulted in a 24′ wide annulus which intersected this error circle, and the use of the BATSE Earth occultation technique reduced the area of the localization further (Hurley et al. 1995). Observations using the Oriented Scintillation Spectrometer Experiment (Kurfess et al. 1995; Strickman et al. 1996), and BATSE observations of a variable, pulsating (467 ms period) quiescent source associated with the bursting source (Finger et al. 1996a,b; Paciesas et al. 1996), resulted in a still smaller error box. A subsequent RXTE observation produced an ∼5 sq. arcminute error box (Swank 1996; Giles et al. 1996). Within this error box, Frail et al. (1996a,b) found a variable radio source. Observations of the region around the radio source position with the Advanced Satellite for Cosmology and Astrophysics (ASCA) revealed a pulsating, bursting X-ray source with the same 467 ms period (Dotani et al. 1996a,b) whose position was consistent with that of the radio source, but a later ROSAT observation (Kouveliotou et al. 1996b; Augusteijn et al. 1997) with higher angular resolution found an X-ray source within the 1′ radius ASCA error circle which was significantly displaced from the radio position.
The radius of the ROSAT error circle is 10″, corresponding to a 5″ statistical error and an 8″ systematic error, summed in quadrature. No confidence level can be quoted for the systematic error, but the statistical error corresponds to $`10\sigma `$ (J. Greiner, private communication).
Although the radio source was rejected as a possible counterpart to GRO J1744-28, optical and near-infrared observations of the ROSAT source region did uncover an object at the limit of the 10″ radius ROSAT error circle which appeared to be variable (Augusteijn et al. 1997; Cole et al. 1997). These observations were carried out at the European Southern Observatory (ESO) and at the Astrophysical Research Consortium’s Apache Point Observatory (APO). In some of the observations, it was not possible to rule out the apparent detection as an instrumental artifact (Augusteijn et al. 1997); in others, however, there was no reason to suspect that the detection was not valid (Cole et al. 1997).
It has been proposed that GRO J1744-28 is a low-mass X-ray binary system (LMXB), in which a neutron star with a dipole field $`B\sim 10^{11}\mathrm{G}`$ accretes matter from its companion. The rotation period of the neutron star is 467 ms, the orbital period of the system is 11.8 d, and the system is viewed nearly face-on (e.g. Daumerie et al. 1996). The distance is approximately that of the Galactic center.
Because of the difficulty of identifying the counterpart at various wavelengths in a crowded region of the sky towards the Galactic center, it is important to consider the details of the ROSAT observation. It was a short one (820 s) with the High Resolution Imager (HRI); only 273 photons were collected, and, in contrast to the ASCA observation, neither pulsations nor bursts were detected. (During the observation, no bursts were recorded by BATSE or Ulysses either, and the upper limit to the ROSAT pulsed flux is consistent with that derived from BATSE and RXTE observations.) From earlier ROSAT observations in which the source was not detected, it was concluded that the object was transient; based on the statistics of transient sources in the galactic plane, it was estimated that the probability of observing a random source unrelated to the bursting pulsar was less than 10<sup>-4</sup>. Since no energy spectra are recorded by the HRI, the ASCA spectrum was assumed to calculate the source flux; it was found to be $`2\times 10^{-9}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (unabsorbed) in the 0.1 - 2.4 keV energy range (Augusteijn et al. 1997). This observation took place in 1996 March. For comparison, the fluxes measured by ASCA in the 2 - 10 keV energy range were $`2\times 10^{-8}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in 1996 February and $`5\times 10^{-9}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in 1997 March (Nishiuchi et al. 1999). These fluxes would convert to unabsorbed 0.1-2.4 keV fluxes of $`9.7\times 10^{-9}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ and $`2.4\times 10^{-9}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ respectively using a simple extrapolation of the power-law continuum measured by Nishiuchi et al. (1999).
To summarize, there are good arguments both in favor of and against the idea that the true X-ray and optical/IR counterparts to GRO J1744-28 have been identified. In favor:
1. this was the only ROSAT source detected within the ASCA error circle,
2. it was transient, and
3. the variable optical/IR source was reliably detected in some of the observations of Cole et al. (1997).
Against:
1. no bursts or pulsations were observed by ROSAT (although the short duration of the observation may be to blame),
2. Augusteijn et al. (1997) estimate that the proposed optical/IR counterpart, if real, exhibited a change in its IR flux by a factor of 10 over a period of minutes, with no accompanying X-radiation; the detection could have been an artifact, and
3. the proposed optical/IR counterpart lies at the edge of or just outside the ROSAT error circle (depending on the astrometry).
Here we adopt the view that the true counterpart to GRO J1744-28 may not yet have been identified and localized with certainty, and we analyze the observations of bursts from GRO J1744-28 by Ulysses and BATSE in order to better constrain the position of the source.
## 2 Observations
We began this analysis by examining Ulysses GRB experiment (Hurley et al. 1992) data for each BATSE burst. Knowing the arrival time of a burst at BATSE, the coordinates of the Ulysses spacecraft, and the approximate source position, we extracted data for $`\pm `$ 100 s about the Ulysses crossing time. Although the bursting pulsar was a prolific source, it was not a particularly intense one, and this procedure resulted in the identification of only ∼500 bursts in the Ulysses data. Typically, these were count rate increases in the 3 - 6 $`\sigma `$ range. The vast majority of them were recorded in the untriggered data, which have a time resolution of 0.25 - 2 s, depending on the telemetry mode. We then retained only those bursts which were recorded by BATSE with 0.064 s time resolution, since these are the ones which can be cross-correlated with the Ulysses time histories with the best accuracy. Figure 1 shows one example. The final data set then consisted of 426 bursts, of which only 5 were recorded by Ulysses in triggered (32 ms resolution) data. The first event in this set was BATSE # 4042 on 1995 December 19, and the last was BATSE # 6085 on 1997 February 2.
Triangulation of a single burst results in an annulus of possible arrival directions whose width depends on the vector between the two spacecraft and the uncertainty in cross-correlating the two time histories (see, e.g. Hurley et al. 1999a). As examples, we show the first and last annuli in figure 2. Their widths are 0.9′ and 3.8′ (1 $`\sigma `$) respectively, and they intersect at an angle of ∼37°, approximately the same angle as the displacement of the Earth-Ulysses vector during the period between the bursts. In figure 3 we show the distribution of the 426 annulus half-widths. The average total width is 3.2′. We can predict what the approximate result might be of combining these annuli statistically. Two 3.2′ wide annuli intersecting at an angle of 37° form a box shaped roughly like a rhombus with diagonals 3.4′ and 10′. (The actual error region will be an ellipse inscribed in the rhombus, with minor and major axes somewhat smaller than the diagonals; for the purposes of this simple estimate we ignore this fact and base our calculation on the lengths of the diagonals, which will give us an overestimate of the final error region size.) The statistical combination of the 426 annuli should therefore be an elliptical error region with minor and major axes approximately 3.4′/$`\sqrt{426}`$ and 10′/$`\sqrt{426}`$, or 10″ and 29″ respectively. We show below that these are in fact close to, but larger than, the final dimensions.
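The 1/√N scaling estimate above is quick to check numerically; the arcminute/arcsecond units are as reconstructed from the surrounding numbers:

```python
import math

# Combining N independent annuli shrinks the error-region axes as 1/sqrt(N).
N = 426
minor = 3.4 * 60.0 / math.sqrt(N)   # 3.4 arcmin rhombus diagonal, in arcsec
major = 10.0 * 60.0 / math.sqrt(N)  # 10 arcmin rhombus diagonal, in arcsec
print(round(minor), round(major))   # -> 10 29
```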
The statistical method for combining the results of multiple triangulations has been outlined in Hurley et al. (1999b). It consists of defining a chisquare-distributed variate which is a function of an assumed source position in right ascension and declination, and of the parameters describing the triangulation annuli. Let $`\alpha ,\delta `$ be the right ascension and declination of the assumed source position, and let $`\alpha _i,\delta _i,\theta _i`$ be the right ascension, declination, and radius of the ith annulus. Then the angular distance d<sub>i</sub> between the two is given by
$$d_i=\theta _i-\mathrm{cos}^{-1}(\mathrm{sin}(\delta )\mathrm{sin}(\delta _i)+\mathrm{cos}(\delta )\mathrm{cos}(\delta _i)\mathrm{cos}(\alpha -\alpha _i))$$
(1)
If the 1 $`\sigma `$ uncertainty in the annulus width is $`\sigma _i`$, then
$$\chi ^2=\sum _i\frac{d_i^2}{\sigma _i^2}.$$
(2)
The assumed source position is varied to obtain a minimum chisquare; 1, 2, and 3 $`\sigma `$ equivalent confidence contours in $`\alpha `$ and $`\delta `$ are found by increasing $`\chi _{min}^2`$ by 2.3, 6.2, and 11.8.
The best fitting position for the 426 annuli is α(2000) = 17ʰ44ᵐ32ˢ, δ(2000) = −28°44′31.7″, and has a $`\chi _{min}^2`$ of 415.7 for 424 degrees of freedom (426 annuli, minus the two fitting parameters $`\alpha ,\delta `$). For a large number of degrees of freedom m, the $`\chi ^2`$ distribution approaches the normal distribution with standard deviation $`\sqrt{2m}`$ and mean m. Thus the value we obtain for $`\chi ^2`$ lies 0.27 standard deviations from the mean and is an acceptable fit. Figure 4 shows the best fitting position, the ROSAT and ASCA error circles, and the two slightly different positions for the proposed optical counterpart found by Augusteijn et al. (1997) and Cole et al. (1997) (these sources are likely to be one and the same, considering their quoted astrometric uncertainties), along with the 1, 2, and 3 $`\sigma `$ error ellipses obtained in this analysis. The Augusteijn et al. (1997) and the Cole et al. (1997) positions for the proposed counterpart lie at $`\chi _{min}^2`$+12.3 and $`\chi _{min}^2`$+15.3, or at the 99.8% and 99.95% confidence levels, respectively. The VLA source position is off the map; it lies at $`\chi _{min}^2`$+1709, and is definitely excluded as a candidate in this analysis. The parameters of the 1, 2, and 3 sigma error ellipses are given in table 1. Figure 5 shows the distribution of the distances between the individual annuli and the best fit position.
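The quoted confidence levels follow from the chisquare distribution with two degrees of freedom (the two fitted parameters), for which the cumulative probability has the simple closed form 1 − exp(−Δχ²/2):

```python
import math


def confidence_2dof(delta_chi2):
    # P(chi^2 increase < delta) for two fitted parameters (alpha, delta)
    return 1.0 - math.exp(-delta_chi2 / 2.0)


for d in (2.3, 6.2, 11.8):                        # 1, 2, 3 sigma contour levels
    print(round(100.0 * confidence_2dof(d), 2))   # 68.33, 95.5, 99.73
print(round(100.0 * confidence_2dof(12.3), 1))    # 99.8  (Augusteijn et al. position)
print(round(100.0 * confidence_2dof(15.3), 2))    # 99.95 (Cole et al. position)
```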
## 3 Accuracy of the Method
One of the design goals of the Ulysses mission was an absolute timing accuracy of several milliseconds. To confirm that no large errors exist in the spacecraft timing and ephemeris, end-to-end timing tests are routinely carried out, in which commands are sent to the GRB experiment at precisely known times, and the times of their execution onboard the spacecraft are recorded and compared with the expected times. Because of command buffering on the spacecraft, there are random delays in the execution of these commands, and the timing is verified to different accuracies during different tests. The tests just before, during, and just after the series of 426 bursts analyzed here took place on 1995 December 5, 1996 October 1, and 1997 February 19, and indicated that the timing errors at those times could not have exceeded 50, 3, and 1 ms respectively. For comparison, the 1 $`\sigma `$ uncertainties in these triangulations are all greater than 125 ms. This includes both the statistical errors, and a conservative estimate of possible unknown timing and spacecraft ephemeris errors.
Two other independent confirmations of the accuracy of the triangulation method are first, the excellent agreement between the VLA and triangulated positions of SGR1900+14, using the same statistical method as the one we employ here (Hurley et al. 1999b), and second, the agreement between the triangulated positions and the positions of gamma-ray bursts with optical and/or X-ray counterparts (e.g. Hurley et al. 1999c).
Although there is no reason to suspect timing errors, it is difficult to prove beyond a doubt that they do not exist, so we have investigated the effects which such errors would have. We distinguish between two hypothetical types. The first is a constant, systematic offset in the timing of one spacecraft. For example, if the difference in the burst arrival times at the two spacecraft were systematically overestimated by a constant value of the order of several hundred milliseconds for each burst, the result would be to increase the radii of all the annuli, leaving the annulus widths and the coordinates of the annulus centers unchanged. (The increase in each radius would be almost, but not exactly the same, since it depends on the value of the interspacecraft vector, which changes from burst to burst as the spacecraft move.) The new annuli would still be consistent with a best-fitting position with an acceptable $`\chi _{min}^2`$, but the position would shift by 15″ for every 100 ms of offset.
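For scale, the 15″-per-100-ms figure corresponds, through the triangulation relation cos θ = cΔT/D, to an annulus-radius shift |dθ| = cΔt/(D sin θ). The projected baseline used below (≈ 2.8 AU) is an assumption chosen to be representative of the Earth-Ulysses geometry, not a value quoted in the text:

```python
import math

C_LIGHT = 299_792_458.0      # speed of light, m/s
AU = 1.495978707e11          # astronomical unit, m


def radius_shift_arcsec(dt_s, proj_baseline_au):
    # |d(theta)| = c*dt / (D*sin(theta)); proj_baseline_au = D*sin(theta) in AU
    dtheta = C_LIGHT * dt_s / (proj_baseline_au * AU)
    return math.degrees(dtheta) * 3600.0


print(round(radius_shift_arcsec(0.100, 2.8)))   # -> 15 for a 100 ms offset
```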
The second is a random error whose average value is zero, but whose value for any given burst may take on positive or negative values up to several hundred milliseconds. To simulate the effects of such errors we have added a random number to the difference in the spacecraft arrival times for each burst; the number is drawn from a Gaussian distribution with mean zero and standard deviation 100 ms. The effect of such an error would again be to change only the radii of all the annuli, but by different amounts whose average would be zero. Since the annulus widths are unaffected, the $`\chi _{min}^2`$ for the best-fitting position increases, but not to the point where it becomes unacceptable or even suspect. The best-fitting position shifted by 6″ in this simulation.
Other types of errors can of course be imagined, but we reiterate that there is neither any indication that such errors exist, nor any means to disprove their existence entirely.
## 4 Discussion and Conclusions
Because pulsations and bursts were detected during the ASCA observation, there is no doubt that ASCA detected the X-ray counterpart to GRO J1744-28, but it is not well localized. If we accept the ROSAT source as the counterpart, then the combination of the 3 $`\sigma `$ error ellipse derived here and the ROSAT error circle gives a new, smaller error box whose area is ∼150 sq. arcsec., or about one half the ROSAT area. One reason to accept it is the fact that the error ellipse indeed overlaps it partially; we estimate the chance probability of an overlap between the two within the ASCA error circle to be ∼0.14. If we reject the ROSAT source as the counterpart, the appropriate error box for GRO J1744-28 becomes the entire 532 sq. arcsec. 3 $`\sigma `$ error ellipse. However, this implies that the X-ray counterpart must have faded to an undetectable flux during the ROSAT observation, or $`<5\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (unabsorbed).
In either case, the possible variable IR source is at or beyond the 3 $`\sigma `$ confidence levels of both the ROSAT and the triangulation regions. From a purely statistical point of view it is unlikely to be the counterpart, but it cannot be completely ruled out. The IPN error ellipse has been examined in four of the archived K’ images taken at APO and ESO. Their dates and limiting magnitudes are 1996 January 21 (APO: 14.4 $`\pm `$ 0.3), 1996 January 30 (APO: 15.2 $`\pm `$ 0.3), 1996 February 8 (ESO: 16.75 $`\pm `$ 0.3), and 1996 May (ESO: 17.1 $`\pm `$ 0.3). Comparing the first three with the last reveals no variable objects other than the previously identified IR source. However, based on the magnitudes of LMXB’s, Augusteijn et al. (1997) estimated that the quiescent counterpart to GRO J1744-28 might have a K magnitude ≳18.7, or at least two magnitudes fainter than the completeness limit of their observations, and Cole et al. (1997) estimated that observations down to K’=20 were needed. It is also possible that the true counterpart is considerably farther away than the Galactic center, or that absorption in this direction is greater than expected.
Fortunately, it may be possible to resolve the ambiguity. The X-ray counterpart can be detected in an observation with the Chandra High Resolution Camera (HRC) if its flux has not decreased by more than a few orders of magnitude. Detection of pulsations would lead to an unambiguous identification of the counterpart, and the 1″ HRC resolution would provide the smaller error box needed to carry out deeper searches for the IR counterpart.
KH is grateful to JPL for Ulysses support under Contract 958056, and to NASA for Compton Gamma-Ray Observatory support under grant NAG 5-3811.
no-problem/9912/gr-qc9912091.html | ar5iv | text | # Tetrad Gravity and Dirac’s Observables.
Talk given at the Conference “Constrained Dynamics and Quantum Gravity 99”, Villasimius (Sardinia, Italy), September 13-17, 1999.
In a recent series of papers the canonical reduction of a new formulation of tetrad gravity to a canonical basis of Dirac’s observables in the 3-orthogonal gauge in the framework of Dirac-Bergmann theory of constraints was studied.
This concludes the preliminary work in the research program aiming to give a unified description of the four interactions in terms of Dirac’s observables. See Ref. for a complete review of the achievements obtained so far:

i) The understanding of the mathematical structures involved, in particular of the Shanmugadhasan canonical transformations .

ii) The non-manifestly covariant canonical reduction to a canonical basis of Dirac’s observables of many relativistic systems, including relativistic particles, the Nambu string, the electromagnetic, Yang-Mills and Dirac fields, and the standard SU(3)×SU(2)×U(1) model of elementary particles in Minkowski spacetime. In the case of gauge theories, this required an understanding of all the pathologies of the constraint manifold associated with the Gribov ambiguity (gauge symmetries and gauge copies) and of the fact that the presence or absence of the Gribov ambiguity depends on the choice of the function space for the gauge fields and the gauge transformations. With the hypothesis that no physics is hidden in the Gribov ambiguity, one can work in special weighted Sobolev spaces where it is absent. Then, in the case of trivial principal bundles on constant-time hypersurfaces in Minkowski spacetime (no monopoles; winding number but not instanton number) and for suitable Hamiltonian boundary conditions on gauge potentials and gauge transformations (the behaviour at spatial infinity must be direction-independent) allowing the color charges, in the case of SU(3), to be well defined, one can do a complete canonical reduction of Yang-Mills theory, as in the electromagnetic case, and find the singularity-free physical Hamiltonian.

iii) The definition of the Wigner-covariant rest-frame instant form of dynamics (replacing the non-relativistic separation of the center-of-mass motion) for the timelike configurations of every isolated relativistic system (particles, strings, fields) in Minkowski spacetime.
This is obtained starting from the reformulation of the isolated system on arbitrary spacelike hypersurfaces (parametrized Minkowski theories) and making a restriction to the special foliation (3+1 splitting) of Minkowski spacetime with Wigner hyperplanes: they are determined by the given configuration of the isolated system, being orthogonal to its conserved total 4-momentum (when it is timelike). A general study of the relativistic center of mass, of the rotational kinematics and of Dixon multipolar expansions is now under investigation for N-body systems. See Refs. for the center of mass of a Klein-Gordon configuration.

iv) The Wigner-covariant reformulation of the previous canonical reductions in the rest-frame instant form, taking into account the stratification of the constraint manifold associated with the isolated system induced by the classification of its configurations according to the Poincaré orbits for the total 4-momentum.

v) The realization that in the rest-frame instant form there is a universal breaking of Lorentz covariance regarding only the decoupled canonical non-covariant “external” center of mass (the classical analogue of the Newton-Wigner position operator), while all the relative degrees of freedom are Wigner-covariant. The spacetime spreading of this non-covariance determines a classical unit of length, the Møller radius, which is determined by the value of the Poincaré Casimirs of the given configuration of the isolated system and which should be used as a physical ultraviolet cutoff in quantization. The Møller radius is a non-local effect of Lorentz signature: already at the classical level it is impossible to localize in a covariant way the canonical center of mass of an isolated extended relativistic system with a precision better than this radius. This classical problem occurs at those distances where quantum mechanics introduces pair creation (the Møller radius is of the order of the Compton wavelength of the isolated system).
Moreover, the Møller radius is a remnant in Minkowski spacetime of the energy conditions of general relativity. With the methods of Ref. one can find the “internal” 3-center of mass inside the Wigner hyperplane, whose vanishing is the gauge fixing for the constraints defining the rest frame. vi) Since the rest-frame instant form is a special classical background for the Tomonaga-Schwinger formulation of quantum field theory, there is now the possibility to start with a Wigner-covariant quantization of field theory on Wigner hyperplanes. Having built-in a covariant concept of “equal time”, one expects to find a Schroedinger-like equation for relativistic bound states (avoiding the problem of the spurious solutions of the Bethe-Salpeter equation), to be able to define Tomonaga-Schwinger asymptotic states (with the possibility of including bound states among them) and to use the Møller radius as a physical ultraviolet cutoff.
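Purely as an illustrative aside (ours, not part of the review), the order-of-magnitude statement above can be made concrete: for a configuration with Casimirs $`P^2=M^2c^2`$ and $`W^2=-M^2c^2S^2`$ (Pauli-Lubanski), the Møller radius is $`\rho =S/(Mc)`$, so a system whose spin is of order $`\mathrm{\hbar }`$ has $`\rho `$ equal to its reduced Compton wavelength. The function name and the electron-mass example are our choices:

```python
C = 2.99792458e8        # speed of light, m/s
HBAR = 1.054571817e-34  # reduced Planck constant, J s

def moller_radius(mass_kg, spin_J_s):
    """Moller radius rho = S/(M c), with M taken from P^2 = M^2 c^2 and
    S from the Pauli-Lubanski Casimir W^2 = -M^2 c^2 S^2."""
    return spin_J_s / (mass_kg * C)

# Example system: electron mass with spin of order hbar.  The Moller
# radius then coincides with the reduced Compton wavelength hbar/(m c).
m_e = 9.1093837015e-31  # kg
rho = moller_radius(m_e, HBAR)
```

For $`S=\mathrm{\hbar }`$ this gives $`\rho =\mathrm{\hbar }/(Mc)\approx 3.86\times 10^{13}`$ m for the electron mass, illustrating the identification with the Compton wavelength made in the text.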
The next conceptual problem was to apply all the technology developed for constrained systems in Minkowski spacetime to a formulation of general relativity able to incorporate the standard model of elementary particles and such that it could be possible to formulate a deparametrization scheme according to which the switching off of the Newton constant reproduces the description of the standard model on the Wigner hyperplanes in Minkowski spacetime. In this way, at least at the classical level, the four interactions would be described in a unified way and one could begin to think how to make their quantization in a way avoiding the existing incompatibility between quantum mechanics and general relativity.
Tetrad gravity, rather than metric gravity, was the natural formulation to be used for achieving this task for the following reasons: i) The fermions of the standard model couple naturally to tetrad gravity. ii) Tetrad gravity incorporates by definition the possibility to have the matter described by an arbitrary (geodesic or non-geodesic) timelike congruence of observers. In this way one can arrive at a Hamiltonian treatment of the precessional aspects of gravitomagnetism like the Lense-Thirring effect. iii) In tetrad gravity it is possible to replace the supermomentum constraints with SO(3) Yang-Mills Gauss laws associated with the spin connection and to solve them with the technology developed for the canonical reduction of Yang-Mills theories. Instead in metric gravity one does not know how to solve the supermomentum constraints.
Let us remark that till now supergravity and string theories have not been analyzed, since the emphasis is on learning how to make the canonical reduction in presence of constraints and from this point of view these theories only have bigger gauge groups and many more constraints to be solved.
Another important point is that the dominant role of the Poincaré group and of its representations in the theory of elementary particles in Minkowski spacetime requires formulating general relativity on non-compact spacetimes asymptotically flat at spatial infinity, so that the asymptotic Poincaré charges exist and are well defined. In the presence of matter these asymptotic Poincaré charges must reproduce the ten conserved Poincaré generators of the isolated system with the same matter content when the Newton constant is switched off.
All these requirements select a class of spacetimes with the following properties: i) They are pseudo-Riemannian globally hyperbolic 4-manifolds $`(M^4\approx R\times \mathrm{\Sigma },{}_{}{}^{4}g)`$ \[$`(\tau ,\stackrel{}{\sigma })\mapsto z^\mu (\tau ,\stackrel{}{\sigma })`$\]. These spacetimes have a global time function $`\tau (z)`$ and admit 3+1 splittings corresponding to foliations with spacelike hypersurfaces $`\mathrm{\Sigma }_\tau `$ (simultaneity 3-manifolds, which are also Cauchy surfaces). ii) They are non-compact and asymptotically flat at spatial infinity. iii) They are parallelizable 4-manifolds, namely they admit a spinor structure and have trivial orthonormal frame principal SO(3)-bundles over each simultaneity 3-manifold $`\mathrm{\Sigma }_\tau `$. iv) The non-compact parallelizable simultaneity 3-manifolds $`\mathrm{\Sigma }_\tau `$ are assumed to be topologically trivial, geodesically complete and diffeomorphic to $`R^3`$ \[$`\mathrm{\Sigma }_\tau \approx R^3`$\]. This implies the existence of global coordinate systems on $`\mathrm{\Sigma }_\tau `$, so that coordinate systems $`(\tau ,\stackrel{}{\sigma })`$, adapted to the simultaneity 3-surfaces $`\mathrm{\Sigma }_\tau `$, can be used for $`M^4`$. In this simplified case the geodesic exponential map is a diffeomorphism, there are no closed 3-geodesics and no conjugate Jacobi points on 3-geodesics. v) The cotriads on $`\mathrm{\Sigma }_\tau `$ and the associated 3-spin-connection on the orthogonal frame SO(3)-bundle over $`\mathrm{\Sigma }_\tau `$ are assumed to belong to suitable weighted Sobolev spaces so that the Gribov ambiguity is absent. This implies the absence of isometries (and of the associated Killing vectors) of the non-compact Riemannian 3-manifold $`(\mathrm{\Sigma }_\tau ,{}_{}{}^{3}g)`$.
vi) Diffeomorphisms on $`\mathrm{\Sigma }_\tau `$ and their extension to tensors are interpreted in the passive sense (pseudo-diffeomorphisms), following Ref., in accord with the Hamiltonian point of view that infinitesimal diffeomorphisms on tensors are generated by taking the Poisson bracket with the first class supermomentum constraints.
As action principle we use the ADM metric action with the 4-metric $`{}_{}{}^{4}g`$ rewritten in terms of general cotetrads on $`M^4`$. For the general cotetrads a new special parametrization has been found. Starting from $`\mathrm{\Sigma }_\tau `$-adapted cotetrads (Schwinger time gauge) whose 13 degrees of freedom are the lapse and shift functions and the cotriads on $`\mathrm{\Sigma }_\tau `$, the remaining 3 degrees of freedom are described by the 3 parameters which parametrize timelike Wigner boosts acting on the flat indices of the cotetrad (in the cotangent spaces over each point of $`\mathrm{\Sigma }_\tau `$). This implies that the flat indices acquire Wigner covariance (the time index becomes a Lorentz scalar, while the spatial indices become Wigner spin 1 3-indices) in each point of $`\mathrm{\Sigma }_\tau `$. These 3 boost parameters describe the transition from the $`\mathrm{\Sigma }_\tau `$-adapted Eulerian observers associated with the $`\mathrm{\Sigma }_\tau `$-adapted tetrads (this timelike congruence is surface-forming and is orthogonal to the $`\mathrm{\Sigma }_\tau `$’s) to an arbitrary (in general not surface-forming) timelike congruence of observers.
The ADM Lagrangian density is considered a function of these 16 fields: the lapse and shift functions, the cotriads on $`\mathrm{\Sigma }_\tau `$, the 3 boost parameters. In tetrad gravity there are 14 first class constraints (10 primary and 4 secondary ones): i) The momenta conjugate to the lapse and shift functions vanish, so that lapse and shifts are 4 arbitrary gauge variables \[arbitrariness in the choice of the standard of proper time and conventionality in the choice of the notion of simultaneity with the associated possible anisotropy in light propagation\]. ii) The momenta conjugate to the 3 boost parameters vanish (Abelianization of the Lorentz boost contained in the SO(3,1) group acting on the flat indices of the cotetrads): the 3 boost parameters are arbitrary gauge variables (the physics does not depend on the choice of the timelike congruence of observers). iii) There are 3 constraints describing the generators of SO(3) rotations on the flat indices of the cotetrads: the associated 3 angles (3 degrees of freedom among the 9 parametrizing the cotriads) are gauge variables (conventionality in the choice of the standard of non-rotation for a timelike congruence of observers). iv) There are 3 secondary constraints which are equivalent to the ADM supermomentum ones (it is possible to replace them with SO(3) Yang-Mills Gauss laws for the spin connection): 3 degrees of freedom, depending on the cotriads and their time derivatives, are arbitrary gauge variables describing the freedom in the choice of the 3-coordinates on $`\mathrm{\Sigma }_\tau `$ (arbitrariness in the choice of 3 standards of length). These 3 constraints generate the pseudo-diffeomorphisms. v) One secondary constraint coincides with the ADM superhamiltonian one. It can be shown that this constraint has to be interpreted as the Lichnerowicz equation determining the conformal factor of the 3-metric $`{}_{}{}^{3}g`$. 
Therefore, the last gauge variable is the momentum conjugate to this conformal factor \[it is non-locally connected with the trace of the extrinsic curvature of $`\mathrm{\Sigma }_\tau `$, also named York time \] and the gauge transformations generated by the superhamiltonian constraint correspond to the transition from one allowed 3+1 splitting of $`M^4`$ with spacelike hypersurfaces $`\mathrm{\Sigma }_\tau `$ to another one (the physics does not depend on the choice of the 3+1 splitting, like in parametrized Minkowski theories).
In conclusion, there are only two dynamical degrees of freedom hidden in the cotriads on $`\mathrm{\Sigma }_\tau `$ and they describe the gravitational field. Their determination requires a complete breaking of general covariance, namely a complete fixation of the gauge degrees of freedom (this amounts to the choice of a physical laboratory where to do all the measurements).
Let us remark that the fixation of the 3-coordinates and of the 3 rotation angles are inter-related, because the associated constraints do not have vanishing Poisson brackets. Moreover, there are restrictions on the gauge transformations when one restricts himself to the solutions of Einstein’s equations: according to the general theory of constraints one has to start by adding the gauge fixings to the secondary constraints; the requirement of their time constancy generates the gauge fixings of the primary constraints. Therefore, since the supermomentum constraints are secondary ones, the choice of the 3-coordinates on $`\mathrm{\Sigma }_\tau `$ determines the choice of the shift functions (i.e. of the associated convention for simultaneity in $`M^4`$; the Einstein convention can be applied only when the shift functions vanish). Analogously, the choice of the 3+1 splitting of $`M^4`$ (fixation of the momentum conjugate to the conformal factor of the 3-metric) determines the choice of the lapse function (namely of how the 3-surfaces $`\mathrm{\Sigma }_\tau `$ are packed in the chosen 3+1 splitting of $`M^4`$).
The next problem is the choice of the boundary conditions for the 16 fields in the cotetrads and for the allowed Hamiltonian gauge transformations. The existence of the Poisson brackets and the differentiability of the Dirac Hamiltonian require the addition of a surface term to the Dirac Hamiltonian containing the strong ADM Poincaré charges: they are surface integrals at spatial infinity, which differ from the weak ADM Poincaré charges (volume integrals) by terms vanishing due to the secondary constraints. In spacetimes asymptotically flat at spatial infinity besides the 10 asymptotic Poincaré charges there is a double infinity of Abelian supertranslations (associated with the asymptotic direction-dependent symmetries of these spacetimes ). Their presence generates an infinite-dimensional algebra of asymptotic charges which contains an infinite number of conjugate Poincaré subalgebras: this forbids the identification of a well defined angular momentum in general relativity. The requirement of absence of supertranslations, so to have a uniquely defined asymptotic Poincaré algebra, puts severe restrictions on the boundary conditions of the 16 fields and of the gauge transformations.
Following Dirac , we assume the existence of asymptotic flat coordinates for $`M^4`$. It can be shown that this implies the restriction of the allowed 3+1 splittings of $`M^4`$ to those whose associated foliations have the leaves $`\mathrm{\Sigma }_\tau `$ approaching spacelike Minkowski hyperplanes at spatial infinity. The absence of supertranslations requires that this approach must happen in a direction-independent way and that the lapse and shift functions can be consistently written as an asymptotic part (equal to the lapse and shifts of spacelike Minkowski hyperplanes) plus a part which vanishes at spatial infinity. Since spacelike Minkowski hyperplanes are described in phase space by 10 configuration variables (an origin plus an orthonormal tetrad) plus the conjugate momenta (see the parametrized Minkowski theories), Dirac adds these 20 variables to the ADM phase space, but then he also adds 10 first class constraints to the Dirac Hamiltonian (so that the 10 configurational variables are gauge variables). These constraints determine the 10 extra momenta in terms of the 10 weak Poincaré charges.
The satisfaction of all the requirements on the boundary conditions of the 16 fields and of the gauge transformations, in particular the absence of supertranslations, leads to the following results. The Hamiltonian formulation of both metric and tetrad gravity is well posed for the class of Christodoulou-Klainermann spacetimes , which are near Minkowski spacetime in a norm sense and avoid the singularity theorems not admitting a conformal completion, but which contain asymptotic gravitational radiation at null infinity (even if with a weaker peeling of the Weyl tensor). The allowed 3+1 splittings for these spacetimes have all the leaves $`\mathrm{\Sigma }_\tau `$ approaching, in a direction-independent way, those special Minkowski hyperplanes asymptotically orthogonal to the weak ADM Poincaré 4-momentum. These asymptotic spacelike hyperplanes are the analogue of the Wigner hyperplanes of parametrized Minkowski theories, and, when matter is present, allow one to deparametrize tetrad gravity so as to obtain the description of the same matter in the rest-frame instant form on Wigner hyperplanes in Minkowski spacetime when the Newton constant is switched off. Therefore, this Hamiltonian treatment of the Christodoulou-Klainermann spacetimes is the rest-frame instant form of general relativity; like in parametrized Minkowski theories, there is a decoupled canonical non-covariant “external” center of mass (a point particle clock) now located near spatial infinity, while all the physical degrees of freedom are relative variables (a weak form of Mach principle). These asymptotic hyperplanes are privileged observers dynamically selected by the given configuration of the gravitational field (they replace the “fixed stars”) and not a priori given like in bimetric theories or in theories with a background.
It can be shown that given an asymptotic tetrad determined by the ADM 4-momentum, this tetrad can be transported in each point of $`\mathrm{\Sigma }_\tau `$ by using the Frauendiener equations with the Sen connection (replacing the Sen-Witten equations for spinors in the case of triads and tetrads), so determining a dynamically selected privileged timelike congruence of observers. These spacelike hypersurfaces $`\mathrm{\Sigma }_\tau `$ can be called Wigner-Sen-Witten (WSW) hypersurfaces.
Given this framework, it is possible to solve the rotation and supermomentum constraints and to find parametrizations of the cotriads in terms of: i) the 3 gauge rotation angles; ii) the 3 gauge parameters associated with the pseudodiffeomorphisms, namely with the choice of the 3-coordinates; iii) the conformal factor of the 3-metric; iv) the two physical degrees of freedom describing the gravitational field. Each choice of the 3-coordinates on $`\mathrm{\Sigma }_\tau `$ turns out to be equivalent to the choice of a particular parametrization of the cotriad (see Refs. for previous attempts). In this way 13 of the 14 first class constraints are under control and we can do a Shanmugadhasan canonical transformation adapted to these 13 constraints.
We have till now studied the most natural choice of 3-coordinates, which corresponds to the 3-orthogonal gauges in which the 3-metric is diagonal (they are the nearest ones to the standards of the non-inertial physical laboratories on the earth). The 3 rotation angles and the 3 boost parameters are put equal to zero. As a last gauge fixing we put equal to zero the momentum conjugate to the conformal factor of the 3-metric, which, in turn, must be determined as solution of the Lichnerowicz equation in this gauge. By going to Dirac brackets with respect to the 14 constraints and the 14 gauge-fixings, we remain with two pairs of canonical variables , describing the Hamiltonian physical degrees of freedom or Dirac’s observables of the gravitational field in this completely fixed 3-orthogonal gauge (complete breaking of general covariance). The physical Hamiltonian for the evolution in the mathematical time parametrizing the WSW hypersurfaces $`\mathrm{\Sigma }_\tau `$ of the foliation is the weak (volume form) ADM energy : it depends only on the Dirac’s observables, even if part of the dependence is through the conformal factor of the 3-metric, whose form is unknown since no one is able to solve the Lichnerowicz equation. The physical times (atomic clocks, ephemeris times,..) have to be locally correlated to this mathematical time.
Also the Komar-Bergmann individuating fields , needed for a physical identification of the points of the spacetime $`M^4`$ (due to general covariance the mathematical points of $`M^4`$ have no physical meaning in absence of a background; see Einstein’s hole argument), may be re-expressed in terms of Dirac’s observables.
Finally the Poincaré Casimirs associated with the asymptotic weak Poincaré charges allow one to define the Møller radius (and a possible ultraviolet cutoff in a future attempt to make a quantization of completely gauge-fixed tetrad gravity) also for the gravitational field.
The main tasks for the future are:
A) Make the canonical quantization of scalar electrodynamics in the rest-frame instant form on the Wigner hyperplanes, which should lead to a particular realization of Tomonaga-Schwinger quantum field theory, avoiding the no-go theorems of Refs.. The Møller radius should be used as a physical ultraviolet cutoff for the point splitting technique and the results of Refs. about the infrared dressing of asymptotic states in S matrix theory should help to avoid the ‘infraparticle’ problem.
B) Study the linearization of tetrad gravity in the 3-orthogonal gauge to reformulate the theory of gravitational waves in this gauge.
C) Study the N-body problem in tetrad gravity at the lowest order in the Newton constant (action-at-a-distance plus linearized tetrad gravity). See Ref. for preliminary results on the action at a distance hidden in Einstein’s theory at the lowest order in the Newton constant, which agree with the old results of Ref..
D) Study the perfect fluids both in the rest-frame instant form in Minkowski spacetime and in tetrad gravity.
E) Make the Hamiltonian reformulation of the Newman-Penrose formalism by using Hamiltonian null tetrads and study its connection with the 2+2 decompositions of $`M^4`$.
F) Begin the study of the standard model of elementary particles coupled to tetrad gravity starting from the Einstein-Maxwell system.
Quantum kinetic theory VII: The influence of vapor dynamics on condensate growth
## I Introduction
The fundamental process in the growth of a Bose-Einstein condensate is that of bosonic stimulation, by which atoms are scattered into and out of the condensate at rates enhanced by a factor proportional to the number of atoms in the condensate. This was first quantitatively considered by Gardiner et al. , in a paper which treated the idealized case of the growth of a condensate from a nondepletable “bath” of atoms at a fixed positive chemical potential $`\mu `$ and temperature $`T`$. This gave rise to a simple and elegant formula known as the simple growth equation
$`\dot{n}_0=2W^+(n_0)\left\{\left(1-\text{e}^{[\mu _\mathrm{C}(n_0)-\mu ]/kT}\right)n_0+1\right\},`$ (1)
in which $`n_0`$ is the population of the condensate, $`\mu `$ is the chemical potential of the thermal cloud, and $`\mu _\mathrm{C}(n_0)`$ is the condensate eigenvalue. The prefactor $`W^+(n_0)`$ is a rate with an expression derived from quantum kinetic theory , which was estimated approximately in by using a classical Boltzmann distribution. To go beyond the Boltzmann approximation for $`W^+`$ involves a very much more detailed treatment of the populations of the trap levels with energy less than $`\mu `$, since the equilibrium Bose-Einstein distribution for $`\mu >0`$ is not consistent with energies less than $`\mu `$. In other words, the populations of the lower trap levels cannot be treated as time-independent, and thus the dynamics of growth must include at least this range of trap levels as well as the condensate level. Therefore in we considered a less simplified model, covering a range of energies up to a cut-off, $`E_R`$, above which the system was assumed to be a thermal cloud with a fixed temperature and chemical potential. Equations were derived for the rate of growth of these levels along with the condensate, and the rates at which particles from the thermal bath scattered these quasi-particles between levels within the condensate band. The results of calculations showed that a speedup of the growth rate by a factor of the order of 3–4 compared to the simple growth equation could be expected, and that the initial part of the growth curve would be modified, leading to a much sharper onset of the initiation of the condensate growth.
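The qualitative behaviour of Eq. (1) is easy to exhibit numerically. The sketch below is ours: it holds $`W^+`$ constant, models $`\mu _\mathrm{C}(n_0)`$ by its large-$`n_0`$ Thomas-Fermi scaling $`\alpha n_0^{2/5}`$, and uses arbitrary illustrative parameter values in trap units with $`k_B=1`$, so it shows only the characteristic shape of the growth curve, not the quantitative results of the papers cited above:

```python
import math

def mu_C(n0, alpha):
    # Toy Thomas-Fermi scaling of the condensate eigenvalue (illustrative).
    return alpha * n0 ** 0.4

def growth_rate(n0, mu, kT, W_plus, alpha):
    """Right-hand side of the simple growth equation (1)."""
    x = (mu_C(n0, alpha) - mu) / kT
    return 2.0 * W_plus * ((1.0 - math.exp(x)) * n0 + 1.0)

def grow(mu=1.0, kT=1.0, W_plus=1.0, alpha=0.002, dt=1e-3, t_max=60.0):
    """Forward-Euler integration of Eq. (1) from an empty condensate."""
    n0, t, traj = 0.0, 0.0, [(0.0, 0.0)]
    for _ in range(int(t_max / dt)):
        n0 += dt * growth_rate(n0, mu, kT, W_plus, alpha)
        t += dt
        traj.append((t, n0))
    return traj

traj = grow()
# The "+1" term seeds the growth, bosonic stimulation makes it initially
# exponential, and the curve saturates when mu_C(n0) reaches mu, i.e. at
# n0 of order (mu/alpha)**(5/2): the characteristic S-shaped growth curve.
```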
The only experiment that has been done on condensate growth was then under way. In these experiments clouds of sodium atoms were cooled to just above the transition temperature, at which point the high energy tail of the distribution was rapidly removed by a very severe RF “cut”, where the frequency of the RF field was quickly ramped down. After a short period of equilibration, the resulting vapor distribution was found to be similar to the assumptions of our theoretical treatments, and condensate growth followed promptly. The results obtained were fitted to solutions of the simple growth equation (1). When experimental results became available a speedup of about the predicted factor was found, and indeed the higher temperature results agreed very well with the theoretical predictions. At lower temperatures there was still some disparity; the theory predicted a slower rate of growth with decreasing temperature, but experimentally the opposite was observed.
The situation in which we now find ourselves leaves no alternative other than to address the remaining approximations. In our previous work we have made four major approximations:
* The part of the vapor with energies higher than $`E_R`$ has been treated as being time independent.
* The energy levels above the condensate level were modified phenomenologically to account for the fact that they must always be greater than the condensate chemical potential, which rises as the condensate grows.
* We treated all levels as being particle-like, on the grounds that detailed calculations have shown that only a very small proportion of excitations of a trapped Bose gas are not of this kind.
* We have used the quantum Boltzmann equation in an ergodic form, in which all levels of a similar energy are assumed to be equally occupied.
In this paper we will no longer require the first two of these approximations. Abandoning the first means that we are required to take care of all kinds of collisions which can occur, and thus treat the time-dependence of all levels. This comes at the cost of a dramatic increase in both the computation time required (hours rather than seconds) and the precision of algorithms required. We also use a density of states which should be close to the actual density of states as the condensate grows, thereby avoiding the phenomenological modification of energy levels. However, we still treat all of the levels as being particle-like, since it seems unlikely that the few non-particle-like excitations will have a significant effect on the growth as a whole. The ergodic form of the quantum Boltzmann equation is needed to make the computations tractable, and is of necessity retained.
## II Formalism
The basis of our method is quantum kinetic theory, a full exposition of which is given in Ref. . These papers develop a complete framework for the study of a trapped Bose gas as a set of master equations. The full solution of these equations is not feasible, however, and therefore some type of approximation must be made. The basic structure of the method used here is essentially the same as that of QKVI, the major difference being that all time-dependence of the distribution function is retained. As explained in , quantum kinetic theory leads to a model which can be viewed as a modification of the quantum Boltzmann equation in which
* The condensate wavefunction and energy eigenvalue—the condensate chemical potential $`\mu _\mathrm{C}(n_0)`$—are given by the solution of the time-independent Gross-Pitaevskii equation with $`n_0`$ atoms.
* The trap levels above the condensate level are the quasiparticle levels appropriate to the condensate wavefunction. This leads to a density of states for the trap levels which is substantially modified, as discussed below in Sect. III B.
* The transfer of atoms between levels is given by a modified quantum Boltzmann equation (MQBE) in the energy representation. This makes the ergodic assumption: that the distribution function depends only on energy.
### A The ergodic form of the quantum Boltzmann equation
The derivation of the ergodic form of the quantum Boltzmann equation used by is particular to the undeformed harmonic potential, and we give here a derivation appropriate to our case, in which the density of states can change with time as the condensate grows. We bin the phase space into energy bands labeled by the index $`n`$ with energies in a range $`D_n(t)\equiv (\epsilon _n(t)-\frac{\delta \epsilon _n(t)}{2},\epsilon _n(t)+\frac{\delta \epsilon _n(t)}{2})`$ of width $`\delta \epsilon _n(t)`$, and these widths change in time so that the number of states within each bin, $`g_n`$, is constant in time.
Starting from the full quantum Boltzmann equation the ergodic approximation is expressed in terms of this binned description as follows: We set $`f(𝐱,𝐊,t)`$ equal to a constant, $`f_n`$, when $`ϵ(𝐱,𝐊,t)\equiv \hbar ^2𝐊^2/2m+V_{\mathrm{eff}}(𝐱,t)`$ is inside the $`n`$th bin, i.e., $`ϵ(𝐱,𝐊,t)\in D_n(t)`$. (Here $`V_{\mathrm{eff}}(𝐱,t)`$ is the potential of the trap, as modified by the mean field arising from the presence of the condensate, as explained in QKV and QKVI.)
Thus we can approximate
$`{\displaystyle \frac{\partial f(𝐱,𝐊,t)}{\partial t}}\approx {\displaystyle \frac{\partial f_n}{\partial t}}\text{ if }ϵ(𝐱,𝐊,t)\in D_n(t),`$ (2)
In order to derive the ergodic quantum Boltzmann equation, we define the indicator function $`\chi _n(𝐱,𝐊,t)`$ of the $`n`$th bin $`D_n(t)`$ by
$`\chi _n(𝐱,𝐊,t)`$ $`=`$ $`1\text{ if }ϵ(𝐱,𝐊,t)\in D_n(t),`$ (3)
$`=`$ $`0\text{ otherwise.}`$ (4)
The number of states in the bin $`n`$ will be given by $`g_n=\int d^3𝐱\,d^3𝐊\,\chi _n(𝐱,𝐊,t)/h^3`$, and is held fixed.
The formal statement of the binned approximation is
$`f(𝐱,𝐊,t)\approx {\displaystyle \sum _n}f_n\chi _n(𝐱,𝐊,t),`$ (5)
and the ergodic quantum Boltzmann equation is derived by substituting (5) into the various parts of the quantum Boltzmann equation as follows. For the time derivative part we make this replacement, and project onto $`D_n(t)`$, getting
$`{\displaystyle \int \frac{d^3𝐱\,d^3𝐊}{h^3}\chi _n(𝐱,𝐊,t)\frac{\partial f(𝐱,𝐊,t)}{\partial t}}`$ $`\approx `$ $`g_n{\displaystyle \frac{\partial f_n}{\partial t}}.`$ (6)
\[Note that the expansion (5) would mean that delta function singularities at the upper and lower boundaries of $`D_n(t)`$ would arise by differentiating $`f(𝐱,𝐊,t)`$ as defined in (5), but the condition that $`g_n`$ be fixed means that these are of equal and opposite weight, and cancel when integrated over $`D_n(t)`$, giving a result consistent with (6).\] We now replace $`\partial f(𝐱,𝐊,t)/\partial t`$ on the left hand side of (6) by the collision integral that appears on the right hand side of the quantum Boltzmann equation, and substitute for $`f(𝐱,𝐊,t)`$ in the collision integral using (5). \[The streaming terms give no contribution, since the form (5) is a function of the energy $`ϵ(𝐱,𝐊,t)`$.\]
This leads to the ergodic quantum Boltzmann equation in the form
$`g_n{\displaystyle \frac{\partial f_n}{\partial t}}`$ $`=`$ $`{\displaystyle \frac{4a^2h^3}{m^2}}{\displaystyle \sum _{pqr}}\left\{f_pf_q(1+f_r)(1+f_n)-(1+f_p)(1+f_q)f_rf_n\right\}{\displaystyle \int \frac{d^3𝐱\,d^3𝐊}{h^3}d^3𝐊_1d^3𝐊_2d^3𝐊_3}`$ (9)
$`\times \chi _n(𝐱,𝐊_1,t)\chi _p(𝐱,𝐊_2,t)\chi _q(𝐱,𝐊_3,t)\chi _r(𝐱,𝐊,t)\delta \left(𝐊_1+𝐊_2-𝐊_3-𝐊\right)`$
$`\times \delta \left(ϵ(𝐱,𝐊_1,t)+ϵ(𝐱,𝐊_2,t)-ϵ(𝐱,𝐊_3,t)-ϵ(𝐱,𝐊,t)\right).`$
The final integral is now approximated by the method of to give \[$`a`$ and $`\overline{\omega }`$ are defined in Sect. III\]
$`g_n{\displaystyle \frac{\partial f_n}{\partial t}}`$ $`=`$ $`{\displaystyle \frac{8ma^2\overline{\omega }^2}{\pi \hbar }}{\displaystyle \sum _{pqr}}\left\{f_pf_q(1+f_r)(1+f_n)-(1+f_p)(1+f_q)f_rf_n\right\}M(p,q,r,n)\mathrm{\Delta }(p,q,r,n).`$ (10)
Here $`\mathrm{\Delta }(p,q,r,n)`$ is a function which expresses the overall energy conservation, and is defined by
$`\mathrm{\Delta }(p,q,r,n)`$ $`=`$ $`1\text{ when }|\epsilon _p+\epsilon _q-\epsilon _r-\epsilon _n|\le {\displaystyle \frac{\left|\delta \epsilon _p+\delta \epsilon _q+\delta \epsilon _r+\delta \epsilon _n\right|}{2}},`$ (11)
$`=`$ $`0\text{ otherwise}.`$ (12)
Because we approximate $`f(𝐱,𝐊,t)`$ by a constant value within each $`D_n(t)`$, energy conservation means that $`\overline{E}\equiv \sum _n\epsilon _ng_nf_n(t)`$ is constant. This follows from energy conservation in the full quantum Boltzmann equation, which also implies that
$`{\displaystyle \sum _{rn}}\mathrm{\Delta }(p,q,r,n)M(p,q,r,n)(\epsilon _r+\epsilon _n)`$ (13)
$`=(\epsilon _p+\epsilon _q){\displaystyle \sum _{rn}}\mathrm{\Delta }(p,q,r,n)M(p,q,r,n)`$ (14)
This is the limit to which the binning procedure defines energy conservation.
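These conservation statements can be illustrated with a minimal sketch of the collision sum of Eq. (10) (ours: the matrix element $`M(p,q,r,n)`$ is replaced by 1 and the physical prefactor $`8ma^2\overline{\omega }^2/\pi \hbar `$ by a constant $`\kappa `$, purely for illustration). The window function $`\mathrm{\Delta }`$ of Eqs. (11)–(12) is implemented literally; with bins narrow enough that only exactly energy-conserving quadruples are admitted, the antisymmetry of the collision flux under $`(p,q)\leftrightarrow (r,n)`$ makes both $`\sum _ng_n\dot{f}_n`$ and $`\sum _n\epsilon _ng_n\dot{f}_n`$ vanish:

```python
def delta_allowed(eps, deps, p, q, r, n):
    """Energy-conservation window Delta(p,q,r,n) of Eqs. (11)-(12)."""
    return (abs(eps[p] + eps[q] - eps[r] - eps[n])
            <= 0.5 * abs(deps[p] + deps[q] + deps[r] + deps[n]))

def collision_rhs(f, g, eps, deps, kappa=1.0):
    """Collision sum of Eq. (10) with M(p,q,r,n) -> 1 and the physical
    prefactor -> kappa (both replacements are for illustration only)."""
    nb = len(f)
    dfdt = [0.0] * nb
    for p in range(nb):
        for q in range(nb):
            for r in range(nb):
                for n in range(nb):
                    if not delta_allowed(eps, deps, p, q, r, n):
                        continue
                    flux = (f[p] * f[q] * (1 + f[r]) * (1 + f[n])
                            - (1 + f[p]) * (1 + f[q]) * f[r] * f[n])
                    dfdt[n] += kappa * flux / g[n]
    return dfdt

# Equally spaced bins with widths narrow enough that Delta only admits
# exactly energy-conserving quadruples (eps_p + eps_q = eps_r + eps_n).
eps = [1.0, 2.0, 3.0, 4.0]
deps = [0.1] * 4
g = [1.0] * 4
f = [0.5, 0.25, 0.125, 0.0625]          # out of equilibrium on purpose
dfdt = collision_rhs(f, g, eps, deps)
dN = sum(gi * d for gi, d in zip(g, dfdt))                # number drift
dE = sum(ei * gi * d for ei, gi, d in zip(eps, g, dfdt))  # energy drift
```

Widening the $`\delta \epsilon _n`$ lets $`\mathrm{\Delta }`$ admit quadruples that conserve energy only to within the bin widths; the energy drift is then bounded by those widths, which is precisely the sense of Eqs. (13)–(14).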
## III Details of model
The most important aspect of our model is the inclusion of the mean field effects of the condensate. As the population of the condensate increases, the absolute energy of the condensate level also rises due to the atomic interactions. This results in a compression in energy space of the quantum levels immediately above the condensate (see Fig. 1), and has an important effect on the evolution of the cloud.
The correct description of the quantum levels immediately above the ground state when there is a significant condensate population requires a quasiparticle transformation. This is computationally difficult, however, so we make use of a single-particle approximation for these states. This should be reasonable, as most of the growth dynamics will involve higher lying states that will be almost unaffected by the presence of the condensate. In we did this using a linear interpolation of the density of states; here we use an approximate treatment based on the Thomas-Fermi approximation.
### A Condensate chemical potential $`\mu _\mathrm{C}(n_0)`$
We consider a harmonic trap with a geometric mean frequency of
$`\overline{\omega }`$ $`=`$ $`(\omega _x\omega _y\omega _z)^{1/3}.`$ (15)
We include the mean field effects via a Thomas-Fermi approximation for the condensate eigenvalue, which is directly related to the number of atoms in the condensate mode. As in , we use a modified form of this relation in order to give a smooth transition to the correct harmonic oscillator value when the condensate number is small:
$`\mu _\mathrm{C}(n_0)=\alpha \left[n_0+(3\hbar \overline{\omega }/2\alpha )^{5/2}\right]^{2/5},`$ (16)
where $`\alpha =(15a\overline{\omega }^3m^{1/2}\hbar ^2/4\sqrt{2})^{2/5}`$, and $`a`$ is the atomic *s*-wave scattering length. Thus, for $`n_0=0`$ we have $`\mu _\mathrm{C}(0)=\epsilon _0=3\hbar \overline{\omega }/2`$.
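A quick check of the interpolation (16) (our sketch, in trap units $`\hbar \overline{\omega }=1`$, with an arbitrary illustrative value of $`\alpha `$ rather than the physical one built from $`a`$, $`m`$ and $`\overline{\omega }`$):

```python
def mu_C(n0, alpha, hbar_omega=1.0):
    """Interpolated condensate eigenvalue of Eq. (16): Thomas-Fermi for
    large n0, approaching the oscillator value 3*hbar_omega/2 as n0 -> 0."""
    return alpha * (n0 + (1.5 * hbar_omega / alpha) ** 2.5) ** 0.4

# alpha = 0.01 is chosen only for illustration (trap units).
alpha = 0.01
harmonic_limit = mu_C(0.0, alpha)                    # ~ 3/2 in trap units
tf_ratio = mu_C(1e7, alpha) / (alpha * 1e7 ** 0.4)   # ~ 1 in the TF regime
```

The eigenvalue interpolates smoothly and monotonically between the two limits, which is all that is required of Eq. (16).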
### B Density of states $`\overline{g}(\epsilon )`$
We assume a single particle energy spectrum with a Bogoliubov-like dispersion relation, as in Timmermans et al. , which leads to a density of states of the form
$`\overline{g}(\epsilon ,n_0)`$ $`=`$ $`{\displaystyle \frac{4}{\pi }}{\displaystyle \frac{\mu _\mathrm{C}(n_0)^2}{(\hbar \overline{\omega })^3}}\left[\left({\displaystyle \frac{\epsilon }{\mu _\mathrm{C}(n_0)}}-1\right){\displaystyle \int _0^1}𝑑x\sqrt{1-x}{\displaystyle \frac{\left[\sqrt{(\epsilon /\mu _\mathrm{C}(n_0)-1)^2+x^2}-x\right]^{1/2}}{\sqrt{(\epsilon /\mu _\mathrm{C}(n_0)-1)^2+x^2}}}+{\displaystyle \int _1^{\epsilon /\mu _\mathrm{C}(n_0)}}𝑑x\sqrt{x}\sqrt{{\displaystyle \frac{\epsilon }{\mu _\mathrm{C}(n_0)}}-x}\right].`$ (17)
This integral can be carried out analytically; the result is
$`\overline{g}(\epsilon ,n_0)`$ $`=`$ $`{\displaystyle \frac{\epsilon ^2}{2(\hbar \overline{\omega })^3}}\left\{1+q_1\left(\mu _\mathrm{C}(n_0)/\epsilon \right)+\left(1-{\displaystyle \frac{\mu _\mathrm{C}(n_0)}{\epsilon }}\right)^2q_2\left({\displaystyle \frac{1}{\epsilon /\mu _\mathrm{C}(n_0)-1}}\right)\right\},`$ (19)
where
$`q_1(x)`$ $`=`$ $`{\displaystyle \frac{2}{\pi }}\left[\sqrt{x}\sqrt{1-x}(1-2x)-\mathrm{sin}^{-1}(\sqrt{x})\right],`$ (20)

$`q_2(x)`$ $`=`$ $`{\displaystyle \frac{4\sqrt{2}}{\pi }}\left[\sqrt{2x}+x\mathrm{log}\left({\displaystyle \frac{1+x+\sqrt{2x}}{\sqrt{1+x^2}}}\right)-\left\{{\displaystyle \frac{\pi }{2}}+\mathrm{sin}^{-1}\left({\displaystyle \frac{x-1}{\sqrt{1+x^2}}}\right)\right\}\right].`$ (21)
This is plotted in Fig. 2, along with that for the ideal gas. Thus the density of states of the trap varies smoothly as the condensate grows.
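The closed forms above can be cross-checked against the integral representation of Eq. (17) by direct quadrature. The sketch below is only such a consistency check, in units $`\hbar \overline{\omega }=1`$; the placement of the minus signs follows our reading of the printed equation.

```python
import numpy as np

def _midpoint(f, a, b, n=4000):
    # Midpoint rule; adequate for these smooth, bounded integrands.
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return f(x).sum() * (b - a) / n

def dos(eps, mu):
    """Density of states of Eq. (17) for eps > mu_C, units hbar*omega_bar = 1."""
    r = eps / mu - 1.0
    # Phonon-like (Bogoliubov) contribution from inside the condensate
    i1 = r * _midpoint(
        lambda x: np.sqrt(1.0 - x) * np.sqrt(np.sqrt(r**2 + x**2) - x)
        / np.sqrt(r**2 + x**2),
        0.0, 1.0)
    # Single-particle contribution from outside the condensate
    i2 = _midpoint(lambda x: np.sqrt(x) * np.sqrt(eps / mu - x), 1.0, eps / mu)
    return (4.0 / np.pi) * mu**2 * (i1 + i2)
```

For $`\epsilon \gg \mu _\mathrm{C}`$ this recovers the ideal-gas result $`\epsilon ^2/2(\hbar \overline{\omega })^3`$, and it vanishes as $`\epsilon \rightarrow \mu _\mathrm{C}`$, since both terms of Eq. (17) go to zero in that limit.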
## IV Numerical methods
### A Representation
The bins we shall choose for the representation of the distribution in terms of the quantities $`f_n`$ as in (5) are divided into two distinct regions, as shown diagrammatically in Fig. 3. The lowest energy region corresponds essentially to the condensate band $`R_C`$ of . This is the region in which $`f_n`$ is rapidly varying in the regime of quantum degeneracy, and is described by a series of fine-grained energy bins up to an energy $`E_R\approx 3\mu _\mathrm{C}(n_{0,\mathrm{max}})`$. The condensate is a *single* quantum state represented by the lowest energy bin.
As the number of particles in the condensate changes, the energy of the condensate level changes according to the Thomas-Fermi approximation of Eq. 16. Thus the total energy width of $`R_C`$ decreases as the condensate grows.
We represent $`R_C`$ by a fixed number of energy bins of equal width $`\delta \epsilon _n`$ with a midpoint of $`\epsilon _n`$.
As the condensate energy increases, we adjust $`\epsilon _n`$ and $`\delta \epsilon _n`$ between integration timesteps, such that the widths of the bins remain equal. This is done by redistributing the numbers of particles into new bins after each timestep, and thus does not contradict the requirement that $`g_n`$ is fixed during the timestep. We find that this is the simplest procedure for the calculation of rates in and out of these levels. We choose the number of bins to be sufficient that the width is not more than about $`\delta \epsilon _n\approx 5\hbar \overline{\omega }`$.
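The redistribution into new equal-width bins can be sketched as an overlap-weighted rebinning. The helper below is illustrative; it assumes (as an approximation) that particles are spread uniformly within each old bin, and it conserves the total number exactly whenever the new bins cover the old range.

```python
import numpy as np

def rebin(old_edges, old_counts, new_edges):
    """Redistribute particle numbers g_n*f_n from one bin layout to another.

    Counts are taken as uniformly spread within each old bin -- a
    modelling assumption, reasonable for narrow bins.
    """
    new_counts = np.zeros(len(new_edges) - 1)
    for lo, hi, c in zip(old_edges[:-1], old_edges[1:], old_counts):
        for j in range(len(new_counts)):
            # Fraction of the old bin [lo, hi] falling in new bin j
            overlap = min(hi, new_edges[j + 1]) - max(lo, new_edges[j])
            if overlap > 0:
                new_counts[j] += c * overlap / (hi - lo)
    return new_counts
```
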
The high energy region corresponds to the thermal bath of our previous papers. This is the region in which $`f_n`$ is slowly varying, and therefore the energy bins are considerably broader (up to $`64\hbar \overline{\omega }`$ in the results presented in this paper). The evaporative cooling is carried out by the sudden removal of population of the bins in this region with $`\epsilon _n>\epsilon _{\mathrm{cut}}`$.
### B Solution
There are four different types of collision that can occur given our numerical description of the system. These are depicted in Fig. 4.
* Growth: This involves two particles in $`R_{NC}`$ colliding, resulting in the transfer of one of the particles to the condensate band (along with the reverse process).
* Scattering: A particle in $`R_{NC}`$ collides with a particle in the condensate band, with one particle remaining in $`R_C`$.
* Internal: Two particles within the condensate band collide with at least one of these particles remaining in $`R_{NC}`$ after the collision.
* Thermal: This involves all particles involved in the collision coming from the non-condensate band and remaining there.
Our first description of condensate growth considered only process (a). The next calculation involved both processes (a) and (b). The calculations presented below include all four processes, allowing us to determine whether the earlier approximations were justified.
The computation of the rates of processes (a) and (b) is made difficult because of the different energy scales of the two regions of the distribution function. Our solution is to *interpolate* the distribution function $`f_n`$ in $`R_{NC}`$ (non-condensate band) such that the bin sizes are reduced to be the same as for $`R_C`$ (the condensate band). The rates are then calculated using this interpolated distribution function, now consisting of more than one thousand bins, and the rates for the large bins of the non-condensate band are found by summing the rates of the appropriate interpolated bins.
We have found that these rates are extremely sensitive to the accuracy of the numerical interpolation — small errors lead to inconsistencies in the solutions of the MQBE. This procedure is more efficient than simply using the same bin size for the whole distribution, as there are only a small number of bins for the condensate band.
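A minimal sketch of this interpolation step, assuming linear interpolation of $`\mathrm{log}f`$ on the bin midpoints; the function name and arguments are our own, and the scheme actually used in the simulations may differ, since, as noted above, the rates are very sensitive to the interpolation accuracy.

```python
import numpy as np

def interpolate_fine(eps_coarse, f_coarse, delta_fine):
    """Interpolate the coarse non-condensate distribution f_n onto fine
    bins of width delta_fine, matching the condensate-band bin size.

    Interpolation is done in log(f) since f varies over many orders of
    magnitude across the non-condensate band (an assumption of this
    sketch, not necessarily the production scheme).
    """
    eps_fine = np.arange(eps_coarse[0], eps_coarse[-1], delta_fine)
    logf = np.interp(eps_fine, eps_coarse, np.log(f_coarse))
    return eps_fine, np.exp(logf)
```
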
### C Algorithm
The algorithm we use to solve the MQBE is summarised as follows
* Calculate the collision summation for all types of collisions, keeping the density of states, and the energies of the levels in the condensate band $`R_C`$ fixed. The distribution function $`f_n(t+\delta t)`$ is calculated using an embedded 4th order Runge-Kutta method, using Cash-Karp parameters .
* The quantity $`M(p,q,r,n)`$ defined by (10) expresses all the overlap integrals, and is quite difficult to compute exactly. In our computations we have simply set this to correspond to the value found in , i.e., we set
$`M(p,q,r,n)`$ $`=`$ $`g_{\mathrm{min}(p,q,r,n)},`$ (22)
and express energy conservation in a simplified form, using the fact that the energy bins will be chosen equally spaced, by choosing a Kronecker delta form
$`\mathrm{\Delta }(p,q,r,n)`$ $``$ $`\delta (p+q,r+n).`$ (23)
The difference between these two forms clearly goes to zero as the bins become very narrow.
It has been explicitly checked that in practice energy is conserved to a very high degree of accuracy throughout the calculation.
* As a result of the time step, the condensate population will have changed. This causes the density of states to alter slightly, along with the positions and widths of the energy bins in the condensate band, as all these quantities are determined by the condensate number $`n_0`$. The derivation in Sect. II A shows that the populations $`g_nf_n`$ belong to bins that move with the changing energy levels and density of states as the condensate grows, in such a way that the number of levels $`g_n`$ in each bin remains constant. Therefore, after the Runge-Kutta timestep, the numbers $`g_nf_n`$ represent the numbers of particles in the bins determined by the appropriate energy levels after that step.
* As a result of the preceding step, the bins will no longer be of equal width, so we rebin the numbers of atoms into a new set of equally spaced bins, as explained in Sect. IV A.
To ensure conservation of the total number of particles, we keep the *number* of particles in each bin, $`g_nf_n`$, constant when we adjust the energies and widths of the bins. As the change in the density of states and the width of each bin is determined by the condensate number, the occupation per energy level of the $`n`$th bin, $`f_n`$, must be altered slightly to ensure number conservation.
* We now continue with step (1).
The change in $`\mu _\mathrm{C}(n_0)`$ with each time step, and hence the shifts in the energies of the bins in $`R_C`$, are very small. Hence, the adjustment of the distribution function due to step (3) is tiny, much smaller than the change due to step (1).
The method has been tested by altering the position of $`E_R`$ and width of the energy bins of $`R_{NC}`$ and $`R_C`$. We have found that the solution is independent of the value of $`E_R`$ over a large range of values of these parameters.
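With the simplifications (22) and (23), the collision sum for a single bin reduces to a loop over index pairs satisfying $`p+q=r+n`$, which is valid because the bins are equally spaced in energy. The sketch below is schematic only: kinetic prefactors are lumped into a single constant `W`, and the separation into the four collision types of Sect. IV B is omitted.

```python
import numpy as np

def collision_term(f, g, n, W=1.0):
    """Schematic Bose-enhanced collision rate into bin n, using the
    Kronecker-delta energy conservation of Eq. (23) and the overlap
    M(p,q,r,n) = g_min of Eq. (22)."""
    N = len(f)
    rate = 0.0
    for p in range(N):
        for q in range(N):
            r = p + q - n              # energy conservation: p + q = r + n
            if 0 <= r < N:
                m = min(g[p], g[q], g[r], g[n])
                rate += W * m * (f[p] * f[q] * (1 + f[r]) * (1 + f[n])
                                 - f[r] * f[n] * (1 + f[p]) * (1 + f[q]))
    return rate
```

A Bose-Einstein distribution on equally spaced levels satisfies detailed balance term by term, so the rate vanishes identically at equilibrium; this is a useful sanity check on any implementation.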
## V Results
In this paper we present the results of simulations modelling the experiments described in . In these experiments a cloud of sodium atoms confined in a ‘cigar’ shaped magnetic trap was evaporatively cooled to just above the Bose-Einstein transition temperature. Then, in a period of 10ms the high energy tail of the distribution was removed with a very rapid and rather severe RF cut. The condensate was then manifested by the formation of a sharp peak in the density distribution.
We have carried out a full investigation of the effect that varying the initial cloud parameters has on the growth of the condensate for the trap configuration described in . In this paper we concentrate on a comparison of these results with our earlier theoretical model. To model these experiments, we begin our simulations with an equilibrium Bose-Einstein distribution, with temperature $`T_i`$ and chemical potential $`\mu _{\mathrm{init}}`$ and truncate it at an energy $`\epsilon _{\mathrm{cut}}=\eta kT_i`$, which represents the system at the end of the RF sweep. This is then allowed to evolve in time, until the gas once again approaches equilibrium, namely the appropriate Bose-Einstein distribution in the presence of a condensate. This is pictured schematically in Fig. 5.
Because of the ergodic assumption, the MQBE that we simulate depends only on the geometric average of the trapping frequencies $`\overline{\omega }=(\omega _x\omega _y\omega _z)^{1/3}`$. There is likely to be some type of experimental dependence on the actual trap geometry which is not included in our simulation; however in the regime $`kT\gg \hbar \overline{\omega }`$ this should be small. The trap parameters of were $`(\omega _x,\omega _y,\omega _z)=2\pi \times (82.3,82.3,18)`$Hz, giving $`\overline{\omega }=2\pi \times 49.6`$Hz.
### A Matching the experimental data
The main source of quantitative experimental data on condensate growth generally available is Fig. 5 of . This gives growth rates as a function of final condensate number and temperature rather than the initial conditions. Whereas the growth curves calculated in required these parameters as inputs, the calculations presented here require three different input parameters: the initial number of atoms in the system $`N_i`$ (and hence the initial chemical potential $`\mu _{\mathrm{init}}`$), the initial temperature $`T_i`$, and the position of the cut energy $`\eta kT_i`$.
Given the final parameters supplied in , it is possible to calculate a set of initial conditions that we require. As we know the final condensate number, we can calculate the value of the chemical potential of the gas using the Thomas-Fermi approximation for the condensate eigenvalue, Eq. 16. This gives a density of states according to Eq. 19, and along with the measured final temperature $`T_f`$, we can calculate the total energy $`E_{\mathrm{tot}}`$ and number of atoms $`N_{\mathrm{tot}}`$ in the system at the end of the experiment, completely characterizing the final state of the gas.
$`N_{\mathrm{tot}}`$ $`=`$ $`n_0+{\displaystyle \underset{\epsilon _n>\mu _\mathrm{C}(n_0)}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{g_n}{\mathrm{exp}[(\epsilon _n-\mu _\mathrm{C}(n_0))/kT_f]-1}},`$ (24)

$`E_{\mathrm{tot}}`$ $`=`$ $`E_0(n_0)+{\displaystyle \underset{\epsilon _n>\mu _\mathrm{C}(n_0)}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{\epsilon _ng_n}{\mathrm{exp}[(\epsilon _n-\mu _\mathrm{C}(n_0))/kT_f]-1}}.`$ (25)
We now want to find an initial distribution that would have the same total energy and number of atoms if truncated at $`\epsilon _{\mathrm{cut}}=\eta kT_i`$. If we specify an initial chemical potential for the distribution $`\mu _{\mathrm{init}}`$, we can self-consistently solve for the parameters $`T_i`$ and $`\eta `$ from the following non-linear set of equations
$`N_{\mathrm{tot}}`$ $`=`$ $`{\displaystyle \underset{\epsilon _n=3\hbar \overline{\omega }/2}{\overset{\eta kT_i}{\sum }}}{\displaystyle \frac{g_n}{\mathrm{exp}[(\epsilon _n-\mu _{\mathrm{init}})/kT_i]-1}},`$ (27)

$`E_{\mathrm{tot}}`$ $`=`$ $`{\displaystyle \underset{\epsilon _n=3\hbar \overline{\omega }/2}{\overset{\eta kT_i}{\sum }}}{\displaystyle \frac{\epsilon _ng_n}{\mathrm{exp}[(\epsilon _n-\mu _{\mathrm{init}})/kT_i]-1}}.`$ (28)
This gives the input parameters for our simulation, and we can now calculate growth curves starting with initially different clouds, but resulting in the same final condensate number and temperature.
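A sketch of this self-consistent solution, using a continuum ideal-gas stand-in $`g(\epsilon )=\epsilon ^2/2`$ for the sums (27)-(28) and working in units $`\hbar \overline{\omega }=k=1`$; the solver, initial guess, and tolerances here are our own choices, not those of the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def truncated_moments(T, eta, mu, npts=20000):
    """Number and energy of a Bose-Einstein distribution with density of
    states g(eps) = eps**2 / 2, truncated at eps_cut = eta*T.
    Units hbar*omega_bar = k = 1, so the spectrum starts at eps = 3/2."""
    eps = np.linspace(1.5, eta * T, npts)
    occ = 1.0 / (np.exp((eps - mu) / T) - 1.0)
    g = 0.5 * eps**2
    d = eps[1] - eps[0]
    return (g * occ).sum() * d, (eps * g * occ).sum() * d

def initial_conditions(N_tot, E_tot, mu_init, guess=(35.0, 2.8)):
    """Solve for (T_i, eta) reproducing the final totals, cf. Sect. V A."""
    def residual(v):
        T, eta = v
        N, E = truncated_moments(T, eta, mu_init)
        return [N - N_tot, E - E_tot]
    return fsolve(residual, guess)
```

As a round-trip check, the moments of a truncated distribution with known $`(T_i,\eta )`$ should be inverted back to the same parameters.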
### B Typical results
A sample set of growth curves is presented in Fig. 6(a), for a condensate with $`7.5\times 10^6`$ atoms at a final temperature of $`830`$nK and a condensate fraction of 10.4%. The initial parameters for the curves are given in Table I.
As can be seen the curves are very similar, and arguably would be difficult to distinguish in experiment. The main difference is the further the system starts from the transition point (i.e. the more negative the initial chemical potential), the longer the initiation time but the steeper the growth curve. This trend continues as $`\mu _{\mathrm{init}}`$ becomes more negative.
#### 1 Effective chemical potential
To facilitate understanding of these results, we introduce the concept of an effective chemical potential $`\mu _{\mathrm{eff}}`$ for the non-condensate band. We do this by fitting a Bose-Einstein distribution to the lowest energy bins of $`R_{NC}`$ as a function of time. Obviously, the chemical potential is undefined when the system is not in equilibrium, but as has been noted for the classical Boltzmann equation , the distribution function tends to resemble an equilibrium distribution as evaporative cooling proceeds. The effective chemical potential is not unique—it is dependent on the particular choice of the energy cutoff $`E_R`$. It gives a good indication of the “state” of the non-condensate, however, since the majority of the particles entering the condensate after a collision come from these bins. In this paper $`\mu _{\mathrm{eff}}`$ was computed by a linear fit to $`\mathrm{log}[1+1/f_n]`$ of the first ten bins of the noncondensate band, with the intercept giving $`\mu _{\mathrm{eff}}`$ and the gradient the temperature.
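The fitting procedure can be sketched directly: for a Bose-Einstein distribution, $`\mathrm{log}[1+1/f_n]=(\epsilon _n-\mu )/kT`$ is exactly linear in $`\epsilon _n`$, so the gradient gives $`1/kT`$ and $`\mu _{\mathrm{eff}}`$ follows from the intercept. The helper below is our own illustration of this fit.

```python
import numpy as np

def effective_mu(eps, f, nbins=10):
    """Effective chemical potential and temperature from a linear fit to
    log(1 + 1/f_n) over the first nbins bins of the non-condensate band.

    For f_n = 1/(exp((eps_n - mu)/kT) - 1) the fitted line has
    slope = 1/kT and intercept = -mu/kT.
    """
    y = np.log(1.0 + 1.0 / f[:nbins])
    slope, intercept = np.polyfit(eps[:nbins], y, 1)
    kT = 1.0 / slope
    return -intercept * kT, kT     # (mu_eff, kT)
```
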
#### 2 Interpretation
We find that all the results presented in this paper can be qualitatively understood in terms of the simple growth equation Eq. (1), with the vapour chemical potential $`\mu `$ replaced by the effective chemical potential $`\mu _{\mathrm{eff}}`$ of the thermal cloud.
The simple growth equation requires $`\mu _{\mathrm{eff}}>\mu _\mathrm{C}(n_0)`$ for condensate growth to occur. In Fig. 6(b) we plot the effective chemical potential $`\mu _{\mathrm{eff}}`$ of the thermal cloud and the chemical potential of the condensate $`\mu _\mathrm{C}(n_0)`$. This graph helps explain the two effects noted above—longer initiation time and a steeper growth curve for the $`\mu _{\mathrm{init}}=-100\hbar \overline{\omega }`$ case. Firstly, the inversion of the chemical potentials for this simulation occurs at a later time than for $`\mu _{\mathrm{init}}=0`$, causing the stimulated growth to begin later. This is because the initial cloud for the $`\mu _{\mathrm{init}}=-100\hbar \overline{\omega }`$ simulation is further from the transition point at $`t=0`$. Secondly, the effective chemical potential of the thermal cloud rises more steeply, meaning that $`\mu _{\mathrm{eff}}-\mu _\mathrm{C}(n_0)`$ is larger, and therefore the rate of condensate growth is increased.
### C Comparison with earlier model
In Fig. 6(a) we have also plotted the growth curve that is calculated for these final condensate parameters by the model of , which we refer to as the simple model (not to be confused with the solution of the simple growth equation 1). For this earlier model the initial condensate number is indeterminate, whereas for the detailed calculation presented here the initial distribution is Bose-Einstein, with the zero of the time axis being the removal of the high energy tail.
For these particular parameters, it turns out that the results of the full calculation of the growth curve give very similar results to the previous model, with the initial condensate number adjusted appropriately. This is not surprising; indeed, from Fig. 6(b) we can see that the approximation of the thermal cloud by a constant chemical potential (i.e. the cloud is not depleted) is good for the region where the condensate becomes macroscopic.
For larger condensate fractions, however, the principal condition assumed in the model of , that the chemical potential of the vapor can be treated as approximately constant, is no longer satisfied. In Fig. 7(a) we plot the growth of the same size condensate as in Fig. 6, (that is, $`7.5\times 10^6`$ atoms) but at a lower final temperature of 590nK. In this situation the condensate fraction increases to 24.1%, and so there is considerable depletion of the thermal cloud. The effect of this can be seen in Fig. 7(b). The difference between the vapor and condensate chemical potentials $`\mu _{\mathrm{eff}}-\mu _\mathrm{C}(n_0)`$ initially increases to much larger values than for the simple model, where $`\mu _{\mathrm{eff}}`$ is held constant at its final equilibrium value. It is this fact that causes more rapid growth.
As the condensate continues to grow, it begins to significantly deplete the thermal cloud, causing $`\mu _{\mathrm{eff}}`$ to decrease from its maximum. It is the “overshoot” of $`\mu _{\mathrm{eff}}`$ from the final equilibrium value that the model of and QKVI cannot take account of. This overshoot only occurs for final condensate fractions of more than about ten percent; hence up to this value the simple model should be sufficient.
### D Effect of the final temperature on condensate growth
We have investigated the effect that final temperature has on the growth of a condensate of a fixed number. All simulations were begun with $`\mu _{\mathrm{init}}=0`$, since the initial chemical potential has little effect on the overall shape of the growth curves. This determines the other parameters $`T_i`$ and $`\eta `$; the initial conditions are shown in Table II. The results of these simulations are presented in Fig. 8.
We find the somewhat surprising result that the growth curves do not change significantly over a very large temperature range for the same size condensate. In fact, a condensate formed at 600nK grows more slowly than at 400nK for these parameters. As the temperature is increased further, however, the growth rate increases again, and at a final temperature of 1$`\mu `$K the growth rate is faster than at 400nK. This effect has also been observed for both larger ($`7.5\times 10^6`$) and smaller ($`1\times 10^6`$) condensates.
This observation can once again be interpreted using the simple growth equation (1). Although $`W^+(n_0)`$ increases with temperature (approximately as $`T^2`$ as shown in ), the maximum value of $`\mu _{\mathrm{eff}}-\mu _\mathrm{C}(n_0)`$ achieved via evaporative cooling decreases with temperature for a fixed condensate number, as the cut required is less severe and the final condensate fraction is smaller. Also, the term in the curly brackets of Eq. 1 is approximately proportional to $`T^{-1}`$ for most regimes. The end result is that the decrease in this term compensates for the increase in $`W^+(n_0)`$, giving growth curves that are very similar for the different simulations. Once the “overshoot” of the thermal cloud chemical potential ceases to occur (when the evaporative cooling cut is not as severe), the growth rate begins to increase with temperature as predicted by the model of and QKVI.
### E Effect of size on condensate growth
Finally we have performed some simulations of the formation of a condensate at a fixed final temperature, but of a varying size. The parameters for these simulations are given in Table III, and the growth curves are plotted in Fig. 9(a). We find that the larger the condensate, the more rapidly it grows. The initial clouds required to form the larger condensates not only start at a higher temperature (and thus have a higher collision rate to begin with), but also they need to be truncated more severely, causing a larger difference in the chemical potentials, as seen in Fig. 9(b). Thus instead of these effects negating each other as in the previous section, here they tend to reinforce one another. This causes the growth rate to be highly sensitive to the final number of atoms in the condensate for a fixed final temperature.
For further comparison with the previous model, in Fig. 9(a) the dashed curve is for the same parameters as for the lower temperature results of , whose prediction is plotted in grey—as can be seen, the two methods are in very good agreement with each other for this choice of parameters. It is this particular set of final parameters for which the discrepancy between theory and experiment remains.
### F The appropriate choice of parameters
In our computations we have taken some care to make sure that we can give our results as a function of the experimentally measured final temperature $`T_f`$ and condensate number $`n_0`$. Nevertheless, it can be seen from our results that this can give rise to counterintuitive behavior, such as the fact that under the condition of a given final condensate number, the growth rate seems to be largely independent of temperature, because of the cancellation noted in Sect. V D. This effect has its origin in the quite simple fact that with a sufficiently severe cut it is impossible to separate the process of equilibration of the vapor distribution to a quasiequilibrium from the actual process of growth of the condensate. In other words, the attempt to implement the “ideal” experiment in which a condensate grows from a vapor with a constant chemical potential and temperature cannot succeed with a sufficiently large cut. Under these conditions, the initial temperature differs quite strongly from the final temperature, and as well, the number of atoms required to produce the condensate is so large that the vapor cannot be characterized by a slowly varying chemical potential during most of the growth process.
### G Comparison with experiment
#### 1 Comparison with MIT fits
The most quantitative data available from is in their Fig. 5, in which results are presented as parameters extracted from fits to the simple growth equation 1. In we took two clusters of data from this figure, at the extremes of the temperature range for which measurements were made, and compared the theoretical results with the fitted experimental curves. At the higher temperature of 830nK the results were in good agreement with experiment, but at 590nK they differed significantly, the experimental growth rate being about three times faster than the theoretical result.
We have performed the same calculations using the detailed model. The results for 830nK are presented in Fig. 6 and those for 590nK are presented in Fig. 9. There is a good match between the two theoretical models at *both* temperatures.
#### 2 Comparison with sample growth curves
In some specific growth curves are also presented, and we shall compare these with our computations, and those of Bijlsma et al. , which appeared in preprint form after the initial submission of this paper.
In Fig. 10 we show the data from Fig. 3 of , the computation of Fig. 11 of , and our own computations. This is for the MIT sodium trap, with the simulation parameters taken from Ref. : $`N_i=60\times 10^6`$, $`T_i=876`$nK and $`\eta =2.5`$. We find this results in a condensate of $`6.97\times 10^6`$ atoms at a temperature of 604nK, and a final condensate fraction of 21.8% after half a second, which agrees with our predictions from the solution of the equations in Sect. V A at $`t=\mathrm{\infty }`$ to within 0.2%.
We can see that there is little difference in the results of the two computations for this case, the main discrepancy being that the initiation time for our simulation is a little longer than that of Bijlsma et al. This is likely to be due to the fact that their calculation starts with the condensate already occupied with $`n_0=5\times 10^4`$ atoms, whereas we begin with the equilibrium number at this temperature given by the Bose distribution of $`n_0=208`$ atoms. This difference could be brought about by the use of a slightly different density of states, which is also the likely cause of the difference in the final condensate number, of approximately $`3\times 10^5`$ atoms.
The agreement with the experimental growth curve data is very good for both computations. The simpler model of and QKVI cannot reproduce the results at this temperature, as is shown by the lower grey curve in Fig. 10. This is as we expect—the final condensate fraction is far greater than 10% and in this case the “overshoot” of $`\mu _{\mathrm{eff}}`$ is significant.
Given these initial conditions, this is the only case in which we have found that the “speedup” given by the full quantum Boltzmann theory may yield a significant improvement of the fit to the experimental data.
We would like to emphasise, however, that the parameters used for this simulation do not come from Ref. . The MIT paper does not provide any details of the size of the thermal cloud, or the temperature at which this curve was measured, and as such, a set of unique initial and final parameters of the experiment cannot be determined. We have simply taken these parameters from the calculation of Ref. .
In fact, it seems likely to us that the final temperature for the experimental curve shown in Fig. 10 should be higher. Studying Fig. 5 of Ref. shows that most condensates of $`7\times 10^6`$ atoms or more were formed at temperatures above 800nK. We have therefore performed a second calculation using the simple model with a final temperature of $`830`$nK, and this result is shown as the upper grey curve in Fig. 10. As can be seen, this also fits the experimental data extremely well. The condensate fraction at this higher temperature is 10.2%, meaning that these parameters are very similar to the situation considered in Fig. 6, which was originally found to be a good match to experimental data in Ref. . We note that the solution to the simple model at this higher temperature is also in good agreement with our more detailed calculation for these parameters.
The situation is very different, however, if we compare with the result of Fig. 4 of , in which the final condensate number was $`1.2\times 10^6`$ atoms. In this case, the data of Fig. 4(b) of Ref. can be used to extract all the relevant experimental parameters. This graph shows an experimentally measured reduction in the thermal cloud number from about $`40\times 10^6`$ atoms to about $`15\times 10^6`$ over the duration of the experiment. Including the final condensate population gives a total number of atoms in the system of approximately $`16.2\times 10^6`$, or a loss of about 60% of the atoms. With the three pieces of data taken from the MIT graphs (initial thermal cloud number, final thermal cloud number and final condensate number), we can estimate all the relevant parameters using the equations of Sect. V A, and these are shown in the second column of Table IV.
While the parameters we present here are consistent with the static experimental data, the growth curve corresponding to these parameters (shown in Fig. 11) certainly does not fit the dynamical data. We find that to remove such a large proportion of atoms, yet still obtain a relatively small condensate, the initial system must be a long way from the transition temperature, with $`\mu _{\mathrm{init}}=-212\hbar \overline{\omega }`$.
This means that condensate growth does not occur until the relaxation of the thermal cloud is almost complete, resulting in a very long initiation time. Also, when the growth does begin, the rate is significantly slower than was experimentally observed. This is the region in which the experimental and theoretical discrepancies lie.
The comparison of results is presented in Fig. 11. As well as the computation based on the extracted parameters, we also present two “apparent fits”, one based on our calculations and another based on a calculation of , and here we find the results of the two different formulations are almost identical. The difference appears to be due to the initial condensate number—our calculations begin with $`295`$ atoms, whereas Bijlsma et al. begin with $`10^4`$ atoms. The initial parameters chosen in for this simulation are a system of $`N_i=40\times 10^6`$ atoms at a temperature of $`T_i=765`$nK, and the energy distribution is truncated at $`\eta =0.6`$—an extremely severe cut.
However, while the fit to the experimental data looks very good, the initial parameters for these calculations are not consistent with the experiment. An inspection of the final state of the gas explains the situation. The final temperature according to these computations is $`T_f=211`$nK, and the condensate fraction is 51%. Looking at the data of , we find no reported temperatures lower than 500nK, and the largest condensate fraction reported was 30% (although our analysis of their data from Fig. 5 gave a maximum of 17%). The evaporative cooling of these particular simulations would have to remove 94% of the atoms in the trap, and we believe it is very unlikely that this matches any of the experimental situations.
#### 3 Speedup of condensate growth compared to simplified theory
We have shown that a significant speedup of the condensate growth can occur at higher condensate fractions, but this cannot explain the discrepancy with the experimental results for any of the measured values of temperature and condensate fraction for which growth rates are presented in Fig. 5 of .
The only situation in which this speedup might possibly be relevant to experiment is the single growth curve corresponding to Fig. 10. However, as we have noted, the initial conditions for this figure are quite speculative, and in fact also appear to be unrealistic.
The actual speedup observed in our computations is of the order of magnitude of that achievable with a different condensate fraction, and it is conceivable that the problem could be experimental rather than theoretical—a systematic error in the methodology of extracting the condensate number from the observed data could possibly cause the effect. For a realistic comparison to be made between experiment and theory, sufficient data should be taken to verify positively all the relevant parameters which have an influence on the results. Thus, one should measure initial and final temperatures, the final condensate number, the number of atoms in the vapor initially and finally, and the size of the “cut”. It should be noted in particular, in the one case where all of this data is available—that presented in Fig. 11—good agreement is not found.
The previous paper in this series, QKVI, considered in detail a semi-classical method of fitting theoretical spatial distributions to the two dimensional data extracted by phase-contrast imaging of the system during condensate growth. This method shows that significantly different condensate numbers and temperatures are consistent with the MIT data and methodology . This seems to us to be a more likely origin of the discrepancy between theory and experiment at low temperatures with a small condensate number.
### H Outlook
It does remain conceivable, however, that approximations made in this formulation of Quantum Kinetic theory are not appropriate to the experimental regime where the discrepancy remains. In this section we summarise the possible further extensions.
The first is the ergodic approximation, in which all levels of similar energy are assumed to be equally occupied. From the results of QKII it would seem that any non-ergodicity in the initial distribution would be damped on the time-scale of the growth; therefore the effect could be significant only if the initial distribution is far from ergodic. It is difficult to know what the exact initial distribution of the system is without performing a detailed three-dimensional calculation of the evaporative cooling, which would require massive computational resources. There is also the fact that we have used the simplified form (22), derived in analogy with the work of Holland et al. on the ergodic approximation.
The second important approximation is that the lower-lying states of the gas are reasonably well described by the single particle excitation spectrum, and thus using a density of states description in calculating the collision rates of these levels. The justification of this is that these states are not expected to be important in determining the growth of the condensate, and in QKVI it was shown that varying these rates by orders of magnitude had little effect on the growth curves.
A third approximation made is that the growth of the condensate level is adiabatic, with its shape remaining well described by the Thomas-Fermi wave function. This may not be the case, and indeed some collective motion during growth was observed in . We feel this may become important for sufficiently large truncations of the thermal cloud, in experiments that could be considered a temperature “quench”. Removing this assumption would require introducing a full description of the lower-lying quasiparticle levels, and a time-dependent Gross-Pitaevskii equation for the shape of the condensate.
The final approximation is that fluctuations of the occupation of the quantum levels are ignored.
The agreement between the theory and the single experiment performed so far is generally good, and there is only one regime in which there is significant discrepancy. The removal of these approximations requires a large amount of work, and we feel this is not justified until new experimental data on condensate growth becomes available.
## VI Conclusion
We have extended the earlier models of condensate growth and QKVI to include the full time-dependence of both the condensate and the thermal cloud. We have compared the results of calculations using the full model with the simple model, and determined that for bosonic stimulation type experiments resulting in a condensate fraction of the order of 10%, the model of and QKVI is quite sufficient.
However, for larger condensate fractions the depletion of the thermal cloud becomes important. We have introduced the concept of the effective chemical potential $`\mu _{\mathrm{eff}}`$ for the thermal cloud as it relaxes, and observed it to overshoot its final equilibrium value in these situations, resulting in a much higher growth rate than the simple model would predict. Thus we have identified a mechanism for a possible speedup that may contribute to eliminating the discrepancy with experiment.
We have also found that the results of these calculations can be qualitatively explained using the effective chemical potential of the thermal cloud, $`\mu _{\mathrm{eff}}`$, and the simple growth equation (1). In particular, the rate of condensate growth for the same size condensate can be remarkably similar over a wide range of temperatures; in contrast, the rate of growth is highly sensitive to the final condensate number at a fixed temperature.
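Equation (1) itself lies earlier in the paper; a commonly used form of such a simple growth equation is $`dN_0/dt=2W^+[1e^{(\mu _C(N_0)\mu _{\mathrm{eff}})/k_BT}]N_0`$. The sketch below integrates this form with a Thomas-Fermi scaling $`\mu _CN_0^{2/5}`$; the rate $`W^+`$, the scaling prefactor and all numerical values are purely illustrative assumptions, not the parameters of our simulations.

```python
import math

def mu_c(n0, alpha=1.0e-3):
    # Condensate chemical potential with Thomas-Fermi scaling mu_C ~ N0^(2/5).
    # alpha is an illustrative prefactor, not a fitted value.
    return alpha * n0 ** 0.4

def grow(n0, mu_eff, w_plus=50.0, kbt=10.0, dt=1e-4, steps=20000):
    # Euler integration of dN0/dt = 2 W+ [1 - exp((mu_C(N0) - mu_eff)/kBT)] N0.
    # Growth stalls once mu_C(N0) reaches the thermal cloud's mu_eff.
    for _ in range(steps):
        rate = 2.0 * w_plus * (1.0 - math.exp((mu_c(n0) - mu_eff) / kbt)) * n0
        n0 += rate * dt
    return n0

final = grow(n0=100.0, mu_eff=5.0)
```

In this toy form the qualitative behaviour discussed above is visible directly: a larger transient $`\mu _{\mathrm{eff}}`$ (an overshoot) increases the bracketed stimulated-growth factor and hence the growth rate, while the equilibrium condensate number is fixed by $`\mu _C(N_0)=\mu _{\mathrm{eff}}`$.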
The model we have used in this paper eliminates all the major approximations in the calculation of condensate growth, apart from the ergodic assumption, whose removal would require massive computational resources. In the absence of experimental data sufficiently comprehensive to make possible a full comparison between experiment and theory, this does not at present seem justified.
In Sect. V G 2 we have compared the results of our simulations to those of Bijlsma et al. , and found that our formulations are quantitatively very similar, giving growth curves in very good agreement with each other. The two treatments are based on similar, but not identical methodologies, and have been independently computed. Thus the disagreement with experiment must be taken seriously.
## ACKNOWLEDGMENTS
MJD would like to thank Keith Burnett for his support and guidance, St. John’s College, Oxford for financial support, and Mark Lee for many useful discussions and assistance with solutions of the simple growth model. CWG would also like to acknowledge fruitful discussions with Eugene Zaremba.
This work was supported by the Royal Society of New Zealand under the Marsden Fund Contracts PVT-603 and PVT-902.
# Multiwavelength optical observations of chromospherically active binary system OU Gem
## 1. Introduction
OU Gem (HD 45088) is a bright (V= 6.79, Strassmeier et al. 1990) and nearby (d= 14.7 pc, ESA 1997) BY Dra-type SB2 system (K3V/ K5V) with a 6.99-day orbital period and a noticeable eccentricity (Griffin $`\&`$ Emerson 1975). Both components show Ca ii H $`\&`$ K emission, though the primary emission is slightly stronger than the secondary emission. The H$`\alpha `$ line is in absorption for the primary and filled-in for the secondary (Bopp 1980, Bopp et al. 1981a, b, Strassmeier et al. 1990). Dempsey et al. (1993) observed that the Ca ii IRT lines were filled-in. It is interesting that the orbital and rotational periods differ by 5% due to the appreciable orbital eccentricity (e= 0.15), according to Bopp (1980).
Although BY Dra systems are main-sequence stars, their evolutionary stage is not clear. OU Gem has been listed by Soderblom et al. (1990) and Montes et al. (1999) as a possible member of the UMa moving group (300 Myr), and this suggests it may be a young star.
In this paper we present multiwavelength optical observations of chromospherically active binary system OU Gem.
## 2. Observations
Spectroscopic observations in several optical chromospheric activity indicators of OU Gem and some inactive stars of similar spectral type and luminosity class have been obtained during three observation runs.
1) Two runs were carried out with the 2.56 m Nordic Optical Telescope (NOT) at the Observatorio del Roque de Los Muchachos (La Palma, Spain) in March 1996 and April 1998, using the SOFIN echelle spectrograph covering from 3632 Å to 10800 Å (resolution from $`\mathrm{\Delta }`$$`\lambda `$ 0.15 to 0.60 Å), with the 1152$`\times `$770 pixels EEV P88200 CCD detector.
2) The last run was carried out with the 2.5 m INT at the Observatorio del Roque de Los Muchachos (La Palma, Spain) in January 1999 using Multi-Site Continuous Spectroscopy (MUSICOS), covering from 3950 Å to 9890 Å (resolution from $`\mathrm{\Delta }`$$`\lambda `$ 0.15 to 0.40 Å), with the 2148$`\times `$2148 pixels SITe1 CCD detector.
In the three runs we obtained 4 spectra of OU Gem at different orbital phases. The stellar parameters of OU Gem have been adopted from Strassmeier et al. (1993), except for distance (given by the Hipparcos catalogue (ESA 1997)), the B-V color index and the radius (taken from the spectral type). The spectra have been extracted using the standard reduction procedure of the IRAF package (bias subtraction, flat-field division and optimal extraction of the spectra). The wavelength calibration was obtained by taking spectra of a Th-Ar lamp, for NOT runs, and a Cu-Ar lamp, for INT runs. Finally the spectra have been normalized by a polynomial fit to the observed continuum. The chromospheric contribution in the activity indicators has been determined using the spectral subtraction technique (Montes et al. 1995a, b).
## 3. The H$`\alpha `$ line
We have taken several spectra of OU Gem in the H$`\alpha `$ line region in three different runs and at different orbital phases. The line profiles are displayed in Fig. 1. For each observation, we have plotted the observed spectrum (left panel) and the subtracted spectrum (right panel). In the observed spectra, we can see an absorption line for the primary star and a nearly complete filling-in for the secondary star. After applying the spectral subtraction technique, clear excess H$`\alpha `$ emission is obtained for the two components, being stronger for the hot one. The excess H$`\alpha `$ emission equivalent width (EW) is measured in the subtracted spectrum and corrected for the contribution of the components to the total continuum. We took one spectrum in this region in Dec-92 (Montes et al. 1995b). At the orbital phase of this observation ($`\phi `$= 0.48) we could not separate the emission from both components and we measured the total excess H$`\alpha `$ emission EW. We obtained a similar value to Mar-96, Apr-98 and Jan-99 values obtained adding up the excess emission EW from the two components. In Fig. 2 we have plotted the excess H$`\alpha `$ emission EW versus the orbital phase.
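The continuum correction applied to the measured EWs can be sketched as follows. The convention shown (dividing the EW measured against the combined continuum by the component's fractional continuum contribution) is the standard practice for SB2 systems, and the numbers are illustrative assumptions, not our measured values.

```python
def corrected_ew(ew_measured, continuum_fraction):
    """Scale an EW measured against the combined continuum of both stars
    to the continuum of one component alone (fraction f_i = F_i / F_total)."""
    return ew_measured / continuum_fraction

# e.g. if the hot component contributed 70% of the total continuum,
# a measured excess EW of 0.50 A corresponds to ~0.71 A for that star alone
ew_hot = corrected_ew(0.50, 0.70)
```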
## 4. The H$`\beta `$ line
Four spectra in the H$`\beta `$ region are available, in three different runs and at different orbital phases. Looking at the observed spectra in Fig. 3, we can only see the H$`\beta `$ line of the primary, in absorption. After applying the spectral subtraction technique, small excess H$`\beta `$ emission is obtained for the two components. We have determined the excess H$`\beta `$ emission EW in the subtracted spectra, the ratio of excess H$`\alpha `$ and H$`\beta `$ emission EW, and the $`\frac{E_{H\alpha }}{E_{H\beta }}`$ relation:
$$\frac{E_{H\alpha }}{E_{H\beta }}=\frac{EW_{sub}(H\alpha )}{EW_{sub}(H\beta )}\times 0.2444\times 2.512^{(BR)}$$
(1)
given by Hall & Ramsey (1992) as a diagnostic for discriminating between the presence of plages and prominences on the stellar surface. We have obtained high $`\frac{E_{H\alpha }}{E_{H\beta }}`$ values for both components, so the emission may come from prominences.
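A worked example of relation (1), with illustrative EW values and color index (not our measured ones); ratios of roughly 3 or more are usually taken to indicate extended, prominence-like material rather than plages.

```python
def excess_ratio(ew_ha, ew_hb, b_minus_r):
    # E_Halpha / E_Hbeta = [EW(Ha)/EW(Hb)] * 0.2444 * 2.512**(B-R)
    # (Hall & Ramsey 1992 surface-flux conversion of equivalent-width ratios)
    return (ew_ha / ew_hb) * 0.2444 * 2.512 ** b_minus_r

# illustrative inputs: EW(Ha) = 1.0 A, EW(Hb) = 0.15 A, B-R = 1.4 mag
r = excess_ratio(ew_ha=1.0, ew_hb=0.15, b_minus_r=1.4)
prominence_like = r > 3.0
```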
## 5. The Ca ii IRT lines
The Ca ii infrared triplet (IRT) $`\lambda `$8498, $`\lambda `$8542, and $`\lambda `$8662 lines are other important chromospheric activity indicators. We have taken several spectra of OU Gem in the Ca ii IRT line region in three different runs and at different orbital phases; all three Ca ii IRT lines are covered only in the MUSICOS 99 observation run. In the observed spectra, both components of OU Gem show these lines in emission superimposed on the corresponding absorption (Fig. 4). After applying the spectral subtraction technique, clear excess emission appears for the two components, being stronger for the hot one. In Fig. 2 we have plotted the Ca ii $`\lambda `$8662 EW versus the orbital phase for the hot and cool components.
## 6. The Ca ii H $`\&`$ K and H$`ϵ`$ lines
We have taken three spectra in the Ca ii H & K lines region during the two NOT (96 & 98) observation runs. In Fig. 5 we can observe that both components of this binary have the Ca ii H $`\&`$ K and H$`ϵ`$ lines in emission. We can also see that the hot star’s excess Ca ii H & K emission is bigger than the cool star’s. At 0.19 orbital phase, we only see the H$`ϵ`$ line of the cool component because the H$`ϵ`$ line of the hot star is overlapped with the Ca ii H line of the cool one. At the other orbital phases, we can only see the H$`ϵ`$ line of the hot component because the H$`ϵ`$ line of the cool one is overlapped with the Ca ii H line of the hot star. We took one spectrum in this region in Mar-93 (Montes et al. 1995a, 1996), with the 2.2 m telescope at the German Spanish Astronomical Observatory (CAHA). At the orbital phase of this observation ($`\phi `$= 0.47) we could not separate the emission from both components and we measured the total excess Ca ii K $`\&`$ H emission EW. We obtained similar values to Mar-96 and Apr-98 values obtained adding up the excess emission EW from the two components, though we note that the Apr-98 values are slightly bigger than the others.
### Acknowledgments.
This work was supported by the Universidad Complutense de Madrid and the Spanish Dirección General de Enseñanza Superior e Investigación Científica (DGESIC) under grant PB97-0259.
## References
Bopp B.W., 1980, PASP, 92, 218
Bopp B.W., Hall D.S., Henry G.W., Noah P., Klimke A., 1981a, PASP, 93, 504
Bopp B.W., Noah P., Klimke A., Africano J., 1981b, ApJ, 249, 210
Dempsey R.C., Bopp B.W., Henry G.W., Hall D.S., 1993, ApJS, 86, 293
ESA 1997, ”The Hipparcos and Tycho Catalogues” ESA SP-1200
Griffin R.F., Emerson B., 1975, Observatory, 95, 23
Hall J.C., Ramsey L.W., 1992, AJ, 104, 1942
Montes D., De Castro E., Fernández-Figueroa M.J., Cornide M., 1995a, A$`\&`$AS, 114, 287
Montes D., Fernández-Figueroa M.J., De Castro E., Cornide M., 1995b, A$`\&`$A, 294, 165
Montes D., Fernández-Figueroa M.J., Cornide M., De Castro E., 1996, A$`\&`$A, 312, 221
Montes D., Latorre A., Fernández-Figueroa M.J., 1999, ASP Conf. Ser., Stellar clusters and associations: convection, rotation and dynamos. R. Pallavicini, G. Micela and S. Sciortino eds.
Soderblom D.R., Pilachowski C.A., Fedele S.B., Jones B.F., 1990, ApJS, 72, 191
Strassmeier K.G., Fekel F.C., Bopp B.W., Dempsey R.C., Henry G.W., 1990, ApJS, 72, 191
Strassmeier K.G., Hall D.S., Fekel F.C., Scheck M., 1993, A$`\&`$AS, 100, 173 (CABS)
# Quantum Computer Using Coupled Quantum Dot Molecules
## 1 Introduction
One of the challenges in nanoelectronics is to develop a quantum computer based on solid-state devices. Any computation performed by a quantum computer can be decomposed into a succession of operations of basic quantum gates: the one-qubit gate and the two-qubit controlled NOT (CN) gate. To construct a quantum computer, we must create feasible basic quantum gates. This paper proposes a novel method for the implementation of quantum gates using coupled quantum dot molecules. Several implementations of quantum gates using solid-state structures have been proposed. The main proposed gates are based on the dynamics of electrons in quantum dots<sup>1,2</sup>, donor ions in Si<sup>3</sup> and Josephson junctions<sup>4,5</sup>. Although numerical studies of the quantum dot gates and an experimental demonstration of the Josephson junction qubit have been made, it is not yet clear whether these implementations can be extended to many-bit systems and a large-scale quantum computer.<sup>1,2,4,5</sup> Moreover, the two basic states $`|0`$ and $`|1`$ of the qubit are established by applying a voltage or magnetic field, so that the coupling of the electrons with external degrees of freedom spoils the unitary structure of the quantum evolution (if the external voltage and magnetic field fluctuate). Furthermore, the obvious obstacle to building the Si-based quantum computer<sup>3</sup> is the incorporation of the donor array into the Si substrate: the quantum computer requires that a donor be placed precisely into each qubit, which is extremely difficult with existing technologies such as ion implantation and focused deposition.
In this paper we propose a novel method for the implementation of quantum gates that uses artificial molecules. The artificial molecule consists of two coupled asymmetric quantum dots stacked along the z direction. One-qubit and two-qubit gates are constructed from one molecule and from two coupled molecules, respectively. The method has the following features: 1) the structures of the basic quantum gates are simple and can be fabricated by existing technologies; 2) the implementation can be extended to a large-scale quantum computer; 3) except for the signals that control the quantum computation process, no external electric or magnetic field needs to be applied to the gates, so the coupling of the electrons with external degrees of freedom can be reduced. In the following sections we first present the structures of the quantum gates and describe their operations (§2). We then establish analytic models of the quantum gates and study numerically the operations and decoherence of the quantum computer (§3). We also discuss issues related to the physical implementation (§4). Finally, we summarize the main results (§5).
## 2 Quantum Logic Gates
### 2.1 Qubit
Figure 1 shows the schematic structure of the proposed qubit, an artificial molecule that consists of two stacked asymmetric disk-shaped quantum dots and a single electron. The single electron can tunnel between the two quantum dots. The dimensions of the asymmetric quantum dots are designed so that the ground and first excited states are mainly localized in dot 1 and dot 2, respectively, as shown in Fig.1(c). The two localized states correspond to the bonding and anti-bonding states of the coupled quantum dots, respectively. These energy states of the artificial molecule can be used to represent the $`|0`$ and $`|1`$ states of the qubit.
The two states of the qubit exhibit two bistable polarizations of the electron charge and are manipulated by an electromagnetic (optical) resonance or a voltage pulse<sup>5-7</sup>. Here we use a resonant electromagnetic wave to irradiate and manipulate the qubit through a microstrip line integrated on the substrate. The approach has the following advantages: 1) a qubit can be addressed by a microstrip line connected directly to it; 2) it can be realized by existing nano-fabrication and large-scale integrated circuit (LSI) technologies; and 3) compared with other kinds of qubits, in which the $`|0`$ and $`|1`$ states are prepared by biasing an external electric or magnetic field, the coupling of the electrons in our qubit with external degrees of freedom is reduced. The role of the metal films in the qubit structure will be explained later.
### 2.2 Two-Qubit Controlled NOT(CN) Gate
We can construct a quantum CN gate using two qubits with different resonance frequencies. Figure 2 shows the schematic structure of the CN gate, which consists of a control qubit and a target qubit. The diameters of the quantum dots in the control qubit are designed to differ from those of the corresponding dots in the target qubit. The coupling between the two qubits is controlled by three metal film electrodes between the qubits: an upper electrode, a middle electrode and a lower electrode. The middle electrode is always grounded. We numerically calculated the electrostatic interaction between the electrons in the qubits. If the upper and lower electrodes are floated, the electrons in the two configurations, 1) occupying dot 1 (dot 2) and dot 3 (dot 4) and 2) occupying dot 1 (dot 2) and dot 4 (dot 3), are coupled by Coulomb repulsion interactions U and U<sub>1</sub>, respectively, and Coulomb repulsion interaction U is much larger than U<sub>1</sub>. Therefore the two qubits are coupled by dipole-dipole interactions, and the electronic state of the target qubit depends on the state of the control qubit. On the other hand, if all of the electrodes between the qubits are grounded, we find that the interaction energy between the electrons can be kept smaller than 10<sup>-10</sup> eV. Consequently, the Coulomb interaction between the qubits can be turned off; that is, the energy levels of the two-qubit gate can be controlled by turning on/off the electrostatic interaction between the two electrons, as shown in Fig. 2(c).
The quantum CN gate operates as follows. First we turn on the Coulomb interaction between the two qubits by floating the upper and lower electrodes. Then we irradiate the target qubit with a resonant electromagnetic wave of frequency $`\omega _{T+U}`$ for the $`\pi `$-pulse time. If the control qubit is set to $`|0`$, the target qubit does not change its state; if the control qubit is $`|1`$, the target qubit changes its state from $`|0`$($`|1`$) to $`|1`$($`|0`$). The XOR operation is thus realized. Finally the Coulomb interaction is turned off, and the state of one qubit becomes independent of that of the other qubit. The CN gate has two advantages: 1) the coupling between the qubits is controlled simply by floating or grounding the metal electrodes; and 2) even if the metal electrode between the two qubits is made longer, the Coulomb repulsion interaction U is maintained, so the array of qubits in the quantum computer can be designed easily.
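The selectivity of this conditional operation can be estimated from the standard two-level Rabi formula for the maximum flip probability under a detuned drive, $`P_{max}=\mathrm{\Omega }^2/(\mathrm{\Omega }^2+\mathrm{\Delta }^2)`$. The target transition energy (18 meV) and interaction U (10 meV) are quoted later in the text; the Rabi amplitude is an illustrative assumption ($`\mathrm{}=1`$, energies in meV).

```python
def max_flip_probability(drive_freq, transition_freq, rabi=0.5):
    # Two-level Rabi formula: P_max = Omega^2 / (Omega^2 + Delta^2),
    # where Delta is the detuning of the drive from the transition.
    d = drive_freq - transition_freq
    return rabi ** 2 / (rabi ** 2 + d ** 2)

OMEGA_T, U = 18.0, 10.0          # target transition and Coulomb shift (meV)
drive = OMEGA_T + U              # irradiate at omega_{T+U}
p_control_1 = max_flip_probability(drive, OMEGA_T + U)  # control |1>: on resonance
p_control_0 = max_flip_probability(drive, OMEGA_T)      # control |0>: detuned by U
```

With these numbers the target flips completely when the control is $`|1`$ but with probability below 1% when the control is $`|0`$, which is the conditional behaviour the gate relies on.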
## 3 Operation of Quantum Computer
We will analyze numerically the operation of the quantum logic gates by the method described below and demonstrate that the proposed quantum gates can perform quantum computation.
### 3.1 Model
The qubit consists of two coupled asymmetric quantum dots (1 and 2) stacked along the z direction. The size of dot 1 is set to be larger than that of dot 2. We use a box-shaped potential as an analytic model of the quantum dot. The valence-band and core electrons are neglected in our analysis because these electrons are highly localized and their wave functions do not overlap with each other. The dimensions of the asymmetric quantum dots are designed so that the ground and first excited states of the coupled quantum dots are mainly localized in dot 1 and dot 2, respectively. We use this model to investigate the eigenstates and the quantum dynamics of the quantum logic gates. The quantum logic gates can be described by the following Hamiltonian
$`H`$ $`=`$ $`{\displaystyle \sum _{i=1,2}}H_i+{\displaystyle \sum _{k,j=1,2}}U_{1k,2j}+{\displaystyle \sum _{i,j=1,2}}Qz_{ij}E_{zi}\mathrm{cos}(\omega _it)`$ (1)
$`H_i`$ $`=`$ $`{\displaystyle \sum _j}\left\{{\displaystyle \frac{\hbar ^2}{2m}}\nabla _{ij}^2+V_{ij}(x,y,z)\right\}`$ (2)
$`V_{ij}`$ $`=`$ $`\{\begin{array}{cc}0\hfill & x_{ij0}\frac{w}{2}xx_{ij0}+\frac{w}{2},y_{ij0}\frac{w}{2}yy_{ij0}+\frac{w}{2},z_{ij0}\frac{h}{2}zz_{ij0}+\frac{h}{2}\hfill \\ V_0\hfill & x<x_{ij0}\frac{w}{2},x>x_{ij0}+\frac{w}{2},y<y_{ij0}\frac{w}{2},y>y_{ij0}+\frac{w}{2}\hfill \\ V_1\hfill & z<z_{ij0}\frac{h}{2},z>z_{ij0}+\frac{h}{2}\hfill \end{array}`$ (6)
where $`H_i`$ represents the Hamiltonian of the i-th qubit with electron effective mass m, $`V_{ij}(x,y,z)`$ is the box-shaped three-dimensional confinement potential giving rise to the j-th quantum dot in the i-th qubit, and $`U_{1k,2j}`$ describes the Coulomb interaction between the electrons occupying dots k and j in the two qubits 1 and 2. The interaction between the resonant electromagnetic wave of amplitude $`E_{zi}`$ (along the z direction) and frequency $`\omega _i`$ and the electron in the j-th dot of the i-th qubit is described by $`Qz_{ij}E_{zi}\mathrm{cos}(\omega _it)`$. $`Q`$ is the electron charge, w and h are the width (x, y) and height of the dot, respectively, and $`(x_{ij0},y_{ij0},z_{ij0})`$ are the center coordinates of the j-th dot in the i-th qubit. We first obtain numerically the eigenstates of one qubit. Then we construct the basis functions: the two basis vectors $`|0`$ and $`|1`$ for one qubit, and the four basis vectors $`|00`$, $`|01`$, $`|10`$ and $`|11`$ for the CN gate. After that we solve the time-dependent Schrödinger equation numerically and analyze the operation of the quantum logic gates.
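For a single driven qubit the procedure just described reduces to integrating the time-dependent Schrödinger equation in the two-level basis $`\{|0,|1\}`$. A minimal self-contained sketch of such an integration is given below ($`\mathrm{}=1`$; the level splitting, drive strength and step size are illustrative numbers, not the parameters used in our calculations).

```python
import math

def rabi_populations(t_final, e10=25.0, v=1.0, dt=1e-4):
    """RK4 integration of i dc/dt = H(t) c for one driven qubit (hbar = 1).
    In the basis {|0>, |1>}, H(t) = [[0, v*cos(e10*t)], [v*cos(e10*t), e10]],
    i.e. the drive is exactly resonant with the |0> -> |1> transition."""
    def rhs(c0, c1, t):
        h01 = v * math.cos(e10 * t)
        return (-1j * h01 * c1, -1j * (h01 * c0 + e10 * c1))
    c0, c1 = 1.0 + 0j, 0.0 + 0j
    t = 0.0
    for _ in range(int(round(t_final / dt))):
        a1, b1 = rhs(c0, c1, t)
        a2, b2 = rhs(c0 + 0.5*dt*a1, c1 + 0.5*dt*b1, t + 0.5*dt)
        a3, b3 = rhs(c0 + 0.5*dt*a2, c1 + 0.5*dt*b2, t + 0.5*dt)
        a4, b4 = rhs(c0 + dt*a3, c1 + dt*b3, t + dt)
        c0 += (dt/6.0) * (a1 + 2*a2 + 2*a3 + a4)
        c1 += (dt/6.0) * (b1 + 2*b2 + 2*b3 + b4)
        t += dt
    return abs(c0) ** 2, abs(c1) ** 2

# in the rotating-wave picture a pi-pulse lasts t_pi = pi / v
p0, p1 = rabi_populations(t_final=math.pi)
```

On resonance the population is transferred almost completely after the $`\pi `$-pulse time, which is the NOT-type flip shown in Fig. 3; the small residue comes from the counter-rotating term retained in the full Hamiltonian.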
Finally we analyze the electron-phonon scattering in one qubit and estimate the decoherence time, taking into account the electron-phonon interaction in the coupled quantum dots at low temperature (about 0 K). For simplicity, we assume the energy difference between the $`|0`$ and $`|1`$ states is smaller than the optical-phonon energy. Therefore only longitudinal-acoustic (LA) phonon emission (absorption) via the deformation potential interaction is considered. The deformation potential energy is given by<sup>8</sup>
$$V_{eph}(r,q,t)=i\sqrt{\frac{\hbar q}{2\mathrm{\Omega }\rho U_s}}D[e^{iqr}e^{i\omega _qt}+e^{iqr}e^{i\omega _qt}].$$
(7)
where $`\mathrm{\Omega }`$ is the system volume, D is the deformation potential constant, $`\rho `$ is the mass density, q is the wave vector and $`U_s`$ is the sound velocity. We assume a linear dispersion relation, $`\omega _q=qU_s`$. The confinement effect of the phonon modes is ignored, which is known to be legitimate for LA phonons.<sup>12</sup>
### 3.2 Operation of Quantum Logic Gates
The above model is applied to a concrete example of the qubit, formed by stacked GaAs/AlGaAs quantum dots. Figure 3 shows the calculated results of the operation of the qubit: the dependence of the probabilities $`|\alpha |^2`$ and $`|\beta |^2`$ of the $`|0`$ and $`|1`$ states on the irradiation of the electromagnetic wave. The dielectric constant was taken to be 10, and the effective mass of the electron is 0.067m<sub>0</sub>. The electron confinement potentials $`V_0`$ and $`V_1`$ are 1 eV and 0.24 eV, respectively. The energy difference between the $`|0`$ and $`|1`$ states of the qubit was set to about 25 meV. The frequency and the amplitude (electric field) of the resonant electromagnetic field are 6 THz and 1.5 mV/nm, respectively. As shown in Fig. 3, the qubit changes its state from $`|0`$ ($`|1`$) to $`|1`$ ($`|0`$) under irradiation of the electromagnetic wave for the $`\pi `$-pulse time. The operating speed of the gate is very fast (about one operation per 4 ps). On the other hand, when a $`\pi /2`$-pulse of the electromagnetic wave was irradiated on the qubit, the operation of the Hadamard transformation
$$|0\frac{1}{\sqrt{2}}(|0+|1),|1\frac{1}{\sqrt{2}}(|0|1).$$
(8)
was realized.
Figure 4 shows the calculated results of the operation of the two-qubit CN gate. The resonance energies of the control and target qubits were set to 26 meV and 18 meV, respectively, and the Coulomb repulsion interaction energy U was taken to be 10 meV. As shown in Fig. 4, if the control and target qubits are set to various states, such as $`|0`$, $`|1`$ and $`\alpha |0+\beta |1`$, the XOR operation and entangled states are realized by appropriately applying the resonant electromagnetic wave with frequency $`\omega _{T+U}`$. These results demonstrate that the proposed gates can be used to construct a quantum computer and perform quantum computation.
Taking the electron-phonon interaction in a qubit into account, we calculated the dynamics of the electron in the qubit. Figure 5 gives the time evolution of the state of the qubit with the electron-phonon interaction. The energy difference between the $`|1`$ and $`|0`$ states was set to 26 meV, and the phonon energy was taken to satisfy energy conservation. The deformation potential D is -6.8 eV, the mass density $`\rho `$ is 5.4$`\times 10^3`$ kg/m<sup>3</sup>, and the sound velocity $`U_s`$ is 3.4$`\times `$ 10<sup>3</sup> m/s. The figure shows the evolution of the probability of the $`|0`$ state of the qubit with the electron-phonon interaction under irradiation of the electromagnetic wave. For comparison, the Rabi oscillations of the qubit without the electron-phonon interaction under the same irradiation are also shown. From the results, we find that the electron-phonon interaction produces an obvious phase shift of the Rabi oscillation after 10 ns. The electron-phonon scattering rate was estimated to be about $`10^7`$/s.
## 4 Discussion
We now discuss several subjects related to the fabrication, operation and readout. The rapid progress in nano-technology has made it possible to fabricate fine quantum dot structures with sizes as small as 5–100 nm. Furthermore, several techniques, such as mesa etching of triple-barrier heterostructures<sup>6,7</sup> and self-organized growth<sup>9</sup>, have been proposed to produce two-dimensional arrays of stacked asymmetric quantum dots. The qubit in our quantum computer can be implemented by combining the above techniques with the standard semiconductor processes used in LSI fabrication. Indeed, some kinds of quantum artificial molecules have been fabricated by several groups.<sup>6,7,10</sup> After a two-dimensional array of quantum artificial molecules is formed, the metal film electrodes and the insulator films in our quantum computer can be produced by film deposition, lithography and etching processes.
We address and operate one qubit in the quantum computer by selectively supplying electromagnetic power to a chosen microstrip line. When a qubit is biased by the electromagnetic power, the microstrip lines connected to the other qubits are grounded so that those qubits are screened from the electromagnetic wave. We estimated the microstrip line structure needed for the qubit. Assuming that the resonance energy of the qubit ranges from 10 to 100 meV, the frequency of the resonant electromagnetic field lies between 2.4 and 24 THz. Progress in microstrip line technology is rapid.<sup>11</sup> If both the height and width of the microstrip line are designed to be about 1 $`\mu m`$ and the separation between the line and the substrate electrode is about 1 $`\mu m`$, the microstrip line can be integrated on the semiconductor substrate by an LSI process. Figure 6 gives a schematic layout of the quantum computer.
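The quoted frequency range follows directly from $`f=E/h`$; a quick numerical check, with no assumptions beyond the unit conversion:

```python
H_PLANCK_EV_S = 4.135667e-15   # Planck constant in eV*s

def mev_to_thz(e_mev):
    # Photon frequency f = E / h for a resonance energy E given in meV.
    return (e_mev * 1e-3) / H_PLANCK_EV_S / 1e12

f_lo, f_hi = mev_to_thz(10.0), mev_to_thz(100.0)   # ~2.4 THz and ~24 THz
```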
In our qubit there are many physical mechanisms contributing to quantum decoherence, such as phonon scattering, spontaneous emission, impurity scattering and interface trap scattering. However, spontaneous emission results in a lifetime longer than that caused by the electron-phonon interaction.<sup>12</sup> Although impurity scattering results in departures from the unitary structure of quantum evolution and a lifetime of 10<sup>-9</sup> s,<sup>1</sup> impurities can in principle be reduced by improving crystal growth and nano-fabrication technologies. The coupling of the electrons to the phonons is unavoidable. We estimated that the relaxation time of electron-LA phonon scattering is about 10<sup>-7</sup> s and that the ratio of the relaxation time to the operating time of the gate is about 10<sup>4</sup>–10<sup>5</sup> at T = 0 K. At finite temperature, the electron-LA phonon scattering rate can be obtained by multiplying the zero-temperature result by $`(1+n_q)`$ (due to the stimulated emission), where $`n_q`$ is the LA phonon (Bose-Einstein) distribution function, $`n_q=1/[\mathrm{exp}(\hbar \omega _q/KT)1]`$, T is the phonon temperature, K is the Boltzmann constant and $`\hbar \omega _q`$ is the energy of the phonon with wave vector q. To satisfy energy conservation, $`\hbar \omega _q`$ is taken to be about 20 meV in the quantum gate. When the temperature is below T = 77 K, $`1+n_q1`$, which means that the designed qubit and CN gates can work below T = 77 K. Furthermore, if we design a qubit so that the energy difference between $`|0`$ and $`|1`$ is much larger than the optical and acoustic phonon energies, the electron-phonon scattering rate may be lowered by the well-known phonon bottleneck phenomenon, and the maximum possible operating temperature can be raised.
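The claim that $`1+n_q1`$ below 77 K can be checked directly from the Bose-Einstein distribution for a 20 meV phonon:

```python
import math

KB_EV_PER_K = 8.617e-5   # Boltzmann constant in eV/K

def bose_occupation(e_phonon_ev, temperature_k):
    # Bose-Einstein distribution n_q = 1 / (exp(E / kT) - 1);
    # math.expm1 avoids loss of precision for small exponents.
    return 1.0 / math.expm1(e_phonon_ev / (KB_EV_PER_K * temperature_k))

n_q = bose_occupation(20e-3, 77.0)   # 20 meV LA phonon at 77 K
stimulation_factor = 1.0 + n_q       # enhancement of the scattering rate
```

At 77 K the occupation of a 20 meV phonon mode is only about 0.05, so the finite-temperature scattering rate exceeds the zero-temperature one by just a few percent.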
The readout of the qubit can be performed by detecting the voltage potential. We have calculated the voltage potential near the upper dot of the qubit and found that the potential distribution changes markedly when the state of the qubit is switched. Figure 7 shows typical contour plots of the voltage potential distribution on the cross section of the qubit. If we switch between the $`|0`$ and $`|1`$ states of the qubit, the potential difference at measuring point A reaches 4.6 mV. After the quantum computation is finished, we can use electrometers to measure the potential difference through the microstrip lines and read out the output of the quantum computer. Details of these operations will be investigated in the future.
## 5 Summary
We have proposed a method for the implementation of a quantum computer using artificial molecules. The artificial molecule consists of two coupled quantum dots stacked along the z direction and a single electron. The ground state and the first excited state of the molecule represent the $`|0`$ and $`|1`$ states of a qubit, respectively. The state of the qubit is manipulated by a resonant electromagnetic wave applied directly to the qubit through a microstrip line. The coupling between two qubits in a quantum controlled NOT gate is switched on (off) by floating (grounding) the metal film electrodes. We studied numerically the operations of the gates and demonstrated that they can perform quantum computation. The operating speed of the gates is about one operation per 4 ps, and the estimated decoherence time is about 10<sup>-7</sup> s. The readout of the output of the quantum computer can be performed by using electrometers to detect the polarization of the qubits.
## Figure Captions
Figure 1. Schematic structure of a qubit using two coupled quantum dots (an artificial molecule): (a) a top view of the qubit; (b) a cross section of the qubit; (c) electron potential along the z direction and the two lowest energy levels of the qubit. The structure of the qubit contains metal film electrodes and a microstrip line.
Figure 2. Schematic structure and energy levels of a two-qubit controlled NOT (CN) gate: (a) a top view of the CN gate; (b) a cross section of the CN gate; (c) energy levels of the gate when the coupling between the qubits is turned on or off. If the coupling between the qubits is turned off, $`\omega _T`$ and $`\omega _C`$ are the resonant frequencies of the electromagnetic waves; if the coupling is turned on, $`\omega _{T+U(C+U)}`$ and $`\omega _{T-U(C-U)}`$ are the resonance frequencies of the target (control) qubit, respectively, when the control (target) qubit is set to $`|1\rangle `$ and $`|0\rangle `$. Here the Coulomb interaction between the electrons at dots 2 (1) and 4 (3) is ignored.
Figure 3. Calculated basic and Hadamard transformation operations of the qubit. The probabilities of the $`|0\rangle `$ and $`|1\rangle `$ states of the qubit $`|\psi \rangle =\alpha |0\rangle +\beta |1\rangle `$ are given. $`\omega _r`$ is the resonance frequency of the qubit. $`\pi `$ and $`\pi /2`$ represent the $`\pi `$ and $`\pi /2`$ pulse operations, respectively. (Dot 1: w = 24nm and h = 20nm; dot 2: w = 22nm and h = 15nm; the separation between the two dots is 7nm)
Figure 4. Calculated basic operations of the CN gate and an example of an entangled state. The probabilities of the $`|00\rangle `$, $`|01\rangle `$, $`|10\rangle `$ and $`|11\rangle `$ states of the gate ($`|\psi \rangle =\alpha |00\rangle +\beta |01\rangle +\gamma |10\rangle +\eta |11\rangle `$) are given. $`\omega _{T+U}`$ is the resonance frequency of the target qubit. $`\omega _{rC}`$ is the resonance frequency when the coupling between the two qubits is turned off. $`\pi `$ and $`\pi /2`$ represent the $`\pi `$ and $`\pi /2`$ pulse operations, respectively. (The dimensions of the control qubit are the same as those in Fig. 3; dot 3: w = 29nm and h = 20nm; dot 4: w = 27nm and h = 15nm; the separation between the two dots is 7nm)
Figure 5. Time evolution of the state $`|0\rangle `$ of the qubit with the phonon-electron interaction near 0ns, 10ns and 50ns. For comparison with the qubit without the phonon-electron interaction, the Rabi oscillations of the qubit under the irradiation of the electromagnetic wave are also given.
Figure 6. A schematic structure of the quantum computer based on the artificial molecules. If a metal partition is deposited between the microstrip lines, the influence of the electromagnetic wave on the neighboring qubit can be removed.
Figure 7. Contour plots of the voltage potential near the upper quantum dot of the qubit: (a) when the state of the qubit is $`|1\rangle `$; (b) when the state is $`|0\rangle `$. A shows the measuring point.
no-problem/9912/astro-ph9912426.html | ar5iv | text | # 1 Introduction
## 1 Introduction
In the last years we have computed several sequences of AGB models of intermediate mass stars with the purpose of investigating the evolutionary properties and the related nucleosynthesis of this class of objects. In this paper we collect all these models in order to summarize our main findings. In Table 1 we report a list of the computed sequences; some features of the models are reported too. Let us recall that the models presented here have been obtained with the same version of the FRANEC code used in our previous computation of low mass AGB stars (Straniero et al. 1997).
Tab. 1
The ten rows in Table 1 report, respectively: the initial mass (ZAMS mass) of each sequence, the initial metallicity, the initial helium, the adopted mass loss rate (”no” means no mass loss, while numbers indicate the value of the $`\eta `$ parameter in the Reimers formula), the number of computed thermal pulses, the final (last computed) mass, the final (last computed) carbon over oxygen ratio, the maximum temperature (in $`10^6`$ K) at the base of the convective envelope (generally it coincides with the one obtained in the last computed interpulse, except in the case of the 5 $`M_{\odot }`$ with $`\eta =10`$), the maximum temperature (in $`10^6`$ K) at the base of the He convective shell and, finally, the last computed value of the amount of mass (in $`M_{\odot }`$) dredged from the He-core. Note that all the sequences (except the one of the 5 $`M_{\odot }`$ with $`\eta =10`$) were arbitrarily stopped after about 40-50 TPs. The case of the 5 $`M_{\odot }`$ with $`\eta =10`$ will be discussed below.
## 2 The III dredge-up
At variance with low mass stars, the III dredge-up (TDU) occurs rather early in our thermally pulsing intermediate mass models. For Z=0.02, the first evident episode is found after the 4th, the 5th and the 7th thermal pulse in the 5, 6 and 7 $`M_{\odot }`$ respectively, whereas at Z=0.001 the 5 $`M_{\odot }`$ experiences a TDU after just 3 TPs. The positions of the H and He burning shells as well as the position of the base of the convective envelope are reported in Figure 1 for the 5 $`M_{\odot }`$ Z=0.001. We found that the penetration of the convective envelope into the He core decreases when the mass increases (see the last row in Table 1). In addition, the extension (in mass) of the region between the two burning shells is smaller for the more massive models. These occurrences are probably due to the connection between the strength of the pulse and the efficiency of the hydrogen burning shell (HBS). In fact, the ignition point of the He burning shell (HeBS) in the temperature/density plane depends on the H burning rate. The larger this rate, the lower the density and the larger the temperature of the He shell at the moment of the re-ignition. Then, less work must be done by the He burning to expand the lighter layers above it and, in turn, a weaker pulse and a smaller TDU occur. In the last computed pulse of the 5 $`M_{\odot }`$ the 3$`\alpha `$ luminosity peak exceeds $`10^8`$ $`L_{\odot }`$, but it drops to $`6\times 10^7`$ and $`2\times 10^7`$ $`L_{\odot }`$ in the 6 and 7 $`M_{\odot }`$ models, respectively (see also Figures 2 and 3).
The more massive models have a more efficient HBS because of the larger core mass and the deeper penetration of the H rich convective envelope into the burning region (see next section). We have tested such a connection between H burning efficiency, pulse strength and TDU by artificially reducing (or increasing) the rate of the <sup>14</sup>N(p,$`\gamma `$)<sup>15</sup>O reaction, which is the bottleneck of the CNO cycle. This test confirms our hypothesis and allows us to conclude that any input physics which could alter the rate of the H burning has a direct influence on the features of the thermal pulse, in particular its strength and dredge-up.
## 3 Hot Bottom Burning
The evolution of the temperature at the bottom of the convective envelope (TBCE) for our 7 $`M_{\odot }`$ model of solar chemical composition is reported in Figure 4. We can see that this temperature rapidly approaches $`8\times 10^7`$ K. Similar values were obtained by Blöcker (1995) for a 7 $`M_{\odot }`$ and by Lattanzio et al. (1996) for a 6 $`M_{\odot }`$. As shown in Figure 5, these temperatures are so large that most of the carbon dredged up from the He shell by the TDU is converted into nitrogen during the interpulse. A similar situation is found for the 6 $`M_{\odot }`$, although the maximum TBCE is slightly lower (about $`7\times 10^7`$ K). However, the <sup>12</sup>C(p,$`\gamma `$)<sup>13</sup>N reaction is almost inactive at the base of the convective envelope in the 5 $`M_{\odot }`$. In this case the maximum TBCE never exceeds $`5\times 10^7`$ K.
As already noted by Lattanzio et al. (1996), HBB is strongly limited by mass loss. In the extreme case of the 5 $`M_{\odot }`$ $`\eta =10`$ model the maximum TBCE never reaches $`2\times 10^7`$ K.
At lower metallicity, the thickness of the HBS is larger and the penetration of the convective envelope into the region of the CNO burning is favored by a less steep entropy barrier. In our 5 $`M_{\odot }`$ Z=0.001 model, we found a bottom temperature as large as $`7\times 10^7`$ K and a significant HBB (see Figures 6 and 7). This is in agreement with the result of Lattanzio & Forestini (1999).
As first found by Blöcker & Schönberner (1991) in numerical computations of AGB stars, the classical core mass/luminosity relation (Paczynski 1975, Iben & Renzini 1983) cannot apply to stars which experience HBB. Although we found a certain deviation from the classical relation, the luminosity of our more massive models, which are close to $`M_{up}`$ (i.e. the minimum mass for the degenerate carbon ignition), never exceeds the observed magnitude of the AGB tip ($`M_{bol}\simeq -7`$, see e.g. Wood et al. 1992).
In Figure 8 we report the luminosity versus the core mass for our 7 $`M_{\odot }`$ model. Note how the luminosity of these models asymptotically approaches the observed AGB limit.
## 4 Neutron source and s-process nucleosynthesis
At the time of the TDU, if a certain amount of protons is diffused below the convective envelope into the top layer of the helium/carbon rich region, a tiny <sup>13</sup>C pocket forms. Later on, as already found for low mass stars (Straniero et al. 1995, Straniero et al. 1997, Herwig et al. 1997), when the temperature in that pocket rises up to (90-100) $`\times 10^6`$ K, the <sup>13</sup>C is fully destroyed by $`\alpha `$ capture during the interpulse. Thus neutrons are released and the heavy s-element production can take place. In such a case the typical neutron density is of the order of $`10^7`$$`10^8`$ $`n/cm^3`$. In our models with $`M\geq 5`$ $`M_{\odot }`$, a further neutron source operates during the convective thermal pulse. In fact, as shown in Table 1, the temperature at the base of the He convective shell reaches $`3.5\times 10^8`$ K. In such conditions, the substantial activation of the <sup>22</sup>Ne($`\alpha `$,n)<sup>25</sup>Mg reaction provides an important contribution to the s-process nucleosynthesis. This confirms the old prescription of Iben (1975a,b). The neutron density during the TP is definitely greater than in the case of the <sup>13</sup>C burning: we found $`\rho _n\simeq 10^{11}`$ $`n/cm^3`$. Some details of the nucleosynthesis in intermediate mass stars can be found in Vaglio et al. (1999).
## 5 Termination of the AGB
The search for the physical mechanism responsible for the formation of a planetary nebula is a longstanding problem in modern astrophysics. This subject is obviously related to the comprehension of the final stage of the AGB evolution. In model computations it is currently assumed that, at a certain point along the AGB, mass loss abruptly grows (up to $`10^{-5}`$$`10^{-4}`$ $`M_{\odot }/yr`$), so that an envelope ejection is simulated (see e.g. Vassiliadis & Wood, 1993; Blöcker, 1995; Forestini & Charbonnel, 1997). However, the correct evaluation of the masses of planetary nebulae and those of their nuclei, as well as the description of the AGB termination, will depend on the particular parameterization of this superwind regime.
In a recent paper, Sweigart (1998) found that in mass losing models the ratio ($`\beta `$) of the radiation pressure to the gas pressure drops abruptly in a region located just above the H/He discontinuity. This occurs just after each thermal pulse. When the envelope mass is reduced enough (how much depends on the core mass and metallicity), $`\beta `$ becomes practically 0 and the local stellar luminosity exceeds the Eddington luminosity. Then an instability, which could drive the envelope ejection, sets in. Something similar was also reported by Wood & Faulkner (1986).
We confirm Sweigart’s finding. In our 5 $`M_{\odot }`$ $`\eta =10`$ model the minimum value of $`\beta `$ is attained during the TDU. At the beginning of the TP-AGB phase, the value of this minimum is about 0.3-0.4, but it decreases from one pulse to the next as the envelope mass is reduced by the mass loss. After 23 thermal pulses, when the residual total mass is 2.48 $`M_{\odot }`$ and the core mass is 0.89 $`M_{\odot }`$, $`\beta `$ goes to 0 and our hydrostatic code cannot proceed (see Figure 9). We found the same situation in a 2.5 $`M_{\odot }`$ model with Z=0.006 and $`\eta =2`$. In such a case the final (last computed) mass is 0.81 $`M_{\odot }`$ and the core mass is 0.67 $`M_{\odot }`$.
Such an occurrence has a quite simple explanation. As a consequence of the thermal pulse, an expansion, starting from the He shell, propagates toward the surface. When this expansion reaches the base of the H rich envelope, the local temperature decreases below $`10^6`$ K. Then the drop of $`\beta `$ is determined by the well known bump in the metal opacity around log T=5.3 (see e.g. Iglesias, Rogers & Wilson 1992), which implies a decrease of the local Eddington luminosity. Thus a sort of void forms between the core and the envelope (see Figure 9). In this thin layer the stellar structure cannot react, as usually occurs, to an increase of the local temperature by expanding the gas and reducing the local pressure. Then, any small perturbation will inevitably grow. We cannot say if this instability can grow indefinitely, but we believe that this phenomenon could play a pivotal role in the AGB termination. This problem deserves a further investigation, possibly by means of a hydrodynamic code. Let us finally note that the TDU increases the metal content of the envelope and, in turn, the opacity bump will increase. So the larger the dredge-up, the deeper the $`\beta `$ drop is. In the present computations the final C/O ratio in the envelope of the 5 $`M_{\odot }`$ is 0.6, whereas in the case of the 2.5 $`M_{\odot }`$ we obtain a carbon star well before the end of the sequence.
## Acknowledgements
This work was partially supported by the MURST Italian grant Cofin98, by the MEC Spanish grant PB96-1428 and by the Andalusian grant FQM-108, and it is part of the ITALY-SPAIN integrated action (MURST-MEC agreement) HI1998-0095.
no-problem/9912/physics9912026.html | ar5iv | text | # Detection of a flow induced magnetic field eigenmode in the Riga dynamo facility
## Abstract
In an experiment at the Riga sodium dynamo facility, a slowly growing magnetic field eigenmode has been detected over a period of about 15 seconds. For a slightly decreased propeller rotation rate, additional measurements showed a slow decay of this mode. The measured results agree satisfactorily with numerical predictions for the growth rates and frequencies.
Magnetic fields of cosmic bodies, such as the Earth, most of the planets, stars and even galaxies are believed to be generated by the dynamo effect in moving electrically conducting fluids. Whereas technical dynamos consist of a number of well-separated electrically conducting parts, a cosmic dynamo operates, without any ferromagnetism, in a nearly homogeneous medium (for an overview see, e.g., and ).
The governing equation for the magnetic field $`𝐁`$ in an electrically conducting fluid with conductivity $`\sigma `$ and the velocity $`𝐯`$ is the so-called induction equation
$`{\displaystyle \frac{\partial 𝐁}{\partial t}}=\mathrm{curl}(𝐯\times 𝐁)+{\displaystyle \frac{1}{\mu _0\sigma }}\mathrm{\Delta }𝐁`$ (1)
which follows from Maxwell equations and Ohms law. The obvious solution $`𝐁=\mathrm{𝟎}`$ of this equation may become unstable for some critical value $`Rm_c`$ of the magnetic Reynolds number
$`Rm=\mu _0\sigma Lv`$ (2)
if the velocity field fulfills some additional conditions. Here $`L`$ is a typical length scale, and $`v`$ a typical velocity scale of the fluid system. $`Rm_c`$ depends strongly on the flow topology and the helicity of the velocity field. For self-excitation of a magnetic field it has to be at least greater than one. For typical dynamos as the Earth outer core, $`Rm`$ is supposed to be of the order of $`100`$.
The last decades have seen an enormous progress of dynamo theory which deals, in its kinematic version, with the induction equation exclusively or, in its full version, with the coupled system of induction equation and Navier-Stokes equation for the fluid motion. Numerically, this coupled system of equations has been treated for a number of more or less realistic models of cosmic bodies (for an impressive simulation, see ).
Quite contrary to the success of dynamo theory, experimental dynamo science is still in its infancy. This is mainly due to the large dimensions of the length scale and/or the velocity scale which are necessary for dynamo action to occur. Considering the conductivity of sodium as one of the best liquid conductors ($`\sigma \approx 10^7(\mathrm{\Omega }\text{m})^{-1}`$ at 100C) one gets $`\mu _0\sigma \approx 10\text{ s}/\text{m}^2`$. For a very efficient dynamo with a supposed $`Rm_c=10`$ this would amount to a necessary product $`Lv=1\text{ m}^2/\text{s}`$, which is very large for a laboratory facility, even more so if one takes into account the technical problems with handling sodium. Historically notable for experimental dynamo science is the experiment of Lowes and Wilkinson, in which two ferromagnetic metallic rods were rotated in a block at rest. A first liquid metal dynamo experiment quite similar to the present one was undertaken by some of the authors in 1987. Although this experiment had to be stopped (for reasons of mechanical stability) before dynamo action occurred, the extrapolation of the amplification factor of an applied magnetic field gave an indication of the possibility of magnetic field self-excitation at higher pump rates. Today, there are several groups working on liquid metal dynamo experiments. For a summary we refer to the workshop ”Laboratory Experiments on Dynamo Action” held in Riga in summer 1998.
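The orders of magnitude above can be reproduced in a few lines. The sketch below uses the sodium conductivity quoted in the text and the supposed $`Rm_c=10`$; the particular length scale in the example call is an illustrative value, not a facility dimension:

```python
import math

mu0 = 4.0e-7 * math.pi  # vacuum permeability, T m/A
sigma = 1.0e7           # liquid sodium conductivity near 100 C, (Ohm m)^-1

def Rm(L, v):
    """Magnetic Reynolds number for length scale L (m) and velocity v (m/s)."""
    return mu0 * sigma * L * v

Rm_c = 10.0             # supposed critical value, as in the text
print(f"mu0*sigma     = {mu0 * sigma:.1f} s/m^2")           # of order 10
print(f"required L*v  = {Rm_c / (mu0 * sigma):.2f} m^2/s")  # of order 1
# e.g. a hypothetical L = 0.1 m at the quoted v = 15 m/s would give
print(f"Rm(0.1, 15)   = {Rm(0.1, 15.0):.1f}")
```

This makes explicit why dynamo experiments need meter-scale devices and velocities of many m/s even with the best liquid conductor available.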
After years of preparation and careful velocity profile optimization on water models, first experiments at the Riga sodium facility were carried out during November 6-11, 1999. The present paper comprises only the most important results of these experiments, one of them being the observation of a dynamo eigenmode slowly growing in time at the maximum rotation rate of the propeller. A more comprehensive analysis of all measured data will be published elsewhere.
The principal design of the dynamo facility, together with some of the most important dimensions, is shown in Fig. 1. The main part of the facility consists of a spiral flow of liquid sodium in an innermost tube (with a velocity up to the order of $`15\text{ m/s}`$) with a coaxial back-flow region and a region of sodium at rest surrounding it. The total amount of sodium is $`2\text{ m}^3`$. The sodium flow of up to $`0.6\text{ m}^3/\text{s}`$ is produced by a specially designed propeller which is driven by two 55 kW motors.
All three sodium volumes play an important role in the magnetic field generation process. The spiral flow within the immobile sodium region amplifies the magnetic field by stretching field lines. The back-flow is responsible for a positive feedback. The result is an axially non-symmetric field (in a symmetric flow geometry!) slowly rotating around the vertical axis. Hence, a low frequency AC magnetic field is expected for this configuration. Concerning the azimuthal dependence of the magnetic field, which includes terms of the type $`\mathrm{exp}(im\phi )`$ with in general arbitrary m, it is well known that for the Rm available in this experiment only the mode with $`m=1`$ can play any role. A lot of details concerning the solution of the induction equation for the chosen experimental geometry and the optimization of the whole facility in general and of the shape of the velocity profiles in particular can be found in , , and .
For the magnetic field measurements we used two different types of sensors. Inside the dynamo, close to the innermost wall and at a height of 1/3 of the total length from above, a flux-gate sensor was positioned. Additionally, 8 Hall sensors were positioned outside the facility at a distance of 10 cm from the thermal isolation. Of those, 6 were arranged parallel to the dynamo axis with a relative distance of 50 cm, starting with 35 cm from the upper frame. Two sensors were additionally arranged at different angles.
After heating up the sodium to 300C and pumping it slowly through the facility for 24 hours (to ensure good electrical contact of sodium with the stainless-steel walls) various experiments at 250C and around 205C at different rotation rates of the propeller were carried out.
According to our numerical predictions, self-excitation was hardly to be expected much above a temperature of 200C, since the electrical conductivity of sodium decreases significantly with increasing temperature. Nevertheless, we started experiments at 250C in order to get useful information for the later dynamo behaviour at lower temperature, i.e. at higher $`Rm`$. Although the experiment was intended to show self-excitation of a magnetic field without any noticeable starting magnetic field, kick-field coils fed by a 3-phase current of variable low frequency were wound around the module in order to measure the sub-critical amplification of the applied magnetic field by the dynamo. This measurement philosophy was quite similar to that of the 1987 experiment and is based on generation theory for elongated flows, as the length of our spiral flow exceeds its diameter more than ten times. Generation in such a geometry should start as an exponentially high amplification of some kick-field (known as convective generation) and should transform, at some higher flow rate, into self-excitation without any external kick-field.
As a typical example of the many measured field amplification curves, Fig. 2 shows the inverse relation of the measured magnetic field to the current in the excitation coils for a frequency of 1 Hz and a temperature of 205C versus the rotation rate of the propeller. The two curves (squares and crosses) correspond to two different settings of the 3-phase current in the kick-field coils with respect to the propeller rotation. An increasing amplification of the kick-field can be clearly observed in both curves up to a rotation rate of about 1500 rpm. These parts of both curves point to about 1700 rpm, which might be interpreted as the onset of convective generation. If the excitation frequency were exactly the one the system likes to generate as its eigenmode, the curves would further approach the abscissa axis up to the self-excitation point. As 1 Hz does not meet exactly the eigenmode frequency, the points are repelled from the axis for further increasing rotation rates, as is usual for externally excited linear systems passing the point of resonance (for this interpretation, see also Fig. 5).
It should be underlined that all points in Fig. 2 except the rightmost one are calculated from sinusoidal field records showing the same 1 Hz frequency as the kick-field. However, the rightmost point at 2150 rpm is exceptional. Let us analyse the magnetic field signal at this rotation rate in more detail. Fig. 3a shows the magnetic field measured every 10 ms at the inner sensor in an interval of 15 s. Evidently, there is a superposition of two signals.
Numerically, this signal (comprising 1500 data points) has been analyzed by means of a non-linear least square fit with 8 free parameters according to
$`B(t)=A_1e^{p_1t}\mathrm{sin}(2\pi f_1t+\varphi _1)+A_2e^{p_2t}\mathrm{sin}(2\pi f_2t+\varphi _2)`$
The curve according to this ansatz (which is also shown in Fig. 3a) fits extremely well into the data giving the following parameters (the errors are with respect to a 68.3 per cent confidence interval):
$`A_1=(0.476\pm 0.004)\text{ mT},`$ $`p_1=(-0.0012\pm 0.0003)\text{ s}^{-1}`$
$`f_1=(0.995\pm 0.00005)\text{ s}^{-1},`$ $`\varphi _1=0.879\pm 0.012`$
$`A_2=(0.133\pm 0.001)\text{ mT},`$ $`p_2=(0.0315\pm 0.0009)\text{ s}^{-1}`$
$`f_2=(1.326\pm 0.00015)\text{ s}^{-1},`$ $`\varphi _2=-0.479\pm 0.009`$
The positive parameter $`p_2=0.0315\text{ s}^{-1}`$ together with the very small error gives clear evidence for the appearance of a self-exciting mode at the rotation rate of 2150 rpm. Fig. 3b shows, in decomposed form, the two contributing modes, the larger one reflecting the amplified field of the coils and the smaller one reflecting the self-excited mode.
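The two-mode decomposition described above is a standard nonlinear least-squares problem. The sketch below reproduces the procedure on synthetic data (generated from the same two-mode form with illustrative parameter values close to those quoted; the actual sensor record is of course not included here), using scipy's `curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_modes(t, A1, p1, f1, ph1, A2, p2, f2, ph2):
    """Superposition of two exponentially growing/decaying sinusoids."""
    return (A1 * np.exp(p1 * t) * np.sin(2 * np.pi * f1 * t + ph1)
            + A2 * np.exp(p2 * t) * np.sin(2 * np.pi * f2 * t + ph2))

# Synthetic "measurement": 1500 samples every 10 ms (15 s), plus sensor noise
t = np.arange(1500) * 0.01
true = (0.476, -0.0012, 0.995, 0.879, 0.133, 0.0315, 1.326, -0.479)
rng = np.random.default_rng(0)
B = two_modes(t, *true) + 0.002 * rng.standard_normal(t.size)

# Starting guess: two frequencies read off a Fourier spectrum, rough amplitudes
p0 = (0.5, 0.0, 1.0, 0.5, 0.1, 0.0, 1.33, 0.0)
popt, pcov = curve_fit(two_modes, t, B, p0=p0)
perr = np.sqrt(np.diag(pcov))
print(f"p2 = {popt[5]:.4f} +/- {perr[5]:.4f} 1/s")  # positive => growing mode
```

A positive recovered $`p_2`$ with a small covariance-based error bar is precisely the signature used in the text as evidence for the self-excited mode.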
For reasons of some technical problems, this highest rotation rate could be held only for 20 seconds, after which it fell down to 1980 rpm. At that lower rotation rate the coil current was switched off suddenly. Figure 4 shows the magnetic field behaviour at three selected Hall sensors positioned outside the dynamo. This mode has a frequency of $`f=1.1\text{ s}^{-1}`$ and a decay rate of $`p=0.3\text{ s}^{-1}`$. A similar signal was recorded by the inner fluxgate sensor, too.
It is interesting to compare the frequencies and growth or decay rates at the two different rotation rates 2150 rpm and 1980 rpm with the numerical predictions. These are based on the outcomes of a two-dimensional time-dependent code which was described in . As input velocity for the computations, an extrapolated velocity field based on measurements in water at two different heights and at three different rotation rates (1000, 1600, and 2000 rpm) was used. Fig. 5 shows the predicted growth rates and frequencies for the three temperatures 150C, 200C, and 250C, which differ due to the dependence of the electrical conductivity on temperature. The two pairs of points in Fig. 5 represent the respective measured values. Bearing in mind the limitations and approximations of the numerical prognosis, the agreement between the pre-calculations and the measured values is good, particularly regarding the frequencies of the magnetic field eigenmode.
The main part of the experiment was originally planned at T=150C, where self-excitation with a much higher growth rate was expected. Unfortunately, the safety rules required us to stop the experiment at T=205C, since technical problems with the seal of the propeller axis against the sodium flow-out had been detected. It is worth noting that the overall system worked stably and without problems over a period of about five days. The sealing problem needs inspection, but represents no problem of principle.
For the first time, magnetic field self-excitation was observed in a liquid metal dynamo experiment. As expected, the observed growth rate was still very small. The correspondence of the measured growth rates and frequencies with the numerical prognoses is convincing. The general concept of the experiment, together with the fine-tuning of the velocity profiles, has been proven feasible and correct. The facility has the potential to exceed the threshold of magnetic field self-excitation by some 20 per cent with respect to the critical magnetic Reynolds number. The experiment will be repeated at lower temperature when the technical problems with the seal are resolved. At lower temperature, a higher growth rate will drive the magnetic field to higher values, where the back-reaction of the Lorentz forces on the velocity should lead to saturation effects.
We thank the Latvian Science Council for support under grant 96.0276, the Latvian Government and the International Science Foundation for support under joint grant LJD100, the International Science Foundation for support under grant LFD000 and the Deutsche Forschungsgemeinschaft for support under INK 18/A1-1. We are grateful to W. Häfele for his interest and support, and to the whole experimental team for preparing and running the experiment.
no-problem/9912/astro-ph9912152.html | ar5iv | text | # The Eddington Luminosity Phase in Quasars: Duration and Implications
## Introduction
An approximate relationship $`M_h\simeq (0.003-0.006)M_b`$ between the central MBH mass, $`M_h`$, and that of the galactic bulge, $`M_b`$, has been established for a few dozen galaxies, both nearby and more distant ones 0003, 0006. A relationship between the absolute magnitudes of quasars and their host galaxies found in Bahc is reduced to the MBH-to-bulge mass relation in galaxies provided that oz98: (i) the central MBH shines at or near the Eddington luminosity and (ii) the host galaxy undergoes a starburst episode. This correlation, coupled with the known luminosity function of galaxies, can serve to obtain S98 the MBH mass distribution $`\varphi _1(M_h)dM_h`$. The history of matter accretion onto a central MBH, thought to serve as the source of quasar activity, is linked to the present observable properties of each individual quasar, such as its luminosity, variability, and emission spectrum. If the bolometric luminosity of a quasar comprises a fraction $`\lambda `$ of the Eddington luminosity, i.e. $`\lambda =L/L_E`$, $`L_E=4\pi GM_hm_pc/\sigma _T`$, the underlying accretion is accompanied by an exponential growth of the MBH mass with the characteristic time $`t_E=4.5\times 10^8\eta /\lambda \text{ yrs}`$, where $`\eta `$ is the accretion efficiency of mass-to-energy transformation. The crucial problem is the duration, $`t_q`$, of such a nearly Eddington accretion phase. Usually an effective $`t_q`$, the same for the entire black hole mass range $`M_h=10^6`$$`10^{10}M_{\odot }`$, is calculated by comparing the global number densities of normal galaxies and quasars and is found to be $`t_q=10^6`$$`5\times 10^8`$ years in HNR97. Meanwhile, the recent data on the mass distribution of MBHs in galaxies provide an opportunity to solve this problem in a more detailed way, viz., to calculate the dependence of $`t_q`$ upon $`M_h`$, which is the major aim of this paper.
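The Eddington luminosity and the $`4.5\times 10^8\eta /\lambda `$ yr e-folding time quoted above follow directly from the physical constants. The sketch below evaluates both in CGS units, using the mass-growth law $`\dot{M}=\lambda L_E/(\eta c^2)`$ implied by the text:

```python
import math

# CGS constants
G = 6.674e-8          # gravitational constant
m_p = 1.6726e-24      # proton mass, g
c = 2.9979e10         # speed of light, cm/s
sigma_T = 6.6524e-25  # Thomson cross-section, cm^2
M_sun = 1.989e33      # g
yr = 3.156e7          # s

def L_Edd(M_g):
    """Eddington luminosity (erg/s) of a black hole of mass M_g grams."""
    return 4 * math.pi * G * M_g * m_p * c / sigma_T

def t_E(eta, lam):
    """Mass e-folding time (yr) for L = lam * L_Edd and efficiency eta."""
    return eta / lam * sigma_T * c / (4 * math.pi * G * m_p) / yr

print(f"L_Edd(1e8 Msun)     = {L_Edd(1e8 * M_sun):.2e} erg/s")  # ~1.3e46
print(f"t_E(eta=0.1, lam=1) = {t_E(0.1, 1.0):.2e} yr")          # ~4.5e7
```

For $`\eta =\lambda =1`$ the function reproduces the quoted coefficient, $`t_E\approx 4.5\times 10^8`$ yr, to within a fraction of a per cent.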
We shall explore whether the distribution functions of quasars and MBHs in normal galaxies are consistent with each other, and we will do this locally in the vicinity of each mass.
## The Eddington Luminosity Phase
It would be reasonable to assume that the duration of the Eddington phase $`t_q`$ depends on the initial mass of a newborn MBH or, in other words, on the initial luminosity, $`L_i`$, of the quasar: $`t_q=t_q(L_i)`$. For simplicity, the transition to and out of the Eddington phase is supposed to occur instantaneously:
$$L=\{\begin{array}{cc}0,\hfill & \text{ if }t<t_i;\hfill \\ L_i\mathrm{exp}[(tt_i)/t_E],\hfill & \text{ if }t_i<t<t_i+t_q(L_i);\hfill \\ 0,\hfill & \text{ if }t>t_i+t_q(L_i),\hfill \end{array}$$
(1)
where $`t_i`$ is the instant of the MBH formation. Along with the distribution function of MBHs in the galactic nuclei, $`\varphi _1`$, we use the observed distribution of quasars in absolute magnitude $`M_B`$ and redshift $`zz_e3`$, $`\varphi _2(M_B,z)dM_Bdz`$, taken from boyle91 . The balance equation is given by
$$\frac{2.5}{\mathrm{ln}10}\underset{0}{\overset{\mathrm{\infty }}{\int }}𝑑z(1+z)^{3/2}\varphi _2(M_B(L),z)\frac{t_E(L)}{t_0}=\underset{M(L)}{\overset{X}{\int }}\varphi _1(M_h)𝑑M_h,$$
(2)
where $`X=M(L)\mathrm{exp}[t_q(L)/t_E]`$, with $`M(L)`$ determined by the equation $`L=\lambda L_E`$, and the relationship $`t=t_0/(1+z)^{3/2}`$ for the flat cosmological model is used. While obtaining Eq. (2), we have also taken into account that $`t_E`$ varies very slowly with respect to $`\varphi _1(M_h)`$. Eq. (2) defines $`t_q`$ as an implicit function of $`L_i`$, which can be translated into a relationship between $`t_q`$ and the initial BH mass $`M_i`$ by using the equation $`L=\lambda L_E`$.
## Results
Numerical solution of Eq. (2) is found by adopting the mass distribution of MBHs $`\varphi _1(M_h)dM_h`$ from S98, derived with the use of three relationships, viz., (i) a correlation $`\mathrm{log}(M_h)=\mathrm{log}(M_b)-2.6\pm 0.3`$ between the MBH mass $`M_h`$ and the bulge mass $`M_b`$; (ii) the mass-luminosity relation for galaxies; and (iii) the Schechter luminosity function.
In Fig. 1 and Fig. 2, we employ two somewhat different distributions in MBH mass from S98 and name them ‘distribution A’ and ‘distribution B’, which correspond to the power-law and log-Gaussian shapes of the dispersion, respectively. Fig. 1 presents the results of our numerical computation of the ratio $`t_q(M_i)/t_E`$ for different values of $`\eta `$ and MBH mass distributions A and B. It should be noted that if $`\eta <0.1`$, the solution does not exist for all values of $`M_i`$. The domain where the solution exists is determined from the condition that the r.h.s. of Eq. (2) exceeds its l.h.s. if one puts $`X=+\mathrm{\infty }`$. For those $`M_i`$ which lead to the opposite condition, the number of galactic nuclei with MBHs is not enough to explain, in the framework of our model, the distribution function of quasars in $`M_B`$ and $`z`$, even if these MBHs stay in an active quasar state during the maximum possible time $`t_q\simeq 3t_E`$. The solution only exists at $`\eta >7\times 10^{-7}`$ for BH mass distribution A and at $`\eta >6\times 10^{-3}`$ for distribution B. A single-valued mapping $`M_i\to M_h`$ breaks up at the left end of curves 2 to 6.
Fig. 1 demonstrates the main result of this work: the distributions of MBHs and quasars in mass are connected by a one-to-one correspondence through the whole range of observed masses only for $`\eta \approx 0.1`$, both for distribution A (curve 1) and B (curve 7). This concordance breaks down for a certain range of MBH masses, viz. the solution of Eq. (2) does not exist there if $`\eta <0.1`$. Nevertheless, jumps of the $`\eta `$ value at the boundary of the domain where the solution exists are not excluded. Such jumps seem quite natural if MBHs in different mass ranges are formed in different ways (e.g., by collapse of massive gas clouds, stellar clusters, etc.; see the review by Rees 1984), so that various accretion regimes with different values of $`\eta `$ are realized. If such jumps indeed take place, transitions between the curves of each of distributions A and B are possible. These transitions must be smoothed because MBHs formed in different ways would coexist in some mass range(s). The most probable value established above is $`\eta \approx 0.1`$, which corresponds to curves 1 and 7 in Fig. 1. For both these curves the relationship $`t_q<t_E`$ holds, and therefore the BH mass growth is not substantial – it generally does not exceed a value comparable to the initial BH mass, and for $`M_h>10^7\mathrm{M}_{\odot }`$ it is negligible.
Comment on “Quantum Monte Carlo study of the dipole moment of CO” [J. Chem. Phys. 110, 11700 (1999)]
In a recent paper Schautz and Flad studied the dipole moment of CO using various first-principles computational methods. As part of this study they considered the applicability of the Hellmann-Feynman theorem within the fixed-node diffusion quantum Monte Carlo (DMC) method. The purpose of this comment is to point out that the conclusion reached by Schautz and Flad regarding the applicability of the Hellmann-Feynman theorem within fixed-node DMC is incorrect.
In the DMC method stochastic evolution of the imaginary-time Schrödinger equation is used to project out the lowest energy many-electron wave function. Such projector methods suffer from a fermion sign problem in which the wave function decays towards the bosonic ground state. To enforce the fermion symmetry the fixed-node approximation is normally used. In a fixed-node DMC calculation the Schrödinger equation is solved separately in each nodal pocket with the boundary condition that the wave function in the $`\alpha ^{th}`$ pocket, $`\psi _\alpha `$, is zero on the surface of the pocket. For simplicity we consider only ground state wave functions which satisfy the tiling property that all nodal pockets are equivalent and related by the permutation symmetry. The pocket wave function is zero outside the pocket and has a discontinuous derivative at the surface of the pocket and therefore satisfies
$$\widehat{H}\psi _\alpha =E_\mathrm{D}\psi _\alpha +g[\psi _\alpha ]\delta (𝐫-𝐫_\mathrm{n}[\psi _\alpha ]),$$
(1)
where $`𝐫_\mathrm{n}[\psi _\alpha ]`$ is the surface of pocket $`\alpha `$. The delta function term, which was neglected by Schautz and Flad, arises from the action of the kinetic energy operator on the discontinuity in the derivative of the wave function at the pocket surface. Operating on this equation with the antisymmetrising operator, $`\widehat{A}`$, gives
$$\widehat{H}\mathrm{\Psi }=E_\mathrm{D}\mathrm{\Psi }+h[\mathrm{\Psi }]\delta (𝐫-𝐫_\mathrm{n}[\mathrm{\Psi }]),$$
(2)
where $`\mathrm{\Psi }=\widehat{A}\psi _\alpha `$ is the antisymmetric DMC wave function and $`𝐫_\mathrm{n}[\mathrm{\Psi }]`$ is the nodal surface of $`\mathrm{\Psi }`$. The nodal surface is fixed by the choice of the trial wave function, $`\mathrm{\Phi }_T`$. The DMC energy may then be calculated from
$$E_\mathrm{D}=\frac{\int \mathrm{\Phi }_T\widehat{H}\mathrm{\Psi }𝑑𝐫}{\int \mathrm{\Phi }_T\mathrm{\Psi }𝑑𝐫}.$$
(3)
If the nodal surface of $`\mathrm{\Phi }_T`$ is exact then $`\mathrm{\Psi }`$ has no gradient discontinuities on the nodal surface and $`\widehat{H}\mathrm{\Psi }_0=E_0\mathrm{\Psi }_0`$, where $`\mathrm{\Psi }_0`$ and $`E_0`$ are the exact wave function and energy, but if the nodal surface is inexact then $`h[\mathrm{\Psi }]`$ is non-zero and normally of order $`\mathrm{\Psi }-\mathrm{\Psi }_0`$. On substituting Eq. 2 into Eq. 3 we find that the nodal term, $`h\delta `$, does not contribute to the energy because $`\mathrm{\Phi }_T`$ is zero on the nodal surface. However, the nodal term can contribute to derivatives of $`E_\mathrm{D}`$. A simple way to see this is to note that the DMC energy can also be evaluated as the expectation value with the fixed-node DMC wave function,
$$E_\mathrm{D}=\frac{\int \mathrm{\Psi }\widehat{H}\mathrm{\Psi }𝑑𝐫}{\int \mathrm{\Psi }\mathrm{\Psi }𝑑𝐫}.$$
(4)
The derivative with respect to a parameter, $`\lambda `$, is
$$\frac{\partial E_\mathrm{D}}{\partial \lambda }=\frac{\int \mathrm{\Psi }\frac{\partial \widehat{H}}{\partial \lambda }\mathrm{\Psi }𝑑𝐫}{\int \mathrm{\Psi }\mathrm{\Psi }𝑑𝐫}+2\frac{\int \frac{\partial \mathrm{\Psi }}{\partial \lambda }\widehat{H}\mathrm{\Psi }𝑑𝐫}{\int \mathrm{\Psi }\mathrm{\Psi }𝑑𝐫}-2E_\mathrm{D}\frac{\int \mathrm{\Psi }\frac{\partial \mathrm{\Psi }}{\partial \lambda }𝑑𝐫}{\int \mathrm{\Psi }\mathrm{\Psi }𝑑𝐫},$$
(5)
and using Eq. 2 we obtain
$$\frac{\partial E_\mathrm{D}}{\partial \lambda }=\frac{\int \mathrm{\Psi }\frac{\partial \widehat{H}}{\partial \lambda }\mathrm{\Psi }𝑑𝐫}{\int \mathrm{\Psi }\mathrm{\Psi }𝑑𝐫}+2\frac{\int \frac{\partial \mathrm{\Psi }}{\partial \lambda }h\delta 𝑑𝐫}{\int \mathrm{\Psi }\mathrm{\Psi }𝑑𝐫},$$
(6)
which is the Hellmann-Feynman expression with an additional nodal term. While $`h\delta `$ is determined by the nodes of $`\mathrm{\Psi }`$ (or equivalently those of $`\mathrm{\Phi }_T`$), $`\frac{\partial \mathrm{\Psi }}{\partial \lambda }`$ depends on how we choose the nodes to vary with $`\lambda `$. We therefore expect the nodal term in Eq. 6 to be non-zero in general. If we choose the nodes to be independent of $`\lambda `$ then $`\frac{\partial \mathrm{\Psi }}{\partial \lambda }=0`$ on the nodal surface and the contribution from the nodal term is zero. In this case the Hellmann-Feynman theorem holds, as correctly stated by Schautz and Flad. However, if the nodes of the trial wave function vary with $`\lambda `$ then the nodal term will normally be non-zero and the Hellmann-Feynman expression does not give the exact derivative of the DMC energy of Eqs. 3 and 4, in contradiction to Schautz and Flad.
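The step from Eq. 5 to Eq. 6 can be made explicit. Substituting Eq. 2 into the second term of Eq. 5 gives

```latex
2\,\frac{\int \frac{\partial\Psi}{\partial\lambda}\,\widehat{H}\Psi \, d\mathbf{r}}
        {\int \Psi\Psi \, d\mathbf{r}}
 = 2E_{\mathrm{D}}\,\frac{\int \frac{\partial\Psi}{\partial\lambda}\,\Psi \, d\mathbf{r}}
        {\int \Psi\Psi \, d\mathbf{r}}
 + 2\,\frac{\int \frac{\partial\Psi}{\partial\lambda}\, h\delta \, d\mathbf{r}}
        {\int \Psi\Psi \, d\mathbf{r}} ,
```

and the first term on the right cancels the third term of Eq. 5, leaving the Hellmann-Feynman term plus the nodal term of Eq. 6.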
We conclude that Schautz and Flad are correct in stating that the Hellmann-Feynman expression evaluated with the fixed-node DMC wave function is equal to the derivative of the fixed-node DMC energy with respect to a parameter $`\lambda `$ if the nodes are independent of $`\lambda `$, but this does not hold in general if the nodal surface depends on $`\lambda `$.
## 1 Introduction
Population synthesis models based on solar abundance ratios predict – for a given Fe index – Mg indices that are weaker than measured in the integrated spectra of elliptical galaxies (e.g. Worthey, Faber & González 1992 \[Worthey et al. (1992)\]; Davies, Sadler & Peletier 1993 \[Davies et al. (1993)\]; Mehlert et al. 2000 \[Mehlert et al. (2000)\]). Although the link from line indices to element abundance ratios is not straightforward (Greggio 1997 \[Greggio (1997)\]; Tantalo, Chiosi, & Bressan 1998 \[Tantalo et al. (1998)\]), the strong Mg absorption features are interpreted as an enhancement of $`\alpha `$-elements. More quantitative studies find an average \[Mg/Fe\] overabundance of $`0.30.4`$ dex (Weiss, Peletier, & Matteucci 1995 \[Weiss et al. (1995)\]; Greggio 1997 \[Greggio (1997)\]). This conclusion gets further support from the detection of enhancement of Mg and other $`\alpha `$-elements in the stars of the Milky Way bulge (McWilliam & Rich 1994 \[McWilliam and Rich (1994)\]). These findings imply short formation timescales and/or an initial mass function (IMF) that is flat at the high-mass end (Matteucci 1994 \[Matteucci (1994)\]). Scenarios that form elliptical galaxies in mergers of evolved spirals fail to reproduce such $`\alpha `$-enhanced stellar populations unless a significant flattening of the IMF is assumed (Thomas, Greggio, & Bender 1999 \[Thomas et al. (1999)\]). Galaxy formation models that are based on hierarchical clustering lead to extended star formation histories in elliptical galaxies which produces Mg/Fe element ratios that are too low compared to the values quoted above (Bender 1996 \[Bender (1996)\]; Thomas 1999 \[Thomas (1999)\]; Thomas & Kauffmann 2000 \[Thomas and Kauffmann (2000)\]).
In this paper we address the question of how robust the predictions from chemical evolution models are. For this purpose we analyze two recent Type II supernova (SNII) nucleosynthesis prescriptions (Woosley & Weaver 1995 \[Woosley and Weaver (1995)\], hereafter WW95; Thielemann, Nomoto, & Hashimoto 1996 \[Thielemann et al. (1996)\] and Nomoto et al. 1997 \[Nomoto et al. (1997)\], hereafter TNH96), that represent a crucial input for the determination of Mg/Fe element ratios as a function of timescales (see also Gibson 1997 \[Gibson (1997)\]; Portinari, Chiosi, & Bressan 1998 \[Portinari et al. (1998)\]).
## 2 Stellar yields
The ratio of Mg to Fe is a measure for the contribution from high-mass stars to the chemical enrichment, because Mg is only produced in SNII (short-lived, massive stars), while a significant part of Fe comes from Type Ia supernovae (SNIa, low-mass, long-lived binary systems). The crucial input ingredients to calculate the enrichment of these elements are both the supernova rates and the stellar yields (nucleosynthesis and ejection per supernova, see Chieffi, this volume; Limongi, this volume). In the following we briefly comment on the SNII yields of Mg and Fe that are calculated by WW95 and TNH96 (see also Thomas, Greggio, & Bender 1998 \[Thomas et al. (1998)\]).
#### Magnesium
Magnesium is produced in both hydrostatic (before the explosion) and explosive carbon burning. The synthesis thus depends upon the (still uncertain) parameters of stellar evolution (convection criteria, $`{}_{}{}^{12}\mathrm{C}(\alpha ,\gamma )^{16}\mathrm{O}`$-rate, etc.) and the modeling of the explosion. We find that the yields of Mg in the two sets of models are in good agreement for low initial stellar masses in the range $`10`$–$`18M_{\odot }`$. In a 20 $`M_{\odot }`$ star, instead, TNH96 produce significantly more Mg than WW95, likely because they adopt the Schwarzschild criterion for convection (WW95; Thomas et al. 1998 \[Thomas et al. (1998)\]). At the high-mass end ($`30`$–$`70M_{\odot }`$), the total mass and the composition of the ejecta mainly depend on the explosion energy and on the mass-cut, which leads to slightly higher Mg-yields in the TNH96 models.
#### Iron
Since the iron ejected by SNeII is entirely produced during the explosion, the yield is very sensitive to details of the explosion model (total energy, mass-cut, fall-back effect, energy transport, etc.). Hence, particularly at higher masses, the Fe-yields from SNeII are very uncertain. For $`M<35M_{\odot }`$, WW95 (model B) give higher Fe yields than TNH96.
Fig. 1 shows the resulting \[Mg/Fe\] ratio in the SNII ejecta for the two nucleosynthesis prescriptions WW95 and TNH96. For $`M\sim 20M_{\odot }`$, the \[Mg/Fe\] predicted by TNH96 is $`\sim 1`$ dex higher than WW95. At the high-mass end, WW95 (model B) produce larger Mg/Fe because of the very low Fe yield due to the fall-back effect.
## 3 The solar neighborhood
The significant discrepancies between the Mg/Fe yields from WW95 and TNH96 lead to very different conclusions on star formation timescales and IMF slopes (Thomas et al. 1998 \[Thomas et al. (1998)\], 1999 \[Thomas et al. (1999)\]). It is therefore crucial to calibrate the chemical evolution model on the abundance patterns of the solar neighborhood, for which the observational data provide the most detailed information. Here we show the impact of the above discrepancies in the stellar yields on the evolution of \[Mg/Fe\] with \[Fe/H\] in the solar neighborhood. The model description for the chemical evolution in the Milky Way is based on the standard infall model (Matteucci & Greggio 1986 \[Matteucci and Greggio (1986)\]). The recipe for the rate of SNIa is taken from Greggio & Renzini (1983) \[Greggio and Renzini (1983)\], a Salpeter initial mass function ($`x=1.35`$, $`M_{\mathrm{min}}=0.1M_{\odot }`$, $`M_{\mathrm{max}}=70M_{\odot }`$) is adopted. For more details see Thomas et al. (1998) \[Thomas et al. (1998)\].
Fig. 2 shows observed stellar abundance ratios in the \[Mg/Fe\]-\[Fe/H\] plane. Models using the same input parameters but different sets of SNII nucleosynthesis prescriptions (WW95 and TNH96) are plotted as solid lines. With the TNH96 yields, the Mg-enhancement in the (metal-poor) halo stars ($`[\mathrm{Fe}/\mathrm{H}]<-1`$) is well reproduced. The sharp decrease of \[Mg/Fe\] at $`[\mathrm{Fe}/\mathrm{H}]\approx -1`$ implies the enrichment from SNIa to be delayed by $`\sim 1`$ Gyr (Pagel & Tautvaisiene 1995 \[Pagel and Tautvaisiene (1995)\], this volume), which is well matched with the SNIa model adopted here (see also Greggio 1996 \[Greggio (1996)\]). Adopting WW95 yields without modifying the other input parameters, the resulting Mg/Fe ratios are well below the data for all metallicities (see also Timmes, Woosley, & Weaver 1995 \[Timmes et al. (1995)\]). Note that the strongest constraint on the SNII nucleosynthesis comes from the metal-poor halo stars, while the mismatch around solar metallicity could in principle be improved by adjusting the parameters of the star formation history and of the SNIa rate.
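The qualitative behaviour in Fig. 2 — a Mg/Fe plateau set purely by the SNII yield ratio, followed by a decline once delayed SNIa iron arrives — can be illustrated with a deliberately minimal one-zone sketch. It assumes a constant star formation rate, instantaneous SNII enrichment, and a single SNIa delay time; the yield numbers are illustrative placeholders, not the WW95 or TNH96 values:

```python
import math

def mg_fe_track(t_gyr, tau_ia=1.0, y_mg=1.0, y_fe2=0.5, y_fe1a=1.0):
    """Toy [Mg/Fe] (arbitrary zero point) for a constant star formation rate.

    Mg accumulates promptly with the SNII rate; SNIa iron is switched on
    as a single delayed channel after tau_ia Gyr.  All yields are
    illustrative placeholders in arbitrary units.
    """
    mg = y_mg * t_gyr
    fe = y_fe2 * t_gyr + y_fe1a * max(0.0, t_gyr - tau_ia)
    return math.log10(mg / fe)

early = mg_fe_track(0.5)    # halo phase: SNII only -> plateau at log10(y_mg/y_fe2)
late = mg_fe_track(10.0)    # disk phase: SNIa iron has caught up
print(f"{early:.2f} {late:.2f}")
```

The plateau value, log10(y_mg/y_fe2), is fixed entirely by the SNII yields, which is why the metal-poor halo stars provide the strongest constraint on the SNII nucleosynthesis.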
Models with mixed stellar yields, namely Mg(TNH96)-Fe(WW95) and Mg(WW95)-Fe(TNH96), are shown by the dashed and dotted lines, respectively. This exercise demonstrates that the low Mg/Fe ratios deduced from the WW95 models are mainly due to the low Mg yields. Especially in the high metallicity regime, the large Fe SNII-yields from WW95 have only a small effect on the final abundance ratios, as a considerable fraction ($`\sim 60`$ per cent) of Fe comes from SNeIa.
We conclude that the impact of uncertainties in stellar yields on chemical evolution models is not negligible. Chemical evolution models that are applied to the Bulge or extragalactic systems should be carefully calibrated on the solar neighborhood.
## 4 Discussion
### 4.1 Model improvements
An important requirement for the successful calibration of chemical evolution models is the accuracy and reliability of abundance and age determinations. Analyzing high quality spectra of galactic stars, Fuhrmann (1998) \[Fuhrmann (1998)\] demonstrates that the scatter in Mg/Fe found by Edvardsson et al. (1993) \[Edvardsson et al. (1993)\] can be reduced. His data also point towards a more pronounced separation of the various components of the Galaxy, that may require more sophisticated modeling (e.g., two-infall model, Chiappini, Matteucci, & Gratton 1997 \[Chiappini et al. (1997)\]).
The determination of the exact timescale for the delayed Fe enrichment from SNIa depends on the star formation history that is constrained through the observationally defined age-metallicity relation. As the latter still suffers from a large scatter, it may be more promising in the future to directly adopt the star formation history that is determined with chromospheric ages (Rocha-Pinto et al. 2000 \[Rocha-Pinto et al. (2000)\]).
Invoking a metallicity dependence of SNIa rates (Kobayashi et al. 1998 \[Kobayashi et al. (1998)\]) and the consideration of delayed and inhomogeneous mixing (Malinie et al. 1993 \[Malinie et al. (1993)\]; Thomas et al. 1998 \[Thomas et al. (1998)\]; Tsujimoto, Shigeyama, & Yoshii 1999 \[Tsujimoto et al. (1999)\]) represent further improvements of chemical evolution models.
### 4.2 Constraint on elliptical galaxies
Unless one allows for a flattening of the IMF, the very high Mg/Fe ratios in elliptical galaxies prohibit the occurrence of late star formation, i.e. induced by a merger (Thomas et al. 1999 \[Thomas et al. (1999)\]). This constraint seems to be at odds with the finding that ellipticals exhibit a considerable scatter in H$`\beta `$ line strengths (González 1993 \[González (1993)\]). However, composite stellar population models containing a small fraction of old metal-poor stars allow for an alternative interpretation: Such models succeed in reproducing the strong metallic and the strong Balmer lines observed in ellipticals without invoking young ages (Maraston, this volume; Maraston & Thomas 2000 \[Maraston and Thomas (2000)\]).
## Acknowledgments
DT thanks F. Matteucci and F. Giovannelli for a very lively and fruitful workshop. This work was supported by the ”Sonderforschungsbereich 375-95 für Astro-Teilchenphysik” of the Deutsche Forschungsgemeinschaft.
A Paradigm Lost: New Theories for Aspherical PNe
## 1. Introduction
In the last decade Planetary Nebulae (PNe) have gone from mysterious to mundane and back again. Ten years ago the variety of PNe shapes was seen as puzzling and lacked a unified physical theory. Five years later it appeared that the needed theoretical interpretation had been found in the Generalized Interacting Stellar Winds (GISW) model. The wealth of new, high resolution space- and ground-based observations of PNe has made the GISW paradigm seem less universal than once hoped for, and these objects once again present us with significant puzzles and theoretical challenges. It is remarkable that such a common phenomenon as the end stages of low- and intermediate-mass stars should so consistently surprise and outwit us. In this paper my aim is to review the state of aspherical PNe theory. I am particularly interested in where the new observations show the limits of the GISW model and where, in my opinion, the next generation of theoretical models is likely to emerge.
## 2. Observations Outpace Theory: Limits of the GISW Model
Researchers in this community are, by now, well aware of the main contours of the GISW model. I repeat them for completeness (for more details see Kwok et al. 1978, Kahn & West 1986, Balick 1987, Icke 1988 and also Frank 1999 for a review). The defining paradigm for explaining aspherical PNe posits a single star evolving from the AGB to a white dwarf. As the star evolves so does its wind. A slow ($`10`$ km $`\mathrm{s}^{-1}`$), dense ($`\dot{M}\sim 10^{-4}M_{\odot }`$ yr<sup>-1</sup>) wind expelled during the AGB is followed by a fast ($`1000`$ km $`\mathrm{s}^{-1}`$), tenuous ($`\dot{M}\sim 10^{-7}M_{\odot }`$ yr<sup>-1</sup>) wind driven off the contracting proto-white dwarf during the PNe phase. The fast wind expands into the slow wind, shocking and compressing it into a dense shell. The “Generalized” part of this interacting winds scenario occurs when the AGB wind assumes a toroidal geometry with higher densities in the equator than at the poles. Inertial gradients then confine the expanding shock, leading to a wide range of morphologies. Numerical models (Soker & Livio 1988, Mellema et al. 1991, Icke et al. 1992, Frank & Mellema 1994, Mellema & Frank 1995, Dwarkadas et al. 1996) have shown this paradigm can embrace a wide variety of nebular morphologies including (and this point needs to be stressed) highly collimated jets (Icke et al. 1992, Frank & Mellema 1997, Mellema & Frank 1998, Borkowski et al. 1997). In spite of the success of these models, high resolution observations, primarily from the HST, have revealed new features of PNe morphology which appear difficult to recover with the classic GISW model. Below I list what I consider to be the most vexing issues raised by these observations.
Point-symmetry: This morphological class was first identified by Schwarz et al. 1992. It is most striking in PNe where jets or strings of knots are visible. There are however many bipolar (and even some elliptical) PNe which also show point symmetry in their brightness distributions. It has been suggested that point symmetry occurs due to precession of a collimated jet (Lopez 1997). If this is the case it is difficult to imagine that a large-scale out-flowing gaseous torus (needed for GISW) can provide a stiff precessing nozzle for the flow. Inhomogeneities in the torus would tend to smooth out on sound crossing times of $`\tau =L/c\approx 10^{17}\mathrm{cm}/10^{6}\mathrm{cm}\mathrm{s}^{-1}\approx 3\times 10^3`$ y. This is of the order of, or less than, many of the inferred precession periods.
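The smoothing-time estimate above can be reproduced directly; the length scale and sound speed are the order-of-magnitude values quoted in the text:

```python
SEC_PER_YR = 3.156e7
L_TORUS = 1e17       # torus inhomogeneity length scale [cm], from the text
C_SOUND = 1e6        # sound speed [cm/s] (~10 km/s), from the text

# sound crossing (smoothing) time, converted from seconds to years
tau_yr = (L_TORUS / C_SOUND) / SEC_PER_YR
print(f"{tau_yr:.1e} yr")
```

The result, a few thousand years, is indeed comparable to typical inferred jet precession periods, which is the crux of the objection to a gaseous precessing nozzle.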
Jets and Ansae: While the GISW model can produce narrow jets it usually requires a large-scale “fat” torus. It is not clear if such structures exist in PNe (this point requires further study). The FLIERS/Ansae described by Balick et al. 1997 are more problematic in that they have, so far, resisted most attempts at theoretical explanation (see, however, Mellema et al. 1998).
Multi-polar/Episodic Outflows: A number of PNe show multiple, nested bipolar outflows (Guerrero & Manchado 1998, Welch et al. 1999). In some cases these multiple bipolar shells have common axes of symmetry, while in others the axes are misaligned (multi-polar). Such objects present problems both for the GISW model and the “classical” view of post-AGB evolution. When the nebula is multi-polar it is usually point-symmetric and presents the same challenges described above. In some cases (Kj PNe 8, Lopez et al. 1997) it appears as if the development of a dense torus may occur more than once, with a separate axis in each case. Such a phenomenon is difficult to reconcile with either a classic single-star model or appeals to binary scenarios. The most likely explanation for any multiple bubbles, whether uni- or multi-polar, is an episodic wind. A fast wind occurring in outbursts or varying periodically is not part of the standard model for post-AGB evolution (though a nova-like recurrence is possible in binary systems). These nebulae may originate from born-again PNe; however, the time-scales for the bubbles ($`<10^4`$ y) do not appear well matched with He shell inter-pulse timescales.
Post-AGB (P-AGB)/Proto-PNe (PPNe) Bipolar Outflows: One of the most startling results of the last five years is the recognition that fast ($`\sim 100`$ km $`\mathrm{s}^{-1}`$) bipolar outflows can occur in the PPNe or even the Post-AGB stage. Objects like CRL 2688 and the Red Rectangle raise the question of how high-velocity collimated flows occur when the star is still in a cool giant or even supergiant stage (CRL 2688 has an F Supergiant spectral type: Sahai et al. 1998; Young et al. 1992). The origin of the wind in this early stage and the mechanisms which produce its collimation are critical questions because it appears that much of the shaping of PNe may occur before the “mature” PNe phase when the star has become hot enough to produce a strong ionizing flux (Sahai & Trauger 1998).
## 3. New Physics
In spite of its successes, it appears that the GISW model can not embrace the full range of behaviors observed in PPNe and PNe. In this section I briefly review (or suggest) some alternative scenarios for PNe evolution. In all cases the scenarios focus on the source of the outflow, either the nascent Central Star of a Planetary Nebulae (CSPNe) or an accretion disk surrounding the star.
### 3.1. Common Envelope Evolutionary Truncation (CEET)
The “story” of PNe evolution has long included the possibility of a common envelope phase (Iben & Livio 1993, Soker 1998). When a giant star swallows a companion the pair’s common envelope can be rapidly ejected (most likely in the orbital plane) leaving either a merged core or a short period binary. This process need not occur at the tip of the AGB when the giant’s stellar core has already evolved to a CSPNe configuration. Common envelope ejection can occur at any point in the evolution along the AGB (or RGB) branch. The initiation of the CE evolution depends on the size of the star and the orbital separation of the binary. Thus the envelope of the star can be torn off the core exposing it before it is in a stable CSPNe configuration.
If the core is not yet “prepared” to be a CSPNe then we might expect instabilities that could lead to rapid and violent ejection of material remaining in the atmosphere. This could provide a source of explosive energy release. This is an attractive idea because it may explain the fully non-axisymmetric structures such as the elephant trunks seen in some PPNe and very young PNe (Trammel, these proceedings). For an exposed core of mass $`M_c`$ with a remnant envelope of mass $`M_e`$ the time-scale for thermal instabilities can be crudely represented as the Kelvin-Helmholtz timescale (Kippenhahn & Weigert 1989). For typical CSPNe this is $`\tau _{KH}<10^3`$ y, a small fraction of the AGB evolutionary timescale. Thus these instabilities could occur rapidly, allowing them to escape direct detection. Note also that if these stars have a strong dynamo acting, then the loss of the envelope could expose the kG fields at the base of the convective zone on short timescales, such that magnetic instabilities could act as the source of explosive energy release.
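A rough number for this thermal timescale follows from the standard Kelvin-Helmholtz estimate $`\tau _{KH}GM_cM_e/(RL)`$ applied to the leftover envelope. The parameter values below (core mass, envelope mass, radius, luminosity) are illustrative assumptions, not figures from the text:

```python
G = 6.674e-8                           # gravitational constant [cgs]
MSUN, RSUN, LSUN = 1.989e33, 6.957e10, 3.828e33
SEC_PER_YR = 3.156e7

M_c = 0.6 * MSUN     # exposed core mass (assumed)
M_e = 0.01 * MSUN    # remnant envelope mass (assumed)
R = 0.3 * RSUN       # post-common-envelope radius (assumed)
L = 5e3 * LSUN       # luminosity (assumed)

# thermal (Kelvin-Helmholtz) timescale of the remnant envelope
tau_kh_yr = G * M_c * M_e / (R * L) / SEC_PER_YR
print(f"{tau_kh_yr:.0f} yr")   # of order 10^2 yr for these values
```

For these inputs the envelope readjusts on roughly a century, comfortably below the $`10^3`$ y figure quoted above and a tiny fraction of the AGB lifetime.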
Finally, consider the possibility that the secondary does not spiral all the way in to merge with the core but leaves instead an extremely close binary (with separation $`a`$). In the period just after the CE ejection the two stars will orbit rapidly in an environment still rich with circumbinary gas (Sandquist et al. 1998). Spiral shocks driven by the secondary’s orbital motion will heat the gas to temperatures proportional to $`V_k^2`$, where $`V_k`$ is the Keplerian speed of the secondary. If the cooling time $`t_c`$ for the gas is greater than or comparable to its sound crossing time $`t_x`$, then the pressure gradients will set the gas in motion. It is possible therefore that the close binary will produce a kind of egg-beater effect, driving a wind from the source at speeds
$$V_w=\zeta V_k=\zeta \sqrt{GM_c/a}$$
(1)
In the equation above $`\zeta \lesssim 1`$ and would depend on the ratio $`t_c/t_x`$.
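Eq. 1 gives a concrete number for a plausible post-CE configuration. The core mass, separation, and value of $`\zeta `$ used here are all assumptions for illustration:

```python
import math

G = 6.674e-8                       # gravitational constant [cgs]
MSUN, RSUN = 1.989e33, 6.957e10

def eggbeater_wind(m_core_msun, a_cm, zeta=0.5):
    """Eq. 1: V_w = zeta * sqrt(G M_c / a), returned in km/s."""
    v_k = math.sqrt(G * m_core_msun * MSUN / a_cm)
    return zeta * v_k / 1e5

# assumed: 0.6 Msun core, secondary orbiting at one solar radius
v_w = eggbeater_wind(0.6, 1.0 * RSUN)
print(f"{v_w:.0f} km/s")
```

Even for $`\zeta `$ of order one half this lands at a few hundred km/s, i.e. in the range of the fast PPNe outflows discussed above.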
While this CEET scenario is highly speculative, it may provide routes to either explosive energy release or winds in the PPNe stage. In the absence of other mechanisms for driving relatively fast winds from cool PPN stars (Simis, Dominik & Icke, these proceedings), this feature makes the CEET a scenario worth further exploration.
### 3.2. Accretion Disk Winds
The possibility that accretion disks play a role in PNe formation was first suggested by Morris (1987). More recently Soker & Livio (1994) and Reyes-Ruiz & Lopez (these proceedings) have explored the formation of disks in binary PNe systems in more detail. In these works there has been a tacit assumption that disk = outflows. Obviously there is a gap in the theory and it remains unclear if or how accretion disks in PNe can create collimated outflows that match observations. I now review two classes of disk wind models that may be applicable to PNe.
Magneto-centrifugal Launching: The potential for accretion disks to create strong, collimated winds has been explored in some detail by both the YSO and AGN communities (Ouyed & Pudritz 1997, Shu et al. 1994). The most popular models rely on the presence of magnetic fields embedded in the disk (i.e. the footpoints of the field are tied to the disk via surface currents). The field co-rotates with the disk. If field lines are bent at an appropriate angle to the disk axis ($`\theta >30^o`$), energy can be extracted from rotation and matter loaded on the field lines is flung outward “like a bead on a wire”. This mechanism has been shown effective in both analytical and numerical studies. While it is clear that the mechanism can produce winds on the order of the escape speed at the launch point, the geometry of the wind that is generated is still uncertain. Some studies indicate that a narrow jet of hypersonic plasma will form almost immediately above the disk/star system (Ouyed & Pudritz 1997). Other researchers (Shu et al. 1994) find the collimation process to be slow, leading to so-called “wide-angle” winds with cylindrical density stratification (the densest parts of the flow lie along the axis, giving the appearance of a jet).
Magneto-centrifugal launching has many attractive features for PNe. For instance the presence of narrow jets in the midst of wider bipolar flows might be naturally explained by wide-angle wind models. The issue which must be addressed is, can such flows be established in PNe disk systems? YSO and AGN disks have typical size scales of hundreds of $`AU`$. A PNe disk could have a size scale of order the Roche Lobe. Can the appropriate physics be obtained in these smaller disks? For example, can magnetic fields of the right magnitude and geometry be generated in these disks? Finally, can the flows from the disk produce the observed morphology and kinematics of the nebulae? Recent MHD jet propagation studies using magneto-centrifugal launching models as input show promising results in terms of generating diverse flow characteristics (Frank et al. 1999) however such research remains in its infancy.
It is, at least, possible to estimate the terminal speed of a magneto-centrifugal disk wind as
$$V_{\mathrm{}}\mathrm{\Omega }(r)r_A(r)$$
(2)
where $`\mathrm{\Omega }(r)`$ is the rotation rate in the disk at radius $`r`$ and the $`r_A(r)`$ is the Alfv́en radius of the flow launched at that disk radius. Typically $`r_A`$ is of the order of a few times $`r`$ (Pelletier & Pudritz 1992). Thus assuming a keplerian disk with a characteristic size of order the stellar radius for a P-AGB star ($`rR_{}`$), the equation above yields $`V_{\mathrm{}}1001000km/s`$ which is of the order observed in PNe jets. Thus the application of MHD disk-wind models to PNe could be a promising field for future work (Frank et al. 1999).
Radiation-driven Disk Winds: Even without magnetic fields accretion disks can generate outflows. Angular momentum must be dissipated in order to allow material in the disk to spiral inward. Thermal energy created in the dissipational process can heat the disks’ surface layers to high enough temperature for line driving to become effective. A wind is then driven off the surface of the disk in the same manner as one is produced by a hot star.
In a series of numerical models Proga and collaborators (Proga, Stone & Drew 1998) have explicated the properties of radiative disk winds. They find the winds emerge with a conical geometry with large half-opening angles of $`\theta >45^o`$. It is noteworthy that the flow pattern in the winds can be unsteady. In general the maximum mass loss rates in these models tend to be low ($`\dot{M}<10^{-7}M_{\odot }`$ yr<sup>-1</sup>) and are associated with high wind velocities ($`v_w>1000`$ km $`\mathrm{s}^{-1}`$). Thus these winds would likely produce fairly wide-lobed, energy-conserving bipolar outflows. Application of these models to the short-lived accretion disks which would likely occur in PNe has not yet been attempted and further study in this area may prove fruitful.
### 3.3. Magnetized Wind Bubbles
One of the most promising new theoretical models invokes a toroidal magnetic field embedded in a normal radiation-driven stellar wind. This so-called Magnetized Wind Bubble (MWB) model was first proposed by Chevalier & Luo (1994) and has been studied numerically by Rozyczka & Franco (1996) and Garcia-Segura et al. (1999). In these models the field at the star is dipolar but assumes a toroidal topology due to rapid stellar rotation. When the wind passes through the inner shock, hoop stresses associated with the toroidal field dominate over isotropic gas pressure forces and material is drawn towards the axis, producing a collimated flow. This mechanism has been shown capable of producing a wide variety of outflow morphologies including well-collimated jets. When precession of the magnetic axis is included in fully 3-D simulations, the MWB model is capable of recovering point-symmetric morphologies as well (Garcia-Segura 1997). The capacity for the magnetic field to act as a long lever arm imposing coherent structure across large distances makes these models particularly attractive.
The potential difficulties involved in application of the MWB models include the presence of field reversals at the “equatorial current sheet” (which need not be restricted to the equator) where reconnection could produce strong dissipation of the magnetic field (Soker 1998, Frank 1999). A more serious difficulty involves the rather extreme input parameters required for the hoop stresses to become effective. The critical parameter in the MWB model is the ratio of magnetic to kinetic energy in the wind, $`\sigma `$. In terms of parameters at the stellar surface
$$\sigma =\frac{B^2}{4\pi \rho _wV_w^2}=\frac{B_{}^2R_{}^2}{\dot{M}_wV_w}\left(\frac{V_{rot}}{V_w}\right)^2$$
(3)
where $`V_{rot}`$ is the rotational velocity of the star. The MWB model is only effective when $`\sigma >0.01`$. It has been noted that this value is what obtains in the Sun. While such an identification may initially seem promising for the model, one must recall that $`\dot{M}_{PN}/\dot{M}_{}>10^7`$! Thus an additional factor of ten million or more must be made up by some combination of field strength, rotational velocity or stellar size. Unfortunately these are usually anti-correlated. Given that $`\dot{M}`$ is fairly well established, it appears that one needs either very strong fields or very high rotation rates.
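The severity of this constraint can be illustrated by evaluating the second form of Eq. (3) for a set of assumed stellar parameters. All numbers below (field strength, radius, mass loss rate, wind and rotation speeds) are hypothetical choices for a post-AGB star, not measured values:

```python
# Toy evaluation of the MWB magnetization parameter, as written in Eq. (3):
#   sigma = B_*^2 R_*^2 / (Mdot_w V_w) * (V_rot / V_w)^2   (cgs units)
# All stellar parameters below are illustrative assumptions.

MSUN_PER_YR = 1.989e33 / 3.156e7   # one solar mass per year, in g/s
RSUN = 6.957e10                    # solar radius, cm
KM = 1.0e5                         # km in cm

def sigma(B_star, R_star, mdot, v_wind, v_rot):
    """Ratio of magnetic to kinetic energy in the wind (dimensionless)."""
    return (B_star**2 * R_star**2) / (mdot * v_wind) * (v_rot / v_wind)**2

# Hypothetical post-AGB star: 1 G surface field, 1 Rsun,
# Mdot = 1e-7 Msun/yr, V_w = 1000 km/s, V_rot = 10 km/s
s = sigma(B_star=1.0, R_star=1.0 * RSUN,
          mdot=1.0e-7 * MSUN_PER_YR,
          v_wind=1000.0 * KM, v_rot=10.0 * KM)
```

For these solar-like field and rotation values $`\sigma `$ comes out many orders of magnitude below the 0.01 threshold, which is the point of the paragraph above: either the field strength or the rotation speed must be far larger than solar.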
The situation becomes more difficult when one considers that without significant angular momentum transport in the star, mass-losing AGB stars should spin down during their evolution. Consider the simplest case of a constant density star rotating as a solid body. One can show that given a main sequence rotation rate, mass and radius of $`\mathrm{\Omega }_{ms},M_{ms}`$ and $`R_{ms}`$ respectively, the post-AGB rotation rate will be
$$\mathrm{\Omega }_P=\mathrm{\Omega }_{ms}\left(\frac{R_{ms}}{R_P}\right)^2\left(\frac{M_P}{M_{ms}}\right)^{2/3}$$
(4)
where $`P`$ denotes Post-AGB quantities. Note that since $`M_P<M_{ms}`$ and $`R_P>R_{ms}`$ we will always have $`\mathrm{\Omega }_P<\mathrm{\Omega }_{ms}`$.
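A quick numerical sketch of Eq. (4) makes the spin-down concrete. The radius and mass values below are assumed purely for illustration:

```python
def postagb_spin_ratio(R_ms, R_p, M_ms, M_p):
    """Omega_P / Omega_ms from Eq. (4) for a solid-body,
    constant-density star (any consistent units)."""
    return (R_ms / R_p)**2 * (M_p / M_ms)**(2.0 / 3.0)

# Assumed example: radius grows from 5 to 300 Rsun while the
# mass drops from 5 to 0.6 Msun over the AGB phase
ratio = postagb_spin_ratio(R_ms=5.0, R_p=300.0, M_ms=5.0, M_p=0.6)
```

Both factors are less than unity, so in this example the star ends up rotating at only a few times $`10^{-5}`$ of its main sequence rate.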
It has been argued that effective mixing between the core and the envelope during the AGB stage can produce high surface rotational velocities (Garcia-Segura et al. 1999). While this may be possible, its effectiveness would tend to diminish the dynamo processes which may be needed to create the magnetic field.
### 3.4. Dynamos
If magnetic fields play a role in post-AGB and PNe evolution then one must address the source of the field. While it is possible that dynamically significant fields may be preserved as fossil relics from the main sequence, it is more likely that the fields may be generated via dynamo processes. The standard $`\alpha \omega `$ mean field dynamo model used to explain most astrophysical magnetic fields (stellar, galactic etc.) relies on a combination of convection and differential rotation. The rotation stretches the field lines while convection turns toroidal components into a poloidal (dipole-like) field. The effectiveness of a dynamo can be expressed in terms of the dynamo number $`N_D`$ (Thomas, Markiel & Van Horn 1995),
$$N_D\mathrm{\Delta }\mathrm{\Omega }\frac{r_c}{L}$$
(5)
where $`\mathrm{\Delta }\mathrm{\Omega }`$ is the differential rotation which occurs over a scale $`L`$ in the midst of a convection zone of size $`r_c`$. Effective dynamos in AGB, post-AGB or CSPNe require $`N_D>1`$. The equation above shows the difficulty of using a rapidly rotating star whose angular momentum has been well mixed as the source of strong magnetic fields. Unless the field is a fossil of the previous stages, dynamo generated fields require strong differential rotation.
There remains much work to be done in the application of dynamo theory to AGB stars (Pascoli 1997). The growing implication that MHD is a necessary part of the nebular dynamics is likely to require that such work be done.
## 4. Nebular Paleontology
There are many examples of PNe studies in which attempts have been made to directly link simulations with data via synthetic observations. These have usually involved calculation of various optical forbidden and permitted line intensity maps. As this approach matures it should become possible to carry out stellar wind paleontology studies in which the history of an individual object is reconstructed based on its morphology, kinematics, ionization and chemical structure.
For such studies to be successful they require objects that have been very well characterized observationally ($`\eta `$ Car, SN1987A; Collins et al. 1999). The Egg Nebula is becoming one such object. Figure 1 presents the results of a paleontological study of the Egg nebula (Delamarter et al. 1999) in which simulations that included $`H_2`$ chemistry and excitation as well as post-processed scattered light image production were carried out. After more than 50 simulations we found that the GISW model could not recover the observed features of the Egg. Further simulations showed that the best fit to $`H_2`$ and scattered light intensity maps required a torus ejected at about the same time as a fully collimated jet. The torus and jet were distinct, non-interacting dynamical features. Based on the requirement that $`H_2`$ not become dissociated, the results allowed for a reasonably unique determination of the mass loss history of the central star. We note in Fig. 1, however, that these models could not recover the unusual tuning-fork pattern seen in the scattered light images of the Egg. The images did produce a good match to other PPN such as IRAS 17150-3224.
#### Acknowledgments.
Support for this work was provided at the University of Rochester by NSF grant AST-9702484 and the Laboratory for Laser Energetics.
## References
Balick, B. 1987. AJ 94 671
Balick, B., Alexander, J., Hajian, A., Terzian, Y., Perinotto, M., Patriarchi, P, 1998, AJ, 116, 360
Borkowski, K., Blondin, J., & Harrington, J., 1997, ApJ, 482, 97
Chevalier, R., & Luo, D., 1994, ApJ, 421, 225
Cliffe, J.A., Frank, A., Livio, M., & Jones, J, 1995, ApJ, 44, L49
Collins, T., Frank, A., Bjorkman, J., & Livio, M., 1999, ApJ, 512, 322
Delamarter, G., Vieira, M., Frank, A., Woods, K., Welch, C., 1999, in preparation
Dwarkadas, V., Chevalier, R.A., & Blondin, J.M. 1996, ApJ ,457 773
Frank, A., & Mellema, G., 1994, ApJ, 430, 800
Frank, A. & Mellema, G. 1996, AJ, 472, 684.
Frank, A., 1999, New Astr Rev, 43, 31
Frank, A., Gardiner, T., Jones, T., Ryu, D., 1999, ApJ, submitted
Frank, A., 1999, in preparation
Garcia-Segura, G., 1997, ApJ, 489L, 189
Garcia-Segura, G., Langer, N., Rozyczka, M., & Franco, J., 1999, ApJ, 517, 767
Guerrero, M., Manchado, A., 1999, ApJ, 522, 378
Iben, I., & Livio, M., 1993, PASP, 105, 1373
Icke, V., 1988, A&A, 202, 177
Icke, V., Balick, B., & Frank, A., 1992, A&A, 253, 224
Kahn, F. D., & West, K. A., 1985, MNRAS, 212, 837
Kippenhahn, R., & Weigert, A., 1989, Stellar Structure and Evolution, (Springer-Verlag NYC), pg 238
Kwok, S., Purton, C. 1978, ApJ 219, L125
Livio, M., & Soker N., 1988, ApJ, 329, 764
Livio, M., & Pringle, J. E., 1997, ApJ, 486, 835
Lopez, J., 1997, in Planetary Nebulae, IAU Symp. 180, ed. H. Habing, H. Lamers, (Dordrecht: Kluwer Academic Publishers)
Mellema, G., & Frank, A., 1995, MNRAS, 273, 40
Mellema, G., Balick, B., Eulderink, F., & Frank, A., 1992, Nature, 355, 524.
Mellema, G. & Frank, A. 1997, MNRAS, 292, 795.
Mellema, G, Raga, A., Canto, J., Lundqvist, P., Balick, B., & Noreiga-Crespo, A. 1998, A&A, 331, 335
Morris, M. 1987, PASP, 99, 1115
Oyued R., & Pudritz, R. E. 1997, ApJ, 482, 712
Pascoli, 1997, ApJ, 489, 946
Pelletier & Pudritz, 1992, ApJ, 394, 117
Proga, D., Stone, J., Drew, J., 1998, MNRAS, 295, 595
Rozyczka, M., & Franco, J., 1996, ApJ, 469, 127
Sahai, R., et al. 1998, ApJ, 493, 301
Sahai, R., & Trauger, J. T. 1998, AJ, 116, 1357
Shu, F., Najita, J., Ostriker, E., Wilkin, F, Ruden, S., Lizano, S., 1994, ApJ, 429, 781
Soker N., 1998, ApJ, 496, 833
Soker, N., Livio, M., 1994, ApJ, 421, 219
Soker, N., 1998, in ASP Conf. Ser. 154, The Tenth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, ed. R. A. Donahue & J. A. Bookbinder, p. 1901
Schwarz, H. E., Corradi, R. L. M., Melnick, J. 1992, A&AS, 96, 23
Thomas, J., Markiel, A., & Van Horn, H., 1995, ApJ, 453, 403
Welch, C., Frank, A., Pipher, J., Forrest, W., Woodward, C., 1999, ApJ, 522L, 69
Young, K., Serabyn, G., Phillips, T. G., Knapp, G. R., Gusten, R., & Schulz, A. 1992
# The Connection with B\[e\] stars
## 1. Introduction
The existence of classical Be stars has been recognized for more than a century. The history of B\[e\] stars, on the other hand, is much shorter. About 30 years ago Geisel (1970) first reported a correlation between stars whose continua exhibit excess radiation at infrared wavelengths $`>5\mu `$m and stars whose spectra exhibit low-excitation emission lines. This marked the beginning of the investigation of the enigmatic class of B\[e\] stars, although at that time these stars were not yet known under this acronym. At about the same time Wackerling (1970) and Ciatti et al. (1974) also realized the existence of a group of hot emission-line objects with “abnormal” spectra and forbidden emission lines, for which they introduced the designation BQ\[ \] stars. Allen & Swings (1972) and in particular Allen & Swings (1976) systematically investigated this new class of stars, which they described as peculiar hot emission-line stars with infrared excesses. They found that these stars “… form a group spectroscopically discerned from normal Be stars and planetary nebulae…” which contains “objects ranging from almost conventional Be stars to high density planetary nebulae”. The dominant spectral features were found to be emission lines of singly ionized iron. The final step towards the B\[e\] stars was made by P. Conti during the general discussion at the 1976 IAU symposium entitled “Be and Shell Stars.” He suggested that “ … A second class of objects would be those B-type stars which show forbidden emission lines and I would suggest that we classify these as B with a small e in brackets B\[e\], following the notation for forbidden lines” (Conti 1976).
In the following sections I will first describe in more detail the defining characteristics of what nowadays we call “B\[e\]” stars, and then consider the various object classes which constitute the inhomogeneous group of B\[e\] stars. Finally, I will discuss in some detail the connection between B\[e\] stars and classical Be stars.
## 2. Defining characteristics of B\[e\] stars
Extensive infrared surveys of galactic early-type emission line stars, e.g. by Allen (1973), (1974), and Allen & Glass (1975), revealed that two populations of emission line stars exist: (a) emission-line stars with normal stellar infrared colours comprising the classical Be stars and other types of objects like normal supergiants, Luminous Blue Variables (LBVs), and S-type symbiotics; and (b) “peculiar” emission-line stars with IR excesses due to hot circumstellar dust.
Following Conti’s suggestion the latter group of peculiar and dusty Be stars is now usually called B\[e\] stars, i.e. they are early-type emission-line stars with low-excitation lines, forbidden lines, and hot dust visible in the near and mid infrared.
As we will see in the next section a more precise statement is that these stars show the “B\[e\] phenomenon” which is characterized by spectroscopic properties and by the continuum energy distribution as follows:
* Spectroscopic characteristics:
+ strong Balmer emission lines;
+ low-excitation permitted emission lines, predominantly of singly ionized metals, e.g. Fe ii;
+ forbidden emission lines of \[Fe ii\] and \[O i\];
+ higher ionization emission lines can be present, e.g. \[O iii\], He ii;
* continuum energy distribution:
+ Continuum distribution of an early-type star in the visual(/UV);
+ strong near/mid infrared excess due to hot circumstellar dust with temperatures around $`T5001000`$ K.
Stars showing the B\[e\] phenomenon form a distinct group of objects in $`(J-H)`$ vs. $`(H-K)`$ and $`(H-K)`$ vs. $`(K-L)`$ diagrams (cf. Fig. 1). One can therefore state that, in connection with forbidden emission lines, hot dust is the defining characteristic of B\[e\] stars. Hence, a star with forbidden emission lines but without thermal emission due to hot circumstellar dust (such as LBVs in certain phases) should not be classified as a B\[e\] star. With respect to classical Be stars the presence of circumstellar dust is a distinctive feature because these stars definitely lack this property.
Linear polarization measurements showed that B\[e\] stars of all sub-types defined in the next section are characterized additionally by a non-spherical distribution of circumstellar scattering particles. Intrinsic polarization was observed e.g. by Coyne & Vrba (1976), Barbier & Swings (1982), Zickgraf & Schulte-Ladbeck (1989), and Schulte-Ladbeck et al. (1992). Oudmaijer & Drew (1999) obtained spectropolarimetry of B\[e\] and Herbig Be stars. They detected non-sphericity by polarization changes across the H$`\alpha `$ emission line in all B\[e\]-type stars of their sample.
In classical Be stars polarization is due to Thomson scattering in the ionized circumstellar disk. Two scattering mechanisms are responsible for the observed polarization of B\[e\] stars: scattering by dust particles and Thomson scattering analogous to classical Be stars. Both mechanisms may contribute simultaneously in B\[e\]-type stars (cf. Zickgraf & Schulte-Ladbeck 1989).
As discussed by Lamers et al. (1998) the characteristics of the B\[e\] phenomenon can also be formulated in terms of physical conditions:
* The strong Balmer emission lines imply very large emission measures ($`EM`$) of the singly ionized gas above the stellar continuum forming region. Typically, for a supergiant B\[e\] star (see below) with H$`\alpha `$ luminosities of $`10^{37}`$ to $`10^{38}`$ erg s<sup>-1</sup>, the $`EM`$s are on the order of $`10^{62}`$ to $`10^{63}`$ cm<sup>-3</sup>. For less luminous stars, such as pre-main sequence B stars showing the B\[e\] phenomenon, the emission measure is about $`10^{57}`$ cm<sup>-3</sup>.
* The presence of emission lines of low-ionization metals like Fe ii indicates a temperature of $``$ 10<sup>4</sup> K in the emitting region.
* The presence of forbidden emission lines of low excitation metals such as \[Fe ii\] and \[O i\] indicates a geometrically extended envelope so that there is a large amount of low density gas. Applying the diagnostics described by Viotti (1976) leads to densities of the \[Fe ii\] emitting region of $`N_e<10^{11}`$ cm<sup>-3</sup>.
* According to Bjorkman (1998), infrared excesses from cool dust ($`T_\mathrm{d}\sim 500`$–1000 K) indicate a circumstellar density of $`\rho \sim 10^{-18}`$ g cm<sup>-3</sup> at distances where the dust temperature can equilibrate ($`\sim 500`$ to 1000 R).
Taking into account that the circumstellar environments of B\[e\] stars are non-spherical, these conditions are consistent with circumstellar densities on the order of 10<sup>9</sup> to 10<sup>10</sup> cm<sup>-3</sup>. This is supported also by 2.3 $`\mu `$m CO overtone emission observations by McGregor et al. (1988a,b), who derived similar densities for the molecular emission regions. The existence of dust is most likely related to the existence of molecules. The formation of molecules in turn drives the condensation of dust particles. Both types of matter require high densities. In contrast, the envelopes of Be stars may have densities too low to allow condensation of dust particles.
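The emission measures quoted in the first bullet can be checked at the order-of-magnitude level by inverting $`L(\mathrm{H}\alpha )EM\alpha _{\mathrm{eff}}h\nu `$. The recombination coefficient below is a standard Case B value at $`10^4`$ K assumed for this sketch; treat the result as an estimate only:

```python
# Order-of-magnitude check: the emission measure EM = integral of n_e*n_p dV
# needed to power a given Halpha luminosity, L ~ EM * alpha_eff * h*nu.
ALPHA_EFF = 1.17e-13   # assumed effective Halpha recombination coeff. at 1e4 K, cm^3/s
E_PHOT = 3.03e-12      # energy of one Halpha photon, erg

def emission_measure(L_halpha):
    """Emission measure (cm^-3) implied by an Halpha luminosity in erg/s."""
    return L_halpha / (ALPHA_EFF * E_PHOT)

em_sg = emission_measure(1.0e37)   # lower end of the sgB[e] luminosity range quoted above
```

For $`L=10^{37}`$ erg s<sup>-1</sup> this gives an emission measure of a few times 10<sup>61</sup> cm<sup>-3</sup>, within a factor of a few of the 10<sup>62</sup> to 10<sup>63</sup> cm<sup>-3</sup> range quoted above.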
## 3. B\[e\] stars: a stellar melange
The definition of B\[e\] stars as given in the previous section describes certain physical conditions in terms of excitation and density in the circumstellar environment rather than intrinsic object properties. Because similar circumstellar conditions can prevail in the surroundings of objects belonging to intrinsically different classes, it is not surprising that “B\[e\]” stars do not form a homogeneous group of objects, but rather a melange of various classes.
This was already realized by Allen & Swings (1976) who distinguished three groups of peculiar Be stars with infrared excesses, namely (a) group 1: few emission lines, not always forbidden lines, almost conventional Be stars, (b) group 2: “most distinctive group”, spectra with permitted and forbidden Fe ii emission, and (c) group 3: additionally emission lines of higher ionization stages ($`IP>25`$eV). The phenomenological grouping indicated that B\[e\] stars are not a unique class of objects but comprise members of different classes which share the common property of showing the B\[e\] phenomenon (for a detailed discussion of the various object classes cf. Zickgraf 1998).
Actually, it would be desirable to determine the intrinsic stellar parameters in order to find the position of the B\[e\] stars in the H-R diagram. This would permit one to constrain the likely evolutionary status and therefore determine the objects’ B\[e\] classification types. However, in practice this turns out to be difficult or even impossible. In many cases photospheric absorption features are absent or at most weakly discernible. It is therefore difficult to determine reliable effective temperatures. In many cases only the stellar continuum energy distribution allows one to estimate the star’s $`T_{\mathrm{eff}}`$, yielding rather uncertain results. Interstellar reddening increases the problem. Likewise, unknown distances lead to uncertain luminosities. Therefore the evolutionary status of many B\[e\] stars is unknown. Zorec (1998) collected distances and luminosities for galactic B\[e\]-type stars in order to establish the H-R diagram for galactic B\[e\] stars. But even with known $`T_{\mathrm{eff}}`$ and $`L`$ values, B\[e\] stars often present obstacles to a determination of their evolutionary status, as I will discuss below for some near-main-sequence objects.
In some cases more or less reliable information about the evolutionary status can be obtained. Some stars were found to represent objects in a post-main sequence phase of the evolution of massive stars. Others are obviously intermediate mass pre-main sequence Herbig Ae/Be stars, while still others are in late stages of the evolution of low-mass stars.
As an important result of the Workshop on B\[e\] stars held in Paris, 1997, Lamers et al. (1998) discussed an improved classification of B\[e\] stars and suggested several different subclasses of B\[e\] stars:
* evolved high-mass stars with $`L10^4\mathrm{L}_{}`$: B\[e\] supergiants $``$ sgB\[e\]
* intermediate mass pre-main sequence stars: Herbig Ae/Be stars $``$ HAeB\[e\]
in particular: isolated HAeB\[e\]
* evolved low-mass stars:
Compact low-excitation proto-planetary nebulae $``$ cPNB\[e\]
* D-type symbiotic stars $``$ symB\[e\]
* “unclassified” B\[e\] stars $``$ unclB\[e\]
As a further possible class one can add a group of main-sequence or near-main sequence stars, MSB\[e\], which could represent a link with classical Be stars.
Lamers et al. stressed that a unique classification is not always possible because the assignment to a class is typically ambiguous. It is therefore not surprising that their group of stars of type unclB\[e\] is the largest.
## 4. The connection: B\[e\] vs. Be
In this section I will discuss in more detail the connection between Be stars and the two subclasses of low-luminosity B\[e\] stars and high-luminosity B\[e\] supergiants. The dividing line between both classes is set around $`L10^4\mathrm{L}_{}`$.
### 4.1. Low-luminosity/(near)-MS B\[e\] stars
In the H-R diagram constructed by Zorec (1998) several B\[e\] stars are found close to or on the main sequence. These stars are of particular interest to resolving the question of their connection with classical Be stars. Among these objects are e.g. MWC 84, and HD 51585, which were classified as cPPNB\[e\] by Lamers et al. (1998), and HD 163296, HD 31648, and HD 190073 which are probably HAeB\[e\]-type stars. They belong to classes which are not related to classical Be stars.
This is different for the two B\[e\] stars HD 45677 (FS CMa) and HD 50138. In the H-R diagram they are located in the same region as classical Be stars, and they exhibit spectroscopic similarities with this class. For these two near-main sequence objects parallaxes are known from HIPPARCOS (cf. Zorec 1998) and therefore their distance and luminosity are well known.
#### HD 45677
This is one of the best-studied galactic B\[e\] stars. Allen & Swings (1976) describe HD 45677 as a kind of “prototype” of their group 2 (see above) and hence it is often regarded as a “prototype” B\[e\] star. Its spectral type is B2(III-V)e. The spectrum was described in detail e.g. by Swings (1973), de Winter & van den Ancker (1997), and by Andrillat et al. (1997). A detailed NLTE analysis by Israelian et al. (1996) yielded the stellar parameters $`\mathrm{log}g=3.9`$ and $`T_{\mathrm{eff}}=\mathrm{22\hspace{0.17em}000}`$ K.
Swings (1973) discussed the emission line spectrum of HD 45677. He found double-peaked Fe ii emission lines with a line splitting of $`\mathrm{\Delta }v=32`$ km s<sup>-1</sup>, and single-peaked forbidden \[Fe ii\] lines. He interpreted the observations in terms of a rotating equatorial disk. The non-spherical environment is confirmed by visual and UV polarization data, which indicate that the star is viewed edge-on through a dusty disk (Coyne & Vrba 1976, Schulte-Ladbeck et al. 1992). The existence of a disk should be correlated with a high stellar rotational velocity, for which Swings & Allen (1971) had in fact estimated $`v\mathrm{sin}i\sim 200`$ km s<sup>-1</sup>, which would be comparable with velocities measured for classical Be stars. The NLTE analysis of Israelian et al. (1996), however, yielded only $`v\mathrm{sin}i\sim 70`$ km s<sup>-1</sup>. It remains therefore unclear whether rapid rotation is responsible for the formation of the disk.
Although the distance of HD 45677 is known, its evolutionary status is still under discussion. The location in the H-R diagram suggests that it is near the main sequence. In this respect it could be considered an extreme Be star. However, Grady et al. (1993) detected mass accretion in IUE spectra, which indicates that it is still in a pre-main sequence phase of evolution, i.e. a Herbig Be star of the type HAeB\[e\]. This classification is somewhat doubtful because no association with a nebula is present. de Winter & van den Ancker (1997) therefore interpreted the observations, in view of the isolated position in the sky, in terms of a young object, but not in the sense of being pre-main sequence. It should be noted, however, that the isolated location does not necessarily contradict a Herbig Be classification. Grinin et al. (1989, 1991) discuss the existence of isolated Herbig stars, of which HD 45677 could be a member.
#### HD 50138
HD 50138 was considered by Allen & Swings (1976) as a group 1 object and they described this star as a kind of extreme Be star. For recent studies cf. Pogodin (1997) and Jaschek & Andrillat (1998). Its spectral type is B6III-IV, and in the H-R diagram it is located close to the main sequence. Houziaux (1960) determined a rotational velocity of $`v\mathrm{sin}i\sim 160`$ km s<sup>-1</sup>. Like HD 45677 it is not associated with a nebula, but exhibits spectral and polarimetric characteristics similar to young stellar objects. It could therefore also be a HAeB\[e\]-type star.
The conclusion for the (near)-main-sequence B\[e\] stars is that their $`T_{\mathrm{eff}}`$ and $`L`$ are comparatively well known, and that in some respects they are similar to Be stars. Like the latter group, they possess disk-like circumstellar structures and may also rotate rapidly, although this question is not yet settled. However, their evolutionary status is still a controversial issue. Despite their well-known distances and effective temperatures, it is still not clear whether they are a kind of extreme Be star or, alternatively, pre-main sequence HAeB\[e\] stars. The problem is that near the location of these stars in the H-R diagram the birthline of Herbig stars reaches the main sequence (Palla & Stahler 1993), and hence a separation of true main sequence from pre-main sequence objects is difficult.
### 4.2. B\[e\] supergiants vs. Be stars
As discussed in the previous section, galactic B\[e\] stars are a mixture of different classes. Only a few of these B\[e\] stars are known to have luminosities as high as those of supergiants, e.g. CPD-52 9243 (Swings 1981, Winkler & Wolf 1989), MWC 300 (Wolf & Stahl 1985), MWC 349A (Cohen et al. 1985), GG Car (McGregor et al. 1988a, Lopes et al. 1992), HD 87643 (Oudmaijer et al. 1998), and MWC 137 (S 266) (Esteban & Fernandez 1998).
However, despite many efforts the classification often remains uncertain, and confusion with other classes is still an issue, e.g. for the stars MWC 300, HD 87643, MWC 349A, and MWC 137. These stars have alternatively been classified as HAeB\[e\]-type stars with lower luminosities than supergiants (cf. references in Lamers et al. 1998).
For extragalactic B\[e\] stars the situation is different. Until now the only galaxies (apart from our Milky Way) in which B\[e\] stars have been found are the Magellanic Clouds (MCs). Presently 15 B\[e\] supergiants are known in the MCs; for a list of these stars cf. Lamers et al. (1998). Compared to the Milky Way the advantage of the MCs is that their distances are well known and hence the luminosities of the B\[e\] stars are also known. The location of the B\[e\] stars in the H-R diagram indicates that they are evolved massive post-main sequence objects. Most of the sgB\[e\] stars in the MCs have luminosities on the order of $`L\sim 10^5`$–$`10^6`$ L (e.g. Zickgraf et al. 1985, 1986, 1989). Recently, the luminosity range was extended downward to $`L\sim 10^4`$ L (Zickgraf et al. 1992, Gummersbach et al. 1995), suggesting that a transition to lower-luminosity near-main-sequence objects could exist.
Spectroscopically B\[e\] supergiants are characterized by hybrid spectra. This term describes the simultaneous presence of broad (1000-2000 km s<sup>-1</sup>) high excitation absorption features of N v, C iv, and Si iv in the satellite UV, or of He i in the visual wavelength region indicative of a hot high velocity wind component. At the same time narrow ($`<100`$ km s<sup>-1</sup>) B\[e\]-type low-excitation emission-lines of Fe ii, \[Fe ii\], and \[O i\] are observed, a fact which suggests a cool, slow wind component. Likewise, molecular emission bands of TiO in the visual and CO overtone bands in the near infrared are observed (Zickgraf et al. 1989, McGregor et al. 1988b). These observations indicate the presence of two wind regions with basically different physical conditions.
The observed properties were explained by an empirical model of Zickgraf et al. (1985). These authors invoked a two-component stellar wind with a bipolar structure. In the polar region a fast radiation-driven CAK-type wind, similar to those of normal OB supergiants, prevails. This component exhibits velocities of 1000–2000 km s<sup>-1</sup>. In the equatorial region a slow, cool, and dense wind is present, with outflow velocities on the order of 100 km s<sup>-1</sup>. The equatorial outflow is concentrated in a disk-like configuration, similar to models for classical Be stars. However, in classical Be stars the polar wind is significantly less pronounced due to these stars’ lower luminosities. Likewise, supergiants are closer to the Eddington limit. They have values of $`\mathrm{\Gamma }=1-g_{\mathrm{eff}}/g_{\mathrm{grav}}\approx 0.3`$–$`0.5`$. Therefore even moderate rotation of 100–200 km s<sup>-1</sup> is near the break-up velocity (Zickgraf et al. 1996). In fact there is an indication of fast rotation in at least one B\[e\] supergiant, namely R 50 in the SMC. This star rotates with a velocity on the order of $`v\mathrm{sin}i\sim 150`$ km s<sup>-1</sup> (cf. Fig. 2) and thus at $`>60`$% of its break-up velocity.
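To see how a moderate rotation speed can approach break-up for a supergiant near the Eddington limit, the critical velocity can be sketched for assumed parameters. The mass, radius, and $`\mathrm{\Gamma }`$ below are hypothetical illustration values, not measurements for R 50:

```python
import math

G = 6.674e-8       # gravitational constant, cgs
MSUN = 1.989e33    # solar mass, g
RSUN = 6.957e10    # solar radius, cm

def v_breakup(M, R, gamma):
    """Break-up speed (km/s) at the equator, with gravity reduced
    by the Eddington factor gamma: v_crit = sqrt(G M (1 - gamma) / R)."""
    return math.sqrt(G * M * (1.0 - gamma) / R) / 1.0e5

# Hypothetical sgB[e] parameters (assumed for illustration only)
vcrit = v_breakup(M=20.0 * MSUN, R=30.0 * RSUN, gamma=0.4)
fraction = 150.0 / vcrit   # v sin i = 150 km/s relative to break-up
```

With these assumed numbers the critical speed is roughly 280 km s<sup>-1</sup>, so 150 km s<sup>-1</sup> is already about half of break-up; the radiative reduction of the effective gravity is what makes modest rotation dynamically important.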
Linear polarization measurements support the assumption of non-spherical circumstellar environments for B\[e\] supergiants. Magalhaes (1992) and Schulte-Ladbeck et al. (1993) detected intrinsic polarization in several B\[e\] supergiants. Schulte-Ladbeck & Clayton (1993) obtained spectropolarimetry of Hen S22 in the LMC and detected intrinsic polarization due to electron scattering in a circumstellar disk.
## 5. Summary
The group of B\[e\] stars is an inhomogeneous class of objects comprising intrinsically different classes characterized by permitted and forbidden low-excitation line emission and hot circumstellar dust. In contrast, classical Be stars are more or less normal B stars with photospheric absorption lines and superimposed Balmer emission lines, plus occasionally Fe ii emission lines. B\[e\] stars in many cases show no or only weak photospheric absorption lines. Forbidden lines, on the other hand, are not typical of classical Be stars. Likewise, circumstellar dust signatures are not found in classical Be stars but are a defining characteristic of B\[e\] stars. High stellar rotational velocities as in classical Be stars are found in a few cases of B\[e\] stars, but not much information is available, or else it is controversial, as in the case of HD 45677. A common characteristic of B\[e\] and classical Be stars is certainly the non-sphericity of their circumstellar envelopes. Both groups appear to possess disk-like envelopes, which for B\[e\] stars, but not for classical Be stars, are dense enough to allow dust formation.
##### Acknowledgments.
I would like to thank the organizers of the colloquium for generously granting travel funds. I also would like to thank B. Wolf for making the spectrum of R 50 available to me.
## References
Allen, D.A. 1973, MNRAS, 161, 145
Allen, D.A. 1974, MNRAS, 168, 1
Allen, D.A., Glass, I.S. 1975, MNRAS, 170, 579
Allen, D.A, Swings J.P. 1972, ApL, 10, 83
Allen, D.A, Swings J.P. 1976, A&A, 47, 293
Andrillat, Y., Jaschek, C., Jaschek, M. 1997, A&AS, 124, 441
Barbier, R., Swings, J.P. 1982. In: Be Stars. IAU Symp. 98, D. Reidel, Dordrecht, p. 103
Bjorkman, J.E. 1998. In: Dusty B\[e\] stars, eds. A.M. Hubert and C. Jaschek, Kluwer Academic Publishers, p. 189
Ciatti, F., D’Odorico, S., Mammano, A. 1974, A&A, 34, 181
Cohen , M., Bieging, J. H., Welch, W. J., Dreher, J. W. 1985, ApJ, 292, 249
Conti, P.S. 1976. In: Be and shell stars, IAU Symp. No. 70, ed. A. Slettebak, D. Reidel, Dordrecht, p.447
Coyne, G.V., Vrba, F.J. 1976, ApJ, 207, 790
de Winter, D., van den Ancker, M.E. 1997, A&AS, 121, 275
Esteban, C., Fernandez, M. 1998, MNRAS, 298, 185
Geisel, S.L. 1970, ApJ, 161, L105
Grady C.A., Bjorkman, K.S., Shepherd, D., et al. 1993, ApJ, 415, L39
Grinin V.P., Kisselev, N.N., Minikulov, N.K.H. 1989, SvAL, 15, 1028
Grinin V.P., Kiselev N.N., Chernova G.P., et al. 1991, Ap&SS, 186, 283
Gummersbach C.A., Zickgraf F.-J., Wolf B. 1995, A&A, 302, 409
Houziaux L. 1960, Publ. de l’Obs. de Haute-Provence, Vol. 5, no. 28
Israelian, G., Friedjung, M., Graham, J., et al. 1996, A& A, 311, 643
Jaschek C., Andrillat, Y. 1998, A&AS, 128, 475
Lamers, H.J.G.L.M., Zickgraf, F.-J., de Winter, D., et al. 1998, A&A, 340, 117
Lopes, D.F., Daminelli, Neto A., de Freitas Pacheco, J.A. 1992, A&A, 261, 482
Magalhaes A.M. 1992, ApJ, 398, 286
McGregor P.J., Hyland A.R., Hillier D.J. 1988a, ApJ, 324, 1071
McGregor P.J., Hillier D.J., Hyland A.R. 1988b, ApJ, 334, 639
Oudmaijer R.D., Proga D., Drew J.E. de Winter D. 1998, MNRAS, 300, 170
Oudmaijer, R., Drew, J. 1999, MNRAS, 305, 160
Palla, F., Stahler, S.W. 1993, ApJ, 418, 414
Pogodin M.A. 1997, A&A, 317, 185
Schulte-Ladbeck R.L., Shepherd D.S., Nordsieck K.H. et al. 1992, ApJ, 401, L105
Schulte-Ladbeck R.L., Clayton G.C, 1993, AJ, 106, 790
Schulte-Ladbeck R.L., Clayton G.C., Leitherer C., et al. 1993, SSRv, 66, 193
Swings, J.P. 1973, A&A, 26, 443
Swings, J.P. 1981, A&A, 98, 112
Swings J.P., Allen, D.A. 1971, ApJ, 167, L41
Viotti, R. 1976, ApJ 204, 293
Wackerling, L.R. 1970, Mem. R. astr. Soc., 73, 153
Winkler, H., Wolf B. 1989, A&A, 219, 151
Wolf B., Stahl O. 1985, A&A, 148, 412
Zickgraf, F.-J. 1998. Dusty B\[e\] stars, eds. A.M. Hubert and C. Jaschek, Kluwer Academic Publishers, p. 1
Zickgraf, F.-J., Schulte-Ladbeck, R.E. 1989, A&A, 214, 274
Zickgraf, F.-J., Wolf, B., Stahl, O., Leitherer, C., Klare, G. 1985, A&A, 143, 421
Zickgraf, F.-J., Wolf, B., Stahl, O., Leitherer, C., Appenzeller, I. 1986, A&A, 163, 119
Zickgraf, F.-J., Wolf, B., Stahl, O., Humphreys, R.M. 1989, A&A, 220, 206
Zickgraf, F.-J., Stahl, O., Wolf, B. 1992, A&A, 260, 205
Zickgraf, F.-J., Humphreys R.M., Lamers H.J.G.L.M., Smolinski, J., Wolf, B., Stahl, O. 1996, A&A, 315, 510
Zorec, J. 1998. Dusty B\[e\] stars, eds. A.M. Hubert and C. Jaschek, Kluwer Academic Publishers, p. 27
## Discussion
Harmanec: 1. What is known about the variability of B\[e\] stars? 2. How did you derive the luminosities of B\[e\] stars used to plot particular stars into the HR diagram? Cannot these luminosities and also $`v\mathrm{sin}i`$ values refer to optically thick inner parts of their envelopes (i.e. pseudophotospheres) rather than genuine photospheres?
Zickgraf: 1. For several galactic B\[e\] stars variability has been observed. The B\[e\] supergiants in the Magellanic Clouds, however, in general do not show pronounced variations. An exception is R 4 which is a kind of B\[e\]/LBV-type star. 2. This question should be answered by J. Zorec who derived the stellar parameters.
Zorec: The key parameter for luminosity estimation is the stellar distance. The method used to derive the distance of some stars with the B\[e\] phenomenon is published in the proceedings of the “B\[e\] stars” workshop (eds. A.M. Hubert and C. Jaschek). The basic assumptions I made are: (a) The spectrum of the star underneath the circumstellar gas and dust envelope, determined using either the BCD spectrophotometry or excitation arguments for the observed intensity of visible emission lines, is assumed to correspond to a “normal-like” stellar energy distribution. (b) There is an emission component in the visible continuum spectrum resembling the one observed in classical Be stars. The amount of this emission, and the intrinsic reddening that is associated with it, are estimated from the second component of the Balmer discontinuity, as in classical Be stars. (c) I took into account the UV-visible dust absorption due to the circumstellar dust envelope and the corresponding re-emission of energy in the far-IR. (d) The interstellar absorption as a function of the distance in the direction of the star was determined as carefully as possible. Then, from an iterative procedure to get a description of the amount of energy absorbed and re-emitted by the gas-dust circumstellar envelope, which is also consistent with the excitation produced by the underlying object, it is possible to obtain an estimate of the circumstellar dust $`E(B-V)`$ component as well as the interstellar $`E(B-V)`$ and consequently the stellar distance.
The energy integrated over the whole spectral range is a quantity which is treated in each iteration step. So, when you stop the iteration you also get the right apparent bolometric luminosity which can be straightforwardly transformed into an absolute bolometric luminosity. The method produced distances which are quite comparable with the distances obtained from HIPPARCOS parallaxes, when they existed.
Najarro: In your color-color viewgraph there seems to be a clear separation between emission line stars and B\[e\]. I was wondering if some of the B\[e\]s on the lower edge of your diagram could just be LBV like stars (e.g. HD316285 and R4) with such dense winds that the bound-free and free-free emission from the wind could simulate the B\[e\] IR excess. A good way to test it would be to overplot the color-color values for W-R stars. Have you tested that?
Zickgraf: No, I have not compared the IR colours of B\[e\] supergiants with those of W-R stars. However, it seems difficult to produce such excesses with (f-f)-(f-b) emission as observed in the B\[e\] region of the $`(J-H)`$ vs. $`(H-K)`$ diagram.
# Evidence for Multiple Mergers among Ultraluminous IR Galaxies (ULIRGs): Remnants of Compact Groups?
(Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract No. NAS5-26555.)
## 1. Introduction
In the 1980’s, two samples of galaxies were identified that subsequently became the subjects of vigorous research activity. One of these was the sample of compact groups identified by Hickson (1982; see Hickson 1997 for a review; hereafter, we call these CGs). The other was the sample of ultraluminous IR galaxies (ULIRGs) identified in the IRAS all-sky survey (Sanders et al. 1988; see Sanders & Mirabel 1996 for a review). The high IR luminosity ($`L`$\[8–1000$`\mu `$m\] $`>10^{12}L_{\odot }`$) of ULIRGs is commonly thought to arise from dust absorption and IR re-emission of the intense but obscured starburst+AGN radiation field. While it was known early on that ULIRGs nearly always show evidence for interactions (collisions/mergers), many investigators have debated the interaction fraction among ULIRGs. Published values range from ∼100% (Duc, Mirabel, & Maza 1997; Borne et al. 1999a, 1999b) down to ∼50% (Auriere et al. 1996). Under the assumption that mergers of gas-rich galaxies stimulate the ULIRG phase, several theoretical studies have modeled the dynamical events leading to the ultra-starburst event (Mihos & Hernquist 1996, and references therein; hereafter MH). In spite of this intense research activity, scant attention has been given to identifying the complete set of progenitor systems that may produce the ULIRGs seen in the local universe. While there are numerous examples of merger remnants and strongly interacting pairs of galaxies at low redshift, there is only one bona fide ULIRG with $`cz<10^4`$ km s<sup>-1</sup> (Arp 220). Hence it is unreasonable to believe that merging pairs of spirals alone can account for the dynamical diversity of the ULIRG population. There must be some other class of progenitors. As hypothesized by Xia et al. (1999), a possible progenitor sample is the set of CGs.
We undertake, in §2, a comparative analysis of ULIRG and CG properties in the light of several new results. We then present a short description of our observations in §3, our sample of multiple–merger candidates in §4, and a discussion of a possible CG–ULIRG connection in §5.
## 2. ULIRGs and Compact Groups: Common Features
One of the remarkable attributes of ULIRGs is their unique “once-in-a-galactic-lifetime” IR-ultraluminous status (MH). A galaxy may enjoy this status only once since the very property that defines the high-IR luminosity phase (i.e., the intense dust-absorbed and re-radiated radiation field) will likely blow out the dust from the galaxy. Furthermore, the intense starburst phase that characterizes the overwhelming majority of ULIRGs (Genzel et al. 1998) will likely consume the available gas supply or otherwise render the gas unavailable for future star formation (MH). These physical processes consequently prevent the onset of a subsequent dust-obscured super-starburst phase. The ULIRG phase is thus transitory, with a somewhat uncertain life span (duty cycle) for detectability and classification as a ULIRG. The duration of the phase is probably ∼$`10^8`$ yr (Devriendt, Guiderdoni, & Sadat 1999). As a result, ULIRGs are identified at a special phase in their dynamical history.
CGs may also be caught in a special state. Their dynamical timescales were originally considered to be quite short (0.01–0.1 $`H_0^{-1}`$), implying a strong merging instability (Barnes 1985, 1989; Mamon 1987). Subsequent simulations that embedded the CG within a common massive halo significantly increased the merger timescale to ∼$`H_0^{-1}`$ (Governato, Tozzi, & Cavaliere 1996; Athanassoula, Makino, & Bosma 1997). The common massive halo is consistent with current hierarchical merging scenarios (Kolatt et al. 1999). If the actual merging timescales are somewhere between these extremes, then CGs (like ULIRGs) are also transitory and will ultimately evolve out of the CG sample through the merger and coalescence of their constituent group members, as indicated in the numerical investigations of multiple mergers within a CG setting by Weil & Hernquist (1994, 1996).
Xia et al. (1999) suggested that ULIRGs are the dynamical descendants of CGs. The implied multiple–merger scenario was investigated in the case of Arp 220 by Taniguchi & Shioya (1998), who proposed multiple mergers as the origin for most ULIRGs. To test whether “evolved CGs” become ULIRGs, we compare the relative space densities and dynamical ages of ULIRGs and CGs. For a redshift survey-selected CG sample, Barton et al. (1996) give $`\mathrm{\Phi }=1.4\times 10^{-4}h^3=4.8\times 10^{-5}`$ Mpc<sup>-3</sup>. (For this paper, we assume $`H_0`$ = 100$`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`h`$=0.7, and $`q_o=\frac{1}{2}`$.) For the IRAS 1-Jy ULIRG sample, Kim & Sanders (1998) give $`\mathrm{\Phi }=1.8\times 10^{-7}(h/0.75)^3=1.5\times 10^{-7}`$ Mpc<sup>-3</sup> (summed over all luminosity bins), with strong evolution proportional to $`(1+z)^n`$, where $`n=7.6\pm 3.2`$. Taken literally, the ratio of space densities is $`\mathrm{\Phi }`$(CGs):$`\mathrm{\Phi }`$(ULIRGs) = 32:1, similar to the ratio of life spans for the two populations (few Gyr : 100 Myr), assuming the longer CG dynamical age. This is therefore consistent with the notion that ULIRGs evolve out of the CG population. Of course, more detailed plausibility arguments are required to validate such a notion, such as comparing the gas content, X-ray and IR emission properties, galaxy morphological mix, total mass, and the wider environment of CGs to the corresponding properties of ULIRGs. Further investigations along these lines would likely be very illuminating.
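As a quick arithmetic aid (my own sketch, not part of the paper), the two quoted space densities can be converted to the adopted $`h`$ = 0.7. One assumption is labeled in the code: the Kim & Sanders scaling is read as $`(h/0.75)^3`$, i.e. their density quoted for $`h`$ = 0.75.

```python
# Convert the quoted space densities to the paper's adopted h = 0.7.
# Assumption (mine): the Kim & Sanders scaling is (h/0.75)^3, i.e.
# their density was quoted for h = 0.75.

h = 0.7

# Compact groups (Barton et al. 1996): Phi = 1.4e-4 h^3 Mpc^-3
phi_cg = 1.4e-4 * h**3

# 1-Jy ULIRGs (Kim & Sanders 1998): Phi = 1.8e-7 (h/0.75)^3 Mpc^-3
phi_ulirg = 1.8e-7 * (h / 0.75)**3

print(f"Phi(CG)    = {phi_cg:.2e} Mpc^-3")     # about 4.8e-05
print(f"Phi(ULIRG) = {phi_ulirg:.2e} Mpc^-3")  # about 1.5e-07
```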
Given the potentially short life spans of both ULIRGs and CGs, it is curious that we see very many of either. The best interpretation of this is that these are the surviving members (the tail of the distribution) of evolving populations. CGs may be continuously replenished through the dynamical evolution of loose groups (Diaferio, Geller, & Ramella 1994; Ramella et al. 1994), with the ones that we see today being the tail of the hierarchical evolution of large-scale structure, which favors the formation of small-scale structures early in the Universe. Similarly, what few ULIRGs we see at low redshift are likely the tail of a previously rich distribution of ULIRGs, as evidenced by their strong redshift evolution (Kim & Sanders 1998). Given the above discussions, it is thus not surprising that: (a) a significant population of ultraluminous dusty starbursting galaxies at high redshifts ($`z`$∼1–5, peaking at $`z<3`$) has been identified as counterparts of the Submillimeter Common-User Bolometer Array (SCUBA) submm sources and as the contributors to the IR background (Dwek et al. 1998; Hughes et al. 1998; Barger et al. 1998; Smail et al. 1998; Blain et al. 1999; Trentham, Blain, & Goldader 1999); and that (b) this same epoch ($`z`$∼3) witnesses the hierarchical merging of dense configurations of sub-halo galactic-scale fragments within massive halos (Somerville, Primack, & Faber 1998; Kolatt et al. 1999). Such fragments within massive halos are consistent with recent theoretical models of CGs (Athanassoula et al. 1997), and the ultraluminous dusty starburst SCUBA sources are consistent with ULIRGs — the connection between ULIRGs and CGs thus seems nearly certain in the cosmological setting.
## 3. Hubble Space Telescope Imaging Observations
Hubble Space Telescope (HST) images were obtained with the Wide Field Planetary Camera 2 (WFPC2) camera in the F814W $`I`$-band filter for a large sample of ULIRGs (Borne et al. 1997a, 1997b, 1999a, 1999b). For each target in our survey, we obtained two 400 s images in order to remove the effects of cosmic-ray radiation events in the CCDs. The angular scale is 0.0996<sup>′′</sup> per pixel (Trauger et al. 1994). Our comprehensive WFPC2 Snapshot Atlas paper (Borne et al. 2000, in preparation) will provide a thorough description of each of the more than 120 ULIRGs in our HST survey.
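The role of the paired exposures can be illustrated with a toy two-frame cosmic-ray rejection (my sketch of the idea, not the actual WFPC2 pipeline): wherever the two frames disagree by far more than the expected noise, the lower value is kept; otherwise the frames are averaged.

```python
import random

# Toy two-frame cosmic-ray rejection (my sketch of the idea behind
# taking two exposures of each field): where the frames disagree by
# much more than the noise, keep the lower value; otherwise average.

random.seed(1)
npix, sky, noise = 1000, 100.0, 3.0

def expose(cr_hits):
    """Simulated exposure: sky + noise, with cosmic-ray spikes."""
    f = [random.gauss(sky, noise) for _ in range(npix)]
    for i in cr_hits:
        f[i] += 500.0              # a cosmic ray dumps a lot of charge
    return f

a = expose(cr_hits=[10, 500, 900])
b = expose(cr_hits=[250, 700])

thresh = 5.0 * noise * 2.0 ** 0.5  # 5 sigma on the frame difference
clean = [min(x, y) if abs(x - y) > thresh else 0.5 * (x + y)
         for x, y in zip(a, b)]

worst = max(abs(v - sky) for v in clean)
print(f"largest deviation from sky after rejection: {worst:.1f}")
```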
## 4. Multiple-Merger Candidates
With the high angular resolution (∼0.1–0.2<sup>′′</sup>) of HST, some ULIRGs that were previously classified as “non-interacting” now show secondary nuclei at their centers and additional tidal features (see examples in Borne et al. 1999a, 1999b). It thus appears that the fraction of ULIRGs showing evidence for interaction is very nearly 100%. Our HST images indicate in some cases that the mergers are well developed with single nuclei and full coalescence, while others show clear evidence for two or more nuclei, and the rest of the sample can best be described as compact groupings of ≥2 distinct galaxies. It is not obvious from this that there is a well-defined point during such interactions at which the ULIRG phase develops, nor is it clear what the duration of the ultraluminous phase should be. One possible explanation for this dynamical diversity is the multiple–merger model. In this scenario, the existence of double active galactic nuclei/starburst nuclei is taken as evidence of more than one merger, following the creation of the current starburst nuclei from a prior set of mergers (Taniguchi & Shioya 1998) — the currently observed merger would be at least the third merger in the evolutionary sequence. Figure 1 presents images of nine ULIRGs from our HST sample that appear to have evolved from multiple mergers. The evidence for this includes the presence of either more than two distinct and well separated remnant nuclei or more than two component galaxies, often with an unusually complex system of tidal tails and loops that suggest multiple dynamical origins of the tidal features, as seen in the simulations of Weil & Hernquist (1994, 1996). A connection between the ULIRG and CG populations is therefore supported by these HST observations. Table 1 lists our multiple-merger ULIRG candidates.
For a nearly complete subsample of 99 ULIRGs in the redshift range $`0.05<z<0.20`$, we identify at least 20 either as on-going mergers in a group ($`N_{\mathrm{gal}}>2`$) or as remnants of multiple (≥2) mergers.
The most serious concern with the multiple-merger hypothesis is the ubiquitous presence of multiple condensations (in optical images of ULIRGs) that emerge through a complex dust obscuration pattern. To minimize this effect, cases with multiple knots in the core were specifically excluded. Only those cases that clearly reveal separate optically luminous galactic components were selected. In addition, objects were selected if they had very complex tidal features (several tails), which may indicate multiple mergers (independent of any core condensations or nuclear dust obscuration). These selection criteria reduced the fraction of multiple-merger candidates to only 20% of ULIRGs. If questionable cases with multiple knots were also included, then the “multiple-merger fraction” would be more like 80%. It is imperative that radio and IR imaging be obtained in order to test the multiple-merger classification for the objects in our sample (Table 1).
Another “feature” of some ULIRGs is that they are in that category simply because several galaxies appear in the large IRAS beam and thus conspire to produce a high IR flux. No single object or interacting pair in those systems is really a ULIRG. It is expected that some higher-redshift CGs (compared to local CGs) would occupy one IRAS beam and hence be classified as ULIRGs. These particular systems may be “once and future ULIRGs”.
## 5. Discussion: The CG–ULIRG Connection
We have used the HST to study a large sample of ULIRGs. The images are consistent with a multiple–merger origin for a significant fraction of the sample, whose rich variety of morphologies almost certainly relates to diverse interaction histories. However, morphology alone cannot confirm the hypothesis. We are therefore conducting a detailed photometric and surface brightness analysis of nearly 30 ULIRGs with both I-band and H-band (HST NICMOS) imaging to test whether the nuclei that we see are in fact galactic nuclei or super-starburst knots (Colina et al. 2000, in preparation; Bushouse et al. 2000, in preparation). Our preliminary results confirm that the observed cores are galactic nuclei.
ULIRGs and CGs may share a common evolutionary path. We find two morphological classes among our ULIRGs that are consistent with a multiple–merger origin and hence support the hypothesis that CGs are the progenitors for some ULIRGs. These classes are: (1) ULIRGs with multiple remnant nuclei in their core, sometimes accompanied by a complex system of tidal tails; and (2) ULIRGs that are in fact dense groupings of interacting (eventually merging) galaxies. These classes are assigned in Table 1. Borne et al. (1999b) find an equal likelihood for a ULIRG to be a recent merger (single) as to be involved in an on-going collision (multiple) and find very little variation in the mean $`L_{\mathrm{IR}}`$ between these two categories. This would be consistent with a series of mergers taking place in a typical ULIRG, producing a sustained super-starburst — the timing of each burst is strongly determined by the orbital orientation (prograde or retrograde) and the internal structure of the merging galaxies (bulge or no-bulge), as investigated in detail by MH. Consequently, deducing the phase of interaction for individual ULIRGs will be quite complicated: Which merger are we now witnessing? Is it the first, the second, or the N-th? While our observations conclusively show that some ULIRGs are the result of multiple mergers, more observational and theoretical investigations are required to validate the multiple–merger model for the sample, to estimate better the multiple–merger fraction, to verify a CG–ULIRG connection, and to elucidate the dynamical origin, state, and fate of these remarkable objects.
Support for this work was provided by NASA through grant number GO-06346.01-95A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. K.D.B. thanks Raytheon for providing financial support during his Sabbatical Leave and thanks the STScI for sponsoring his Sabbatical Visit. We thank our STScI program coordinator Andy Lubenow and contact scientist Keith Noll for assistance in implementing the HST program, and we thank the referee for helpful comments.
## 1 The boundary condition at a liquid-solid interface
At a macroscopic level, it is well known that the relative velocity of a fluid with respect to the solid vanishes at a liquid-solid interface. This is the ”no-slip” boundary condition, which although very general does not have a microscopic justification. At a microscopic level, it is however necessary to take into account a possible ”slip” of the liquid on the solid surface. The amount of slip is quantified by introducing a slipping length in the boundary condition at the solid interface, which in general reads
$$\frac{\partial v_t}{\partial z}|_{z=z_w}=\frac{1}{\delta }v_t|_{z=z_w},$$
(1)
where $`v_t`$ is the tangential velocity at the boundary, and $`z_w`$ is the position of the boundary (which is assumed here to be a plane perpendicular to the $`z`$ axis). The ”slip” length $`\delta `$ which appears in this equation can be interpreted as the length one has to extrapolate the velocity field of the fluid into the solid to obtain a vanishing value. Equation (1) can also be interpreted as expressing the continuity of the stress (or momentum flux) at the boundary. At the boundary, the viscous stress $`\eta \frac{v_t}{z}`$ in the fluid is then equal to a fluid friction stress between the solid and the liquid, $`\kappa v_t`$. $`\delta `$ is the ratio of the viscosity $`\eta `$ to the friction constant $`\kappa `$. The usual ”no-slip” boundary condition corresponds to $`\kappa =\mathrm{}`$, while at a free boundary $`\kappa =0`$.
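As a simple illustration of how $`\delta `$ enters a concrete flow problem (an example of mine, not taken from this paper), consider planar Couette flow between two identical partial-slip walls: a fixed wall at $`z=0`$ and a wall moving at speed $`U`$ at $`z=H`$. The linear profile $`v(z)=U(z+\delta )/(H+2\delta )`$ satisfies condition (1) at both walls, and the wall shear stress is reduced by the factor $`H/(H+2\delta )`$ relative to the no-slip case:

```python
# Planar Couette flow between identical partial-slip walls
# (illustrative example, not from the paper): fixed wall at z = 0,
# wall moving at speed U at z = H, both obeying Eq. (1).

def couette_slip(z, U=1.0, H=1.0, delta=0.0):
    """Velocity profile v(z) = U (z + delta) / (H + 2 delta)."""
    return U * (z + delta) / (H + 2.0 * delta)

U, H = 1.0, 1.0
for delta in (0.0, 0.1, 1.0):
    shear = U / (H + 2.0 * delta)        # dv/dz, uniform for Couette flow
    v0 = couette_slip(0.0, U, H, delta)  # slip velocity at the fixed wall
    if delta > 0:                        # check Eq. (1): dv/dz = v(0)/delta
        assert abs(shear - v0 / delta) < 1e-12
    print(f"delta = {delta:3.1f}: wall shear reduced to {shear * H / U:.3f} of no-slip")
```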
In previous work it was shown that even for surfaces which are smooth at the atomic scale, slip is usually a small effect. A small (atomic) corrugation of the wall is enough to produce a ”no-slip” boundary condition. This accounts for the findings of experiments performed with the surface force apparatus . However, this result appears to break down when the solid surface is strongly nonwetting for the liquid, i.e. when a liquid drop on this solid substrate has a large contact angle. In that case, it appears that the slip length $`\delta `$ can become much larger than the molecular size. Physically, this can be traced back to the fact that the liquid does not ”want” to be in contact with the solid. Hence a microscopically thin depletion layer forms between the bulk fluid and the solid, making the momentum transfer much less efficient and effectively decoupling the fluid from the substrate. In the following we discuss how the diffusion of a molecule will be affected when a thin liquid film is confined between two identical parallel plates that are characterized by a ”partial slip” boundary condition such as (1). The quantity we focus on is the relative change of the diffusion constant parallel to the plates as a function of the distance $`h`$ between the plates,
$$\mathrm{\Delta }=\frac{D_{\parallel }(h)-D_{bulk}}{D_{bulk}}.$$
(2)
We will be interested in cases where the confinement is moderate (typically $`h`$ is larger than 10 molecular sizes), so that the film is still in a clearly fluid state.
## 2 Confinement effects on diffusion.
### 2.1 Qualitative discussion.
In this section, we briefly describe, at a qualitative level, how the boundary conditions can influence the diffusion of a molecule confined in a pore. Two complementary points of view are possible, and yield essentially identical results . The first one is a microscopic, ”mode-coupling” type of approach. The idea is the following. Very generally, the diffusion constant of a tagged particle can be written as
$$D=\int _0^{\infty }dt<\vec{v}(t)\cdot \vec{v}(0)>$$
(3)
In the bulk, two contributions to the velocity autocorrelation function $`<\vec{v}(t)\cdot \vec{v}(0)>`$ of the tagged particle can be isolated . A short time part describes the ”rattling” motion in the cage formed by the neighbours. A more subtle contribution, which appears for long times, is related to the so-called ”backflow” effect. The idea is that the initial momentum of the particle is transferred at intermediate times to the long wavelength, hydrodynamic motion of the fluid. According to the Stokes equations, this momentum diffuses away from the tagged particle. However, the properties of the diffusion equation imply that a fraction of this momentum eventually returns to the origin and ”pushes” the tagged particle. Let us now consider how this mechanism is modified by confinement. First of all, the ”rattling” contribution is not expected to change, since it is governed by the local environment. The hydrodynamic backflow, on the other hand, will be strongly modified. If the confining boundaries correspond to a ”no-slip” situation, they will absorb the incoming momentum. In that case the amount of backflow will be reduced, and $`\mathrm{\Delta }`$ will be negative. On the other hand, in the case of perfect slip the momentum will be reflected at the boundary, and the backflow effect will be enhanced. This argument can be made quantitative in both cases and has in fact been used to interpret experimental results on free standing liquid crystal films . However, it turns out that a quantitative calculation for the case of partial slip is difficult.
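Relation (3) can be checked on a toy model. The sketch below (my construction, not the mode-coupling calculation discussed here) simulates a Langevin (Ornstein-Uhlenbeck) velocity, for which the autocorrelation is $`(k_BT/m)e^{-\gamma t}`$ and the Green-Kubo integral therefore gives $`D=k_BT/(m\gamma )`$; integrating the measured autocorrelation recovers this value.

```python
import math
import random

# Toy check of Eq. (3) on a Langevin (Ornstein-Uhlenbeck) velocity,
# for which <v(t)v(0)> = (kT/m) exp(-gamma t) and hence D = kT/(m gamma).
# Illustrative sketch only -- not the mode-coupling calculation.

random.seed(12345)
gamma, kTm = 1.0, 1.0          # friction rate and kT/m (reduced units)
dt, nstep, nwalk = 0.02, 500, 3000

trajs = []
for _ in range(nwalk):
    v = random.gauss(0.0, math.sqrt(kTm))   # start from equilibrium
    traj = [v]
    for _ in range(nstep):
        v += -gamma * v * dt + math.sqrt(2.0 * gamma * kTm * dt) * random.gauss(0.0, 1.0)
        traj.append(v)
    trajs.append(traj)

# velocity autocorrelation, averaged over the independent trajectories
vacf = [sum(t[0] * t[k] for t in trajs) / nwalk for k in range(nstep + 1)]

# Green-Kubo integral (trapezoidal rule) up to t = nstep*dt = 10
D = dt * (0.5 * vacf[0] + sum(vacf[1:-1]) + 0.5 * vacf[-1])
print(f"D from Green-Kubo: {D:.3f} (exact: {kTm / gamma:.3f})")
```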
An alternative, more macroscopic line of thought consists in computing the mobility $`\mu _{\parallel }(z)`$ of a particle at a distance $`z`$ from a solid wall, with the boundary condition (1), using macroscopic hydrodynamics. The diffusion constant in a fluid slab is then obtained using the Einstein relation between diffusion and mobility, averaged over the thickness $`h`$ of the slab. For the no-slip or perfect slip cases, such a calculation was shown to yield results identical to those obtained within the mode coupling approach. For the general ”partial slip” boundary conditions, it offers the advantage of being tractable analytically. The method and results are summarized in the next section.
### 2.2 Hydrodynamic estimate of the diffusion constant.
Consider a spherical particle of radius $`R`$ moving past a solid boundary characterized by equation 1, with a constant velocity parallel to the boundary. The mobility is obtained by calculating the viscous drag on the particle, which implies solving the Stokes equation for the velocity and pressure fields. This can be achieved using the method of reflections . In this method, the velocity field at zeroth order corresponds to the one obtained for an infinite fluid, and therefore obeys the correct boundary condition on the particle. A first correction term is introduced to obtain the correct boundary condition on the wall, therefore violating the no-slip boundary condition on the particle. A second correction is introduced to correct again the boundary condition on the particle. Assuming convergence of the series, one eventually ends up with a velocity field that has the correct behaviour both at the particle surface and at the solid boundary. The force can then be calculated from the pressure tensor on the particle surface. The details of the calculation are described in . Here we only quote the result, which gives the force on a particle at an altitude $`z`$ from the wall, with velocity $`\vec{U}`$ as
$$\vec{F}=\frac{6\pi \eta R\vec{U}}{1-\frac{9}{16}\frac{R}{z}C\left[\frac{\delta }{z}\right]}$$
(4)
where $`\mathrm{C}\left[\frac{1}{y}\right]=\frac{1}{6}y^2-\frac{1}{2}y-\frac{2}{3}+\left(\frac{1}{6}y^3+\frac{2}{3}y^2+\frac{2}{3}y\right)\mathrm{E}(y)+\frac{8}{3}y\mathrm{E}\left(\frac{y}{2}\right)`$ and $`E(y)=e^y\mathrm{Ei}(1,y)`$, with $`\mathrm{Ei}(1,y)`$ the exponential integral function. The altitude dependent mobility is then averaged over the channel to compute the effective diffusion constant. When applied to the extreme cases of no-slip or perfect slip, this formula yields a relative decrease or increase, respectively, of the diffusion constant, in accordance with the qualitative analysis made in the previous section. The results for $`\mathrm{\Delta }`$ as a function of $`h/R`$ and $`\delta /R`$ are summarized in figure 1. An increase in the diffusion constant can be observed as soon as $`\delta `$ becomes larger than the pore size.
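The averaging step can be sketched numerically. The snippet below is a deliberate simplification of mine, not the paper's full partial-slip calculation: it takes the no-slip limit $`C=1`$, superposes the leading-order single-wall corrections of equation (4) for the two walls, and averages the resulting mobility, mu(z)/mu_bulk = 1 - (9/16)R(1/z + 1/(h-z)), across the gap. The resulting $`\mathrm{\Delta }`$ is negative (reduced diffusion) and weakens as the slit widens, consistent with the qualitative discussion above.

```python
# Sketch of the channel-averaging step in the no-slip limit C = 1,
# superposing the leading-order single-wall corrections of Eq. (4).
# My simplification -- not the paper's full partial-slip calculation.

def delta_rel(h, R=1.0, n=2000):
    """Relative change Delta of D_parallel for a sphere of radius R in a
    slit of width h with two no-slip walls, from the averaged mobility."""
    zmin, zmax = R, h - R              # sphere centre stays R from each wall
    dz = (zmax - zmin) / n
    acc = 0.0
    for i in range(n):
        z = zmin + (i + 0.5) * dz      # midpoint rule
        mu = 1.0 - (9.0 / 16.0) * R * (1.0 / z + 1.0 / (h - z))  # mu(z)/mu_bulk
        acc += max(mu, 0.0)            # clip where the expansion breaks down
    return acc / n - 1.0               # Delta = <mu>/mu_bulk - 1

for h in (10.0, 20.0, 40.0):
    print(f"h = {h:4.0f} R : Delta = {delta_rel(h):+.3f}")
```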
### 2.3 Molecular dynamics simulations.
In order to confirm qualitatively the general trend predicted in the above section, we present in figure 2 the results for the diffusion constant in a Lennard-Jones fluid confined between two solid walls. The two cases correspond to a wetting situation with a zero slipping length, and to a nonwetting one with a large slipping length. We stress that while varying the pore width $`h`$, some care has been taken to keep the density of the fluid at the center of the pore fixed to its “bulk” value, so that variations of the diffusion constant can only originate from confinement contributions.
In the nonwetting case, the increase of the diffusion constant is clearly visible as soon as the pore size becomes smaller than the slip length. Each of these curves corresponds to a constant $`\delta `$ cut of the surface in figure 1.
## 3 Conclusion
Both hydrodynamic arguments and microscopic simulations indicate that the diffusion of a tagged molecule in a confined geometry will indirectly be correlated to the wetting properties of the fluid, through the hydrodynamic boundary condition at the interface. In particular, an increase of the diffusion constant with confinement is predicted in the ”nonwetting” case. We emphasize, however, that the mechanism discussed in this paper is quite generic, and does not consider the possibility of specific interactions with the substrate. Care should also be taken in measuring the diffusion constant in the bulk and in the confined medium under similar thermodynamic conditions (pressure), in order to make a sensible comparison.
# Mid-IR Imaging of AGB Circumstellar Envelopes
## 1. Looking for asymmetrical progenitors of PN
Intermediate and low mass stars (1–8 $`M_{\odot }`$) are characterized, in the phase known as Asymptotic Giant Branch (AGB), by the formation of an optically opaque circumstellar envelope of gas and dust, which will later evolve into a Planetary Nebula. The detailed physical processes involved in this phenomenon are still uncertain, but there is growing evidence that they are connected to radial stellar oscillations and non-uniform density distributions (Lebertre & Winters 1998; Fleisher et al. 1992).
Recent observations at different wavelengths support the idea that these inhomogeneities can propagate in the circumstellar envelope, giving rise to structures with strong deviations from spherical symmetry. Clumpy structures in the dust forming regions of the C-rich AGB star IRC $`+`$10216 were found by near-IR masking and speckle interferometry at the Keck and SAO telescopes (Monnier et al. 1997, Weigelt et al. 1998). A sequence of detached dust shells was also found around this source by deep optical imaging (Mauron & Huggins 1999), suggesting a complex mass loss history similar to the one that characterized the more evolved post-AGB “Egg Nebula” (Sahai et al. 1998), or the O-rich Mira R Hya (Hashimoto et al. 1998). All these observations suggest the idea that the asymmetry observed in many PN already starts during the AGB, where it shapes the evolution of the circumstellar envelope towards the Planetary Nebula phase.
Mid-IR is the ideal spectral range to image the spatial distribution of dust in the circumstellar environment of AGB stars, and provides an effective diagnostic tool to derive the physical and chemical parameters of circumstellar dust (Marengo et al. 1999). The availability of a large sample of spatially resolved AGB envelopes in the mid-IR would be essential to improve our knowledge of the mass loss processes at the end of the AGB, in search of departures from spherical symmetry and the “steady mass loss” $`1/r^2`$ radial density profile of the stellar outflow.
## 2. Modelling the envelope emission
In most cases, the thermal radiation coming from AGB circumstellar envelopes is too faint to be detected around the bright AGB star at the center of the system. Furthermore, only the nearest sources are extended enough to be spatially resolved with the angular resolution provided by available IR telescopes.
For these reasons we have compiled our target list fitting all the available IRAS Low Resolution Spectra (LRS) of AGB sources with the public domain radiative transfer code DUSTY (Ivezić & Elitzur 1997). Each computed radial brightness profile was then transformed into a two-dimensional image, convolved with the instrumental PSF, and resampled into the MIRAC3 final image array. Gaussian noise was finally added to produce a peak S/N of ∼1,000, as expected for the real observations. Only sources showing a detectable excess emission above the instrumental PSF, in a minimum area of 6–8 arcsec in diameter, were selected for observations.
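The selection test can be illustrated with a one-dimensional toy (my simplification; the actual procedure is two-dimensional and uses the DUSTY model profiles): a model made of a point source plus a faint extended halo is convolved with a Gaussian PSF, Gaussian noise is added at a peak S/N of about 1,000, and pixels outside the PSF core where the profile exceeds a peak-matched PSF by several sigma count as a detected excess.

```python
import math
import random

# 1-D toy of the selection test (my simplification of the 2-D
# procedure): point source + faint extended halo, convolved with a
# Gaussian PSF, noise added at peak S/N ~ 1000; pixels outside the
# PSF core where the profile exceeds a peak-matched PSF by > 5 sigma
# count as detected extended emission.

random.seed(7)
n, sigma_psf = 101, 2.0
x0 = n // 2

def psf(r):
    return math.exp(-0.5 * (r / sigma_psf) ** 2)

def model(r):
    point = 1.0 if r == 0 else 0.0
    halo = 1e-2 / (1.0 + (r / 5.0) ** 2)   # faint extended component
    return point + halo

# brute-force convolution of the model with the PSF
img = [sum(model(abs(k - x0)) * psf(abs(i - k)) for k in range(n)) for i in range(n)]
ref = [psf(abs(i - x0)) * img[x0] for i in range(n)]   # peak-matched pure PSF

noise = max(img) / 1000.0                  # peak S/N of ~1000
obs = [v + random.gauss(0.0, noise) for v in img]

detected = [i for i in range(n)
            if abs(i - x0) > 3 * sigma_psf and obs[i] - ref[i] > 5 * noise]
print(f"extended excess detected in {len(detected)} pixels beyond 3 PSF widths")
```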
## 3. Diffraction limited imaging in the mid-IR
The circumstellar dust emission predicted by our models is typically characterized by a compact component, only partially resolved as an enlarged (in terms of the FWHM) PSF, plus a faint “halo” that can be separated from the “wings” of the PSF only when the S/N is of the order of ∼1,000, or larger. For a positive detection of these two components it is necessary to maximize the achievable angular resolution and sensitivity. To meet these requirements, we are using a technique based on the “fast readout” mode of the camera MIRAC3, that allows the acquisition of frames with very short integration times (0.1-0.2 sec), capable of “freezing” the atmospheric seeing.
The source is imaged with a standard nodding and chopping technique, in order to remove the background signal, and dithered on the array to obtain for each beam a number of images that can be of the order of ∼500. Each image is then rebinned on a sub-pixel grid to increase the PSF sampling, shifted and coadded.
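The rebin/shift/coadd step can be sketched in one dimension (an illustrative toy of mine; the real reduction is two-dimensional): dithered frames of a Gaussian source are placed on a 2x finer grid using their known offsets and then coadded, conserving total flux.

```python
import math

# 1-D toy of sub-pixel shift-and-add (my illustration): dithered
# frames of a Gaussian source are rebinned onto a 2x finer grid at
# their known offsets and coadded.

NPIX, SUB = 32, 2                # detector pixels, sub-pixel factor
sigma = 3.0 / 2.3548             # PSF with FWHM = 3 detector pixels

def frame(offset):
    """One detector frame: Gaussian source centred at NPIX/2 + offset."""
    c = NPIX / 2.0 + offset
    return [math.exp(-0.5 * ((i + 0.5 - c) / sigma) ** 2) for i in range(NPIX)]

offsets = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]   # dither pattern (pixels)

fine = [0.0] * (NPIX * SUB)
weight = [0.0] * (NPIX * SUB)
for off in offsets:
    f = frame(off)
    shift = int(round(off * SUB))             # undo the dither on the fine grid
    for i, val in enumerate(f):
        for s in range(SUB):                  # a coarse pixel covers SUB fine pixels
            j = i * SUB + s - shift
            if 0 <= j < NPIX * SUB:
                fine[j] += val / SUB          # spread flux uniformly
                weight[j] += 1.0 / SUB

coadd = [v / w if w > 0 else 0.0 for v, w in zip(fine, weight)]
peak = max(range(len(coadd)), key=coadd.__getitem__)
print(f"coadded peak at fine pixel {peak} (source centre maps to ~{NPIX})")
```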
To maximize the sampling of the dithered images without degrading the angular resolution, we have adopted the “drizzling” method developed by Fruchter & Hook (1998), whenever the number of single frames available for the coadding is sufficient to provide a uniform distribution of the dithered “drops” on the final image (typically 200-250 frames for each beam). As shown in Fig. 1, this technique allows us to increase the S/N while reducing the PSF FWHM, producing images that are virtually diffraction limited and largely independent of the atmospheric seeing.
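The rebin-shift-coadd loop can be sketched minimally as follows. This is our own illustration: true drizzling distributes each input “drop” with fractional-pixel overlap weights (Fruchter & Hook), which this nearest-sub-pixel version omits.

```python
import numpy as np

def shift_and_add(frames, shifts, upsample=4):
    """Coadd short-exposure frames on a finer sub-pixel grid.
    `shifts` are per-frame (dy, dx) offsets in original pixels,
    e.g. obtained by centroiding the star in each frame."""
    ny, nx = frames[0].shape
    out = np.zeros((ny * upsample, nx * upsample))
    for frame, (dy, dx) in zip(frames, shifts):
        # rebin each pixel onto an upsample x upsample block
        fine = np.kron(frame, np.ones((upsample, upsample)))
        # align to the common sub-pixel grid, then accumulate
        fine = np.roll(fine, (round(dy * upsample), round(dx * upsample)),
                       axis=(0, 1))
        out += fine
    return out / len(frames)
```

With hundreds of seeing-frozen frames per beam, the coadded image approaches the diffraction limit while the noise averages down.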
One of the most extended sources observed in the June 1999 run with MIRAC3 at the IRTF is the O-rich semiregular variable W Hya. We present here its 18 $`\mu `$m image, compared with the reference star $`\alpha `$ Boo (Fig. 2). Note the much larger FWHM of the AGB source, and its oval shape elongated in the N-S direction, compared to the more compact and symmetric image of the reference star. In the June 1999 run, a total of 10 AGB sources were observed at 8.8, 9.8, 11.7, 12.5 and 18 $`\mu `$m; the analysis of the collected data is currently in progress, and further observing runs are scheduled for the IRTF and MMT.
## References
Lebertre, T., Winters, J.M., 1998, A&A 334, 173
Fleisher, A.J., Gauger, A., Sedlmayr, E., 1992, A&A 266, 321
Fruchter, A.S., Hook, R.N., 1998, astro-ph/9808087
Marengo, M., Busso, M., Silvestro, G., Persi, P., Lagage, P.O., 1999, A&A 348, 501
Mauron, N., Higgins, P.J., 1999, A&A 349, 203
Monnier, J.D., Tuthill, P.G., Danchi, W.C., Haniff, C., AAS Meeting 191, 114.05
Sahai, R., Trauger, J.T., Watson, A.M., Stapelfeldt, K.R., Hester, J.J., Burrows, C.J., Ballister, G.E., Clarke, J.T., Crisp., D., Evans, R.W., Gallagher, J.S.III, Griffiths, R.E., Hoessel, J.G., Holtzman, J.A., Mould, J.R., Scowen, P.A., Westphal, J.A., 1998, ApJ 493, 301
Weigelt, G., Balega, Y., Blöcker T., Fleisher A.J., Osterbart R., Winters J.M., 1998, A&A 333, 51L |
## 1 Introduction
The discovery of the W and Z gauge bosons at the S$`p\overline{p}`$S in 1983 marked the beginning of direct electroweak measurements at a hadron machine. These measurements vindicated the tree level predictions of the Standard Model. The new generation of hadron collider machines now has data of such precision that the electroweak measurements are probing the quantum corrections to the Standard Model. The importance of these quantum corrections was recognised in the award of the 1999 Nobel Prize. These corrections are being tested by a wide variety of measurements ranging from atomic parity violation in caesium to precision measurements at the Z pole and above in $`e^+e^{-}`$ collisions. In this article, the latest experimental electroweak data from hadron machines are reviewed. I have taken a broad definition of a hadron machine to include the results from NuTeV ($`\nu `$N collisions) and HERA ($`ep`$ collisions) as well as the results from the Tevatron ($`p\overline{p}`$ collisions). This is not an exhaustive survey of all results, but a summary of the new results of the past year and in particular those results which have an influence on the indirect determination of the Higgs mass. This article will cover the direct determinations of the W boson and top quark masses from $`p\overline{p}`$ collisions at $`\sqrt{\mathrm{s}}`$ = 1.8 TeV from the two Tevatron experiments, CDF and DØ. These results are based on $`s`$-channel production of single W bosons and top quark pairs. The results presented here from the NuTeV and HERA experiments allow one to make complementary measurements and probe the electroweak interaction in the space-like domain up to large momentum transfers in the $`t`$-channel.
## 2 Data Samples
The first observation and measurements of the W boson were made at the CERN S$`p\overline{p}`$S by the UA1 and UA2 experiments. These measurements were based on modest event samples ($`\sim `$k events) and integrated luminosity (12 $`\mathrm{pb}^{-1}`$). Since that time the Tevatron and LEP2 experiments have recorded over 1 $`\mathrm{fb}^{-1}`$ of W data. The Tevatron experiments have the largest sample of W events : over 180,000 from a combined integrated luminosity of $`\sim `$ 220 $`\mathrm{pb}^{-1}`$. The LEP experiments, despite a very large integrated luminosity ($`\sim `$ 15000 $`\mathrm{pb}^{-1}`$ total across all experiments), have event samples substantially smaller than the Tevatron experiments. The LEP2 W results presented at this conference are based on event samples of $`\sim `$k events per experiment. However, despite the smaller statistics of the W event sample in comparison to the Tevatron experiments, the LEP2 experiments ultimately achieve a comparable precision. On an event by event basis, the LEP2 events have more information; in particular the LEP2 experiments can impose energy and momentum constraints because they have a precise knowledge of the initial state through the beam energy measurement. The NuTeV experiment at FNAL has a large sample ($`10^6`$) of charged current events mediated by the $`t`$-channel exchange of a W boson. This allows an indirect determination of the W mass through a measurement of $`\mathrm{sin}^2\theta _\mathrm{w}`$. This is done by comparing the neutral and charged current cross sections in $`\nu `$Fe and $`\overline{\nu }`$Fe collisions. The event samples available for electroweak tests at HERA are still rather modest and thus at present their results do not attain the precision of the $`p\overline{p}`$ and $`\nu N`$ results. However, the ability to span a large range in momentum transfers and have both $`e^+p`$ and $`e^{-}p`$ collisions allows a number of unique electroweak observations to be made.
The results from the Tevatron experiments on the top quark are now reaching their conclusion. These measurements, based on only $`\sim `$ 100 events selected from over $`10^{12}`$ $`p\overline{p}`$ collisions at the Tevatron, represent what it is possible to achieve with a robust trigger and imaginative analysis techniques.
## 3 W Boson and Top Quark Mass Measurements
A precise W mass measurement allows a stringent test of the Standard Model beyond tree level where radiative corrections lead to a dependence of the W mass on both the top quark mass and the mass of the, as yet unobserved, Higgs boson. The dependence of the radiative corrections on the Higgs mass is only logarithmic whilst the dependence on the top mass is quadratic. Simultaneous measurements of the W and top masses can thus ultimately serve to further constrain the Higgs mass beyond the LEP1/SLD limits and potentially indicate the existence of particles beyond the Standard Model. Similarly, non Standard Model decays of the W would change the width of the W boson. A precise measurement of the W width can therefore be used to place constraints on physics beyond the Standard Model. The latest results on the W mass from the LEP2 and Tevatron experiments are now of such a precision that the uncertainty on the top mass is beginning to become the limiting factor in predicting the mass of the Higgs boson.
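The contrast between the quadratic top dependence and the logarithmic Higgs dependence can be made concrete with the leading one-loop top contribution to the $`\rho `$ parameter, a standard textbook expression (the logarithmic Higgs term is omitted in this sketch):

```python
import math

G_F = 1.16637e-5  # Fermi constant, GeV^-2

def delta_rho(m_top):
    """Leading one-loop top contribution to the rho parameter,
    delta_rho = 3 G_F m_t^2 / (8 sqrt(2) pi^2), which drives the
    quadratic m_t dependence of the W mass radiative corrections."""
    return 3.0 * G_F * m_top**2 / (8.0 * math.sqrt(2.0) * math.pi**2)

print(delta_rho(174.3))  # ~ 0.0095: roughly a 1% effect
```

Doubling the top mass quadruples this correction, whereas doubling the Higgs mass only shifts the (much smaller) logarithmic term, which is why the top mass uncertainty dominates the indirect Higgs constraint.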
## 4 Latest Top Mass Measurements
The top quark discovery at the Tevatron in 1995 was the culmination of a search lasting almost twenty years. The top quark is the only quark with a mass in the region of the electroweak gauge bosons and thus a detailed analysis of its properties could possibly lead to information on the mechanism of electroweak symmetry breaking. In particular, its mass is strongly affected by radiative corrections involving the Higgs boson. As such, a measurement of the top mass with a precision $`<`$ 10 GeV can provide information on the mass of the Higgs boson. The emphasis in top quark studies over the past two years at the Tevatron has thus been to make the most precise measurement of the top quark mass. Substantial progress has been made in bringing the mass uncertainty down from $`>`$ 10 GeV, at the point of discovery, to 5.1 GeV in 1999. In the past year, CDF has revised its systematic error analysis in the “all-hadronic” event sample and both experiments have published an analysis of the “di-lepton” event sample. The nomenclature of the event samples refers to the decay chain of the top quarks. At the Tevatron, top quarks are produced in pairs predominantly by $`q\overline{q}`$ annihilation and each top quark decays $`>99.9\%`$ of the time to $`Wb`$. If both Ws decay to $`qq^{\prime }`$, then the final state from the top system is $`qq^{\prime }`$$`qq^{\prime }`$$`b\overline{b}`$ and the event sample is referred to as “all-hadronic”. Conversely, the “di-lepton” event sample is realised by selecting a $`l^+\nu l^{-}\overline{\nu }`$$`b\overline{b}`$ final state, where both Ws have decayed leptonically (to $`e`$ or $`\mu `$). The “lepton+jet” event sample is one in which one W has decayed hadronically and one leptonically. The precision with which the top mass can be measured with each sample depends on the branching fractions, the level of background and how well constrained the system is.
The all-hadronic sample has the largest cross section but has a large background from QCD six jet events (S/N $`\sim 0.3`$) whilst the di-lepton sample has a small background (S/N $`\sim `$ 4) but suffers from a small cross section and the events are “under-constrained” since they contain two neutrinos. The optimum channel in terms of event sample size, background level and kinematic information content is the lepton+jet channel. Indeed this channel has a weight of $`\sim `$ 80 % in the combined Tevatron average. In the new “di-lepton” analysis, it is not possible to perform a simple constrained fit for the top mass because the solution, owing to the two neutrinos, is under-constrained. One thus makes a comparison of the dynamics of the decay with Monte-Carlo expectations e.g. the $`\overline{)}\mathrm{E}_\mathrm{T}`$ distribution and assigns an event weight to each possible solution of the fit : two for each top decay corresponding to the two-fold ambiguity in neutrino rapidity.
The mass distributions of figure 1 along with that from CDF’s analysis of the all-hadronic event sample have recently been combined to provide a Tevatron average in which all correlations between the various measurements have been carefully accounted for. The two experiments have assumed a 100 % correlation on all systematic uncertainties related to the Monte Carlo models. Indeed, the uncertainty in the Monte Carlo model of the QCD radiation is one of the largest systematic uncertainties. This error source will require a greater understanding if the top mass precision is to be significantly improved in the next Tevatron run. The other dominant systematic error is the determination of the jet energy scale which relies on using in-situ control samples e.g. Z+jet, $`\gamma `$+jet events. In the next run, due to significant improvements in the trigger system, both experiments should be able to accumulate a reasonable sample of Z $`\to b\overline{b}`$ events which will be of great assistance in reducing the uncertainty in the $`b`$-jet energy scale.
The combined Tevatron mass value is 174.3 $`\pm `$ 3.2 (stat.) $`\pm `$ 4.0 (syst.) GeV. The individual mass measurements, the correlations between them and the relative weights of the measurements in the average are shown in figure 2.
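Schematically, such a combination is a best linear unbiased estimate (BLUE) built on a covariance matrix. The sketch below is a minimal illustration of that idea, not the actual Tevatron averaging code; the correlation of the systematic errors enters as an input assumption, mirroring the fully correlated Monte-Carlo systematics described above.

```python
import numpy as np

def combine(values, stat, syst, corr_syst=1.0):
    """Best linear unbiased estimate of correlated measurements.
    `corr_syst` is the assumed correlation of the systematic errors
    between measurements (1.0 = fully correlated)."""
    values, stat, syst = map(np.asarray, (values, stat, syst))
    n = len(values)
    cov = np.diag(stat**2).astype(float)
    for i in range(n):
        for j in range(n):
            cov[i, j] += (corr_syst if i != j else 1.0) * syst[i] * syst[j]
    # BLUE weights: proportional to Cov^-1 applied to the unit vector
    w = np.linalg.solve(cov, np.ones(n))
    w /= w.sum()
    return w @ values, np.sqrt(w @ cov @ w)
```

With strongly correlated systematics, the combined error is dominated by the common component and cannot drop below it, which is the situation the Tevatron average faces.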
This mass measurement is a supreme vindication of the Standard Model, which, based on other electroweak measurements, predicts a top mass of 172 $`{}_{-11}^{+14}`$ GeV. Of all the quark masses, that of the top quark is now the best measured.
## 5 Tevatron W Mass Measurements
At the Tevatron W bosons are predominantly produced singly by quark anti-quark annihilation. The quarks involved are mostly valence quarks because the Tevatron is a $`p\overline{p}`$ machine and the $`x`$ values involved in W production (0.01 $`\stackrel{<}{\sim }`$ $`x`$ $`\stackrel{<}{\sim }`$ 0.1) are relatively high. The W bosons are only detected in their decays to $`e\nu `$ (CDF and DØ) and $`\mu \nu `$ (CDF only) since the decay to $`qq^{\prime }`$ is swamped by the QCD dijet background whose cross section is over an order of magnitude higher in the mass range of interest. At the Tevatron one does not know the event $`\widehat{\mathrm{s}}`$ and one cannot determine the longitudinal neutrino momentum since a significant fraction of the products from the $`p\overline{p}`$ interaction are emitted at large rapidity where there is no instrumentation. Consequently, one must determine the W mass from transverse quantities, namely the transverse mass ($`\mathrm{M}_\mathrm{T}`$), the charged lepton $`\mathrm{P}_\mathrm{T}`$ ($`\mathrm{P}_\mathrm{T}^l`$) or the missing transverse energy ($`\overline{)}\mathrm{E}_\mathrm{T}`$). $`\overline{)}\mathrm{E}_\mathrm{T}`$ is inferred from a measurement of $`\mathrm{P}_\mathrm{T}^l`$ and the remaining $`\mathrm{P}_\mathrm{T}`$ in the detector, denoted by $`\stackrel{}{\mathrm{U}}`$ i.e.
$$\stackrel{}{\overline{)}\mathrm{E}_\mathrm{T}}=-\left(\stackrel{}{\mathrm{U}}+\stackrel{}{\mathrm{P}_\mathrm{T}^l}\right),$$
and the transverse mass $`\mathrm{M}_\mathrm{T}`$ is defined as
$$\mathrm{M}_\mathrm{T}=\sqrt{2\mathrm{P}_\mathrm{T}^l\overline{)}\mathrm{E}_\mathrm{T}\left(1-\mathrm{cos}\varphi \right)},$$
where $`\varphi `$ is the angle between $`\stackrel{}{\overline{)}\mathrm{E}_\mathrm{T}}`$ and $`\stackrel{}{\mathrm{P}_\mathrm{T}^l}`$.
$`\stackrel{}{\mathrm{U}}`$ receives contributions from two sources. Firstly, the so-called W recoil i.e. the particles arising from initial state QCD radiation from the $`q\overline{q}`$ legs producing the hard-scatter and secondly contributions from the spectator quarks ($`p\overline{p}`$ remnants) and additional minimum bias events which occur in the same crossing as the hard scatter. This second contribution is generally referred to as the underlying-event contribution. Experimentally these two contributions cannot be distinguished. Owing to the contribution from the underlying-event, the missing transverse energy resolution has a significant dependence on the instantaneous $`p\overline{p}`$ luminosity. $`\mathrm{M}_\mathrm{T}`$ is to first order independent of the transverse momentum of the W ($`\mathrm{P}_\mathrm{T}^\mathrm{W}`$) whereas $`\mathrm{P}_\mathrm{T}^l`$ is linearly dependent on $`\mathrm{P}_\mathrm{T}^\mathrm{W}`$. For this reason, and at the current luminosities where the effect of the $`\overline{)}\mathrm{E}_\mathrm{T}`$ resolution is not too severe, the transverse mass is the preferred quantity to determine the W mass. However, the W masses determined from the $`\mathrm{P}_\mathrm{T}^l`$ and $`\overline{)}\mathrm{E}_\mathrm{T}`$ distributions provide important cross-checks on the integrity of the $`\mathrm{M}_\mathrm{T}`$ result since the three measurements have different systematic uncertainties.
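The transverse-mass definition reduces to a one-liner; the numbers in the usage example are illustrative, chosen so the lepton and neutrino are back-to-back with equal $`\mathrm{P}_\mathrm{T}`$:

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """W transverse mass (GeV) from the charged-lepton pT, the missing
    transverse energy, and the azimuthal angle between them."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# back-to-back lepton and neutrino, each carrying ~half the W mass:
print(transverse_mass(40.2, 40.2, math.pi))  # ~ 80.4, the Jacobian edge
```

Events at the kinematic configuration above pile up near $`\mathrm{M}_\mathrm{T}\approx \mathrm{M}_\mathrm{W}`$, which is the Jacobian edge the template fits exploit.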
The systematics of the LEP2 and Tevatron measurements are very different and thus provide welcome complementary determinations of the W mass. The systematics at LEP2 are dominated by the uncertainty in the beam energy (which is used as a constraint in the mass fits) and by the modeling of the hadronic final state, particularly for the events where both W bosons decay hadronically. At the Tevatron, the systematics are dominated by the determination of the charged lepton energy scale and the Monte-Carlo modeling of the W production, in particular its $`\mathrm{P}_\mathrm{T}`$ and rapidity distribution. At the Tevatron, one cannot use a beam energy constraint to reduce the sensitivity of the W mass to the absolute energy (E) and momentum (p) calibration of the detector. Any uncertainty in the detector E, p scales thus enters directly as an uncertainty in the Tevatron W mass. This means that the absolute energy and momentum calibration of the detectors must be known to better than 0.01%. By contrast at LEP, an absolute calibration of 0.5 % is sufficient.
The W mass at the Tevatron is determined through a precise simulation of the transverse mass line-shape, which exhibits a Jacobian edge at $`\mathrm{M}_\mathrm{T}\sim \mathrm{M}_\mathrm{W}`$. The simulation of the line-shape relies on a detailed understanding of the detector response and resolution to both the charged lepton and the recoil particles. This in turn requires a precise simulation of the W production and decay. The similarity in the production mechanism and mass of the W and Z bosons is exploited in the analysis to constrain many of the systematic uncertainties in the W mass analysis. The lepton momentum and energy scales are determined by a comparison of the measured Z mass from $`\mathrm{Z}\to \mathrm{e}^+\mathrm{e}^{-}`$ and $`\mathrm{Z}\to \mu ^+\mu ^{-}`$ decays with the value measured at LEP. The simulation of the W $`\mathrm{P}_\mathrm{T}`$ and the detector response to it are determined by a measurement of the Z $`\mathrm{P}_\mathrm{T}`$ which is determined precisely from the decay leptons and by a comparison of the leptonic (from the Z decay) and non-leptonic $`\mathrm{E}_\mathrm{T}`$ quantities in Z events. The reliance on the Z data means that many of the systematic uncertainties in the W mass analyses are determined by the statistics of the Z sample.
The W and Z events in these analyses are selected by demanding a single isolated high $`\mathrm{P}_\mathrm{T}`$ charged lepton in conjunction with missing transverse energy (W events) or a second high $`\mathrm{P}_\mathrm{T}`$ lepton (Z events). Depending on the analyses, the $`\overline{)}\mathrm{E}_\mathrm{T}`$ cuts are either 25 or 30 GeV and the lepton $`\mathrm{P}_\mathrm{T}`$ cuts are similarly 25 or 30 GeV. CDF only uses $`\mathrm{W}\to \mathrm{e}\nu `$ and $`\mathrm{W}\to \mu \nu `$ events in the rapidity region : $`|\eta |<1`$, whereas DØ uses $`\mathrm{W}\to \mathrm{e}\nu `$ events out to a rapidity of $`\sim `$ 2.5. In total $`\sim `$ 84k events are used in the W mass fits and $`\sim `$ 9k Z events are used for calibration.
### 5.1 Lepton Scale Determination
The lepton scales for the analyses are determined by comparing the measured Z masses with the LEP values. The mean lepton $`\mathrm{P}_\mathrm{T}`$ in Z events ($`\mathrm{P}_\mathrm{T}`$ $`\sim `$ 42 GeV) is $`\sim `$ 5 GeV higher than in W events, consequently in addition to setting the scale one also needs to determine the non-linearity in the scale determination i.e. to determine whether the scale has any $`\mathrm{P}_\mathrm{T}`$ dependence. DØ does this by comparing the Z mass measured with high $`\mathrm{P}_\mathrm{T}`$ electrons with $`\mathrm{J}/\psi `$ and $`\pi ^0`$ masses measured using low $`\mathrm{P}_\mathrm{T}`$ electrons as well as by measuring the Z mass in bins of lepton $`\mathrm{P}_\mathrm{T}`$. In the determination of CDF’s momentum scale the non-linearity is constrained using the very large sample of $`\mathrm{J}/\psi \to \mu \mu `$ and $`\mathrm{\Upsilon }\to \mu \mu `$ events which span the $`\mathrm{P}_\mathrm{T}`$ region : 2 $`<`$ $`\mathrm{P}_\mathrm{T}`$ $`<`$ 10 GeV. The non-linearity in the CDF transverse momentum scale is consistent with zero (see Fig. 3). This fact in turn can be exploited to determine the non-linearity in the electron transverse energy scale through a comparison of the measured E/p with a MC simulation of E/p where no $`\mathrm{E}_\mathrm{T}`$ non-linearity is included. The lepton scale uncertainties form the largest contribution to the W mass systematic error. The non-linearity contribution to the scale uncertainty is typically $`\sim `$ 10% or less.
The Z lineshape is also used by both experiments to determine the charged lepton resolution functions i.e. the non-stochastic contribution to the calorimeter resolution and the curvature tracking resolution in the case of the CDF muon analysis.
### 5.2 W Production Model
The lepton $`\mathrm{P}_\mathrm{T}`$ and $`\overline{)}\mathrm{E}_\mathrm{T}`$ distributions are boosted by the non zero $`\mathrm{P}_\mathrm{T}^\mathrm{W}`$ and the $`\overline{)}\mathrm{E}_\mathrm{T}`$ vector is determined in part from the W-recoil products. As such a detailed simulation of the $`\mathrm{P}_\mathrm{T}^\mathrm{W}`$ spectrum and the detector response and resolution functions is a necessary ingredient in the W mass analysis. The W $`\mathrm{P}_\mathrm{T}`$ distribution is determined by a measurement of the Z $`\mathrm{P}_\mathrm{T}`$ distribution (measured from the decay leptons) and a theoretical prediction of the W to Z $`\mathrm{P}_\mathrm{T}`$ ratio. This ratio is known with a small uncertainty and thus the determination of the W $`\mathrm{P}_\mathrm{T}`$ is dominated by the uncertainty arising from the limited size of the Z data sample. The $`\mathrm{P}_\mathrm{T}^\mathrm{Z}`$ distribution of the CDF $`\mathrm{Z}\mu ^+\mu ^{}`$ sample is shown in Fig. 3. The detector response and resolution functions to the W-recoil and underlying event products are determined by both experiments using Z and minimum bias events. Since the W-recoil products are typically produced along the direction of the vector boson $`\mathrm{P}_\mathrm{T}`$ and the underlying event products are produced uniformly in azimuth, the response and resolution functions are determined separately in two projections – one in the plane of the vector boson and one perpendicular to it. Typically one finds the resolution in the plane of the vector boson is poorer owing to the presence of jets (initial state QCD radiation from the quark legs) which are absent in the perpendicular plane where the resolution function matches closely that expected from pure minimum bias events. The parton distribution functions (PDFs) determine the rapidity distribution of the W and hence of the charged lepton. 
Both experiments impose cuts on the rapidity of the charged lepton and so a reliable simulation of this cut is necessary if the W mass determination is not to be biased. On average the u quark is found to carry more momentum than the d quark, resulting in a charge asymmetry of the produced W i.e. W<sup>+(-)</sup> bosons are produced preferentially along the $`p`$ ($`\overline{p}`$) direction. Since the V-A structure of the W decay is well understood, a measurement of the charged lepton asymmetry therefore serves as a reliable means to constrain the PDFs. To determine the uncertainty in the W mass arising from PDFs, MRS PDFs were modified to span the CDF charged lepton asymmetry measurements. This is illustrated in Fig. 3.
### 5.3 Mass Fits
The W mass is obtained from a maximum likelihood fit of $`\mathrm{M}_\mathrm{T}`$ templates generated at discrete values of $`\mathrm{M}_\mathrm{W}`$ with $`\mathrm{\Gamma }_W`$ fixed at the Standard Model value. The templates also include the background distributions, which are small ($`<`$ 5%) and have three components : W $`\to \tau \nu `$, followed by $`\tau \to \mu /e\nu \nu `$, QCD processes where one mis-measured jet mimics the $`\overline{)}\mathrm{E}_\mathrm{T}`$ signature and the other jet satisfies the charged lepton identification criteria and finally Z events where one of the lepton legs is not detected. The transverse mass fits for the DØ end-cap electrons and the two CDF measurements are shown in Fig. 4.
The uncertainties associated with the measurements are listed in Table 1. The uncertainties of the published DØ central-electron analysis are also listed.
For both experiments the largest errors are statistical in nature, both from the statistics of the W sample and also the statistics of the Z samples which are used to define many of the systematic uncertainties e.g. the uncertainties in the lepton energy/momentum scales and the W $`\mathrm{P}_\mathrm{T}`$ model. The CDF and DØ measurements are combined with a 25 MeV common uncertainty which accounts for the uncertainties in PDFs and QED radiative corrections which by virtue of being constrained from the same source are highly correlated. Together the two experiments yield a W mass value of 80.450 GeV with an uncertainty of 63 MeV. For the first time, both Tevatron experiments have measurements with uncertainties below 100 MeV and the combined uncertainty is comparable with the LEP2 results presented at this conference.
## 6 W Mass Result from NuTeV
Neutrino scattering experiments have contributed to our understanding of electroweak physics for more than three decades. Early determinations of $`\mathrm{sin}^2\theta _W`$ served as the critical ingredient to the Standard Model’s successful prediction of the W and Z boson masses. More precise investigations in the late 1980’s set the first useful limits on the top quark mass. The recent NuTeV measurement of the electroweak mixing angle from neutrino-nucleon scattering represents the most precise determination to date. The result is a factor of two more precise than the previous most accurate $`\nu `$N measurement. In deep inelastic neutrino-nucleon scattering, the weak mixing angle can be extracted from the ratio of neutral current (NC) to charged current (CC) total cross sections. Previous measurements relied on the Llewellyn Smith formula, which relates these ratios to $`\mathrm{sin}^2\theta _W`$ for neutrino scattering on isoscalar targets. However such measurements were plagued by large uncertainties in the charm contribution (principally due to the imprecise knowledge of the charm quark mass). An alternate method for determining $`\mathrm{sin}^2\theta _W`$ that is much less dependent on the details of charm production and other sources of model uncertainty is derived from the Paschos-Wolfenstein quantity, $`R^{-}`$ :
$$R^{-}\equiv \frac{\sigma (\nu _\mu N\to \nu _\mu X)-\sigma (\overline{\nu }_\mu N\to \overline{\nu }_\mu X)}{\sigma (\nu _\mu N\to \mu ^{-}X)-\sigma (\overline{\nu }_\mu N\to \mu ^+X)}=\frac{R^\nu -rR^{\overline{\nu }}}{1-r}=\frac{1}{2}-\mathrm{sin}^2\theta _W$$
(1)
Here $`r`$ denotes the ratio of the $`\overline{\nu }`$ to $`\nu `$ charged current cross sections. Because $`R^{-}`$ is formed from the difference of neutrino and anti-neutrino cross sections, almost all sensitivity to the effects of sea quark scattering cancels. This reduces the error associated with heavy quark production by roughly a factor of eight relative to the previous analysis. The substantially reduced uncertainties, however, come at a price. The ratio $`R^{-}`$ is difficult to measure experimentally because neutral-current neutrino and anti-neutrino events have identical observed final states. The two samples can only be separated via *a priori* knowledge of the incoming neutrino beam type. This is done by using the FNAL Sign Selected Quadrupole Train (SSQT) which selects mesons of the appropriate sign. The measured $`\overline{\nu }_\mu `$ contamination in the $`\nu _\mu `$ beam is less than 1/1000 and the $`\nu _\mu `$ contamination in the $`\overline{\nu }_\mu `$ beam is less than 1/500. In addition, the beam is almost purely muon-neutrino with a small contamination of electron neutrinos ($`1.3\%`$ in neutrino mode and $`1.1\%`$ in anti-neutrino mode). The NC and CC events are selected by their characteristic event length : the CC events produce a muon and thus register activity in the detector over a long length, whereas NC events just produce a short hadronic shower. This is illustrated in figure 5 where the two event-length contributions to the event sample are shown. The events are separated by a cut at the 20<sup>th</sup> counter i.e. after $`\sim `$ 2 m of steel.
From the $`\nu N`$ interactions, 386 $`k`$ NC and 919 $`k`$ CC events are recorded and from the $`\overline{\nu }N`$ sample 89 $`k`$ NC and 210 $`k`$ CC events. The extracted value of $`\mathrm{sin}^2\theta _W`$(on-shell) = 0.2254 $`\pm `$ 0.0021 which can be translated into an Mw value of 80.26 $`\pm `$ 0.1 (stat.) $`\pm `$ 0.05 (syst.) GeV, where the systematic error also receives a contribution from the unknown Higgs mass.
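The quoted Mw value follows from the defining relation of the on-shell scheme, $`\mathrm{sin}^2\theta _W=1-M_W^2/M_Z^2`$. The sketch below is the tree-level inversion only; the full NuTeV analysis includes radiative corrections, which is where the residual Higgs-mass dependence of its systematic error enters.

```python
import math

M_Z = 91.1875  # GeV, from LEP

def mw_from_sin2(sin2_on_shell):
    """Invert the on-shell definition sin^2(theta_W) = 1 - M_W^2/M_Z^2
    to obtain M_W from a measured mixing angle."""
    return M_Z * math.sqrt(1.0 - sin2_on_shell)

print(mw_from_sin2(0.2254))  # ~ 80.26 GeV
```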
## 7 Electroweak Results from HERA
The two HERA experiments, ZEUS and H1, are now beginning to probe the electroweak interaction in the space-like domain at scales of $`\sim `$ 10<sup>-3</sup> fm. The results up to $`Q^2`$ values of 40,000 GeV<sup>2</sup> are in good agreement with the Standard Model predictions. The use of both $`e^+p`$ and $`e^{-}p`$ collisions and the fact that the experiments can measure both neutral current and charged current processes, allows one to directly observe electroweak unification in a single experiment. This is illustrated in figure 6 where both the neutral current and charged current cross sections are shown as a function of Q<sup>2</sup>. At low Q<sup>2</sup>, the neutral current cross section, mediated by $`\gamma `$ exchange, dominates; but as Q<sup>2</sup> increases one observes the emergence of the charged current cross section (mediated by $`W`$ exchange) with a magnitude comparable to that of the neutral current cross section (which becomes dominated by $`Z_0`$ exchange at high Q<sup>2</sup>). The detailed comparison of the four cross sections can be used to measure parton distributions : in particular one can separate the light quark contributions to the cross sections and determine the $`u`$ and $`d`$ quark distributions. By using the helicity variable $`y`$, where $`y`$ is related to the scattering angle in the electron-quark center of mass frame via $`(1-y)=\mathrm{cos}^2\frac{\theta }{2}`$, and measuring the NC cross section as a function of $`(1-y)^2`$, one observes direct evidence for the $`\gamma Z_0`$ interference contribution to the NC cross section at high $`Q^2`$.
Although the HERA experiments have only produced a handful of direct Ws, they have several thousand charged current events in which a virtual W is exchanged in the $`t`$-channel. From a measurement of the Q<sup>2</sup> dependence of the cross-section one becomes sensitive to the charged current propagator and hence W mass. This determination (see figure 7) of the W mass agrees with the direct determinations but is presently not competitive owing to the statistical uncertainty and the systematics associated with parton distribution functions : an uncertainty which can also be reduced with more data. This determination is made more powerful if one also considers the magnitude of the charged current cross section and assumes the Standard Model relation between $`G_F`$ and Mw and Mz. This relation also receives radiative corrections which depend on the masses of the fundamental gauge bosons and fermions. One thus has a dependence on Mw in both the $`Q^2`$ variation (propagator) and the cross section magnitude (via $`G_F`$). By fixing the measured cross section to the Standard Model value, one can obtain a W mass value with an uncertainty of $`\sim `$ 400 MeV.
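The propagator sensitivity comes from the factor $`(M_W^2/(Q^2+M_W^2))^2`$ that damps the CC cross section at large Q<sup>2</sup>. A small numerical illustration (the Mw value is assumed for the example):

```python
def propagator_suppression(q2, m_w=80.4):
    """Factor (M_W^2 / (Q^2 + M_W^2))^2 by which t-channel W exchange
    suppresses the charged-current cross section at large Q^2 (GeV^2)."""
    return (m_w**2 / (q2 + m_w**2)) ** 2

# roughly flat below Q^2 ~ M_W^2, then falling steeply:
for q2 in (100.0, 80.4**2, 40000.0):
    print(q2, propagator_suppression(q2))
```

Fitting this shape to the measured Q<sup>2</sup> spectrum is what makes the CC data sensitive to Mw even without reconstructing a W decay.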
## 8 Comparison of Mw results
The LEP2 mass values are compared with the Tevatron values in figure 8. They are in excellent agreement despite being measured in very different ways with widely differing sources of systematic error. These direct measurements are also in very good agreement with the indirect measurement from NuTeV and the prediction based on fits to existing, non W, electroweak data.
The precision of these measurements has increased the sensitivity that one now has to the mass of the Higgs Boson. Indeed, it is the uncertainty on the top quark mass that is now becoming the limiting factor in the determination of the Higgs mass. As figure 9 shows, the available data tend to favour a Higgs boson with a mass $`<`$ 250 GeV.
## 9 W Width and branching fraction measurements
The Tevatron presently has the most precise direct determination of the W width and has measurements of the W branching fractions comparable in precision to the LEP2 measurements. The Tevatron experiments determine the width by a one parameter likelihood fit to the high transverse mass end of the transverse mass distribution. Detector resolution effects fall off in a Gaussian manner such that at high transverse masses ($`\mathrm{M}_\mathrm{T}`$ $`\stackrel{>}{\sim }`$ 120 GeV), the distribution is dominated by the Breit-Wigner behaviour of the cross section (see figure 10). In the fit region, CDF has 750 events, in the electron and muon channels combined.
At LEP2, the W branching fractions are determined by an explicit cross section measurement whilst at the Tevatron they are determined from a measurement of a cross section ratio. Specifically, the W branching fraction can be written as $`\mathrm{Br}(W\to e\nu )=\frac{\sigma _Z}{\sigma _W}\frac{\mathrm{\Gamma }(Z\to ee)}{\mathrm{\Gamma }(Z)}R`$, where $`R=\frac{\sigma _W\mathrm{Br}(W\to e\nu )}{\sigma _Z\mathrm{Br}(Z\to ee)}`$ is the quantity measured at the Tevatron. This determination thus relies on the LEP1 measurement of the Z branching fractions and the theoretical calculation of the ratio of the total Z and W cross sections. The Tevatron measurement, $`\mathrm{Br}(W\to e\nu )`$ = 10.43 $`\pm `$ 0.25 %, is now becoming systematics limited. In particular, the uncertainty due to QED radiative corrections in the acceptance calculation and in $`\frac{\sigma _W}{\sigma _Z}`$ contributes 0.19 % to the total systematic uncertainty of 0.23 %. The corresponding measurement from LEP2 is $`\mathrm{Br}(W\to e\nu )`$ = 10.52 $`\pm `$ 0.26 %.
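The extraction of the branching fraction from the measured ratio R can be sketched as follows. The input numbers are purely illustrative placeholders, not the measured values.

```python
def br_w_enu(R, sigma_ratio, br_z_ee):
    """Extract Br(W -> e nu) from the measured ratio
    R = sigma_W Br(W->e nu) / (sigma_Z Br(Z->ee)),
    the theoretical cross-section ratio sigma_W/sigma_Z,
    and the LEP1 value of Br(Z->ee)."""
    return R * br_z_ee / sigma_ratio

# illustrative inputs only: sigma_W/sigma_Z = 3.33,
# Br(Z->ee) = 3.366e-2, R = 10.32
print(br_w_enu(10.32, 3.33, 3.366e-2))  # ~ 0.104, i.e. ~10.4 %
```

The precision of the result is therefore tied to the LEP1 Z branching fractions and to the theoretical cross-section ratio, which is why the QED and acceptance systematics quoted above dominate.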
The large sample of W events at the Tevatron has also allowed a precise determination of $`g_\tau /g_e`$ through a measurement of the ratio of the $`W\to \tau \nu `$ and $`W\to e\nu `$ cross sections. The latest Tevatron measurement of this quantity is $`g_\tau /g_e`$ = 0.99 $`\pm `$ 0.024, in excellent agreement with the Standard Model prediction of unity and with the LEP2 measurement of $`g_\tau /g_e`$ = 1.01 $`\pm `$ 0.022.
## 10 Outlook
The majority of electroweak results presented here are presently dominated by statistical uncertainties, either directly or in the control/calibration samples. For NuTeV no further data is planned, and thus the precision of their measurement of $`\mathrm{sin}^2\theta _W`$ will remain dominated by the statistical uncertainty. In contrast, both the Tevatron and HERA are undergoing luminosity upgrades augmented with substantial improvements in the colliders’ detectors. At HERA, 150$`\mathrm{pb}^{-1}`$ per year per experiment is expected, as is the availability of polarised electrons and positrons in $`ep`$ collider mode. At the Tevatron, at least 2$`\mathrm{fb}^{-1}`$ is expected per experiment at an increased center of mass energy of 2 TeV. It is expected that both the top quark mass and W mass measurements will become limited by systematic uncertainties. The statistical part of the Tevatron W mass error in the next run will be $`\sim `$ 10 MeV, where this also includes the part of the systematic error which is statistical in nature, e.g. the determination of the charged lepton E and p scales from Z events. At present the errors non-statistical in nature contribute 25 MeV out of the total Tevatron W mass error of 60 MeV. A combined W mass measurement which is better than the final LEP2 uncertainty can thus be anticipated. The W width is expected to be determined with an uncertainty of 20–40 MeV. The statistical uncertainty on the top quark mass in the next Tevatron run will be $`\sim `$ 1 GeV. The systematic errors arising from uncertainties in the jet energy scale and in the modeling of QCD radiation are expected to be dominant, with a total error of 2–3 GeV per experiment expected.
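The error bookkeeping above combines in quadrature. A back-of-the-envelope sketch (ours; the assumption that the non-statistical part stays at its present 25 MeV is an illustration, not a quoted projection):

```python
from math import sqrt, hypot

# Rough quadrature bookkeeping for the W mass error (all numbers in MeV).
current_total = 60.0    # present total Tevatron W mass error
current_nonstat = 25.0  # part non-statistical in nature, quoted above
current_stat = sqrt(current_total**2 - current_nonstat**2)
print(f"implied present statistical part: {current_stat:.0f} MeV")  # ~55

next_stat = 10.0        # expected statistical part in the next run
# Assumption (ours): the non-statistical part stays at 25 MeV.
next_total = hypot(next_stat, current_nonstat)
print(f"projected total (systematics unchanged): {next_total:.0f} MeV")  # ~27
```

Even under this pessimistic assumption the projected error sits well below the present 60 MeV, consistent with the expectation that systematics will dominate.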
## 11 Conclusions
The hadron collider experiments continue to make significant contributions to electroweak physics which complement those from $`e^+e^{-}`$ machines. The higher cross sections and $`\sqrt{\mathrm{s}}`$ allow some unique measurements to be made, e.g. the mass of the top quark. The precision of many measurements is comparable to, if not better than, that achieved at $`e^+e^{-}`$ machines. All results (top mass, W mass, W width, branching fractions, $`\mathrm{sin}^2\theta _W`$, cross sections, etc.) are in excellent agreement with the Standard Model in a range of processes ($`qq`$, $`eq`$, $`\nu q`$) over a wide span in $`Q^2`$. The future is bright, and significant new results are expected from the Tevatron and HERA experiments before the LHC.
## 1 Introduction
In a remarkable series of papers, A. Sen showed that Type II string theories contain, besides D-branes, other extended objects which are not supersymmetric. These non-BPS branes can be viewed as the bound state formed by two coincident D-branes carrying opposite Ramond-Ramond charge when the world-volume tachyon condenses in a real kink solution. In this way, starting from two BPS D(p+1)-branes, one can describe a single non-BPS $`p`$-brane. Alternatively, it is possible to describe the same non-BPS configuration by modding out the theory by the operator $`(-1)^{F_L}`$, whose effect is to change the sign of all R-R and R-NS states. In this case, the system of a D$`p`$-brane and a $`\overline{\mathrm{D}}p`$-brane in the Type IIA (IIB) theory becomes a non-BPS $`p`$-brane in the Type IIB (IIA) theory<sup>1</sup><sup>1</sup>1For the details of the different constructions we refer to the reviews .. From this second point of view, a non-BPS brane can be defined as a hyper-plane where two different kinds of open strings, distinguished by a Chan-Paton factor, can end. The first sector, with C-P factor 1l, is identical to the one living on usual D-branes, whereas the second sector, with C-P factor $`\sigma _1`$, differs from the first one in that it contains only states which are odd under the GSO operator $`(-1)^F`$. Due to this non-standard projection, a real open string tachyon survives in the $`\sigma _1`$ sector, and a non-BPS brane is therefore unstable in a flat Type II theory.
Interestingly, it is possible to construct stable states from the above configurations by exploiting discrete symmetries of the original Type II theory under which the tachyonic field is odd. So far, two cases have been thoroughly studied: first, Type I theory, that is essentially Type IIB theory modded out by the world-sheet parity operator $`\mathrm{\Omega }`$; second, Type II theories compactified on the orbifold $`T^4/𝐙_2`$, the $`𝐙_2`$ being generated by the reflection $`_4`$ of the four compact directions or by its T-dual version $`_4(-1)^{F_L}`$. These stable objects are interesting for several reasons. First, they are part of the spectra of the above string theories, and a description which misses them would be incomplete. Moreover, despite the fact that these states do not preserve any supersymmetry, they are simple enough to allow for a precise analysis of their physical properties, like masses or couplings. Indeed, non-BPS branes are much on the same footing as usual D-branes: their exact microscopic description is given by the conformal field theory of the open strings ending on them. From the closed string point of view, the properties of this conformal field theory can be summarized in the boundary state formalism, in which D-branes are described by a coherent closed string state inserting a boundary on the string world-sheet and enforcing on it the appropriate boundary conditions. The boundary state approach can be naturally extended also to non-BPS branes, and is often a very useful tool for describing branes from the point of view of the bulk theory.
The effective action for non-BPS branes has been studied by Sen in Ref. mainly in the flat Type II context or, in the stable case, by restricting to couplings to states of the untwisted closed string sector.
In this letter, we study additional couplings to states in the twisted sectors arising at the fixed-points where the curvature is concentrated. For convenience, we focus on the standard $`_4`$ orbifold<sup>2</sup><sup>2</sup>2For a detailed analysis of the perturbative and the D-brane spectrum in this theory see ., but in the $`T`$-dual case $`_4(-1)^{F_L}`$ one recovers similar results. In particular, we show that the non-BPS brane action includes a Wess-Zumino term involving twisted R-R states. Besides the expected minimal coupling to the appropriate form, this contains also anomalous couplings to lower forms. We will show that these couplings induce a tree-level inflow which compensates one-loop anomalies that can arise in the world-volume theory; they are therefore crucial for the consistency of the theory. We also show that the appearance of such anomalous couplings for these stable non-BPS branes is in agreement with their interpretation as BPS branes wrapped on non-supersymmetric cycles.
## 2 Non-BPS branes in Type II on $`T^4/𝐙_2`$
In Ref. it was pointed out that a non-BPS brane with an odd number of directions wrapped on the orbifold $`T^4/_4`$ is a stable object in a certain region of the moduli space. Let us briefly recall under what conditions the tachyonic field disappears. Consider a non-BPS $`(p+n)`$-brane with $`n`$ (odd) Neumann directions in the compact space ($`x^6,x^7,x^8,x^9`$). As usual in toroidal compactification, open string states living on the brane can have Kaluza-Klein modes along Neumann directions ($`x^a`$) and winding modes along Dirichlet ones ($`x^i`$). The effective mass of these states is
$$m^2=\underset{a}{\sum }\left(\frac{n_a}{R_a}\right)^2+\underset{i}{\sum }\left(\frac{w^iR_i}{\alpha ^{\prime }}\right)^2-\frac{1}{2\alpha ^{\prime }}.$$
(2.1)
If the radii of the compact dimensions satisfy the relations $`R_a\le \sqrt{2\alpha ^{\prime }}`$ and $`R_i\ge \sqrt{\alpha ^{\prime }/2}`$, the only open string state which is really tachyonic is the zero mode $`n_a=w^i=0`$. Clearly, this instability cannot be cured by adjusting the value of the moduli in a simple toroidal compactification, and in order to stabilize the non-BPS branes the $`𝐙_2`$-projection plays a crucial role.
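The stability condition can be checked numerically from eq. (2.1). The following sketch (ours, in units with α′ = 1, one Neumann and one Dirichlet direction for simplicity) scans the low-lying KK and winding modes at the critical radii and confirms that only the zero mode stays tachyonic:

```python
# Numerical scan of eq. (2.1) at the critical radii (units with alpha' = 1),
# confirming that only the zero mode n = w = 0 remains tachyonic there.
def mass2(n, w, R_N, R_D, alpha=1.0):
    # alpha' m^2 for a state with KK number n (Neumann direction, radius R_N)
    # and winding w (Dirichlet direction, radius R_D)
    return (n / R_N) ** 2 + (w * R_D / alpha) ** 2 - 1.0 / (2.0 * alpha)

R_N = 2.0 ** 0.5   # R_a = sqrt(2 alpha'): boundary value for the KK modes
R_D = 0.5 ** 0.5   # R_i = sqrt(alpha'/2): boundary value for the winding modes

# A small tolerance absorbs floating-point rounding for the modes that sit
# exactly at the massless threshold.
tachyons = [(n, w) for n in range(-2, 3) for w in range(-2, 3)
            if mass2(n, w, R_N, R_D) < -1e-9]
print(tachyons)   # [(0, 0)]
```

At the critical radii the first KK and winding modes become exactly massless; inside the stability region they are massive, so projecting out the zero mode removes the only instability.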
Let us first recall some generalities of the $`D=6`$ $`N=2`$ bulk theory. In the untwisted closed string sector, modding out by $`_4`$ kills half of the original physical degrees of freedom. One is then left with a gravitational multiplet and either 4 vector multiplets of $`N=(1,1)`$ supersymmetry in the Type IIA theory or 5 tensor multiplets of $`N=(2,0)`$ supersymmetry in the Type IIB theory. At each of the 16 orbifold fixed-planes, there are also twisted sectors, in which strings close up to an $`_4`$ identification. It turns out that, in this case, one recovers a supersymmetric spectrum by using, also in the twisted sector, the natural GSO projection and by keeping the states even under $`_4`$. At each of the orbifold fixed-points, one gets a vector multiplet in the Type IIA case and a tensor multiplet in the Type IIB case.
In order to discuss the low-energy world-volume theory on the previously introduced non-BPS brane, one has then to define the $`𝐙_2`$ action on the open string states living on it, and in particular on their C-P wave-function. The correct procedure is to impose the conservation of the quantum numbers with respect to $`_4`$ in the interactions among open and closed strings. In the 1l sector, the natural choice of taking the open string vacuum as an even state under $`_4`$ turns out to be consistent. Conversely, in the $`\sigma _1`$ sector, one is forced to take the opposite choice. To see this, it is sufficient to look at the two-point amplitude between an untwisted R-R state and an open string tachyon. This is the well-known coupling of a non-BPS brane to the untwisted forms arising when the tachyon is not constant, $`\int 𝑑T\wedge C^{(p)}`$. In momentum space this gives a vertex of the form $`k_T^0C^{\mu _1\mathrm{\dots }\mu _p\mu _{p+1}\mathrm{\dots }\mu _{p+n}}`$; since $`C`$ (like the brane emitting it) has an odd number of Lorentz indices in the compact space, it is odd under $`_4`$, and thus the tachyon field has to be so as well. This means that the tachyonic zero mode is projected out and the non-BPS brane is stable when one considers the range of radii specified after Eq. (2.1). Thus, one can summarize the projection rules on the open string sectors by saying that the $`𝐙_2`$ operation acts also on the C-P factor, adding a minus sign to the states in the $`\sigma _1`$ sector.
From the closed string point of view these non-BPS branes are described by a boundary state containing only the untwisted part of the NS-NS sector and the twisted part of the R-R sector. Since the boundary state encodes the couplings of the brane with all the states of the closed string spectrum (see e.g. ), we can conclude that the effective action has to contain a DBI part, describing the couplings to NS-NS untwisted states, and a WZ part, encoding the interactions with twisted R-R states. As usual, the orbifold projection does not change the couplings among the fields in the untwisted sector, and thus one can read this part of the action from the result obtained in flat Type II theory by simply setting to zero the fields which are odd under $`_4`$. On the contrary, the WZ part, involving twisted fields, has to be explicitly calculated and, as we shall see in the next section, the result contains couplings to lower (twisted) forms which are related to an anomaly inflow. This may seem strange, since non-BPS branes contain twice as many fermions as the usual D-branes, and usually in Type II theories the two sets come with opposite chirality. However, by analyzing carefully the effect of the $`𝐙_2`$ projection on the Ramond sector, one sees that a chiral open string spectrum may emerge.
Consider first the spacetime filling case $`p=5`$. In this case, the ten-dimensional Lorentz group $`SO(9,1)`$ is broken to $`SO(5,1)\times SO(4)`$. In the 1l sector, the standard GSO-projection leads to an $`N=2`$ gauge multiplet. In the bosonic sector, this contains one gauge boson in the $`(\mathrm{𝟔},\mathrm{𝟏})`$ and 4 scalar fields in the $`(\mathrm{𝟏},\mathrm{𝟒})`$, corresponding to the dimensional reduction of a gauge boson from $`D=10`$. In the fermionic sector, there are spinors in the $`(\mathrm{𝟒},\mathrm{𝟐})\oplus (\mathrm{𝟒}^{\prime },\mathrm{𝟐}^{\prime })`$, corresponding to the dimensional reduction of a chiral Majorana-Weyl spinor $`\mathrm{𝟏𝟔}`$ from $`D=10`$. In the $`\sigma _1`$ sector, the non-standard GSO-projection leads instead to a bosonic tachyon and spinors in the $`(\mathrm{𝟒},\mathrm{𝟐}^{\prime })\oplus (\mathrm{𝟒}^{\prime },\mathrm{𝟐})`$, corresponding to the dimensional reduction of an anti-chiral Majorana-Weyl spinor $`\mathrm{𝟏𝟔}^{\prime }`$ from $`D=10`$. Finally, one has to keep only $`𝐙_2`$-invariant states; but since the projection also depends on the C-P factor, this means that one has to keep $`_4^0`$-even states in the 1l sector and $`_4^0`$-odd states in the $`\sigma _1`$ sector, where $`_4^0`$ is the orbital contribution to the $`𝐙_2`$ orbifold operator (without the C-P contribution). As already discussed, the tachyon is odd under the global $`_4`$, and is therefore projected out. Moreover, the $`\mathrm{𝟒}`$ of $`SO(4)`$ is by construction odd under $`_4^0`$, and using the decomposition $`\mathrm{𝟒}=\mathrm{𝟐}\oplus \mathrm{𝟐}^{\prime }`$, one concludes that if the $`\mathrm{𝟐}`$ is chosen to be even under $`_4^0`$, then the $`\mathrm{𝟐}^{\prime }`$ has to be odd. The surviving spectrum is therefore found to consist of a gauge boson $`(\mathrm{𝟔},\mathrm{𝟏})`$ ( 1l sector) and the spinors $`(\mathrm{𝟒},\mathrm{𝟐})`$ ( 1l sector) and $`(\mathrm{𝟒},\mathrm{𝟐}^{\prime })`$ ($`\sigma _1`$ sector).
In order to check that this projection is correct also in the R open string sector, it is sufficient to consider the 3-point amplitude between a tachyon and two massless fermions, the first in the 1l sector and the second in the $`\sigma _1`$ sector. This amplitude is proportional to $`\mathrm{Tr}(\sigma _1^2)\langle 0|S_{-1/2}^{\dot{\alpha }}\mathrm{e}_{-1}^{\mathrm{i}kX}S_{-1/2}^\beta |0\rangle `$, and does not vanish when the first spinor is in the $`(\mathrm{𝟒},\mathrm{𝟐})`$ and the second is in the $`(\mathrm{𝟒}^{\prime },\mathrm{𝟐})`$, because the product of these representations contains the ten-dimensional scalar. This shows that the two spinors have opposite eigenvalue under $`_4`$ and that one cannot keep both of them in the orbifolded theory. The non-supersymmetric world-volume theory is therefore chiral.
In the cases $`p<5`$, the six-dimensional Lorentz group $`SO(5,1)`$ is further broken to $`SO(p,1)\times SO(5-p)`$. The world-volume theory is obtained by simple dimensional reduction of the world-volume theory for the $`p=5`$ case from 6 to $`p+1`$ dimensions.
## 3 Anomalies
We have seen in the previous section that the resulting world-volume theory is potentially chiral. More precisely, the theory is strictly chiral only in the spacetime filling case $`p=5`$ (a non-BPS 6- or 8-brane wrapped on 1 or 3 compact directions). For the cases $`p=3`$ and $`p=1`$, the world-volume theory is obtained by dimensional reduction and is correspondingly non-chiral, since the fermions decompose into two sets of opposite chiralities. However, due to the fact that the original fermions were chiral, these two sets of spinors transform as fermionic representations of opposite chiralities also with respect to the group of transverse rotations $`SO(5-p)`$, and a chiral asymmetry is generated when the normal bundle to the brane (in the non-compact spacetime) is non-trivial.
Therefore, as happens in the BPS case, non-BPS branes can also have an anomalous world-volume theory, despite the fact that their string theory construction is perfectly well-defined. This is nothing but a particular case of the well-known fact that a topological defect in a consistent quantum field theory leads to an apparent (local) violation of charge conservation if it supports fermionic zero modes. Obviously, if the starting theory is consistent, this cannot be the final result, and actually it happens that the topological defect develops suitable anomalous couplings to bulk fields, leading to an inflow of charge from the bulk. In other words, the world-volume one-loop anomaly is exactly canceled by a tree-level anomaly, and charge conservation is restored.
The occurrence of R-R anomalous couplings for BPS topological defects like D-branes or O-planes in string theory vacua is by now well established. These couplings are completely determined through the requirement that the inflow of anomaly associated to the corresponding magnetic interactions cancels all possible world-volume anomalies (see also ). They have also been determined through direct string theory computations. Since non-BPS branes also potentially support anomalies, it is natural to expect that they will develop anomalous couplings as well, and indeed we will show that they do so. We shall follow the approach of , and extract the R-R anomalous couplings by factorization from a string theory computation of the anomaly and the inflow.
The one-loop partition function on the non-BPS D-brane is given by the projected annulus vacuum amplitude
$$Z(t)=\frac{1}{4}\mathrm{Tr}_{RNS}[(1+_4)(1+(-1)^F)e^{-tH}],$$
(3.2)
where $`H`$ is the open-string Hamiltonian and the trace contains a sum over the two C-P sectors 1l and $`\sigma _1`$. The generating functional of one-loop correlation functions of photons and gravitons on the non-BPS D-brane is obtained by integrating the above partition function, evaluated in a gauge and gravitational background, over the modular parameter $`t`$ of the annulus with the correct measure: $`\mathrm{\Gamma }=\int _0^{\mathrm{\infty }}𝑑t\mu (t)Z(t)`$. Clearly, possible anomalies can emerge only in the CP-odd part of this effective action, associated to the odd spin-structure, and happen to be boundary terms in moduli space.
In , a general method to compute directly the anomalous part of the effective action through an explicit string computation has been presented. The gauge variation is represented by the insertion of an unphysical vertex in the amplitude, representing a photon or a graviton with pure-gauge polarization. After formal manipulations, this unphysical vertex combines with the world-sheet supercurrent appearing in odd spin-structure amplitudes, to leave the $`t`$-derivative of the correlation of a certain effective vertex operator in a generic gauge and gravitational background. Interestingly, the effect of this operator was recognized to correspond to obtaining the anomaly as Wess-Zumino descent, $`A=2\pi iI^{(1)}`$, the anomaly polynomial $`I`$ being given by the background-twisted partition function. We use here the standard descent notation: given a gauge-invariant polynomial $`I`$ of the gauge and gravitational curvatures $`F`$ and $`R`$, one defines $`I^{(0)}`$ such that $`I=dI^{(0)}`$ and $`I^{(1)}`$ through the gauge variation $`\delta I^{(0)}=dI^{(1)}`$.
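As a simple illustration of this descent notation (our own abelian example, not part of the string computation), take $I = F \wedge F$ with $F = dA$ and gauge variation $\delta A = d\epsilon$:

```latex
% Abelian example: I = F \wedge F with F = dA is closed and gauge invariant.
I = F \wedge F = d(A \wedge F)
  \quad\Rightarrow\quad I^{(0)} = A \wedge F ,
\qquad
\delta_\epsilon I^{(0)} = d\epsilon \wedge F = d(\epsilon F)
  \quad\Rightarrow\quad I^{(1)} = \epsilon\, F .
```

The resulting anomaly $2\pi i\, I^{(1)}$ is linear in the gauge parameter $\epsilon$, as an anomaly must be.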
In consistent string vacua, only the UV boundary $`t0`$ can potentially lead to anomalies, and it turns out that this vanishes by itself. At low-energy, this is interpreted as Green-Schwarz mechanism, through which the quantum one-loop anomaly is cancelled by an equal and opposite classical tree-level inflow. It was suggested in that such an interpretation can be recovered by taking the limit of slowly varying background fields, corresponding to low momenta for external particles in the anomalous graph. In this limit, the partition function becomes a topological index which is independent of the modulus $`t`$, and one ends up with the extremely simple recipe that the polynomial $`I`$ from which both the anomaly and the inflow descend (a ($`D`$+2)-form in $`D`$ dimensions) is given simply by this partition function, with the bosonic zero modes excluded and the convention of working in two dimensions higher.
In our case, the anomaly polynomial is given by
$$I=\frac{1}{4}\mathrm{Tr}_R[(1+_4)(-1)^Fe^{-tH(F,R)}],$$
(3.3)
where from now on it will be understood that one has to keep only the ($`D`$+2)-form component, with $`D=p`$ in our case. Recall now that both $`_4`$ and $`(-1)^F`$ act with opposite signs in the two C-P sectors 1l and $`\sigma _1`$. The $`(-1)^F`$ part gives a vanishing contribution, due both to a cancellation between the two C-P sectors and to the four fermionic zero modes in the compact directions, which are not twisted by the background. The $`_4(-1)^F`$ part gives instead a non-vanishing result, since the above fermionic zero modes are absent and the two C-P sectors give exactly the same result, generating a factor of 2. For the rest, the computation of the partition function proceeds exactly as in . The internal part contributes<sup>3</sup><sup>3</sup>3These numbers can be obtained using $`\zeta `$-function regularization, as in . a factor of $`4`$: the $`n`$ Neumann bosons give a factor $`2^{-n}`$ which cancels the fixed-point degeneracy $`2^n`$, whereas each of the four fermions contributes $`\sqrt{2}`$. Setting $`4\pi ^2\alpha ^{\prime }=1`$, one finds finally
$$I(F,R,R^{\prime })=2\mathrm{ch}(F)\mathrm{ch}(F)\frac{\widehat{A}(R)}{\widehat{A}(R^{\prime })}e(R^{\prime }).$$
(3.4)
Here $`F`$, $`R`$ and $`R^{\prime }`$ indicate the curvature forms of the gauge, tangent and normal bundles. $`\widehat{A}(R)`$ and $`\widehat{A}(R^{\prime })`$ are the A-roof genera of the tangent and the normal bundles, $`e(R^{\prime })`$ is the Euler class of the normal bundle (defined to be $`1`$ when the latter is trivial), and $`\mathrm{ch}(F)`$ is the Chern character of the gauge bundle in the fundamental representation.
This result gives both the one-loop anomaly on the non-BPS D-brane and the opposite inflow that cancels it. The latter is related to the presence of anomalous couplings of the non-BPS D(p+n)-brane to the R-R fields in the twisted sectors arising at the $`2^n`$ fixed-points contained in the $`n`$ compact directions of the world-volume. The general form of these couplings is:
$$S=\underset{i}{\sum }\frac{\mu _i}{2}\int C_iY_i(F,R,R^{\prime }),$$
(3.5)
where $`i=1,\mathrm{\dots },2^n`$ labels the fixed-points. $`\mu _i`$ is an arbitrary charge and $`Y_i(F,R,R^{\prime })`$ a polynomial of the curvatures <sup>4</sup><sup>4</sup>4We use the conventions of , and normalize $`\mu _i`$ such that $`Y_i(0)=1`$.. Finally, $`C_i`$ is the formal sum of all the twisted R-R forms and their duals in the $`i`$-th sector. More precisely, in the relevant Type IIB case, one gets one $`N=2`$ tensor multiplet in each sector, with a R-R content of one scalar and one anti-self-dual 2-form<sup>5</sup><sup>5</sup>5This can be understood in the $`N=1`$ language of , where the $`N=2`$ tensor multiplet decomposes into a hypermultiplet containing 1 R-R and 3 NS-NS scalars, plus a tensor multiplet, containing a R-R anti-self-dual 2-form and a NS-NS scalar., and each $`C_i`$ is therefore the sum of forms of degree 0, 2 and 4. In , it was shown that the inflow generated by such anomalous couplings is given in modulus by
$$I(F,R,R^{\prime })=\underset{i}{\sum }\frac{\mu _i^2}{4\pi }Y_i(F,R,R^{\prime })Y_i(F,R,R^{\prime })e(R^{\prime }).$$
(3.6)
Comparing Eq. (3.4) with Eq. (3.6), one finally extracts
$$\mu _i=\pm \sqrt{\frac{8\pi }{2^n}},\qquad Y_i(F,R,R^{\prime })=\mathrm{ch}(F)\sqrt{\frac{\widehat{A}(R)}{\widehat{A}(R^{\prime })}}.$$
(3.7)
As expected, in the $`n=1`$ case the charge $`\mu _i`$ with respect to each of the two $`C_i`$ is identical to the twisted charge of a fractional brane at the same orbifold point. The factorization obviously leaves an unimportant sign ambiguity in each $`\mu _i`$, which we will ignore.
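A quick arithmetic cross-check of this factorization (ours): plugging the charges of eq. (3.7) into the inflow (3.6) and summing over the $`2^n`$ fixed-points must reproduce the overall factor of 2 in the anomaly polynomial (3.4), independently of $`n`$:

```python
from math import pi, sqrt

# Check that sum_i mu_i^2 / (4 pi) = 2 for the wrapped cases n = 1, 3,
# with mu_i^2 = 8 pi / 2^n from eq. (3.7) and 2^n identical fixed points.
for n in (1, 3):
    mu_sq = 8 * pi / 2**n
    prefactor = 2**n * mu_sq / (4 * pi)
    print(f"n = {n}: sum_i mu_i^2/(4 pi) = {prefactor:.1f}")  # 2.0 in both cases

# For n = 1 the charge is sqrt(4 pi), matching a fractional brane's twisted charge.
print(f"n = 1 charge: mu_i = {sqrt(8 * pi / 2):.3f} = sqrt(4 pi)")
```

The $`n`$-dependence of the individual charges is thus exactly compensated by the number of fixed-points touched by the brane.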
We therefore conclude that the stable non-BPS object obtained by wrapping a non-BPS (p+n)-brane of Type IIB along $`n`$ directions of $`T^4/𝐙_2`$ has the anomalous couplings
$$S=\frac{\mu }{2}\int C\mathrm{ch}(F)\sqrt{\frac{\widehat{A}(R)}{\widehat{A}(R^{\prime })}}$$
(3.8)
where
$$\mu =\sqrt{8\pi },\qquad C=\frac{1}{\sqrt{2^n}}\underset{i=1}{\overset{2^n}{\sum }}C_i,$$
(3.9)
Notice that, strictly speaking, the gauge bundle is restricted to have structure group $`U(1)`$ in the general construction above. To construct configurations with a $`U(N)`$ bundle, one would have to take $`N`$ wrapped non-BPS branes on top of each other, but unfortunately such a configuration is unstable since non-BPS branes repel each other. However, it was pointed out in that when all the radii take the critical value, this force vanishes at the leading one-loop level, due to an accidental boson-fermion degeneracy in the world-volume theory. In this particular case, the configuration of $`N`$ overlapping non-BPS (p+n)-branes becomes stable, and indeed has the anomalous coupling (3.8).
## 4 Discussion
It is well known that the $`_4`$ orbifold describes a particular limit of a smooth K3 manifold in which 16 of the 22 2-cycles shrink into conical singularities, which correspond to the twisted sectors arising in the conformal field theory analysis. The result of the previous section has a natural interpretation also from this geometrical point of view. In fact, in Ref. Sen proposed that the non-BPS branes considered here can be viewed as BPS D-branes wrapped on particular non-supersymmetric cycles present in the K3. This is very similar to the interpretation of fractional D-branes at fixed-points given in : they can be viewed as ten-dimensional D-branes wrapped on the exceptional cycles coming from the resolution of orbifold singularities. More precisely, it is known that non-BPS branes decay, outside the stability region in moduli space, into a D-brane and an anti-D-brane (i.e. with opposite untwisted R-R charge) wrapped on two supersymmetric cycles. This implies that the two configurations, the non-BPS brane and the pair of D-brane and anti-D-brane into which it can decay, have to carry the same R-R charge.
From a geometrical point of view, where one interprets these six-dimensional branes as ten-dimensional objects wrapped on cycles in K3, this statement means that the cycle giving rise to the non-BPS brane and the ones producing the BPS branes are in the same homology class. In the $`n=1`$ case, the latter are simply the two exceptional 2-cycles coming from the resolution of the singularities touched by the brane. Interestingly, this is enough to deduce the couplings of the non-BPS brane from those of fractional branes. Indeed, the WZ action of the non-BPS brane has to come from the usual WZ action of the ten-dimensional BPS brane, reduced on the appropriate 2-cycle. Fortunately, the integration over the cycle is easy in this case: the WZ part of the action contains only forms, and so the only relevant property of the cycle is its homology class. Using then the above homological decomposition, one finds that the WZ action for the non-BPS brane is given by the sum of the WZ action of a BPS brane and that of an anti-brane, wrapped on the two exceptional cycles. These are fractional branes, and the integration over the compact space must then give for each of these the action studied in <sup>6</sup><sup>6</sup>6It should be possible to check through an explicit calculation that the twisted sector couplings arise by decomposing the R-R forms on the exceptional 2-cycle, whereas the untwisted sector couplings come from integrating the NS-NS two-form, which is known to have a non-vanishing flux across the cycle .. The $`n=3`$ case is related to the case just discussed by T-duality, and thus the general pattern is the same. The only difference is that the BPS brane configuration into which the non-BPS brane can decay involves more complicated supersymmetric 2-cycles. In both cases, summing all the WZ actions for the BPS branes, the untwisted part cancels, and for the twisted part one gets precisely the result of Eq. (3.8).
This provides strong evidence that both the geometrical interpretation of and the anomalous couplings derived here are correct.
As was already the case for BPS D-branes, the part of the action involving the R-R fields presents some particular features: from the open string point of view, the result is determined by an odd spin-structure amplitude, in which only an effective zero-mode part of the various vertices is relevant for the final result. This is related to the fact that, from the field theory point of view, these couplings are connected to possible one-loop anomalies in the world-volume theory. Finally, from the geometrical point of view, the WZ term is determined simply by the homology class of the 2-cycle defining the non-BPS brane.
The DBI part of the action does not share these simplifying features and so cannot be determined directly with the above techniques. The only simple observation we can make at this point is that there is no linear coupling to twisted fields. In the absence of a gauge field on the brane, this is evident from the form of the boundary state written in , which does not contain the NS-NS twisted part. The same conclusion, of course, can be derived from an annulus calculation along the lines of , by focusing on the NS$`_4`$ sector. As pointed out in Ref. , this part gives a vanishing contribution due to a cancellation between the two open string sectors ( 1l and $`\sigma _1`$). This calculation is not essentially modified by the introduction of a gauge field on the non-BPS brane: in fact, both sectors are charged under the corresponding $`U(1)`$ field and the cancellation between the two sectors still holds, showing that no linear coupling to the NS-NS twisted scalars is produced by turning on a gauge field<sup>7</sup><sup>7</sup>7We are grateful to C. Bachas for important discussions on this point.. A possible way to go beyond this approximation is to use the geometrical interpretation of non-BPS branes and to derive not only the homology class of the 2-cycle defining them, but also its shape. Work is in progress in this direction.
Acknowledgments
We would like to thank A. Sen for useful comments and for reading this manuscript, and C. Bachas, A. Bilal, M. Billó, C.-S. Chu, M. Frau, A. Lerda, E. Scheidegger and M. Serone for interesting discussions and suggestions. C.A.S. is also grateful to the Physics Institute of the University of Neuchâtel for hospitality. This work has been supported by the EEC under TMR contract ERBFMRX-CT96-0045 and by the Fonds National Suisse.
## 1 Introduction
The anomalous magnetic moment of the muon, $`a_\mu (g_\mu 2)/2`$, is potentially a significant constraint for any extension of the Standard Model (SM). The measured value quoted by the Particle Data Group is
$$a_\mu ^{PDG}=(\mathrm{11\hspace{0.17em}659\hspace{0.17em}230}\pm 84)\times 10^{10}.$$
(1)
This value does not take into account the early results <sup>1</sup><sup>1</sup>1The 1998 result is still preliminary. from the E821 experiment at BNL
$`a_\mu ^{BNL\mathrm{\hspace{0.17em}97}}=(\mathrm{11\hspace{0.17em}659\hspace{0.17em}250}\pm 150)\times 10^{-10},`$ (2)
$`a_\mu ^{BNL\mathrm{\hspace{0.17em}98}}=(\mathrm{11\hspace{0.17em}659\hspace{0.17em}191}\pm \mathrm{\hspace{0.17em}59})\times 10^{-10}.`$ (3)
On the other hand, the SM prediction yields ( and references therein)
$$a_\mu ^{SM}=(\mathrm{11\hspace{0.17em}659\hspace{0.17em}159.6}\pm 6.7)\times 10^{-10},$$
(4)
where the QED, electroweak and hadronic contributions are summed and the errors are combined in quadrature. When we take into account all these results, contributions from new physics beyond the SM are constrained to fit within the window
$$-28\times 10^{-10}<\delta a_\mu ^{NEW}<+124\times 10^{-10},\mathrm{at}\mathrm{\hspace{0.33em}90}\%C.L.$$
(5)
The width of the window is at present dominated by the experimental uncertainty. That, however, will change after the E821 experiment is completed. The inclusion of the current BNL data already reduces the available window by a factor of 2. After two more years of running, E821 is eventually expected to reach an accuracy of $`\pm 3\times 10^{-10}`$. If the measured central value then turns out to be exactly equal to the SM prediction, eq.(4), the new constraint will be
$$-12\times 10^{-10}<\delta a_\mu ^{NEW}<+12\times 10^{-10},\mathrm{at}\mathrm{\hspace{0.33em}90}\%C.L.,$$
(6)
with the window for new physics narrowed by more than a factor of 6. <sup>2</sup><sup>2</sup>2 This estimate does not take into account possible improvements over time in the hadronic uncertainty, and thus the constraint on new physics may actually be even tighter.
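The windows in eqs. (5) and (6) follow from simple error propagation: an inverse-variance average of the measurements in eqs. (1)–(3), errors combined in quadrature with the SM uncertainty of eq. (4), and a 1.645-sigma half-width for 90% C.L. A sketch of the arithmetic (the exact averaging procedure behind the quoted numbers is an assumption here, but it reproduces eq. (5) to within rounding):

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted average of independent measurements."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

def new_physics_window(a_exp, err_exp, a_sm, err_sm, n_sigma=1.645):
    """Window for delta a_mu^NEW at 90% C.L. (errors combined in quadrature)."""
    delta = a_exp - a_sm
    sigma = math.hypot(err_exp, err_sm)
    return delta - n_sigma * sigma, delta + n_sigma * sigma

# All numbers in units of 1e-10: eqs. (1)-(3) averaged, SM value from eq. (4)
a_exp, err_exp = weighted_mean([11659230.0, 11659250.0, 11659191.0],
                               [84.0, 150.0, 59.0])
lo, hi = new_physics_window(a_exp, err_exp, 11659159.6, 6.7)

# Future scenario of eq. (6): +/-3e-10 accuracy, central value at the SM
lo_future, hi_future = new_physics_window(11659159.6, 3.0, 11659159.6, 6.7)
```

The first window comes out near (−28, +124) and the second near ±12, in units of $`10^{-10}`$, matching eqs. (5) and (6).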
In the MSSM, there are significant contributions from new physics due to the chargino-sneutrino and neutralino-smuon loops:
$`\delta a_\mu ^{\chi ^+}`$ $`=`$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \underset{A\alpha }{}}{\displaystyle \frac{m_\mu ^2}{m_{\stackrel{~}{\nu }_\alpha }^2}}\left[(|C_{A\alpha }^L|^2+|C_{A\alpha }^R|^2)F_1(x_{A\alpha })+{\displaystyle \frac{m_{\chi _A^+}}{m_\mu }}Re\{C_{A\alpha }^LC_{A\alpha }^R\}F_3(x_{A\alpha })\right],`$ (7)
$`\delta a_\mu ^{\chi ^0}`$ $`=`$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \underset{a\alpha }{}}{\displaystyle \frac{m_\mu ^2}{m_{\stackrel{~}{\mu }_\alpha }^2}}\left[(|N_{a\alpha }^L|^2+|N_{a\alpha }^R|^2)F_2(x_{a\alpha })+{\displaystyle \frac{m_{\chi _a^0}}{m_\mu }}Re\{N_{a\alpha }^LN_{a\alpha }^R\}F_4(x_{a\alpha })\right],`$ (8)
where $`x_{A\alpha }=m_{\chi _A^+}^2/m_{\stackrel{~}{\nu }_\alpha }^2`$, $`x_{a\alpha }=m_{\chi _a^0}^2/m_{\stackrel{~}{\mu }_\alpha }^2`$,
$`C_{A\alpha }^L`$ $`=`$ $`-g_2V_{A1}(\mathrm{\Gamma }_{\nu L}^{})_{2\alpha },`$
$`C_{A\alpha }^R`$ $`=`$ $`+\lambda _\mu U_{A2}(\mathrm{\Gamma }_{\nu L}^{})_{2\alpha },`$
$`N_{a\alpha }^L`$ $`=`$ $`\lambda _\mu N_{a3}(\mathrm{\Gamma }_{eR}^{})_{2\alpha }+({\displaystyle \frac{g_1}{\sqrt{2}}}N_{a1}+{\displaystyle \frac{g_2}{\sqrt{2}}}N_{a2})(\mathrm{\Gamma }_{eL}^{})_{2\alpha },`$
$`N_{a\alpha }^R`$ $`=`$ $`\lambda _\mu N_{a3}(\mathrm{\Gamma }_{eL}^{})_{2\alpha }-\sqrt{2}g_1N_{a1}(\mathrm{\Gamma }_{eR}^{})_{2\alpha },`$ (9)
and the standard integrals over the two-dimensional Feynman parameter space are
$`F_1(x)`$ $`=`$ $`{\displaystyle \frac{1}{12(x-1)^4}}(x^3-6x^2+3x+2+6x\mathrm{ln}x),`$
$`F_2(x)`$ $`=`$ $`{\displaystyle \frac{1}{12(x-1)^4}}(2x^3+3x^2-6x+1-6x^2\mathrm{ln}x),`$
$`F_3(x)`$ $`=`$ $`{\displaystyle \frac{1}{2(x-1)^3}}(x^2-4x+3+2\mathrm{ln}x),`$
$`F_4(x)`$ $`=`$ $`{\displaystyle \frac{1}{2(x-1)^3}}(x^2-1-2x\mathrm{ln}x).`$ (10)
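The four loop functions in eq. (10) are finite as $`x1`$ (degenerate masses in the loop), which provides a useful numerical cross-check; a direct transcription is sketched below (the quoted limits follow from Taylor expansion and are not stated in the text):

```python
import math

def F1(x):
    # chargino loop, chirality flip on the external muon leg; F1(1) = 1/24
    return (x**3 - 6*x**2 + 3*x + 2 + 6*x*math.log(x)) / (12.0 * (x - 1)**4)

def F2(x):
    # neutralino loop, chirality flip on the external muon leg; F2(1) = 1/24
    return (2*x**3 + 3*x**2 - 6*x + 1 - 6*x**2*math.log(x)) / (12.0 * (x - 1)**4)

def F3(x):
    # tan(beta)-enhanced chargino term of eq. (7); F3(1) = 1/3
    return (x**2 - 4*x + 3 + 2*math.log(x)) / (2.0 * (x - 1)**3)

def F4(x):
    # tan(beta)-enhanced neutralino term of eq. (8); F4(1) = 1/6
    return (x**2 - 1 - 2*x*math.log(x)) / (2.0 * (x - 1)**3)
```

Evaluating just above $`x=1`$ recovers the degenerate-mass limits, and all four functions stay positive for $`x>1`$.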
In these relations, notation of is assumed. $`C`$’s and $`N`$’s are the couplings of the chiral muon states with the charginos and neutralinos, respectively. $`U_{A2}`$, $`V_{A1}`$ and $`N_{a3}`$ are the elements of the chargino and neutralino mixing matrices, $`A=1,2`$; $`a=1,4`$. $`\mathrm{\Gamma }`$’s are the mixing matrices of the sleptons after the flavor eigenstates have been first rotated by the unitary matrices which diagonalize the respective fermionic states. Note that $`\alpha =1,3`$ for sneutrinos and $`\alpha =1,6`$ for charged sleptons. In particular, for the charged sleptons each $`(\mathrm{\Gamma }_e)_{\alpha i}`$ is a 6$`\times `$3 matrix defined as $`(\mathrm{\Gamma }_{eL})_{\alpha i}=Z_{\alpha i}`$, $`(\mathrm{\Gamma }_{eR})_{\alpha i}=Z_{\alpha i+3}`$, ($`i=1,3`$), where $`Z`$ is a 6$`\times `$6 mixing matrix for charged sleptons of all three generations. For 3 sneutrinos, $`\mathrm{\Gamma }_{\nu L}`$ directly diagonalizes the 3$`\times `$3 sneutrino mass matrix. $`m_{\chi _A^+}`$, $`m_{\chi _a^0}`$, $`m_{\stackrel{~}{\nu }_\alpha }^2`$ and $`m_{\stackrel{~}{\mu }_\alpha }^2`$ are the chargino, neutralino, sneutrino and charged slepton mass eigenvalues, $`m_\mu `$ and $`\lambda _\mu `$ are the muon mass and diagonal muon Yukawa coupling, and $`g_1`$ and $`g_2`$ are the electroweak gauge couplings.
Note that at first glance the terms proportional to $`F_3`$ and $`F_4`$ in eqs.(7, 8), respectively, are enhanced by $`m_\chi /m_\mu `$. The net enhancement is actually of the order $`v_u/v_d=\mathrm{tan}\beta `$. ($`v_d`$ and $`v_u`$ are the Higgs vevs which give masses to the $`d`$-quarks and charged leptons, and to the $`u`$-quarks, respectively.) <sup>3</sup><sup>3</sup>3 Note that there is no additional enhancement due to $`1/\lambda _\mu `$ in the terms proportional to $`F_3`$ and $`F_4`$ after the sum over the mass eigenstates is performed. In fact, any combination of chiral muon states contributes to $`\delta a_\mu ^{SUSY}`$ with the net contribution suppressed by the small Yukawa coupling $`\lambda _\mu `$, as one can see, for instance, from the relevant Feynman diagrams in terms of flavor eigenstates (i.e., in the interaction basis). That also explains why the electron anomalous magnetic moment is less sensitive to new physics than its muon counterpart. The enhancement by tan$`\beta `$ can be traced back to the diagrams with the chirality flip inside the loop (or in one of the vertices), as opposed to the terms proportional to $`F_1`$ and $`F_2`$ where the chirality flip takes place on the external muon leg. There are no similarly enhanced terms in the SM, where the chirality can only be flipped on the external muon leg. Thus for tan$`\beta \gg 1`$ we expect that the terms proportional to $`F_3`$ and $`F_4`$ dominate in eqs.(7) and (8), and that compared to the SM electroweak gauge boson contribution $`\delta a_\mu ^{EW}`$, the SUSY contribution scales approximately as
$$\delta a_\mu ^{SUSY}\sim \delta a_\mu ^{EW}\left(\frac{M_W}{\stackrel{~}{m}}\right)^2\mathrm{tan}\beta \sim 15\times 10^{-10}\left(\frac{100\mathrm{G}\mathrm{e}\mathrm{V}}{\stackrel{~}{m}}\right)^2\mathrm{tan}\beta ,$$
(11)
where $`\stackrel{~}{m}`$ stands for the heaviest sparticle mass in the loop. The effect has been noticed and emphasized by earlier studies focusing on the muon anomalous magnetic moment in the context of the MSSM.
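As a rough numerical illustration of the scaling in eq. (11) (an order-of-magnitude estimate only, not the full loop computation of eqs. (7) and (8)):

```python
def delta_a_mu_susy_estimate(m_susy_gev, tan_beta):
    """Order-of-magnitude SUSY contribution from eq. (11), in units of 1e-10.
    m_susy_gev is the heaviest sparticle mass in the loop."""
    return 15.0 * (100.0 / m_susy_gev)**2 * tan_beta

# tan(beta) = 50 with ~300 GeV sparticles gives roughly 80e-10, which would
# naively saturate the window of eq. (5); the full analysis below shows
# how realistic fits actually behave.
naive = delta_a_mu_susy_estimate(300.0, 50.0)
```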
Relation (11) predicts a very simple tan$`\beta `$ dependence. It suggests that the large tan$`\beta `$ regime of the MSSM may already be constrained by the currently allowed window, eq.(5). That is of interest for models of grand unification based on simple SO(10), since the SO(10) GUT constraint $`y_t=y_b=y_\tau `$ for the Yukawa couplings of the third generation implies tan$`\beta \simeq 50`$. In this study, we first apply our analysis to the MSSM constrained only by gauge coupling unification and, as a warm-up, compute the muon anomalous magnetic moment for fixed values tan$`\beta =2`$ and tan$`\beta =20`$. Next, we proceed to the best fits of a simple SO(10) SUSY GUT. We present our results in terms of the contour lines of constant $`a_\mu ^{SUSY}`$ drawn in the SUSY parameter space. In section 2, we review our numerical analysis. Sections 3 and 4 discuss the results and prospects for the future.
## 2 Numerical Analysis
Our numerical analysis has relied on the top-down global analysis introduced in . Soft SUSY breaking mediated by supergravity (SUGRA) was assumed throughout this paper. As explained at the end of the previous section, the magnitude of tan$`\beta `$ is of special interest for the analysis. Hence we discuss separately three cases with low, medium, and large values of tan$`\beta `$. First, fixed tan$`\beta =2`$ was considered, along with the scalar trilinear parameter $`A_0`$ set to zero to simplify the analysis, which is insensitive to $`A_0`$ in this case. Next, tan$`\beta `$ was raised to $`20`$ and $`A_0`$ was set free to vary. In the third case, we used the results of the global analysis of model 4c, a simple SO(10) model with a minimal number of effective operators leading to realistic Yukawa matrices. The model was suggested by Lucas and Raby and its low-energy analysis was presented in . The direct model dependence of $`a_\mu `$ on the details of the Yukawa matrices is, however, very limited. In this case, tan$`\beta `$ was not fixed to any particular value. Instead, it was a free parameter of the global analysis, but since the model predicted exact $`t`$-$`b`$-$`\tau `$ Yukawa coupling unification, the best fit values of tan$`\beta `$ were always found between 50 and 55, depending on the particular $`(m_0,M_{1/2})`$ point, as explained below. $`A_0`$ was free to vary (as for tan$`\beta =20`$), which allowed the best fits to optimize the effects of the left-right stop mixing, which is enhanced by tan$`\beta `$, in the analysis.
In each of the three cases, gauge coupling unification was imposed up to a small (less than 5%) negative correction, called $`ϵ_3`$, to $`\alpha _s`$ at the unification scale $`M_G`$. Scale $`M_G`$ has been defined as the scale where $`\alpha _1`$ and $`\alpha _2`$ are exactly equal to the common value $`\alpha _G`$. For low and medium tan$`\beta `$ ($`2`$ and $`20`$), the minimal set of the initial SUSY parameters
$$M_{1/2},m_0,A_0,\text{sign}\mu ,\text{and tan}\beta $$
(12)
was assumed, with $`M_{1/2}`$, $`m_0`$, and $`A_0`$ (the universal gaugino mass, scalar mass and trilinear coupling) introduced at $`M_G`$. The Yukawa couplings of the third generation fermions at $`M_G`$ were unconstrained and free to vary independently of each other. For large tan$`\beta `$ in the third case, the scalar Higgs masses were allowed to deviate from $`m_0`$ in order to alleviate the strong fine-tuning required for correct electroweak symmetry breaking. Thus instead of (12), the set
$$M_{1/2},m_0,(m_{H_d}/m_0)^2,(m_{H_u}/m_0)^2,A_0,\text{sign}\mu ,\text{and tan}\beta $$
(13)
was actually used as the set of initial SUSY parameters. As already mentioned, the third-generation Yukawa couplings were set strictly equal to each other at $`M_G`$ in this case. In fact, the SO(10)-based equality among them is the main reason why such a large tan$`\beta `$ is attractive.
The rest of the analysis was then the same for each of the three cases. Particular values of $`m_0`$ and $`M_{1/2}`$ were chosen, while the rest of the initial parameters were allowed to vary. That included varying $`\alpha _G`$, $`M_G`$, $`ϵ_3`$, the third generation Yukawa couplings, and $`A_0`$ (for medium and large tan$`\beta `$). Using the 2-loop RGEs for the dimensionless couplings and 1-loop RGEs for the dimensionful couplings, the theory was renormalized down to the SUSY scale, which was set equal to the mass of the $`Z`$ boson. The electroweak symmetry breaking was checked to one loop as in , based on the effective potential method of ref.. One-loop SUSY threshold corrections to fermion masses were calculated consistently at this scale. That is of particular importance for $`m_b`$, which receives significant corrections if tan$`\beta `$ is large. Also, the experimental constraints imposed by the observed branching ratio $`BR(b\to s\gamma )`$ and by direct sparticle searches were taken into account. Finally, $`\delta a_\mu ^{SUSY}\equiv \delta a_\mu ^{\chi ^+}+\delta a_\mu ^{\chi ^0}`$ was evaluated following eqs.(7) and (8) for those values of the initial parameters which gave the lowest $`\chi ^2`$ calculated from the ten low energy observables $`M_Z`$, $`M_W`$, $`\rho _{new}`$, $`\alpha _s(M_Z)`$, $`\alpha `$, $`G_\mu `$, $`M_t`$, $`m_b(M_b)`$, $`M_\tau `$, and $`BR(b\to s\gamma )`$. Details of the low energy analysis can be found in . The calculated value of $`\delta a_\mu ^{SUSY}`$ did not have any effect on the $`\chi ^2`$ calculation and the subsequent selection of the SUSY parameter space.
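The fit quality driving the parameter selection is an ordinary $`\chi ^2`$ over the ten observables; schematically (the observable values and uncertainties below are placeholders, not the ones used in the quoted fits):

```python
def chi_squared(predictions, observations, errors):
    """Standard chi^2 over independent observables."""
    return sum(((p - o) / e)**2
               for p, o, e in zip(predictions, observations, errors))

# e.g. three of the ten observables (illustrative numbers only):
chi2 = chi_squared([91.19, 80.40, 0.118],    # predicted M_Z, M_W, alpha_s
                   [91.187, 80.41, 0.119],   # measured central values
                   [0.091, 0.09, 0.003])     # uncertainties used in the fit
```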
## 3 Results and Discussion
The results for tan$`\beta =2`$ and tan$`\beta =20`$ are shown in figures 1–4. Figures 1 and 2 show the contour lines of constant $`\delta a_\mu ^{SUSY}\times 10^{10}`$ in the $`(m_0,M_{1/2})`$ plane. These results are then transformed into the dependence on physical masses in figures 3 and 4, where we chose the $`(m_{\stackrel{~}{\nu }},m_{\chi ^+})`$ plane, with $`m_{\stackrel{~}{\nu }}`$ being the muon sneutrino mass and $`m_{\chi ^+}`$ the mass of the lighter chargino. The contour lines in these figures are bounded from below by the limit on the neutralino mass (a limit $`m_{\chi ^0}>55`$ GeV was imposed flatly, for simplicity), and from above by the stau mass ($`m_{\stackrel{~}{\tau }}>60`$ GeV, in this analysis). The important observation is that the present limits on $`\delta a_\mu ^{SUSY}`$, eq.(5), do not exclude any region of the parameter space which is left available by other experiments. The size of $`\delta a_\mu ^{SUSY}`$ is in agreement with the results of Chattopadhyay and Nath, and of Moroi. We can also confirm their observation that the neutralino contribution $`\delta a_\mu ^{\chi ^0}`$ is generally much smaller than the chargino contribution $`\delta a_\mu ^{\chi ^+}`$ in the whole of the available parameter space. Yet several interesting characteristics of the presented results cannot be read off directly from the earlier studies. The first is the simple pattern suggested by figures 3 and 4, which shows a strong dependence of $`\delta a_\mu ^{SUSY}`$ on the sneutrino mass and very little sensitivity to the mass of the lighter chargino. That can be understood from the fact that the loop integrals are in general most sensitive to the heaviest mass in the loop, and the universal initial conditions (see (12)) together with the experimental limits from direct searches always lead to $`m_{\stackrel{~}{\nu }}>m_{\chi ^+}`$ at the electroweak scale. In fact, the figures show how well the approximate relation (11) works.
One could, for example, directly read off the value of the sneutrino mass from a figure of this type once a more accurate measurement of $`a_\mu `$ and the value of tan$`\beta `$ become available. Alternatively, a more accurate measurement of $`a_\mu `$ can be converted into a limit on tan$`\beta `$ in a straightforward way, if the sneutrino mass is known. Also note how well the linear dependence $`\delta a_\mu ^{SUSY}\propto \mathrm{tan}\beta `$ holds. If we overlapped figures 3 and 4, the contour lines marked as 10, 5, 2, 1, and 0.5 in figure 3 would be practically on top of the contour lines marked as 100, 50, 20, 10, and 5 in figure 4. That suggests that the tan$`\beta `$-enhanced terms in eqs.(7) and (8) become dominant already for tan$`\beta \simeq 2`$. We checked this feature in the numerical analysis: typically, the term proportional to $`F_3`$ in eq.(7) accounts for about 80–85% of the net $`\delta a_\mu ^{\chi ^+}`$ for tan$`\beta =2`$, while it is 97–99% for tan$`\beta =20`$. (The analogous term in eq.(8) is less dominant, but in any case has a smaller net effect since $`\delta a_\mu ^{\chi ^0}<\delta a_\mu ^{\chi ^+}`$.)
From these results one could extrapolate that the current $`a_\mu `$ limit, eq.(5), places an important constraint on the analysis if tan$`\beta `$ is as large as 50.
However, the case tan$`\beta \simeq 50`$ is qualitatively different. As discussed in the study by Blažek and Raby, the global analysis yields two distinct fits, figs. 5a–b. The fits differ by the sign of the Wilson coefficient $`C_7^{MSSM}`$ in the effective quark Hamiltonian below the electroweak scale. (The coefficient $`C_7`$ determines the $`b\to s\gamma `$ decay amplitude, after the QCD renormalization effects are taken into account.) In the MSSM with large tan$`\beta `$, the sign of $`C_7`$ can be the same as in the SM, or the opposite. It can be reversed because the chargino contribution can be of either sign. This contribution is enhanced by tan$`\beta `$ compared to the SM and charged Higgs contributions, whose signs are fixed and alike, and thus the flipped sign of $`C_7^{MSSM}`$ cannot be obtained for low tan$`\beta `$. <sup>4</sup><sup>4</sup>4 The contributions with neutralinos or gluinos in the loop always turn out to be small in our analysis. For tan$`\beta \simeq 50`$ the fit with the flipped sign is as good as the fit with the sign unchanged, see figure 5. The fits differ in the range of the allowed SUSY parameter space: to reverse the sign, the chargino contribution requires lower squark masses and different values of $`A_0`$ than in the case where the sign is unchanged.
One can anticipate that these differences will be reflected in the analysis of $`\delta a_\mu ^{SUSY}`$ with tan$`\beta \simeq 50`$, since the quark sector is correlated with the lepton sector through the unification constraint, and the soft masses are interrelated through the universality assumption. Due to the differences in SUSY spectra we can expect $`\delta a_\mu ^{SUSY}`$ to be more significant in the fit with the flipped sign of $`C_7^{MSSM}`$ than in the fit where the signs are the same. These expectations are indeed realized in our results in figs. 6a and 6b. In these figures, we plot the contour lines in the ($`m_0,M_{1/2}`$) plane of $`\delta a_\mu ^{SUSY}`$, calculated according to eqs. (7) and (8), with all the masses, mixings and couplings taken over from the best fit values at the specific ($`m_0,M_{1/2}`$) point in the SUSY space.<sup>5</sup><sup>5</sup>5 We do not show plots in the $`(m_{\stackrel{~}{\nu }},m_{\chi ^+})`$ plane in this case. For large tan$`\beta `$ the best fits of the global analysis result in the lightest chargino being higgsino-like, with its mass close to the electroweak scale across the whole $`(m_0,M_{1/2})`$ plane. Thus the two-dimensional plots in the $`(m_{\stackrel{~}{\nu }},m_{\chi ^+})`$ plane would contract to a dependence on $`m_{\stackrel{~}{\nu }}`$ only. The important feature, which runs contrary to the naive extrapolation from the previous two cases, is that the MSSM contribution to the muon anomalous magnetic moment stays within the currently allowed range at 90% C.L., even for tan$`\beta `$ as large as predicted by simple SO(10) GUTs. Despite the fact that figures 1–4 suggest that the analysis with tan$`\beta =50`$–55 becomes sensitive to the current constraint on $`a_\mu `$, eq.(5), our results in fig. 6 clearly indicate that this is not the case. The reason for this is that we consistently demand that all known particle physics constraints (besides those imposed by $`a_\mu `$) are accounted for.
For tan$`\beta `$ as large as 50, the allowed SUSY parameter range is further reduced by strong constraints on the $`b`$ quark mass and the branching ratio $`BR(b\to s\gamma )`$, when compared to the regime where tan$`\beta =2`$ or $`20`$.
Finally, we make a note on the sign of $`\mu `$. As clearly indicated at the top of each figure, we have presented our results just for $`\mu >0`$. For this sign of the $`\mu `$ parameter, the SUSY contributions to $`a_\mu `$ are positive across the whole ($`m_0,M_{1/2}`$) plane. For $`\mu <0`$, the contributions change sign too. However, the chargino contribution to $`C_7`$ also changes sign in this case, which leads to unacceptable values of $`BR(b\to s\gamma )`$ for medium and large tan$`\beta `$. It is amazing to observe that for this range of tan$`\beta `$ the sign of the $`\mu `$ parameter favored by $`b\to s\gamma `$ is the same as the sign preferred by the experimental window open for $`\delta a_\mu ^{NEW}`$. <sup>6</sup><sup>6</sup>6 For low tan$`\beta `$, the $`b\to s\gamma `$ constraint disappears (the chargino contribution to $`C_7`$ is small) and $`\mu `$ can be negative. However, figure 1 shows that $`\delta a_\mu ^{SUSY}`$ is very small in this case. We know from the earlier studies that no substantial change in the magnitude of $`\delta a_\mu ^{SUSY}`$ occurs for different signs of $`\mu `$. Thus we conclude that for $`\mu <0`$ only the low tan$`\beta `$ case is viable, but then the SUSY contributions to $`a_\mu `$ stay below the electroweak contributions and will be hard to observe even after the E821 experiment is completed.
## 4 Prospects for the New BNL Experiment and Conclusions
When the future BNL experiment reduces the uncertainty on $`a_\mu `$ down to $`\pm 12\times 10^{-10}`$ at 90% C.L., eq.(6), the muon anomalous magnetic moment will undoubtedly turn into a powerful constraint on the MSSM analysis. It is already clear from figures 1, 2, and 6 that it will be a major constraint for large and medium tan$`\beta `$ in the region $`m_0<400`$–500 GeV. Of course, the BNL result may drastically affect the MSSM analysis with any value of tan$`\beta `$ if the observed central value turns out to be below (or well above) the SM prediction. For such an outcome, the muon anomalous magnetic moment will actually become a dominant constraint for the MSSM analysis with the universal SUGRA-mediated SUSY breaking terms.
In the meantime, $`a_\mu `$ does not pose any new constraints on the MSSM analysis under the terms considered in this work.
Acknowledgments
I would like to thank Greg Anderson, Stuart Raby and Carlos Wagner for helpful discussions, various suggestions and support. |
no-problem/9912/astro-ph9912477.html | ar5iv | text | # Radio, optical and X-ray nuclei in nearby 3CRR radio galaxies
## 1 Introduction
On sub-arcsecond scales, the central radio components, or ‘cores’, of radio galaxies are understood, as a result of VLBI observations, to be the unresolved bases of the relativistic jets that transport energy to the lobes, seen in partly self-absorbed synchrotron radiation. Superluminal motion is seen in some sources, and the supposed unification of radio galaxies with BL Lac objects and quasars (e.g. Urry & Padovani 1995) requires the cores to be relativistically beamed with Lorentz factors $`\gtrsim 5`$. Hardcastle & Worrall (1999) and Canosa et al. (1999) have found that the nuclear soft-X-ray emission in radio galaxies is well correlated with the core radio emission. This finding implies that the X-ray emission is also relativistically beamed and so must originate in the jet, and is qualitatively consistent with models in which the strong X-ray emission of BL Lac objects is largely a result of relativistic boosting.
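The beaming invoked here is conventionally quantified by the Doppler factor δ = 1/(Γ(1 − β cos θ)), with the observed flux density boosted by δ to the power 2+α for a continuous jet (or 3+α for a discrete component). A minimal sketch (the choice of exponent and the illustrative angles are assumptions, not values from this paper):

```python
import math

def doppler_factor(gamma, theta_deg):
    """delta = 1 / (Gamma * (1 - beta * cos(theta))) for bulk Lorentz factor Gamma."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

def boost(gamma, theta_deg, alpha=0.0, p=2):
    """Flux boosting factor delta**(p + alpha); p = 2 for a continuous jet."""
    return doppler_factor(gamma, theta_deg) ** (p + alpha)

# A Gamma = 5 jet seen at 5 deg (BL Lac-like) versus 60 deg (radio-galaxy-like)
ratio = boost(5.0, 5.0) / boost(5.0, 60.0)
```

For these numbers the aligned source appears brighter by a factor of a few hundred, which is the sense in which core prominence traces orientation in unified models.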
A snapshot survey of 3CR radio sources with the HST Wide Field and Planetary Camera 2 (WFPC2) (De Koff et al. 1996) has detected unresolved optical nuclear components in a large number of objects, particularly at low redshift (Martel et al. 1999), where, because of the luminosity-redshift correlation in the 3CR sample, most radio sources are of type FRI rather than FRII (Fanaroff & Riley 1974). Chiaberge, Capetti & Celotti (1999) find that the optical luminosities of the FRI radio galaxies are correlated with the luminosities of their 5-GHz radio cores, which, by the same argument as used by Hardcastle & Worrall for the X-ray emission, implies a jet-related origin for the nuclear optical emission. Capetti & Celotti (1999) compare the optical nuclear luminosities of five of the detected FRI radio galaxies with those of BL Lac objects matched in isotropic properties and find support for unification in the optical assuming relativistic jets with Lorentz factors in the range 5 to 10. In this paper we examine the radio, optical and X-ray nuclear fluxes and luminosities of a sample of low-redshift radio galaxies in order to explore the X-ray and optical emission mechanisms.
Throughout the paper we adopt a cosmology with $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. Spectral index $`\alpha `$ is defined in the sense that flux density is proportional to $`\nu ^{-\alpha }`$.
## 2 Observations and data reduction
We selected the 27 radio galaxies with $`z<0.06`$ from the 3CRR sample (Laing, Riley & Longair 1983, Laing & Riley, in prep.); the redshift limit was chosen to ensure good spatial resolution, so that nuclei could adequately be separated from the dusty discs that surround them. The status of the archival WFPC2 HST observations of these objects is tabulated in Table 1. The majority of the observations were made as part of the snapshot survey, and so are short (280 s) and use the Planetary Camera (PC) with the F702W filter, but we have used longer observations in this or other broad-band red filters where they exist in the archives. Bias subtraction and flat-field corrections had been applied automatically as part of the standard pipeline, and we performed cosmic ray rejection using the crrej program in iraf.
Almost every observed source (17/20) in this sample has an optical nucleus which is unresolved with the HST (i.e., has a linear size less than $`\sim 50`$ pc), and the majority (13) of these also show disc-like dust features. The two broad-line objects in the sample, 3C 382 and 3C 390.3, show atypically strong nuclei, as would be expected from their ground-based classification as N-galaxies.
For the 17 sources with detected point-like nuclei we used iraf to carry out small-aperture photometry on the nucleus. Source regions were typically only 3–5 PC pixels (1 pixel is 46 milliarcsec), and backgrounds were taken from an annulus around the source close to the nucleus (up to 10 pixels) to minimise the contribution from the stellar continuum to the net core emission. The resulting measurements were corrected for flux lying outside the extraction regions using the aperture corrections given by Holtzman et al. (1995), and then converted to flux densities at the central frequency of the filter ($`4.3\times 10^{14}`$ Hz for F702W, $`3.8\times 10^{14}`$ Hz for F791W and F814W) using the synphot package in iraf, assuming a power-law spectrum with $`\alpha =0.8`$ (the results are insensitive to the assumed spectrum). Flux densities are tabulated in Table 2, together with corresponding radio and X-ray values.
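The small-aperture photometry described above amounts to a source-minus-annulus sum followed by an aperture correction; a minimal numpy sketch on a toy frame (the frame, aperture radii and unit correction factor here are illustrative, not the Holtzman et al. values):

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_src, r_in, r_out, ap_corr=1.0):
    """Net counts in a circular aperture, background from a surrounding annulus."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    src = r <= r_src
    ann = (r > r_in) & (r <= r_out)
    background = np.median(image[ann])       # local per-pixel background level
    net = image[src].sum() - background * src.sum()
    return net * ap_corr                     # correct for flux outside r_src

# Toy frame: flat galaxy background plus a 1000-count point source at (50, 50)
frame = np.full((101, 101), 5.0)
frame[50, 50] += 1000.0
counts = aperture_photometry(frame, 50, 50, r_src=3, r_in=6, r_out=10)
```

Taking the background from a tight annulus, as in the text, is what suppresses the stellar-continuum contribution to the net core flux.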
Our analysis is independent of that of Chiaberge et al. (1999), though their sample contains a large number of objects (12 out of the 17 in Table 2) in common with ours. Comparing the flux densities tabulated in their table 3 with our measurements for the overlapping objects, we find agreement to better than a factor of 2 in almost all cases, which is reasonable considering the uncertainties in background subtraction.
## 3 Results
### 3.1 Radio-optical correlation
The 5-GHz radio versus optical correlation for the 17 sources with point-like optical cores is shown in Fig. 1. The correlation is strong over several orders of magnitude of luminosity, extending further than that of Chiaberge et al. (1999) primarily because of the inclusion of the luminous broad-line objects 3C 382 and 3C 390.3. The optical and radio flux densities are also well correlated, and a partial Kendall’s $`\tau `$ test shows that the luminosity-luminosity correlation is not induced by a common correlation with redshift (at the 95 per cent confidence level).
The one narrow-line FRII in the sample, 3C 192, lies some way off the correlation. This may be an artefact of the lower resolution of the HST observations (Table 1), or it may indicate a difference between FRIs and FRIIs.
All three objects which do not have a measured point-like optical nucleus in the HST observations do not have a nucleus for reasons which are likely to be unrelated to their true optical nuclear flux (see notes to Table 1). The sub-sample of 17 sources that we consider is therefore not biased with respect to the optical flux, and the three missing objects would not be expected to alter the correlation of Fig. 1.
### 3.2 Optical-X-ray correlation
The 17 sources with point-like optical cores were all targets for ROSAT pointed observations, and X-ray flux densities for the cores (almost all of which were detected) are given by Hardcastle & Worrall (1999) and tabulated in Table 2. The X-ray versus optical correlation (Fig. 2) is at least as good as that seen in the radio-optical plot (Fig. 1), and the scatter about a regression line appears to be considerably less, suggesting that there may be a closer relationship between the X-ray and optical emission. Again, 3C 192 is an outlier.
### 3.3 Colour-colour plots
Rest-frame radio-to-optical and optical-to-X-ray spectral indices ($`\alpha _{\mathrm{RO}}`$ and $`\alpha _{\mathrm{OX}}`$, respectively) are given in Table 2 and plotted in Fig. 3. Also plotted for comparison are the corresponding spectral indices for the X-ray-observed BL Lac objects with measured redshifts in the sample of Fossati et al. (1998). K-corrections were made to all the data points using $`\alpha _\mathrm{R}=0`$, $`\alpha _\mathrm{O}=0.8`$ and $`\alpha _\mathrm{X}=0.8`$, and the BL Lac optical fluxes are also ‘corrected’ from the V-band measurements tabulated by Fossati et al. to our observing wavelength of $`\sim 7000`$ Å.
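With the convention of Section 1 that flux density falls as a power of frequency, a two-point index between bands and the power-law K-correction each take one line; a sketch (the example fluxes are invented, chosen only to give an index of the typical size seen in Table 2):

```python
import math

def two_point_index(s1, nu1, s2, nu2):
    """Spectral index alpha between two bands, with S proportional to nu**(-alpha)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

def k_corrected(s_obs, z, alpha):
    """Rest-frame flux density for a power law S ~ nu**(-alpha)."""
    return s_obs * (1.0 + z) ** (alpha - 1.0)

# alpha_RO between 5 GHz and 7000 Angstrom (4.3e14 Hz), fluxes in Jy:
alpha_ro = two_point_index(0.1, 5e9, 5e-5, 4.3e14)
```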
We might expect from unified models that the cores of the FRI radio galaxies would occupy a similar region of the $`\alpha _{\mathrm{RO}}`$–$`\alpha _{\mathrm{OX}}`$ plane to radio-selected BL Lac objects (RBLs). In fact, Fig. 3 shows that they occupy a region which overlaps with the RBLs, and is clearly distinguishable from the region occupied by X-ray selected BL Lacs (XBLs); the only two objects to overlap with the XBLs are the broad-line radio galaxies 3C 382 and 3C 390.3. There is, however, a clear difference between the radio galaxies and RBLs, in the sense that radio galaxies tend to have steeper $`\alpha _{\mathrm{RO}}`$ and flatter $`\alpha _{\mathrm{OX}}`$ than RBLs. Since the distribution of $`\alpha _{\mathrm{RX}}`$ of FRI radio galaxies, both in this sample and in 3CRR as a whole (Hardcastle & Worrall 1999), is very similar to that of RBLs, this result at first sight implies that radio galaxies tend to have fainter optical cores than we would expect from the behaviour of BL Lacs. We return to this point below.
## 4 Discussion
### 4.1 FRII objects
The only narrow-line FRII object in the sample is 3C 192, which is an outlier on all the correlations involving optical data, since its optical flux is over-bright relative to its radio or X-ray flux density. There is likely to be a significant amount of line contamination in the HST flux of this object, and it was observed with the WFC rather than the PC, reducing our ability to remove background emission. Nevertheless, it is possible that the anomalous behaviour of 3C 192 is an indication of a difference between FRI and FRII objects. No such difference is apparent in the X-ray and radio data alone (Hardcastle & Worrall 1999).
The two broad-line objects, also FRIIs, have colours which are significantly different from those of the rest of the sample. Both objects are affected by saturation of the PC CCDs, but this effect means that we are underestimating the optical fluxes, and they may be even more extreme outliers on Fig. 3. However, both objects are known to be variable in the radio and X-ray, and the data we have used are of varying vintage, so it is dangerous to draw conclusions concerning broad-line FRII galaxies based on Fig. 3.
### 4.2 The origins of the optical and X-ray emission
The correlation in Fig. 1 implies that the optical emission originates in the jet. The most likely emission mechanism is then synchrotron radiation, particularly as optical synchrotron emission is seen from the kiloparsec-scale jets of several radio galaxies (e.g. Martel et al. 1998 and references therein), including 3C 66B, 3C 264 and M87 in our sample.
Several factors make it difficult to constrain the process responsible for the X-radiation using the radio and optical emission. We know from the flat radio spectra of the cores that they are self-absorbed at cm wavelengths, so we can draw no strong conclusions on the origins of the optical and X-ray emission from the generally convex shapes of the radio-optical-X-ray spectra; there are few observations of cores of radio galaxies at radio frequencies high enough to avoid self-absorption.
A well-determined core optical spectrum could provide strong clues about the mechanism of the X-ray emission. A second optical colour, in the F547M or F555W filters, is available for a few nuclei in our sample, and in the majority of cases the derived optical spectral index is steeper than the optical-to-X-ray spectral index, which would naïvely suggest that the X-ray and optical emission cannot both be synchrotron emission from a single population of electrons. However, even modest amounts of intrinsic or Galactic reddening can make a substantial difference to the measured optical spectral index (e.g. Ho 1999). Reddening will be significant if the geometry is such that part of a dusty disc lies along the line of sight to the nucleus. In the case of NGC 6251, Ferrarese & Ford (1999) have shown that the reddening inferred from a comparison of the obscured and unobscured regions of the galaxy is consistent with the intrinsic $`N_H`$ inferred from X-ray absorption by Birkinshaw & Worrall (1993), so that the dusty disc may be obscuring both the optical and X-ray nuclear emission. 3C 264, where the plane of the disc appears to be almost perpendicular to the line of sight, has the flattest measured optical spectral index in the sample, and so may be the least affected by reddening, consistent with this picture.
Therefore the present data cannot rule out a synchrotron origin for the X-ray emission, when the possible effects of intrinsic reddening are taken into account. If the X-ray cores were synchrotron, it would go some way towards explaining the tight relationship between the optical and X-ray nuclear luminosities. However, an inverse-Compton origin for the X-ray nuclei may be required by unified models, as discussed in the next section. Further HST observations are needed to measure the reddening and the intrinsic optical spectral index in a large sample of radio galaxies, if broad-band spectra are to constrain the X-ray emission process.
### 4.3 Constraints from unified models
We can combine the X-ray, optical and radio data for FRI radio galaxies to provide a new test of the standard BL-Lac/FRI unified model, in which relativistic beaming is important in the nuclei and BL Lac objects are FRIs seen at small angles to the line of sight. In such a model, the spectral energy distributions (SEDs) of FRIs should be dimmed, redshifted versions of those of RBLs. We can test this model by investigating the regions that the two populations occupy in the $`\alpha _{\mathrm{RO}}`$–$`\alpha _{\mathrm{OX}}`$ plane of Fig. 3.
If BL Lac spectra were one-component power laws extending from the radio to the X-ray, we would expect that the FRIs and BL Lacs would populate similar regions of the $`\alpha _{\mathrm{RO}}`$$`\alpha _{\mathrm{OX}}`$ plane. In fact, the SEDs of BL Lac objects between radio and X-ray bands are often well modelled by a smoothly curved function (e.g. a parabola; Landau et al. 1986, Sambruna et al. 1996). Redshifting and dimming such a spectrum to produce the expected spectrum of an FRI radio galaxy should result in both $`\alpha _{\mathrm{RO}}`$ and $`\alpha _{\mathrm{OX}}`$ being somewhat steeper in radio galaxies than in RBLs, which is not observed. Instead, $`\alpha _{\mathrm{OX}}`$ is flatter in radio galaxies. (The effect on the SED from cosmological redshift, given that the RBLs of Fig. 3 are more distant than our sample of FRIs, is small compared to that expected from relativistic beaming.)
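The sense of this argument can be illustrated numerically. The sketch below uses the conventional two-point spectral index, $`\alpha =\mathrm{log}(S_2/S_1)/\mathrm{log}(\nu _2/\nu _1)`$, and a toy log-parabolic flux-density spectrum in the spirit of the parabolic fits of Landau et al. (1986); the curvature, peak frequency and band frequencies are illustrative assumptions, not values taken from the data:

```python
def alpha(lnu1, ls1, lnu2, ls2):
    # Two-point spectral index for S ~ nu^-alpha (inputs are log10 values);
    # a larger alpha means a steeper (more rapidly falling) spectrum.
    return -(ls2 - ls1) / (lnu2 - lnu1)

def log_S(log_nu, log_nu_peak, b=0.2):
    # Toy log-parabolic flux-density spectrum; the curvature b and the peak
    # frequency are illustrative only, not fits to any real SED.
    return -b * (log_nu - log_nu_peak) ** 2

lR, lO, lX = 9.7, 14.7, 17.4   # log10(nu/Hz): 5 GHz, ~550 nm, 1 keV

# A "BL Lac" SED peaking near 100 GHz, and the same SED shifted two decades
# down in frequency, as the dimmed-and-redshifted picture predicts for FRIs.
# (A pure luminosity shift is vertical in log space and cancels in any
# two-point slope, so only the frequency shift matters here.)
a0_RO = alpha(lR, log_S(lR, 11.0), lO, log_S(lO, 11.0))
a0_OX = alpha(lO, log_S(lO, 11.0), lX, log_S(lX, 11.0))
a2_RO = alpha(lR, log_S(lR, 9.0), lO, log_S(lO, 9.0))
a2_OX = alpha(lO, log_S(lO, 9.0), lX, log_S(lX, 9.0))

print(a0_RO, a0_OX)   # ~0.48 and ~2.0 for the unshifted curve
print(a2_RO, a2_OX)   # both indices steepen after the shift
```

With the curve moved to lower frequencies, both two-point indices steepen, which is the behaviour the naive shifted-SED picture predicts for FRIs and which the observed flat $`\alpha _{\mathrm{OX}}`$ contradicts.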
The nuclear X-ray emission from the FRIs has been separated from the thermal emission from their host groups or clusters (Hardcastle & Worrall 1999), and we are confident that we are not overestimating the nuclear X-ray emission significantly in these sources. In contrast, the discrepancy between the $`\alpha _{\mathrm{OX}}`$ distributions for FRIs and BL Lac objects may be underestimated as a result of a thermal contribution to the X-ray fluxes of the RBLs, which can be significant at the $`10`$ per cent level (e.g. Hardcastle, Worrall & Birkinshaw 1999).
Reddening in the optical nuclei observed with HST, as a result of absorption by the dusty disc seen in the HST images, may have an effect. The FRIs can be made to have a very similar distribution of $`\alpha _{\mathrm{RO}}`$ to the RBLs if it is assumed that obscuration causes us to underestimate the red optical fluxes by a factor $`2`$ to $`3`$ in the radio galaxies only. However, such a high degree of reddening would imply a typical $`A_\mathrm{V}2`$, and therefore, for typical gas/dust ratios, an intrinsic column density of order $`5\times 10^{21}`$ cm<sup>-2</sup>. Such column densities are somewhat higher than is observed for the soft X-ray cores of well-studied FRIs (e.g. Birkinshaw & Worrall 1993; Worrall & Birkinshaw 1994). More importantly, they would mean that we are underestimating the unabsorbed 1-keV X-ray flux densities of the FRIs (which are calculated on the assumption of galactic absorption) by a similar factor. Thus $`\alpha _{\mathrm{OX}}`$ is not strongly affected by reddening at this level, and we cannot account for the flat $`\alpha _{\mathrm{OX}}`$ in radio galaxies in this way.
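The chain of numbers in this paragraph can be reproduced with standard conversions; the extinction-curve ratio $`A_{\mathrm{red}}0.6A_\mathrm{V}`$ and the Galactic gas-to-dust ratio used below are our assumptions for the sketch, not values stated in the text:

```python
import math

# Factor-2 to factor-3 dimming of the red optical nuclear fluxes
dimming = 3.0
delta_m_red = 2.5 * math.log10(dimming)   # extinction in the red band, mag

# Assume A_red ~ 0.6 A_V for a Galactic-type extinction curve (assumption)
A_V = delta_m_red / 0.6                   # ~2 mag, as quoted

# Galactic gas-to-dust ratio N_H/E(B-V) ~ 5.8e21 cm^-2 mag^-1 with R_V = 3.1
# (assumed conversion); gives a column of order 5e21 cm^-2, as quoted.
N_H = 5.8e21 * A_V / 3.1

print(round(delta_m_red, 2), round(A_V, 1), f"{N_H:.1e}")
```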
So the relatively flat $`\alpha _{\mathrm{OX}}`$ in FRIs appears to be real. If unified models are correct, this observation gives us a strong reason to believe that the X-ray nuclei of FRIs are dominated by inverse-Compton emission. To make the data consistent with unified models, it must be the case that the $`\alpha _{\mathrm{OX}}`$ of FRIs is observed to be relatively flat because a new component of the RBL spectrum, not well modelled by the simple parabolae of Landau et al. (1986), is dominant at X-ray energies in the FRIs but largely blueshifted out of the observed band in RBLs. In BL Lacs, an inverse-Compton component of the emission has long been expected to dominate at high energies, and this component is believed to be responsible for the second peak in the SED seen in $`\gamma `$-rays at MeV–GeV energies (e.g. Fossati et al. 1998); discrepancies between the simple parabolic models and the data, and the flat spectra seen in the X-ray in some BL Lacs, can also be attributed to this component (Worrall 1989; Sambruna et al. 1996).
We therefore favour an inverse-Compton model for the nuclear X-ray emission seen in these nearby radio galaxies. If the X-ray nuclei of FRIs are indeed dominated by inverse-Compton emission, then we expect them to have flat soft X-ray spectra, a prediction that will be testable with forthcoming Chandra observations.
## Acknowledgements
This work was supported by PPARC grant GR/K98582.
# High sensitivity two-photon spectroscopy in a dark optical trap, based on electron shelving.
## Abstract
We propose a new spectroscopic method for measuring weak transitions in cold and trapped atoms, which exploits the long interaction times and tight confinement offered by dark optical traps together with an electron shelving technique to achieve extremely high sensitivity. We demonstrate our scheme by measuring a $`5S_{1/2}\rightarrow 5D_{5/2}`$ two-photon transition in cold Rb atoms trapped in a new single-beam dark optical trap, using an extremely weak probe laser power of $`25`$ $`\mu `$W. We were able to measure transitions with an excitation rate as small as 0.09 s<sup>-1</sup>.
PACS number(s): 39.30.+w, 32.80.Pj, 32.80.Rm, 32.90.+a
The strong suppression of Doppler and time-of-flight broadenings due to the ultra-low temperatures, and the possibility to obtain very long interaction times, are obvious advantages of using cold atoms for spectroscopy. Convincing examples of such precision spectroscopic measurements are cold atomic clocks. For RF clock transitions, long interaction times are usually obtained in an atomic fountain, while for optical metastable clock transitions free-expanding atomic clouds are used.
Even longer interaction times can be obtained for cold atoms trapped in optical dipole traps. To obtain long atomic coherence times, spontaneous scattering of photons and energy-level perturbations caused by the trapping laser are reduced by increasing the laser detuning from resonance. To further reduce scattering, blue-detuned optical traps, where repulsive light forces confine atoms mostly in the dark (dark traps), have been developed, achieving atomic coherence of 7 s. The wide use of dark traps has been limited by relatively complex setups that require multiple laser beams or gravity assistance. The recent development of single-beam dark traps makes them more attractive for precision spectroscopy.
Dark traps have an additional advantage that makes them especially useful for spectroscopic measurements of extremely weak optical transitions. While preserving long atomic coherence times, these traps can provide large spring constants and tight confinement of trapped atoms, ensuring good spatial overlap even with a tightly focused excitation laser beam. Therefore the atoms can be exposed to a much higher intensity of the excitation laser, yielding a further increase in sensitivity for very weak transitions.
In this letter we present a new and extremely sensitive method for measuring weak transitions with cold atoms in a far-detuned single-beam dark trap using electron shelving spectroscopy. Recently, a similar technique was adapted to demonstrate quantum-limited detection of narrow-linewidth transitions on a free-expanding cold atomic cloud. Our scheme is based on a $`\mathrm{\Lambda }`$ system. Atoms with two ground states (for example, two hyperfine levels) are stored in the trap in a level $`|g_1\rangle `$ that is coupled to the upper (excited) state, $`|e\rangle `$, by an extremely weak transition. An atom that undergoes the weak transition may be shelved, by a spontaneous Raman transition, into the second ground level, $`|g_2\rangle `$, which is not coupled to the excited level by the weak transition. After waiting long enough, a significant fraction of the atoms will be shelved in this second level. Finally, the detection scheme benefits from the repeatedly excited fluorescence of a strong closed transition driven from $`|g_2\rangle `$, which provides quantum amplification due to the electron shelving technique.
We realized this scheme on a $`5S_{1/2}\rightarrow 5D_{5/2}`$ two-photon transition in cold and trapped <sup>85</sup>Rb atoms (see Fig. 1 for the relevant energy levels) using an extremely weak ($`25`$ $`\mu `$W) laser beam, and we were able to measure transitions with an excitation rate as small as 0.09 s<sup>-1</sup>.
Precision spectroscopy of the two-photon transition in Rb atoms was previously demonstrated in a hot vapor with much higher laser power. In cold Rb atoms this transition was measured either on free-expanding atoms using a mode-locked laser or on atoms trapped in a doughnut-mode magneto-optical trap. In all those schemes the fluorescent $`420`$ nm photons were used to detect the two-photon transition.
Our spectroscopic measurement was made on cold <sup>85</sup>Rb atoms trapped in a rotating-beam optical trap (ROBOT). The operation principles of the ROBOT are described elsewhere. Briefly, a linearly polarized, tightly focused ($`16`$ $`\mu `$m $`1/e^2`$ radius) Gaussian laser beam is rapidly ($`100`$ kHz) rotated by two perpendicular acousto-optic scanners, as seen in Fig. 2. This forms a dark volume which is completely surrounded by time-averaged dipole potential walls. The wavelength of the trapping laser was $`770`$ nm ($`10`$ nm above the D<sub>2</sub> line) and its power was $`380`$ mW. The initial radius of the rotation was optimized for efficient loading of the ROBOT from a magneto-optical trap (MOT). $`700`$ ms of loading, $`47`$ ms of compression and $`3`$ ms of polarization-gradient cooling produced a cloud of $`3\times 10^8`$ atoms, with a temperature of $`9`$ $`\mu `$K and a peak density of $`1.5\times 10^{11}`$ cm<sup>-3</sup>.
After all laser beams were shut off (except for the ROBOT beam, which was overlapping the center of the MOT), $`3\times 10^5`$ atoms were typically loaded into the trap, with temperature and density comparable with those of the MOT. Next, we adiabatically compressed the trap by reducing the radius of rotation of the trapping beam from $`70`$ $`\mu `$m to $`29`$ $`\mu `$m, so that the atoms would match the waist of the two-photon laser and further increase the efficiency of the transition. The size of the final cloud in the radial direction was measured by absorption imaging and the temperature of the atoms was measured by time-of-flight fluorescence imaging. From these measurements and using our precise characterization of the trapping potential, the parameters of the final cloud are: radial size ($`1/e^2`$ radius) of $`19`$ $`\mu `$m, axial size of $`750`$ $`\mu `$m, rms temperatures of $`55`$ $`\mu `$K \[$`9`$ $`\mu `$K\] in the radial \[axial\] direction, and a density of $`7\times 10^{11}`$ atoms cm<sup>-3</sup>. The $`1/e`$ lifetime of atoms in the trap was measured to be $`350`$ ms for both hyperfine ground states and was limited by collisions with background atoms. We measured the spin relaxation time of the trapped atoms to be $`>1`$ s, by measuring spontaneous Raman scattering between the two ground-state levels.
The spectroscopy was performed with an external-cavity diode laser which was tuned to the $`5S_{1/2}F=3\rightarrow 5D_{5/2}F^{}`$ two-photon transition ($`777.9`$ nm) and was split into two parts. The first part ($`10`$ mW) was used to frequency stabilize the laser using the $`420`$ nm fluorescence signal from the two-photon excitation obtained from a 130 °C Rb vapor cell. The laser was focused into the cell to a $`100`$ $`\mu `$m $`1/e^2`$ radius and reflected back to obtain Doppler-free spectra. We locked the laser to the atomic line either by the Zeeman modulation technique or directly to the side of the line. From the locking signal we estimated the peak-to-peak frequency noise of the laser to be $`3`$ MHz. The second part of the diode laser beam passed through an acousto-optic modulator that shifted the laser frequency toward two-photon resonance with the $`5S_{1/2}F=2\rightarrow 5D_{5/2}F^{}`$ transition. The laser beam was then focused to a $`26`$ $`\mu `$m ($`1/e^2`$ radius) spot size in the center of the vacuum chamber, in order to optimize the efficiency of the two-photon transition, and was carefully aligned with the long (axial) axis of the ROBOT.
We used a normalized detection scheme to measure the fraction of atoms transferred to $`F=3`$ by the two-photon laser. To detect the total number of atoms in the trap we applied a strong $`200`$ $`\mu `$s laser pulse, resonant with the $`5S_{1/2}F=3\rightarrow 5P_{3/2}F=4`$ closed transition, together with the repumping laser, and imaged the fluorescent signal on a photomultiplier tube (PMT). To measure only the $`F=3`$ population we applied the detection pulse without the repumping laser. The $`F=3`$ atoms were simultaneously accelerated and Doppler-shifted from resonance by the radiation pressure of the detection beam within the first $`100`$ $`\mu `$s of the pulse. Then we could detect the $`F=2`$ atoms by switching on the repumping laser, which pumped the $`F=2`$ population to the $`F=3`$ state, where atoms were measured by the second part of the detection pulse. This normalized detection scheme is insensitive to shot-to-shot fluctuations in atom number as well as fluctuations of the detection laser frequency and intensity.
After the adiabatic compression of the atoms in the ROBOT was completed, the two-photon laser on resonance with $`5S_{1/2}F=2\rightarrow 5D_{5/2}F=4`$ was applied for various time intervals and the resulting $`F=3`$ normalized population fraction was detected. The results for a $`170`$ $`\mu `$W two-photon laser are presented in Fig. 3. After $`100`$ ms, $`85\%`$ of the atoms are pumped to the $`F=3`$ state. This steady-state population is less than 100% since spontaneous Raman scattering from the trapping laser and from the two-photon laser (absorption of one photon followed by spontaneous emission) tends to equalize the populations of the two ground levels and therefore competes with the measured two-photon process. The characteristic $`1/e`$ time of the four-photon spontaneous Raman scattering process which is induced by the two-photon laser ($`5S_{1/2}F=2\rightarrow 5D_{5/2}F^{}\rightarrow 6P_{3/2}F^{}\rightarrow 5S_{1/2}F=3`$, see Fig. 1) is obtained from a fit to the data as $`\tau _{4p}=25`$ ms. The corresponding (four-photon) rate is $`\gamma _{4p}=1/\tau _{4p}=40`$ s<sup>-1</sup>. Using the theoretical value of the two-photon cross-section of $`\sigma =0.57\times 10^{-18}`$ cm<sup>4</sup>/W, the exact branching ratio ($`68\%`$) for the two-photon excitation to decay to $`F=3`$, and our maximal excitation laser intensity of $`16`$ W/cm<sup>2</sup>, we calculate $`\gamma _{4p}=391`$ s<sup>-1</sup>, a factor of $`10`$ larger than the measured rate. Using a measured value for the two-photon cross-section yields a somewhat larger value of $`\gamma _{4p}=823`$ s<sup>-1</sup>.
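The quoted theoretical rate is consistent with the following reading of the estimate; the formula (excitation rate equal to $`\sigma I^2`$ divided by the single-photon energy, times the branching ratio into the shelved state) is our reconstruction of the arithmetic, not spelled out in the original:

```python
h, c = 6.626e-34, 2.998e8          # Planck constant (J s), speed of light (m/s)
lam = 777.9e-9                     # two-photon laser wavelength, m
E_ph = h * c / lam                 # single-photon energy, J

sigma = 0.57e-18                   # quoted two-photon cross-section, cm^4/W
I = 16.0                           # quoted peak intensity, W/cm^2
branch = 0.68                      # quoted branching of the 5D decay into F=3

# One consistent reading of the estimate: sigma * I^2 / E_ph gives the
# two-photon excitation rate, and the branching ratio selects decays that
# shelve the atom in F=3.
gamma_2p = sigma * I**2 / E_ph     # ~570 s^-1
gamma_4p = branch * gamma_2p       # ~390 s^-1, cf. the quoted 391 s^-1

gamma_meas = 1 / 25e-3             # measured rate from the 25 ms fit, 40 s^-1
print(round(gamma_4p), round(gamma_meas), round(gamma_4p / gamma_meas))
```

The reconstructed value reproduces the quoted factor-of-10 discrepancy between the calculated and measured rates.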
The main factor that reduced the measured excitation rate was the linewidth of the two-photon laser, which was $`6`$ times larger than the $`300`$ kHz natural linewidth of the two-photon transition. The inhomogeneous broadening due to the Stark shift was calculated for the compressed trapping potential to be $`400`$ kHz, which is smaller than the laser linewidth, hence it does not contribute to the reduced excitation rate. An additional reduction of the excitation rate may be caused by imperfect matching between the trapped atomic sample and the maximal intensity of the two-photon laser, so the overall agreement between the measured and the expected $`\gamma _{4p}`$ is reasonable.
To measure the excitation spectrum of the $`5S_{1/2}F=2\rightarrow 5D_{5/2}F^{}`$ transition we scanned the frequency of the two-photon laser using the acousto-optic modulator. For each frequency point the whole experimental cycle was repeated, with a $`50`$ ms interrogation time of the two-photon laser. The $`F=3`$ fraction of atoms as a function of the frequency of the two-photon laser is presented in Fig. 4a. A $`1.75`$ MHz linewidth (FWHM) of the atomic lines was determined by fitting the data with a multi-peak Gaussian function and is limited by the linewidth of the two-photon laser. This measurement agrees well with the frequency noise of the laser estimated from the locking signal. The distances between the lines obtained from this fit are $`4.48`$ MHz, $`3.76`$ MHz and $`2.76`$ MHz, in excellent agreement with previously reported values of $`4.50`$ MHz, $`3.79`$ MHz and $`2.74`$ MHz. The height ratios between the lines obtained from the fit are $`1:0.86:0.47:0.21`$ for $`F^{}=4,3,2,1`$, respectively. The expected values were calculated, using the strength of the two-photon transitions together with the two-photon decay via the $`6P_{3/2}`$ level, to be $`1:0.85:0.4:0.1`$, in good agreement with the measured values, except for the weakest line. Note that although the two-photon transition $`F=2\rightarrow F^{}=0`$ is allowed, a two-photon decay with $`\mathrm{\Delta }F=3`$ is forbidden and therefore this line is not detected.
Finally, we reduced the power of the two-photon laser to $`25`$ $`\mu `$W, which reduced the transition rate by a factor of $`46`$. Here, the interrogation time of the two-photon laser was $`500`$ ms and the measured $`F=3`$ population is shown in Fig. 4b. A spectrum similar to that taken with higher intensity is observed. A transition rate as small as $`0.09`$ s<sup>-1</sup> (for the $`F=3\rightarrow F^{}=1`$ transition) is detected in this scan. The “quantum rate amplification” due to electron shelving (the ratio between the measured $`\gamma _{4p}`$ transition rate and the rate of the one-photon transition used for detecting the $`F=3`$ population) is $`10^7`$ for this case.
In conclusion, we demonstrate a new and extremely sensitive scheme to measure weak transitions using cold atoms. The key issues in our scheme are the long spin relaxation times combined with tight confinement of the atoms in a dark optical dipole trap, and the use of a shelving technique to enhance the signal-to-noise ratio. We demonstrated our scheme by measuring the $`5S_{1/2}\rightarrow 5D_{5/2}`$ two-photon transition for <sup>85</sup>Rb atoms trapped in a far-detuned rotating-beam dark trap using only $`25`$ $`\mu `$W laser power. The huge quantum amplification due to electron shelving increases the sensitivity of our scheme far beyond the photon shot noise and technical noise encountered in the direct detection of two-photon-induced fluorescence.
Our measurements may be improved in several ways. Improvements of the lifetime and spin relaxation time of atoms in the trap will allow much longer observation times and enable detection of much weaker transitions. This can be done by increasing the trapping laser detuning, where even longer spin-relaxation times are expected due to quantum interference between the two D lines. Reduction of the linewidth of the two-photon laser will allow further improvements in the sensitivity of our scheme. It can also be combined with mode-locked laser spectroscopy to obtain even larger sensitivities for a given time-averaged power of the laser. Finally, our technique can be applied to other weak (forbidden) transitions such as optical clock transitions and parity-violating transitions, where a much lower mixing with an allowed transition could be used.
# Observation of domain wall resistivity in SrRuO₃
## Abstract
$`\mathrm{SrRuO}_3`$ is an itinerant ferromagnet with $`T_c\approx 150\mathrm{K}`$. When $`\mathrm{SrRuO}_3`$ is cooled through $`T_c`$ in zero applied magnetic field, a stripe domain structure appears whose orientation is uniquely determined by the large uniaxial magnetocrystalline anisotropy. We find that the ferromagnetic domain walls clearly enhance the resistivity of $`\mathrm{SrRuO}_3`$ and that the enhancement has different temperature dependence for currents parallel and perpendicular to the domain walls. We discuss possible interpretations of our results.
Ferromagnetic domain walls (DW) are interfaces between magnetic domains whose magnetization points in different directions. For decades, there have been considerable theoretical and experimental efforts to elucidate the effect of DW on electrical resistivity. Early theoretical works predicted positive domain wall resistivity (DWR) due to reflection of electrons from the DW. More recently, a different mechanism for positive DWR has been proposed (based on spin-dependent scattering due to defects), which pointed out the similarity between the magnetic structure induced by DW and that present in Giant Magnetoresistance (GMR) devices. On the other hand, other models predicted negative DWR due to the destruction of electron weak localization induced by dephasing at the domain wall, or predicted that DWR can have either sign, depending on the difference between the spin-dependent scattering lifetimes of the charge carriers, based on a semiclassical treatment. The experimental situation is also quite confusing. Reports of observed positive DWR in materials like Co were followed by studies which indicated that the additional resistivity was actually a bulk effect. Moreover, there are now strong indications that domain walls in iron (previously believed to enhance resistivity) actually decrease resistivity.
Contrary to most previous studies of DWR, which focused on the elemental $`3d`$ ferromagnets, we present here a study of DWR in $`\mathrm{SrRuO}_3`$, which is a metallic perovskite and a $`4d`$ itinerant ferromagnet ($`T_c\approx 150\mathrm{K}`$). We find large positive DWR (interface resistance of $`10^{-15}\mathrm{\Omega }\mathrm{m}^2`$ at low temperatures) and present a uniquely detailed temperature dependence of DWR for currents parallel and perpendicular to the DW. These results are important for better understanding the mechanisms involved in DWR, and since DW in this compound are extremely narrow ($`10\mathrm{\AA }`$), a justified comparison can be made with models proposed for magnetic multilayers.
$`\mathrm{SrRuO}_3`$ has special advantages for studying DWR due to its large uniaxial magnetocrystalline anisotropy field ($`10\mathrm{T}`$) combined with its much smaller self field ($`4\pi M\approx 0.2\mathrm{T}`$). The large uniaxial anisotropy induces a stripe magnetic structure (see inset in Figure 1), which enables the study of DWR for currents parallel and perpendicular to the domain walls; furthermore, it is also responsible for the narrow width of the DW, which contributes to the large magnitude of the DWR. The combination of the large uniaxial anisotropy with a much smaller self field not only enables the stability of a saturated magnetization state at zero applied field but also prevents the creation of closure domains near the sample surface when the sample is in its domain state. As we discuss below, these two properties enable unequivocal determination of DWR and avoid the need to consider bulk effects that may lead to mistaken or ambiguous identification of DWR.
Our measurements were done on high-quality single-crystal films of $`\mathrm{SrRuO}_3`$ grown on slightly miscut ($`2^{\circ }`$) substrates of $`\mathrm{SrTiO}_3`$. The orthorhombic film grows with its $`[001]`$ and $`[\overline{1}10]`$ axes in the film plane and its magnetic moment ($`1.4\mu _B`$ per ruthenium at saturation) along the out-of-the-plane axis of uniaxial magnetocrystalline anisotropy (see Figure 1), whose direction varies in the (001) plane from approximately $`[010]`$ ($`45^{\circ }`$ out of the plane) at $`T_c`$ to about $`30^{\circ }`$ relative to the normal to the film at low temperatures. The residual resistivities of our films are as low as $`4.6\mu \mathrm{\Omega }\mathrm{cm}`$ (corresponding to a resistivity ratio between $`T=300\mathrm{K}`$ and $`T=1.8\mathrm{K}`$ of $`45`$) and their thicknesses vary between $`800\mathrm{\AA }`$ and $`2000\mathrm{\AA }`$. The films were patterned for simultaneous resistivity measurements along the $`[001]`$ and $`[\overline{1}10]`$ directions (see Figure 1).
The magnetic domain structure of $`\mathrm{SrRuO}_3`$ was extensively studied using transmission electron microscopy (TEM) in Lorentz mode (see inset in Figure 1) after removing the $`\mathrm{SrTiO}_3`$ substrate with a chemical etch. The details of this study are reported elsewhere; here we mention the main results relevant to this report: (a) there is a single easy axis along which the spontaneous magnetization lies at zero applied field (no indication of flux closure domains with a different magnetic orientation); (b) the DW are parallel to the in-plane projection of the spontaneous magnetization ($`[\overline{1}10]`$); therefore, each time the sample is cooled through $`T_c`$ the DW appear in the same direction; (c) at low temperatures the spacing of the DW is $`2000\mathrm{\AA }`$ and it does not change up to a few degrees below $`T_c`$; (d) once the DW are annihilated (at temperatures lower than a few degrees below $`T_c`$) by applying a sufficiently large magnetic field, they do not renucleate when the field is set back to zero; only when a sufficiently high negative field is applied are new (less dense) domain walls nucleated in the process of magnetization reversal.
The effect of the DW on the resistivity was measured using two methods. In the first method the sample is cooled in zero field from above $`T_c`$ to a temperature below $`T_c`$. Then the resistivity is measured during a hysteresis loop where the maximum field is high enough to annihilate all domain walls. Figure 2 shows a hysteresis loop taken at $`5\mathrm{K}`$. We note that the initial zero-field resistivity (marked by full circles) is higher than the zero-field resistivity after the sample’s magnetization was saturated. Based on our TEM measurements we know that the initial resistivity here is measured in the presence of DW, while no DW are present in the following zero-field resistivity measurements. Consequently, we attribute the difference between the two zero-field resistivities to DWR. In the second method the sample is cooled in zero field from above $`T_c`$ to $`1.8\mathrm{K}`$ and the resistivity ($`\rho ^{ZFC}`$) is measured as a function of temperature. Afterwards, the sample is cooled from above $`T_c`$ in a field of $`2\mathrm{T}`$ (to prevent the formation of domain structure) down to $`1.8\mathrm{K}`$, where the field is set to zero and the resistivity ($`\rho ^{FC}`$) is measured as a function of temperature. Since $`\rho ^{ZFC}`$ is measured with DW and $`\rho ^{FC}`$ is measured without them, we attribute the difference between the two to DWR. Figure 3 shows DWR as measured in the two methods as a function of temperature for the two different current directions. However, before discussing these plots (which are the main result of this paper), we address a crucial issue.
The AMR effect is a dependence of the resistivity in a magnetic metal on the angle between the current and the magnetic moment, which results from spin-orbit coupling. If in the domain state of a magnetic metal the magnetization is along more than one axis, then saturating the magnetization of the sample must induce, in some parts of the sample, changes in the angle between the current and the magnetic moment; hence, an AMR effect. The existence of uniaxial anisotropy is not sufficient to ensure that the magnetization points in the same direction in all the domains. It was pointed out that when $`Q=K/2\pi M_s^2\ll 1`$ (here $`K`$ is the anisotropy energy and $`M_s`$ is the saturated magnetization), flux closure domains in which the magnetization points in different directions are created near the surface of the sample; consequently, an AMR effect contributes to the change of resistivity when the sample is saturated. While Fe and Co are in the small-$`Q`$ limit, $`\mathrm{SrRuO}_3`$ is in the $`Q\gg 1`$ limit ($`Q_{\mathrm{SrRuO}_3}>10`$). Therefore, it is not surprising that, contrary to Co and Fe, closure domains were not observed by TEM in $`\mathrm{SrRuO}_3`$. This means that when we measure the resistivity with or without DW the bulk magnetization is along the same axis. This ensures the absence not only of any AMR effect but also of any effect related to changes in the direction of the self field, which could give rise to changes in the regular Lorentz magnetoresistance.
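The quality factor can be checked from the fields quoted in the introduction; the relation $`H_a=2K/M_s`$, so that $`Q=K/2\pi M_s^2=H_a/4\pi M_s`$, is the standard Gaussian-units identity and is our reading of the estimate:

```python
# Hard-axis anisotropy field H_a ~ 10 T versus self field 4*pi*M_s ~ 0.2 T
# (both values quoted in the text).  With H_a = 2K/M_s, the quality factor
# Q = K/(2*pi*M_s**2) reduces to H_a/(4*pi*M_s).
H_a = 10.0          # T
four_pi_Ms = 0.2    # T
Q = H_a / four_pi_Ms
print(Q)            # ~50, comfortably in the Q >> 1 (no closure domains) limit
```

This is consistent with the quoted bound $`Q_{\mathrm{SrRuO}_3}>10`$ and with the absence of closure domains in the TEM images.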
After eliminating the possibility that an intrinsic AMR effect contributes to our measured DWR, we want to exclude the possibility of a ‘dirt’ effect; namely, that small inclusions of $`\mathrm{SrRuO}_3`$ with different magnetic anisotropy are causing the effect. This is excluded not only by the low residual resistivity, and the consistent values among different samples of the DWR in both current directions, but also by a direct test of the magnetic anisotropy of the regions responsible for the observed drop in resistivity in the hysteresis loops. We identify the DWR in a finite field as the difference between the resistivity on the first branch of the loop (the initial increase of the field from zero to above the saturating field), where DW are partially annihilated, and the resistivity on the second branch of the loop (where the field is decreased from above the saturating field to zero), where no DW are present; and we look at the dependence of the finite-field DWR on the angle between the applied magnetic field and the film. If the observed effect is a ‘dirt’ effect, we do not expect a preferred direction (except for geometric considerations); on the other hand, if our interpretation is correct, we expect that the change in resistivity attributed to the process of annihilating the domain walls will depend on the component of the field parallel to the known direction of the uniaxial anisotropy. Figure 4 shows finite-field DWR for perpendicular current at some of the angles at which we measured. The inset shows that the angular dependence clearly supports our scenario. Similar results were also obtained for the parallel current. Based on the above, we argue that indeed we measure DWR and not a ‘dirt’ or a bulk effect.
We go back now to Figure 3, which shows DWR as a function of temperature for currents parallel ($`\rho _{DW}^{\parallel }`$) and perpendicular ($`\rho _{DW}^{\perp }`$) to the DW. To the best of our knowledge, this is the most detailed temperature-dependent measurement of domain wall resistivity ever reported for either current direction. The results presented in Figure 3 were obtained by measuring the sample with the lowest residual resistivity, and while results slightly vary among samples, we find the following characteristics in all our measured samples. At low temperatures, $`\rho _{DW}^{\perp }`$ is always larger than $`\rho _{DW}^{\parallel }`$ and their ratio is $`2`$ in the highest-quality samples. The magnitude of the DWR of various samples in the zero temperature limit is very similar despite variations of more than a factor of 2 in the value of the residual resistivity. The specific features of the temperature dependence of $`\rho _{DW}^{\perp }`$ and $`\rho _{DW}^{\parallel }`$ are preserved. Up to $`15\mathrm{K}`$, $`\rho _{DW}^{\perp }`$ is flat; between $`15\mathrm{K}`$ and $`100\mathrm{K}`$ there is a sharp decrease in $`\rho _{DW}^{\perp }`$; between $`100\mathrm{K}`$ and $`120\mathrm{K}`$, $`\rho _{DW}^{\perp }`$ slightly increases; and between $`120\mathrm{K}`$ and $`T_c`$, $`\rho _{DW}^{\perp }`$ decreases to zero. $`\rho _{DW}^{\parallel }`$ has very different behavior except for the sharp decrease above $`120\mathrm{K}`$, which is correlated with the sharp decrease in the spontaneous magnetization. $`\rho _{DW}^{\parallel }`$ is quite flat up to $`120\mathrm{K}`$ with a shallow minimum around $`60\mathrm{K}`$. In the zero temperature limit we find that $`\rho _{DW}^{\perp }\approx 0.48\mu \mathrm{\Omega }\mathrm{cm}`$ and $`\rho _{DW}^{\parallel }\approx 0.24\mu \mathrm{\Omega }\mathrm{cm}`$. The width of the domain wall is $`10\mathrm{\AA }`$ while the spacing between the domain walls is $`2000\mathrm{\AA }`$.
Therefore, in terms of bulk resistivity, the resistivity for the perpendicular current within the domain wall is $`\sim 100\mu \mathrm{\Omega }\mathrm{cm}`$ and the interface resistance is $`\sim 10^{-15}\mathrm{\Omega }\mathrm{m}^2`$. This value is more than three orders of magnitude larger than the $`negative`$ interface resistance of $`6\pm 2\times 10^{-19}\mathrm{\Omega }\mathrm{m}^2`$ reported for cobalt. We believe that the huge difference in the magnitude of the DWR is related to the width of the DW in cobalt, estimated to be 15 times larger than in $`\mathrm{SrRuO}_3`$.
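These numbers follow from simple geometry: the measured DWR is the excess resistivity averaged over one domain period, so the local resistivity inside a wall is enhanced by the spacing-to-width ratio. A quick check of the arithmetic, using the values quoted above:

```python
# Back-of-the-envelope check of the domain-wall interface resistance,
# using the numbers given in the text (all values approximate).

rho_dw_perp = 0.48e-8   # measured excess resistivity, perpendicular current (0.48 uOhm*cm in ohm*m)
spacing = 2000e-10      # domain-wall spacing, m (2000 A)
width = 10e-10          # domain-wall width, m (10 A)

# The measured excess is diluted by the ratio of wall width to wall spacing,
# so the local resistivity inside the wall is larger by spacing/width:
rho_local = rho_dw_perp * spacing / width   # ohm*m
print(rho_local * 1e8)  # ~96 uOhm*cm, i.e. the ~100 uOhm*cm quoted

# The interface resistance (resistance-area product) of a single wall:
RA = rho_local * width  # ohm*m^2
print(RA)               # ~1e-15 ohm*m^2
```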
Models proposed for DWR have concentrated on the limit where the width of the DW is much larger than the Fermi wavelength. This limit, however, is not applicable here, and therefore we cannot compare our results to these models. Instead, we compare our data to results obtained for magnetic multilayers. The configuration in which $`\rho _{DW}^{\perp }`$ is measured is similar to the so-called CPP (current perpendicular to plane) geometry in GMR structures (except that in our case we study solely the effect of the magnetic interface, without having to consider issues such as surface roughness and matching between different materials). Therefore, mechanisms considered for the GMR structures may also be responsible for the observed DWR. One such mechanism is potential-step scattering, previously studied for layered magnetic structures by Barnas and Fert, who found that an interface between a magnetic and a nonmagnetic metal, or between different magnetic domains within a magnetic metal, acts like a potential step whose height is related to the exchange splitting. While there is no closed-form equation for the interface resistance, the numerical solution of Barnas and Fert indicates that for materials commonly used in magnetic multilayers (e.g., Co or Cu) the interface resistance is on the order of $`10^{-15}\mathrm{\Omega }\mathrm{m}^2`$. Since DW in our case are of atomic width, we believe that it is possible to treat them as potential steps. The exchange splitting and the Fermi energy in $`\mathrm{SrRuO}_3`$ are $`0.65\mathrm{eV}`$ and $`2\mathrm{eV}`$, respectively. Therefore, we can expect an interface resistance on the order of $`10^{-15}\mathrm{\Omega }\mathrm{m}^2`$, as observed.
Another potential source for DWR, also considered for GMR structures, is spin accumulation. When a polarized current crosses an interface, spin accumulation near the interface induces a potential barrier which results in excess resistivity. The spin accumulation (and its related resistivity) is strongly affected by the spin flip length $`l_{sf}`$; namely, the length that a quasiparticle travels without flipping its spin. This length is strongly affected by magnetic scattering; therefore, the decrease in $`\rho _{DW}^{\perp }(T)`$ above $`T=15\mathrm{K}`$ may be related to a sharp decrease in the spin-accumulation resistivity induced by the shortening of $`l_{sf}`$ due to magnetic scattering.
Contrary to $`\rho _{DW}^{\perp }(T)`$, which exhibits complex temperature dependence, $`\rho _{DW}^{\parallel }(T)`$ is almost flat up to $`120\mathrm{K}`$ despite large changes, particularly in the resistivity but also in the magnetization, of $`\mathrm{SrRuO}_3`$. We have no model for this behavior, although it is interesting to note that we would obtain temperature-independent resistivity if we excluded the volume of the DW, including a distance from the DW proportional to the charge-carrier mean free path.
In conclusion, special properties of domain walls in $`\mathrm{SrRuO}_3`$ enable clear observation of large $`positive`$ DW resistivity. Our main result here is the detailed temperature dependence of $`\rho _{DW}^{}`$ and $`\rho _{DW}^{}`$ which requires further theoretical consideration. This, we hope, will yield deeper understanding not only of DWR but also of transport mechanisms in magnetic multilayers. Further experiments in which the mean free path will be changed by electron irradiation, and the magnetization will be changed by doping, are planned for quantitative identification of the different contributions to DWR.
We thank A. Fert and J. S. Dodge for useful discussions. This research was supported by THE ISRAEL SCIENCE FOUNDATION founded by the Israel Academy of Sciences and Humanities and by Grant No. 97-00428/1 from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel.
# STATUS OF COSMOLOGY
## 1 Introduction
In this brief summary I will concentrate on the values of the cosmological parameters. The other key questions in cosmology today concern the nature of the dark matter and dark energy, the origin and nature of the primordial inhomogeneities, and the formation and evolution of galaxies. I have been telling my theoretical cosmology students for several years that these latter topics are their main subjects for research, since determining the values of the cosmological parameters is now mainly in the hands of the observers.
In discussing cosmological parameters, it will be useful to distinguish between two sets of assumptions: (a) general relativity plus the assumption that the universe is homogeneous and isotropic on large scales (Friedmann-Robertson-Walker framework), or (b) the $`\mathrm{\Lambda }`$CDM family of models. The $`\mathrm{\Lambda }`$CDM models assume that the present matter density $`\mathrm{\Omega }_m`$ plus the cosmological constant (or its equivalent in “dark energy”) in units of critical density $`\mathrm{\Omega }_\mathrm{\Lambda }=\mathrm{\Lambda }/(3H_0^2)`$ sum to unity ($`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$) to produce the flat universe predicted by simple cosmic inflation models. The $`\mathrm{\Lambda }`$CDM family of models was introduced by Blumenthal et al. (1984), who worked out the linear power spectra $`P(k)`$ and a semi-analytic treatment of structure formation compared to the then-available data. We did this for the $`\mathrm{\Omega }_m=1`$, $`\mathrm{\Lambda }=0`$ “standard” cold dark matter (CDM) model, and also for the $`\mathrm{\Omega }_m=0.2`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$ $`\mathrm{\Lambda }`$CDM model. In addition to $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$, these $`\mathrm{\Lambda }`$CDM models assumed that the primordial fluctuations were Gaussian with a Zel’dovich spectrum ($`P_p(k)=Ak^n`$, with $`n=1`$), and that the dark matter is mostly of the cold variety.
The table below summarizes the current observational information about the cosmological parameters. The quantities in brackets have been deduced using at least some of the $`\mathrm{\Lambda }`$CDM assumptions. The rest of this paper discusses these issues in more detail. But it should already be apparent that there is impressive agreement between the values of the parameters determined by various methods.
## 2 Age of the Universe $`t_0`$
The strongest lower limits for $`t_0`$ come from studies of the stellar populations of globular clusters (GCs). In the mid-1990s the best estimates of the ages of the oldest GCs from main sequence turnoff magnitudes were $`t_{GC}\sim 15`$–$`16`$ Gyr (Bolte & Hogan 1995; VandenBerg, Bolte, & Stetson 1996; Chaboyer et al. 1996). A frequently quoted lower limit on the age of GCs was $`12`$ Gyr (Chaboyer et al. 1996), which was then an even more conservative lower limit on $`t_0=t_{GC}+\mathrm{\Delta }t_{GC}`$, where $`\mathrm{\Delta }t_{GC}\gtrsim 0.5`$ Gyr is the time from the Big Bang until GC formation. The main uncertainty in the GC age estimates came from the uncertain distance to the GCs: a 0.25 magnitude error in the distance modulus translates to a 22% error in the derived cluster age (Chaboyer 1995).
In spring of 1997, analyses of data from the Hipparcos astrometric satellite indicated that the distances to GCs assumed in obtaining the ages just discussed were systematically underestimated (Reid 1997, Gratton et al. 1997). It follows that their stars at the main sequence turnoff are brighter and therefore younger. Stellar evolution calculation improvements also lowered the GC age estimates. In light of the new Hipparcos data, Chaboyer et al. (1998) have done a new Monte Carlo analysis of the effects of varying various uncertain parameters, and obtained $`t_{GC}=11.5\pm 1.3`$ Gyr ($`1\sigma `$), with a 95% C.L. lower limit of 9.5 Gyr. The latest detailed analysis (Carretta et al. 1999) gives $`t_{GC}=11.8\pm 2.6`$ Gyr from main sequence fitting using parallaxes of local subdwarfs, the method used in the 1997 analyses quoted above. These authors get somewhat smaller GC distances when all the available data is used, with a resulting $`t_{GC}=13.2\pm 2.9`$ Gyr (95% C.L.).
Stellar age estimates are of course based on standard stellar evolution calculations. But the solar neutrino problem reminds us that we are not really sure that we understand how even our nearest star operates; and the sun plays an important role in calibrating stellar evolution, since it is the only star whose age we know independently (from radioactive dating of early solar system material). An important check on stellar ages can come from observations of white dwarfs in globular and open clusters (Richer et al. 1998).
What if the GC age estimates are wrong for some unknown reason? The only other non-cosmological estimates of the age of the universe come from nuclear cosmochronometry — radioactive decay and chemical evolution of the Galaxy — and white dwarf cooling. Cosmochronometry age estimates are sensitive to a number of uncertain issues such as the formation history of the disk and its stars, and possible actinide destruction in stars (Malaney, Mathews, & Dearborn 1989; Mathews & Schramm 1993). However, an independent cosmochronometry age estimate of $`15.6\pm 4.6`$ Gyr has been obtained based on data from two low-metallicity stars, using the measured radioactive depletion of thorium (whose half-life is 14.2 Gyr) compared to stable heavy r-process elements (Cowan et al. 1997, 1999). This method could become very important if it were possible to obtain accurate measurements of r-process element abundances for a number of very low metallicity stars giving consistent age estimates, and especially if the large errors could be reduced.
Independent age estimates come from the cooling of white dwarfs in the neighborhood of the sun. The key observation is that there is a lower limit to the luminosity, and therefore also the temperature, of nearby white dwarfs; although dimmer ones could have been seen, none have been found (cf. however Harris et al. 1999). The only plausible explanation is that the white dwarfs have not had sufficient time to cool to lower temperatures, which initially led to an estimate of $`9.3\pm 2`$ Gyr for the age of the Galactic disk (Winget et al. 1987). Since there was evidence, based on the pre-Hipparcos GC distances, that the stellar disk of our Galaxy is about 2 Gyr younger than the oldest GCs (e.g., Stetson, VandenBerg, & Bolte 1996, Rosenberg et al. 1999), this in turn gave an estimate of the age of the universe of $`t_0\approx 11\pm 2`$ Gyr. Other analyses (cf. Wood 1992, Hernanz et al. 1994) conclude that sensitivity to disk star formation history, and to effects on the white dwarf cooling rates due to C/O separation at crystallization and possible presence of trace elements such as <sup>22</sup>Ne, allow a rather wide range of ages for the disk of about $`10\pm 4`$ Gyr. One determination of the white dwarf luminosity function, using white dwarfs in proper motion binaries, leads to a somewhat lower minimum luminosity and therefore a somewhat higher estimate of the age of the disk of $`10.5_{-1.5}^{+2.5}`$ Gyr (Oswalt et al. 1996; cf. Chabrier 1997). More recent observations (Leggett, Ruiz and Bergeron 1998) and analyses (Benvenuto & Althaus 1999) lead to an estimated age of the galactic disk of $`8\pm 1.5`$ Gyr.
We conclude that $`t_0\approx 13`$ Gyr, with $`11`$ Gyr a lower limit. Note that $`t_0>13`$ Gyr implies that $`h\le 0.50`$ for matter density $`\mathrm{\Omega }_m=1`$, and that $`h\le 0.73`$ even for $`\mathrm{\Omega }_m`$ as small as 0.3 in flat cosmologies (i.e., with $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$). If $`t_0`$ is as low as $`11`$ Gyr, that would allow $`h`$ as high as 0.6 for $`\mathrm{\Omega }_m=1`$.
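These limits follow from the standard Friedmann-equation age integral, which has a closed form for flat cosmologies; a sketch of the calculation (illustrative, not from the original text):

```python
import math

H0_INV_GYR = 9.78  # 1/H0 in Gyr for h = 1

def age_flat(omega_m, h):
    """Age t0 in Gyr for a flat universe (Omega_m + Omega_Lambda = 1).

    Uses the closed form t0 = (2/3) H0^{-1} asinh(sqrt(OL/Om)) / sqrt(OL),
    which reduces to (2/3) H0^{-1} in the Einstein-de Sitter limit.
    """
    if omega_m == 1.0:
        return (2.0 / 3.0) * H0_INV_GYR / h
    ol = 1.0 - omega_m
    return (2.0 / 3.0) * H0_INV_GYR / (h * math.sqrt(ol)) * math.asinh(math.sqrt(ol / omega_m))

print(age_flat(1.0, 0.50))   # ~13.0 Gyr: h <= 0.50 needed for t0 > 13 Gyr if Omega_m = 1
print(age_flat(0.3, 0.73))   # ~12.9 Gyr: h <= 0.73 even for Omega_m = 0.3, flat
print(age_flat(1.0, 0.59))   # ~11.0 Gyr: t0 as low as 11 Gyr allows h ~ 0.6 for Omega_m = 1
```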
## 3 Hubble Parameter $`H_0`$
The Hubble parameter $`H_0\equiv 100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> remains uncertain, although no longer by the traditional factor of two. The range of $`h`$ determinations has been shrinking with time (Kennicutt, Freedman, & Mould 1995). De Vaucouleurs long contended that $`h\approx 1`$. Sandage has long contended that $`h\approx 0.5`$, although a recent reanalysis of the Type Ia supernovae (SNe Ia) data coauthored by Sandage and Tammann concludes that the latest data are consistent with $`h=0.6\pm 0.04`$ (Saha et al. 1999).
The Hubble parameter has been measured in two basic ways: (1) Measuring the distance to some nearby galaxies, typically by measuring the periods and luminosities of Cepheid variables in them; and then using these “calibrator galaxies” to set the zero point in any of the several methods of measuring the relative distances to galaxies. (2) Using fundamental physics to measure the distance to some distant object(s) directly, thereby avoiding at least some of the uncertainties of the cosmic distance ladder (Rowan-Robinson 1985). The difficulty with method (1) was that there was only a handful of calibrator galaxies close enough for Cepheids to be resolved in them. However, the HST Key Project on the Extragalactic Distance Scale has significantly increased the set of calibrator galaxies. The difficulty with method (2) is that in every case studied so far, some aspect of the observed system or the underlying physics remains somewhat uncertain. It is nevertheless remarkable that the results of several different methods of type (2) are rather similar, and indeed not very far from those of method (1). This gives reason to hope for convergence.
### 3.1 Relative Distance Methods
One piece of good news is that the several methods of measuring the relative distances to galaxies now mostly seem to be consistent with each other. These methods use either “standard candles” or empirical relations between two measurable properties of a galaxy, one distance-independent and the other distance-dependent. The favorite standard candle is SNe Ia, and observers are now in good agreement. Taking account of an empirical relationship between the SNe Ia light curve shape and maximum luminosity leads to $`h=0.65\pm 0.06`$ (Riess, Press, & Kirshner 1996), $`h=0.64_{-0.06}^{+0.08}`$ (Jha et al. 1999), or $`h=0.63\pm 0.03`$ (Hamuy et al. 1996, Phillips et al. 1999), and the slightly lower value mentioned above from the latest analysis coauthored by Sandage and Tammann agrees within the errors. The HST Key Project result using SNe Ia is $`h=0.65\pm 0.02\pm 0.05`$, where the first error quoted is statistical and the second is systematic (Gibson et al. 1999), and their luminosity-metallicity relationship (Kennicutt et al. 1998) has been used (this lowers $`h`$ by 4%). Some of the other relative distance methods are based on old stellar populations: the tip of the red giant branch (TRGB), the planetary nebula luminosity function (PNLF), the globular cluster luminosity function (GCLF), and the surface brightness fluctuation method (SBF). The HST Key Project result using these old star standard candles is $`h=0.66\pm 0.04\pm 0.06`$. The old favorite empirical relation used as a relative distance indicator is the Tully-Fisher relation between the rotation velocity and luminosity of spiral galaxies. The “final” value of the Hubble constant from the HST Key Project taking all of these into account is $`h=0.71\pm 0.06`$ (Ferrarese et al. 1999, and this conference, for a nice summary).
### 3.2 Fundamental Physics Approaches
The fundamental physics approaches involve either Type Ia or Type II supernovae, the Sunyaev-Zel’dovich (S-Z) effect, or gravitational lensing of quasars. All are promising, but in each case the relevant physics remains somewhat uncertain.
The <sup>56</sup>Ni radioactivity method for determining $`H_0`$ using Type Ia SNe avoids the uncertainties of the distance ladder by calculating the absolute luminosity of Type Ia supernovae from first principles using plausible but as yet unproved physical models for <sup>56</sup>Ni production. The first result obtained was that $`h=0.61\pm 0.10`$ (Arnett, Branch, & Wheeler 1985; Branch 1992); however, another study (Leibundgut & Pinto 1992; cf. Vaughn et al. 1995) found that uncertainties in extinction (i.e., light absorption) toward each supernova increases the range of allowed $`h`$. Demanding that the <sup>56</sup>Ni radioactivity method agree with an expanding photosphere approach leads to $`h=0.60_{-0.11}^{+0.14}`$ (Nugent et al. 1995). The expanding photosphere method compares the expansion rate of the SN envelope measured by redshift with its size increase inferred from its temperature and magnitude. This approach was first applied to Type II SNe; the 1992 result $`h=0.6\pm 0.1`$ (Schmidt, Kirshner, & Eastman 1992) was subsequently revised upward by the same authors to $`h=0.73\pm 0.06\pm 0.07`$ (1994). However, there are various complications with the physics of the expanding envelope (Ruiz-Lapuente et al. 1995; Eastman, Schmidt, & Kirshner 1996).
The S-Z effect is the Compton scattering of microwave background photons from the hot electrons in a foreground galaxy cluster. This can be used to measure $`H_0`$ since properties of the cluster gas measured via the S-Z effect and from X-ray observations have different dependences on $`H_0`$. The result from the first cluster for which sufficiently detailed data was available, A665 (at $`z=0.182`$), was $`h=(0.4`$–$`0.5)\pm 0.12`$ (Birkinshaw, Hughes, & Arnaud 1991); combining this with data on A2218 ($`z=0.171`$) raised this somewhat to $`h=0.55\pm 0.17`$ (Birkinshaw & Hughes 1994). The history and more recent data have been reviewed by Birkinshaw (1999), who concludes that the available data give a Hubble parameter $`h\approx 0.6`$ with a scatter of about 0.2. But since the available measurements are not independent, it does not follow that $`h=0.6\pm 0.1`$; for example, there is a selection effect that biases the $`h`$ determined this way toward low values.
Several quasars have been observed to have multiple images separated by $`\theta \sim `$ a few arc seconds; this phenomenon is interpreted as arising from gravitational lensing of the source quasar by a galaxy along the line of sight (first suggested by Refsdal 1964; reviewed in Williams & Schechter 1997). In the first such system discovered, QSO 0957+561 ($`z=1.41`$), the time delay $`\mathrm{\Delta }t`$ between arrival at the earth of variations in the quasar’s luminosity in the two images has been measured to be, e.g., $`409\pm 23`$ days (Pelt et al. 1994), although other authors found a value of $`540\pm 12`$ days (Press, Rybicki, & Hewitt 1992). The shorter $`\mathrm{\Delta }t`$ has now been confirmed (Kundic et al. 1997a, cf. Serra-Ricart et al. 1999 and references therein). Since $`\mathrm{\Delta }t\propto \theta ^2H_0^{-1}`$, this observation allows an estimate of the Hubble parameter. The latest results for $`h`$ from 0957+561, using all available data, are $`h=0.64\pm 0.13`$ (95% C.L.) (Kundic et al. 1997a), and $`h=0.62\pm 0.07`$ (Falco et al. 1997), where the error does not include systematic errors in the assumed form of the lensing mass distribution.
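Because the delay scales as $`\theta ^2/H_0`$ for a fixed lens model, the $`h`$ inferred from a given model scales inversely with the adopted delay. A small sketch of this rescaling (the starting value of $`h`$ below is hypothetical, chosen only for illustration; the two delays are the ones quoted above):

```python
# Delta_t ~ theta^2 / H0 for a fixed lens model, so the inferred Hubble
# parameter scales inversely with the measured time delay.
def rescale_h(h_old, dt_old_days, dt_new_days):
    """h implied by the same lens model when the adopted delay changes."""
    return h_old * dt_old_days / dt_new_days

# Hypothetical example: a model calibrated to the 540-day delay giving
# h = 0.48 would give a ~30% larger h once the ~409-day delay is adopted.
print(rescale_h(0.48, 540.0, 409.0))  # ~0.63
```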
The first quadruple-image quasar system discovered was PG1115+080. Using a recent series of observations (Schechter et al. 1997), the time delay between images B and C has been determined to be about $`24\pm 3`$ days. A simple model for the lensing galaxy and the nearby galaxies then leads to $`h=0.42\pm 0.06`$ (Schechter et al. 1997), although higher values for $`h`$ are obtained by more sophisticated analyses: $`h=0.60\pm 0.17`$ (Keeton & Kochanek 1996), $`h=0.52\pm 0.14`$ (Kundic et al. 1997b). The results depend on how the lensing galaxy and those in the compact group of which it is a part are modelled.
Another quadruple-lens system, B1606+656, leads to $`h=0.59\pm 0.08\pm 0.15`$, where the first error is the 95% C.L. statistical error, and the second is the estimated systematic uncertainty (Fassnacht et al. 1999). Time delays have also recently been determined for the Einstein ring system B0218+357, giving $`h=0.69_{-0.19}^{+0.13}`$ (95% C.L.) (Biggs et al. 1999).
Mainly because of the systematic uncertainties in modelling the mass distribution in the lensing systems, the uncertainty in the $`h`$ determination by gravitational lens time delays remains rather large. But it is reassuring that this completely independent method gives results consistent with the other determinations.
### 3.3 Conclusions on $`H_0`$
To summarize, relative distance methods favor a value $`h\approx 0.6`$–$`0.7`$. Meanwhile the fundamental physics methods typically lead to $`h\approx 0.4`$–$`0.7`$. Among fundamental physics approaches, there has been important recent progress in measuring $`h`$ via the Sunyaev-Zel’dovich effect and time delays between different images of gravitationally lensed quasars, although the uncertainties remain larger than via relative distance methods. For the rest of this review, we will adopt a value of $`h=0.65\pm 0.08`$. This corresponds to $`t_0=6.52h^{-1}\mathrm{Gyr}=10\pm 2`$ Gyr for $`\mathrm{\Omega }_m=1`$ — probably too low compared to the ages of the oldest globular clusters. But for $`\mathrm{\Omega }_m=0.2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, or alternatively for $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$, $`t_0=13\pm 2`$ Gyr, in agreement with the globular cluster estimate of $`t_0`$. This is one of the several arguments for low $`\mathrm{\Omega }_m`$, a non-zero cosmological constant, or both.
## 4 Hot Dark Matter Density $`\mathrm{\Omega }_\nu `$
The recent atmospheric neutrino data from Super-Kamiokande (Fukuda et al. 1998) provide strong evidence of neutrino oscillations and therefore of non-zero neutrino mass. These data imply a lower limit on the hot dark matter (i.e., light neutrino) contribution to the cosmological density $`\mathrm{\Omega }_\nu \gtrsim 0.001`$. $`\mathrm{\Omega }_\nu `$ is actually that low, and therefore cosmologically uninteresting, if $`m(\nu _\tau )\gg m(\nu _\mu )`$, as is suggested by the hierarchical pattern of the quark and charged lepton masses. But if the $`\nu _\tau `$ and $`\nu _\mu `$ are nearly degenerate in mass, as suggested by their strong mixing, then $`\mathrm{\Omega }_\nu `$ could be substantially larger. Although the Cold + Hot Dark Matter (CHDM) cosmological model with $`h\approx 0.5`$, $`\mathrm{\Omega }_m=1`$, and $`\mathrm{\Omega }_\nu =0.2`$ predicts power spectra of cosmic density and CMB anisotropies that are in excellent agreement with the data (Primack 1996, Gawiser & Silk 1998), as we have just seen, the large value measured for the Hubble parameter makes such $`\mathrm{\Omega }_m=1`$ models dubious. It remains to be seen whether including a significant amount of hot dark matter in low-$`\mathrm{\Omega }_m`$ models improves their agreement with data. Primack & Gross (1998) found that the possible improvement of the low-$`\mathrm{\Omega }_m`$ flat ($`\mathrm{\Lambda }`$CDM) cosmological models with the addition of light neutrinos appears to be rather limited, and the maximum amount of hot dark matter decreases with decreasing $`\mathrm{\Omega }_m`$ (Primack et al. 1995). For $`\mathrm{\Omega }_m\lesssim 0.4`$, Croft, Hu, and Davé (1999) find that $`\mathrm{\Omega }_\nu \lesssim 0.08`$. Fukugita et al. (1999) find more restrictive upper limits with the constraint that the primordial power spectrum index $`n\ge 1`$, but this may not be well motivated.
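The lower limit $`\mathrm{\Omega }_\nu \gtrsim 0.001`$ can be checked with the standard relation $`\mathrm{\Omega }_\nu h^2=m_\nu /(93\mathrm{eV})`$; a quick sketch (the atmospheric mass-squared splitting used below is an assumed representative value from the oscillation fits of that period, not a number given in the text):

```python
import math

def omega_nu(sum_m_ev, h):
    """Neutrino density parameter from the summed mass, via
    Omega_nu h^2 = sum(m_nu) / (93 eV)."""
    return sum_m_ev / (93.0 * h * h)

# Atmospheric oscillations give a splitting dm2 of roughly 3e-3 eV^2
# (assumed value), so at least one neutrino has m >~ sqrt(dm2) ~ 0.055 eV.
m_min = math.sqrt(3e-3)
print(omega_nu(m_min, 0.65))  # ~0.0014, consistent with Omega_nu >~ 0.001
```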
## 5 Cosmological Constant $`\mathrm{\Lambda }`$
The strongest evidence for a positive $`\mathrm{\Lambda }`$ comes from high-redshift SNe Ia, and independently from a combination of observations indicating that $`\mathrm{\Omega }_m0.3`$ together with CMB data indicating that the universe is nearly flat. We will discuss these observations in the next section. Here we will start by looking at other constraints on $`\mathrm{\Lambda }`$.
The cosmological effects of a cosmological constant are not difficult to understand (Felton & Isaacman 1986; Lahav et al. 1991; Carroll, Press, & Turner 1992). In the early universe, the density of energy and matter is far more important than the $`\mathrm{\Lambda }`$ term on the r.h.s. of the Friedmann equation. But the average matter density decreases as the universe expands, and at a rather low redshift ($`z\sim 0.2`$ for $`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$) the $`\mathrm{\Lambda }`$ term finally becomes dominant. Around this redshift, the $`\mathrm{\Lambda }`$ term almost balances the attraction of the matter, and the scale factor $`a\propto (1+z)^{-1}`$ increases very slowly, although it ultimately starts increasing exponentially as the universe starts inflating under the influence of the increasingly dominant $`\mathrm{\Lambda }`$ term. The existence of a period during which expansion slows while the clock runs explains why $`t_0`$ can be greater than for $`\mathrm{\Lambda }=0`$, but this also shows that there is an increased likelihood of finding galaxies in the redshift interval when the expansion slowed, and a correspondingly increased opportunity for lensing by these galaxies of quasars (which mostly lie at higher redshift $`z\gtrsim 2`$).
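The crossover follows from the Friedmann equation: the matter term scales as $`(1+z)^3`$ while the $`\mathrm{\Lambda }`$ term is constant. A sketch of the matter-$`\mathrm{\Lambda }`$ equality redshift under that criterion (illustrative):

```python
def z_lambda_equality(omega_m, omega_l):
    """Redshift at which the Lambda term equals the matter term,
    from omega_m * (1+z)**3 = omega_l."""
    return (omega_l / omega_m) ** (1.0 / 3.0) - 1.0

print(z_lambda_equality(0.3, 0.7))  # ~0.33 for Omega_m = 0.3, Omega_Lambda = 0.7
```

Exactly where the $`\mathrm{\Lambda }`$ term "becomes dominant" depends on the criterion adopted; the equality condition above is the simplest one.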
The observed frequency of such lensed quasars is about what would be expected in a standard $`\mathrm{\Omega }=1`$, $`\mathrm{\Lambda }=0`$ cosmology, so this data sets fairly stringent upper limits: $`\mathrm{\Omega }_\mathrm{\Lambda }\le 0.70`$ at 90% C.L. (Maoz & Rix 1993, Kochanek 1993), with more recent data giving even tighter constraints: $`\mathrm{\Omega }_\mathrm{\Lambda }<0.66`$ at 95% confidence if $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$ (Kochanek 1996b). This limit could perhaps be weakened if there were (a) significant extinction by dust in the E/S0 galaxies responsible for the lensing or (b) rapid evolution of these galaxies, but there is much evidence that these galaxies have little dust and have evolved only passively for $`z\lesssim 1`$ (Steidel, Dickinson, & Persson 1994; Lilly et al. 1995; Schade et al. 1996). An alternative analysis by Im, Griffiths, & Ratnatunga 1997 of some of the same optical lensing data considered by Kochanek 1996 leads them to deduce a value $`\mathrm{\Omega }_\mathrm{\Lambda }=0.64_{-0.26}^{+0.15}`$, which is barely consistent with Kochanek’s upper limit. Malhotra, Rhodes, & Turner (1997) present evidence for extinction of quasars by foreground galaxies and claim that this weakens the lensing bound to $`\mathrm{\Omega }_\mathrm{\Lambda }<0.9`$, but this is not justified quantitatively. Maller, Flores, & Primack (1997) show that edge-on disk galaxies can lens quasars very effectively, and discuss a case in which optical extinction is significant. But the radio observations discussed by Falco, Kochanek, & Munoz (1998), which give a $`2\sigma `$ limit $`\mathrm{\Omega }_\mathrm{\Lambda }<0.73`$, are not affected by extinction. Recently Chiba and Yoshii (1999) have suggested that a reanalysis of lensing using new models of the evolution of elliptical galaxies gives $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7_{-0.2}^{+0.1}`$, but Kochanek et al. (1999, see especially Fig. 4) show that the available evidence disfavors the models of Chiba and Yoshii.
A model-dependent constraint appeared to come from simulations of $`\mathrm{\Lambda }`$CDM (Klypin, Primack, & Holtzman 1996) and OpenCDM (Jenkins et al. 1998) COBE-normalized models with $`h=0.7`$, $`\mathrm{\Omega }_m=0.3`$, and either $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ or, for the open case, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. These models have too much power on small scales to be consistent with observations, unless there is strong scale-dependent antibiasing of galaxies with respect to dark matter. However, recent high-resolution simulations (Klypin et al. 1999) find that merging and destruction of galaxies in dense environments lead to exactly the sort of scale-dependent antibiasing needed for agreement with observations for the $`\mathrm{\Lambda }`$CDM model. Similar results have been found using simulations plus semi-analytic methods (Benson et al. 1999, but cf. Kauffmann et al. 1999).
Another constraint on $`\mathrm{\Lambda }`$ from simulations is a claim that the number of long arcs in clusters is in accord with observations for an open CDM model with $`\mathrm{\Omega }_m=0.3`$ but an order of magnitude too low in a $`\mathrm{\Lambda }`$CDM model with the same $`\mathrm{\Omega }_m`$ (Bartelmann et al. 1998). This apparently occurs because clusters with dense cores form too late in such models. This is potentially a powerful constraint, and needs to be checked and understood. It is now known that including cluster galaxies does not alter these results (Meneghetti et al. 1999; Flores, Maller, & Primack 1999).
## 6 Measuring $`\mathrm{\Omega }_m`$
The present author, like many theorists, has long regarded the Einstein-de Sitter ($`\mathrm{\Omega }_m=1`$, $`\mathrm{\Lambda }=0`$) cosmology as the most attractive one. For one thing, there are only three possible constant values for $`\mathrm{\Omega }`$ — 0, 1, and $`\mathrm{}`$ — of which the only one that can describe our universe is $`\mathrm{\Omega }_m=1`$. Also, cosmic inflation is the only known solution for several otherwise intractable problems, and all simple inflationary models predict that the universe is flat, i.e. that $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. Since there is no known physical reason for a non-zero cosmological constant, it was often said that inflation favors $`\mathrm{\Omega }=1`$. Of course, theoretical prejudice is not a reliable guide. In recent years, many cosmologists have favored $`\mathrm{\Omega }_m\sim 0.3`$, both because of the $`H_0t_0`$ constraints and because cluster and other relatively small-scale measurements have given low values for $`\mathrm{\Omega }_m`$. (For a summary of arguments favoring low $`\mathrm{\Omega }_m\sim 0.2`$ and $`\mathrm{\Lambda }=0`$, see Coles & Ellis 1997. A review that notes that larger scale measurements favor higher $`\mathrm{\Omega }_m`$ is Dekel, Burstein, & White 1997.) But the most exciting new evidence has come from cosmological-scale measurements.
Type Ia Supernovae. At present, the most promising techniques for measuring $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ on cosmological scales use the small-angle anisotropies in the CMB radiation and high-redshift Type Ia supernovae (SNe Ia). We will discuss the latter first. SNe Ia are the brightest supernovae, and the spread in their intrinsic brightness appears to be relatively small. The Supernova Cosmology Project (Perlmutter et al. 1997a) demonstrated the feasibility of finding significant numbers of such supernovae. The first seven high redshift SNe Ia that they analyzed gave for a flat universe $`\mathrm{\Omega }_m=1-\mathrm{\Omega }_\mathrm{\Lambda }=0.94_{-0.28}^{+0.34}`$, or equivalently $`\mathrm{\Omega }_\mathrm{\Lambda }=0.06_{-0.34}^{+0.28}`$ ($`<0.51`$ at the 95% confidence level) (Perlmutter et al. 1997a). But adding one $`z=0.83`$ SN Ia for which they had good HST data lowered the implied $`\mathrm{\Omega }_m`$ to $`0.6\pm 0.2`$ in the flat case (Perlmutter et al. 1997b). Analysis of their larger dataset of 42 high-redshift SNe Ia gives for the flat case $`\mathrm{\Omega }_m=0.28_{-0.08-0.04}^{+0.09+0.05}`$ where the first errors are statistical and the second are identified systematics (Perlmutter et al. 1999). The High-Z Supernova team has also searched successfully for high-redshift supernovae to measure $`\mathrm{\Omega }_m`$ (Garnavich et al. 1997, Riess et al. 1998), and their three HST SNe Ia, two at $`z\approx 0.5`$ and one at 0.97, imply $`\mathrm{\Omega }_m=0.4\pm 0.3`$ in the flat case. The main concerns about the interpretation of this data are evolution of the SNe Ia (Drell, Loredo, & Wasserman 1999) and dimming by dust. A recent specific supernova evolution concern that was discussed at this workshop is that the rest frame rise-times of distant supernovae may be longer than nearby ones (Riess et al. 1999).
But a direct comparison between nearby supernovae and the SCP distant sample shows that they are rather consistent with each other (Aldering, Nugent, & Knop 1999). Ordinary dust causes reddening, but hypothetical grey dust would cause much less reddening and could in principle provide an alternative explanation for the fact that high-redshift supernovae are observed to be dimmer than expected in a critical-density cosmology. It is hard to see why the largest dust grains, which would be greyer, should preferentially be ejected by galaxies (Simonsen & Hannestad 1999). Such dust, if it exists, would also absorb starlight and reradiate it at long wavelengths, where there are other constraints that could, with additional observations, rule out this scenario (Aguirre & Haiman 1999). But another way of addressing this question is to collect data on supernovae with redshift $`z>1`$, where the dust scenario predicts considerably more dimming than the $`\mathrm{\Lambda }`$ cosmology. The one $`z>1`$ supernova currently available, SCP’s “Albinoni” (SN1998eq) at $`z=1.2`$, will help, and both the SCP and the High-Z group are attempting to get a few more very high redshift supernovae.
CMB anisotropies. The location of the first Doppler (or acoustic, or Sakharov) peak at angular wavenumber $`l\approx 250`$ indicated by the presently available data (Scott, this volume) is evidence in favor of a flat universe $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }\approx 1`$. New data from the MAXIMA and BOOMERANG balloon flights apparently confirms this, and the locations of the second and possibly third peak appear to be consistent with the predictions (Hu, Spergel, & White 1997) of simple cosmic inflation theories. Further data should be available in 2001 from the NASA Microwave Anisotropy Probe satellite.
Large-scale Measurements. The comparison of the IRAS redshift surveys with POTENT and related analyses typically give values for the parameter $`\beta _I\equiv \mathrm{\Omega }_m^{0.6}/b_I`$ (where $`b_I`$ is the biasing parameter for IRAS galaxies), corresponding to $`0.3\lesssim \mathrm{\Omega }_m\lesssim 3`$ (for an assumed $`b_I=1.15`$). It is not clear whether it will be possible to reduce the spread in these values significantly in the near future — probably both additional data and a better understanding of systematic and statistical effects will be required. A particularly simple way to deduce a lower limit on $`\mathrm{\Omega }_m`$ from the POTENT peculiar velocity data was proposed by Dekel & Rees (1994), based on the fact that high-velocity outflows from voids are not expected in low-$`\mathrm{\Omega }`$ models. Data on just one nearby void indicates that $`\mathrm{\Omega }_m\gtrsim 0.3`$ at the 97% C.L. Stronger constraints are available if we assume that the probability distribution function (PDF) of the primordial fluctuations was Gaussian. Evolution from a Gaussian initial PDF to the non-Gaussian mass distribution observed today requires considerable gravitational nonlinearity, i.e. large $`\mathrm{\Omega }_m`$. The PDF deduced by POTENT from observed velocities (i.e., the PDF of the mass, if the POTENT reconstruction is reliable) is far from Gaussian today, with a long positive-fluctuation tail. It agrees with a Gaussian initial PDF if and only if $`\mathrm{\Omega }_m\sim 1`$; $`\mathrm{\Omega }_m<1`$ is rejected at the $`2\sigma `$ level, and $`\mathrm{\Omega }_m\approx 0.3`$ is ruled out at $`4\sigma `$ (Nusser & Dekel 1993; cf. Bernardeau et al. 1995). It would be interesting to repeat this analysis with newer data.
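Since $`\beta _I`$ is what the surveys actually constrain, the implied $`\mathrm{\Omega }_m`$ follows by inverting $`\beta _I=\mathrm{\Omega }_m^{0.6}/b_I`$. A minimal sketch (the numerical values are illustrative, not taken from the text):

```python
def omega_m_from_beta(beta_I, b_I=1.15):
    """Invert beta_I = Omega_m**0.6 / b_I for Omega_m."""
    return (beta_I * b_I) ** (1 / 0.6)

# round-trip over the quoted 0.3 <~ Omega_m <~ 3 range
for om in (0.3, 1.0, 3.0):
    beta = om ** 0.6 / 1.15
    assert abs(omega_m_from_beta(beta) - om) < 1e-9
    print(f"Omega_m = {om}: beta_I = {beta:.2f}")
```

With the assumed $`b_I=1.15`$, the quoted $`\mathrm{\Omega }_m`$ range corresponds to $`\beta _I`$ of roughly 0.4 to 1.7, which illustrates why the spread in measured $`\beta _I`$ translates into such a wide spread in $`\mathrm{\Omega }_m`$.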
Measurements on Scales of a Few Mpc. A study by the Canadian Network for Observational Cosmology (CNOC) of 16 clusters at $`z\approx 0.3`$, mostly chosen from the Einstein Medium Sensitivity Survey (Henry et al. 1992), was designed to allow a self-contained measurement of $`\mathrm{\Omega }_m`$ from a field $`M/L`$ which in turn was deduced from their measured cluster $`M/L`$. The result was $`\mathrm{\Omega }_m=0.19\pm 0.06`$ (Carlberg et al. 1997). These data were mainly compared to standard CDM models, and they appear to exclude $`\mathrm{\Omega }_m=1`$ in such models.
Estimates on Galaxy Halo Scales. Work by Zaritsky et al. (1993) has confirmed that spiral galaxies have massive halos. They collected data on satellites of isolated spiral galaxies, and concluded that the fact that the relative velocities do not fall off out to a separation of at least 200 kpc shows that massive halos are the norm. The typical rotation velocity of 200–250 km s<sup>-1</sup> implies a mass within 200 kpc of $`2\times 10^{12}M_{\odot }`$. A careful analysis taking into account selection effects and satellite orbit uncertainties concluded that the indicated value of $`\mathrm{\Omega }_m`$ exceeds 0.13 at 90% confidence (Zaritsky & White 1994), with preferred values exceeding 0.3. Newer data suggesting that relative velocities do not fall off out to a separation of $`400`$ kpc (Zaritsky et al. 1997) presumably would raise these $`\mathrm{\Omega }_m`$ estimates.
Cluster Baryons vs. Big Bang Nucleosynthesis. White et al. (1993) emphasized that X-ray observations of the abundance of baryons in clusters can be used to determine $`\mathrm{\Omega }_m`$ if clusters are a fair sample of both baryons and dark matter, as they are expected to be based on simulations (Evrard, Metzler, & Navarro 1996). The fair sample hypothesis implies that
$$\mathrm{\Omega }_m=\frac{\mathrm{\Omega }_b}{f_b}=0.3\left(\frac{\mathrm{\Omega }_b}{0.04}\right)\left(\frac{0.13}{f_b}\right).$$
(1)
We can use this to determine $`\mathrm{\Omega }_m`$ using the baryon abundance $`\mathrm{\Omega }_bh^2=0.019\pm 0.001`$ from the measurement of the deuterium abundance in high-redshift Lyman limit systems, of which a third has recently been discovered (Kirkman et al. 1999). Using X-ray data from an X-ray flux limited sample of clusters to estimate the baryon fraction $`f_b=0.075h^{-3/2}`$ (Mohr, Mathiesen, & Evrard 1999) gives $`\mathrm{\Omega }_m=0.25h^{-1/2}=0.3\pm 0.1`$ using $`h=0.65\pm 0.08`$. Estimating the baryon fraction using Sunyaev-Zel’dovich measurements of a sample of 18 clusters gives $`f_b=0.077h^{-1}`$ (Carlstrom et al. 1999), and implies $`\mathrm{\Omega }_m=0.25h^{-1}=0.38\pm 0.1`$.
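As a sanity check on the fair-sample arithmetic of Eq. (1), the quoted central values can be reproduced directly; the $`h`$-scalings of $`f_b`$ used below ($`h^{-3/2}`$ for X-ray, $`h^{-1}`$ for SZ) are assumptions taken from the standard cluster baryon-fraction literature, not computed here:

```python
omega_b_h2 = 0.019              # deuterium-based baryon density (Kirkman et al. 1999)
h = 0.65

omega_b = omega_b_h2 / h**2
f_b_xray = 0.075 * h**-1.5      # X-ray cluster baryon fraction
f_b_sz = 0.077 / h              # Sunyaev-Zel'dovich estimate

print(f"X-ray: Omega_m = {omega_b / f_b_xray:.2f}")   # ~0.31
print(f"SZ:    Omega_m = {omega_b / f_b_sz:.2f}")     # ~0.38
```

Both central values land on the quoted $`0.3`$ and $`0.38`$; the $`\pm 0.1`$ uncertainties come mainly from the spread in $`h`$ and $`f_b`$.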
Cluster Evolution. The dependence of the number of clusters on redshift can be a useful constraint on theories (e.g., Eke et al. 1996). But the cluster data at various redshifts are difficult to compare properly since they are rather inhomogeneous. Using just X-ray temperature data, Eke et al. (1998) conclude that $`\mathrm{\Omega }_m\approx 0.45\pm 0.2`$, with $`\mathrm{\Omega }_m=1`$ strongly disfavored.
Power Spectrum. In the context of the $`\mathrm{\Lambda }`$CDM class of models, two additional constraints are available. The spectrum shape parameter $`\mathrm{\Gamma }\approx \mathrm{\Omega }_mh\approx 0.25\pm 0.05`$, implying $`\mathrm{\Omega }_m\approx 0.4\pm 0.1`$. A new measurement $`\mathrm{\Omega }_m=0.34\pm 0.1`$ comes from the amplitude of the power spectrum of fluctuations at redshift $`z\approx 3`$, measured from the Lyman $`\alpha `$ forest (Weinberg et al. 1999). This result is strongly inconsistent with high-$`\mathrm{\Omega }_m`$ models because they would predict that the fluctuations grow much more to $`z=0`$, and thus would be lower at $`z=3`$ than they are observed to be.
## 7 Conclusion
One of the most striking things about the present era in cosmology is the remarkable agreement between the values of the cosmological parameters obtained by different methods — except possibly for the quasar lensing data which favors a higher $`\mathrm{\Omega }_m`$ and lower $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and the arc lensing data which favors lower values of both parameters. If the results from the new CMB measurements agree with those from the other methods discussed above, the cosmological parameters will have been determined to perhaps 10%, and cosmologists can turn their attention to the other subjects that I mentioned at the beginning: origin of the initial fluctuations, the nature of the dark matter and dark energy, and the formation of galaxies and large-scale structure. Cosmologists can also speculate on the reasons why the cosmological parameters have the values that they do, but this appears to be the sort of question whose answer may require a deeper understanding of fundamental physics — perhaps from a superstring theory of everything.
###### Acknowledgements.
I wish to thank Stéphane Courteau for organizing this exciting conference! I am grateful to my students and colleagues for helpful discussions. This work was supported in part by NSF and NASA grants at UCSC.
# Effect of Order-Parameter Suppression on Scattering by Isolated Impurities in Asymmetric Bands
## Abstract
The single-impurity problem in $`d`$-wave superconductors with asymmetric bands is discussed. The effect of local order parameter suppression near the impurity is to shift the quasiparticle resonance. Contrary to previous work \[A. Shnirman et al., Phys. Rev. B 60, 7517 (1999)\] we find that the direction of the shift is not universally towards the strong scattering limit.
There have been many theoretical studies of the local density of states (DOS) near an isolated impurity in a $`d`$-wave superconductor recently. The main conclusions of this body of work are that the $`d`$-wave symmetry of the order parameter (OP) is reflected in the spatial structure of the DOS, and that a pair of quasiparticle resonances at energies $`\pm \mathrm{\Omega }_0`$ occur within the $`d`$-wave gap. While these conclusions appear to be robust, the detailed structure of the DOS and the magnitude of $`\mathrm{\Omega }_0`$ depend sensitively on details of the band structure and impurity potential.
In this work, we show that local suppression of the OP near the impurity acts as an important anomalous scattering potential. With the exception of Refs. , this effect has generally been ignored. In , a relatively simple and physically appealing model of OP suppression was used to study the resonance structure of the 1-impurity scattering T-matrix. It was found that scattering from inhomogeneities in the OP renormalizes the energies $`\mathrm{\Omega }_0`$ towards the unitary limit $`\mathrm{\Omega }_0=0`$ while leaving qualitative aspects of the resonances unchanged. The authors suggested this as an explanation for the apparent proximity of Zn to the unitary limit in high $`T_c`$ materials. Here, we extend this discussion to the case of asymmetric bands, which have been shown previously to be important in the single impurity problem. Our main conclusion is that the renormalisation of $`\mathrm{\Omega }_0`$ is not generally towards the unitary limit, except in the unphysical case of perfectly symmetric bands.
The calculations are based on an exact T-matrix method, described elsewhere. The model consists of a tight-binding lattice with nearest neighbour hopping, nearest neighbour pairing, and a single point-like impurity at the origin. The Hamiltonian is:
$`\mathcal{H}`$ $`=`$ $`-t{\displaystyle \sum _{i,j}}c_{i\sigma }^{\dagger }c_{j\sigma }+{\displaystyle \sum _i}(u_0\delta _{i,0}-\mu )n_i`$ (1)
$`+{\displaystyle \sum _{i,j}}\mathrm{\Delta }_{ij}c_i^{\dagger }c_j^{\dagger }+\mathrm{\Delta }_{ij}^{*}c_ic_j`$
where $`c_{i\sigma }`$ is the annihilation operator for an electron on site $`i`$ with spin $`\sigma `$ and $`n_i`$ is the electron density at site $`i`$. The hopping matrix element $`t`$ is the unit of energy throughout this work. Self-consistent solutions for $`\mathrm{\Delta }_{ij}`$ from the Bogoliubov-deGennes equations show that $`\mathrm{\Delta }_{ij}`$ is suppressed along bonds connected to the impurity site, and regains its homogeneous clean-limit value within a few lattice constants. The inhomogeneous portion of $`\mathrm{\Delta }_{ij}`$ is extracted, and treated as a spatially-extended anomalous scattering potential, additional to the on-site impurity potential. The T-matrix, which relates the scattering states to the eigenstates of the impurity-free system, is found by solving the Lippmann-Schwinger equation.
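The Lippmann-Schwinger equation for the T-matrix, $`T=V+VG_0T`$, has the closed-form solution $`T=(1-VG_0)^{-1}V`$ once the scattering potential is restricted to a finite set of sites. A toy numerical sketch of this step (the Nambu/spin structure and the actual lattice Green function of the paper are omitted; the numbers are illustrative only):

```python
import numpy as np

def t_matrix(G0, V):
    """Solve T = V + V @ G0 @ T, i.e. T = (1 - V G0)^(-1) V."""
    n = V.shape[0]
    return np.linalg.solve(np.eye(n, dtype=complex) - V @ G0, V)

# Single-site check against the textbook result T = u0 / (1 - u0*g0).
g0, u0 = -0.3 + 0.1j, 5.0
T = t_matrix(np.array([[g0]], dtype=complex), np.array([[u0]], dtype=complex))
assert np.isclose(T[0, 0], u0 / (1 - u0 * g0))
```

In the paper's setting, $`V`$ would combine the on-site potential $`u_0`$ with the extracted anomalous bond potentials, so the matrix dimension grows with the number of bonds on which the OP is suppressed.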
In Fig. 1, the effect of band asymmetry on the spatially-integrated DOS is shown. The controlling parameter is $`\mu `$, with half-filling ($`\mu =0`$) corresponding to a symmetric band. From (a), we see that the OP suppression is a weakly asymmetric function of $`u_0`$ for $`\mu \ne 0`$. Typical quasiparticle resonances are illustrated in (b). For the model parameters chosen, the $`d`$-wave OP is $`\mathrm{\Delta }_d=0.39`$ and the gap edge in the DOS is $`0.5`$. The spectral weight within the resonances at $`\pm \mathrm{\Omega }_0`$ is transferred primarily from the gap edges (not shown). The broadening of the resonances is determined by the background density of states $`\rho _0(\mathrm{\Omega }_0)`$ and specifies the lifetime of a quasiparticle near the impurity before it leaks away through continuum states. The two resonances shown in (b) occur for the same set of model parameters, with and without inclusion of the scattering from the inhomogeneous OP field, and are an instance in which OP suppression drives the resonance away from $`\mathrm{\Omega }_0=0`$.
We plot the positions of the resonance peaks in Figs. 1(c)-(e), for different values of $`\mu `$ and $`u_0`$. In (c), the band is symmetric, and the shift of $`\mathrm{\Omega }_0`$ is always towards the strong scattering limit, as predicted by . For $`\mu =0.6`$, the bands become asymmetric and two qualitative aspects are changed: the unitary ($`\mathrm{\Omega }_0=0`$) and strong impurity ($`u_0^{-1}=0`$) limits no longer coincide, and the direction of the shift in $`\mathrm{\Omega }_0`$ is nonuniversal. The former effect has been discussed at length in , but the latter effect is new. For bands which are still more asymmetric (e.g. $`\mu =1.2`$) an additional factor becomes important; scatterers which are near the unitary limit may actually have sufficiently small values of $`u_0`$ that the amount of OP suppression is small. Hence, the shift in $`\mathrm{\Omega }_0`$ for $`u_0^{-1}>0.2`$ is quite small.
We have shown that the simple ansatz for nearest-neighbour OP suppression made by gives a good approximation to both the impurity resonance position and the momentum-space structure of the inhomogeneous OP in a $`d`$-wave superconductor with symmetric bands. Qualitatively different results were obtained for more realistic asymmetric bands. We will report on novel effects of OP suppression in bulk disordered systems elsewhere.
# Molecular Crystals and High-Temperature Superconductivity.
## Abstract
A simple model of the molecular crystal of $`N`$ atoms as a statistical mixture in real space of $`NX`$ atoms in the excited state and $`N(1-X)`$ atoms in a well localized ground state is considered. The phase coherence of the atomic wave functions is supposed to be absent. The bond energy of the crystal is supposed to be a result of the pair interaction of the $`NX`$ excited atoms. These molecular type pair excitations do not interact with one another before the metallization, and do not contribute to the pressure. Nevertheless, the pressure of such crystals is determined by the interatomic distances, and by the binding energy of the pairs. The possibility of the insulator-superconductor transition of such a “gas” of $`NX/2`$ pairs, “dissolved” among the $`N(1-X)`$ atoms in the ground state, is discussed. This kind of transition is supposed to occur in the oxygen $`O_2`$, in the sulphur $`S`$, and, possibly, in the xenon $`Xe`$ crystals under pressure. The same kind of transition is likely to take place in HTSC materials, metal-ammonia and hydrogen-palladium solutions under normal conditions, due to the similarity of some of their properties with the corresponding ones of molecular crystals.
The problem of finding an adequate description of the type of chemical bonding, sometimes known as van der Waals bonding in molecular crystals (MC), is of great practical importance. Bonding of this type plays an important role in physicochemical processes, such as adsorption, condensation, catalysis, etc. It can determine some of the properties of contact between insulators and semiconductors or metals, and the properties of layered compounds and nanocomposites. Interaction of this type may also be responsible for some aspects of high-temperature superconductivity (HTSC), regarding the metallization of the MC.
The metallization of condensed gases (MC) is a problem that arose back in the 19th century, when a supposition was made that liquid hydrogen might be metallic . In the first half of the 20th century metallic atomic hydrogen attracted interest as the simplest and superlight alkali metal analogue. In 1927 a simple criterion of metallization of dielectric condensates was found . In 1935 the density of atomic hydrogen at metallization was calculated ($`0.6g/cm^3`$), and the critical pressure ($`50GPa`$) was evaluated . Such pressure was beyond the experimental possibilities at that time.
The theory of the van der Waals forces, and the phenomenological theory of superconductivity, were developed by F. London and, perhaps, he was the first to connect superconductivity with the Bose condensation phenomenon: “… the degenerate Bose-Einstein gas provides a good example of a molecular model for such a condensed state in space of momenta, such as seems need for the superconductive state”. The “Bose condensation” direction in superconductivity research included investigations of systems with the insulator-metal transition from the insulator side (before the Fermi degeneracy). Nevertheless, most of the studies at that time were carried out on metals (stable and convenient objects). The problem of HTSC was absent. Both the high pressure equipment and an understanding of the insulator-metal physics were absent too. This was the reason why the first attempt to undertake the examination of a system with the insulator-metal transition (the metal-ammonia solution) in 1946 failed . This system was too complicated. There was only a narrow intermediate interval of sodium concentrations in ammonia, lying between the dielectric and the Fermi metal, in which the anomalous conductivity was observed. Nevertheless, small persistent currents were detected, and the upper limit of the resistance was about $`10^{-13}Ohm`$ at about $`200K`$. But in 1946 several attempts to reproduce the results were not successful . Another trend (independent of the superconductivity problem) arose in the physics of the metal-insulator transition . As for metals, they were very convenient objects for experiments on superconductivity, but very intricate for theory. In metals electrons are in the Fermi-degenerate state, and attempts to transform them into bosons were not successful . The BCS theory resolved this problem in 1957. Attempts to find SuperHTSC (an “electronic” mechanism of the electron pairing) in similar materials were undertaken in the 1960s .
Very significant progress was attained in the high pressure technique due to the elaboration of the metallic hydrogen problem . The discovery of HTSC in 1986 again attracted attention to substances with “…the quasi-metallic character…”, which sometimes display the metal-insulator transition. In 1989 optical properties of metallic xenon at $`140÷200GPa`$ were carefully measured . In 1991 the same measurements were made on metallic sulphur at $`90÷157GPa`$ . However, in 1997 it was found that sulphur under pressure is not a metal, but a superconductor with critical temperature $`T_c\approx 10÷17K`$, and with a fast increase of $`T_c`$ with pressure . In 1998 the insulator-superconductor transition of oxygen $`O_2`$ ($`90GPa`$, $`0.6K`$) was observed too . These substances ($`S`$ and $`O_2`$) are typical MC, as well as condensed xenon $`Xe`$. A careful inspection of the optics data allowed one to interpret them as the properties of a superconductor with an energy gap $`\approx 1.5eV`$ and $`T_c\approx 5000K`$ .
Let us consider some properties of condensed xenon as a typical MC . Two spectroscopic parameters are quite enough: the ionization potential $`E_1=12.13eV`$, and the energy difference of the excited and ground states of the atom, $`E_2=8.32eV`$. The hydrogen-like radii of the ground and excited states are $`r_1=e^2/2E_1=0.593\AA `$; $`r_{21}=e^2/2(E_1-E_2)=1.89\AA `$. Radius $`r_{21}`$ in the condensate must be increased by a multiplier of about $`1.15÷1.17`$ according to the well known Goldschmidt effect. So, we have $`r_{20}=2.2\AA `$. This value ($`2r_{20}`$) is very close to the equilibrium interatomic distances in the xenon crystal at normal conditions. It means that condensed xenon may be treated as an ordinary “chemical” substance with interaction via the atomic orbitals of the excited state (radius $`r_2`$), but with their population $`X<1`$ (“weak excimer crystal”). The relation $`r_{20}/r_1=y^{1/3}`$ is about $`3.7`$, and, according to the Mott criterion of metallization, the atomic wave functions (radius $`r_1`$) may be treated as localized ones with chaotic phases. It allows us to express the parameter $`X`$ as $`X\approx \mathrm{exp}\left(-E_2/w\right)=\mathrm{exp}(2)\mathrm{exp}\left[-\left(y^{1/3}+y^{-1/3}\right)\right]`$, where $`w=e^2/2\left(r_2-r_1\right)`$ is the mean energy of the chaotic interatomic interaction of the ground state electrons. At normal conditions $`X_0=0.14`$ for xenon. In reality it means that there are $`NX`$ atoms in the excited state (the excimer analogy of $`Cs`$) and $`N(1-X)`$ atoms in the ground state among the $`N`$ atoms of the condensate at every moment. It leads to the conclusion that the excited atoms exist in pairs, to take part in the bonding of the crystal. These pairs are hydrogen-like virtual molecules $`(Xe)_2`$ of excimer xenon atoms. Their bond energy is $`Q_{20}=0.28e^2/2r_{20}\approx 0.9eV`$, as for covalent type molecules. The mean bond energy of condensed xenon is $`q_0=X_0Q_{20}\approx 0.13eV`$.
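These hydrogen-like estimates are easy to verify numerically. In the sketch below, the Goldschmidt factor 1.16 is an assumed midpoint of the quoted $`1.15÷1.17`$ range, and $`e^2=14.4`$ eV·Å is the usual atomic-units conversion:

```python
import math

E1, E2 = 12.13, 8.32            # eV: ionization potential, excitation energy
e2 = 14.4                       # e^2 in eV * Angstrom

r1 = e2 / (2 * E1)              # ground-state radius, ~0.593 A
r21 = e2 / (2 * (E1 - E2))      # excited-state radius, ~1.89 A
r20 = 1.16 * r21                # Goldschmidt-corrected radius, ~2.2 A

y13 = r20 / r1                  # ~3.7, close to the Mott criterion
X0 = math.exp(2 - (y13 + 1 / y13))   # excited-state population, ~0.14
Q20 = 0.28 * e2 / (2 * r20)          # pair bond energy, ~0.9 eV
print(r1, r20, X0, Q20, X0 * Q20)    # mean bond energy q0 ~ 0.13 eV
```

All five quoted numbers ($`r_1`$, $`r_{20}`$, $`X_0`$, $`Q_{20}`$, $`q_0`$) come out consistently from just $`E_1`$ and $`E_2`$, which is the point of the two-parameter model.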
The compressibility $`k\approx 30\times 10^{-6}atm^{-1}`$ of condensed xenon is also close to the calculated value: $`k_c=0.16r_{20}^4/X_0\approx 27\times 10^{-6}atm^{-1}`$. One can obtain the equation of state (EOS) :
$$P(y)=P_0\int E_1^4(y-2)^{-7/3}\mathrm{exp}\left\{-\left[(y-2)^{1/3}+(y-2)^{-1/3}\right]\right\}𝑑y.$$
(1)
The transition from $`y`$ to $`y-2`$ is a result of subtracting the “dead” volume $`r_1^3`$ from the volume $`r_2^3`$, and the bond energy $`Q`$ from $`E_1`$. This EOS corresponds satisfactorily to the experimental data for $`Xe`$ . This model does not contradict the results of the traditional description of MC, utilizing the superposition of the ground and the excited state wave functions, and seems to be more adequate for the “Mott situation”. The same approach is applicable for the sulphur $`S`$, and for the oxygen $`O_2`$ condensates (the excimer molecules are $`(S)_2`$ and $`(O_2)_2`$) .
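The compressibility estimate quoted above follows from the same two numbers ($`r_{20}`$ and $`X_0`$). A one-line sketch, with the $`10^{-6}`$ atm⁻¹ units taken from the text rather than rederived:

```python
r20, X0 = 2.2, 0.14
k_c = 0.16 * r20**4 / X0        # ~27 (in units of 1e-6 atm^-1), vs. measured ~30
print(f"k_c ~ {k_c:.0f}e-6 atm^-1")
```

The ~10% gap between the calculated 27 and the measured 30 is the same level of agreement as the bond-energy estimates above.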
At metallization the molar volume of xenon is $`10.27cm^3/mol`$ ($`34.7cm^3/mol`$ at normal conditions). It means that $`r_m=r_{2\mathrm{s}\mathrm{i}}=1.47\AA `$, $`y_{\mathrm{si}}^{1/3}=2.47`$, which is about the Mott criterion of metallization, and corresponds to the band structure appearance. Parameter $`X_{\mathrm{si}}\approx 0.42`$. For the pair excitations (the “Frenkel biexcitons”), $`X_{2\mathrm{s}\mathrm{i}}\approx 0.21`$. It is about the percolation threshold for the xenon “site” lattice.
For the metallic state the bond energy of the molecules $`(Xe)_2`$ is $`Q_{2\mathrm{s}\mathrm{i}}=0.28e^2/2r_{2\mathrm{s}\mathrm{i}}\approx 1.37eV`$. It is close to the energy gap ($`1.5eV`$) of the hypothetical superconducting state, evaluated for “metallic” xenon in from the optical data . The metallization of xenon is likely an insulator-superconductor transition too.
Near the transition (from the insulator side) there is a “gas” of noninteracting hydrogen-like $`(Xe)_2`$ molecules, distributed over the lattice sites and not contributing to the pressure of the crystal (Fig.1, insertion). Their bond energy, and the elastic properties of the crystal, are related to the existence of electron pairs having zero momentum and spin (bosons). External pressure changes only the bond energy of these electron pairs (and their concentration) by changing the interatomic distances. The situation is similar to the dependence of the Cooper pair energy on the dynamic properties of the lattice. The possibility of a stable situation below the equilibrium distance $`r_{20}`$ under pressure is shown in Fig.1 (curves $`E_1`$ and $`E_3`$). Curve $`E_2`$ corresponds to the “unstable lattice” of $`ns^1`$ or $`ns^2`$ atoms at $`r_2>r_{10}`$, — the equilibrium distance. At $`r_2>r_{2\mathrm{s}\mathrm{i}}`$ we have the insulator state in both cases. Below $`r_{2\mathrm{s}\mathrm{i}}`$ the tunneling processes, and possible superconductivity (for a lattice of metal atoms, and for clusters of $`(Xe)_2`$, $`(S)_2`$ and $`(O_2)_2`$ molecules), become significant. In this region the MC looks like a granular superconductor with Josephson tunneling (Fig.1, insertion). The intermolecular interaction energy of the excimer molecules, for example of $`(Xe)_2`$, is about zero up to the Bose or the Fermi degeneracy point. The Bose degeneracy temperature for the xenon crystal is $`T_B=0.084h^2(NX_{2\mathrm{s}\mathrm{i}})^{2/3}/2m_e\approx 5500K`$. It is close to the values obtained above. The bond energy per $`(Xe)_2`$ boson is $`Q_B\propto 1/r_2`$ (Fig.1). The mean energy $`Q_F`$ per $`(Xe)`$ fermion in the “Fermi gas” approximation is proportional to $`1/R^2\propto (NX)^{2/3}\approx 2Q_Br_2^{-1}\mathrm{exp}\left(-2r_2/3r_1\right)`$ (Fig.1). Bosons seem to be more preferable than fermions in the interval $`r_{2\mathrm{m}\mathrm{s}}`$–$`r_{2\mathrm{s}\mathrm{i}}`$.
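The Bose degeneracy estimate can be checked with standard constants. The pair density below is an assumption: the atomic density at the quoted metallization volume ($`10.27cm^3/mol`$) times $`X_{2\mathrm{s}\mathrm{i}}\approx 0.21`$; a Boltzmann constant is inserted to convert the energy to kelvin. The result lands at the same order of magnitude as the quoted $`\approx 5500K`$, with the precise value depending on the density and statistical factors assumed:

```python
h, m_e, k_B, N_A = 6.626e-34, 9.109e-31, 1.381e-23, 6.022e23

n_pairs = N_A / 10.27e-6 * 0.21          # (Xe)2 pairs per m^3
T_B = 0.084 * h**2 * n_pairs**(2/3) / (2 * m_e * k_B)
print(f"T_B ~ {T_B:.0f} K")              # several thousand kelvin
```

The key point survives any reasonable choice of prefactors: for electron-mass pairs at solid-state densities, the degeneracy temperature is thousands of kelvin, far above room temperature.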
The situation is quite opposite at $`r_2<r_{2\mathrm{m}\mathrm{s}}`$. Below $`r_{2\mathrm{m}\mathrm{s}}`$ the evolution of both systems ($`E_1`$ and $`E_2`$) likely demonstrates Fermi degeneracy. It should be noted that palladium is the only metal with interatomic distances corresponding to $`r_2`$ ($`1.3\AA `$), but not to $`r_1`$ ($`0.56\AA `$) . Such an insulator-superconductor transition likely occurs for oxygen ($`0.6K`$), sulphur ($`17K`$) and, perhaps, xenon ($`5000K`$). Most atoms in sulphur and molecules in oxygen (namely, $`N(1-X)`$) are in the ground state; they possess magnetic momenta (especially $`O_2`$), and play the role of magnetic impurities . For sulphur the magnetic susceptibility is $`\propto 1/(w+kT)\ll 1/kT`$, and may be neglected, but not for the electronic processes. The critical temperature $`T_c`$ is known to be strongly dependent on the concentration of the magnetic impurities . The situation for $`Xe`$ is much better, because the xenon atoms in the ground state are diamagnetic.
In the HTSC oxide type materials the ion $`O^2`$ is probably in an excited $`3s^2`$ state . Together with $`ns^{2+}`$ ions this system resembles MC at the threshold of metallization (superconductivity).
When the insulator-insulator contact is changed by the insulator-metal one (see Fig.2), several effects arise:
1. Here, too, the “vacuum” gap ($`r_2-r_1`$) between the insulator and metal appears. It was clearly observed for the first time in capillaries of atomic diameters in zeolites .
2. Atoms or molecules of insulators in contact with atoms of metal become “virtual excimers” ($`X>0`$), and acquire strong chemical activity (catalytic action).
3. Parameter $`X`$ increases, because the perturbation energy $`w=e^2/2(r_2-r_1)`$ is changed by $`w_m=e^2/(r_2-r_1)`$. For $`Xe`$ a decrease of $`r_2`$ by a factor of 1.5 corresponds to a pressure of about $`150GPa`$. At the surface of the metal, the insulator may be transformed into the superconductor as if under high pressure. The magnetic and “proximity” effects should decrease $`T_c`$. Possible examples of such a situation are the sodium (metal) atoms in ammonia (MC and the molecular excimer type pairs $`(NH_3)_2`$), and the hydrogen (MC and the molecular excimer type pairs $`(H)_2`$) atoms in palladium (metal). The volume of the solution increases with the appearance of the “interface gap”, and of the volume difference $`\mathrm{~}(r_2^3-r_1^3)`$, in both cases.
The MC has a very significant difference from the covalent crystals, and from the metals with well defined band structures. The Mott criterion for the MC corresponds to the insulator state, and to well localized atomic wave functions with chaotic phases. It is a reason to utilize the mixture of atoms in excited and ground states in real space, instead of the superposition of the atomic wave functions and the band theory description of MC. The electronic properties (and the bond energy) of the weakly bonded MC depend on the fluctuations of the electron-electron interaction energy $`w\approx e^2/2(r_2-r_1)`$, but not on the dynamic properties of the lattice and on the thermal energy. When this part of the electronic subsystem of atoms becomes correlated, and the band structure appears, the phase transition of the MC into the covalent crystal happens. The mean fluctuation energy $`w`$ is changed by the constant $`w_0\approx w`$.
The situation looks as if in an ordinary semiconductor one changes the dynamic properties of the lattice by those of the interacting electron subsystem of atoms in the ground state. The mean energy $`kT`$ must be changed by the mean electron-electron interaction energy $`w(r)`$. The concentration $`X`$ of electrons within the excited state orbitals (in the conducting band) depends on the interatomic distance (pressure) but not on the heat. Molecular electron pairs in the excited state (bosons) resemble small bipolarons in an ordinary lattice. Phonons must be changed by the “homoplasmons” (fluctuations of the electron density). A simple model of such a statistical mixture of atoms in real space with random field interaction allows us to make some conclusions on the possibility of the existence of boson-like excitations, and of the electronic mechanism of the superconductivity of the MC and of the HTSC materials.
It is seen from Fig.1 (curve $`E_3`$) that superconductivity is probable for the compressed MC at $`r<r_{20}`$, or for metals at $`r>r_{10}`$ (e.g. disordered systems, insulator-metal solutions, or compounds). Some hypothetical metal-insulator “nanocomposites” with a regular lattice of heteroclusters (for example, $`(\mathrm{Metal})Xe_3`$, $`(\mathrm{Metal})Xe_{12}`$, etc.) appear to be interesting materials to search for high-temperature superconductivity.
# Averages over curves with torsion